What would you like to happen?

Users can either set a fixed number of streams with --numStorageWriteApiStreams, or let the runner/sink decide by enabling auto-sharding.
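For context, a minimal sketch of the two configurations with the Java SDK's BigQueryIO, assuming `rows` is a streaming `PCollection<TableRow>` (table name, stream count, and triggering frequency are illustrative):

```java
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
import org.joda.time.Duration;

// Fixed sharding: pin the number of write streams, either via the
// --numStorageWriteApiStreams pipeline option or directly on the sink.
rows.apply(
    BigQueryIO.writeTableRows()
        .to("my-project:my_dataset.my_table")
        .withMethod(BigQueryIO.Write.Method.STORAGE_WRITE_API)
        .withTriggeringFrequency(Duration.standardSeconds(5))
        .withNumStorageWriteApiStreams(50));

// Auto-sharding: let the runner/sink decide the shard (stream) count at runtime.
rows.apply(
    BigQueryIO.writeTableRows()
        .to("my-project:my_dataset.my_table")
        .withMethod(BigQueryIO.Write.Method.STORAGE_WRITE_API)
        .withTriggeringFrequency(Duration.standardSeconds(5))
        .withAutoSharding());
```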
The Storage API sink's auto-sharding relies on GroupIntoBatches.withShardedKey() to determine the number of parallel shards up front, then creates one write stream per shard. Beam's core implementation of GroupIntoBatches simply creates one shard per DoFn instance/thread.
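A minimal sketch of the transform the sink builds on, with strings standing in for destinations and rows (batch size and buffering duration are illustrative):

```java
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.transforms.Create;
import org.apache.beam.sdk.transforms.GroupIntoBatches;
import org.apache.beam.sdk.util.ShardedKey;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;
import org.joda.time.Duration;

Pipeline p = Pipeline.create();

// Elements keyed by destination; in the real sink the key identifies the target table.
PCollection<KV<String, String>> keyedRows =
    p.apply(Create.of(KV.of("tableA", "row1"), KV.of("tableA", "row2")));

// Each distinct ShardedKey in the output corresponds to one write stream in the
// sink; core Beam ends up with roughly one shard per DoFn instance/thread.
PCollection<KV<ShardedKey<String>, Iterable<String>>> batches =
    keyedRows.apply(
        GroupIntoBatches.<String, String>ofSize(1000)
            .withMaxBufferingDuration(Duration.standardSeconds(10))
            .withShardedKey());
```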
The DataflowRunner overrides GroupIntoBatches with a more fine-tuned implementation in which parallelism is determined from signals such as accumulated backlog. When the backlog is large, the resulting increase in parallelism forces the sink to create many streams and can end up exhausting BigQuery's CreateWriteStream quota.
To handle this better, we should allow the user to specify a maximum number of streams:
- Add a new BigQuery option, --maxNumStorageWriteApiStreams (applicable only to streaming writes with auto-sharding).
- Add a withMaxNumShards() method to GroupIntoBatches.withShardedKey(). This is required for both the core Beam implementation and the DataflowRunner override. A sketch of the proposed usage follows this list.
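To be clear, neither piece exists yet; if adopted, usage might look like the following (names and signatures are this proposal, not current API):

```java
// Hypothetical, per this proposal: withMaxNumShards() caps the number of shards,
// and therefore the number of CreateWriteStream calls, regardless of
// backlog-driven upscaling. It does not exist in Beam today.
keyedRows.apply(
    GroupIntoBatches.<String, String>ofSize(1000)
        .withShardedKey()
        .withMaxNumShards(100));
```

On the sink side, the proposed pipeline option would cap auto-sharding the same way, e.g. --maxNumStorageWriteApiStreams=100.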
Issue Priority
Priority: 2 (default / most feature requests should be filed as P2)
Issue Components
Component: Python SDK
Component: Java SDK
Component: Go SDK
Component: Typescript SDK
Component: IO connector
Component: Beam YAML
Component: Beam examples
Component: Beam playground
Component: Beam katas
Component: Website
Component: Infrastructure
Component: Spark Runner
Component: Flink Runner
Component: Samza Runner
Component: Twister2 Runner
Component: Hazelcast Jet Runner
Component: Google Cloud Dataflow Runner