What would you like to happen?
The current implementation recreates a TableRow from the protobuf message when withPropagateSuccessfulStorageApiWrites(true) is used. For tables with a large number of rows this becomes a CPU-intensive operation. Based on the assumption that consumers of the resulting PCollection typically need only one, or just a few, columns from the original row, the following change should significantly reduce the CPU and memory cost of pipelines that need this optional output.
Create "withPropagateSuccessfulStorageApiWrites(Set<String> columnsToPropagate)" (taking a non-null set) in addition to keeping "withPropagateSuccessfulStorageApiWrites(boolean propagateSuccessfulStorageApiWrites)". This will be a non-breaking API change.
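As a rough sketch of how the two overloads could coexist on the Write builder (the class below is a hypothetical stand-in, not the actual Beam source; a null column set stands for today's "propagate everything" behavior):

```java
import java.util.Collections;
import java.util.Set;

// Hypothetical mock of the relevant part of the Write builder, showing that
// the proposed Set overload can sit next to the existing boolean overload
// without breaking it.
class WriteConfig {
    // null means "propagate all columns", matching the current boolean=true path.
    private Set<String> columnsToPropagate = null;
    private boolean propagate = false;

    // Existing API, kept unchanged: propagate the full TableRow.
    WriteConfig withPropagateSuccessfulStorageApiWrites(boolean propagateSuccessfulStorageApiWrites) {
        this.propagate = propagateSuccessfulStorageApiWrites;
        this.columnsToPropagate = null;
        return this;
    }

    // Proposed overload: propagate only the listed columns.
    WriteConfig withPropagateSuccessfulStorageApiWrites(Set<String> columnsToPropagate) {
        this.propagate = true;
        this.columnsToPropagate = Collections.unmodifiableSet(columnsToPropagate);
        return this;
    }

    boolean propagates() { return propagate; }
    Set<String> columns() { return columnsToPropagate; }
}
```

Callers of the existing boolean overload compile and behave exactly as before, which is what makes the change non-breaking.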
Pass the set of required columns all the way down to the DoFn responsible for generating the successful-writes output. There should be no additional memory cost in any of the methods - this is static DoFn configuration.
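The "static configuration" point can be illustrated with a simplified stand-in for the DoFn (class and method names here are illustrative, not the real Beam internals): the column set is a final field captured at graph-construction time, serialized once with the transform, and only read during per-element processing.

```java
import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Simplified stand-in for the write-result DoFn. The column set is fixed at
// construction, so it adds no per-element memory: each element only pays for
// its own (smaller) output row.
class PropagateRowsFn implements Serializable {
    private final Set<String> columnsToPropagate; // static DoFn configuration

    PropagateRowsFn(Set<String> columnsToPropagate) {
        this.columnsToPropagate = columnsToPropagate;
    }

    // Per-element work: copy only the configured columns into the output.
    Map<String, Object> processElement(Map<String, Object> fullRow) {
        Map<String, Object> out = new HashMap<>();
        for (String column : columnsToPropagate) {
            if (fullRow.containsKey(column)) {
                out.put(column, fullRow.get(column));
            }
        }
        return out;
    }
}
```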
Use the set of required columns, rather than the metadata from the proto message, to create the output TableRow. There are two places where the optimization can occur (proto -> DynamicMessage and DynamicMessage -> TableRow). Changing the former would produce the best outcome, but even changing only the latter would still save a lot of CPU cycles.
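For the DynamicMessage -> TableRow direction, the idea looks roughly like the sketch below (field access is simplified to a map; the real conversion walks protobuf field descriptors and is the expensive part being skipped). A null filter preserves today's full conversion:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

// Illustrative stand-in for the message -> TableRow conversion step.
// Converting each field value (nested messages, repeated fields, timestamps,
// ...) is the costly work, so filtering before conversion is where the CPU
// and memory savings come from.
class RowConverter {
    static Map<String, Object> toTableRow(Map<String, Object> messageFields,
                                          Set<String> columnsToPropagate) {
        Map<String, Object> row = new LinkedHashMap<>();
        for (Map.Entry<String, Object> field : messageFields.entrySet()) {
            // Fields outside the requested set are skipped before any
            // per-value conversion happens.
            if (columnsToPropagate != null && !columnsToPropagate.contains(field.getKey())) {
                continue;
            }
            row.put(field.getKey(), convertValue(field.getValue()));
        }
        return row;
    }

    private static Object convertValue(Object protoValue) {
        // Placeholder for the real per-type conversion logic.
        return protoValue;
    }
}
```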
This approach works for all permutations of inputs and methods - writeProtos()/writeTableRows() and AT_LEAST_ONCE/EXACTLY_ONCE.
Issue Priority
Priority: 3 (nice-to-have improvement)
Issue Components
Component: Python SDK
Component: Java SDK
Component: Go SDK
Component: Typescript SDK
Component: IO connector
Component: Beam YAML
Component: Beam examples
Component: Beam playground
Component: Beam katas
Component: Website
Component: Spark Runner
Component: Flink Runner
Component: Samza Runner
Component: Twister2 Runner
Component: Hazelcast Jet Runner
Component: Google Cloud Dataflow Runner