[Bug]: Watermarks and Windowing Not Working with FlinkRunner and KinesisIO Read Transform #31085
@je-ik Hi, thanks for the tip, but I just tried upgrading to 2.56 and am still seeing the same error. I am able to get it to work if I set my parallelism to 2, but any other value won't work, which poses issues with autoscaling on AWS. I also notice that the Flink UI is showing me this:
I'm seeing this on the watermarks tab for each subtask. Any advice you could give would be very helpful.
Can you provide all the command-line flags which you pass to the runner, please?
I am running it through AWS Managed Flink, so it's kind of a black box there; however, the only pipeline option I am passing is … After reading the linked issue, I was able to get it to work locally using the beam_fn_api experiment plus upgrading to 2.56, but I'm not really sure what that's doing. I also noticed that this experiment adds a bunch of operators and results in higher backpressure and lower performance, which means it's most likely not a viable solution.
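For readers following along: the beam_fn_api experiment referred to here is usually passed as `--experiments=beam_fn_api`, or set programmatically. A minimal, hedged sketch of the programmatic route (the class name `EnableFnApi` is illustrative; exact wiring depends on your deployment):

```java
import org.apache.beam.sdk.options.ExperimentalOptions;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public class EnableFnApi {
  public static void main(String[] args) {
    // Equivalent to passing --experiments=beam_fn_api on the command line.
    PipelineOptions options = PipelineOptionsFactory.fromArgs(args).create();
    ExperimentalOptions.addExperiment(options.as(ExperimentalOptions.class), "beam_fn_api");
    // ... build and run the pipeline with these options ...
  }
}
```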
Strange. Seems like the fix is not working in your case. Can you double-check that you run with 2.56.0 (e.g., that no dependency brings in some older Beam version, shading, etc.)? Other than that, it might help to set the log level to DEBUG and investigate the logs around the watermark emission.
Will take a look to ensure no earlier version is being brought in. I was seeing this log:
Also, I just reran and saw this. Does this give any indication into the issue?
Not really, but it seems you run the correct 2.56.0 version. The noMoreSplits signal just tells us that there is indeed no more work. However, that should result in emission of a final watermark and should not hold the watermark back. Could you patch your Beam version to add more logs? Ideally where the reader emits/computes the watermark - e.g. Line 273 in 2ca9af8
I am actually seeing the watermarks work now in the Flink runner web UI, and I'm also seeing the idle tasks from my source reader get finished, which I believe is ideal. However, I am still not getting the logs that occur when my window gets triggered unless beam_fn_api is enabled. Is there something else I need to be doing to get the window to trigger? This works without issue in Dataflow and the DirectRunner.
Yes, that is how the fix should work.
Can you try setting autoWatermarkInterval?
Where is autoWatermarkInterval set in Beam? Is this a pipeline option, or is it set in the Kinesis reader somewhere?
Pipeline option. E.g.:
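The concrete example was elided above. As a hedged sketch, the option lives on the Flink runner's `FlinkPipelineOptions` and can also be passed on the command line as `--autoWatermarkInterval=<millis>`; the 200 ms value below is illustrative, not a recommendation:

```java
import org.apache.beam.runners.flink.FlinkPipelineOptions;
import org.apache.beam.runners.flink.FlinkRunner;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public class AutoWatermarkIntervalExample {
  public static void main(String[] args) {
    FlinkPipelineOptions options =
        PipelineOptionsFactory.fromArgs(args).as(FlinkPipelineOptions.class);
    options.setRunner(FlinkRunner.class);
    // Interval (in milliseconds) at which Flink emits watermarks; if this is
    // never set, watermarks may not advance and event-time windows never fire.
    options.setAutoWatermarkInterval(200L);
    // ... build and run the pipeline with these options ...
  }
}
```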
That worked, thank you so much for your help!
Hi, I'm facing a similar issue with Beam 2.56.0, Flink 1.16.3, and the Java SDK. The problematic pipeline has parallelism 4 and many slowly updating global-window side inputs (uses unbounded …). Another source is Kafka, and the input topics have 3 partitions. I can see that one of the subtasks becomes FINISHED after some time and only 3 subtasks are active. Some of them receive very few messages, and it's normal for some partitions not to receive data for some time; this is especially true for the testing environment. I've tried to set … Any ideas how to fix this? Should I downgrade the Beam version to work around the problem?
After all side inputs emit their first message, are watermarks then emitted correctly?
@je-ik thanks for the quick response! What I meant is that only when all subtasks of the source emit a record (see 'Records sent' in the image below) can I see a watermark on the next operator in the UI. Each side input is independent in this respect. Below you can see an example of a source that does not emit a watermark, since only 2 subtasks emitted a record.
Understood. This is probably unrelated to the issue reported here; can you please create another one? It would be best if you could provide a simple pipeline that exhibits the behavior you observe.
No, I don't think it is related to KafkaIO, more likely some subtlety related to refactoring of Flink runner sources, see #25525 |
@je-ik Do you have any suggestions for how to work around the issue when there are several idle Kafka partitions/topics in the topology that hold back overall progress? It used to work with Beam 2.45.0 and Flink 1.15.0, but it seems that the behavior has changed since then and it does not work as expected in Beam 2.56.0 and Flink 1.16.3.
I don't know if there are any workarounds, as the described behavior seems to be an (as yet unknown) bug. It needs further investigation. Could you please provide a simplified pipeline that is affected by this?
I am experiencing what I believe is quite the same issue, using FixedWindows, when running on the FlinkRunner. I am also using processing time, and from checking tracing logs I can tell there's something wrong with the watermarks:
@yardenbm do you have idle partitions, or do all partitions contain data at all times? |
I tested various versions of Apache Beam with Apache Flink 1.15.4, and the issue started to appear in 2.52.0. Apache Beam 2.46.0-2.51.0 does not have this issue with Flink 1.15.4. Hopefully that helps. Meanwhile, I'll try to create a minimal example that reproduces the problem.
I could easily reproduce it locally with a test container of Confluent Kafka 7.6.0 and Flink 1.15.4. Here is the test case; see the comments in the code.

```java
@Test
void testBeamFromKafkaSourcesIssue() throws Exception {
    // this topic receives data
    String topicFull = "topic-in-1";
    // this topic is empty - the main ingredient to reproduce the issue
    // if I remove it from the input - it works on 2.56.0
    String topicEmpty = "topic-in-2";

    try (AdminClient adminClient = KafkaAdminClient.create(kafkaProperties.buildAdminProperties())) {
        adminClient
            .createTopics(List.of(
                new NewTopic(topicFull, 3, (short) 1),
                new NewTopic(topicEmpty, 3, (short) 1)
            ))
            .all()
            .get(5, TimeUnit.SECONDS);
    }

    try (KafkaProducer<String, String> producer = new KafkaProducer<>(kafkaProperties.buildProducerProperties())) {
        producer.send(new ProducerRecord<>(topicFull, 0, null, "payload-0")).get();
        producer.send(new ProducerRecord<>(topicFull, 1, null, "payload-11")).get();
        producer.send(new ProducerRecord<>(topicFull, 2, null, "payload-222")).get();
        producer.send(new ProducerRecord<>(topicFull, 0, null, "payload-0")).get();
    }

    PipelineOptions opts = PipelineOptionsFactory.create();
    opts.setRunner(TestFlinkRunner.class);
    Pipeline pipeline = Pipeline.create(opts);

    String bootstrapServers = String.join(",", kafkaProperties.getBootstrapServers());
    PCollection<KafkaRecord<String, String>> readFullTopic = pipeline
        .apply("ReadTopic1", createReader(topicFull, bootstrapServers));
    PCollection<KafkaRecord<String, String>> readEmptyTopic = pipeline
        .apply("ReadTopic2", createReader(topicEmpty, bootstrapServers));

    PCollectionList.of(List.of(readFullTopic, readEmptyTopic))
        .apply("Flatten", Flatten.pCollections())
        .apply("ToString", MapElements.into(strings()).via(r -> r.getKV().getValue()))
        .apply("LogInput", ParDo.of(LogContext.of("Input")))
        .apply("Window", Window.into(FixedWindows.of(Duration.standardSeconds(3))))
        .apply("Count", Count.perElement())
        .apply("LogOutput", ParDo.of(LogContext.of("Counts")));

    pipeline.run();
}

private static KafkaIO.Read<String, String> createReader(String kafkaTopic, String bootstrapServers) {
    return KafkaIO.<String, String>read()
        .withBootstrapServers(bootstrapServers)
        .withTopic(kafkaTopic)
        .withKeyDeserializer(StringDeserializer.class)
        .withValueDeserializer(StringDeserializer.class)
        .withConsumerConfigUpdates(Map.of(
            AUTO_OFFSET_RESET_CONFIG, "earliest"
        ));
}

@AllArgsConstructor(staticName = "of")
static class LogContext<T> extends DoFn<T, T> {
    private final String prefix;

    @ProcessElement
    public void processElement(ProcessContext c) {
        System.out.printf("%s: Element: %s, pane: %s, ts: %s%n", prefix, c.element(), c.pane(), c.timestamp());
        c.output(c.element());
    }
}
```

Output from 2.51.0 (expected):
Great, thanks! Looks like empty sources (before emitting their first element) do not update the downstream watermark. This is consistent with the observation above. I will look into it.
@yelianevich Is this with default flags? Does this change when using …?
@je-ik I do not specify any additional flags in the test, so I think it uses the defaults. I ran the same test using … The test with …
Yeah. This narrows it down to the source API. I was not able to identify a commit in 2.52.0 that could cause this, but I'll use your test locally. Thanks. 👍
@yelianevich @yardenbm Hi, can you please try applying #31391 and verify it fixes the issue? |
@je-ik sorry for the dumb question, but is the only way to get binaries to build them locally and then run the test? Is there a faster way to get binaries?
You would have to clone the sources and build it with:

```shell
./gradlew :runners:flink:1.16:publishToMavenLocal -Ppublishing
```
@je-ik I verified the issue by deploying the latest master with your changes, it works as expected now. Thanks for a prompt fix. |
Closed via #31391 |
What happened?
When there are idle subtasks in Flink, they don't propagate watermarks to downstream operators, and thus windowing functions that are based on watermarks never get triggered. I can see that when setting parallelism exactly equal to the number of Kinesis shards, the problem doesn't exist; however, if this number is different, then I see the Flink UI showing no watermarks and my windows never get triggered.
I also have custom DoFns that output with a timestamp beforehand, so in theory that should be used as the watermark for windowing; however, this is not the case.
When using native Flink, I have seen solutions such as using methods like "withIdleness", but these don't exist in Beam. Is there something I am missing in my Kinesis config, or is this a known issue with the read transform?
This only occurs on the Flink runner and not the Direct or Dataflow runner. It's also possible this isn't an issue with the KinesisIO reader, but maybe the windowing function should ignore watermarks from idle upstream tasks.
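For comparison, the native Flink idleness handling mentioned above (which has no direct Beam equivalent) looks roughly like this in Flink's DataStream API; the class name and durations are illustrative, not prescriptive:

```java
import java.time.Duration;
import org.apache.flink.api.common.eventtime.WatermarkStrategy;

public class IdlenessSketch {
  // Native Flink (not Beam): after one minute without records, a source
  // subtask is marked idle so it stops holding back the downstream watermark.
  static WatermarkStrategy<String> strategy() {
    return WatermarkStrategy.<String>forBoundedOutOfOrderness(Duration.ofSeconds(5))
        .withIdleness(Duration.ofMinutes(1));
  }
}
```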
Issue Priority
Priority: 2 (default / most bugs should be filed as P2)
Issue Components