
fix(opentelemetry): resolve lazy sampling + distributed tracing bug [backport 2.10] #10029

Merged 2 commits into 2.10 from backport-9974-to-2.10 on Aug 2, 2024

Conversation

github-actions[bot] (Contributor) commented Aug 1, 2024

Backport 47aad1c from #9974 to 2.10.

Description

With ddtrace v2.8.0, sampling decisions are no longer made when a root span is created. Instead, sampling decisions are lazily evaluated when trace information is expected to leave a process. The ddtrace library sets a sampling decision on root spans when one of the following conditions is met:

  • an unsampled trace is about to be serialized and sent to the datadog agent here
  • an unsampled trace is about to generate and send distributed tracing headers (via datadog's HttpPropagator) here
  • a process is about to fork before a sampling decision is made here

Spans generated using the OpenTelemetry API (via the ddtrace TracerProvider) do not use the ddtrace HttpPropagator when propagating distributed traces. This leaves an edge case where the opentelemetry-api can propagate a distributed trace before a sampling decision is made. Since the default state of an OpenTelemetry span is unsampled, downstream services could receive a trace flag of 00 and drop spans that should have been kept. This could result in missing spans/incomplete traces in the Datadog UI.
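
The snippet below is a minimal sketch of that edge case, assuming ddtrace >= 2.8 with its OpenTelemetry-compatible `TracerProvider` and the default W3C TraceContext propagator; it is illustrative and not code from this PR.

```python
# Sketch of the edge case described above (not code from this PR).
from opentelemetry import trace
from opentelemetry.propagate import inject

from ddtrace.opentelemetry import TracerProvider

trace.set_tracer_provider(TracerProvider())
tracer = trace.get_tracer(__name__)

with tracer.start_as_current_span("client.request"):
    headers = {}
    # The OpenTelemetry propagator (not ddtrace's HttpPropagator) serializes the
    # current SpanContext into a `traceparent` header. Before this fix, no
    # sampling decision had been made yet, so the trace flags could default to
    # 00 and a downstream service could drop spans that should have been kept.
    inject(headers)
    # e.g. requests.get(url, headers=headers)
```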

Fix

This PR ensures that sampling decisions are ALWAYS made before a SpanContext is extracted from an OpenTelemetry span. Since Span.get_span_context() is the only mechanism to extract/propagate tracing information (ex: sampling decision, trace_id, span_id, etc.) from an OpenTelemetry Span, making a sampling decision here will ensure the OpenTelemetry API never propagates an undefined sampling decision. If Span.get_span_context() is never invoked, then OpenTelemetry spans will continue to be lazily sampled on serialization (just like Datadog spans).
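
As a rough illustration of that approach (heavily simplified; attribute and method names such as `_ddspan`, `_local_root`, and `tracer.sample()` are assumptions for illustration, not the exact ddtrace internals changed by this PR):

```python
# Minimal sketch of the approach described above; not the actual implementation.
from opentelemetry.trace import SpanContext, TraceFlags


class Span:
    """Simplified stand-in for ddtrace's OpenTelemetry-compatible span."""

    def __init__(self, ddspan, tracer):
        self._ddspan = ddspan  # underlying ddtrace span (assumed name)
        self._tracer = tracer  # underlying ddtrace tracer (assumed name)

    def get_span_context(self) -> SpanContext:
        # Resolve the sampling decision up front so the extracted SpanContext
        # never carries an undefined trace flag.
        if self._ddspan.context.sampling_priority is None:
            self._tracer.sample(self._ddspan._local_root)

        sampled = (self._ddspan.context.sampling_priority or 0) > 0
        return SpanContext(
            self._ddspan.trace_id,
            self._ddspan.span_id,
            is_remote=False,
            trace_flags=TraceFlags(TraceFlags.SAMPLED if sampled else TraceFlags.DEFAULT),
        )
```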

TODO

There are many cases where trace information from a Datadog span can escape a process before a sampling decision is made (ex: via threads, spawned processes, or manual context propagation). For these scenarios we ask users to manually sample spans (via these docs). Ideally, users should NOT need to know the internal workings of tracer sampling, and they should not be required to call tracer.sample(span) in their applications to resolve missing-span issues. Sampling decisions should ALWAYS be made when tracing internals access Span.context.sampling_priority for the first time.

With this approach an invalid/undefined sampling priority is never returned. The Datadog Span Context will always return a consistent sampling decision.
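
For reference, the current manual-sampling workaround looks roughly like the sketch below; the threading setup is illustrative, and only the `ddtrace.tracer` calls shown are public API.

```python
# Rough sketch of the manual-sampling workaround for trace context that escapes
# the process outside of ddtrace's HttpPropagator (here: a worker thread).
import threading

from ddtrace import tracer


def do_work(ctx):
    # Re-activate the propagated trace context in the worker thread.
    tracer.context_provider.activate(ctx)
    with tracer.trace("background.work"):
        ...


def handle_request():
    with tracer.trace("web.request"):
        # Force a sampling decision before the trace context escapes this
        # thread; otherwise the worker could observe an undefined priority.
        tracer.sample(tracer.current_root_span())
        ctx = tracer.current_trace_context()
        threading.Thread(target=do_work, args=(ctx,)).start()
```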

cc: @brettlangdon, @zacharycmontoya, @ZStriker19

Risk

This change makes "lazy sampling" less "lazy" (for the OpenTelemetry API). Span tags and resource names set after Span.get_span_context() is called will NOT be used to make a sampling decision.
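
For example (assuming an OpenTelemetry tracer backed by ddtrace's TracerProvider, as in the earlier snippet; names are illustrative), the ordering now matters:

```python
from opentelemetry import trace

tracer = trace.get_tracer(__name__)

with tracer.start_as_current_span("checkout") as span:
    ctx = span.get_span_context()            # sampling decision is locked in here
    span.set_attribute("user.tier", "vip")   # too late to influence sampling rules
```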

Checklist

  • PR author has checked that all the criteria below are met
  • The PR description includes an overview of the change
  • The PR description articulates the motivation for the change
  • The change includes tests OR the PR description describes a testing strategy
  • The PR description notes risks associated with the change, if any
  • Newly-added code is easy to change
  • The change follows the library release note guidelines
  • The change includes or references documentation updates if necessary
  • Backport labels are set (if applicable)

Reviewer Checklist

  • Reviewer has checked that all the criteria below are met
  • Title is accurate
  • All changes are related to the pull request's stated goal
  • Avoids breaking API changes
  • Testing strategy adequately addresses listed risks
  • Newly-added code is easy to change
  • Release note makes sense to a user of the library
  • If necessary, author has acknowledged and discussed the performance implications of this PR as reported in the benchmarks PR comment
  • Backport labels are set in a manner that is consistent with the release branch maintenance policy

github-actions bot requested review from a team as code owners on August 1, 2024 14:30
datadog-dd-trace-py-rkomorn bot commented Aug 1, 2024

Datadog Report

Branch report: backport-9974-to-2.10
Commit report: 1df1484
Test service: dd-trace-py

✅ 0 Failed, 340 Passed, 0 Skipped, 1m 20.65s Total Time

pr-commenter bot commented Aug 1, 2024

Benchmarks

Benchmark execution time: 2024-08-01 15:17:50

Comparing candidate commit 3704195 in PR branch backport-9974-to-2.10 with baseline commit dade0b0 in branch 2.10.

Found 0 performance improvements and 0 performance regressions! Performance is the same for 221 metrics, 9 unstable metrics.

romainkomorndatadog enabled auto-merge (squash) on August 2, 2024 10:38
romainkomorndatadog merged commit c4d6251 into 2.10 on Aug 2, 2024
96 of 114 checks passed
romainkomorndatadog deleted the backport-9974-to-2.10 branch on August 2, 2024 10:58