
Assert protocol is propagated #3292

Closed

Conversation

jaronoff97
Contributor

Description:
This adds a new check to assert that we are indeed setting the scrape protocol globally in the generated scrape config file.

Link to tracking Issue(s): n/a

Testing:

Documentation:
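
For reference, a minimal sketch of what a check along these lines might look like; this is not the PR's actual test code, the package name, file path, and expected protocol value are placeholders, and it only assumes the generated scrape config is available as a YAML file:

package scrapeconfig_test // hypothetical package name

import (
	"os"
	"testing"

	"gopkg.in/yaml.v3"
)

// generatedConfig mirrors only the fields this sketch asserts on; the real
// generated scrape config contains much more.
type generatedConfig struct {
	Global struct {
		ScrapeProtocols []string `yaml:"scrape_protocols"`
	} `yaml:"global"`
}

func TestScrapeProtocolIsPropagated(t *testing.T) {
	raw, err := os.ReadFile("testdata/generated-scrape-config.yaml") // placeholder path
	if err != nil {
		t.Fatal(err)
	}
	var cfg generatedConfig
	if err := yaml.Unmarshal(raw, &cfg); err != nil {
		t.Fatal(err)
	}
	// "PrometheusText0.0.4" is just an example protocol value.
	want := "PrometheusText0.0.4"
	for _, p := range cfg.Global.ScrapeProtocols {
		if p == want {
			return
		}
	}
	t.Fatalf("expected global scrape_protocols to contain %q, got %v", want, cfg.Global.ScrapeProtocols)
}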

@jaronoff97 jaronoff97 added the Skip Changelog PRs that do not require a CHANGELOG.md entry label Sep 16, 2024
@jaronoff97 jaronoff97 marked this pull request as ready for review September 17, 2024 15:47
@jaronoff97 jaronoff97 requested review from a team September 17, 2024 15:47
// Copy the scrape protocols from the receiver's global config onto the
// generated Prometheus CR, so they also apply to CR-generated scrape jobs.
scrapeProtocols := make([]monitoringv1.ScrapeProtocol, 0, len(cfg.PromConfig.GlobalConfig.ScrapeProtocols))
for _, protocol := range cfg.PromConfig.GlobalConfig.ScrapeProtocols {
	scrapeProtocols = append(scrapeProtocols, monitoringv1.ScrapeProtocol(protocol))
}
prom.Spec.CommonPrometheusFields.ScrapeProtocols = scrapeProtocols
Contributor

I don't think this is the correct thing to do. In my view, GlobalConfig should only affect the raw scrape configs, while the respective Prometheus fields (which only affect prometheus-operator CRs) should be separately configured.

The ambiguity of how GlobalConfig should affect the prometheus-operator world is why I was reluctant to include it in the first place.

Contributor Author

I can see that; however, I think the confusion / importance of doing it this way is that we don't have the luxury of having it automatically propagated to the Prometheus instance. That is, when a Prometheus instance sets the global config and is configured via the prometheus-operator, any scrape configs generated from the Prometheus CRDs and added to that instance will use the global config defined by that instance.

As I was writing this, I started wondering whether this is necessary at all, given that it's the collector that does the scraping. If a user sets the global config on the prometheus receiver in the collector, shouldn't that be used when scraping a target, overriding the scrape_configs received from the TA?

Contributor Author

I need to look into this, because if that's the case, this would be unnecessary, right? Otherwise, I do think we need this.

Contributor

The way I see it, we inhabit two separate worlds here:

  1. The world of raw Prometheus configurations. A user can put a configuration in their prometheus receiver settings and expect the TargetAllocator to use it. scrape_configs and the global config apply here. This has nothing to do with Kubernetes per se, and works without the TA as well.

  2. The world of Prometheus CRs in Kubernetes. This is specific to the Target Allocator and is configured via the OpenTelemetryCollector (and in the near future, TargetAllocator) CR. Internally, this is done by passing the configuration to a Prometheus CR and using that to generate scrape configs from ServiceMonitors and such.

My opinion is that world 2 should not be affected by configuration for world 1. If we want to set scrapeProtocols in the same way we would normally do on a Prometheus CR, then we should have a scrapeProtocols field on our CRs for this. See #1934 for reference.
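
For illustration, a rough sketch of what that alternative could look like; the ScrapeProtocols field on the collector CR below is hypothetical (roughly what #1934 asks for), while prom.Spec.CommonPrometheusFields.ScrapeProtocols and monitoringv1.ScrapeProtocol come from the snippet quoted above:

// Hypothetical: take the protocols from a dedicated field on our CR instead of
// from the prometheus receiver's global config.
if protocols := otelcol.Spec.TargetAllocator.ScrapeProtocols; len(protocols) > 0 { // hypothetical CR field
	converted := make([]monitoringv1.ScrapeProtocol, 0, len(protocols))
	for _, p := range protocols {
		converted = append(converted, monitoringv1.ScrapeProtocol(p))
	}
	prom.Spec.CommonPrometheusFields.ScrapeProtocols = converted
}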

Does that make sense?

Contributor Author

Yeah, that makes sense to me, and it would probably be better in the long run anyway. I had to look through some more code to make sense of this, but I agree that this is probably the way to go.

@jaronoff97
Contributor Author

Closing this in favor of the resolution of #1934

@jaronoff97 jaronoff97 closed this Sep 19, 2024
@jaronoff97 jaronoff97 deleted the e2e-protocol-propagation branch September 19, 2024 17:12