Operator SDK generates ServiceMonitor CRs with bearerTokenFile, Prometheus skips them #6364

Closed
jhutar opened this issue Mar 16, 2023 · 14 comments


jhutar commented Mar 16, 2023

Bug Report

What did you do?

Our Operator SDK generates ServiceMonitor CRs like this:

$ oc -n toolchain-host-operator get ServiceMonitor/host-operator-metrics-monitor -o yaml | yq .spec.endpoints
- bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
  path: /metrics
  port: https
  scheme: https
  tlsConfig:
    insecureSkipVerify: true

This means the Prometheus Operator in openshift-user-workload-monitoring skips this ServiceMonitor with a message like this:

$ oc -n openshift-user-workload-monitoring logs pod/prometheus-operator-865f7bf4b4-bgmrf | grep -i host-operator-metrics-monitor | head -n 1
level=warn ts=2023-03-14T11:24:25.737581454Z caller=operator.go:2255 component=prometheusoperator msg="skipping servicemonitor" error="it accesses file system via bearer token file which Prometheus specification prohibits" servicemonitor=toolchain-host-operator/host-operator-metrics-monitor namespace=openshift-user-workload-monitoring prometheus=user-workload
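
A possible interim workaround on our side (a sketch only, assuming the standard operator-sdk config/prometheus/ kustomize layout and kustomize v4) would be to patch the scaffolded ServiceMonitor so it references a token Secret instead of a file. The secret name and key below are the ones used in the expected output further down, and the patch assumes the endpoint is the first entry in spec.endpoints:

# config/prometheus/kustomization.yaml (sketch, not the generated file verbatim)
resources:
  - monitor.yaml
patches:
  - target:
      group: monitoring.coreos.com
      version: v1
      kind: ServiceMonitor
    patch: |-
      - op: remove
        path: /spec/endpoints/0/bearerTokenFile
      - op: add
        path: /spec/endpoints/0/bearerTokenSecret
        value:
          name: host-operator-prometheus-user-workload
          key: token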

What did you expect to see?

spec:
  endpoints:
    - path: /metrics
      port: https
      scheme: https
      # The secret exists in the same namespace as this ServiceMonitor and is accessible by the Prometheus Operator.
      bearerTokenSecret:
        name: host-operator-prometheus-user-workload
        key: token
      tlsConfig:
        insecureSkipVerify: true
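
For reference, a minimal sketch of the kind of Secret such a bearerTokenSecret could point to; the Secret name matches the snippet above, while the service account name host-operator is an assumption for illustration:

# Sketch only: a service-account token Secret for bearerTokenSecret to reference.
# The token controller fills in the "token" key automatically.
apiVersion: v1
kind: Secret
metadata:
  name: host-operator-prometheus-user-workload
  namespace: toolchain-host-operator
  annotations:
    kubernetes.io/service-account.name: host-operator  # assumed SA name
type: kubernetes.io/service-account-token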

What did you see instead? Under which circumstances?

bearerTokenFile is used, so the user-workload Prometheus Operator skips the ServiceMonitor.

Environment

Operator type:

Kubernetes cluster type:

$ operator-sdk version

$ go version (if language is Go)

$ kubectl version

Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.1", GitCommit:"854f807d8a84dde710c062a5281bca5bc07cb562", GitTreeState:"clean", BuildDate:"2023-01-05T01:27:27Z", GoVersion:"go1.19.4", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.7
Server Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.6+5658434", GitCommit:"eddac29feb4bb46b99fb570999324e582d761a66", GitTreeState:"clean", BuildDate:"2022-11-09T10:31:39Z", GoVersion:"go1.18.7", Compiler:"gc", Platform:"linux/amd64"}

Possible Solution

N/A

Additional context

N/A

@theishshah added the triage/needs-information label on Mar 20, 2023
@theishshah added this to the Backlog milestone on Mar 20, 2023
@theishshah self-assigned this on Mar 20, 2023
@openshift-bot

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@openshift-ci bot added the lifecycle/stale label on Jun 19, 2023
@openshift-bot

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

@openshift-ci bot added the lifecycle/rotten label and removed the lifecycle/stale label on Jul 19, 2023
@openshift-bot

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close


openshift-ci bot commented Aug 19, 2023

@openshift-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@openshift-ci bot closed this as completed on Aug 19, 2023

jhutar commented Aug 19, 2023

/reopen

@openshift-ci bot reopened this on Aug 19, 2023

openshift-ci bot commented Aug 19, 2023

@jhutar: Reopened this issue.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.


jhutar commented Aug 19, 2023

/remove-lifecycle rotten

@openshift-ci bot removed the lifecycle/rotten label on Aug 19, 2023
@openshift-bot

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@openshift-ci bot added the lifecycle/stale label on Nov 17, 2023

jhutar commented Nov 20, 2023

/remove-lifecycle stale

@openshift-ci bot removed the lifecycle/stale label on Nov 20, 2023

jhutar commented Nov 20, 2023

Hello @theishshah. I missed it when you added "triage/needs-information", sorry. What information is needed?

@openshift-bot

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@openshift-ci bot added the lifecycle/stale label on Feb 18, 2024
@openshift-bot

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

@openshift-ci bot added the lifecycle/rotten label and removed the lifecycle/stale label on Mar 20, 2024
@openshift-bot

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close

@openshift-ci bot closed this as completed on Apr 19, 2024

openshift-ci bot commented Apr 19, 2024

@openshift-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
