
How to create a production scope monitoring alert #1256

Closed
clement-chaneching opened this issue May 28, 2024 · 1 comment
Assignees
Labels
question Further information is requested

Comments


clement-chaneching commented May 28, 2024

TL;DR

Hello,
I deployed the foundation, but I don't understand how to take advantage of it to create a simple monitoring alert.
I want to monitor, for example, the deletion of buckets in the production folder, which contains multiple projects.
So I was thinking of creating a monitoring alert in the 2-envs-production folder that runs log queries against the centralized log sink's log bucket created in 1-org.
But that doesn't actually seem possible, since alert policies and logging metrics are project-scoped?

I have added the prod projects to the monitoring scope of my prj-p-monitoring project.
But I'm not sure how I can use the log sink in prj-c-logging.

Thanks for the help!

Expected behavior

Have a guideline/doc explaining how to take advantage of the different components of the foundation.

Observed behavior

No response

Terraform Configuration

Would be something like:

resource "google_monitoring_alert_policy" "bucket_deletion_alert_policy" {
  project      = module.env.monitoring_project_id
  display_name = "Bucket Deletion Alert Policy"
  combiner     = "OR"

  conditions {
    display_name = "Bucket Deletion Condition"
    condition_threshold {
      filter          = "metric.type=\"logging.googleapis.com/user/bucket_deletion_metric\""
      comparison      = "COMPARISON_GT"
      threshold_value = 0
      duration        = "60s"
    }
  }

  notification_channels = [google_monitoring_notification_channel.monitoring_email_channel.id]
}

resource "google_logging_metric" "bucket_deletion_metric" {
  project = module.env.logbucket_project_id
  name    = "bucket_deletion_metric"
  filter  = "resource.type=\"gcs_bucket\" AND protoPayload.methodName=\"storage.buckets.delete\" AND resource.labels.project_id:(\"prj-p\")"
}


Terraform Version

N/A
Additional information

For now, I can just set this up in 4-projects/base_env and deploy this alert policy in every project.
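
A rough sketch of that per-project workaround, assuming a hypothetical local.prod_project_ids list; names and values are illustrative:

```hcl
locals {
  # Hypothetical list of production project IDs to cover.
  prod_project_ids = ["prj-p-app-one", "prj-p-app-two"]
}

# One log-based metric per production project; the alert policy above would
# take the same for_each and use project = each.value.
resource "google_logging_metric" "bucket_deletion" {
  for_each = toset(local.prod_project_ids)

  project = each.value
  name    = "bucket_deletion_metric"
  filter  = "resource.type=\"gcs_bucket\" AND protoPayload.methodName=\"storage.buckets.delete\""
}
```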

@clement-chaneching clement-chaneching added the bug Something isn't working label May 28, 2024
@eeaton eeaton self-assigned this May 29, 2024
@eeaton eeaton added question Further information is requested and removed bug Something isn't working labels Jun 4, 2024
eeaton (Collaborator) commented Jun 4, 2024

For this use case I think it's actually much easier to create a log-based alert (GCP docs) in your centralized logging project prj-c-logging instead of the monitoring project prj-p-monitoring. All the audit logs you need are already stored in prj-c-logging; you just have to filter to the events and labels you need. This blueprint configures the aggregated log sink with a project destination, which means these logs are compatible with other logging features such as log-based alerts, which are not supported with other log sink destinations such as a log bucket or storage bucket.

Run a log query like the following from the Logs Explorer console in prj-c-logging to verify the log filter (make sure you adjust the scope to include aggregated logs):

resource.type="gcs_bucket" AND
protoPayload.methodName="storage.buckets.delete" AND
logName:"projects/prj-p"

You can then create a log-based alert directly from the console, or in Terraform. This Stack Overflow answer shows examples of how to create an alert policy with condition_matched_log.
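
A minimal Terraform sketch of that approach, assuming the policy is created in the centralized logging project and that a notification channel (here a hypothetical google_monitoring_notification_channel.email) already exists; resource names and the project reference are illustrative:

```hcl
resource "google_monitoring_alert_policy" "bucket_deletion_log_alert" {
  # Hypothetical reference to the centralized logging project (prj-c-logging).
  project      = var.logging_project_id
  display_name = "Production bucket deletion"
  combiner     = "OR"

  conditions {
    display_name = "Bucket deleted in a production project"
    condition_matched_log {
      filter = <<-EOT
        resource.type="gcs_bucket" AND
        protoPayload.methodName="storage.buckets.delete" AND
        logName:"projects/prj-p"
      EOT
    }
  }

  # Log-match conditions require a notification rate limit.
  alert_strategy {
    notification_rate_limit {
      period = "300s"
    }
    auto_close = "1800s"
  }

  # Hypothetical pre-existing notification channel.
  notification_channels = [google_monitoring_notification_channel.email.id]
}
```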

FYI, we're also in the process of removing the recommendation for the environment-wide monitoring projects (#1200). Cloud Monitoring is typically used for workload performance metrics, so it makes more sense for individual workload teams to set monitoring scopes that are relevant for their workload, and there are few standard/universal use cases for an environment-wide monitoring scope that we could preconfigure in the blueprint.

I'll close this issue for now but feel free to re-open.

@eeaton eeaton closed this as completed Jun 4, 2024