
[Bug Report] For loadbalancer submodule, updating health probes causes unexpected status 400 (400 Bad Request) with error: InvalidResourceReference... #75

Closed
jinkang23 opened this issue Jul 17, 2024 · 1 comment · Fixed by #78
Labels: bug (Something isn't working)

Comments


jinkang23 commented Jul 17, 2024

Describe the bug

Updating an existing health probe resource that is referenced by the inbound load balancer rules of a frontend IP configuration causes the following error when running `terraform apply`:

Error: updating Load Balancer "lb-private" (Resource Group "rg-mygroup-001") for deletion of Probe 
"default_health_probe": performing CreateOrUpdate: unexpected status 400 (400 Bad Request) with error: 
InvalidResourceReference: Resource /subscriptions/<REDACTED>/resourceGroups/rg-mygroup-
001/providers/Microsoft.Network/loadBalancers/lb-private/probes/default_health_probe referenced by resource 
/subscriptions/<REDACTED>/resourceGroups/rg-mygroup-001/providers/Microsoft.Network/loadBalancers/lb-
private/loadBalancingRules/HA-ports was not found. Please make sure that the referenced resource exists, and that 
both resources are in the same region.

Here's an example of the change made:

Before...

    "private" = {
      name     = "private"
      zones    = null #? westus does not support zones
      vnet_key = "transit"
      health_probes = {
        default_health_probe = {
          name                = "default_health_probe"
          protocol            = "Tcp"
          port                = 22
          interval_in_seconds = 5
        }
      }
      frontend_ips = {
        "ha-ports" = {
          name               = "private-vmseries"
          subnet_key         = "private"
          private_ip_address = "10.0.0.1"
          in_rules = {
            HA_PORTS = {
              name             = "HA-ports"
              port             = 0
              protocol         = "All"
              health_probe_key = "default_health_probe"
            }
          }
        }
      }
    }

After...

    "private" = {
      name     = "private"
      zones    = null #? westus does not support zones
      vnet_key = "transit"
      health_probes = {
        default_health_probe2 = {  #<-- updated this!
          name                = "default_health_probe2" #<-- updated this!
          protocol            = "Tcp"
          port                = 22
          interval_in_seconds = 5
        }
      }
      frontend_ips = {
        "ha-ports" = {
          name               = "private-vmseries"
          subnet_key         = "private"
          private_ip_address = "10.0.0.1"
          in_rules = {
            HA_PORTS = {
              name             = "HA-ports"
              port             = 0
              protocol         = "All"
              health_probe_key = "default_health_probe2"  #<-- updated this!
            }
          }
        }
      }
    }

A workaround is to add the following `lifecycle` block to the `azurerm_lb_probe` resource:

resource "azurerm_lb_probe" "this" {
  for_each = merge(coalesce(var.health_probes, {}), local.default_probe)

  loadbalancer_id = azurerm_lb.this.id

  name     = each.value.name
  protocol = each.value.protocol
  port = contains(["Http", "Https"], each.value.protocol) && each.value.port == null ? (
    local.default_http_probe_port[each.value.protocol]
  ) : each.value.port
  probe_threshold     = each.value.probe_threshold
  interval_in_seconds = each.value.interval_in_seconds
  request_path        = each.value.protocol != "Tcp" ? each.value.request_path : null

  # this is to overcome the discrepancy between the provider and Azure defaults
  # for more details see here -> https://learn.microsoft.com/en-gb/azure/load-balancer/whats-new#known-issues:~:text=SNAT%20port%20exhaustion-,numberOfProbes,-%2C%20%22Unhealthy%20threshold%22
  number_of_probes = 1

  lifecycle {
    create_before_destroy = true
  }

}
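Why this helps (a sketch of the mechanism, not the module's actual fix): with `create_before_destroy`, Terraform creates the replacement probe and repoints the referencing load-balancing rule before destroying the old probe, so Azure never sees a rule referencing a deleted probe. A minimal standalone illustration of the same pattern (resource names here are hypothetical, not taken from the module):

```hcl
# Hypothetical minimal example -- names are illustrative only.
resource "azurerm_lb_probe" "example" {
  loadbalancer_id = azurerm_lb.example.id

  name     = "probe-v2"
  protocol = "Tcp"
  port     = 22

  lifecycle {
    # Create the replacement probe before destroying the old one, so
    # load-balancing rules that reference it are never left pointing
    # at a probe that no longer exists.
    create_before_destroy = true
  }
}
```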

Module Version

3.0.1

Terraform version

1.9.2

Expected behavior

No response

Current behavior

No response

Anything else to add?

No response

@acelebanski acelebanski self-assigned this Jul 23, 2024
@acelebanski acelebanski added the bug Something isn't working label Jul 23, 2024
@acelebanski acelebanski linked a pull request Jul 23, 2024 that will close this issue
@acelebanski (Contributor)

Hello @jinkang23, thanks for raising this. This issue will be fixed with PR #78.
