[BUG] Internal DNS resolution is not working as expected #12100

Closed
mircoianese opened this issue Sep 5, 2024 · 6 comments

@mircoianese

Description

I have two containers:

  • Container A:
    - Connected to Bridge network reverse_proxy_net
    - Connected to Bridge Internal network internal_net

  • Container B:
    - Connected to Bridge network container_b_net (for internet access)
    - Connected to Bridge Internal network internal_net (needs to communicate with Container A)

internal_net inspect (simplified):

    {
        "Name": "internal_net",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.21.0.0/16",
                    "Gateway": "172.21.0.1"
                }
            ]
        },
        "Internal": true,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "container_a_id": {
                "Name": "container_a",
                "IPv4Address": "172.21.0.3/16",
                "IPv6Address": ""
            },
            "container_b_id": {
                "Name": "container_b",
                "IPv4Address": "172.21.0.4/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }

container_b_net inspect (simplified):

    {
        "Name": "container_b_net",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.25.0.0/16",
                    "Gateway": "172.25.0.1"
                }
            ]
        },
        "Internal": true,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "container_b_id": {
                "Name": "container_b",
                "IPv4Address": "172.25.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }

With this configuration, Container A is correctly able to resolve Container B's IP, but the opposite is not true.

From Container B, the nslookup container_a command resolves to Container B itself, returning 172.25.0.2:

/container_b # nslookup container_a
Server:		127.0.0.11
Address:	127.0.0.11:53

Non-authoritative answer:
Name:	container_a
Address: 172.25.0.2

Non-authoritative answer:

If I remove the container_b_net from the docker compose file, resolution works, but I lose internet access (since internal_net is marked as internal).

If I curl Container A directly by IP from Container B, it works, so this looks like an issue with DNS resolution.

If I try to curl using the container_a name from Container B, it resolves the same way nslookup does. If I manually bind to the internal network interface, the connection times out.
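
Concretely, the checks look roughly like this (the interface name eth1 and the service port 8849 are assumptions for illustration, not exact copies of my commands):

/container_b # curl http://172.21.0.3:8849/                    # Container A's IP on internal_net: works
/container_b # curl http://container_a:8849/                   # by name: resolves to 172.25.0.2, fails
/container_b # curl --interface eth1 http://container_a:8849/  # bound to the internal_net interface: times out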

I'm running out of ideas, and at this point this looks like a bug to me.
Thank you for your time.

Steps To Reproduce

  1. Configure two containers with a shared internal network and two different public bridge networks
  2. Container A can resolve and reach Container B, but for some reason the opposite is not true
  3. I expect both sides to be able to communicate with each other

Compose Version

Docker Compose version v2.27.1

Docker Environment

Client: Docker Engine - Community
 Version:    26.1.4
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc.)
    Version:  v0.14.1
    Path:     /usr/libexec/docker/cli-plugins/docker-buildx
  compose: Docker Compose (Docker Inc.)
    Version:  v2.27.1
    Path:     /usr/libexec/docker/cli-plugins/docker-compose

Server:
 Containers: 6
  Running: 6
  Paused: 0
  Stopped: 0
 Images: 16
 Server Version: 26.1.4
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Using metacopy: false
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: d2d58213f83a351ca8f528a95fbd145f5654e957
 runc version: v1.1.12-0-g51d5e94
 init version: de40ad0
 Security Options:
  apparmor
  seccomp
   Profile: builtin
  cgroupns
 Kernel Version: 6.0.0-060000-generic
 Operating System: Ubuntu 22.04.4 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 16
 Total Memory: 62.5GiB
 Name: main-de
 ID: c1e6b6e0-afc6-4ca9-b28e-ea9b07a27e4c
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

Anything else?

No response

@ndeloof
Contributor

ndeloof commented Sep 5, 2024

Can you please provide a sample compose.yaml file to demonstrate this issue?

@mircoianese
Author

mircoianese commented Sep 5, 2024

In this case I have two separate compose.yaml files:

Container A:

services:
  container_a:
    image: image_a
    container_name: container_a
    ports:
      - 127.0.0.1:15000:8849 # I use this for testing purpose with ssh tunnels
    networks:
      - reverse_proxy_net
      - internal_net

networks:
  reverse_proxy_net:
    name: reverse_proxy_net
    driver: bridge
    external: true
  internal_net:
    name: internal_net
    driver: bridge
    internal: true

Container B:

services:
  container_b:
    image: image_b
    container_name: container_b
    ports:
      - 127.0.0.1:17480:17480 # I use this for testing purpose with ssh tunnels
      - 127.0.0.1:17481:17481 # I use this for testing purpose with ssh tunnels
    networks:
      - internal_net
      - container_b_net

networks:
  container_b_net:
    name: container_b_net
    driver: bridge
  internal_net:
    name: internal_net
    driver: bridge
    external: true

(I also tried removing the "ports" section from container_b, but the behavior is the same.)

Thanks

@mircoianese
Author

Hello, a little update here.

If I run container_b using docker run instead of docker compose, everything works as expected.
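
For context, the working docker run variant looks roughly like this (the image name is a placeholder, and since docker run attaches only one network at creation, the second network is added afterwards with docker network connect):

$ docker run -d --name container_b --network internal_net image_b
$ docker network connect container_b_net container_b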

I also tried a simpler example:

  • container_a, container_b and container_c connected to the same bridge network network_1.
  • From container_a I try to curl container_b using the container name. As a response, I get two IPv4 addresses: container_b's and container_c's (in random order).
  • If I stop container_c, then (correctly) only container_b is resolved
  • If I start container_c again, the problem persists
  • If I run container_c using a docker run command, everything works.

The same issue is present if I curl container_b from container_c: I get the same two IPv4 addresses (container_c itself and container_b, in random order).
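
For clarity, the test sequence above looks roughly like this (a sketch of what I ran, assuming nslookup is available in the images):

$ docker compose exec container_a nslookup container_b   # two A records: container_b and container_c
$ docker compose stop container_c
$ docker compose exec container_a nslookup container_b   # only container_b, as expected
$ docker compose start container_c
$ docker compose exec container_a nslookup container_b   # two A records again
$ docker compose rm -sf container_c
$ docker run -d --name container_c --network network_1 image_c
$ docker compose exec container_a nslookup container_b   # only container_b: correct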

@ndeloof
Contributor

ndeloof commented Sep 11, 2024

I tried to reproduce with this compose file:

services:
  container_a:
    image: nginx
    container_name: container_a
    ports:
      - 127.0.0.1:15000:8849 # I use this for testing purpose with ssh tunnels
    networks:
      - internal_net
  container_b:
    image: nginx
    container_name: container_b
    ports:
      - 127.0.0.1:17480:17480 # I use this for testing purpose with ssh tunnels
      - 127.0.0.1:17481:17481 # I use this for testing purpose with ssh tunnels
    networks:
      - internal_net
      - container_b_net

networks:
  internal_net:
    name: internal_net
    driver: bridge
    internal: true
  container_b_net:
    name: container_b_net
    driver: bridge

And I can't reproduce it:

$ docker inspect internal_net
...
        "Containers": {
            "2bab53f1c9b4c735aee79bbd84e3d2be459acc8591764409bb802c234fe5cc91": {
                "Name": "container_b",
                "EndpointID": "cda14f485d0086f4025860a55287c1fcbbfc4cc2bd12c892dde2a4f54beca845",
                "MacAddress": "02:42:ac:14:00:02",
                "IPv4Address": "172.20.0.2/16",
                "IPv6Address": ""
            },
            "fa6efad5bead312c8bda61869a3176c7b6ac18542f0809e10dbf47e8f23bb8b7": {
                "Name": "container_a",
                "EndpointID": "76252fd21bc290dc28fc89f41b696d5ca71021e74f47f807736111ec8f1dce46",
                "MacAddress": "02:42:ac:14:00:03",
                "IPv4Address": "172.20.0.3/16",
                "IPv6Address": ""
...   
$ docker compose exec container_b nslookup container_a
Server:		127.0.0.11
Address:	127.0.0.11#53

Non-authoritative answer:
Name:	container_a
Address: 172.20.0.3

@jhrotko
Contributor

jhrotko commented Oct 8, 2024

@mircoianese could you try with the latest release?

@ndeloof
Contributor

ndeloof commented Nov 8, 2024

Closing as I can't reproduce this and the user didn't provide feedback.
Please open a new issue with a reproduction example if the issue persists.

@ndeloof ndeloof closed this as completed Nov 8, 2024