dhcp-relay: Handling redundancy #2
Comments
If multiple upstream DHCP servers are configured, all servers should receive a copy of DISCOVER and REQUEST packets received as broadcast from the client.
yoelcaspersen writes:
> If multiple upstream DHCP servers are configured, all servers should
> receive a copy of DISCOVER and REQUEST packets received as broadcast
> from the client.
This is a bit of a PITA for XDP (duplicating packets, that is), but
should be doable with the new-ish multicast support in devmap. Are we
expecting the multiple upstream DHCP servers to all be reachable via the
same upstream interface, or will there be different paths to them?
Should REQUESTs really be duplicated, though; isn't it the DISCOVER that
is broadcast, and the client then picks an upstream and REQUESTs that
server directly?
Both DISCOVER and REQUEST packets should be duplicated to all upstream DHCP servers if they are received as broadcast - that is how DHCP implements redundancy. All servers that receive a DISCOVER packet will try to make the client an OFFER. Once the client decides which OFFER to accept, it broadcasts a REQUEST containing the IP address of the chosen DHCP server in the Server Identifier option (option 54), signaling to the other DHCP servers that their offers have been declined.

From a compatibility perspective, the DHCP relay should support at least two, but preferably an arbitrary number of, upstream DHCP servers. Those servers may share the same upstream path - does that mean the multicast support cannot help us? Having multiple DHCP servers is a common scenario for ISPs.

Perhaps it would be wiser to aim for a way to send special packets (e.g. DHCP) to a user space daemon (with VLAN tags included one way or the other) instead of manipulating the packets in XDP? I am not sure how packets going the other way (from DHCP server to clients) should be handled in that scenario, though.
yoelcaspersen writes:
> Both DISCOVER and REQUEST packets should be duplicated to all upstream
> DHCP servers if they are received as broadcast - that is how DHCP
> implements redundancy. All servers that receive a DISCOVER packet will
> try to make the client an OFFER. Once the client decides which OFFER
> to accept, it broadcasts a REQUEST containing the IP address of the
> chosen DHCP server in the Server Identifier option (option 54),
> signaling to the other DHCP servers that their offers have been declined.
Ah, right, missed the bit where the REQUEST also serves as a "reject" to
the other servers.
> From a compatibility perspective, the DHCP relay should support at
> least two, but preferably an arbitrary number of upstream DHCP
> servers. Those servers may use the same upstream path - does that mean
> the multicast support cannot help us? Having multiple DHCP servers is
> a common scenario for ISPs.
No, this should be possible. The multicast support is a bit convoluted,
but it should be possible to have multiple copies go to the same
interface and even have them end up with different destinations.
> Perhaps it would be wiser to aim for a way to send special packets
> (e.g. DHCP) to a user space daemon (with VLAN tags included one way or
> the other) instead of manipulating the packets in XDP? I am not sure
> how packets going the other way (from DHCP server to clients) should
> be handled in that scenario, though.
Well, we could do that as well, of course, but then the userspace daemon
would have to relay packets in both directions. Since it's still
basically routing and rewriting packets, though, we might as well do it
in BPF, no?
How do we support redundant upstream DHCP servers? Round-robin scheduling
between them? Heartbeat to make sure we know which one(s) are online? Let the
network handle it?