Add support & documentation on using Azure CNI with capz #467
This would be good for installing the CNI: kubernetes-sigs/cluster-api#3050
@mboersma is this one you'd be interested in working on? It's a relatively high-priority one because it's a blocker for IPv6 (cc @jsturtevant).
@CecileRobertMichon yes indeed, I can probably start on this tomorrow.
/priority important-soon
/assign
/unassign
An implementation question: is the version of Azure CNI currently configurable?
@jackfrancis not sure I understand the question? There is no Azure CNI implementation currently.
A reminder to use the "transparent" mode configuration when implementing a capz + Azure CNI scenario. AKS Engine configures Azure CNI for "transparent" mode by default. The vanilla installation method that AKS Engine follows (download the public release tarball and untar/gunzip it) delivers a default configuration of "bridge" mode; AKS Engine switches this to "transparent" by sed-replacing the appropriate value: https://github.com/Azure/aks-engine/blob/master/parts/k8s/cloud-init/artifacts/cse_config.sh#L288. Not pretty, but there you have it.
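The bridge-to-transparent swap can be reproduced on a minimal stand-in conflist. Note the file contents below are illustrative only, not the exact upstream 10-azure.conflist that the AKS Engine script edits:

```shell
# Sketch of the AKS Engine approach: the release tarball delivers
# "bridge" mode, and sed rewrites it to "transparent" in place.
# The conflist here is a minimal stand-in, not the real upstream file.
set -eu
conf="$(mktemp)"
cat > "$conf" <<'EOF'
{
  "name": "azure",
  "plugins": [
    { "type": "azure-vnet", "mode": "bridge", "bridge": "azure0" }
  ]
}
EOF

# Equivalent of the substitution done in cse_config.sh.
sed -i 's/"mode": "bridge"/"mode": "transparent"/' "$conf"
grep '"mode"' "$conf"
```

The same caveat from the thread applies: editing JSON with sed is fragile, and a real implementation would more likely template the conflist outright.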
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community.
Rotten issues close after 30d of inactivity. Send feedback to sig-contributor-experience at kubernetes/community.
@CecileRobertMichon @nawazkh I have pushed my commit. The tests are passing, and I spun up a few workload clusters from tilt with various privateIPConfiguration counts, public IP allocation enabled/disabled, etc. I have to sign off for the night, but let me know what you think tomorrow.
Here is a link to the Azure CNI Manager daemonset that can be used to install the Azure CNI quite easily.
@dthorsen this is not a supported tool; please don't recommend it for this use.
That is disappointing, because it seems to do exactly what is required, and it was very easy to get up and running this way. Is there a way it could become a supported tool? Otherwise it may make sense for CAPZ to fork and support it, but that seems somewhat redundant if there is already a tool that does this. The functionality needed is pretty trivial: basically just install the CNI binaries, drop the config, and sleep forever. It seems strange to support the CNI but not a k8s-native installation method.
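The "install the binaries, drop the config, sleep forever" step described above could be sketched roughly as follows. All paths and filenames here are hypothetical, and the script runs against a temp directory; a real DaemonSet would hostPath-mount /opt/cni/bin and /etc/cni/net.d from the node instead:

```shell
# Hypothetical installer entrypoint, demonstrated against a temp directory
# rather than real hostPath mounts. Filenames are illustrative only.
set -eu
work="$(mktemp -d)"

# Stand-ins for the unpacked Azure CNI release tarball baked into the image.
mkdir -p "$work/dist/bin" "$work/dist/conf"
printf 'fake-azure-vnet' > "$work/dist/bin/azure-vnet"
printf '{"name":"azure"}' > "$work/dist/conf/10-azure.conflist"

# The actual install step: copy binaries and drop the config onto the host.
bin_dst="$work/host/opt/cni/bin"
conf_dst="$work/host/etc/cni/net.d"
mkdir -p "$bin_dst" "$conf_dst"
cp -f "$work/dist/bin/"* "$bin_dst/"
cp -f "$work/dist/conf/10-azure.conflist" "$conf_dst/"

ls "$bin_dst" "$conf_dst"
# A real entrypoint would end with `sleep infinity` so the
# DaemonSet pod stays Running instead of restarting.
```

Writing the conflist last matters in practice: the kubelet treats the node as CNI-ready as soon as a config file appears in /etc/cni/net.d, so the binaries should be in place first.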
@rbtr @tamilmani1989 can you provide more information? In lieu of an official solution, why is wrapping a daemonset around the published manifest in the official Azure CNI repo not recommended?
@jackfrancis @dthorsen in short, the image referenced by that manifest is unmaintained and not prod-ready. We built it for our CI/nonprod debugging with no intent of public use. It is technically capable of installing the CNI binary, but I can't support CAPZ using/recommending it. Maybe most importantly, we have moved past needing it internally and it isn't getting updates. I did previously try to engage here to offer what will be an officially supported solution, but that didn't seem to get anyone's attention. This tool is being used for CNI install in AKS already and could probably trivially be extended for CAPZ, but I need some clarification on the requirements to make that happen. I suspect this will be sufficient, but if some CAPZ folks would like to contribute, we accept PRs 🙂
It seems based on @dthorsen's experience that the now-abandoned installer image can be used as the set of functional requirements. @dthorsen did that container image simply install the ipam stuff with a sufficient amount of UX configuration to fulfill all of the needs of Azure CNI v1 on a capz-built node? E.g.:
@rbtr would it be possible to extend the tool AKS is using to (1) include v1 of the CNI components with (2) the configuration above?
Sorry, I haven't been active in posting updates on this test effort, but I am actively working on this!
/milestone v1.8
/lifecycle active
/milestone next
@nawazkh: You must be a member of the kubernetes-sigs/cluster-api-provider-azure-maintainers GitHub team to set the milestone. If you believe you should be able to issue the /milestone command, please contact your Cluster API Provider Azure maintainers and have them propose you as an additional delegate for this responsibility.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I created two other issues to track the branched-out effort to enable Azure CNI on CAPZ:
- Azure CNI v1 with one NIC per node (high priority)
- Azure CNI v1 with multiple NICs per node (low priority)
Closing this issue in favor of the epic tracking this effort: #3611
@nawazkh: Closing this issue.
/kind feature
Describe the solution you'd like
[A clear and concise description of what you want to happen.]
https://github.com/Azure/azure-container-networking/blob/master/docs/cni.md
Anything else you would like to add:
[Miscellaneous information that will assist in solving the issue.]
Pods get stuck in the ContainerCreating state with a "Failed to find the master interface" warning (Azure/azure-container-networking#1945). A slightly lower-priority issue, but it must be solved to support multiple NICs on the control plane.
Environment:
- Kubernetes version (use `kubectl version`):
- OS (e.g. from `/etc/os-release`):