diff --git a/blog/ai-assisted-coding-tools/index.html b/blog/ai-assisted-coding-tools/index.html
index 140c0da..2415ad5 100644
--- a/blog/ai-assisted-coding-tools/index.html
+++ b/blog/ai-assisted-coding-tools/index.html
@@ -17,13 +17,16 @@
In this post, we’ll explore how AI-assisted coding works, its benefits, potential challenges,
and tips for using it effectively in your development workflow. I’ll also share
examples with two solutions and compare them to help you decide which tools
-to use.">
AI-powered coding assistants are revolutionizing the way developers write software.
By providing contextual code suggestions and reducing repetitive tasks, these tools
can significantly increase productivity.
In this post, we’ll explore how AI-assisted coding works, its benefits, potential challenges,
and tips for using it effectively in your development workflow. I’ll also share
examples with two solutions and compare them to help you decide which tools
-to use.
The usage of coding assistants is often restricted by company policies.
+As a Red Hat employee, I am not allowed to leverage AI to contribute to our products, and I want
+to make it clear that this article is for information sharing only.
+My experiments are limited to personal projects, and the thoughts shared in this article are my own.
In recent years, machine learning models trained on vast amounts of code have evolved into free or commercial tools,
which integrate seamlessly with popular code editors. These tools use natural language processing and deep learning
to provide real-time code suggestions as you write. Some of these tools also provide a chatbot which can help explain
some code or even generate it for you based on your questions.
Coding Assistants analyze the code context and provide auto-suggestions, complete code snippets, or even entire functions.
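To make that workflow concrete, here is the kind of comment-driven completion experimented with later in this post (asking for the running Python version). It's a hypothetical sketch of what an assistant typically suggests, not captured output from Copilot or Granite:

```python
import sys

# Comment-driven prompt: "write a function that prints the running Python version".
# The body below is the kind of completion a coding assistant typically offers.
def print_python_version() -> None:
    # sys.version_info carries (major, minor, micro, releaselevel, serial).
    v = sys.version_info
    print(f"Python {v.major}.{v.minor}.{v.micro}")

if __name__ == "__main__":
    print_python_version()
```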
diff --git a/categories/ai/index.html b/categories/ai/index.html
index 525f8f2..a630115 100644
--- a/categories/ai/index.html
+++ b/categories/ai/index.html
@@ -1,7 +1,7 @@
How AI-Assisted Coding is Transforming Software Development
AI-powered coding assistants are revolutionizing the way developers write software. By providing contextual code suggestions and reducing repetitive tasks, these tools can significantly increase productivity. In this post, we’ll explore how AI-assisted coding works, its benefits, potential challenges, and tips for using it effectively in your development workflow. I’ll also share examples with two solutions and compare them to help you decide which tools to use.
-...
How AI-Assisted Coding is Transforming Software Development
AI-powered coding assistants are revolutionizing the way developers write software. By providing contextual code suggestions and reducing repetitive tasks, these tools can significantly increase productivity. In this post, we’ll explore how AI-assisted coding works, its benefits, potential challenges, and tips for using it effectively in your development workflow. I’ll also share examples with two solutions and compare them to help you decide which tools to use.
-...
Developing cluster-api-provider-openstack with Tilt
This is a quick tutorial (mainly brain dump) on how I’m using Tilt to quickly iterate over my cluster-api-provider-openstack work.
+...
Developing cluster-api-provider-openstack with Tilt
This is a quick tutorial (mainly brain dump) on how I’m using Tilt to quickly iterate over my cluster-api-provider-openstack work.
...
Deploying OpenShift on OpenStack with an External Load-Balancer for your control plane in multiple Failure Domains
This is my second post of a series which will cover how you can distribute your OpenShift cluster across multiple datacenter domains and increase availability and performance of your control plane.
...
Deploying OpenShift with an External Load-Balancer for your control plane
This is my first post of a series which will cover how you can distribute your OpenShift cluster across multiple datacenter domains and increase availability and performance of your control plane.
...
SR-IOV network operator improvements for OpenStack
Stay tuned on our recent achievements in the Kubernetes and OpenStack space when running Fast-Datapath applications.
diff --git a/index.json b/index.json
index 03481d8..cb45575 100644
--- a/index.json
+++ b/index.json
@@ -1 +1 @@
-[{"content":"AI-powered coding assistants are revolutionizing the way developers write software. By providing contextual code suggestions and reducing repetitive tasks, these tools can significantly increase productivity. In this post, we\u0026rsquo;ll explore how AI-assisted coding works, its benefits, potential challenges, and tips for using it effectively in your development workflow. I\u0026rsquo;ll also share examples with two solutions and compare them so it\u0026rsquo;ll help you to make decisions on what tools to use.\nThe Rise of AI-Assisted Coding In recent years, machine learning models trained on vast amounts of code have evolved into free or commercial tools, which integrate seamlessly with popular code editors. These tools use natural language processing and deep learning to provide real-time code suggestions as you write. Some of these tools also provide a chatbot which can help to explain some code or even generate it for you based on the questions.\nCoding Assistants analyze the code context and provide auto-suggestions, complete code snippets, or even entire functions. By learning from repositories (public and/or private), these assistants understand programming patterns and offer relevant solutions.\nBenefits of Using AI Coding Assistants AI coding assistants can have a huge positive impact on the software development lifecycle. Some of the key benefits include:\nIncreased productivity: by suggesting common code snippets, functions, and boilerplate code, AI tools can speed up the development process. You spend less time on repetitive tasks and more time solving complex problems. Reduced syntax errors: AI suggestions can minimize syntax errors by offering code that adheres to best practices and standards. Note that generated syntax is not always right, which is why results are better with a powerful code editor. Learning opportunity: developers can learn new APIs, frameworks, and approaches by exploring the suggestions provided by the AI assistant. Enhanced focus: instead of switching between the IDE and browser for documentation or Stack Overflow answers, developers can stay focused on the code, with instant context-aware help. Challenges and Limitations Like anything with AI, coding assistants offer many benefits, they also come with certain challenges and limitations.\nCode quality concerns: tools suggest solutions that work but are not optimized or could introduce technical debt. The responsibility still lies with the developer to review and refine the code. I think that robots aren\u0026rsquo;t (for now) able to replace humans to write code that is well architectured and following all best practices. Security risks: code assistants might suggest insecure coding patterns or code with vulnerabilities. Developers need to remain vigilant, especially when it comes to security-critical code. I\u0026rsquo;ve experienced a few times where the assistant suggested very dangerous snippets involving the deletion of system files for example. Over-reliance: developers could become overly reliant on these tools, which could hinder deeper understanding of the code they write. This could be problematic for complex or novel problems where the AI may not provide helpful suggestions, especially when you work on new products and you need to create new concepts. Privacy and IP Issues: some concerns have been raised regarding how AI tools leverage open-source code and whether suggestions violate intellectual property or licensing agreements. 
In this post, we\u0026rsquo;ll compare two models where one is open-source. Best Practices for Using AI Coding Assistants To get the most out of AI-powered coding tools, it’s important to follow a few best practices:\nUse AI as a complement, not a crutch: AI should assist in the development process but not replace thoughtful coding. Use the suggestions as a foundation, but always review and customize the code to fit your specific use case. Understand the code: don\u0026rsquo;t blindly accept suggestions. Make sure you understand the code being suggested and test it thoroughly. Focus on complex problem solving: let AI handle the repetitive or boilerplate tasks so you can spend more time on complex logic and design decisions. Stay up to date on best practices: AI tools will continue to improve, but developers need to stay informed about programming best practices, especially around security, performance, and maintainability. Keep learning, take trainings, review others\u0026rsquo; code and also keep up with the models you\u0026rsquo;ve been using for code assistance. New releases often offer interesting features. My toolbox I\u0026rsquo;ve been playing with code generation for quite a while now and I want to share a couple of tools that I use:\nGithub Copilot: a commercial AI-powered coding assistant that provides real-time code suggestions directly within your code editor. Granite Code: open-source, decoder-only models designed for code generative tasks, trained with code written in 116 programming languages. I won\u0026rsquo;t explain how to set up your IDE to use these tools; I suggest reading their official documentation. Note that to run Granite models locally, you\u0026rsquo;ll certainly need a GPU if you want the code assistant to really be usable.\nBefore we start This is not a detailed comparison between the two models but rather a quick example of what these tools can offer, so you get an idea if you haven\u0026rsquo;t tried them yet.\nBenchmarking the models is not the goal here.\nGithub Copilot Copilot offers two extensions:\nGithub Copilot: provides inline coding suggestions as you type. GitHub Copilot Chat: provides conversational AI assistance. Let\u0026rsquo;s use the chat first:\nThe generated snippet just works.\nNow let\u0026rsquo;s see if we can print the Python version directly by commenting a function that doesn\u0026rsquo;t exist yet.\nHere is another example involving basic mathematics:\nNow let\u0026rsquo;s ask Copilot to generate unit tests for that script (using the \u0026ldquo;Generate tests\u0026rdquo; button):\nGranite Now let\u0026rsquo;s play with the Granite 8B code model. I\u0026rsquo;ve installed the Continue extension which provides both inline coding suggestions (if the model permits) and a chat.\nAs previously done, let\u0026rsquo;s first talk to the chat to initiate the script.\nThe result does the job, with almost the same output. You\u0026rsquo;ll notice that it didn\u0026rsquo;t create a function but rather printed directly what we asked for.\nLet\u0026rsquo;s see if it can find out what version of Python is running:\nAnd the other example:\nUnit tests:\nIf you look closely, the test will fail. It missed 5, which is in the first list and is odd. The test wasn\u0026rsquo;t as extensive as the one Copilot suggested and, in this case, wasn\u0026rsquo;t working.\nMy take on the models Again, the examples were very basic (on purpose) so you can have a quick overview of what to expect. 
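Since the screenshots aren't reproduced here, the failing-test scenario described above can be reconstructed roughly as follows; the function and tests are hypothetical stand-ins for what the assistants generated, shown only to illustrate the kind of omission a less extensive test suite can hide:

```python
import unittest

def odd_numbers(numbers: list[int]) -> list[int]:
    # Keep only the odd values, preserving their order.
    return [n for n in numbers if n % 2 != 0]

class TestOddNumbers(unittest.TestCase):
    def test_mixed_list(self) -> None:
        # Asserting the full expected list catches an omission such as
        # a generated test that forgets 5 is odd.
        self.assertEqual(odd_numbers([1, 2, 3, 4, 5, 6]), [1, 3, 5])

    def test_no_odd_values(self) -> None:
        self.assertEqual(odd_numbers([2, 4, 6]), [])

if __name__ == "__main__":
    unittest.main()
```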
Let\u0026rsquo;s say it: comparing Granite 8B with Copilot isn\u0026rsquo;t fair. They aren\u0026rsquo;t the same at all and don\u0026rsquo;t have the same capabilities. However, better results could be obtained with Granite if we used 20 or 34 billion parameters. For an open-source model, it\u0026rsquo;s already providing really good results and can help to increase your productivity.\nConclusion AI-powered tools are transforming the way we write software by making coding more efficient, accessible, and collaborative. However, they\u0026rsquo;re not a silver bullet. Developers should use these tools thoughtfully, balancing the convenience of AI assistance with the need to maintain control and understanding of the code they write.\nBy adopting AI-assisted coding into your workflow, you can boost productivity and learn new approaches, but don\u0026rsquo;t forget that good coding practices and creativity will always remain central to successful software development.\nNow it\u0026rsquo;s up to you to test the tools and find models that fit your needs (and company policy!), which will hopefully help you.\nNote: the image for this post was generated by Llama3 using the Facebook chat. The prompt was \u0026ldquo;generate an image of a developer assisted by a robot\u0026rdquo;.\n","permalink":"https://my1.fr/blog/ai-assisted-coding-tools/","summary":"\u003cp\u003eAI-powered coding assistants are revolutionizing the way developers write software.\nBy providing contextual code suggestions and reducing repetitive tasks, these tools\ncan significantly increase productivity.\nIn this post, we\u0026rsquo;ll explore how AI-assisted coding works, its benefits, potential challenges,\nand tips for using it effectively in your development workflow. I\u0026rsquo;ll also share\nexamples with two solutions and compare them to help you decide which tools\nto use.\u003c/p\u003e","title":"How AI-Assisted Coding is Transforming Software Development"},{"content":"This is a quick tutorial (mainly brain dump) on how I\u0026rsquo;m using Tilt to quickly iterate over my cluster-api-provider-openstack work.\nBefore you continue I won\u0026rsquo;t go into what CAPO, CAPI, Kind, ctlptl and Tilt are and how they work. I\u0026rsquo;ve just learnt about Tilt so this post will probably be updated from time to time. My environment always runs on the latest stable Fedora and the latest dependencies (Kind, ctlptl, Tilt, etc). Check that your tools meet the latest requirements. Podman A couple of things I had to do regarding Podman:\nEnable the Podman socket: systemctl --user enable --now podman.socket And then in my zshrc I add:\nexport DOCKER_HOST=unix:///run/user/$(id -u)/podman/podman.sock Allow the local registry to be insecure, by editing /etc/containers/registries.conf and adding: [[registry]] location = \u0026#34;localhost:5000\u0026#34; insecure = true Running out of inotify resources My default Fedora had too low values for both max_user_watches and max_user_instances, so I tweaked it a bit:\nsudo sysctl fs.inotify.max_user_watches=524288 sudo sysctl fs.inotify.max_user_instances=512 Deploy the Kind management cluster ctlptl create registry ctlptl-registry --port=5000 ctlptl create cluster kind --registry=ctlptl-registry I found ctlptl super useful as it handles the container registry, but you can also simply use Kind directly and deploy your own registry or e.g. 
use quay.io.\nCreate a Secret for clouds.yaml For now I\u0026rsquo;m creating the secret \u0026ldquo;manually\u0026rdquo;, but I know Tilt can do it for us.\nexport CLUSTER_NAME=dev export CAPO_DIRECTORY=~/go/src/github.com/kubernetes-sigs/cluster-api-provider-openstack # replace `my_cloud` by the name of your cloud in clouds.yaml source $CAPO_DIRECTORY/templates/env.rc ~/.config/openstack/clouds.yaml my_cloud cat \u0026lt;\u0026lt;EOF | kubectl apply -f - apiVersion: v1 data: cacert: ${OPENSTACK_CLOUD_CACERT_B64} clouds.yaml: ${OPENSTACK_CLOUD_YAML_B64} kind: Secret metadata: labels: clusterctl.cluster.x-k8s.io/move: \u0026#34;true\u0026#34; name: ${CLUSTER_NAME}-cloud-config EOF Prepare CAPI You need to create tilt-settings.yaml in the CAPI directory. This is an example of what it could look like:\nbuild_engine: podman kind_cluster_name: kind provider_repos: - ../cluster-api-provider-openstack enable_providers: - openstack - kubeadm-bootstrap - kubeadm-control-plane debug: openstack: port: 31000 kustomize_substitutions: CLUSTER_TOPOLOGY: \u0026#34;true\u0026#34; CLUSTER_NAME: \u0026#34;dev\u0026#34; OPENSTACK_SSH_KEY_NAME: \u0026#34;emilien\u0026#34; OPENSTACK_BASTION_FLAVOR: \u0026#34;m1.large\u0026#34; OPENSTACK_CONTROL_PLANE_MACHINE_FLAVOR: \u0026#34;m1.large\u0026#34; OPENSTACK_NODE_MACHINE_FLAVOR: \u0026#34;m1.large\u0026#34; OPENSTACK_FAILURE_DOMAIN: \u0026#34;nova\u0026#34; OPENSTACK_IMAGE_NAME: \u0026#34;ubuntu-2204-kube-v1.28.5\u0026#34; OPENSTACK_CLOUD: foch_openshift OPENSTACK_DNS_NAMESERVERS: \u0026#34;1.1.1.1\u0026#34; NAMESPACE: \u0026#34;default\u0026#34; KUBERNETES_VERSION: \u0026#34;v1.28.5\u0026#34; template_dirs: openstack: - ../cluster-api-provider-openstack/templates Configure Visual Studio Code In the CAPO directory, create .vscode/launch.json:\n{ \u0026#34;version\u0026#34;: \u0026#34;0.2.0\u0026#34;, \u0026#34;configurations\u0026#34;: [ { \u0026#34;name\u0026#34;: \u0026#34;Connect to OpenStack provider\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;go\u0026#34;, \u0026#34;request\u0026#34;: \u0026#34;attach\u0026#34;, \u0026#34;mode\u0026#34;: \u0026#34;remote\u0026#34;, \u0026#34;port\u0026#34;: 31000, \u0026#34;host\u0026#34;: \u0026#34;127.0.0.1\u0026#34;, \u0026#34;showLog\u0026#34;: true, \u0026#34;trace\u0026#34;: \u0026#34;log\u0026#34; } ] } Make sure you have the Go extension installed; you also need to install Delve, a debugger for Go.\nAfter that you can add breakpoints to your code and debug. Have a look at this guide for useful content.\nRun Tilt! tilt up Here is the URL to follow what Tilt will do, but basically it will do everything under the hood so when you change something in CAPO or CAPI or the Tilt config, it\u0026rsquo;ll rebuild images and redeploy them in the management cluster.\nTo deploy a workload cluster, I do it from the UI:\nIn CAPO.clusterclasses, I apply the dev-test ClusterClass. In CAPO.templates, I create a development cluster. 
The cluster will now be deployed.\n","permalink":"https://my1.fr/blog/developing-cluster-api-provider-openstack-with-tilt/","summary":"\u003cp\u003eThis is a quick tutorial (mainly brain dump) on how I\u0026rsquo;m using Tilt to quickly iterate over my cluster-api-provider-openstack work.\u003c/p\u003e","title":"Developing cluster-api-provider-openstack with Tilt"},{"content":"This is my second post of a series which will cover how you can distribute your OpenShift cluster across multiple datacenter domains and increase availability and performance of your control plane.\nBackground If you haven\u0026rsquo;t read it, please have a look at the first post.\nFailure Domains Failure Domains help to spread the OpenShift control plane across multiple (at least 3) domains where each domain has a defined storage / network / compute configuration. In a modern datacenter, each domain has its own power unit, network and storage fabric, etc. If a domain goes down, it wouldn\u0026rsquo;t have an impact on the workloads since the other domains are healthy and the services are deployed in HA.\nIn this context, we think that the SLA of OpenShift can be significantly increased by deploying the OpenShift cluster (control plane and workloads) across at least 3 domains.\nIn OCP 4.13, Failure Domains will be TechPreview (not supported) but you can still test them. We plan to make them supported in a future release.\nIf you remember the previous post, we were deploying OpenShift within one domain, with one external load balancer. Now that we have Failure Domains, let\u0026rsquo;s deploy 3 external LBs (one in each domain) and then a cluster that is distributed over 3 domains.\nPre-requisites At least 3 networks and subnets (can be tenant or provider networks) have to be pre-created. They need to be reachable from where Ansible will be run. The machines used for the LBs have to be deployed on CentOS9 (this is what we test).\nDeploy your own Load-Balancers In our example, we\u0026rsquo;ll deploy one LB per leaf, which is in its own routed network. 
Therefore, we\u0026rsquo;ll deploy 3 load balancers.\nLet\u0026rsquo;s deploy!\nCreate your Ansible inventory.yaml file:\n--- all: hosts: lb1: ansible_host: 192.168.11.2 config: lb1 lb2: ansible_host: 192.168.12.2 config: lb2 lb3: ansible_host: 192.168.13.2 config: lb3 vars: ansible_user: cloud-user ansible_become: true Create the Ansible playbook.yaml file:\n--- - hosts: - lb1 - lb2 - lb3 tasks: - name: Deploy the LBs include_role: name: emilienm.routed_lb Write the LB configs in Ansible vars.yaml:\n--- configs: lb1: bgp_asn: 64998 bgp_neighbors: - ip: 192.168.11.1 password: f00barZ services: \u0026amp;services - name: api vips: - 192.168.100.240 min_backends: 1 healthcheck: \u0026#34;httpchk GET /readyz HTTP/1.0\u0026#34; balance: roundrobin frontend_port: 6443 haproxy_monitor_port: 8081 backend_opts: \u0026#34;check check-ssl inter 1s fall 2 rise 3 verify none\u0026#34; backend_port: 6443 backend_hosts: \u0026amp;lb_hosts - name: rack1-10 ip: 192.168.11.10 - name: rack1-11 ip: 192.168.11.11 - name: rack1-12 ip: 192.168.11.12 - name: rack1-13 ip: 192.168.11.13 - name: rack1-14 ip: 192.168.11.14 - name: rack1-15 ip: 192.168.11.15 - name: rack1-16 ip: 192.168.11.16 - name: rack1-17 ip: 192.168.11.17 - name: rack1-18 ip: 192.168.11.18 - name: rack1-19 ip: 192.168.11.19 - name: rack1-20 ip: 192.168.11.20 - name: rack2-10 ip: 192.168.12.10 - name: rack2-11 ip: 192.168.12.11 - name: rack2-12 ip: 192.168.12.12 - name: rack2-13 ip: 192.168.12.13 - name: rack2-14 ip: 192.168.12.14 - name: rack2-15 ip: 192.168.12.15 - name: rack2-16 ip: 192.168.12.16 - name: rack2-17 ip: 192.168.12.17 - name: rack2-18 ip: 192.168.12.18 - name: rack2-19 ip: 192.168.12.19 - name: rack2-20 ip: 192.168.12.20 - name: rack3-10 ip: 192.168.13.10 - name: rack3-11 ip: 192.168.13.11 - name: rack3-12 ip: 192.168.13.12 - name: rack3-13 ip: 192.168.13.13 - name: rack3-14 ip: 192.168.13.14 - name: rack3-15 ip: 192.168.13.15 - name: rack3-16 ip: 192.168.13.16 - name: rack3-17 ip: 192.168.13.17 - name: rack3-18 ip: 192.168.13.18 - name: rack3-19 ip: 192.168.13.19 - name: rack3-20 ip: 192.168.13.20 - name: ingress_http vips: - 192.168.100.250 min_backends: 1 healthcheck: \u0026#34;httpchk GET /healthz/ready HTTP/1.0\u0026#34; frontend_port: 80 haproxy_monitor_port: 8082 balance: roundrobin backend_opts: \u0026#34;check check-ssl port 1936 inter 1s fall 2 rise 3 verify none\u0026#34; backend_port: 80 backend_hosts: *lb_hosts - name: ingress_https vips: - 192.168.100.250 min_backends: 1 healthcheck: \u0026#34;httpchk GET /healthz/ready HTTP/1.0\u0026#34; frontend_port: 443 haproxy_monitor_port: 8083 balance: roundrobin backend_opts: \u0026#34;check check-ssl port 1936 inter 1s fall 2 rise 3 verify none\u0026#34; backend_port: 443 backend_hosts: *lb_hosts - name: mcs vips: - 192.168.100.240 min_backends: 1 frontend_port: 22623 haproxy_monitor_port: 8084 balance: roundrobin backend_opts: \u0026#34;check check-ssl inter 5s fall 2 rise 3 verify none\u0026#34; backend_port: 22623 backend_hosts: *lb_hosts lb2: bgp_asn: 64998 bgp_neighbors: - ip: 192.168.12.1 password: f00barZ services: *services lb3: bgp_asn: 64998 bgp_neighbors: - ip: 192.168.13.1 password: f00barZ services: *services In this case, we deploy OpenShift on OpenStack which doesn\u0026rsquo;t support static IPs. 
Therefore, we have to put all the available IPs from the subnets used for the machines, in the HAproxy backends.\nInstall the role and the dependencies:\nansible-galaxy install emilienm.routed_lb,1.0.0 ansible-galaxy collection install ansible.posix ansible.utils Deploy the LBs:\nansible-playbook -i inventory.yaml -e \u0026#34;@vars.yaml\u0026#34; playbook.yaml Deploy OpenShift Here is an example of install-config.yaml:\napiVersion: v1 baseDomain: mydomain.test compute: - name: worker platform: openstack: type: m1.xlarge replicas: 1 controlPlane: name: master platform: openstack: type: m1.xlarge failureDomains: - portTargets: - id: control-plane network: id: fb6f8fea-5063-4053-81b3-6628125ed598 fixedIPs: - subnet: id: b02175dd-95c6-4025-8ff3-6cf6797e5f86 - portTargets: - id: control-plane network: id: 9a5452a8-41d9-474c-813f-59b6c34194b6 fixedIPs: - subnet: id: 5fe5b54a-217c-439d-b8eb-441a03f7636d - portTargets: - id: control-plane network: id: 3ed980a6-6f8e-42d3-8500-15f18998c434 fixedIPs: - subnet: id: a7d57db6-f896-475f-bdca-c3464933ec02 replicas: 3 metadata: name: mycluster networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 192.168.11.0/24 - cidr: 192.168.100.0/24 platform: openstack: cloud: mycloud machinesSubnet: 8586bf1a-cc3c-4d40-bdf6-c243decc603a apiVIPs: - 192.168.100.240 ingressVIPs: - 192.168.100.250 loadBalancer: type: UserManaged featureSet: TechPreviewNoUpgrade After the deployment, you\u0026rsquo;ll only have one worker in the first domain. To deploy more workers in other domains, you\u0026rsquo;ll have to create a MachineSet per domain (the procedure is well documented in OpenShift already).\nNote that for each Failure Domain, you have to provide the leaf network ID and its subnet ID as well. If you deploy with availability zones, you\u0026rsquo;ll be able to provide them in each domain. The documentation for this feature is in progress and I\u0026rsquo;ll update this post once we have it published.\nIf you\u0026rsquo;re interested by a demo, I recorded one here.\nKnown limitations Deploying OpenShift with static IPs for the machines is not supported with OpenStack platform for now. Changing the IP address for any OpenShift control plane VIP (API + Ingress) is currently not supported. So once the external LB and the OpenShift cluster is deployed, the VIPs can\u0026rsquo;t be changed. Migrating an OpenShift cluster from the OpenShift managed LB to an external LB is currently not supported. Failure Domains are only for the control plane for now, and will be extended to the compute nodes. 
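One practical note on the vars.yaml above: because static IPs aren't supported, every candidate machine address has to be enumerated by hand in the HAproxy backends. A minimal sketch of generating those backend_hosts entries instead, assuming the three rack subnets and the .10-.20 host range used in the example:

```python
# Emit the backend_hosts entries from vars.yaml, one per candidate machine IP,
# for the three rack subnets used in this example.
RACKS = {
    "rack1": "192.168.11",
    "rack2": "192.168.12",
    "rack3": "192.168.13",
}
HOST_RANGE = range(10, 21)  # .10 through .20, as in the example above

def backend_hosts_yaml() -> str:
    lines = []
    for rack, prefix in RACKS.items():
        for host in HOST_RANGE:
            lines.append(f"- name: {rack}-{host}")
            lines.append(f"  ip: {prefix}.{host}")
    return "\n".join(lines)

if __name__ == "__main__":
    print(backend_hosts_yaml())
```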
Keep in mind that the features will be TechPreview at first and once they have reached some maturity, we\u0026rsquo;ll promote them to GA.\nWrap-up In this article, we combined two exciting features that will help to increase your SLA and also improve performance, not only on the control plane but also for the workloads.\nWe have already got positive feedback from various teams, who tested it at a large scale and demonstrated that in this scenario, OpenShift is more reliable, better load-balanced and distributed in case of failure.\nIn a future post, I want to cover how you can make your workloads more reliable by using MetalLB as a load balancer in BGP mode.\nI hope you liked it, and please provide any feedback on the channels.\n","permalink":"https://my1.fr/blog/openshift-external-load-balancer-control-plane-with-failure-domains/","summary":"\u003cp\u003eThis is my second post of a series which will cover how you can distribute your OpenShift cluster across multiple datacenter domains and increase\navailability and performance of your control plane.\u003c/p\u003e","title":"Deploying OpenShift on OpenStack with an External Load-Balancer for your control plane in multiple Failure Domains"},{"content":"This is my first post of a series which will cover how you can distribute your OpenShift cluster across multiple datacenter domains and increase availability and performance of your control plane.\nBackground Originally the on-premise OpenShift IPI architecture was designed to deploy an internal (called OpenShift Managed) load balancer based on HAproxy and Keepalived. However, when you want to distribute your cluster across multiple failure domains, your control plane has to be deployable on multiple L2 networks, which are usually isolated per rack, and routed with protocols like BGP.\nStretched vs L3 networks A single stretched L2 network brings challenges:\nNetwork latency is not predictable Traffic bottlenecks L2 domain failures Network management complexity Smaller (L3 routed) networks, however, have these benefits:\nOptimized East-West traffic Low and predictable latency Easier to extend and manage Failure domain isolated to a network Non-blocking network fabric OpenShift Managed Load-Balancer For on-prem platforms (VSphere, Baremetal, OpenStack, Ovirt and Nutanix), the control plane load balancer is based on HAproxy and Keepalived. It means that the control plane VIPs (for API \u0026amp; Ingress services) will be managed in Active/Passive mode. The Keepalived master (elected by VRRPv2) will host the VIPs and therefore all the API \u0026amp; Ingress traffic will always go through one node and then be load-balanced across the control plane. This bottleneck has been an issue at large scale.\nAlso, Keepalived doesn\u0026rsquo;t deal with L3 routing, so if the VIPs aren\u0026rsquo;t within the same subnet as the L2 networks, the network fabric can\u0026rsquo;t know where the VIPs actually are.\nUser Managed Load-Balancer When we initially looked at the limitations of the OpenShift Managed Load-Balancer, we thought we would just add BGP to the OpenShift control plane, so the VIPs could be routed across the datacenter. You can have a look at this demo that shows how it would work. 
After the initial proposal, which brought up a lot of good ideas, it was decided that for now we would rather try to externalize the Load-Balancer and let the customers deal with it, rather than implementing something new in OpenShift (I\u0026rsquo;ll come back to it in the wrap-up).\nIndeed, a lot of our customers already have (enterprise-grade) load balancers that they use for their workloads. Some of them want to re-use these appliances and manage the OpenShift control plane traffic with them.\nWe realized that some of them want BGP, some of them don\u0026rsquo;t. Some want to keep stretched L2 networks, some don\u0026rsquo;t. There were a lot of decisions we would have had to make if we had implemented BGP within the OpenShift control plane, so we decided that for now we will allow the use of an external (user-managed) load balancer, as is already the case for the workloads themselves (e.g. with MetalLB).\nMore details on the design can be found in this OpenShift enhancement.\nDeploy your own Load-Balancer I want to share how someone can deploy a load balancer that will be used by the OpenShift control plane. For that, I\u0026rsquo;ve decided to create an Ansible role named ansible-role-routed-lb.\nThis will deploy an advanced Load-Balancer capable of managing routed VIPs with FRR (using BGP) and load-balancing traffic with HAproxy.\nThe role will do the following:\nIf BGP neighbors are provided in the config, it\u0026rsquo;ll deploy FRR and peer with your BGP neighbor(s). If the VIPs are created on the node, they\u0026rsquo;ll be routed in your infrastructure. Deploy HAproxy to load-balance and monitor your services. If the VIPs are provided in the config, they will be created if a minimum number of backend(s) are found healthy for a given service, and therefore routed in BGP if FRR is deployed. They will be removed if no backend was found healthy for a given service, and therefore not routed in BGP if FRR is deployed. So if you\u0026rsquo;re hosting multiple Load-Balancers, your OpenShift control plane traffic will be:\nrouted thanks to BGP if FRR is deployed load-balanced and highly available at the VIP level thanks to BGP if FRR is deployed load-balanced between healthy backends thanks to HAproxy Let\u0026rsquo;s deploy it!\nIn this blog post, we won\u0026rsquo;t cover the Failure Domains yet, and will deploy OpenShift within a single Leaf. 
Therefore, we\u0026rsquo;ll deploy only one load balancer.\nCreate your Ansible inventory.yaml file:\n--- all: hosts: lb: ansible_host: 192.168.11.2 ansible_user: cloud-user ansible_become: true 192.168.11.2 is the IP address of the load balancer.\nCreate the Ansible playbook.yaml file:\n--- - hosts: lb vars: config: lb tasks: - name: Deploy the LBs include_role: name: emilienm.routed_lb Write the LB configs in Ansible vars.yaml:\n--- configs: lb: bgp_asn: 64998 bgp_neighbors: - ip: 192.168.11.1 password: f00barZ services: - name: api vips: - 192.168.100.240 min_backends: 1 healthcheck: \u0026#34;httpchk GET /readyz HTTP/1.0\u0026#34; balance: roundrobin frontend_port: 6443 haproxy_monitor_port: 8081 backend_opts: \u0026#34;check check-ssl inter 1s fall 2 rise 3 verify none\u0026#34; backend_port: 6443 backend_hosts: \u0026amp;lb_hosts - name: rack1-10 ip: 192.168.11.10 - name: rack1-11 ip: 192.168.11.11 - name: rack1-12 ip: 192.168.11.12 - name: rack1-13 ip: 192.168.11.13 - name: rack1-14 ip: 192.168.11.14 - name: rack1-15 ip: 192.168.11.15 - name: rack1-16 ip: 192.168.11.16 - name: rack1-17 ip: 192.168.11.17 - name: rack1-18 ip: 192.168.11.18 - name: rack1-19 ip: 192.168.11.19 - name: rack1-20 ip: 192.168.11.20 - name: ingress_http vips: - 192.168.100.250 min_backends: 1 healthcheck: \u0026#34;httpchk GET /healthz/ready HTTP/1.0\u0026#34; frontend_port: 80 haproxy_monitor_port: 8082 balance: roundrobin backend_opts: \u0026#34;check check-ssl port 1936 inter 1s fall 2 rise 3 verify none\u0026#34; backend_port: 80 backend_hosts: *lb_hosts - name: ingress_https vips: - 192.168.100.250 min_backends: 1 healthcheck: \u0026#34;httpchk GET /healthz/ready HTTP/1.0\u0026#34; frontend_port: 443 haproxy_monitor_port: 8083 balance: roundrobin backend_opts: \u0026#34;check check-ssl port 1936 inter 1s fall 2 rise 3 verify none\u0026#34; backend_port: 443 backend_hosts: *lb_hosts - name: mcs vips: - 192.168.100.240 min_backends: 1 frontend_port: 22623 haproxy_monitor_port: 8084 balance: roundrobin backend_opts: \u0026#34;check check-ssl inter 5s fall 2 rise 3 verify none\u0026#34; backend_port: 22623 backend_hosts: *lb_hosts In this case, we deploy OpenShift on OpenStack which doesn\u0026rsquo;t support static IPs. 
Therefore, we have to put all the available IPs from the subnet used for the machines in the HAproxy backends.\nInstall the role and the dependencies:\nansible-galaxy install emilienm.routed_lb,1.0.0 ansible-galaxy collection install ansible.posix ansible.utils Deploy the LBs:\nansible-playbook -i inventory.yaml -e \u0026#34;@vars.yaml\u0026#34; playbook.yaml Deploy OpenShift This feature will be available in the 4.13 release as TechPreview.\nHere is how you can simply enable it via the install-config.yaml:\napiVersion: v1 baseDomain: mydomain.test compute: - name: worker platform: openstack: type: m1.xlarge replicas: 3 controlPlane: name: master platform: openstack: type: m1.xlarge replicas: 3 metadata: name: mycluster networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 192.168.11.0/24 - cidr: 192.168.100.0/24 platform: openstack: cloud: mycloud machinesSubnet: 8586bf1a-cc3c-4d40-bdf6-c243decc603a apiVIPs: - 192.168.100.240 ingressVIPs: - 192.168.100.250 loadBalancer: type: UserManaged featureSet: TechPreviewNoUpgrade You can also watch this demo which shows the outcome.\nKnown limitations Deploying OpenShift with static IPs for the machines is only supported on the Baremetal platform for now but it\u0026rsquo;s in the roadmap to support it on VSphere and OpenStack as well. Changing the IP address for any OpenShift control plane VIP (API + Ingress) is currently not supported. So once the external LB and the OpenShift cluster is deployed, the VIPs can\u0026rsquo;t be changed. This is in our roadmap. Migrating an OpenShift cluster from the OpenShift managed LB to an external LB is currently not supported. It\u0026rsquo;s in our roadmap as well. Keep in mind that the feature will be TechPreview at first and once it has reached some maturity, we\u0026rsquo;ll promote it to GA.\nWrap-up Having the VIPs highly available and routed across multiple domains is only a first step toward distributing the OpenShift control plane. In the future, we\u0026rsquo;ll discuss how Failure Domains will be configured when deploying OpenShift on OpenStack. Note that this is already doable on Baremetal and VSphere.\nWith this effort, our customers can now decide which Load-Balancer to use, and if they have some expertise in their appliance, they can now use it for the OpenShift control plane.\nThe way we implemented it is flexible and will allow us to implement new load balancers in OpenShift if we want to in the future. The proof of concept done a few months ago with BGP in the control plane could be restored if there is growing interest.\nI hope you liked this article and stay tuned for the next ones!\n","permalink":"https://my1.fr/blog/openshift-external-load-balancer-control-plane-intro/","summary":"\u003cp\u003eThis is my first post of a series which will cover how you can distribute your OpenShift cluster across multiple datacenter domains and increase\navailability and performance of your control plane.\u003c/p\u003e","title":"Deploying OpenShift with an External Load-Balancer for your control plane"},{"content":"Stay tuned on our recent achievements in the Kubernetes and OpenStack space when running Fast-Datapath applications.\nAuthors: Emilien Macchi and Maysa Macedo.\nIn the past months, the Kubernetes Network Plumbing Working-Group added new features to the SR-IOV Network Operator for the OpenStack platform.\nIf you’re not familiar with this operator, it helps Kubernetes cluster users deploy their workloads to be connected to Fast Datapath (FDP) networking resources. 
While the operator is named “SR-IOV”, we’ll see that it can also manage other types of connectivity.\nIn fact, the operator helps to provision and configure the SR-IOV Network Device Plugin for Kubernetes, which is in charge of discovering and advertising networking resources for FDP, mainly (but not exclusively) for SR-IOV Virtual Functions (VFs) and PCI Physical Functions (PFs) that are available on a Kubernetes host (usually a worker node).\nThe operator hides some complexity to achieve that and provides an easy user interface.\nOpenStack metadata support The operator originally required config-drives to be enabled for the machines connected to the FDP networking, so it could read the OpenStack metadata and Network data.\nWe removed that requirement by adding support for reading that information from the Nova metadata service if no config-drive was used.\nIf your Kubernetes hosts have access to the Nova metadata URL, then you have nothing to do! Otherwise, you’ll need to make sure to create the machines with config-drive enabled.\nEnable VFIO with NOIOMMU In virtual deployments of Kubernetes, the underlying virtualization platform (e.g. QEMU) may support a virtualized I/O memory management unit (IOMMU), but OpenStack Nova doesn\u0026rsquo;t know how to handle it yet. It\u0026rsquo;s a work in progress. Therefore, the VFIO PCI driver needs to be loaded with an option named enable_unsafe_noiommu_mode enabled. This option gives user-space I/O access to a device which is direct memory access capable without an IOMMU.\nThe operator is now loading the driver with the right arguments so the users don’t have to worry about it.\nDPDK The operator was initially designed to work on Baremetal and not necessarily on virtualized platforms. However, when a virtualized Kubernetes host is connected to some network hardware using DPDK, the device is exposed as a virtio interface (seen as a VF by the operator) but to take advantage of DPDK, the device has to use the VFIO-PCI driver. We added support for detecting vhost-user interfaces that are connected to the specified Neutron network used for DPDK. Vhost-user is a module that is part of DPDK and it helps to run networking in user-space. You can find more information here.\nHere is an example of a SriovNetworkNodePolicy that can be used for Intel devices (you’ll need to change a few things if your device is Mellanox):\napiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: dpdk1 namespace: openshift-sriov-network-operator spec: deviceType: vfio-pci # change to netdevice if Mellanox nicSelector: netFilter: openstack/NetworkID:55a54d05-9ec1-4051-8adb-1b5a7be4f1b6 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \u0026#39;true\u0026#39; numVfs: 1 priority: 99 resourceName: dpdk1 isRdma: false # set to true if Mellanox You’ll need to configure the Network ID that matches your DPDK network in OpenStack.\nOVS Hardware Offload Open-vSwitch is CPU intensive, which affects system performance and prevents available bandwidth from being fully utilized.\nSince OVS 2.8, a feature called OVS Hardware Offload is available. It improves performance significantly by offloading tasks to the NIC hardware. 
OpenStack has full compatibility with this feature and the SR-IOV operator can now take advantage of it.\nHere is an example of a SriovNetworkNodePolicy that can be used:\napiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: hwoffload1 namespace: openshift-sriov-network-operator spec: deviceType: netdevice nicSelector: netFilter: openstack/NetworkID:55a54d05-9ec1-4051-8adb-1b5a7be4f1b6 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \u0026#39;true\u0026#39; numVfs: 1 priority: 99 resourceName: hwoffload1 isRdma: true For now, we only support certain types of devices from the Mellanox vendor.\nAlso, you’ll need to configure the Network ID that matches your offloaded network for OpenStack.\nWrap-up The SR-IOV Network operator was extended to support essential use-cases for OpenStack, so the workloads can use FDP features. All the features are available in the upstream operator. If you’re an OpenShift user, they’ll be available to you in the 4.11 release and backported to 4.10 in the next zstream, so stay tuned!\n","permalink":"https://my1.fr/blog/sriov-network-operator-improvements-openstack/","summary":"\u003cp\u003eStay tuned on our recent achievements in the Kubernetes and OpenStack space when running Fast-Datapath applications.\u003c/p\u003e","title":"SR-IOV network operator improvements for OpenStack"},{"content":"Read this post to learn how to update a container in TripleO on a live system.\nNote: this might sound like surgery, but I think this is the cleanest option to patch container images in TripleO.\nYour TripleO cloud is running and you want to update an rpm in one or multiple containers?\nTripleO provides a CLI to build new container images with the rpms that you want. This procedure is also documented here.\nIn this particular example, we will update the python3-networking-ovn rpm on octavia_api.\nYou need a host to build the image: The easiest place is the Undercloud or the Standalone node, where Buildah and tripleoclient are installed. We\u0026rsquo;ll build the image from that host.\nPut your rpms in a directory: e.g. in /tmp/rpms\nExport the OpenStack admin credentials: e.g.: export OS_CLOUD=standalone Login to the registry (when using OSP): podman login registry.redhat.io Build the new container image for octavia_api: openstack tripleo container image hotfix \\ --image registry.redhat.io/rhosp-rhel8/openstack-octavia-api:16.2 \\ --rpms-path /tmp/rpms \\ --tag 16.2-customfix You should see the new image by running buildah images.\nNow you\u0026rsquo;ll need to push the image to a registry (yours, or the TripleO registry): e.g. :\nbuildah push registry.redhat.io/rhosp-rhel8/openstack-octavia-api:16.2-16.2-customfix docker://quay.io/emilien/openstack-octavia-api:16.2-customfix Now, there are two methods for deploying that new image.\nRun the deploy command again, after updating the ContainerOctaviaApiImage parameter in the TripleO environment Run the following steps: You need to figure out what\u0026rsquo;s the TripleO step where Octavia is deployed (it\u0026rsquo;s step 4), by looking on the host in /var/lib/tripleo-config/container-startup-config and grepping for octavia_api.\nNow, go on the host where you want to use that new image (in the case of Standalone, it\u0026rsquo;s the same host where you built the image) and create an Ansible playbook with this content (e.g. 
paunch.yaml):\n- hosts: localhost become: true vars: service_name: octavia_api tasks: - name: Stop and clean the old container command: systemctl stop {{ service_name }} \u0026amp;\u0026amp; podman rm {{ service_name }} - name: Start containers for step 4 paunch: config: /var/lib/tripleo-config/container-startup-config/step_4/hashed-{{ service_name }}.json config_overrides: octavia_api: image: quay.io/emilien/openstack-octavia-api:16.2-customfix config_id: tripleo_step4 cleanup: false action: apply Change the content for your needs (different step, image, etc).\nRun Ansible with:\nansible-playbook paunch.yaml Your container is now running with your custom image (check with podman inspect).\nFor more details or help, check out the TripleO manuals or ask for help on IRC #tripleo (OFTC now).\n","permalink":"https://my1.fr/blog/patching-containers-in-tripleo/","summary":"\u003cp\u003eRead this post to learn how to update a container in TripleO on a live system.\u003c/p\u003e","title":"Patching containers in TripleO"},{"content":"Have a look at how we can move container images from the docker.io registry to quay.io.\nThanks to Skopeo, we can copy container images from one registry to another.\nIn this post, we\u0026rsquo;ll copy images from docker.io to quay.io, a container registry which has a lot of features that docker.io doesn\u0026rsquo;t provide. Two of them that I really like are:\nList and manage image vulnerabilities and other security information Manage the manifests of an image If you want more information, check out their documentation.\nI wrote a small script that one can use to automate the copy of images.\nBefore running the script:\nGet an OAuth token from: https://quay.io/organization/[your-org]?tab=applications Change the token, namespace, containers and tag (if needed) If your docker.io registry requires authentication, you\u0026rsquo;ll need to run podman login docker.io (the --src-creds option could also be used with Skopeo) You\u0026rsquo;ll need to authenticate against your quay.io registry with podman login quay.io (the --dest-creds option could also be used with Skopeo) #!/bin/sh set -ex # get OAuth token from https://quay.io/organization/[your-org]?tab=applications token=\u0026#39;secrete\u0026#39; namespace=yourorg containers=\u0026#39;app1 app2\u0026#39; tag=latest retry() { local -r -i max_attempts=\u0026#34;$1\u0026#34;; shift local -r cmd=\u0026#34;$@\u0026#34; local -i attempt_num=1 until $cmd do if ((attempt_num==max_attempts)) then echo \u0026#34;Attempt $attempt_num failed and there are no more attempts left!\u0026#34; return 1 else echo \u0026#34;Attempt $attempt_num failed! 
Trying again in $attempt_num seconds...\u0026#34; sleep $((attempt_num++)) fi done } for container in $containers; do # create empty public repo first otherwise skopeo will create the image as private curl -X POST https://quay.io/api/v1/repository \\ -d \u0026#39;{\u0026#34;namespace\u0026#34;:\u0026#34;\u0026#39;$namespace\u0026#39;\u0026#34;,\u0026#34;repository\u0026#34;:\u0026#34;\u0026#39;$container\u0026#39;\u0026#34;,\u0026#34;description\u0026#34;:\u0026#34;Container image \u0026#39;$container\u0026#39;\u0026#34;,\u0026#34;visibility\u0026#34;:\u0026#34;public\u0026#34;}\u0026#39; \\ -H \u0026#39;Authorization: Bearer \u0026#39;$token\u0026#39;\u0026#39; -H \u0026#34;Content-Type: application/json\u0026#34; # workaround if quay.io returns 500 error, likely due to an internal bug when using skopeo against docker.io copy=\u0026#34;skopeo copy docker://docker.io/$namespace/$container:$tag docker://quay.io/$namespace/$container:$tag\u0026#34; retry 5 $copy done As you can see, there are 2 unusual things in this script:\nThe curl creates an empty public image otherwise quay.io would create a private image by default when copying the image with Skopeo. As far as I know, there is no option in quay.io to change the default policy. Of course, remove it if you don\u0026rsquo;t want your image to be public by default. The retry mechanism is to work around the 500 error that you might get when it provisions a new repository, and it says it already exists (sounds specific to how the registry receives authentication from Skopeo vs the Docker CLI). Enjoy Skopeo \u0026amp; quay.io!\n","permalink":"https://my1.fr/blog/moving-container-images-from-docker-io-to-quay-io/","summary":"\u003cp\u003eHave a look at how we can move container images from the docker.io registry to quay.io.\u003c/p\u003e","title":"Moving container images from docker.io to quay.io"},{"content":"I finally took some time to write some thoughts about what Leadership means to me.\nIn April 2017, I was very lucky to attend the Leadership training, organized by Zingtrain, paid for by the OpenStack Foundation and sponsored by my employer (Red Hat), who paid for the trip to Ann Arbor. Thank you to all of them! I also would like to thank Colette Alexander who made this happen.\nIn this blog post, I’ll explain what I’ve learned and what I took away during this training but also during my career; I’ll also give some personal opinions that only engage myself and nobody else.\nFour {Levels, Stages} of Learning Being a leader starts with the willingness to learn. Let’s start with the four levels of learning:\nListening, Reflecting, Assimilating and acting, Teaching (Repeat)\nThe things mentioned during the training were very close to how I personally learned how to be an Open-Source contributor. It starts by listening around you. It was a little bit frustrating for me at the beginning not to be able to quickly take action when new ideas came up, but being patient is really worth it.\nThe time to reflect is important to assimilate what happens out there: “what people do” and “why”, “how do they work together” and “how my contribution would fit in there” are the biggest questions I ask myself most of the time I’m jumping into something new for me.\nThen it’s time for action. That time is really interesting because it’s very exciting at the beginning when contributing for the first time to a project, but can also be frustrating when getting the first feedback on this contribution. 
It’s like an “emotional elevator” where you go from the total happiness of finally feeling useful in this project to “I’m so frustrated, the way I proposed my idea was rejected, I just want to trash everything and run away”. This moment is, to me, very crucial, and usually I manage to get my frustration out by going for a run or doing some other activity that I like. Coming back to the keyboard, I take time to retrospect and see how I can do better next time.\nNow that you’re part of the project and you know how to contribute, the work is not finished. Quite often I see projects where it’s hard to join the team because there is simply nobody willing to take the time to explain the real basics to you. Note: on the other side, it also comes with the capacity of saying “I don’t know” (yeah it happens, period.) and learners have to be ready to be mentored. Anyway, if you know something, teach it so more people will know it and your project will remain a cool place to work.\nLet’s talk about the four stages of the learning journey.\nIt starts by being Unconsciously Incompetent. You underestimate the skills required to contribute and you jump into this hole without knowing that it’s not going to be easy. This stage is usually fast: you become consciously incompetent and realize it won’t be so easy. Don’t give up and go learning; you’ll become consciously competent (when you start to be productive and teach what you’ve learned). And then comes the time to be unconsciously competent. If you didn’t start to teach the skill to someone else, it’s never too late to do it. If you want to read my personal experience of being a Project Lead in the OpenStack community, I wrote a blog post that mentions these Learning steps.\nThe importance of a vision There are different versions out there of what a Vision is. A Vision is not a Mission Statement nor a Strategic Plan. My definition from what I’ve seen and learned over the last years would be: “a vision tells a successful story about what you want to be and where you want to go”.\nAn effective Vision is:\nwritten collectively (where all individuals part of the story can contribute) inspiring people who work with you but also externals strategically sound documented and communicated It starts with taking your pen and writing your first draft yourself on paper. I find it important to highlight “you” and “draft” because to me a good Vision takes time and iterations to be well written by yourself and not by any consultant.\nDuring the training, for the first time I wrote a vision of my life in 1 year and I found the exercise interesting. Also, when I came back I started this work with my team at Red Hat. So far it has been very helpful to document where the team wants to go.\nGood Leaders offer great service to staff A good Leader is not a boss, nor a Chief. A Leader is a human who does their best to serve a team working on a common purpose. Over the last 5 years, the people who inspired me were Leaders of some sort. They help others to be better, share their knowledge, accept failures and learn from them.\nTo me, a good Leader is someone able to drive a project to success without taking any decision, but instead influencing her / his peers by engaging collaboration to make the work happen.\nSomething we learned during the training: Power = 1 / ( Authority x Frequency of use )\nAlso, two things I’ve learned over the last years that were also confirmed during the training:\nMultitasking doesn’t work. 
Being a Leader doesn’t mean you have to be busier than others so you can do multiple things at the same time. First of all: everyone is busy (period again.); Second: it’s impossible for most brains to successfully perform multiple tasks at the same time. High performance has nothing to do with skills. It’s a matter of how much your team shares a common understanding of how they can work together for a specific purpose (“It’s easy to do the right thing, but hard to do the bad things”). Working fourteen hours per day is not efficient and knowing everything doesn’t mean you’re a good Leader. Bottom-line change is leadership I’m convinced that multiple methods exist to be a great leader and bring new ideas. One of them might be the BLC (Bottom-line change). It appears to be useful when you (the leader) want to bring a new idea to your team.\nFirst of all, you need to make sure you’ll have some time to dedicate because people won’t always buy your idea so quickly. You need to prepare your idea: write some background, define a problem to solve, and if possible get some valid data to justify your proposal.\nOne of the key things is to get the right people involved in your idea. If your idea is a new feature, get all stakeholders involved (one person per group is enough), and rewrite the idea with them, so all of them agree on it. This step is very useful so when you present results, people will recognize their interests since you asked the right people. Engage the microcosm to work on the vision and prepare a plan for the change. Share the results with your team and help them to implement the change by giving support and accepting feedback.\nAs a conclusion, I would define Leadership as a skill that you can’t learn only in books (but some books are very useful, like Being a Better Leader). You need to practice, try, fail, retrospect and try again. Being a leader in some tasks is very rewarding and in my opinion sometimes reduces frustration. Last but not least, being a good leader and going the extra mile can mean creating new leaders around you by sharing techniques, trusting and promoting people. Have fun!\n","permalink":"https://my1.fr/blog/what-leadership-means-to-me/","summary":"\u003cp\u003eI finally took some time to write some thoughts about what Leadership means to me.\u003c/p\u003e","title":"What Leadership Means To Me"},{"content":"This story explains why I decided to stop working in an anarchistic, multi-tasking, schedule-driven way and learnt how to become a good team leader.\nHow it started March 2015: the Puppet OpenStack project had just moved under the Big Tent. What a success for our group!\nOne of the first steps was to elect a Project Team Lead. Our group was pretty small (~10 active contributors) so we thought that the PTL would be just a facilitator for the group, and the liaison with other projects that interact with us. I mean, easy, right?\nAt that time, I was clearly an unconsciously incompetent PTL. I thought I knew what I was doing to drive the project to success.\nBut the situation evolved. I started to deal with things that I didn\u0026rsquo;t expect to deal with, like making sure our team worked together in a way that was efficient and consistent. I also realized nobody knew what a PTL was really supposed to do (at least in our group), so I took care of more tasks, like release management, organizing Summit design sessions, promoting core reviewers, and welcoming newcomers. That was the time when I realized I had become a consciously incompetent PTL. 
I was doing things that nobody taught me before.\nIn fact, there is no book telling you how to lead an OpenStack project so I decided to jump in this black hole and hopefully I would make mistakes so I can learn something.\nSet your own expectations I made the mistake of engaging myself into a role where expectations were not cleared with the team. The PTL guide is not enough to clear expectations of what your team will wait from you. This is something you have to figure out with the folks you\u0026rsquo;re working with. You would be surprised by the diversity of expectations that project contributors have for their PTL. Talk with your team and ask them what they want you to be and how they see you as a team lead. I don\u0026rsquo;t think there is a single rule that works for all projects, because of the different cultures in OpenStack community.\nEmbrace changes … and accept failures. There is no project in OpenStack that didn\u0026rsquo;t had outstanding issues (technical and human). The first step as a PTL is to acknowledge the problem and share it with your team. Most of the conflicts are self-resolved when everyone agrees that yes, there is a problem. It can be a code design issue or any other technical disagreement but also human complains, like the difficulty to start contributing or the lack of reward for very active contributors who aren\u0026rsquo;t core yet. Once a problem is resolved: discuss with your team about how we can avoid the same situation in the future. Make a retrospective if needed but talk and document the output.\nI continuously encourage at welcoming all kind of changes in TripleO so we can adopt new technologies that will make our project better.\nKeep in mind it has a cost. Some people will disagree but that\u0026rsquo;s fine: you might have to pick a rate of acceptance to consider that your team is ready to make this change.\nDelegate We are humans and have limits. We can\u0026rsquo;t be everywhere and do everything. We have to accept that PTLs are not supposed to be online 24/7. They don\u0026rsquo;t always have the best ideas and don\u0026rsquo;t always take the right decisions. This is fine. Your project will survive.\nI learnt that when I started to be PTL of TripleO in 2016. The TripleO team has become so big that I didn\u0026rsquo;t realize how many interruptions I would have every day. So I decided to learn how to delegate. We worked together and created TripleO Squads where each squad focus on a specific area of TripleO. Each squad would be autonomous enough to propose their own core reviewers or do their own meetings when needed. I wanted small teams working together, failing fast and making quick iterations so we could scale the project, accept and share the work load and increase the trust inside the TripleO team.\nThis is where I started to be a Consciously Competent PTL.\nWhere am I now I have reached a point where I think that projects wouldn\u0026rsquo;t need a PTL to run fine if they really wanted. Instead, I start to believe about some essential things that would actually help to get rid of this role:\nAs a team, define the vision of the project and document it. It will really help to know where we want to go and clear all expectations about the project. Establish trust to each individual by default and welcome newcomers. Encourage collective and distributed leadership. Try, Do, Fail, Learn, Teach. and start again. Don\u0026rsquo;t stale. This long journey helped me to learn many things in both technical and human areas. 
It has been awesome to work with such groups so far. I would like to spend more time on technical work (aka coding) but also in teaching and mentoring new contributors in OpenStack. Therefore, I won\u0026rsquo;t be PTL during the next cycle and my hope is to see new leaders in TripleO, who would come up with fresh ideas and help us to keep TripleO rocking.\nThanks for reading so far, and also thanks for your trust.\n","permalink":"https://my1.fr/blog/my-journey-as-an-openstack-ptl/","summary":"\u003cp\u003eThis story explains why I started to stop working as a anarchistic-multi-tasking-schedule-driven and learnt how to become a good team leader.\u003c/p\u003e","title":"My Journey As An OpenStack PTL"},{"content":" I don\u0026rsquo;t post much about my personal life, but I\u0026rsquo;m a dual-citizen (french and canadian) living in Quebec-City. Outside work, I love outdoors, spending time in family, flying aircrafts (private pilot) and a bunch of other things. My professional path is frequently updated on my Linkedin profile. I sometimes post on Twitter. If you want to reach out, please send me an email. ","permalink":"https://my1.fr/about-me/","summary":"\u003cul\u003e\n\u003cli\u003eI don\u0026rsquo;t post much about my personal life, but I\u0026rsquo;m a dual-citizen (french and canadian) living in Quebec-City. Outside work, I love outdoors, spending time in family, flying aircrafts (private pilot) and a bunch of other things.\u003c/li\u003e\n\u003cli\u003eMy professional path is frequently updated on my \u003ca href=\"https://www.linkedin.com/in/emilienmacchi\"\u003eLinkedin profile\u003c/a\u003e.\u003c/li\u003e\n\u003cli\u003eI sometimes post on \u003ca href=\"https://twitter.com/EmilienMacchi\"\u003eTwitter\u003c/a\u003e.\u003c/li\u003e\n\u003cli\u003eIf you want to reach out, please send me an \u003ca href=\"mailto:emacchi@pm.me\"\u003eemail\u003c/a\u003e.\u003c/li\u003e\n\u003c/ul\u003e","title":"About me"}]
\ No newline at end of file
+[{"content":"AI-powered coding assistants are revolutionizing the way developers write software. By providing contextual code suggestions and reducing repetitive tasks, these tools can significantly increase productivity. In this post, we\u0026rsquo;ll explore how AI-assisted coding works, its benefits, potential challenges, and tips for using it effectively in your development workflow. I\u0026rsquo;ll also share examples with two solutions and compare them so it\u0026rsquo;ll help you to make decisions on what tools to use.\nImportant Disclaimer The usage of coding assistants is often restricting by company policies. As a Red Hat employee, I am not allowed to leverage AI to contribute to our products and I want to make clear that this article is for information sharing only. My experiments are only on personal projects and the thoughts shared in this article are only my own.\nThe Rise of AI-Assisted Coding In recent years, machine learning models trained on vast amounts of code have evolved into free or commercial tools, which integrate seamlessly with popular code editors. These tools use natural language processing and deep learning to provide real-time code suggestions as you write. Some of these tools also provide a chatbot which can help to explain some code or even generate it for you based on the questions.\nCoding Assistants analyze the code context and provide auto-suggestions, complete code snippets, or even entire functions. By learning from repositories (public and/or private), these assistants understand programming patterns and offer relevant solutions.\nBenefits of Using AI Coding Assistants AI coding assistants can have a huge positive impact on the software development lifecycle. Some of the key benefits include:\nIncreased productivity: by suggesting common code snippets, functions, and boilerplate code, AI tools can speed up the development process. You spend less time on repetitive tasks and more time solving complex problems. Reduced syntax errors: AI suggestions can minimize syntax errors by offering code that adheres to best practices and standards. Note that generated syntax is not always right, which is why results are better with a powerful code editor. Learning opportunity: developers can learn new APIs, frameworks, and approaches by exploring the suggestions provided by the AI assistant. Enhanced focus: instead of switching between the IDE and browser for documentation or Stack Overflow answers, developers can stay focused on the code, with instant context-aware help. Challenges and Limitations Like anything with AI, coding assistants offer many benefits, they also come with certain challenges and limitations.\nCode quality concerns: tools suggest solutions that work but are not optimized or could introduce technical debt. The responsibility still lies with the developer to review and refine the code. I think that robots aren\u0026rsquo;t (for now) able to replace humans to write code that is well architectured and following all best practices. Security risks: code assistants might suggest insecure coding patterns or code with vulnerabilities. Developers need to remain vigilant, especially when it comes to security-critical code. I\u0026rsquo;ve experienced a few times where the assistant suggested very dangerous snippets involving the deletion of system files for example. Over-reliance: developers could become overly reliant on these tools, which could hinder deeper understanding of the code they write. 
This could be problematic for complex or novel problems where the AI may not provide helpful suggestions, especially when you work on new products and you need to create new concepts. Privacy and IP Issues: some concerns have been raised regarding how AI tools leverage open-source code and whether suggestions violate intellectual property or licensing agreements. In this post we\u0026rsquo;ll compare two models where one is open-source. Best Practices for Using AI Coding Assistants To get the most out of AI-powered coding tools, it’s important to follow a few best practices:\nUse AI as a complement, not a crutch: AI should assist in the development process but not replace thoughtful coding. Use the suggestions as a foundation, but always review and customize the code to fit your specific use case. Understand the code: don\u0026rsquo;t blindly accept suggestions. Make sure you understand the code being suggested and test it thoroughly. Focus on complex problem solving: let AI handle the repetitive or boilerplate tasks so you can spend more time on complex logic and design decisions. Stay up to date on best practices: AI tools will continue to improve, but developers need to stay informed about programming best practices, especially around security, performance, and maintainability. Keep learning, take trainings, review others\u0026rsquo; code and also keep up with the models you\u0026rsquo;ve been using for code assistance. New releases often offer interesting features. My toolbox I\u0026rsquo;ve been playing with code generation for quite a while now and I want to share a couple of tools that I use:\nGithub Copilot: a commercial AI-powered coding assistant that provides real-time code suggestions directly within your code editor. Granite Code: open-source decoder-only models designed for code generative tasks, trained with code written in 116 programming languages. I won\u0026rsquo;t explain how to set up your IDE to use these tools; I suggest reading their official documentation. Note that to run Granite models locally, you\u0026rsquo;ll certainly need a GPU if you want the code assistant to really be usable.\nBefore we start This is not a detailed comparison between the two models but rather a quick look at what these tools can offer, so you get an idea if you haven\u0026rsquo;t tried them yet.\nBenchmarking the models is not the goal here.\nGithub Copilot Copilot offers two extensions:\nGithub Copilot: provides inline coding suggestions as you type. GitHub Copilot Chat: provides conversational AI assistance. Let\u0026rsquo;s use the chat first:\nThe generated snippet just works.\nNow let\u0026rsquo;s see if we can print the Python version directly by commenting a function that doesn\u0026rsquo;t exist yet.\nHere is another example involving basic mathematics:\nNow let\u0026rsquo;s ask Copilot to generate unit tests for that script (using the \u0026ldquo;Generate tests\u0026rdquo; button):\nGranite Now let\u0026rsquo;s play with the Granite 8B code model. I\u0026rsquo;ve installed the Continue extension which provides both inline coding suggestions (if the model permits) and a chat.\nAs previously done, let\u0026rsquo;s first talk to the chat to initiate the script.\nThe generated script does the job, with almost the same result.
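The original post illustrated these prompts with screenshots, which are not reproduced here. As a rough illustration only, here is a minimal Python sketch of the kind of task being discussed in these examples; the function name, the sample lists and the assertion are assumptions, not the actual generated code:
def odd_numbers(list1, list2):
    # Return the odd numbers found in either list, in order.
    return [n for n in list1 + list2 if n % 2 != 0]
def test_odd_numbers():
    # A test along these lines must not forget odd values such as 5 in the
    # first list; the generated test discussed below missed exactly that.
    assert odd_numbers([2, 5, 8], [1, 4, 7]) == [5, 1, 7]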
You\u0026rsquo;ll notice that it didn\u0026rsquo;t create a function but rather printed directly what we asked for.\nLet\u0026rsquo;s see if it can find out what version of Python is running:\nAnd the other example:\nUnit tests:\nAs you may have noticed, the test will fail: it missed 5, which is in the first list and is odd. The test wasn\u0026rsquo;t as extensive as the one Copilot suggested and, in this case, wasn\u0026rsquo;t working.\nMy take on the models Again, the examples were very basic (on purpose) so you can get a quick overview of what to expect. Let\u0026rsquo;s say it: comparing Granite 8B with Copilot isn\u0026rsquo;t fair. They aren\u0026rsquo;t the same at all and don\u0026rsquo;t have the same capabilities. However, better results could be obtained with Granite if we used the 20- or 34-billion-parameter models. For an open-source model, it\u0026rsquo;s already providing really good results and can help to increase your productivity.\nConclusion AI-powered tools are transforming the way we write software by making coding more efficient, accessible, and collaborative. However, they\u0026rsquo;re not a silver bullet. Developers should use these tools thoughtfully, balancing the convenience of AI assistance with the need to maintain control and understanding of the code they write.\nBy adopting AI-assisted coding into your workflow, you can boost productivity and learn new approaches, but don\u0026rsquo;t forget that good coding practices and creativity will always remain central to successful software development.\nNow it\u0026rsquo;s up to you to test the tools and find models that fit your needs (and company policy!), which will hopefully help you.\nNote: the image of this post was generated by Llama3 using the Facebook chat. The prompt was \u0026ldquo;generate an image of a developer assisted by a robot\u0026rdquo;.\n","permalink":"https://my1.fr/blog/ai-assisted-coding-tools/","summary":"\u003cp\u003eAI-powered coding assistants are revolutionizing the way developers write software.\nBy providing contextual code suggestions and reducing repetitive tasks, these tools\ncan significantly increase productivity.\nIn this post, we\u0026rsquo;ll explore how AI-assisted coding works, its benefits, potential challenges,\nand tips for using it effectively in your development workflow. I\u0026rsquo;ll also share\nexamples with two solutions and compare them to help you decide which tools\nto use.\u003c/p\u003e","title":"How AI-Assisted is Transforming Software Development"},{"content":"This is a quick tutorial (mainly brain dump) on how I\u0026rsquo;m using Tilt to quickly iterate over my cluster-api-provider-openstack work.\nBefore you continue I won\u0026rsquo;t go into what CAPO, CAPI, Kind, ctlptl and Tilt are and how they work. I\u0026rsquo;ve just learnt about Tilt so this post will probably be updated from time to time. My environment always runs on latest stable Fedora, and latest dependencies (Kind, ctlptl, Tilt, etc). Check that your tools meet the latest requirements.
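A quick way to verify what you are running is each tool\u0026rsquo;s standard version subcommand (these exist for all the CLIs mentioned, though output formats vary):
kind version
ctlptl version
tilt version
podman version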
Podman A couple of things I had to do regarding Podman:\nEnable the Podman socket: systemctl --user enable --now podman.socket And then in my zshrc I add:\nexport DOCKER_HOST=unix:///run/user/$(id -u)/podman/podman.sock Allow the local registry to be insecure, by editing /etc/containers/registries.conf and add: [[registry]] location = \u0026#34;localhost:5000\u0026#34; insecure = true Running out of inotify resources My default Fedora had too low values for both max_user_watches and max_user_instances, so I tweaked them a bit:\nsudo sysctl fs.inotify.max_user_watches=524288 sudo sysctl fs.inotify.max_user_instances=512 Deploy the Kind management cluster ctlptl create registry ctlptl-registry --port=5000 ctlptl create cluster kind --registry=ctlptl-registry I found ctlptl super useful as it handles the container registry, but you can also simply use Kind directly and deploy your own registry or e.g. use quay.io.\nCreate a Secret for clouds.yaml For now I\u0026rsquo;m creating the secret \u0026ldquo;manually\u0026rdquo;, but I know Tilt can do it for us.\nexport CLUSTER_NAME=dev export CAPO_DIRECTORY=~/go/src/github.com/kubernetes-sigs/cluster-api-provider-openstack # replace `my_cloud` by the name of your cloud in clouds.yaml source $CAPO_DIRECTORY/templates/env.rc ~/.config/openstack/clouds.yaml my_cloud cat \u0026lt;\u0026lt;EOF | kubectl apply -f - apiVersion: v1 data: cacert: ${OPENSTACK_CLOUD_CACERT_B64} clouds.yaml: ${OPENSTACK_CLOUD_YAML_B64} kind: Secret metadata: labels: clusterctl.cluster.x-k8s.io/move: \u0026#34;true\u0026#34; name: ${CLUSTER_NAME}-cloud-config EOF Prepare CAPI You need to create tilt-settings.yaml in the CAPI directory. This is an example of what it could look like:\nbuild_engine: podman kind_cluster_name: kind provider_repos: - ../cluster-api-provider-openstack enable_providers: - openstack - kubeadm-bootstrap - kubeadm-control-plane debug: openstack: port: 31000 kustomize_substitutions: CLUSTER_TOPOLOGY: \u0026#34;true\u0026#34; CLUSTER_NAME: \u0026#34;dev\u0026#34; OPENSTACK_SSH_KEY_NAME: \u0026#34;emilien\u0026#34; OPENSTACK_BASTION_FLAVOR: \u0026#34;m1.large\u0026#34; OPENSTACK_CONTROL_PLANE_MACHINE_FLAVOR: \u0026#34;m1.large\u0026#34; OPENSTACK_NODE_MACHINE_FLAVOR: \u0026#34;m1.large\u0026#34; OPENSTACK_FAILURE_DOMAIN: \u0026#34;nova\u0026#34; OPENSTACK_IMAGE_NAME: \u0026#34;ubuntu-2204-kube-v1.28.5\u0026#34; OPENSTACK_CLOUD: foch_openshift OPENSTACK_DNS_NAMESERVERS: \u0026#34;1.1.1.1\u0026#34; NAMESPACE: \u0026#34;default\u0026#34; KUBERNETES_VERSION: \u0026#34;v1.28.5\u0026#34; template_dirs: openstack: - ../cluster-api-provider-openstack/templates Configure Visual Studio Code In the CAPO directory, create .vscode/launch.json:\n{ \u0026#34;version\u0026#34;: \u0026#34;0.2.0\u0026#34;, \u0026#34;configurations\u0026#34;: [ { \u0026#34;name\u0026#34;: \u0026#34;Connect to OpenStack provider\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;go\u0026#34;, \u0026#34;request\u0026#34;: \u0026#34;attach\u0026#34;, \u0026#34;mode\u0026#34;: \u0026#34;remote\u0026#34;, \u0026#34;port\u0026#34;: 31000, \u0026#34;host\u0026#34;: \u0026#34;127.0.0.1\u0026#34;, \u0026#34;showLog\u0026#34;: true, \u0026#34;trace\u0026#34;: \u0026#34;log\u0026#34; } ] } Make sure you have the Go extension installed; you also need to install Delve, a debugger for Go.\nAfter that you can add breakpoints to your code and debug. Have a look at this guide for useful content.\nRun Tilt!
tilt up Here is the URL to follow what Tilt will do, but basically it will do everything under the hood so when you change something in CAPO or CAPI or Tilt config, it\u0026rsquo;ll rebuild images and redeploy them in the management cluster.\nTo deploy a workload cluster, I do it from the UI:\nIn CAPO.clusterclasses, I apply the dev-test ClusterClass. In CAPO.templates, I create a development cluster. The cluster will now be deployed.\n","permalink":"https://my1.fr/blog/developing-cluster-api-provider-openstack-with-tilt/","summary":"\u003cp\u003eThis is a quick tutorial (mainly brain dump) on how I\u0026rsquo;m using Tilt to quickly iterate over my cluster-api-provider-openstack work.\u003c/p\u003e","title":"Developing cluster-api-provider-openstack with Tilt"},{"content":"This is my second post of a series which will cover how you can distribute your OpenShift cluster across multiple datacenter domains and increase availability and performance of your control plane.\nBackground If you haven\u0026rsquo;t read it, please have a look at the first post.\nFailure Domains Failure Domains help to spread the OpenShift control plane across multiple (at least 3) domains where each domain has a defined storage / network / compute configuration. In a modern datacenter, each domain has its own power unit, network and storage fabric, etc. If a domain goes down, it won\u0026rsquo;t have an impact on the workloads since the other domains are healthy and the services are deployed in HA.\nIn this context, we think that the SLA of OpenShift can be significantly increased by deploying the OpenShift cluster (control plane and workloads) across at least 3 domains.\nIn OCP 4.13, Failure Domains will be TechPreview (not supported) but you can still test them. We plan to make them supported in a future release.\nIf you remember the previous post, we were deploying OpenShift within one domain, with one external load balancer. Now that we have Failure Domains, let\u0026rsquo;s deploy 3 external LBs (one in each domain) and then a cluster that is distributed over 3 domains.\nPre-requisites At least 3 networks and subnets (can be tenant or provider networks) have to be pre-created. They need to be reachable from where Ansible will be run. The machines used for the LB have to be deployed on CentOS9 (this is what we test).\nDeploy your own Load-Balancers In our example, we\u0026rsquo;ll deploy one LB per leaf, which is in its own routed network.
Therefore, we\u0026rsquo;ll deploy 3 load balancers.\nLet\u0026rsquo;s deploy!\nCreate your Ansible inventory.yaml file:\n--- all: hosts: lb1: ansible_host: 192.168.11.2 config: lb1 lb2: ansible_host: 192.168.12.2 config: lb2 lb3: ansible_host: 192.168.13.2 config: lb3 vars: ansible_user: cloud-user ansible_become: true Create the Ansible playbook.yaml file:\n--- - hosts: - lb1 - lb2 - lb3 tasks: - name: Deploy the LBs include_role: name: emilienm.routed_lb Write the LB configs in Ansible vars.yaml:\n--- configs: lb1: bgp_asn: 64998 bgp_neighbors: - ip: 192.168.11.1 password: f00barZ services: \u0026amp;services - name: api vips: - 192.168.100.240 min_backends: 1 healthcheck: \u0026#34;httpchk GET /readyz HTTP/1.0\u0026#34; balance: roundrobin frontend_port: 6443 haproxy_monitor_port: 8081 backend_opts: \u0026#34;check check-ssl inter 1s fall 2 rise 3 verify none\u0026#34; backend_port: 6443 backend_hosts: \u0026amp;lb_hosts - name: rack1-10 ip: 192.168.11.10 - name: rack1-11 ip: 192.168.11.11 - name: rack1-12 ip: 192.168.11.12 - name: rack1-13 ip: 192.168.11.13 - name: rack1-14 ip: 192.168.11.14 - name: rack1-15 ip: 192.168.11.15 - name: rack1-16 ip: 192.168.11.16 - name: rack1-17 ip: 192.168.11.17 - name: rack1-18 ip: 192.168.11.18 - name: rack1-19 ip: 192.168.11.19 - name: rack1-20 ip: 192.168.11.20 - name: rack2-10 ip: 192.168.12.10 - name: rack2-11 ip: 192.168.12.11 - name: rack2-12 ip: 192.168.12.12 - name: rack2-13 ip: 192.168.12.13 - name: rack2-14 ip: 192.168.12.14 - name: rack2-15 ip: 192.168.12.15 - name: rack2-16 ip: 192.168.12.16 - name: rack2-17 ip: 192.168.12.17 - name: rack2-18 ip: 192.168.12.18 - name: rack2-19 ip: 192.168.12.19 - name: rack2-20 ip: 192.168.12.20 - name: rack3-10 ip: 192.168.13.10 - name: rack3-11 ip: 192.168.13.11 - name: rack3-12 ip: 192.168.13.12 - name: rack3-13 ip: 192.168.13.13 - name: rack3-14 ip: 192.168.13.14 - name: rack3-15 ip: 192.168.13.15 - name: rack3-16 ip: 192.168.13.16 - name: rack3-17 ip: 192.168.13.17 - name: rack3-18 ip: 192.168.13.18 - name: rack3-19 ip: 192.168.13.19 - name: rack3-20 ip: 192.168.13.20 - name: ingress_http vips: - 192.168.100.250 min_backends: 1 healthcheck: \u0026#34;httpchk GET /healthz/ready HTTP/1.0\u0026#34; frontend_port: 80 haproxy_monitor_port: 8082 balance: roundrobin backend_opts: \u0026#34;check check-ssl port 1936 inter 1s fall 2 rise 3 verify none\u0026#34; backend_port: 80 backend_hosts: *lb_hosts - name: ingress_https vips: - 192.168.100.250 min_backends: 1 healthcheck: \u0026#34;httpchk GET /healthz/ready HTTP/1.0\u0026#34; frontend_port: 443 haproxy_monitor_port: 8083 balance: roundrobin backend_opts: \u0026#34;check check-ssl port 1936 inter 1s fall 2 rise 3 verify none\u0026#34; backend_port: 443 backend_hosts: *lb_hosts - name: mcs vips: - 192.168.100.240 min_backends: 1 frontend_port: 22623 haproxy_monitor_port: 8084 balance: roundrobin backend_opts: \u0026#34;check check-ssl inter 5s fall 2 rise 3 verify none\u0026#34; backend_port: 22623 backend_hosts: *lb_hosts lb2: bgp_asn: 64998 bgp_neighbors: - ip: 192.168.12.1 password: f00barZ services: *services lb3: bgp_asn: 64998 bgp_neighbors: - ip: 192.168.13.1 password: f00barZ services: *services In this case, we deploy OpenShift on OpenStack which doesn\u0026rsquo;t support static IPs.
Therefore, we have to put all the available IPs from the subnets used for the machines in the HAproxy backends.\nInstall the role and the dependencies:\nansible-galaxy install emilienm.routed_lb,1.0.0 ansible-galaxy collection install ansible.posix ansible.utils Deploy the LBs:\nansible-playbook -i inventory.yaml -e \u0026#34;@vars.yaml\u0026#34; playbook.yaml Deploy OpenShift Here is an example of install-config.yaml:\napiVersion: v1 baseDomain: mydomain.test compute: - name: worker platform: openstack: type: m1.xlarge replicas: 1 controlPlane: name: master platform: openstack: type: m1.xlarge failureDomains: - portTargets: - id: control-plane network: id: fb6f8fea-5063-4053-81b3-6628125ed598 fixedIPs: - subnet: id: b02175dd-95c6-4025-8ff3-6cf6797e5f86 - portTargets: - id: control-plane network: id: 9a5452a8-41d9-474c-813f-59b6c34194b6 fixedIPs: - subnet: id: 5fe5b54a-217c-439d-b8eb-441a03f7636d - portTargets: - id: control-plane network: id: 3ed980a6-6f8e-42d3-8500-15f18998c434 fixedIPs: - subnet: id: a7d57db6-f896-475f-bdca-c3464933ec02 replicas: 3 metadata: name: mycluster networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 192.168.11.0/24 - cidr: 192.168.100.0/24 platform: openstack: cloud: mycloud machinesSubnet: 8586bf1a-cc3c-4d40-bdf6-c243decc603a apiVIPs: - 192.168.100.240 ingressVIPs: - 192.168.100.250 loadBalancer: type: UserManaged featureSet: TechPreviewNoUpgrade After the deployment, you\u0026rsquo;ll only have one worker in the first domain. To deploy more workers in other domains, you\u0026rsquo;ll have to create a MachineSet per domain (the procedure is well documented in OpenShift already).\nNote that for each Failure Domain, you have to provide the leaf network ID and its subnet ID as well. If you deploy with availability zones, you\u0026rsquo;ll be able to provide them in each domain. The documentation for this feature is in progress and I\u0026rsquo;ll update this post once we have it published.\nIf you\u0026rsquo;re interested in a demo, I recorded one here.\nKnown limitations Deploying OpenShift with static IPs for the machines is not supported with the OpenStack platform for now. Changing the IP address for any OpenShift control plane VIP (API + Ingress) is currently not supported. So once the external LB and the OpenShift cluster is deployed, the VIPs can\u0026rsquo;t be changed. Migrating an OpenShift cluster from the OpenShift managed LB to an external LB is currently not supported. Failure Domains are only for the control plane for now, and will be extended to the compute nodes.
Keep in mind that the features will be TechPreview at first and once they have reached some maturity, we\u0026rsquo;ll promote them to GA.\nWrap-up In this article, we combined two exciting features that will help to increase your SLA and also improve performance, not only on the control plane but also for the workloads.\nWe have already got positive feedback from various teams, who tested it at a large scale and demonstrated that in this scenario, OpenShift is more reliable, better load-balanced and distributed in case of failure.\nIn a future post, I want to cover how you can make your workloads more reliable by using MetalLB as a load balancer in BGP mode.\nI hope you liked it and please provide any feedback on the usual channels.\n","permalink":"https://my1.fr/blog/openshift-external-load-balancer-control-plane-with-failure-domains/","summary":"\u003cp\u003eThis is my second post of a series which will cover how you can distribute your OpenShift cluster across multiple datacenter domains and increase\navailability and performance of your control plane.\u003c/p\u003e","title":"Deploying OpenShift on OpenStack with an External Load-Balancer for your control plane in multiple Failure Domains"},{"content":"This is my first post of a series which will cover how you can distribute your OpenShift cluster across multiple datacenter domains and increase availability and performance of your control plane.\nBackground Originally the on-premise OpenShift IPI architecture was designed to deploy an internal (called OpenShift Managed) load balancer based on HAproxy and Keepalived. However when you want to distribute your cluster across multiple failure domains, your control plane has to be deployable on multiple L2 networks, which are usually isolated per rack, and routed with protocols like BGP.\nStretched vs L3 networks A single stretched L2 network brings challenges:\nNetwork latency is not predictable Traffic bottlenecks L2 domain failures Network management complexity Smaller (L3 routed) networks however have these benefits:\nOptimize East-West traffic Low and predictable latency Easier to extend and manage Failure domain isolated to a network Non blocking network fabric OpenShift Managed Load-Balancer For on-prem platforms (VSphere, Baremetal, OpenStack, Ovirt and Nutanix), the control plane load balancer is based on HAproxy and Keepalived. It means that the control plane VIPs (for API \u0026amp; Ingress services) will be managed in Active/Passive mode. The Keepalived master (elected by VRRPv2) will host the VIPs and therefore all the API \u0026amp; Ingress traffic will always go through one node and then be load-balanced across the control plane. This bottleneck has been an issue at large scale.\nAlso, Keepalived doesn\u0026rsquo;t deal with L3 routing, so if the VIPs aren\u0026rsquo;t within the same subnet as the L2 networks, the network fabric can\u0026rsquo;t know where the VIPs actually are.\nUser Managed Load-Balancer When we initially looked at the limitations of the OpenShift Managed Load-Balancer, we thought we would just add BGP to the OpenShift control plane, so the VIPs could be routed across the datacenter. You can have a look at this demo that shows how it would work.
After the initial proposal which brought up a lot of good ideas, it was decided that for now we would rather try to externalize the Load-Balancer and let the customers deal with it, rather than implementing something new in OpenShift (I\u0026rsquo;ll come back to it in the wrap-up).\nIndeed, a lot of our customers already have (enterprise-grade) load balancers that they use for their workloads. Some of them want to re-use these appliances and manage the OpenShift control plane traffic with them.\nWe realized that some of them want BGP, some of them don\u0026rsquo;t. Some want to keep stretched L2 networks, some don\u0026rsquo;t. There were a lot of decisions we would have had to make if we had implemented BGP within the OpenShift control plane, so we decided that for now we would allow the use of an external (user-managed) load balancer, like it\u0026rsquo;s already the case for the workloads themselves (e.g. with MetalLB).\nMore details on the design can be found in this OpenShift enhancement.\nDeploy your own Load-Balancer I want to share how someone can deploy a load balancer that will be used by the OpenShift control plane. For that, I\u0026rsquo;ve decided to create an Ansible role named ansible-role-routed-lb.\nThis will deploy an advanced Load-Balancer capable of managing routed VIPs with FRR (using BGP) and load-balancing traffic with HAproxy.\nThe role will do the following:\nIf BGP neighbors are provided in the config, it\u0026rsquo;ll deploy FRR and peer with your BGP neighbor(s). If the VIPs are created on the node, they\u0026rsquo;ll be routed in your infrastructure. Deploy HAproxy to load-balance and monitor your services. If the VIPs are provided in the config, they will be created if a minimum number of backend(s) are found healthy for a given service, and therefore routed in BGP if FRR is deployed. They will be removed if no backend was found healthy for a given service, and therefore not routed in BGP if FRR is deployed. So if you\u0026rsquo;re hosting multiple Load-Balancers, your OpenShift control plane traffic will be:\nrouted thanks to BGP if FRR is deployed load-balanced and highly available at the VIP level thanks to BGP if FRR is deployed load-balanced between healthy backends thanks to HAproxy Let\u0026rsquo;s deploy it!\nIn this blog post, we won\u0026rsquo;t cover the Failure Domains yet, and will deploy OpenShift within a single Leaf.
Therefore, we\u0026rsquo;ll deploy only one load balancer.\nCreate your Ansible inventory.yaml file:\n--- all: hosts: lb: ansible_host: 192.168.11.2 ansible_user: cloud-user ansible_become: true 192.168.11.2 is the IP address of the load balancer.\nCreate the Ansible playbook.yaml file:\n--- - hosts: lb vars: config: lb tasks: - name: Deploy the LBs include_role: name: emilienm.routed_lb Write the LB configs in Ansible vars.yaml:\n--- configs: lb: bgp_asn: 64998 bgp_neighbors: - ip: 192.168.11.1 password: f00barZ services: - name: api vips: - 192.168.100.240 min_backends: 1 healthcheck: \u0026#34;httpchk GET /readyz HTTP/1.0\u0026#34; balance: roundrobin frontend_port: 6443 haproxy_monitor_port: 8081 backend_opts: \u0026#34;check check-ssl inter 1s fall 2 rise 3 verify none\u0026#34; backend_port: 6443 backend_hosts: \u0026amp;lb_hosts - name: rack1-10 ip: 192.168.11.10 - name: rack1-11 ip: 192.168.11.11 - name: rack1-12 ip: 192.168.11.12 - name: rack1-13 ip: 192.168.11.13 - name: rack1-14 ip: 192.168.11.14 - name: rack1-15 ip: 192.168.11.15 - name: rack1-16 ip: 192.168.11.16 - name: rack1-17 ip: 192.168.11.17 - name: rack1-18 ip: 192.168.11.18 - name: rack1-19 ip: 192.168.11.19 - name: rack1-20 ip: 192.168.11.20 - name: ingress_http vips: - 192.168.100.250 min_backends: 1 healthcheck: \u0026#34;httpchk GET /healthz/ready HTTP/1.0\u0026#34; frontend_port: 80 haproxy_monitor_port: 8082 balance: roundrobin backend_opts: \u0026#34;check check-ssl port 1936 inter 1s fall 2 rise 3 verify none\u0026#34; backend_port: 80 backend_hosts: *lb_hosts - name: ingress_https vips: - 192.168.100.250 min_backends: 1 healthcheck: \u0026#34;httpchk GET /healthz/ready HTTP/1.0\u0026#34; frontend_port: 443 haproxy_monitor_port: 8083 balance: roundrobin backend_opts: \u0026#34;check check-ssl port 1936 inter 1s fall 2 rise 3 verify none\u0026#34; backend_port: 443 backend_hosts: *lb_hosts - name: mcs vips: - 192.168.100.240 min_backends: 1 frontend_port: 22623 haproxy_monitor_port: 8084 balance: roundrobin backend_opts: \u0026#34;check check-ssl inter 5s fall 2 rise 3 verify none\u0026#34; backend_port: 22623 backend_hosts: *lb_hosts In this case, we deploy OpenShift on OpenStack which doesn\u0026rsquo;t support static IPs. 
Therefore, we have to put all the available IPs from the subnet used for the machines in the HAproxy backends.\nInstall the role and the dependencies:\nansible-galaxy install emilienm.routed_lb,1.0.0 ansible-galaxy collection install ansible.posix ansible.utils Deploy the LBs:\nansible-playbook -i inventory.yaml -e \u0026#34;@vars.yaml\u0026#34; playbook.yaml Deploy OpenShift This feature will be available in the 4.13 release as TechPreview.\nHere is how you can simply enable it via the install-config.yaml:\napiVersion: v1 baseDomain: mydomain.test compute: - name: worker platform: openstack: type: m1.xlarge replicas: 3 controlPlane: name: master platform: openstack: type: m1.xlarge replicas: 3 metadata: name: mycluster networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 192.168.11.0/24 - cidr: 192.168.100.0/24 platform: openstack: cloud: mycloud machinesSubnet: 8586bf1a-cc3c-4d40-bdf6-c243decc603a apiVIPs: - 192.168.100.240 ingressVIPs: - 192.168.100.250 loadBalancer: type: UserManaged featureSet: TechPreviewNoUpgrade You can also watch this demo which shows the outcome.\nKnown limitations Deploying OpenShift with static IPs for the machines is only supported on the Baremetal platform for now but it\u0026rsquo;s in the roadmap to support it on VSphere and OpenStack as well. Changing the IP address for any OpenShift control plane VIP (API + Ingress) is currently not supported. So once the external LB and the OpenShift cluster is deployed, the VIPs can\u0026rsquo;t be changed. This is in our roadmap. Migrating an OpenShift cluster from the OpenShift managed LB to an external LB is currently not supported. It\u0026rsquo;s in our roadmap as well. Keep in mind that the feature will be TechPreview at first and once it has reached some maturity, we\u0026rsquo;ll promote it to GA.\nWrap-up Having the VIPs highly available, routed across multiple domains is only a first step toward distributing the OpenShift control plane. In the future, we\u0026rsquo;ll discuss how Failure Domains will be configured when deploying OpenShift on OpenStack. Note that this is already doable on Baremetal and VSphere.\nWith this effort, our customers can now decide which Load-Balancer to use, and if they have some expertise in their appliances, they can now use them for the OpenShift control plane.\nThe way we implemented it is flexible and will allow us to implement new load balancers in OpenShift if we want to in the future. The proof of concept done a few months ago with BGP in the control plane could be restored if there is growing interest.\nI hope you liked this article and stay tuned for the next ones!\n","permalink":"https://my1.fr/blog/openshift-external-load-balancer-control-plane-intro/","summary":"\u003cp\u003eThis is my first post of a series which will cover how you can distribute your OpenShift cluster across multiple datacenter domains and increase\navailability and performance of your control plane.\u003c/p\u003e","title":"Deploying OpenShift with an External Load-Balancer for your control plane"},{"content":"Read about our recent achievements in the Kubernetes and OpenStack space when running Fast-Datapath applications.\nAuthors: Emilien Macchi and Maysa Macedo.\nIn the past months, the Kubernetes Network Plumbing Working-Group added new features to the SR-IOV Network Operator for the OpenStack platform.\nIf you’re not familiar with this operator, it helps Kubernetes cluster users deploy their workloads to be connected to Fast Datapath (FDP) networking resources.
While the operator is named “SR-IOV”, we’ll see that it can also manage other types of connectivity.\nIn fact, the operator helps to provision and configure the SR-IOV Network Device Plugin for Kubernetes, which is in charge of discovering and advertising networking resources for FDP, mainly (but not exclusively) for SR-IOV Virtual Functions (VFs) and PCI Physical Functions (PFs) that are available on a Kubernetes host (usually a worker node).\nThe operator hides some complexity to achieve that and provides an easy user interface.\nOpenStack metadata support The operator originally required config-drives to be enabled for the machines connected to the FDP networking, so it could read the OpenStack metadata and Network data.\nWe removed that requirement by adding support for reading that information from the Nova metadata service if no config-drive was used.\nIf your Kubernetes hosts have access to the Nova metadata URL, then you have nothing to do! Otherwise, you’ll need to make sure to create the machines with config-drive enabled.\nEnable VFIO with NOIOMMU In virtual deployments of Kubernetes, the underlying virtualization platform (e.g. QEMU) may have support for a virtualized I/O memory management unit (IOMMU); however, OpenStack Nova doesn\u0026rsquo;t know how to handle it yet (it\u0026rsquo;s a work in progress). Therefore, the VFIO PCI driver needs to be loaded with an option named enable_unsafe_noiommu_mode. This option gives user-space I/O access to a device which is direct memory access capable without an IOMMU.\nThe operator is now loading the driver with the right arguments so the users don’t have to worry about it.\nDPDK The operator was initially designed to work on Baremetal and not necessarily on virtualized platforms. However, when a virtualized Kubernetes host is connected to some network hardware using DPDK, the device is exposed as a virtio interface (seen as a VF by the operator) but to take advantage of DPDK, the device has to use the VFIO-PCI driver. We added support for detecting vhost-user interfaces that are connected to the specified Neutron network used for DPDK. Vhost-user is a DPDK module that helps to run networking in user-space. You can find more information here.\nHere is an example of a SriovNetworkNodePolicy that can be used for Intel devices (you’ll need to change a few things if your device is Mellanox):\napiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: dpdk1 namespace: openshift-sriov-network-operator spec: deviceType: vfio-pci # change to netdevice if Mellanox nicSelector: netFilter: openstack/NetworkID:55a54d05-9ec1-4051-8adb-1b5a7be4f1b6 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \u0026#39;true\u0026#39; numVfs: 1 priority: 99 resourceName: dpdk1 isRdma: false # set to true if Mellanox You’ll need to configure the Network ID that matches your DPDK network in OpenStack.\nOVS Hardware Offload Open-vSwitch is CPU intensive, which affects system performance and prevents available bandwidth from being fully utilized.\nSince OVS 2.8, a feature called OVS Hardware Offload is available. It improves performance significantly by offloading tasks to the hardware running the NIC.
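For context, on a standalone Open-vSwitch host this feature is toggled with a single setting (shown only as background; in the setup described here, OpenStack and the operator take care of it for you):
ovs-vsctl set Open_vSwitch . other_config:hw-offload=true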
OpenStack has full compatibility with this feature and the SR-IOV operator can now take advantage of it.\nHere is an example of a SriovNetworkNodePolicy that can be used:\napiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: hwoffload1 namespace: openshift-sriov-network-operator spec: deviceType: netdevice nicSelector: netFilter: openstack/NetworkID:55a54d05-9ec1-4051-8adb-1b5a7be4f1b6 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \u0026#39;true\u0026#39; numVfs: 1 priority: 99 resourceName: hwoffload1 isRdma: true For now, we only support certain types of devices from the Mellanox vendor.\nAlso, you’ll need to configure the Network ID that matches your offloaded network for OpenStack.\nWrap-up The SR-IOV Network operator was extended to support essential use-cases for OpenStack, so the workloads can use FDP features. All the features are available in the upstream operator. If you’re an OpenShift user, they’ll be available to you in the 4.11 release and backported to 4.10 in the next zstream, so stay tuned!\n","permalink":"https://my1.fr/blog/sriov-network-operator-improvements-openstack/","summary":"\u003cp\u003eRead about our recent achievements in the Kubernetes and OpenStack space when running Fast-Datapath applications.\u003c/p\u003e","title":"SR-IOV network operator improvements for OpenStack"},{"content":"Read this post to learn how to update a container in TripleO on a live system.\nNote: this might sound like surgery, but I think this is the cleanest option to patch container images in TripleO.\nYour TripleO cloud is running and you want to update an rpm in one or multiple containers?\nTripleO provides a CLI to build new container images with the rpms that you want. This procedure is also documented here.\nIn this particular example, we will update the python3-networking-ovn rpm on octavia_api.\nYou need a host to build the image: the easiest place is the Undercloud or the Standalone node, where Buildah and tripleoclient are installed. We\u0026rsquo;ll build the image from that host.\nPut your rpms in a directory: e.g. in /tmp/rpms\nExport the OpenStack admin credentials: e.g.: export OS_CLOUD=standalone Login to the registry (when using OSP): podman login registry.redhat.io Build the new container image for octavia_api: openstack tripleo container image hotfix \\ --image registry.redhat.io/rhosp-rhel8/openstack-octavia-api:16.2 \\ --rpms-path /tmp/rpms \\ --tag 16.2-customfix You should see the new image by running buildah images.\nNow you\u0026rsquo;ll need to push the image to a registry (yours, or the TripleO registry), e.g.:\nbuildah push registry.redhat.io/rhosp-rhel8/openstack-octavia-api:16.2-16.2-customfix docker://quay.io/emilien/openstack-octavia-api:16.2-customfix Now, there are two methods for deploying that new image.\nRun the deploy command again, after updating the ContainerOctaviaApiImage parameter in the TripleO environment Run the following steps: You need to figure out what\u0026rsquo;s the TripleO step where Octavia is deployed (it\u0026rsquo;s step 4), by looking on the host in /var/lib/tripleo-config/container-startup-config and grepping for octavia_api.\nNow, go on the host where you want to use that new image (in the case of Standalone, it\u0026rsquo;s the same host where you built the image) and create an Ansible playbook with this content (e.g.
paunch.yaml):\n- hosts: localhost become: true vars: service_name: octavia_api tasks: - name: Stop and clean the old container shell: systemctl stop {{ service_name }} \u0026amp;\u0026amp; podman rm {{ service_name }} - name: Start containers for step 4 paunch: config: /var/lib/tripleo-config/container-startup-config/step_4/hashed-{{ service_name }}.json config_overrides: octavia_api: image: quay.io/emilien/openstack-octavia-api:16.2-customfix config_id: tripleo_step4 cleanup: false action: apply Change the content for your needs (different step, image, etc).\nRun Ansible with:\nansible-playbook paunch.yaml Your container is now running with your custom image (check with podman inspect).\nFor more details or help, check out the TripleO manuals or ask for help on IRC #tripleo (OFTC now).\n","permalink":"https://my1.fr/blog/patching-containers-in-tripleo/","summary":"\u003cp\u003eRead this post to learn how to update a container in TripleO on a live system.\u003c/p\u003e","title":"Patching containers in TripleO"},{"content":"Have a look at how we can move container images from the docker.io registry to quay.io.\nThanks to Skopeo, we can copy container images from one registry to another.\nIn this post, we\u0026rsquo;ll copy images from docker.io to quay.io, a container registry which has a lot of features that docker.io doesn\u0026rsquo;t provide. Two of them that I really like are:\nList and manage image vulnerabilities and other security information Manage the manifests of an image If you want more information, check out their documentation.\nI wrote a small script that one can use to automate the copy of images.\nBefore running the script:\nGet OAuth token from: https://quay.io/organization/[your-org]?tab=applications Change the token, namespace, containers and tag (if needed) If your docker.io registry requires authentication, you\u0026rsquo;ll need to run podman login docker.io (--src-creds option could also be used with Skopeo) You\u0026rsquo;ll need to authenticate against your quay.io registry with podman login quay.io (--dest-creds option could also be used with Skopeo) #!/bin/sh set -ex # get OAuth token from https://quay.io/organization/[your-org]?tab=applications token=\u0026#39;secrete\u0026#39; namespace=yourorg containers=\u0026#39;app1 app2\u0026#39; tag=latest retry() { local -r -i max_attempts=\u0026#34;$1\u0026#34;; shift local -r cmd=\u0026#34;$@\u0026#34; local -i attempt_num=1 until $cmd do if ((attempt_num==max_attempts)) then echo \u0026#34;Attempt $attempt_num failed and there are no more attempts left!\u0026#34; return 1 else echo \u0026#34;Attempt $attempt_num failed!
Trying again in $attempt_num seconds...\u0026#34; sleep $((attempt_num++)) fi done } for container in $containers; do # create empty public repo first otherwise skopeo will create the image as private curl -X POST https://quay.io/api/v1/repository \\ -d \u0026#39;{\u0026#34;namespace\u0026#34;:\u0026#34;\u0026#39;$namespace\u0026#39;\u0026#34;,\u0026#34;repository\u0026#34;:\u0026#34;\u0026#39;$container\u0026#39;\u0026#34;,\u0026#34;description\u0026#34;:\u0026#34;Container image \u0026#39;$container\u0026#39;\u0026#34;,\u0026#34;visibility\u0026#34;:\u0026#34;public\u0026#34;}\u0026#39; \\ -H \u0026#39;Authorization: Bearer \u0026#39;$token\u0026#39;\u0026#39; -H \u0026#34;Content-Type: application/json\u0026#34; # workaround if quay.io returns 500 error, likely due to an internal bug when using skopeo against docker.io copy=\u0026#34;skopeo copy docker://docker.io/$namespace/$container:$tag docker://quay.io/$namespace/$container:$tag\u0026#34; retry 5 $copy done As you can see, there are 2 unusual things in this script:\nThe curl creates an empty public repository; otherwise quay.io would create a private image by default when copying the image with Skopeo. As far as I know, there is no option in quay.io to change the default policy. Of course, remove it if you don\u0026rsquo;t want your image to be public by default. The retry mechanism is to work around the 500 error that you might get when it provisions a new repository, and it says it already exists (sounds specific to how the registry receives authentication from Skopeo vs Docker CLI). Enjoy Skopeo \u0026amp; quay.io!\n","permalink":"https://my1.fr/blog/moving-container-images-from-docker-io-to-quay-io/","summary":"\u003cp\u003eHave a look at how we can move container images from the docker.io registry to quay.io.\u003c/p\u003e","title":"Moving container images from docker.io to quay.io"},{"content":"I finally took some time to write some thoughts about what Leadership means to me.\nIn April 2017, I was very lucky to attend the Leadership training, organized by Zingtrain, paid for by the OpenStack Foundation and sponsored by my employer (Red Hat) who paid for the trip to Ann Arbor. Thank you to all of them! I also would like to thank Colette Alexander who made this happen.\nIn this blog post, I’ll explain what I learned and took away during this training but also throughout my career; I’ll also give some personal opinions, which are mine alone.\nFour {Levels, Stages} of Learning Being a leader starts with the willingness to learn. Let’s start with the four levels of learning:\nListening, Reflecting, Assimilating and acting, Teaching (Repeat)\nThe things mentioned during the training were very close to how I personally learned how to be an Open-Source contributor. It starts by listening around you. It was a little bit frustrating for me at the beginning not to be able to take action quickly when new ideas came up, but being patient is really worth it.\nThe time to reflect is important to assimilate what happens out there: “what people do” and “why”, “how do they work together” and “how my contribution would fit in there” are the biggest questions I ask myself whenever I jump into something new.\nThen it’s time for action. That time is really interesting because it’s very exciting at the beginning when contributing for the first time to a project, but can also be frustrating when getting the first feedback on this contribution.
It’s like an “emotional elevator” where you go from total happiness of finally feeling useful in this project to “I’m so frustrated, the way I proposed my idea was rejected, I just want to trash everything and run away”. This moment is very crucial to me, and usually I manage to get my frustration out by going for a run or doing some other activity that I like. Coming back to the keyboard, I take time to retrospect and see how I can do better the next time.\nNow that you’re part of the project and you know how to contribute, the work is not finished. Quite often I see projects where it’s hard to join the team because there is simply nobody willing to take the time to explain the basics. Note: on the other hand, it also comes with the capacity of saying “I don’t know” (yeah it happens, period.) and learners have to be ready to be mentored. Anyway, if you know something, teach it so more people will know it and your project will remain a cool place to work.\nLet’s talk about the four stages of the learning journey.\nIt starts by being Unconsciously Incompetent. You underestimate the skills required to contribute and you jump into this hole without knowing that it’s not going to be easy. This stage is usually short: you quickly become consciously incompetent and realize it won’t be so easy. Don’t give up and go learning, you’ll become consciously competent (when you start to be productive). And then comes the time to be unconsciously competent, when you master the skill and teach what you’ve learned. If you didn’t start to teach the skill to someone else, it’s never too late to do it. If you want to read my personal experience of being a Project Lead in the OpenStack community, I wrote a blog post that mentions these Learning steps.\nThe importance of a vision There are different versions out there of what a Vision is. A Vision is not a Mission Statement nor a Strategic Plan. My definition from what I’ve seen and learned over the last years would be: “a vision tells a successful story about what you want to be and where you want to go”.\nAn effective Vision is:\nwritten collectively (where all individuals part of the story can contribute) inspiring people who work with you but also externals strategically sound documented and communicated It starts with taking your pen and writing your first draft on paper yourself. I find it important to highlight “you” and “draft” because to me a good Vision takes time and iterations to be well written by yourself and not by any consultant.\nDuring the training, for the first time I wrote a vision of my life in 1 year and I found the exercise interesting. Also, when I came back I started this work with my team at Red Hat. So far it has been very helpful to document where the team wants to go.\nGood Leaders offer great service to staff A good Leader is not a boss, nor a Chief. A Leader is a human who does their best to serve a team working on a common purpose. Over the last 5 years, the people who inspired me were Leaders of some sort. They help others to be better, share their knowledge, accept failures and learn from them.\nTo me, a good Leader is someone able to drive a project to success without taking any decision, but who instead influences her/his peers by engaging collaboration to make the work happen.\nSomething we learned during the training: Power = 1 / ( Authority x Frequency of use )\nAlso, two things I’ve learned over the last years that were confirmed during the training:\nMultitasking doesn’t work.
Being a Leader doesn’t mean you have to be busier than others so you can do multiple things at the same time. First of all: everyone is busy (period again.); second: it’s impossible for most brains to successfully perform multiple tasks at the same time. High performing has nothing to do with skills. It’s a matter of how much your team shares a common understanding of how they can work together for a specific purpose (“It’s easy to do the right thing, but hard to do the bad things”). Working fourteen hours per day is not efficient and knowing everything doesn’t mean you’re a good Leader. Bottom-line change is leadership I’m convinced that there are multiple methods to be a great leader and bring new ideas. One of them might be the BLC (Bottom-line change). It appears to be useful when you (the leader) want to bring a new idea to your team.\nFirst of all, you need to make sure you’ll have some time to dedicate because people won’t always buy your idea so quickly. You need to prepare your idea: write some background, define a problem to solve, and if possible get some valid data to justify your proposal.\nOne of the key things is to get the right people involved in your idea. If your idea is a new feature, get all stakeholders involved (one person per group is enough), and rewrite the idea with them, so all of them agree on it. This step is very useful so when you present results, people will recognize their interests since you asked the right people. Engage the microcosm to work on the vision and prepare a plan for the change. Share the results with your team and help them to implement the change by giving support and accepting feedback.\nAs a conclusion, I would define Leadership as a skill that you can’t learn only in books (but some books are very useful, like Being a Better Leader). You need to practice, try, fail, retrospect and try again. Being a leader in some tasks is very rewarding and in my opinion sometimes reduces frustration. Last but not least, being a good leader and going the extra mile can be to create new leaders around you by sharing techniques, trusting and promoting people. Have fun!\n","permalink":"https://my1.fr/blog/what-leadership-means-to-me/","summary":"\u003cp\u003eI finally took some time to write some thoughts about what Leadership means to me.\u003c/p\u003e","title":"What Leadership Means To Me"},{"content":"This story explains why I stopped working as an anarchistic-multi-tasking-schedule-driven person and learnt how to become a good team leader.\nHow it started In March 2015, the Puppet OpenStack project had just moved under the Big Tent. What a success for our group!\nOne of the first steps was to elect a Project Team Lead. Our group was pretty small (~10 active contributors) so we thought that the PTL would be just a facilitator for the group, and the liaison with other projects that interact with us. I mean, easy, right?\nAt that time, I was clearly an unconsciously incompetent PTL. I thought I knew what I was doing to drive the project to success.\nBut the situation evolved. I started to deal with things that I didn\u0026rsquo;t expect to deal with, like making sure our team works together in a way that is efficient and consistent. I also realized nobody knew what a PTL was really supposed to do (at least in our group), so I took care of more tasks, like release management, organizing Summit design sessions, promoting core reviewers, and welcoming newcomers. That was the time when I realized I had become a consciously incompetent PTL.
I was doing things that nobody had taught me before.\nIn fact, there is no book telling you how to lead an OpenStack project, so I decided to jump into this black hole, hoping I would make mistakes I could learn from.\nSet your own expectations I made the mistake of engaging myself in a role where expectations were not clarified with the team. The PTL guide is not enough to clarify what your team will expect from you. This is something you have to figure out with the folks you\u0026rsquo;re working with. You would be surprised by the diversity of expectations that project contributors have for their PTL. Talk with your team and ask them what they want you to be and how they see you as a team lead. I don\u0026rsquo;t think there is a single rule that works for all projects, because of the different cultures in the OpenStack community.\nEmbrace changes … and accept failures. There is no project in OpenStack that didn\u0026rsquo;t have outstanding issues (technical and human). The first step as a PTL is to acknowledge the problem and share it with your team. Most conflicts resolve themselves when everyone agrees that yes, there is a problem. It can be a code design issue or any other technical disagreement, but also human complaints, like the difficulty of starting to contribute or the lack of reward for very active contributors who aren\u0026rsquo;t core yet. Once a problem is resolved, discuss with your team how to avoid the same situation in the future. Hold a retrospective if needed, but talk and document the outcome.\nI continuously encourage welcoming all kinds of changes in TripleO so we can adopt new technologies that will make our project better.\nKeep in mind it has a cost. Some people will disagree, but that\u0026rsquo;s fine: you might have to pick a rate of acceptance to consider that your team is ready to make a change.\nDelegate We are humans and have limits. We can\u0026rsquo;t be everywhere and do everything. We have to accept that PTLs are not supposed to be online 24/7. They don\u0026rsquo;t always have the best ideas and don\u0026rsquo;t always make the right decisions. This is fine. Your project will survive.\nI learnt that when I started to be PTL of TripleO in 2016. The TripleO team had become so big that I didn\u0026rsquo;t realize how many interruptions I would have every day. So I decided to learn how to delegate. We worked together and created TripleO Squads, where each squad focuses on a specific area of TripleO. Each squad would be autonomous enough to propose their own core reviewers or hold their own meetings when needed. I wanted small teams working together, failing fast and making quick iterations, so we could scale the project, accept and share the workload, and increase the trust inside the TripleO team.\nThis is where I started to be a Consciously Competent PTL.\nWhere am I now I have reached a point where I think that projects wouldn\u0026rsquo;t need a PTL to run fine if they really wanted to. Instead, I have started to believe in some essential things that would actually help to get rid of this role:\nAs a team, define the vision of the project and document it. It will really help to know where we want to go and clear all expectations about the project.\nEstablish trust in each individual by default and welcome newcomers.\nEncourage collective and distributed leadership.\nTry, Do, Fail, Learn, Teach… and start again. Don\u0026rsquo;t go stale.\nThis long journey helped me to learn many things in both technical and human areas. 
It has been awesome to work with such groups so far. I would like to spend more time on technical work (aka coding), but also on teaching and mentoring new contributors in OpenStack. Therefore, I won\u0026rsquo;t be PTL during the next cycle, and my hope is to see new leaders in TripleO who will come up with fresh ideas and help us keep TripleO rocking.\nThanks for reading this far, and thanks for your trust.\n","permalink":"https://my1.fr/blog/my-journey-as-an-openstack-ptl/","summary":"\u003cp\u003eThis story explains why I stopped working as an anarchistic, multi-tasking, schedule-driven person and learnt how to become a good team leader.\u003c/p\u003e","title":"My Journey As An OpenStack PTL"},{"content":" I don\u0026rsquo;t post much about my personal life, but I\u0026rsquo;m a dual citizen (French and Canadian) living in Quebec City. Outside work, I love the outdoors, spending time with family, flying aircraft (private pilot) and a bunch of other things. My professional path is frequently updated on my LinkedIn profile. I sometimes post on Twitter. If you want to reach out, please send me an email. ","permalink":"https://my1.fr/about-me/","summary":"\u003cul\u003e\n\u003cli\u003eI don\u0026rsquo;t post much about my personal life, but I\u0026rsquo;m a dual citizen (French and Canadian) living in Quebec City. Outside work, I love the outdoors, spending time with family, flying aircraft (private pilot) and a bunch of other things.\u003c/li\u003e\n\u003cli\u003eMy professional path is frequently updated on my \u003ca href=\"https://www.linkedin.com/in/emilienmacchi\"\u003eLinkedIn profile\u003c/a\u003e.\u003c/li\u003e\n\u003cli\u003eI sometimes post on \u003ca href=\"https://twitter.com/EmilienMacchi\"\u003eTwitter\u003c/a\u003e.\u003c/li\u003e\n\u003cli\u003eIf you want to reach out, please send me an \u003ca href=\"mailto:emacchi@pm.me\"\u003eemail\u003c/a\u003e.\u003c/li\u003e\n\u003c/ul\u003e","title":"About me"}]
\ No newline at end of file
diff --git a/index.xml b/index.xml
index 1a99923..47c8a03 100644
--- a/index.xml
+++ b/index.xml
@@ -9,7 +9,7 @@
https://my1.fr/%3Clink%20or%20path%20of%20image%20for%20opengraph,%20twitter-cards%3E
https://my1.fr/%3Clink%20or%20path%20of%20image%20for%20opengraph,%20twitter-cards%3E
- Hugo -- 0.136.4
+ Hugo -- 0.136.5en-usFri, 18 Oct 2024 00:00:00 +0000
diff --git a/page/2/index.html b/page/2/index.html
index ed611bf..a2900ed 100644
--- a/page/2/index.html
+++ b/page/2/index.html
@@ -1,4 +1,4 @@
-
/home/emilien
+/home/emilien
Patching containers in TripleO
Read this post to learn more how to update a container in TripleO on a live system.
...
Moving container images from docker.io to quay.io
Have a look at how we can move container images from the docker.io registry to quay.io.
...
What Leadership Means To Me
I finally took some time to write some thoughts about what Leadership means to me.
diff --git a/posts/index.html b/posts/index.html
index fbe6c24..7d957df 100644
--- a/posts/index.html
+++ b/posts/index.html
@@ -1,7 +1,7 @@
How AI-Assisted is Transforming Software Development
AI-powered coding assistants are revolutionizing the way developers write software. By providing contextual code suggestions and reducing repetitive tasks, these tools can significantly increase productivity. In this post, we’ll explore how AI-assisted coding works, its benefits, potential challenges, and tips for using it effectively in your development workflow. I’ll also share examples with two solutions and compare them so it’ll help you to make decisions on what tools to use.
-...
Developing cluster-api-provider-openstack with Tilt
This is a quick tutorial (mainly a brain dump) on how I’m using Tilt to quickly iterate over my cluster-api-provider-openstack work.
+...
Developing cluster-api-provider-openstack with Tilt
This is a quick tutorial (mainly a brain dump) on how I’m using Tilt to quickly iterate over my cluster-api-provider-openstack work.
...
Deploying OpenShift on OpenStack with an External Load-Balancer for your control plane in multiple Failure Domains
This is my second post in a series covering how you can distribute your OpenShift cluster across multiple datacenter domains and increase the availability and performance of your control plane.
...
Deploying OpenShift with an External Load-Balancer for your control plane
This is my first post in a series covering how you can distribute your OpenShift cluster across multiple datacenter domains and increase the availability and performance of your control plane.
...
SR-IOV network operator improvements for OpenStack
Stay tuned for our recent achievements in the Kubernetes and OpenStack space when running Fast-Datapath applications.
diff --git a/posts/index.xml b/posts/index.xml
index 1562317..e2eeb32 100644
--- a/posts/index.xml
+++ b/posts/index.xml
@@ -9,7 +9,7 @@
https://my1.fr/%3Clink%20or%20path%20of%20image%20for%20opengraph,%20twitter-cards%3E
https://my1.fr/%3Clink%20or%20path%20of%20image%20for%20opengraph,%20twitter-cards%3E
- Hugo -- 0.136.4
+ Hugo -- 0.136.5en-usFri, 18 Oct 2024 00:00:00 +0000
diff --git a/tags/index.xml b/tags/index.xml
index 9451564..a19b537 100644
--- a/tags/index.xml
+++ b/tags/index.xml
@@ -9,7 +9,7 @@
https://my1.fr/%3Clink%20or%20path%20of%20image%20for%20opengraph,%20twitter-cards%3E
https://my1.fr/%3Clink%20or%20path%20of%20image%20for%20opengraph,%20twitter-cards%3E
- Hugo -- 0.136.4
+ Hugo -- 0.136.5en-usMon, 29 May 2017 21:28:49 +0000
diff --git a/tags/leadership/index.xml b/tags/leadership/index.xml
index 2bef768..94fddbc 100644
--- a/tags/leadership/index.xml
+++ b/tags/leadership/index.xml
@@ -9,7 +9,7 @@
https://my1.fr/%3Clink%20or%20path%20of%20image%20for%20opengraph,%20twitter-cards%3E
https://my1.fr/%3Clink%20or%20path%20of%20image%20for%20opengraph,%20twitter-cards%3E
- Hugo -- 0.136.4
+ Hugo -- 0.136.5en-usMon, 29 May 2017 21:28:49 +0000
diff --git a/tags/openstack/index.xml b/tags/openstack/index.xml
index 42a404d..a2acb64 100644
--- a/tags/openstack/index.xml
+++ b/tags/openstack/index.xml
@@ -9,7 +9,7 @@
https://my1.fr/%3Clink%20or%20path%20of%20image%20for%20opengraph,%20twitter-cards%3E
https://my1.fr/%3Clink%20or%20path%20of%20image%20for%20opengraph,%20twitter-cards%3E
- Hugo -- 0.136.4
+ Hugo -- 0.136.5en-usFri, 14 Apr 2017 00:14:49 +0000
diff --git a/tags/ptl/index.xml b/tags/ptl/index.xml
index e09ca72..ae3f655 100644
--- a/tags/ptl/index.xml
+++ b/tags/ptl/index.xml
@@ -9,7 +9,7 @@
https://my1.fr/%3Clink%20or%20path%20of%20image%20for%20opengraph,%20twitter-cards%3E
https://my1.fr/%3Clink%20or%20path%20of%20image%20for%20opengraph,%20twitter-cards%3E
- Hugo -- 0.136.4
+ Hugo -- 0.136.5en-usFri, 14 Apr 2017 00:14:49 +0000