| title | last revised |
|---|---|
| Deployment Project Template | 2022/10/16 |
This is an active learning/demonstration project for several different tools in the DevOps/GitOps space. Feel free to use it as inspiration for your own work, but this is not a turn-key solution. My ultimate goal is a unified environment: run tools locally to develop against Kubernetes, but also be able to quickly deploy a "production"-ready environment and application on-prem or in the cloud.
For local development, I built this solution with Rancher Desktop and Windows in mind.
I had been experimenting with devcontainers as a way to create consistent, ephemeral development environments, but that solution has headaches. First, it's still reliant on Docker. Second, it carries a lot of overhead: even without adding shell scripts at the end, it does a lot of rebuilding for something that should just be based on an image. And on a regular laptop it uses a lot of resources; forget running Zoom/WebEx while continuing development work.
I don't want to oversell it: WSL isn't a slam-dunk replacement. Think of it more as an incremental change. It feels a little more efficient with resources, and it's more permanent than a container, so there's less re-building.
It's the re-building story that has me most excited. The problem I've been trying to solve lately is making my local and other environments more consistent; that's why tools like Rancher Desktop are so appealing to me. While Docker is ephemeral and idempotent, Docker Compose doesn't replicate Kubernetes, and devcontainers don't help with my goal either: building out all the parameters in a devcontainer.json does nothing for the project itself, because it will never look like dev, stage, prod, or whatever. By using WSL, though, I can leverage Ansible, a solution I love that gets me closer to parity between my environments.
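To make the parity point concrete, here's a minimal sketch of the shape this takes; the playbook, inventory, and role names are illustrative, not the actual layout of this repo:

```yaml
# site.yml (illustrative) - the same roles run against any inventory, so local,
# stage, and prod all converge on the same configuration.
- hosts: all
  become: true
  roles:
    - common_tools       # hypothetical role: shell, languages, CLI tools
    - kubernetes_addons  # hypothetical role: helm charts and cluster add-ons
```

Pointing that playbook at a local inventory (with `ansible_connection: local`) versus a remote one is then just an `-i` flag, which is the kind of parity a devcontainer.json never gave me.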
So the steps you'll need to get going look something like the list below. If you're on a different OS you can probably get through this okay, but the approach will need some tweaking. I like WSL2 as a solution because it's isolated from the main OS: I can recreate it if something goes south. Linux and Mac users may want to stick with the devcontainer idea or something else that provides a degree of isolation from the OS, such as QEMU, LXC, or KVM. I've left the devcontainer logic in place, but I'm not improving/maintaining it at present.
I started building out example Terraform and ansible-pull projects in this repo. I'm not sure where I'm going to go with those, but I was thinking through which tools I'd use for this solution. At first I was leaning toward Terraform, and ultimately I may change my mind, but I think it's far more interesting to use Crossplane. The downside to Crossplane is that in a local setup it requires a bit more effort to stand up an environment, so I decided Ansible was a reasonable way to handle that task.
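For reference, the extra effort with Crossplane is mostly bootstrap: it has to be running in a cluster before it can provision anything. A minimal sketch of that first step, once the Crossplane Helm chart is installed (the provider and version below are illustrative, nothing in this repo pins them):

```yaml
# Register a Crossplane provider so the cluster can manage external infrastructure.
apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: provider-aws
spec:
  # Illustrative package reference; swap in whichever provider/version you actually need.
  package: xpkg.upbound.io/crossplane-contrib/provider-aws:v0.39.0
```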
NOTE: There's currently a bug in WSL2 that affects devcontainers, and Terraform is impacted. Before running terraform commands, edit /etc/resolv.conf and set the nameserver to something like 1.1.1.1 or 8.8.8.8. This is just a workaround, so I'm not going to update the files to fix it automatically. (See the GitHub issue.)
Currently the dev environment (Ubuntu) installs several things via the Ansible script:
- shell, language, and CLI tools (gh, ansible, k9s, Go, Rust, Conda, argocd-cli, flux-cli, Oh My Zsh, k3d, cmctl, minio-cli, kustomize)
- Visual Studio Code extensions
- Helm and Helm charts via the local Ansible script (Traefik, Cert-Manager, ArgoCD, Grafana/Prometheus, Airflow)
The specific features selected can be tweaked via ansible/vars/localhost.yml. My plan is to add back some additional features that were included in the devcontainer solution (gcloud-cli, azure-cli, aws-cli, terraform). I still need to add additional configuration to Oh My Zsh. If you have immediate need of these, links to some well-done scripts can be found in the devcontainer section. The plan is to recreate the same ideas as Ansible roles to stay consistent.
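I haven't documented every variable, but ansible/vars/localhost.yml is essentially a set of feature toggles. The keys and values below are illustrative only; the file in the repo is the source of truth:

```yaml
# Illustrative shape of ansible/vars/localhost.yml - check the real file for actual keys.
install_k9s: true
install_argocd_cli: true
install_flux_cli: false

helm_charts:
  - name: traefik
    repo: https://traefik.github.io/charts
    chart: traefik
    namespace: traefik
    enabled: true
  - name: cert-manager
    repo: https://charts.jetstack.io
    chart: cert-manager
    namespace: cert-manager
    enabled: true
```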
- To access the Traefik dashboard (one way to expose it is sketched after this list)
- To access the ArgoCD UI:
  - Username is admin
  - Password is the output of `kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo`
- To access the Airflow UI:
  - Username is admin
  - Password is admin
- To access the Grafana UI:
  - Username is admin
  - Password is the output of `kubectl -n monitoring get secret grafana-admin-credentials -o jsonpath="{.data.password}" | base64 -d; echo`
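For the Traefik dashboard, one way to expose it is an IngressRoute along these lines; the namespace, entry point, and host rule are assumptions that have to match how the chart was deployed:

```yaml
# Illustrative IngressRoute exposing Traefik's built-in dashboard service.
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: traefik-dashboard
  namespace: traefik                     # assumed install namespace
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`traefik.localhost`)   # assumed host name
      kind: Rule
      services:
        - name: api@internal             # Traefik's internal dashboard/API service
          kind: TraefikService
```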
1. Enable virtualization in the BIOS
2. Install Rancher Desktop. I believe this prompts you to enable WSL in Windows, but if not...
3. Install Windows Subsystem for Linux v2
4. Optional: Install Windows Terminal
5. Install Visual Studio Code and the Remote Development extension pack
6. Deploy a Linux image in WSL. This is a link to the Ubuntu 22.04 image; feel free to substitute if you so choose. There are links out there that skip the Microsoft Store if needed.
7. In PowerShell, type `wsl --list`. If needed, type `wsl -s Ubuntu-22.04` to update the default distribution.
8. Type `wsl` to enter the distribution. From here on out we're in bash. Add the Ansible PPA and the GitHub CLI repository, then install the packages:

    ```bash
    sudo apt-add-repository ppa:ansible/ansible
    curl -fsSL https://cli.github.com/packages/githubcli-archive-keyring.gpg | sudo dd of=/usr/share/keyrings/githubcli-archive-keyring.gpg
    sudo chmod go+r /usr/share/keyrings/githubcli-archive-keyring.gpg
    echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/githubcli-archive-keyring.gpg] https://cli.github.com/packages stable main" | sudo tee /etc/apt/sources.list.d/github-cli.list > /dev/null
    sudo apt-get update && sudo apt-get -y upgrade
    sudo apt-get -y install ansible gh python3-pip
    ```

9. Run `gh auth login` and follow the prompts. This is optional, but if nothing else the gh CLI configures git for you so you don't have to worry about it. Otherwise it can be annoying.
10. Run `mkdir -p ~/code && cd ~/code`. I usually create a folder to store my projects.
11. Run `gh repo clone wfordwfu/deployment-template && code deployment-template`. This should clone the repo, download the VS Code server, and open everything up in VS Code.
To deploy locally, run `ansible-playbook ansible/local.yml -K`. The roles/applications that are installed are maintained in the ansible/vars/localhost.yml file. I have not implemented a delete feature, so if you deploy a Helm chart you don't wish to keep, just delete the corresponding namespace. This pattern mostly follows ansible-pull solutions I've implemented in the past.
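The task shape behind that pattern is straightforward; here's a stripped-down sketch of what a role might run (the variable layout is illustrative and mirrors the hypothetical vars example above, not the repo's actual roles):

```yaml
# Illustrative task: install whichever Helm charts the vars file marks as enabled.
- name: Deploy enabled Helm charts
  kubernetes.core.helm:
    name: "{{ item.name }}"
    chart_ref: "{{ item.chart }}"
    chart_repo_url: "{{ item.repo }}"
    release_namespace: "{{ item.namespace }}"
    create_namespace: true
  loop: "{{ helm_charts | selectattr('enabled') | list }}"
```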
- To reset the cluster, go into Rancher Desktop > Troubleshooting and click the Reset Kubernetes button
- To reset WSL:
  - Go into PowerShell and type `wsl --unregister Ubuntu-22.04`
  - Pick up with step 6 under Configure Rancher Desktop and WSL above

Other useful commands:

```bash
helm show values traefik/traefik > temp/traefik-values.yaml
kubectl config get-contexts -o name
kubectl config use-context rancher-desktop
```
The local environment uses a self-signed cert. Environments that are publicly accessible should, at a minimum, use Let's Encrypt.
Example of connecting Traefik to Let's Encrypt via cert-manager; this won't work in dev (the cluster needs to be reachable from the public internet):
```yaml
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # You must replace this email address with your own.
    # Let's Encrypt will use this to contact you about expiring
    # certificates, and issues related to your account.
    email: user@example.com
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    # server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      # Secret resource that will be used to store the account's private key.
      name: letsencrypt-stage
      # name: letsencrypt-prod
    solvers:
      - http01: {}
        # ingress:
        #   class: traefik
```
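Once an issuer like that exists, a Certificate resource requests the actual certificate. The name, namespace, and host below are placeholders; the host has to resolve publicly for the http01 solver to succeed:

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: example-tls
  namespace: default
spec:
  secretName: example-tls        # TLS secret cert-manager will create and renew
  issuerRef:
    name: letsencrypt-staging    # matches the ClusterIssuer above
    kind: ClusterIssuer
  dnsNames:
    - app.example.com            # placeholder host name
```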
To merge multiple kubeconfig files into a single view:

```bash
export KUBECONFIG=~/.kube/config:~/someotherconfig
kubectl config view --flatten
```
Dynamic NFS provisioning (expect to replace with Longhorn):

- https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner
- https://phoenixnap.com/kb/nfs-docker-volumes
- https://github.com/justmeandopensource/kubernetes
In traefik/values.yaml:

```yaml
persistence:
  enabled: true
```
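If the NFS provisioner from the links above ends up backing storage, the same block can also name its storage class; the class name and size here are assumptions based on that chart's defaults rather than anything pinned in this repo:

```yaml
# Illustrative expansion of the persistence block in traefik/values.yaml
persistence:
  enabled: true
  size: 128Mi
  storageClass: nfs-client   # default class created by nfs-subdir-external-provisioner
  path: /data                # mount path inside the traefik pod
```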
- Python
- Go
- Rust
- Ansible
- Terraform
- Rustup
- devcontainers
- GitHub Actions Azure Login
- K9s
- kubectx
- Miniconda
- ZSH