Kubernetes for local dev #118
I'm probably going to change my approach significantly: I've just learned about https://skaffold.dev/, which seems perfect for our needs. But I'm currently having a problem building the …
Using Kubernetes from https://k3s.io/ for local development, we don't have a `weekly-snapshots-retain-4` storage class, so we need `local-path`. Also, `local-path` doesn't allow ReadWriteMany, but we don't need that anyway, since all of our containers run in the same pod and therefore on the same node. ReadWriteMany is only required for volumes mounted from different nodes.
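A minimal sketch of what such a claim looks like; the claim name and size are placeholders, not values from this PR:

```yaml
# Sketch: a PVC for local dev on k3s. local-path only supports
# ReadWriteOnce, which is fine because all our containers share one
# pod (and therefore one node).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: repo-data          # hypothetical claim name
spec:
  storageClassName: local-path
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi         # placeholder size
```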
I'll put the namespace back everywhere eventually. But for local dev, we don't (yet) need a namespace.
The `data:` key tells Kubernetes the values are base64-encoded; for values given in plain text, `stringData:` is the correct key.
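For illustration, both forms side by side in one Secret (the key names and values here are placeholders, not the actual dev secrets):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: dev-secrets        # hypothetical name
type: Opaque
data:
  # must be base64-encoded, e.g. `echo -n 'hunter2' | base64`
  DB_PASSWORD: aHVudGVyMg==
stringData:
  # plain text; the API server base64-encodes it into `data` on write
  POSTGRES_PASSWORD: hunter2
```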
These are the same values found in the .env file checked into the repo, so this commit is not exposing any actual secrets, just the ones we use during local development.
This creates a container that will hold the Git repo and allow us to push to it. Next we'll set up the frontend and backend containers to watch that repo.
The "localdev" task does everything needed to get a Git container up and running, but does not (yet) create a .ssh/config entry for you.
This will enable us to run `skaffold dev` and have a live-reloading version of our app running in Kubernetes.
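A minimal skaffold.yaml along these lines is what enables that workflow; the image name, build context, and manifest paths are assumptions, not the actual config:

```yaml
apiVersion: skaffold/v3
kind: Config
build:
  artifacts:
    - image: lexbox-api          # hypothetical image name
      context: backend           # hypothetical build context
manifests:
  rawYaml:
    - deployment/*.yaml          # assumed manifest location
deploy:
  kubectl: {}
```

With this in place, `skaffold dev` watches the sources, rebuilds the image on change, and redeploys the manifests.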
The hasura container was being OOMKilled, so we'll try bumping its memory allocation and its hard limit.
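The knob being turned is the container's resources stanza; the numbers below are illustrative, not the values from this commit:

```yaml
containers:
  - name: hasura
    resources:
      requests:
        memory: 200Mi      # scheduling allocation (illustrative)
      limits:
        memory: 400Mi      # hard limit; exceeding it gets the container OOMKilled
```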
Long-term, we need to specify exactly which clients (e.g., the frontend box) are allowed to connect, but AllowAnyOrigin is good enough during dev work.
All k8s deployment files now use the "languagedepot" namespace. To use that namespace by default and not have to specify `-n languagedepot` all the time, add `namespace: languagedepot` to your ~/.kube/config file in the appropriate context stanza, just after the `cluster:` and `user:` lines, as shown below.
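Concretely, the stanza ends up looking like this (the context, cluster, and user names will vary with your setup):

```yaml
contexts:
  - name: docker-desktop         # your context name may differ
    context:
      cluster: docker-desktop
      user: docker-desktop
      namespace: languagedepot   # default namespace for this context
```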
Now the login process redirects to a server page, forcing a cookie reload so that the rest of the app correctly gets the user from the load() function in src/routes. This means the user store should no longer be stale.
The `crypto.subtle` API is browser-only; if we're doing server-side rendering, we need the Node.js equivalent.
Docker desktop's embedded Kubernetes doesn't have an ingress controller, so part of local k8s dev must include deploying one. It also doesn't have local-path storageClass, so we have to use hostpath.
Recent Docker images of Node 20 have an issue where node-gyp's postinstall step can fail because it tries to replace a binary while it's still running. Moving to Node 18 Docker images for now to avoid the problem.
We can't pre-hash the password if the page was server-side rendered, so as long as we're on Node 18 we need to turn off password pre-hashing.
If there's a `tls:` config block in the ingress deployment YAML, it sets HSTS for dev.languagedepot.org even though TLS is only configured for staging.languagedepot.org, and that causes issues. In the future we'll use kustomize to configure this.
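The problematic shape is roughly the following; the hosts match this commit message, but the resource and secret names are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: languagedepot            # hypothetical name
spec:
  tls:
    - hosts:
        - staging.languagedepot.org   # TLS is configured only for staging...
      secretName: staging-tls         # hypothetical secret name
  rules:
    - host: dev.languagedepot.org     # ...yet responses for dev get HSTS too
    - host: staging.languagedepot.org
```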
Port issues here
Currently not working: Hasura is saying "inconsistent metadata" and claiming that the tables Projects and Users don't exist.
Since Hasura needs the backend to be up and healthy before it will accept its metadata, we change `wait-db` to `wait-lexbox` in the Hasura container (and the lexbox deployment no longer waits for Hasura, but instead waits for the database to be up and running).
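The reordered startup dependency presumably looks something like this as an init container; the image, service name, and port below are assumptions, not the actual manifest:

```yaml
# In the Hasura pod: wait for the lexbox backend instead of the database.
initContainers:
  - name: wait-lexbox        # replaces the old wait-db init container
    image: busybox           # assumed image; anything with nc works
    # service name and port are placeholders
    command: ["sh", "-c", "until nc -z lexbox 80; do sleep 2; done"]
```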
This file will end up containing all PersistentVolumeClaims, so that it can be deployed separately from skaffold.
This way skaffold won't consider that it deployed the persistent volume claims in pvc.yaml and won't clean them up after skaffold dev.
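Put differently, pvc.yaml is simply left out of skaffold's manifest list and applied once by hand; the file names below are guesses based on these commit messages:

```yaml
# skaffold.yaml (excerpt): pvc.yaml is deliberately absent, so
# `skaffold dev` never labels the PVCs as its own and never prunes
# them during cleanup. Apply it separately with:
#   kubectl apply -f deployment/pvc.yaml
manifests:
  rawYaml:
    - deployment/lexbox-deployment.yaml   # hypothetical entries
    - deployment/ingress-deployment.yaml
```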
Alright, the first time I ran it I got a weird error because I was trying to use the staging context but it couldn't connect. I changed to the Docker Desktop k8s context and then I get the following error:
Looks like it's trying to get something from Docker but it can't. To prevent a mix-up of k8s contexts, we should probably configure skaffold to only work on the docker-desktop context: https://skaffold.dev/docs/environment/kube-context/
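Per the linked docs, pinning the context is a one-line addition to skaffold.yaml:

```yaml
# skaffold.yaml (excerpt)
deploy:
  kubeContext: docker-desktop   # always deploy to this context
```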
…ssing ports for doing dotnet locally.
Looks good Robin, I left a few comments I'd like resolved before we merge this in. I suspect some things may have been left over from earlier work and are unnecessary now. Also, I think I found a solution for the base/secrets issue, but I'm not sure; you let me know.
  FROM build AS publish
- RUN dotnet publish /p:InformationalVersion=$APP_VERSION "/LexBoxApi/LexBoxApi.csproj" -c Release -o /app/publish
+ RUN --mount=type=cache,target=/root/.nuget/packages dotnet publish /p:InformationalVersion=$APP_VERSION "/LexBoxApi/LexBoxApi.csproj" -c Release -o /app/publish
Since this Dockerfile isn't used for local dev, does this make any difference?
# Conflicts:
#	deployment/base/lexbox-deployment.yaml
I'm going to do my best to pick this up and carry it across the line.
…sing env variable for hgweb
…via port forwarding in dev
Adding lines to /etc/hosts isn't actually needed on Linux, but is necessary on Windows, so we'll mention that. And we'll rearrange the README to put the extra setup steps immediately after the setup.
This reverts commit 7adca7c.
This was an early attempt at getting k8s working for local dev, obsoleted by using skaffold.
test this:
On a Linux box, download https://k3s.io/ and run it in server mode, then:
sudo k3s kubectl create configmap otel-config --from-file=otel/collector-config.yaml
for f in deployment/*.yaml; do sudo k3s kubectl apply -f "$f"; done
On Windows, there are many ways to run Kubernetes, but the simplest way is probably to use the Kubernetes installation built into Docker Desktop. There's an "Enable Kubernetes" checkbox in the Docker settings that will do all the setup for you, though it doesn't install the kubectl binary. In order to run kubectl commands you'll need to install kubectl yourself.
UPDATE: Now partway through writing a `task localdev` task. Parts of it are working; you should be able to do the following:
- Run `task localdev`
- In another terminal, `ssh k8sdev`
- Inside the ssh session, `ls /app` and you should see the Git repo

Not yet working: pushing to that Git repo and having it auto-update.