Commit
feat(ci): WIP on local testing cluster
stevenj committed May 29, 2024
1 parent c8143da commit a4ab977
Showing 12 changed files with 66 additions and 40 deletions.
1 change: 1 addition & 0 deletions docs/Earthfile
Original file line number Diff line number Diff line change
@@ -36,6 +36,7 @@ local:
# pushing to a local registry even if `--push` is passed.
# this should be "registry.cluster.test:5000" but Earthly can not reliably locate it in a hosts file.
ARG local_registry="192.168.58.10:5000"
# ARG local_registry="registry.cluster.test"
# Build a self contained service to show built docs locally.
DO docs-ci+PACKAGE

1 change: 1 addition & 0 deletions utilities/local-cluster/Earthfile
@@ -12,6 +12,7 @@ start-cluster:
RUN python scripts/check-cluster-dns.py ./shared/extra.hosts
# Everything checks out so far, try and start the cluster.
IF [ "$NATIVEPLATFORM" = "darwin/arm64" ]
# Install necessary Vagrant plugins for ARM Mac
RUN VAGRANT_DISABLE_STRICT_DEPENDENCY_ENFORCEMENT=1 vagrant plugin install vagrant-qemu
END
RUN vagrant up
24 changes: 24 additions & 0 deletions utilities/local-cluster/Readme.md
@@ -86,3 +86,27 @@ To display the registry service logs:
```sh
docker logs registry
```

## Local UI to access ScyllaDB

A tested walkthrough for connecting with only open-source tools, via DBeaver: <https://javaresolutions.blogspot.com/2018/04/opensource-db-ui-tool-for-cassandra-db.html>

1. Download DBeaver (Community Edition)
2. Download the Cassandra JDBC jar files: <http://www.dbschema.com/cassandra-jdbc-driver.html> (the "Downloading and Testing the Driver Binaries" section has links to the binaries and source)
3. Extract the Cassandra JDBC zip
4. Run DBeaver
5. Go to Database > Driver Manager
6. Click New
7. Fill in the details as follows:
* Driver Name: Cassandra (or whatever you want it to say)
* Driver Type: Generic
* Class Name: com.dbschema.CassandraJdbcDriver
* URL Template: jdbc:cassandra://{host}[:{port}][/{database}]
* Default Port: 9042
* Embedded: no
* Category: (leave blank)
* Description: Cassandra (or whatever you want it to say)
8. Click Add File and add all of the jars from the Cassandra JDBC zip
9. Click Find Class to make sure the Class Name resolves correctly
10. Click OK
11. Create a New Connection, selecting the database driver you just added
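
As a concrete illustration of step 7's URL template (an assumption, not part of the commit: it presumes ScyllaDB is reachable on the control-plane IP used elsewhere in this cluster, on the default CQL port 9042), the filled-in connection URL would look like:

```
jdbc:cassandra://192.168.58.10:9042
```

Once `shared/extra.hosts` is merged into your local hosts file, the `scylladb.cluster.test` hostname should work in place of the raw IP.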
37 changes: 9 additions & 28 deletions utilities/local-cluster/Vagrantfile
@@ -3,8 +3,8 @@
control_ip = "192.168.58.10"

# Determine the maximum number of agents, and set their IP addresses
agents = { "agent1" => "192.168.58.11",
"agent2" => "192.168.58.12" }
agents = { "agent86" => "192.168.58.86",
"agent99" => "192.168.58.99" }

# This is sized so that a machine with 16 threads and 16GB will allocate at most
# ~3/4 of its resources to the cluster.
@@ -93,8 +93,7 @@ cert_manager_install_script = <<-SHELL
longhorn_install_script = <<-SHELL
# See: https://docs.k3s.io/storage
# https://longhorn.io/docs/1.6.2/deploy/install/install-with-kubectl
# Note: This fails to work because virtualbox file shares do not support the necessary
# filesystem operations required by longhorn.
# https://longhorn.io/docs/1.6.2/deploy/install/install-with-helm/
sudo -i
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
helm repo add longhorn https://charts.longhorn.io
@@ -136,6 +135,7 @@ monitoring_install_script = <<-SHELL
sudo -i
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
helm install prometheus-stack --version 58.6.0 -f /vagrant_shared/k3s/grafana-prometheus/prometheus-values.yaml prometheus-community/kube-prometheus-stack
kubectl apply -f /vagrant_shared/k3s/grafana-prometheus/alert-manager-ingress.yaml
@@ -146,25 +146,14 @@ monitoring_install_script = <<-SHELL
SHELL

scylladb_install_script = <<-SHELL
# See: https://operator.docs.scylladb.com/stable/generic.html
# https://github.com/scylladb/scylla-operator/blob/master/docs/source/helm.md
# See: https://github.com/scylladb/scylla-operator/blob/master/docs/source/helm.md
sudo -i
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
helm repo add scylla https://scylla-operator-charts.storage.googleapis.com/stable
helm repo update
helm install scylla-operator scylla/scylla-operator --values /vagrant_shared/k3s/scylladb/values.operator.yaml --create-namespace --namespace scylla-operator --wait
# For now disable the manager. See: https://github.com/scylladb/scylla-operator/blob/master/docs/source/manager.md
# helm install scylla-manager scylla/scylla-manager --values /vagrant_shared/k3s/scylladb/values.manager.yaml --create-namespace --namespace scylla-manager
helm install scylla-manager scylla/scylla-manager --values /vagrant_shared/k3s/scylladb/values.manager.yaml --create-namespace --namespace scylla-manager
helm install scylla scylla/scylla --values /vagrant_shared/k3s/scylladb/values.cluster.yaml --create-namespace --namespace scylla
#git clone --single-branch --branch v1.12.2 --depth=1 https://github.com/scylladb/scylla-operator.git scylla-operator
#pushd scylla-operator
#kubectl apply -f deploy/operator.yaml
#kubectl wait --for condition=established crd/scyllaclusters.scylla.scylladb.com
#kubectl -n scylla-operator rollout status deployment.apps/scylla-operator
#kubectl -n scylla-operator logs deployment.apps/scylla-operator
#kubectl create -f /vagrant_shared/k3s/scylladb/cluster.yaml
#popd
#rm -rf scylla-operator
SHELL


@@ -188,17 +177,13 @@ Vagrant.configure("2") do |config|
if !is_darwin_arm64
# x86 anything should work with this
control.vm.synced_folder "./shared", "/vagrant_shared"
# control.vm.synced_folder "./shared/longhorn/control", "/var/lib/longhorn"
control.vm.synced_folder "./shared/storage/control", "/opt/local-path-provisioner"
control.vm.provider "virtualbox" do |vb|
vb.memory = control_memory
vb.cpus = control_vcpu
end
else
# Specific config just for Arm Macs.
control.vm.synced_folder "./shared", "/vagrant_shared", type: "smb"
# control.vm.synced_folder "./shared/longhorn/control", "/var/lib/longhorn", type: "smb"
control.vm.synced_folder "./shared/storage/control", "/opt/local-path-provisioner", type: "smb"
control.vm.provider "qemu" do |qe|
qe.memory = control_memory
qe.smp = control_vcpu
@@ -209,12 +194,12 @@ Vagrant.configure("2") do |config|
control.vm.provision "shell", inline: helm_install_script
control.vm.provision "shell", inline: control_plane_script
control.vm.provision "shell", inline: cert_manager_install_script
control.vm.provision "shell", inline: local_path_provisioner_script
# control.vm.provision "shell", inline: longhorn_install_script
# We use longhorn, so don't set up the local-path-provisioner
# control.vm.provision "shell", inline: local_path_provisioner_script
control.vm.provision "shell", inline: longhorn_install_script
control.vm.provision "shell", inline: monitoring_install_script
control.vm.provision "shell", inline: registry_script
control.vm.provision "shell", inline: scylladb_install_script
# control.vm.provision "shell", inline: cassandra_install_script
end

agents.each do |agent_name, agent_ip|
@@ -223,16 +208,12 @@ Vagrant.configure("2") do |config|
agent.vm.hostname = agent_name
if !is_darwin_arm64
agent.vm.synced_folder "./shared", "/vagrant_shared"
# agent.vm.synced_folder "./shared/longhorn/"+agent_name, "/var/lib/longhorn"
agent.vm.synced_folder "./shared/storage/"+agent_name, "/opt/local-path-provisioner"
agent.vm.provider "virtualbox" do |vb|
vb.memory = agent_memory
vb.cpus = agent_vcpu
end
else
agent.vm.synced_folder "./shared", "/vagrant_shared", type: "smb"
#agent.vm.synced_folder "./shared/longhorn/"+agent_name, "/var/lib/longhorn", type: "smb"
agent.vm.synced_folder "./shared/storage/"+agent_name, "/opt/local-path-provisioner", type: "smb"
agent.vm.provider "qemu" do |qe|
qe.memory = agent_memory
qe.smp = agent_vcpu
4 changes: 4 additions & 0 deletions utilities/local-cluster/manifests/cat-voices-docs.yml
@@ -62,3 +62,7 @@ spec:
name: cat-gateway-docs
port:
number: 80
tls:
- secretName: cat-voices-docs-tls
hosts:
- docs.voices.cluster.test
4 changes: 3 additions & 1 deletion utilities/local-cluster/shared/extra.hosts
@@ -6,8 +6,10 @@
192.168.58.10 grafana.cluster.test # The Grafana dashboard
192.168.58.10 alert-manager.cluster.test # The Alert manager service.
192.168.58.10 prometheus.cluster.test # The Prometheus service.
192.168.58.10 longhorn.cluster.test # The Longhorn Storage service UI.
192.168.58.10 scylladb.cluster.test # The exposed scylladb running on the cluster.

# Catalyst Voices specific Hostnames
192.168.58.10 voices.cluster.test # cat voices - Front end
192.168.58.10 docs.voices.cluster.test # local docs server for cat voices (port 80)
192.168.59.10 db.voices.cluster.test # cat-voices exposed DB. UI on 80, DB itself on 5432
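
The hosts file above follows the standard `IP hostname # comment` layout. As a quick sanity sketch (not part of the commit, and not the actual logic of `scripts/check-cluster-dns.py`, which is assumed to do a fuller check), you can scan for `*.cluster.test` entries that do not point at the control-plane IP `192.168.58.10`:

```sh
# Sketch: flag *.cluster.test entries whose IP is not the control plane's.
# The sample data mirrors the hosts file above.
cat > /tmp/extra.hosts <<'EOF'
192.168.58.10 grafana.cluster.test # The Grafana dashboard
192.168.58.10 docs.voices.cluster.test # local docs server
192.168.59.10 db.voices.cluster.test # cat-voices exposed DB
EOF
awk '$2 ~ /cluster\.test$/ && $1 != "192.168.58.10" { print "suspect:", $1, $2 }' /tmp/extra.hosts
# → suspect: 192.168.59.10 db.voices.cluster.test
```

As the output shows, the `192.168.59.10` entry for `db.voices.cluster.test` stands out from the rest of the file; a check like this makes such outliers easy to spot, whether they are intentional or not.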

Original file line number Diff line number Diff line change
@@ -17,7 +17,7 @@ prometheus:
spec:
accessModes:
- ReadWriteOnce
storageClassName: local-path
storageClassName: longhorn-best-effort-local
resources:
requests:
storage: 2Gi
@@ -40,7 +40,7 @@ grafana:
persistence:
enabled: true
type: pvc
storageClassName: local-path
storageClassName: longhorn-best-effort-local
accessModes:
- ReadWriteOnce
size: 4Gi
Original file line number Diff line number Diff line change
@@ -5,7 +5,7 @@ metadata:
spec:
accessModes:
- ReadWriteOnce
storageClassName: local-path
storageClassName: longhorn-best-effort-local
resources:
requests:
storage: 20Gi
@@ -22,6 +22,7 @@ spec:
- "HEAD"
accessControlAllowOriginList:
- "https://registry-ui.cluster.test"
- "http://registry-ui.cluster.test"
accessControlAllowCredentials: false
accessControlMaxAge: 100
addVaryHeader: true
@@ -41,6 +42,19 @@ spec:
selector:
app: registry
---
apiVersion: v1
kind: Service
metadata:
name: registry-raw
spec:
type: LoadBalancer
ports:
- protocol: TCP
port: 5000
targetPort: 5000
selector:
app: registry
---
apiVersion: apps/v1
kind: Deployment
metadata:
6 changes: 1 addition & 5 deletions utilities/local-cluster/shared/k3s/registry/registry-ui.yaml
@@ -36,7 +36,7 @@ spec:
- name: REGISTRY_TITLE
value: "Catalyst Test Cluster - Container Registry"
- name: REGISTRY_URL
value: "https://registry.cluster.test"
value: "http://registry.cluster.test"
resources:
limits:
cpu: "0.2"
@@ -64,7 +64,3 @@ spec:
name: registry-ui
port:
number: 80
tls:
- secretName: registry-ui-tls
hosts:
- registry-ui.cluster.test
5 changes: 4 additions & 1 deletion utilities/local-cluster/shared/k3s/scylladb/cluster.yaml
@@ -30,7 +30,7 @@ spec:
members: 3
storage:
capacity: 5Gi
storageClassName: local-path
storageClassName: longhorn-strict-local

resources:
requests:
@@ -46,3 +46,6 @@ spec:
volumeMounts:
- mountPath: /tmp/coredumps
name: coredumpfs
exposeOptions:
nodeService:
type: LoadBalancer
Original file line number Diff line number Diff line change
@@ -16,7 +16,7 @@ racks:
members: 3
storage:
capacity: 5Gi
storageClassName: local-path
storageClassName: longhorn-strict-local

resources:
limits:
Original file line number Diff line number Diff line change
@@ -32,7 +32,7 @@ scylla:
members: 1
storage:
capacity: 5Gi
storageClassName: local-path
storageClassName: longhorn-strict-local
resources:
limits:
cpu: 1
