
Extending tf definitions of kubestack clusters with custom requirements #136

Open
pijemcolu opened this issue Sep 30, 2020 · 3 comments

@pijemcolu

pijemcolu commented Sep 30, 2020

Currently, none of the kubestack modules have any outputs. This makes it difficult to extend the clusters with custom Terraform code, at least without touching the module internals, when one wants to provision custom resources around the cluster.

There are really a couple of questions:

  1. How do we envision extending the cluster with custom terraform declarations?
  2. How do we upgrade the kubestack version?
  3. Should we start implementing outputs similar to the one proposed in #133 (Expose cluster name_servers as module output)?

I envision upgrades being a rather manual process, maybe a git merge from upstream, keeping kbst/terraform as the upstream remote.
At the same time, I'd envision a single extensions.tf that uses the proposed outputs to extend the clusters with custom terraform declarations.
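To make this concrete, here is a hypothetical sketch of such an extensions.tf. It assumes the cluster module exposed a name_servers output as proposed in #133; the module name eks_zero, the zone, and the record names are illustrative, not actual Kubestack API:

```hcl
# extensions.tf -- hypothetical sketch, not current Kubestack API.
# Assumes the cluster module exposes a "name_servers" output (#133)
# and is named "eks_zero" in clusters.tf.

data "aws_route53_zone" "parent" {
  name = "example.com."
}

# Delegate a subdomain to the cluster's DNS zone using the
# name servers the module would output.
resource "aws_route53_record" "cluster_ns" {
  zone_id = data.aws_route53_zone.parent.zone_id
  name    = "cluster.example.com"
  type    = "NS"
  ttl     = 300
  records = module.eks_zero.name_servers
}
```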

@pst
Member

pst commented Oct 1, 2020

1. How do we envision extending the cluster with custom terraform declarations?

I think it depends on the type of declarations. At a high level, the goal is for the cluster modules to cover the most common configurations. Certain configurations may require forking and replacing the cluster module. As the user community and use-cases grow, there may eventually be variants of the cluster modules for different use-cases, similar to how I implemented the local dev env with the cluster-local variant of each cluster module.

2. How do we upgrade the kubestack version?

Currently, upgrades require bumping the version in clusters.tf and Dockerfile*. I document specific requirements in the upgrade notes of each release. https://github.com/kbst/terraform-kubestack/releases
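As a sketch, the bump in clusters.tf amounts to changing the ref on the module source; the source URL layout and version string below are illustrative examples, not a statement of the current Kubestack module paths:

```hcl
# clusters.tf -- illustrative sketch of a version bump.
# The module name, source path and version are examples only;
# consult the release's upgrade notes for the actual values.
module "eks_zero" {
  source = "github.com/kbst/terraform-kubestack//aws/cluster?ref=v0.10.0"

  # ... cluster configuration stays unchanged ...
}
```

The version referenced in the Dockerfile* files would be bumped to the matching release tag in the same step.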

The new kbst CLI, which provides the local development environment, also has a feature to scaffold new repositories. Similarly, I'd like it to assist in upgrading existing repositories. It already has install, remove and update functionality for manifests from the catalog.

$ kbst -h
Kubestack Framework CLI

Usage:
  kbst [command]

Available Commands:
  help        Help about any command
  local       Start a localhost development environment
  manifest    Add, update and remove services from the catalog
  repository  Create and change Kubestack repositories

Flags:
  -h, --help          help for kbst
  -p, --path string   path to the working directory (default ".")

Use "kbst [command] --help" for more information about a command.
3. Should we start implementing outputs similar to the output proposed in #133 ?

I've been avoiding getting into this so far, because I think it should be well thought through and I didn't feel comfortable making that decision alone. I'd appreciate a constructive discussion about how to drive this forward in a way that works for all three supported providers and has a decent chance of not requiring a breaking change in the very next release.

I envision upgrades being a rather manual process, maybe git merge upstream, keeping kbst/terraform as the upstream.
At the same time I'd envision a single extensions.tf using the proposed outputs in order to extend the clusters with custom terraform declarations.

The user repositories are scaffolded from the starters built for each release. The starters are built during release from /quickstart/src, so a git merge would not work. That's why I was careful to limit upgrade requirements to changing the versions in clusters.tf and in Dockerfile*, and why I hope to provide an even easier UX with the CLI.

@jeacott1

jeacott1 commented Nov 24, 2021

Has this advice changed at all re kubestack upgrades? It took me a while to find this issue, and I couldn't find any specific advice in the published guide (but perhaps it's there somewhere?).
It seems like perhaps service module versions might also need updating, i.e.

  source  = "kbst.xyz/catalog/argo-cd/kustomization"
  version = "2.0.5-kbst.0"

are these always backward compatible?

Re "extending the cluster with custom terraform declarations": I'd also love to know what is envisioned for adding basic resources, like a shared disk for example. It would be great if adding a resource "azurerm_storage_share" somewhere could also take advantage of the built-in ops/apps configuration mechanism without recreating it all.
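As an illustration of the kind of extension being asked about, a hypothetical extensions.tf could declare the share next to the cluster module. This is a sketch only: it assumes the cluster module exposed its resource group name as an output, and the module name aks_zero, the output name, and all resource attributes are illustrative:

```hcl
# Hypothetical sketch -- not current Kubestack API.
# Assumes the AKS cluster module is named "aks_zero" and exposes a
# "resource_group" output; names and values are illustrative.

resource "azurerm_storage_account" "shared" {
  name                     = "clustershared"
  resource_group_name      = module.aks_zero.resource_group
  location                 = "westeurope"
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

resource "azurerm_storage_share" "shared" {
  name                 = "shared-disk"
  storage_account_name = azurerm_storage_account.shared.name
  quota                = 50
}
```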

@pst
Member

pst commented Nov 24, 2021

The cluster service module versions define which upstream version of the service you get. So while, yes, you want to update them, they are not tied to the framework module's release schedule at all.

A bit more info here: https://www.kubestack.com/framework/documentation/cluster-service-modules#module-attributes

If you have suggestions what else you'd like to see in the docs, I'd be happy to hear your thoughts.
