This is a set of questions and answers relating to the btp-setup-automator.
If you have other questions, you can ask them in the SAP Community or raise an issue as a feature request.
According to the Boosters documentation, boosters "are a set of guided interactive steps that enable you to select, configure, and consume services on SAP BTP to achieve a specific technical goal".
So they do share a similar purpose to the btp-setup-automator, but there are some fundamental differences:

- First, while guided interactive steps have their place, so do processes that can be automated and executed in an unattended fashion. The btp-setup-automator is designed to be usable in such automated environments, in continuous integration / continuous delivery (CI/CD) pipelines, in platform-wide setup scripts, and beyond.
- Then, there's the open source nature. A deliberate side-effect of making the btp-setup-automator available in this project is to demonstrate how to use the various command line interface (CLI) tools that work with SAP BTP, because that's how the actual setup work is achieved. Moreover, we want you to be able to create your own automation mechanisms using the project contents, to be inspired by it, and to configure those mechanisms as much or as little as you want.
- Finally, because it's open source, and you're in control, you don't have to request a new booster from SAP or wait for one to be created for you.
Containers are a great way to encapsulate independent sets of tools and configuration. What's more, that encapsulation can be made available to everyone regardless of their underlying platform. One of the biggest challenges of managing platforms, running development operations (DevOps) processes, and interacting with environments, is the setup and configuration required to do so at an individual level.
A container-based approach levels the playing field and allows you to start working immediately, without having to work through a boatload of prerequisites to get the basic tools in place.
📝 Tip - For more on how containers enable a better developer experience, but from a slightly different angle, you may be interested in the 3-part blog post series Boosting tutorial UX with dev containers.
The purpose of the btp-setup-automator is to show how you can automate the setup of an SAP BTP account. It's meant to be an inspiration for you to think of other ways to integrate SAP BTP into your development landscape, or to simply use the tool as is.
Feel free to create your own/better version of the btp-setup-automator in a programming language that you prefer, or contribute to this tool with a pull request.
Yes, you can. The file `setup_task_center.json` contains the setup of services for SAP Task Center, including establishing the trust with your custom IAS tenant.
The btp-setup-automator was started as a script to be integrated into CI/CD pipelines or other command-line setups. But of course you can create your own/better version of the btp-setup-automator in a programming language that you prefer, or contribute to this tool with a pull request.
See the Requirements section of the main README for details on what you need.
If you prefer, you can set the parameter `loginmethod` to `sso` in the `parameters.json` file and the script will ask you to click on a URL when a login is needed (you have to open a browser with the link). This happens for logging in via the SAP BTP CLI as well as for the Cloud Foundry CLI.
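A minimal sketch of such a parameter file, reusing the example values from the parameter files shown further down in this FAQ (the e-mail address is a placeholder):

```bash
# Write a minimal parameters.json that uses SSO instead of basic authentication.
# All values except "loginmethod" are placeholders taken from the examples in this FAQ.
cat > parameters.json <<'EOF'
{
  "usecasefile": "usecases/released/cap_app_launchpad.json",
  "region": "us10",
  "globalaccount": "youraccount-ga",
  "myemail": "your.email@example.com",
  "loginmethod": "sso"
}
EOF
```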
If you are using a Windows machine, there might be a default setup for the end of line sequence that is not compatible with Linux, namely `\r\n` as line breaks. To get rid of the error you have two options:
- Switch the end of line sequence setting in the VS Code window you opened via the shortcut in the lower right corner of VS Code (you need to have the `btpsa` file open to see the option). This opens the command palette where you must choose `LF`.
- Set the end of line sequence to `\n` via File --> Preferences --> Settings --> Files: EOL.
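As an alternative to the two VS Code options above, you can also strip the carriage returns from the command line. A minimal sketch, assuming a shell with GNU sed available (e.g. WSL or Git Bash):

```bash
# Replace Windows line endings (\r\n) with Unix line endings (\n) in the btpsa script.
sed -i 's/\r$//' btpsa
```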
You might be using the container image that is in your computer's cache. Execute the following steps to delete the cache:
- Identify the container via:

  ```bash
  docker ps
  ```

- Stop the btp-setup-automator container using the container ID from the previous step:

  ```bash
  docker stop <container_id>
  ```

- Delete the image and run the following command to delete the cache:

  ```bash
  docker system prune -a -f
  ```
Now get the most current btp-setup-automator image (as stated in the Download and Installation section of the main README.md) and start the container.
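A sketch of fetching a fresh image, assuming the image name used by this project on the GitHub container registry; check the main README.md for the exact tag to use:

```bash
# Pull the current image from the GitHub container registry
# (the "latest" tag is an assumption, use the tag documented in the main README.md).
docker pull ghcr.io/sap-samples/btp-setup-automator:latest
```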
Just script the script :-). You can create 3 different parameter files, which only differ in the `subaccountname` parameter, like this (a loop over the three files is sketched after them):

`parameterDEV.json` file:

```json
{
  "usecasefile": "usecases/released/cap_app_launchpad.json",
  "region": "us10",
  "globalaccount": "youraccount-ga",
  "myemail": "[email protected]",
  "loginmethod": "basicAuthentication",
  "subaccountname": "DEV"
}
```

`parameterTEST.json` file:

```json
{
  "usecasefile": "usecases/released/cap_app_launchpad.json",
  "region": "us10",
  "globalaccount": "youraccount-ga",
  "myemail": "[email protected]",
  "loginmethod": "basicAuthentication",
  "subaccountname": "TEST"
}
```

`parameterPROD.json` file:

```json
{
  "usecasefile": "usecases/released/cap_app_launchpad.json",
  "region": "us10",
  "globalaccount": "youraccount-ga",
  "myemail": "[email protected]",
  "loginmethod": "basicAuthentication",
  "subaccountname": "PROD"
}
```
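You can then run the automator once per stage. A minimal sketch, assuming `btpsa` accepts a parameter file option named `-parameterfile` (run `./btpsa -h` to confirm the exact option name):

```bash
# Run the setup once for each stage, using the corresponding parameter file.
for stage in DEV TEST PROD; do
  ./btpsa -parameterfile "parameter${stage}.json"
done
```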
You find all available parameters for the `btpsa` CLI tool in the file `libs/btpsa-parameters.json`. All parameters, including their data types and default values, are defined in there. The CLI uses this definition at runtime.
As an alternative, you can also use the CLI directly and key in:

```bash
./btpsa -h
```
A detailed description of the parameters for the `parameters.json` is available here: link

A detailed description of the parameters for the `usecase.json` is available here: link
We want to keep the size of the Docker image as small as possible and to contain the essential tool set within that image.
At the same time, you can add any additional packages inside the `executeBeforeAccountSetup` section of your use case file, as we have installed the `sudo` Alpine package in the image. Just add something like this to your use case file:
```json
..
"executeBeforeAccountSetup": [
    {
        "description": "install the `uuidgen` and the `nano` package",
        "command": "sudo apk add uuidgen nano"
    }
]
..
```
SaaS applications are deployed in a global account and, if registered, are available in all subaccounts. They have to be handled differently when creating an app subscription. You have to provide the following values in your btpsa files:
- In the `parameters.json` file you must provide a value for the `"customAppProviderSubaccountId"`. This subaccount must be a subaccount where the application is available, e.g. the provider subaccount. This is needed to execute the validation of the use case file.
- In the `usecase.json` file, when you specify the values for the APPLICATION that you want to subscribe to, make sure that you set the value of `"customerDeveloped"` to `true`, as this will distinguish it from a regular application available on SAP BTP.
📝 Tip - When deploying the app to SAP BTP into the provider subaccount, make sure that the value for the plan is available via the BTP CLI. You can check that via `btp --format json list accounts/subscription --subaccount <YOUR SUBACCOUNT ID>`.
Suddenly I'm getting a "docker: Error response from daemon: Head "https://ghcr.io/v2/sap-samples/btp-setup-automator/manifests/main": denied: denied.". What's going on here?
One possibility: your docker login is trying to connect to GitHub via an expired GitHub token (one that you previously used to connect docker with GitHub). To fix this issue, run this command in the command line:

```bash
docker logout ghcr.io
```
🚧 No FAQ yet 🚧
As the SAP BTP, Kyma runtime is using open-source Kyma, the flow to access Kyma is based on an OIDC flow executed via the kubelogin plugin. This flow needs access to a browser and does not support native execution from a Docker image (see "Known Limitations" of the kubelogin plugin). The btp-setup-automator cannot remove this limitation.
There are some workarounds that can help you to bypass the limitation:
- You can execute the btp-setup-automator from within VS Code using the "Remote Containers: Attach to running container ..." functionality to access the running container via VS Code. This will enable the OIDC flow, as the necessary opening of the browser can be executed by the plugin in this setup.
- Use a service account to access the Kyma cluster. This also means that you need to split the use case into two parts: part one sets up the Kyma cluster and part two executes the desired steps in the cluster. The flow looks like this:
  - Define and execute a use case to provision the Kyma cluster.
  - Create the service account manually and store the `kubeconfig` of the service account in the container.
  - Define and execute a use case that contains the setup in the Kyma cluster, using the service account to log on in a non-interactive way, bypassing kubelogin.
- Execute the non-interactive authentication flow via a "technical" user in the IAS tenant. You must create your own tenant in IAS and define the necessary user, then use a custom OIDC provider when provisioning the Kyma cluster (assigning the user as admin) and generate the tokens that you can insert into the `kubeconfig`, avoiding kubelogin. OIDC providers besides IAS usually have a similar flow for these scenarios. This is probably the procedure with the biggest effort. It comes with the same downside as the scenario mentioned before, as you must split your use case file into two parts.
To make your life easier, we provide some scripts that ease the creation and fetching of the Kubernetes configuration aka `kubeconfig`. You find the following files in the folder `config/kubernetes`:

- `service-account.yaml`: file containing the definition of the service account and the cluster role.
- `cluster-role-binding.yaml`: file containing the cluster role binding, connecting the service account with the corresponding cluster role.
- `kubeconfig-sa-mac.sh` / `kubeconfig-sa-windows.ps1`: scripts to fetch the relevant data for the `kubeconfig.yaml`.
Execute the following steps (a consolidated sketch of the commands follows the list):

- Create a namespace for the `ClusterRoleBinding`.
- Set the namespace as a variable:
  - macOS:

    ```bash
    export ns=<your_namespace>
    ```

  - Windows:

    ```powershell
    $ns = "<your_namespace>"
    ```

- Replace `<YOUR NAMESPACE>` with your value of the namespace in the file `cluster-role-binding.yaml`.
- Apply the file `service-account.yaml` via `kubectl`:

  ```bash
  kubectl apply -f config/kubernetes/service-account.yaml -n $ns
  ```

- Apply the file `cluster-role-binding.yaml` via `kubectl`:

  ```bash
  kubectl apply -f config/kubernetes/cluster-role-binding.yaml -n $ns
  ```
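For reference, here is a consolidated sketch of the steps above for macOS/Linux; the `kubectl create namespace` call assumes the namespace does not exist yet, and the namespace name is a placeholder:

```bash
# Create the namespace and set it as a variable.
export ns=<your_namespace>
kubectl create namespace $ns

# After replacing <YOUR NAMESPACE> in cluster-role-binding.yaml, apply both files.
kubectl apply -f config/kubernetes/service-account.yaml -n $ns
kubectl apply -f config/kubernetes/cluster-role-binding.yaml -n $ns
```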
You can now download the `kubeconfig` file either via the Kyma dashboard or via the scripts provided in this repository:
- From the Kyma dashboard:
  - Navigate to your namespace.
  - Access Configurations --> Service Accounts.
  - Open the service account created by you in the previous step.
  - Download the kubeconfig and store it.
- Via the scripts:
  - If using macOS, set the execute permission on the file `config/kubernetes/kubeconfig-sa-mac.sh` and run it:

    ```bash
    chmod +x config/kubernetes/kubeconfig-sa-mac.sh
    ./config/kubernetes/kubeconfig-sa-mac.sh
    ```

  - If using Windows, set `Set-ExecutionPolicy Unrestricted` to change the execution policy if needed and run the script:

    ```powershell
    .\config\kubernetes\kubeconfig-sa-windows.ps1
    ```

  - 📝 Tip - Do not execute the script multiple times. It will append the config data over and over, which will end up in an invalid config file.
You can then use this file to create services in the Kyma runtime via `kubectl`, specifying the path using the parameter `"kubeconfigpath"` in your `parameters.json` file.
📝 Tip - When you copy your configuration into the container, be aware that the btp-setup-automator will store the OIDC-based kubeconfig of Kyma as `.kube/config`. Make sure that you either copy your service account configuration to a different place or name it differently, e.g. `config-sa`.
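A minimal sketch of the relevant parameter, assuming the service account configuration was stored inside the container as `.kube/config-sa` (the home directory path and file name follow the tip above and are assumptions; the remaining values mirror the parameter file examples earlier in this FAQ):

```bash
# Add the "kubeconfigpath" parameter to parameters.json so that kubectl uses the
# service account kubeconfig instead of the OIDC-based one.
cat > parameters.json <<'EOF'
{
  "usecasefile": "usecases/released/cap_app_launchpad.json",
  "region": "us10",
  "globalaccount": "youraccount-ga",
  "myemail": "your.email@example.com",
  "loginmethod": "basicAuthentication",
  "kubeconfigpath": "/home/user/.kube/config-sa"
}
EOF
```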