
Commit

Merge pull request #175 from epics-containers/post-first-workshop
remove reference to .profile
gilesknap authored Sep 12, 2024
2 parents 5b1a2f4 + 0793fb4 commit 05e2012
Showing 12 changed files with 52 additions and 54 deletions.
2 changes: 1 addition & 1 deletion docs/reference/docker.md
@@ -13,7 +13,7 @@ We do fully support `docker`, please report any issues you find.

There are a few things to know if you are using `docker` in your developer containers:

-1. add `export EC_REMOTE_USER=$USER` into your `$HOME/.profile`. The epics-containers devcontainer.json files will use this to set the account that your user will use inside devcontainers.
+1. add `export EC_REMOTE_USER=$USER` into your `$HOME/.bashrc` (or `$HOME/.zshrc` for zsh users). The epics-containers devcontainer.json files will use this to set the account that your user will use inside devcontainers.
1. you may need to tell git that you are ok with the repository permissions. `vscode` may ask you about this or you may need to do the command:
```bash
git config --global --add safe.directory <Git folder>
2 changes: 1 addition & 1 deletion docs/reference/environment.md
@@ -92,7 +92,7 @@ Perhaps the simplest way to achieve this is to install `ec` into your user space
pip install --user ec-cli
```

-Then add the following to your `$HOME/.profile` file:
+Then add the following to your `$HOME/.bashrc` (or `$HOME/.zshrc` for zsh users):

```bash
PATH=$PATH:$HOME/.local/bin
10 changes: 5 additions & 5 deletions docs/tutorials/create_ioc.md
@@ -126,7 +126,7 @@ The config folder can contain a variety of different files [as listed here](http

Each `entity` listed in the *IOC yaml* file will create an instance of the support module `entity_model` that it refers to. It will pass a number of arguments to the `entity_model` that will be used to generate the startup script entries and EPICS Database entries for that entity. The `entity_model` is responsible for declaring the parameters it expects and how they are used in the script and DB entries it generates. It supplies types and descriptions for each of these parameters, plus may supply default values.
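As a sketch of what this looks like in practice — the entity type and parameter names below are invented for illustration, not copied from the real support yaml — an entity in the *IOC yaml* supplies arguments to its `entity_model`:

```shell
# Write a hypothetical ioc.yaml fragment: one entity that instantiates an
# entity_model, passing the parameters that the model declares
cat > /tmp/ioc.yaml <<'EOF'
ioc_name: bl01t-ea-cam-01
description: simulation detector IOC instance
entities:
  - type: ADSimDetector.simDetector   # which entity_model to instantiate
    PORT: DET.CAM                     # arguments declared by the model,
    P: BL01T-EA-CAM-01                # with types, descriptions and defaults
EOF
grep -c '^  - type:' /tmp/ioc.yaml    # counts entities -> prints 1
```

At load time each such entity is expanded into the startup script and EPICS Database entries described above.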

-We will be creating a simulation detector from the `ioc-adsimdetector` Generic IOC. The following *Support yaml* for the simulation detector is baked into the container. Once you have your container up and running you can use `dc exec bl01t-ea-cam-01 bash` to get a shell inside and see this file at **/epics/ibek_defs/ADSimDetector.ibek.support.yaml**.
+We will be creating a simulation detector from the `ioc-adsimdetector` Generic IOC. The following *Support yaml* for the simulation detector is baked into the container. Once you have your container up and running you can use `docker compose exec bl01t-ea-cam-01 bash` to get a shell inside and see this file at **/epics/ibek_defs/ADSimDetector.ibek.support.yaml**.

```yaml
# yaml-language-server: $schema=https://github.com/epics-containers/ibek/releases/download/3.0.1/ibek.support.schema.json
@@ -299,7 +299,7 @@ We can launch all the services in the beamline as we did in the earlier tutorial
```bash
cd t01-services
source ./environment.sh
-dc up -d
+docker compose up -d
```

The new screen will allow you to hit 'Acquire' on the CAMERA pane, and 'Enable' on the Standard Array pane. You should see the moving image from the simulation detector in the right hand image pane.
@@ -384,7 +384,7 @@ Fill out the rest of your NDProcess entity as follows:
Now restart your simulation detector IOC:

```bash
-dc restart bl01t-ea-cam-01
+docker compose restart bl01t-ea-cam-01
```

Once it is back up you can click on the bl01t-ea-cam-01 button in the 'Autogenerated Engineering Screens' pane and you will see a new 'NDProcess' entity. If you know about wiring up AreaDetector you can now wire this plugin into your pipeline and make modifications to the image data as it passes through.
@@ -400,7 +400,7 @@ folder. Or alternatively you could override behaviour completely by placing
To see what ibek generated you can go and look inside the IOC container:

```bash
-dc exec bl01t-ea-test-02
+docker compose exec bl01t-ea-test-02
cd /epics/runtime/
cat ioc.subst
cat st.cmd
@@ -431,7 +431,7 @@ Your IOC Instance will now be using the raw startup script and database. But sho

Restart the IOC to see it operating as before (except that engineering screen generation will no longer happen):
```bash
-dc restart bl01t-ea-cam-01
+docker compose restart bl01t-ea-cam-01
```

:::{note}
28 changes: 13 additions & 15 deletions docs/tutorials/deploy_example.md
@@ -80,23 +80,21 @@ You can see the status of the services by running the following command:
docker compose ps
```

-In environment.sh we created an alias for `docker compose` named `dc` from now on we'll shorten the commands to use `dc` instead of `docker compose`.

## Managing the Example IOC Instance

### Starting and Stopping IOCs

-To stop / start the example IOC try the following commands. Note that `dc ps -a` shows you all IOCs including stopped ones.
+To stop / start the example IOC try the following commands. Note that `docker compose ps -a` shows you all IOCs including stopped ones.

Also note that tab completion should allow you to complete the names of your commands and services. e.g.
-`dc star <tab> ex <tab>`, should complete to `dc start example-test-01`.
+`docker compose star <tab> ex <tab>`, should complete to `docker compose start example-test-01`.

```bash
-dc ps -a
-dc stop example-test-01
-dc ps -a
-dc start example-test-01
-dc ps
+docker compose ps -a
+docker compose stop example-test-01
+docker compose ps -a
+docker compose start example-test-01
+docker compose ps
```

:::{Note}
@@ -112,7 +110,7 @@ This is a Generic IOC image and all IOC Instances must be based upon one of thes
To run a bash shell inside the IOC container:

```bash
-dc exec example-test-01 bash
+docker compose exec example-test-01 bash
caget EXAMPLE:SUM
```

@@ -138,13 +136,13 @@ In the Virtual Machine supplied for testing epics-containers we do not install E
To get the current logs for the example IOC:

```bash
-dc logs example-test-01
+docker compose logs example-test-01
```

Or follow the IOC log until you hit ctrl-C:

```bash
-dc logs example-test-01 -f
+docker compose logs example-test-01 -f
```

You should see the log of ibek loading and generating the IOC startup assets and then the ioc shell startup script log. Ibek is the tool that runs inside of the IOC container and generates the ioc shell script and database file by interpreting the /epics/ioc/config/ioc.yaml at launch time.
@@ -154,7 +152,7 @@ You should see the log of ibek loading and generating the IOC startup assets and
You can stop all the services with the following command.

```bash
-dc stop
+docker compose stop
```

This will stop all the currently running containers described in the `compose.yml` file.
@@ -166,7 +164,7 @@ However this will leave the resources themselves in place:
To take down the services and remove all of their resources use the following command:

```bash
-dc down
+docker compose down
```

### Monitoring and interacting with an IOC shell
@@ -177,7 +175,7 @@ shell. In the next tutorial we will use this command to interact with
iocShell.

```bash
-dc attach example-test-01
+docker compose attach example-test-01
dbl
# ctrl-p ctrl-q to detach
```
8 changes: 4 additions & 4 deletions docs/tutorials/dev_container.md
@@ -23,7 +23,7 @@ This means making changes to the IOC instance folders which appear in the `servi
To make a change like this requires:

- change the IOC instance ioc.yaml or other configuration files in the services repository
-- re-launch the IOC with `dc restart <ioc-name>`
+- re-launch the IOC with `docker compose restart <ioc-name>`
- that's it. No compilation required because we are only changing instance configuration here, not the IOC binary or dbd.

(changes_2)=
@@ -43,7 +43,7 @@ To make a change like this requires:
- make changes to the Generic IOC Dockerfile (which holds the build instructions for a Generic IOC - we will discuss this in {any}`generic_ioc`)
- push the changes and tag the repo - this will build and publish a new container image using CI
- change the IOC instance in the services repo to point at the new container image
-- redeploy the IOC with `dc restart <ioc-name>`
+- redeploy the IOC with `docker compose restart <ioc-name>`


(changes_3)=
@@ -130,7 +130,7 @@ Before continuing this tutorial make sure you have not left any IOCs running fro
```bash
cd t01-services
. ./environment.sh
-dc down
+docker compose down
```
:::

@@ -154,7 +154,7 @@ Users of docker need to instruct the devcontainer to use their own user id insid
export EC_REMOTE_USER=$USER
```

-It is recommended that you place this command in `$HOME/.profile` to make it permanent.
+It is recommended that you place this command in `$HOME/.bashrc` (or `$HOME/.zshrc` for zsh users) to make it permanent.

If you do not do this, your devcontainer will run as root. Although it will still work, it is not recommended. Also, forgetting to set EC_REMOTE_USER before launching a pre-existing devcontainer will cause errors. (my apologies to docker users - I wanted to make the devcontainer compatible with both docker and podman and this is the least invasive method I could come up with).
:::
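For docker users following along, a minimal sketch of making the export permanent without duplicating it on repeated runs (use `.zshrc` instead under zsh):

```shell
# Append the export to ~/.bashrc only if that exact line is not already there
line='export EC_REMOTE_USER=$USER'
touch "$HOME/.bashrc"
grep -qxF "$line" "$HOME/.bashrc" || echo "$line" >> "$HOME/.bashrc"
```

Re-running the snippet is harmless: the `grep -qxF` guard skips the append when the line already exists.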
2 changes: 1 addition & 1 deletion docs/tutorials/dev_container2.md
@@ -51,7 +51,7 @@ IMPORTANT: the commands we are about to run must be executed on the host, not in
```bash
cd /workspaces/ioc-adsimdetector/compose
. ./environment.sh
-dc up -d
+docker compose up -d
```

Phoebus will be launched and attempt to load the bob file called **opi/ioc/index.bob**. The **opi** folder is a place where the author of this generic IOC could place some screens. The subfolder **ioc** is not committed to git and is where the IOC instance will place its autogenerated engineering screens. The autogenerated screens always include and index.bob which is the entry point to all other autogenerated screens.
2 changes: 1 addition & 1 deletion docs/tutorials/generic_ioc.md
@@ -708,6 +708,6 @@ https://github.com/orgs/YOUR_GITHUB_ACCOUNT/packages?repo_name=ioc-lakeshore340
## EXERCISE
-Now you have a published Generic IOC container image for ioc-lakeshore340. See if you can add an IOC instance that uses this into your `bl01t` beamline. You should then be able to run up your IOC instance with `dc deploy-local`. You could also run a local version of the simulator and see if you can get the IOC to talk to it.
+Now you have a published Generic IOC container image for ioc-lakeshore340. See if you can add an IOC instance that uses this into your `bl01t` beamline. You should then be able to run up your IOC instance with `docker compose deploy-local`. You could also run a local version of the simulator and see if you can get the IOC to talk to it.
8 changes: 4 additions & 4 deletions docs/tutorials/ioc_changes1.md
@@ -76,7 +76,7 @@ Make sure the ca-gateway from the previous tutorial is stopped before launching
# IMPORTANT: do this in a terminal outside of the devcontainer
cd ioc-adsimdetector/compose
. ./environment.sh
-dc down
+docker compose down
```

Now you can launch your test beamline and it will have picked up the new extras.db. Note that we run caget inside the IOC container because not all users will have caget on their host. Those that have it on the host can just type: `caget BL01T-EA-CAM-01:TEST`.
@@ -85,11 +85,11 @@ Now you can launch your test beamline and it will have picked up the new extras.
# IMPORTANT: do this in a terminal outside of the devcontainer
cd t01-services
. ./environment.sh
-dc up -d
-dc exec example-test-01 caget BL01T-EA-CAM-01:TEST
+docker compose up -d
+docker compose exec example-test-01 caget BL01T-EA-CAM-01:TEST

# Now shut down the beamline again so we can continue with further developer container tutorials
-dc down
+docker compose down
```

## Raw Startup Assets
2 changes: 1 addition & 1 deletion docs/tutorials/ioc_changes2.md
@@ -73,7 +73,7 @@ From *outside of the developer container* start up phoebus and the ca-gateway as
```bash
cd ioc-adsimdetector/compose
. ./environment.sh
-dc up -d
+docker compose up -d
```

Phoebus should now be up and running and showing the auto generated **index.bob**. In phoebus do the following steps:
27 changes: 13 additions & 14 deletions docs/tutorials/launch_example.md
@@ -30,8 +30,7 @@ git clone https://github.com/epics-containers/example-services
cd example-services
# setup some environment variables
source ./environment.sh
-# launch the docker compose configuration (with dc alias for docker compose)
-dc up -d
+docker compose up -d
```

If all is well you should see phoebus launch with an overview of the beamline like the following:
@@ -49,28 +48,28 @@ caget BL01T-DI-CAM-01:DET:Acquire_RBV

# OR if you don't have caget/put locally then use one of the containers instead:
# execute caget from inside one of the example IOCs
-dc exec bl01t-ea-test-01 caget BL01T-DI-CAM-01:DET:Acquire_RBV
+docker compose exec bl01t-ea-test-01 caget BL01T-DI-CAM-01:DET:Acquire_RBV
# or get a shell inside an example IOC and use caget
-dc exec bl01t-ea-test-01 bash
+docker compose exec bl01t-ea-test-01 bash
caget BL01T-DI-CAM-01:DET:Acquire_RBV

# attach to logs of a service (-f follows the logs, use ctrl-c to exit)
-dc logs bl01t-di-cam-01 -f
+docker compose logs bl01t-di-cam-01 -f
# stop a service
-dc stop bl01t-di-cam-01
+docker compose stop bl01t-di-cam-01
# restart a service
-dc start bl01t-di-cam-01
+docker compose start bl01t-di-cam-01
# attach to a service stdio
-dc attach bl01t-di-cam-01
+docker compose attach bl01t-di-cam-01
# exec a process in a service
-dc exec bl01t-di-cam-01 bash
+docker compose exec bl01t-di-cam-01 bash
# delete a service (deletes the container)
-dc down bl01t-di-cam-01
+docker compose down bl01t-di-cam-01
# create and launch a single service (plus its dependencies)
-dc up bl01t-di-cam-01 -d
+docker compose up bl01t-di-cam-01 -d
# close down and delete all the containers
# volumes are not deleted to preserve the data
-dc down
+docker compose down
```

This tutorial is a simple introduction to validate that the setup is working. In the following tutorials you will get to create your own beamline and start adding IOCs to it.
@@ -79,9 +78,9 @@
Before moving on to the next tutorial always make sure to stop and delete the containers from your current example as follows:

```bash
dc down
docker compose down
```

-If you do not do this you will get name clashes when trying the next example. If this happens - return to the previous project directory and use `dc down`.
+If you do not do this you will get name clashes when trying the next example. If this happens - return to the previous project directory and use `docker compose down`.

This is a deliberate choice as we can only run a single ca-gateway on a given host port at a time. For more complex setups you could share one ca-gateway between multiple compose configurations, but we have chosen to keep things simple for these tutorials.
9 changes: 5 additions & 4 deletions docs/tutorials/setup_k8s.md
@@ -59,11 +59,12 @@ Kubectl is the command line tool for interacting with Kubernetes Clusters.
Note that by default, the kubectl that comes with k3s reads its config from /etc/rancher/k3s/k3s.yaml and would therefore be run with sudo. By using $KUBECONFIG we conform to the standard version that reads its config from $HOME/.kube/config.

```
-echo 'export KUBECONFIG=$HOME/.kube/config' >> $HOME/.profile
-source $HOME/.profile
+echo 'export KUBECONFIG=$HOME/.kube/config' >> $HOME/.bashrc
+source $HOME/.bashrc
```
+(replace `$HOME/.bashrc` with `$HOME/.zshrc` for zsh users)

-Then log out for this to be set for all shells.
+Then log out and back in for this to be set for all shells.
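The effect can be sanity-checked in the current shell before logging out (a sketch; no cluster access is needed):

```shell
# Point kubectl at the standard per-user config and confirm the path exists
export KUBECONFIG="$HOME/.kube/config"
mkdir -p "${KUBECONFIG%/*}"   # ensure ~/.kube exists
: >> "$KUBECONFIG"            # create an empty config file if missing
echo "kubectl will read: $KUBECONFIG"
```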

### Configure kubectl

Expand Down Expand Up @@ -221,7 +222,7 @@ To install the `argocd` cli tool:
```
curl -sSL -o argocd-linux-amd64 https://github.com/argoproj/argo-cd/releases/latest/download/argocd-linux-amd64
sudo install -m 555 argocd-linux-amd64 /usr/local/bin/argocd
rm argocd-linux-amd64
rm argocd-linux-amd64
```

Create a new argocd project from the command line, with permissions to deploy into your namespace:
6 changes: 3 additions & 3 deletions docs/tutorials/setup_workstation.md
@@ -149,11 +149,11 @@ Other users of podman please see these instructions [rootless podman with docker

### Important Notes Regarding docker and podman

-From here on when we refer to `docker` in a command line, you can replace it with `podman` if you are using podman. The two tools have the same CLI. For convenience if you are a podman user you might want to place
+From here on when we refer to `docker` in a command line, you can replace it with `podman` if you are using podman. The two tools have (almost) the same CLI. For convenience if you are a podman user you might want to place
```bash
alias docker=podman
```
-in your `$HOME/.profile` file.
+in your `$HOME/.bashrc` (or `$HOME/.zshrc` for zsh users).

`docker` users should also take a look at this page: [](../reference/docker.md) which describes a couple of extra steps that are required to make docker work in developer containers.

@@ -207,7 +207,7 @@ source $HOME/ec-venv/bin/activate
python3 -m pip install --upgrade pip
```

-Note that each time you open a new shell you will need to activate the virtual environment again. (Or place its bin folder in your path by adding `PATH=$HOME/ec-venv/bin:$PATH` in your `$HOME/.profile`).
+Note that each time you open a new shell you will need to activate the virtual environment again. (Or place its bin folder in your path by adding `PATH=$HOME/ec-venv/bin:$PATH` in your `$HOME/.bashrc` (or `$HOME/.zshrc` for zsh users)).
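That PATH alternative can be sketched defensively, so re-sourcing your shell profile does not keep prepending the same folder (venv location as in the text):

```shell
# Prepend the venv bin folder to PATH, but only if it is not already there
venv_bin="$HOME/ec-venv/bin"
case ":$PATH:" in
  *":$venv_bin:"*) ;;               # already on PATH: nothing to do
  *) PATH="$venv_bin:$PATH" ;;
esac
export PATH
```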

(copier)=

