diff --git a/.github/workflows/scan-images.yaml b/.github/workflows/scan-images.yaml index e635eaa8..d64c5fcf 100644 --- a/.github/workflows/scan-images.yaml +++ b/.github/workflows/scan-images.yaml @@ -13,32 +13,21 @@ jobs: # specfy location of bundle(s) to be scanned bundle: - releases/1.8/stable/kubeflow - - releases/1.9/stable/kubeflow + - releases/1.9/stable - releases/latest/edge - runs-on: ubuntu-20.04 + runs-on: [self-hosted, linux, X64, jammy, large] steps: - # Ideally we'd use self-hosted runners, but this effort is still not stable - # This action will remove unused software (dotnet, haskell, android libs, codeql, - # and docker images) from the GH runner, which will liberate around 60 GB of storage - # distributed in 40GB for root and around 20 for a mnt point. - - name: Maximise GH runner space - uses: easimon/maximize-build-space@v7 - with: - root-reserve-mb: 29696 - remove-dotnet: 'true' - remove-haskell: 'true' - remove-android: 'true' - remove-codeql: 'true' - remove-docker-images: 'true' - name: Checkout uses: actions/checkout@v3 with: fetch-depth: 0 + - name: Setup tools id: setup run: | sudo snap install yq echo "date=$(date '+%Y-%m-%d-%H-%M-%S')" >> $GITHUB_OUTPUT + - name: Checkout kubeflow-ci uses: actions/checkout@v3 with: @@ -46,6 +35,7 @@ jobs: sparse-checkout: scripts/images/ ref: main path: kubeflow-ci + - name: Get images id: images run: | @@ -53,18 +43,22 @@ jobs: BUNDLE_SPLIT=(${BUNDLE//\// }) RELEASE=${BUNDLE_SPLIT[1]} RISK=${BUNDLE_SPLIT[2]} - IMAGES=$(./kubeflow-ci/scripts/images/get-all-images.sh ${{ matrix.bundle }}/bundle.yaml ${RELEASE}-${RISK}) - echo "$IMAGES" > ./image_list.txt + + pip3 install -r scripts/requirements.txt + python3 scripts/get-all-images.py ${{ matrix.bundle }}/bundle.yaml > image_list.txt echo "Image list:" cat ./image_list.txt echo "release_risk=${RELEASE}-${RISK}" >> $GITHUB_OUTPUT + - name: Scan images run: | ./kubeflow-ci/scripts/images/scan-images.sh ./image_list.txt 
./kubeflow-ci/scripts/images/get-summary.py --report-path ./trivy-reports --print-header > scan-summary-${{ steps.setup.outputs.date}}-${{ steps.images.outputs.release_risk }}.csv + - name: Prepare artifacts run: | tar zcvf trivy-reports-${{ steps.setup.outputs.date}}-${{ steps.images.outputs.release_risk }}-${{ strategy.job-index }}.tar.gz ./trivy-reports + - name: Upload Trivy reports uses: actions/upload-artifact@v3 with: diff --git a/scripts/README.md b/scripts/README.md new file mode 100644 index 00000000..c44953dd --- /dev/null +++ b/scripts/README.md @@ -0,0 +1,26 @@ +# Utility Scripts + +This directory contains helper scripts for Charmed Kubeflow, used in CI and beyond. + +## Gather images used by a bundle + +You can get a list of all the OCI images used by a bundle by running the following command: ```bash +pip3 install -r scripts/requirements.txt + +python3 scripts/get-all-images.py \ + --append-images tests/airgapped/ckf-1.8-testing-images.txt \ + releases/1.8/stable/kubeflow/bundle.yaml \ + > images-all.txt +``` + +The script will gather the images in the following way: 1. For each `application` in the provided `bundle.yaml` file: 2. Detect whether it's owned by us or by another team (by looking at the `_github_dependency_repo_name` and similar metadata keys) 3. Clone its repo, by looking at `_github_repo_name` and similar metadata keys 4. If owned by another team: only parse its `metadata.yaml` and look for `oci-resources` 5. If owned by us: run the `tools/get-images.sh` script that the repo **must** provide 6. If a repo does not have `tools/get-images.sh` (e.g. kubeflow-roles), then the script skips the repo 7. If the `get-images.sh` script either fails (non-zero return code) or produces error logs, then the script will **fail** 8. Aggregate the outputs of all `get-images.sh` scripts into one output 9.
If the user passes the `--append-images` argument, the script will append the given list of images needed for airgapped testing diff --git a/scripts/airgapped/README.md b/scripts/airgapped/README.md index 97b49871..82bdf7c1 100644 --- a/scripts/airgapped/README.md +++ b/scripts/airgapped/README.md @@ -7,12 +7,13 @@ to create airgap artifacts or via our testing scripts. We'll document some use-case scenarios here for the different scripts. ## Prerequisites +NOTE: All the commands are expected to be run from the root directory of the repo. To use the scripts in this directory you'll need to install a couple of Python and Ubuntu packages on the host machine, driving the test (not the LXC machine that will contain the airgapped environment). ``` -pip3 install -r requirements.txt +pip3 install -r scripts/airgapped/requirements.txt sudo apt install pigz sudo snap install docker sudo snap install yq @@ -32,7 +33,10 @@ This script makes the following assumptions: the images for that repo ```bash -./scripts/airgapped/get-all-images.sh releases/1.7/stable/kubeflow/bundle.yaml > images.txt +python3 scripts/get-all-images.py \ + --append-images=tests/airgapped/ckf-1.8-testing-images.txt \ + releases/1.8/stable/kubeflow/bundle.yaml \ + > images.txt ``` ## Pull images to docker cache diff --git a/scripts/airgapped/get-all-images.sh b/scripts/airgapped/get-all-images.sh deleted file mode 100755 index 2f4085b6..00000000 --- a/scripts/airgapped/get-all-images.sh +++ /dev/null @@ -1,62 +0,0 @@ -#!/usr/bin/bash -# -# This script parses given bundle file for github repositories and branches.
Then checks out each -# charm's repository one by one using specified branch and collects images referred by that charm -# using that repository's image collection script -# -BUNDLE_FILE=$1 -IMAGES=() -# retrieve all repositories and branches for CKF -REPOS_BRANCHES=($(yq -r '.applications[] | select(._github_repo_name) | [(._github_repo_name, ._github_repo_branch)] | join(":")' $BUNDLE_FILE | sort --unique)) - -# TODO: We need to not hardcode this and be able to deduce all images from the bundle -# https://github.com/canonical/bundle-kubeflow/issues/789 -RESOURCE_DISPATCHER_BRANCH=track/1.0 -RESOURCE_DISPATCHER_REPO=https://github.com/canonical/resource-dispatcher - -for REPO_BRANCH in "${REPOS_BRANCHES[@]}"; do - IFS=: read -r REPO BRANCH <<< "$REPO_BRANCH" - git clone --branch $BRANCH https://github.com/canonical/$REPO - cd $REPO - IMAGES+=($(bash ./tools/get-images.sh)) - cd - > /dev/null - rm -rf $REPO -done - -# retrieve all repositories and branches for dependencies -DEP_REPOS_BRANCHES=($(yq -r '.applications[] | select(._github_dependency_repo_name) | [(._github_dependency_repo_name, ._github_dependency_repo_branch)] | join(":")' $BUNDLE_FILE | sort --unique)) - -for REPO_BRANCH in "${DEP_REPOS_BRANCHES[@]}"; do - IFS=: read -r REPO BRANCH <<< "$REPO_BRANCH" - git clone --branch $BRANCH https://github.com/canonical/$REPO - cd $REPO - # for dependencies only retrieve workload containers from metadata.yaml - IMAGES+=($(find -type f -name metadata.yaml -exec yq '.resources | to_entries | map(select(.value.upstream-source != null)) | .[] | .value | ."upstream-source"' {} \;)) - cd - > /dev/null - rm -rf $REPO -done - -# manually retrieve resource-dispatcher -git clone --branch $RESOURCE_DISPATCHER_BRANCH $RESOURCE_DISPATCHER_REPO -cd resource-dispatcher -IMAGES+=($(bash ./tools/get-images.sh)) -cd .. 
-rm -rf resource-dispatcher - -# manually retrieve pipelines runner image to test pipelines -IMAGES+=($(echo "charmedkubeflow/pipelines-runner:ckf-1.8")) - -# manually retrieve katib experiment image -IMAGES+=($(echo "docker.io/kubeflowkatib/simple-pbt:v0.16.0")) - -# manually retrieve helloworld image to test knative -IMAGES+=($(echo "ghcr.io/knative/helloworld-go:latest")) - -# manually retrieve tf-mnist-with-summaries image to test training operator -IMAGES+=($(echo "gcr.io/kubeflow-ci/tf-mnist-with-summaries:1.0")) - -# ensure we only show unique images -IMAGES=($(echo "${IMAGES[@]}" | tr ' ' '\n' | sort -u | tr '\n' ' ')) - -# print full list of images -printf "%s\n" "${IMAGES[@]}" diff --git a/scripts/airgapped/requirements.txt b/scripts/airgapped/requirements.txt index 5c754a83..751a4b38 100644 --- a/scripts/airgapped/requirements.txt +++ b/scripts/airgapped/requirements.txt @@ -1,4 +1,3 @@ docker -#FIXME: remove requests pin when https://github.com/docker/docker-py/issues/3256 is solved -requests<2.32.0 +requests PyYAML diff --git a/scripts/get-all-images.py b/scripts/get-all-images.py new file mode 100755 index 00000000..08d4e983 --- /dev/null +++ b/scripts/get-all-images.py @@ -0,0 +1,268 @@ +#!/usr/bin/env python3 + +import argparse +import logging +import subprocess +import os +import sys +import contextlib +import tempfile + +import git +import yaml + +from typing import Iterator +from pathlib import Path + +# logging +LOG_FORMAT = "%(levelname)s \t| %(message)s" +logging.basicConfig(format=LOG_FORMAT, level=logging.INFO) +log = logging.getLogger(__name__) + +# consts +EXCLUDE_CHARMS = ["kubeflow-roles"] + +GH_REPO_KEY = "_github_repo_name" +GH_BRANCH_KEY = "_github_repo_branch" +GH_DEPENDENCY_REPO_KEY = "_github_dependency_repo_name" +GH_DEPENDENCY_BRANCH_KEY = "_github_dependency_repo_branch" +GET_IMAGES_SH = "tools/get-images.sh" + + +def is_dependency_app(app: dict) -> bool: + """ + Return True if app in bundle is not owned by Analytics team, False 
+ otherwise. + + Args: + app(dict): app metadata from a bundle.yaml in dictionary form + + Returns: + True if app has GH dependency metadata, else False + """ + if GH_DEPENDENCY_REPO_KEY in app and GH_DEPENDENCY_BRANCH_KEY in app: + return True + + return False + + +def bundle_app_contains_gh_metadata(app: dict) -> bool: + """ + Given an application in a bundle, check whether it contains the GitHub + metadata keys needed to parse it properly. + + Args: + app(dict): app metadata from a bundle.yaml in dictionary form + + Returns: + True if app has GH metadata, else False + """ + if is_dependency_app(app): + return True + + if GH_REPO_KEY in app and GH_BRANCH_KEY in app: + return True + + return False + + +def validate_bundle(bundle: dict): + """ + Given a bundle, parse all the applications and ensure they contain the + correct metadata. + + Args: + bundle: Dictionary of the loaded bundle + """ + for app_name, app in bundle["applications"].items(): + if bundle_app_contains_gh_metadata(app): + continue + + logging.error("Application '%s' doesn't include expected gh metadata keys.", + app_name) + sys.exit(1) + + +@contextlib.contextmanager +def clone_git_repo(repo_name: str, branch: str) -> Iterator[git.PathLike]: + """ + Clones a repo locally and yields the path of the created folder. + + Args: + repo_name(str): name of the repo to clone + branch(str): branch to check out once the repo is cloned + """ + repo_url = f"https://github.com/canonical/{repo_name}.git" + + # we can't use the default /tmp/ dir because of + # https://github.com/mikefarah/yq/issues/1808 + with tempfile.TemporaryDirectory(dir=os.getcwd()) as tmp: + logging.info(f"Cloning repo {repo_url}") + repo = git.Repo.clone_from(repo_url, tmp) + + logging.info(f"Checking out to branch {branch}") + repo.git.checkout(branch) + + yield repo.working_dir + + +def get_analytics_app_images(app: dict) -> list[str]: + """ + This function gets the images used by a charm developed by us, by: + 1.
Cloning the repo of the charm + 2. Running the repo's tools/get-images.sh + 3. Deleting the repo + + If the tools/get-images.sh of a repo fails for any reason then this + script will also fail. + """ + images = [] + repo_name = app[GH_REPO_KEY] + repo_branch = app[GH_BRANCH_KEY] + + with clone_git_repo(repo_name, repo_branch) as repo_dir: + logging.info(f"Executing repo's {GET_IMAGES_SH} script") + try: + process = subprocess.run(["bash", "tools/get-images.sh"], + cwd=repo_dir, capture_output=True, + text=True, check=True) + except subprocess.CalledProcessError as exc: + logging.error("Script '%s' for charm '%s' failed: %s", + GET_IMAGES_SH, app["charm"], exc.stderr) + raise exc + + images = process.stdout.strip().split("\n") + + logging.info("Found the following images:") + for image in images: + logging.info("* " + image) + + return images + + +def get_dependency_app_images(app: dict) -> list[str]: + """ + This function gets the images used by a dependency charm by: + 1. Cloning the repo of the charm + 2. Looking at its metadata.yaml for "upstream-source" keys + 3. Deleting the repo + """ + images = [] + repo_name = app[GH_DEPENDENCY_REPO_KEY] + repo_branch = app[GH_DEPENDENCY_BRANCH_KEY] + with clone_git_repo(repo_name, repo_branch) as repo_dir: + metadata_file = f"{repo_dir}/metadata.yaml" + metadata_dict = yaml.safe_load(Path(metadata_file).read_text()) + + for _, rsrc in metadata_dict["resources"].items(): + if rsrc.get("type", "") != "oci-image": + continue + + images.append(rsrc["upstream-source"]) + + logging.info("Found the following images:") + for image in images: + logging.info("* " + image) + + return images + + +def cleanup_images(images: list[str]) -> list[str]: + """ + Given a list of OCI registry images ensure + 1. there are no duplicates + 2. there are no images with empty name + 3. the list is sorted + + Args: + images: List of images to be processed + + Returns: + A list with unique and sorted values.
+ """ + images_set = set(images) + if "" in images_set: + images_set.remove('') + + unique_images = list(images_set) + unique_images.sort() + + return unique_images + + +def get_bundle_images(bundle_path: str) -> list[str]: + """Return a list of images used by a bundle""" + bundle_dict = yaml.safe_load(Path(bundle_path).read_text()) + validate_bundle(bundle_dict) + + images = [] + + for app_name, app in bundle_dict["applications"].items(): + logging.info(f"Handling app {app_name}") + + # exclude repos we know don't have images + # if we keep extending this const, we should introduce an + # argument in the script for dynamically excluding repos/charms + if app_name in EXCLUDE_CHARMS: + logging.info("Ignoring charm %s", app["charm"]) + continue + + # Follow default image-gather process for dependency apps + if is_dependency_app(app): + logging.info("Dependency app '%s' with charm '%s'", app_name, + app["charm"]) + images.extend(get_dependency_app_images(app)) + continue + + # images from charms owned by the Analytics team + images.extend(get_analytics_app_images(app)) + + return cleanup_images(images) + + +def get_static_images_from_file(images_file_path: str) -> list[str]: + """ + Return a list of images stored in a text file and separated by \n.
+ + Args: + images_file_path: Path of the file containing images + + Returns: + list of strings, containing the images in the file + """ + with open(images_file_path, "r") as file: + images = file.readlines() + + cleaned_images = [image.strip() for image in images] + for image in cleaned_images: + logging.info(image) + + return cleaned_images + + +def main(): + parser = argparse.ArgumentParser( + description="Gather all images from a bundle" + ) + parser.add_argument("bundle") + parser.add_argument("--append-images", + help="Appends list of images from input file.") + + args = parser.parse_args() + + images = get_bundle_images(args.bundle) + + # append the airgap images + if args.append_images: + logging.info("Appending images found in file '%s'", args.append_images) + extra_images = get_static_images_from_file(args.append_images) + images.extend(extra_images) + + logging.info(f"Found {len(images)} different images") + + for img in images: + print(img) + + +if __name__ == "__main__": + main() diff --git a/scripts/requirements.txt b/scripts/requirements.txt new file mode 100644 index 00000000..8fe79fa6 --- /dev/null +++ b/scripts/requirements.txt @@ -0,0 +1,5 @@ +gitpython +tenacity +boto3 +click +pyyaml diff --git a/tests/airgapped/README.md b/tests/airgapped/README.md index 98436a43..721ebf3f 100644 --- a/tests/airgapped/README.md +++ b/tests/airgapped/README.md @@ -7,6 +7,20 @@ The scripts for testing an airgapped installation have the following requirement Additionally, the host machine will use Docker to pull all the images needed, and LXC to create the airgapped container. +## Update testing images + +The scripts that set up and test the airgapped environment will also need to load +a list of predefined images for running tests in the airgapped environment. + +Every release has its own set of images. If you are creating a new release, you'll +need to create a new directory and the corresponding images file.
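The testing-images files are plain newline-separated image lists (the entries in `ckf-1.8-testing-images.txt` even carry leading whitespace, which the tooling strips). A minimal Python sketch of reading and cleaning such a file, loosely mirroring `get_static_images_from_file` in `scripts/get-all-images.py` (the helper name here is illustrative, and this sketch additionally drops blank lines):

```python
import tempfile
from pathlib import Path


def read_testing_images(path: str) -> list[str]:
    # Read a newline-separated image list, stripping surrounding
    # whitespace and dropping blank lines.
    lines = Path(path).read_text().splitlines()
    return [line.strip() for line in lines if line.strip()]


# Throwaway example file; contents mirror ckf-1.8-testing-images.txt.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write(" charmedkubeflow/pipelines-runner:ckf-1.8\n")
    f.write(" docker.io/kubeflowkatib/simple-pbt:v0.16.0\n")

print(read_testing_images(f.name))
# ['charmedkubeflow/pipelines-runner:ckf-1.8', 'docker.io/kubeflowkatib/simple-pbt:v0.16.0']
```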
+ +You can find such files under `tests/airgapped/ckf-<version>-testing-images.txt` + +To understand how those images are being used by the tests, please take a look +at the +[Test Charmed Kubeflow components in airgapped](#test-charmed-kubeflow-components-in-airgapped) section. + ## Setup the environment This repository contains a script for setting up the environment: @@ -25,6 +39,7 @@ You can run the script that will spin up an airgapped microk8s cluster with: --node-name airgapped-microk8s \ --microk8s-channel 1.29-strict/stable \ --bundle-path releases/1.9/stable/bundle.yaml \ + --testing-images-path tests/airgapped/ckf-1.9-testing-images.txt \ --juju-channel 3.4/stable ``` @@ -50,7 +65,9 @@ Developers are urged to instead define their own `images.txt` file with the images they'd like to be loaded during tests. ```bash -./scripts/airgapped/get-all-images.sh releases/1.9/stable/bundle.yaml > images-all.txt +python3 scripts/get-all-images.py \ + releases/1.9/stable/bundle.yaml \ + > images-all.txt ``` This will generate an `images-all.txt`, with all images of CKF 1.9. You can @@ -126,3 +143,6 @@ To test Charmed Kubeflow components in airgapped, follow the instructions in the * [KNative](./knative/README.md) * [Pipelines](./pipelines/README.md) * [Training Operator](./training/README.md) + +Make sure to follow the first part of this guide on updating the OCI images that need to be present +in the airgapped cluster in order to execute tests.
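As the diff shows, `scripts/get-all-images.py` finishes by passing every discovered image through its `cleanup_images` helper: deduplicate, drop empty names, and sort. A condensed, behaviourally equivalent sketch of that step:

```python
def cleanup_images(images: list[str]) -> list[str]:
    # Deduplicate via a set, drop empty names, and return a sorted list,
    # matching the cleanup done at the end of get-all-images.py.
    return sorted({image for image in images if image})


# Duplicates and empty entries collapse into a stable, sorted list.
merged = ["docker.io/b:1", "docker.io/a:2", "docker.io/b:1", ""]
print(cleanup_images(merged))
# ['docker.io/a:2', 'docker.io/b:1']
```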
diff --git a/tests/airgapped/airgap.sh b/tests/airgapped/airgap.sh index 661293d2..62fbe598 100755 --- a/tests/airgapped/airgap.sh +++ b/tests/airgapped/airgap.sh @@ -125,6 +125,7 @@ DISTRO="${DISTRO:-"ubuntu:22.04"}" MICROK8S_CHANNEL="${MICROK8S_CHANNEL:-}" JUJU_CHANNEL="${JUJU_CHANNEL:-"2.9/stable"}" BUNDLE_PATH="${BUNDLE_PATH:-"releases/latest/edge/bundle.yaml"}" +TESTING_IMAGES_PATH="${TESTING_IMAGES_PATH:-"tests/airgapped/ckf-1.8-testing-images.txt"}" LIBRARY_MODE=false @@ -136,6 +137,7 @@ while true; do --microk8s-channel ) MICROK8S_CHANNEL="$2"; shift 2 ;; --juju-channel) JUJU_CHANNEL="$2"; shift 2 ;; --bundle-path) BUNDLE_PATH="$2"; shift 2 ;; + --testing-images-path) TESTING_IMAGES_PATH="$2"; shift 2 ;; -h | --help ) prog=$(basename -s.wrapper "$0") echo "Usage: $prog [options...]" @@ -158,7 +160,7 @@ done if [ "$LIBRARY_MODE" == "false" ]; then echo "1/X -- (us) Create images tar.gz" - create_images_tar "$BUNDLE_PATH" + create_images_tar "$BUNDLE_PATH" "$TESTING_IMAGES_PATH" echo "2/X -- (us) Create charms tar.gz" create_charms_tar "$BUNDLE_PATH" echo "3/X -- (client) Setup K8s cluster (MicroK8s)" diff --git a/tests/airgapped/ckf-1.8-testing-images.txt b/tests/airgapped/ckf-1.8-testing-images.txt new file mode 100644 index 00000000..6a944ee0 --- /dev/null +++ b/tests/airgapped/ckf-1.8-testing-images.txt @@ -0,0 +1,4 @@ + charmedkubeflow/pipelines-runner:ckf-1.8 + docker.io/kubeflowkatib/simple-pbt:v0.16.0 + ghcr.io/knative/helloworld-go:latest + gcr.io/kubeflow-ci/tf-mnist-with-summaries:1.0 diff --git a/tests/airgapped/ckf.sh b/tests/airgapped/ckf.sh index 75ae5a56..23357a91 100644 --- a/tests/airgapped/ckf.sh +++ b/tests/airgapped/ckf.sh @@ -1,12 +1,13 @@ #!/usr/bin/env bash -# This file includes helper functions for +# This file includes helper functions for # 1. Fetching CKF artifacts (images,tars) # 2. Pushing the artifacts to the airgapped VM # 3. 
Initialising juju and preparing the model for CKF function create_images_tar() { local BUNDLE_PATH=$1 + local TESTING_IMAGES_PATH=$2 if [ -f "images.tar.gz" ]; then echo "images.tar.gz exists. Will not recreate it." @@ -16,7 +17,10 @@ function create_images_tar() { pip3 install -r scripts/airgapped/requirements.txt echo "Generating list of images of Charmed Kubeflow" - bash scripts/airgapped/get-all-images.sh "$BUNDLE_PATH" > images.txt + python3 scripts/get-all-images.py \ + --append-images "$TESTING_IMAGES_PATH" \ + "$BUNDLE_PATH" \ + > images.txt echo "Using produced list to load it into our machine's docker cache" python3 scripts/airgapped/save-images-to-cache.py images.txt diff --git a/tests/airgapped/katib/README.md b/tests/airgapped/katib/README.md index 1f16b514..eceb91cc 100644 --- a/tests/airgapped/katib/README.md +++ b/tests/airgapped/katib/README.md @@ -6,7 +6,7 @@ This directory is dedicated to testing Katib in an airgapped environment. Prepare the airgapped environment and deploy CKF by following the steps in [Airgapped test scripts](https://github.com/canonical/bundle-kubeflow/tree/main/tests/airgapped#testing-airgapped-installation). -Once you run the test scripts, the `kubeflowkatib/simple-pbt:v0.16.0` image used in the `simple-pbt` experiment will be included in your airgapped environment. It's specifically added in the [`get-all-images.sh` script](../../../scripts/airgapped/get-all-images.sh). +Once you run the test scripts, the `kubeflowkatib/simple-pbt:v0.16.0` image used in the `simple-pbt` experiment will be included in your airgapped environment. It's specifically added in the [`get-all-images.py` script](../../../scripts/get-all-images.py). ## How to test Katib in an Airgapped environment 1. Connect to the dashboard by visiting the IP of your airgapped VM. To get the IP run: @@ -19,4 +19,4 @@ Once you run the test scripts, the `kubeflowkatib/simple-pbt:v0.16.0` image used 3. Go to `Experiments (AutoML)` tab from the dashboard sidebar. 4. 
Click `New Experiment` then `Edit and submit YAML`. 5. Paste the contents of the `simple-pbt.yaml` file found in this directory. -6. Create the Experiment, and monitor its status to check it is `Succeeded`. \ No newline at end of file +6. Create the Experiment, and monitor its status to check it is `Succeeded`. diff --git a/tests/airgapped/knative/README.md b/tests/airgapped/knative/README.md index 75b6b9e9..6f7bab2f 100644 --- a/tests/airgapped/knative/README.md +++ b/tests/airgapped/knative/README.md @@ -6,7 +6,7 @@ This directory is dedicated to testing Knative in an airgapped environment. Prepare the airgapped environment and deploy CKF by following the steps in [Airgapped test scripts](https://github.com/canonical/bundle-kubeflow/tree/main/tests/airgapped#testing-airgapped-installation). -Once you run the test scripts, the `knative/helloworld-go` image used in the `helloworld` example will be included in your airgapped environment. It's specifically added in the [`get-all-images.sh` script](../../../scripts/airgapped/get-all-images.sh). +Once you run the test scripts, the `knative/helloworld-go` image used in the `helloworld` example will be included in your airgapped environment. It's specifically added in the [`get-all-images.py` script](../../../scripts/get-all-images.py). ## How to test Knative in an Airgapped environment 1. Connect to the dashboard by visiting the IP of your airgapped VM. To get the IP run: @@ -25,7 +25,7 @@ kubectl get ksvc -n Expected output: ``` NAME URL LATESTCREATED LATESTREADY READY REASON -helloworld http://helloworld.admin.10.64.140.43.nip.io helloworld-00001 helloworld-00001 True +helloworld http://helloworld.admin.10.64.140.43.nip.io helloworld-00001 helloworld-00001 True ``` 5. Curl the Knative Service using the `URL` from the previous step ``` @@ -34,4 +34,4 @@ curl -L http://helloworld.admin.10.64.140.43.nip.io Expected output: ``` Hello World! 
-``` \ No newline at end of file +``` diff --git a/tests/airgapped/pipelines/README.md b/tests/airgapped/pipelines/README.md index 23ff497e..a4e45c46 100644 --- a/tests/airgapped/pipelines/README.md +++ b/tests/airgapped/pipelines/README.md @@ -3,7 +3,7 @@ ## The `kfp-airgapped-ipynb` Notebook To test Pipelines in Airgapped, we are using the Notebook in this directory. It contains the Data passing pipeline example, with the configuration of the Pipeline components to use the `pipelines-runner` [image](./pipelines-runner/README.md). -The `pipelines-runner` image will be included in your airgapped environment given that you used the [Airgapped test scripts](../README.md). It's specifically added in the [`get-all-images.sh` script](../../../scripts/airgapped/get-all-images.sh). +The `pipelines-runner` image will be included in your airgapped environment given that you used the [Airgapped test scripts](../README.md). It's specifically added in the [`get-all-images.py` script](../../../scripts/get-all-images.py). ## How to test Pipelines in an Airgapped environment 1. Prepare the airgapped environment and Deploy CKF by following the steps in [Airgapped test scripts](../README.md). @@ -14,4 +14,4 @@ lxc ls | grep eth0 3. Create a Notebook server and choose `jupyter-tensorflow-full` image from the dropdown 4. Connect to the Notebook server and upload the `kfp-airgapped-ipynb` Notebook 5. Run the Notebook -6. Click on `Run details` from the output of the last cell in the Notebook to see the status of the Pipeline run. \ No newline at end of file +6. Click on `Run details` from the output of the last cell in the Notebook to see the status of the Pipeline run. 
diff --git a/tests/airgapped/setup/setup.sh b/tests/airgapped/setup/setup.sh index a8ae032e..0f14238a 100755 --- a/tests/airgapped/setup/setup.sh +++ b/tests/airgapped/setup/setup.sh @@ -7,4 +7,6 @@ cat tests/airgapped/lxd.profile | lxd init --preseed ./scripts/airgapped/prerequisites.sh ./tests/airgapped/setup/lxd-docker-networking.sh +pip3 install -r ./scripts/requirements.txt + echo "Setup completed. Reboot your machine before running the tests for the docker commands to run without sudo." diff --git a/tests/airgapped/training/README.md b/tests/airgapped/training/README.md index 5d2c7a11..752a78b9 100644 --- a/tests/airgapped/training/README.md +++ b/tests/airgapped/training/README.md @@ -6,7 +6,7 @@ This directory is dedicated to testing training operator in an airgapped environ Prepare the airgapped environment and deploy CKF by following the steps in [Airgapped test scripts](../README.md#testing-airgapped-installation). -Once you run the test scripts, the `kubeflow-ci/tf-mnist-with-summaries:1.0` image used in the `tfjob-simple` training job will be included in your airgapped environment. It's specifically added in the [`get-all-images.sh` script](../../../scripts/airgapped/get-all-images.sh). +Once you run the test scripts, the `kubeflow-ci/tf-mnist-with-summaries:1.0` image used in the `tfjob-simple` training job will be included in your airgapped environment. It's specifically added in the [`get-all-images.py` script](../../../scripts/get-all-images.py). ## How to test training operator in an Airgapped environment 1. Connect to the dashboard by visiting the IP of your airgapped VM. To get the IP run: @@ -28,4 +28,4 @@ Expected output: ``` NAME STATE AGE tfjob-simple Succeeded 2m5s -``` \ No newline at end of file +```
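All of the testing images referenced in the component READMEs above enter the airgapped environment through `scripts/get-all-images.py`. For dependency charms, that script reads workload images out of `metadata.yaml` rather than running `tools/get-images.sh`; a condensed sketch of that lookup (the sample metadata below is illustrative, and PyYAML is assumed to be installed, as in `scripts/requirements.txt`):

```python
import yaml

# Illustrative metadata.yaml snippet; real charms may list more resources.
SAMPLE_METADATA = """
resources:
  oci-image:
    type: oci-image
    upstream-source: docker.io/charmedkubeflow/example:1.0
  some-file:
    type: file
"""


def oci_images_from_metadata(text: str) -> list[str]:
    # Keep only resources of type oci-image and collect their
    # upstream-source, as get_dependency_app_images does.
    meta = yaml.safe_load(text)
    return [
        rsrc["upstream-source"]
        for rsrc in meta.get("resources", {}).values()
        if rsrc.get("type") == "oci-image"
    ]


print(oci_images_from_metadata(SAMPLE_METADATA))
# ['docker.io/charmedkubeflow/example:1.0']
```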