WIP: docs: Misc stuff #507

Open · wants to merge 1 commit into base: main
12 changes: 8 additions & 4 deletions documentation/ztp-for-factories/main.adoc
@@ -13,12 +13,12 @@

include::ztp-for-factory-overview-ocp-docs.adoc[leveloffset=+1]

include::ztp-intro-factory-install.adoc[leveloffset=+1]

include::ztp-install-factory-hubedge-architecture.adoc[leveloffset=+1]

include::ztp-create-factory-hub-cluster.adoc[leveloffset=+1]

include::ztp-intro-factory-install.adoc[leveloffset=+1]

include::ztp-install-factory-pipeline-overview.adoc[leveloffset=+1]

include::ztp-hub-factory-pipeline.adoc[leveloffset=+2]
@@ -43,10 +43,14 @@ include::ztp-post-install-edge-factory-pipeline-checks.adoc[leveloffset=+2]

include::ztp-troubleshooting-factory-pipelines.adoc[leveloffset=+1]

include::ztp-common-expected-errors.adoc[leveloffset=+1]
include::ztpfw-pipelines-flags-arguments.adoc[leveloffset=+2]

include::ztp-common-expected-errors.adoc[leveloffset=+2]


include::ztp-configuring-edge-at-remote-site.adoc[leveloffset=+1]

include::ztpfw-pipelines-flags-arguments.adoc[leveloffset=+1]
include::modules/ztp-for-factory-makeconnected.adoc[leveloffset=+2]
include::modules/ztp-for-factory-update.adoc[leveloffset=+2]

include::ztp-for-factory-development.adoc[leveloffset=+1]
@@ -0,0 +1,42 @@
[id="ztp-for-factory-makeconnected"]
= Make the cluster connected
include::common-attributes.adoc[]
:context: ztp-for-factory

After the edge cluster installation and the detach-from-ACM operation, the cluster has an internal Quay registry that holds all the artifacts and is the source for all the images that the cluster might require to work as a disconnected cluster.

To revert it to an online (connected) cluster, the following steps are required:

- Remove the `ImageContentSourcePolicy` (ICSP) resources:
+
[source,terminal]
----
$ oc get ImageContentSourcePolicy
NAME AGE
image-policy-0 20h
image-policy-1 20h
image-policy-2 20h
image-policy-3 20h
ztpfw-edgecluster0-cluster 18h
----

The output of the above command lists the defined mirror policies, which identify the catalogs and mirrors in use. The mirrors are used when the referenced repositories cannot be accessed; removing the ICSPs restores direct access to the upstream repositories.
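
For example, a hedged sketch of the removal, using the resource names from the listing above (adjust to the names in your cluster):

[source,terminal]
----
$ oc delete imagecontentsourcepolicy image-policy-0 image-policy-1 image-policy-2 image-policy-3
$ oc delete imagecontentsourcepolicy ztpfw-edgecluster0-cluster
----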


- Change the catalog sources for Operators:

[source,terminal]
----
$ oc get catalogsource -A
NAMESPACE NAME DISPLAY TYPE PUBLISHER AGE
openshift-marketplace ztpfw-catalog Disconnected Lab grpc disconnected-lab 18h
openshift-marketplace ztpfw-catalog-certfied Disconnected Lab Certified grpc disconnected-lab-certified 18h
----

Red Hat-provided catalogs are described in the documentation at https://docs.openshift.com/container-platform/4.10/operators/understanding/olm-rh-catalogs.html#olm-rh-catalogs

Ensure that the required catalogs are defined so that the cluster can reach upstream data.
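
For example, if the default catalog sources were disabled during the disconnected installation, one hedged way to re-enable them is through the `OperatorHub` cluster resource:

[source,terminal]
----
$ oc patch OperatorHub cluster --type json \
    -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": false}]'
----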
@@ -0,0 +1,6 @@
[id="ztp-for-factory-makeconnected"]
= Update the Edge Cluster
include::common-attributes.adoc[]
:context: ztp-for-factory

After connecting the edge cluster to the upstream catalogs, follow the regular update procedures in the documentation.
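
As a hedged sketch, the standard CLI flow is:

[source,terminal]
----
$ oc adm upgrade                   # list the available update versions
$ oc adm upgrade --to-latest=true  # start an update to the latest available version
----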
@@ -21,7 +21,7 @@ $ tkn pipeline logs deploy-ztp-edgecluster-run-2rklt -f -n edgecluster-deployer
+
[NOTE]
====
The `edgecluster-deployer` pipeline stores all the artefacts for {product-title} Pipelines.
The `edgecluster-deployer` namespace stores all the artifacts for {product-title} Pipelines.
====
. Select **PipelineRuns** to drill down into the details of the pipeline runs.

@@ -6,9 +6,11 @@
= Common and expected errors
include::modules/common-attributes.adoc[]

A common issue that may occur during the ZTP pipelines run is a failure during the check hub stage.
- A common issue that may occur during the ZTP pipelines run is a failure during the **check hub** stage.

- Kubelet is restarted and access to the Kubernetes API is temporarily interrupted during the run of the `hub-deploy-disconnected-registry` task. This is normal behavior, and an error message similar to the following is printed.


Another expected error occurs during the **deploy registry** stage of the hub cluster pipeline: `kubelet` is restarted and access to the Kubernetes API is temporarily interrupted. This is normal behavior, and an error message similar to the following is printed.

[source,terminal]
----
@@ -41,6 +41,10 @@ nameserver 192.168.7.10
----

. Configure a static IP on the connected laptop:
+
[NOTE]
====
Starting with Red Hat Enterprise Linux 8, network configurations are stored in the `/etc/NetworkManager/system-connections/` directory. This new configuration location uses the key file format instead of the ifcfg format. However, configurations previously stored in `/etc/sysconfig/network-scripts/` continue to work. Changes made with the `nmcli con mod <name>` command are stored in the `/etc/NetworkManager/system-connections/` directory.
====

.. Determine the name of the laptop's network interface card (NIC) as follows.
+
@@ -73,7 +77,7 @@ Here `eth0` is the network card name, and it can be different for different computers.
BOOTPROTO=static
IPADDR=192.168.7.21
NETMASK=255.255.255.0
GATEWAY=192.168.0.1
GATEWAY=192.168.7.1
DNS1=8.8.8.8
DNS2=8.8.4.4
----
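+
On Red Hat Enterprise Linux 8 or later you can instead use `nmcli` directly, per the note above. A minimal sketch, assuming the connection is named `eth0` as in the previous step:
+
[source,terminal]
----
$ nmcli con mod eth0 ipv4.method manual \
    ipv4.addresses 192.168.7.21/24 \
    ipv4.gateway 192.168.7.1 \
    ipv4.dns "8.8.8.8 8.8.4.4"
$ nmcli con up eth0
----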
@@ -113,11 +117,15 @@ This `kubeadmin` username and password were created at the factory and should hav
This new user account is granted `cluster-admin` privileges and should be used rather than the factory-created `kubeadmin` account.
====

.. In the **API** screen assign the IP address that will be used for API traffic. A default value is assigned but you are free to update this.
.. In the **API** screen assign the IP address that will be used for API traffic. The default value should be replaced with an IP address from the respective subnet.

.. In the **Ingress** screen assign the IP address that will be used for new routes and traffic managed by the ingress controller. A default value is assigned but you are free to update this.
.. In the **Ingress** screen assign the IP address that will be used for new routes and traffic managed by the ingress controller. The default value should be replaced with an IP address from the respective subnet.

.. Optional: Under Domain create unique URLs for the setup and console URLs for your edge cluster.
.. Optional: Change the domain name to create unique URLs for the setup and console of your edge cluster.
+
[NOTE]
====
The new and the old domain names should be properly configured in DNS.
====

.. Click **Download** in the **SSH** screen and download the edge cluster private SSH key.
+
@@ -128,9 +136,9 @@ You need this to access the nodes of the edge cluster.

.. Click **Finish setup**.

.. Selecting **OpenShift console** brings you direct to the web console.
.. Selecting **OpenShift console** redirects you to the web console.

.. Under **Settings** you have the option to change the values of the **API address**, **Ingress address** and the previously configured **Domain**.
.. Under **Settings** you have the option to change the values of the **API address**, **Ingress address** and the **Domain name**.
+
[NOTE]
====
23 changes: 13 additions & 10 deletions documentation/ztp-for-factories/ztp-create-factory-hub-cluster.adoc
@@ -19,7 +19,7 @@ include::modules/common-attributes.adoc[]

* Cluster is reachable using a `KUBECONFIG` file.

* The dns names for `api.<hub-clustername>.<baseDomain>`, `api-int.<hub-clustername>.<baseDomain>` and `*.apps.<hub-clustername>.<baseDomain>` should be deployed on edge cluster on the DHCP external network.
* The DNS entries for `api.<hub-clustername>.<baseDomain>`, `api-int.<hub-clustername>.<baseDomain>`, and `*.apps.<hub-clustername>.<baseDomain>` should be resolvable and reachable from the edge cluster via the DHCP external network.

* link:https://metal3.io/[Metal³] has to be available in the hub cluster.

@@ -33,30 +33,29 @@ include::modules/common-attributes.adoc[]
If the cluster has more than 3 nodes, the recommendation is to use OpenShift Data Foundation. If it is a single-node OpenShift cluster, use the Local Storage Operator.
====

* Create the following persistent volumes with at least 200GB of storage (NVMe or SSD) for:
* Create the following persistent volumes (NVMe or SSD) for:

** 2 for Assisted Installer.
** 1 for the hub internal registry, which mirrors the images. At least 200GB is required on the hub; more may be required if ODF is installed.
** 1 for HTTPD that hosts the Red Hat Enterprise Linux CoreOS (RHCOS) images.
** 1 for zero touch provisioning factory workflows (ZTPFW).
** 1 for Red Hat Advanced Cluster Management (RHACM).
** 1 for Zero Touch Provisioning Factory Workflows (ZTPFW).

[discrete]
=== Networking prerequisites

The hub cluster requires internet connectivity and should be installed on a private network with customer configured DNS and DHCP services. Configure DNS for the API on the ingress of the hub to reach some routes on the hub cluster. Configure enough DNS entries for the number of edge clusters you intend to deploy in parallel.
The hub cluster requires internet connectivity and should be installed on a private network with customer configured DNS and DHCP services. Configure DNS for the API and the ingress of the hub to be able to reach routes on the hub cluster. Configure enough DNS entries for the number of edge clusters you intend to deploy in parallel.

You need enough DHCP addresses to host the number of edge clusters you intend to deploy. Each {product-title} node in the cluster must have access to an NTP server. {product-title} nodes use NTP to synchronize their clocks. For example, cluster nodes use SSL certificates that require validation, which might fail if the date and time between the nodes are not in sync.
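
As a hedged spot check that a node's clock is synchronized (the node name is illustrative):

[source,terminal]
----
$ oc debug node/<node-name> -- chroot /host chronyc tracking
----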

Specific requirements are:

* DNS entries need to be configured and resolvable from the external network, with the DNS server on the DHCP external network.
* Hub
** `api.<hub-clustername>.<baseDomain>` and `api-int.<hub-clustername>.<baseDomain>` entries to the same IP address.
** The `api.<hub-clustername>.<baseDomain>` and `api-int.<hub-clustername>.<baseDomain>` entries should resolve to the same IP address.
** ingress (`*.apps.<hub-clustername>.<baseDomain>`).

* Edge
** `api.<edge-cluster-name>.<baseDomain>` and `api-int.<edge-cluster-name>.<baseDomain>` entries to the same IP address.
** The `api.<edge-cluster-name>.<baseDomain>` and `api-int.<edge-cluster-name>.<baseDomain>` entries should resolve to the same IP address.
** ingress (`*.apps.<edge-cluster-name>.<baseDomain>`).
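
For example, a hedged zone-file sketch (the domain and addresses are hypothetical):

[source,text]
----
api.edgecluster0.example.com.      IN A 192.168.7.242
api-int.edgecluster0.example.com.  IN A 192.168.7.242
*.apps.edgecluster0.example.com.   IN A 192.168.7.243
----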

[NOTE]
@@ -66,8 +65,12 @@ When deploying a single-node OpenShift cluster, the `api.<edge-cluster-name>.<ba

* External DHCP with enough free IPs on the factory network to provide access to the edge cluster by using the external network interface.

* Every edge cluster needs at least 6 IPs from this external network (without the broadcast and network IP).
** 1 per node.
* Every edge compact cluster needs at least 6 IPs from this external network (without the broadcast and network IP).
** 1 per node (3 masters + 1 worker).
** 1 for API.
** 1 for the Ingress entry (`*.apps.<edge-cluster-name>.<baseDomain>`).

* Every edge SNO cluster needs at least 3 IPs from this external network (without the broadcast and network IP).
** 1 per node (SNO node).
** 1 for API and API-INT (both resolve to the same IP address).
** 1 for the Ingress entry (`*.apps.<edge-cluster-name>.<baseDomain>`).
39 changes: 31 additions & 8 deletions documentation/ztp-for-factories/ztp-edge-factory-pipeline.adoc
@@ -6,11 +6,11 @@
= The edge factory pipeline
include::modules/common-attributes.adoc[]

This stage deploys and configures the edge clusters. When this pipeline is completed, the edge clusters are ready for use when the enclosure gets relocated to the end customer's remote site.
This stage deploys and configures the edge clusters. When this pipeline completes successfully, the edge clusters are ready to be shipped to the end customer's site.

The flow associated with deploying the edge cluster is:
The flow associated with deploying the edge cluster is (`oc get -n edgecluster-deployer pipeline deploy-ztp-edgeclusters -o json | jq '.spec.tasks[].name'`):

Check hub::
Check hub (pre-flight)::

This step installs the various tools needed on the edge cluster: it downloads `jq`, `oc`, `opm`, and `kubectl`. It then verifies that various hub install prerequisites exist before proceeding; for example, it checks the:

@@ -25,25 +25,48 @@ Deploy edge::

This step starts the edge cluster provisioning. The process ends with the edge cluster pushing a notification to the hub, which answers with an ACK.

Deploy ICSP::

This step adds the ImageContentSourcePolicy (ICSP) to the edge cluster.

Deploy NMState and MetalLB::

This step deploys the NMState and MetalLB Operators. NMState creates one profile per node to obtain an IP from the external network's DHCP. MetalLB then creates a resource called an AddressPool to build the relationship between the internal and external interfaces through LoadBalancer services. Finally, it creates a service for the API and one for the ingress. Without this step, you cannot access the API or the ingress by using the external address.
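
As an illustrative sketch only (the pool name and address range are hypothetical; the pipeline generates the actual resources), such an AddressPool might look like this:

[source,yaml]
----
apiVersion: metallb.io/v1beta1
kind: AddressPool
metadata:
  name: api-public-ip        # hypothetical pool name
  namespace: metallb-system
spec:
  protocol: layer2
  addresses:
  - 192.168.7.242-192.168.7.242  # one external IP reserved for the API LoadBalancer service
----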

Deploy worker::

This step deploys the worker node and adds it to the edge cluster.



Deploy UI::

The deploy UI stage helps to simplify the configuration of the edge cluster after it is relocated to the customer's site.

Deploy OpenShift Data Foundation::

This step deploys the Local Storage Operator and OpenShift Data Foundation (ODF). ODF and the Local Storage Operator use disks defined in the `storage_disk` section of the `edgeclusters.yaml` configuration file to create persistent volumes. ODF generates the storage classes and dynamically provisions the persistent volumes. This provides the storage necessary to host the disconnected registry images (Quay); see the sketch below.
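
This is a hedged illustration only; the cluster and disk names are hypothetical, and the authoritative schema is the example `edgeclusters.yaml` in the repository:

[source,yaml]
----
edgeclusters:
  - edgecluster0:          # hypothetical edge cluster name
      master0:
        storage_disk:      # disks consumed by the Local Storage Operator and ODF
          - /dev/sdb
          - /dev/sdc
----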

Deploy Quay::
Deploy Quay (disconnected-registry)::

This step deploys the Quay Operator and components of Quay, because the end customer needs a fully supported solution at the edge and the factory is expected to have its own internal registry. This Quay deployment has a small footprint, enabling only the features needed to host an internal registry with basic functions.

Deploy workers::

This step deploys the worker nodes and adds them to the edge cluster.

Deploy UI::
Mirror of images (OCP and OLM)::

This step mirrors the images into the disconnected registry that has been configured.

Deploy Quay Pull Secret (disconnected-registry-ps)::
This step configures the pull secret for the registry.

Deploy ICSP post (ICSP post)::
This step configures the ICSP to connect to the internal registry (Quay) instead of the one on the hub.

GPU Operator::

If it has been enabled, this step deploys the GPU Operator in the cluster.

The deploy UI stage helps to simplify the configuration of the edge cluster after it is relocated to the customer's site.

Detach cluster::

17 changes: 12 additions & 5 deletions documentation/ztp-for-factories/ztp-for-factory-development.adoc
@@ -9,7 +9,7 @@ NOTE:: This documentation is mostly for the developers/QEs, etc., working in t

== Deploying the environment in Virtual

This is a very expensive option to work with all nodes in virtual, which means, you will need a big boy to make this work:
This is a very expensive option to work with all nodes in virtual, which means, you will need a big server to make this work:

=== Hardware requirements
Hardware requirements for the hub (3 nodes):
@@ -34,7 +34,7 @@ Worker Node:
=== Software requirements

- Libvirtd/Qemu/KVM
- Kcli for the scripts.
- Kcli (https://github.com/karmab/kcli) for the scripts.
- Some binaries: `oc`, `kubectl`, `tkn`, `yq`, `jq`, and `ketall` (for debugging).

=== Deploying the Base Hub
@@ -44,7 +44,7 @@ Deploys the hub cluster with NFS as the base storage for the requirements
```console
git clone [email protected]:rh-ecosystem-edge/ztp-pipeline-relocatable.git
cd ztp-pipeline-relocatable/hack/deploy-hub-local
./build-hub.sh ${HOME}/openshift_pull.json 1
./build-hub.sh ${HOME}/openshift_pull.json ${OCP_VERSION} ${ACM_VERSION} ${ODF_VERSION} compact
```

=== Bootstrapping OpenShift Pipelines
@@ -70,7 +70,7 @@ tkn pipeline start -n edgecluster-deployer -p ztp-container-image="quay.io/ztpfw
Creates 4 VMs and the proper DNS entries for the involved network.

```sh
./build-edgecluster.sh ${HOME}/openshift_pull.json 1
./build-edgecluster.sh ${HOME}/openshift_pull.json ${OCP_VERSION} ${ACM_VERSION} ${ODF_VERSION} compact
```

=== Executing the Edge Cluster Pipeline
@@ -79,7 +79,14 @@ You can customize the parameter `git-revision=<BRANCH>` to point to your own branch.

```sh
export KUBECONFIG=/root/.kcli/clusters/test-ci/auth/kubeconfig
tkn pipeline start -n edgecluster-deployer -p ztp-container-image="quay.io/ztpfw/pipeline:main" -p edgeclusters-config="$(cat /root/amorgant/ztp-pipeline-relocatable/hack/deploy-hub-local/edgeclusters.yaml)" -p kubeconfig=${KUBECONFIG} -w name=ztp,claimName=ztp-pvc --timeout 5h --use-param-defaults deploy-ztp-edgeclusters
tkn pipeline start -n edgecluster-deployer \
-p ztp-container-image="${PIPE_IMAGE}:${BRANCH}" \
-p edgeclusters-config="$(cat ${EDGECLUSTERS_FILE})" \
-p kubeconfig=${KUBECONFIG} \
-w name=ztp,claimName=ztp-pvc \
--timeout 5h \
--pod-template ./pipelines/resources/common/pod-template.yaml \
--use-param-defaults deploy-ztp-edgeclusters
```
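
Once the run starts, you can follow its progress; the run name below is illustrative, so list the actual runs first:

```sh
tkn pipelinerun list -n edgecluster-deployer
tkn pipelinerun logs -f -n edgecluster-deployer <pipelinerun-name>
```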

== Build Images