This repository has been archived by the owner on Nov 11, 2019. It is now read-only.

Merge pull request #436 from csymons-suse/doc-06-04-edits
doc: copyedits, consistency
jgu17 committed Jun 6, 2019
2 parents 5fba3e4 + cede526 commit fa5f66f
Showing 3 changed files with 47 additions and 37 deletions.
6 changes: 3 additions & 3 deletions doc/source/index.rst
@@ -1,7 +1,7 @@
-Welcome to SUSE Containerized OpenStack (SCO)
-=============================================
+Welcome to SUSE Containerized OpenStack
+=======================================

-The socok8s project automates SUSE Containerized OpenStack (SCO) provisioning
+The socok8s project automates SUSE Containerized OpenStack provisioning
and lifecycle management on SUSE Container as a Service Platform (CaaSP) and
SUSE Enterprise Storage (SES), using Airship, shell scripts, and Ansible playbooks.

74 changes: 42 additions & 32 deletions doc/source/operations.rst
@@ -54,10 +54,10 @@ run the following command in the root of the socok8s directory:
./run.sh remove_deployment
-This will delete all Helm releases, all Kubernetes resources in the ucp and
-openstack namespaces, and all persistent volumes that were provisioned for use
-in the deployment. After this operation is complete, only the original Kubernetes
-services deployed by the SUSE CaaS Platform will remain.
+This will delete all Helm releases, all Kubernetes resources in the ``ucp`` and
+``openstack`` namespaces, and all persistent volumes that were provisioned for
+use in the deployment. After this operation is complete, only the original
+Kubernetes services deployed by the SUSE CaaS Platform will remain.

Testing
-------
@@ -97,7 +97,8 @@ command from the root of the socok8s directory:
It can take a few minutes for the new host to initialize and show in the
OpenStack hypervisor list.

-To remove a compute node, run the following command from the root of the socok8s directory:
+To remove a compute node, run the following command from the root of the socok8s
+directory:

.. code-block:: console
@@ -106,18 +107,20 @@ To remove a compute node, run the following command from the root of the socok8s
.. note::

-Compute nodes must be removed individually. When the node has been successfully
-removed, the host details must be manually removed from "airship-openstack-compute-workers"
-group in the inventory.
+Compute nodes must be removed individually. When the node has been successfully
+removed, the host details must be manually removed from
+"airship-openstack-compute-workers" group in the inventory.

Control plane horizontal scaling
--------------------------------

-SUSE Containerized OpenStack provides two built-in scale profiles: "minimal",
-which is the default profile, deploys a single Pod for each service, and "ha",
-deploys a minimum of two Pods for each service, three or more Pods for services
-that will be heavily utilized or require a quorum. Change scale profiles by
-adding a "scale_profile" key to ${WORKSPACE}/env/extravars and specifying a
-profile value:
+SUSE Containerized OpenStack provides two built-in scale profiles:
+
+- **minimal**, the default profile, deploys a single Pod for each service
+- **ha** deploys a minimum of two Pods for each service. Three or more Pods are
+  suggested for services that will be heavily utilized or require a quorum.
+
+Change scale profiles by adding a "scale_profile" key to ${WORKSPACE}/env/extravars
+and specifying a profile value:

.. code-block:: yaml
@@ -127,13 +130,17 @@ The built-in profiles are defined in playbooks/roles/airship-deploy-ucp/files/pr
and can be modified to suit custom use cases. Additional profiles can be created
and added to this directory following the file naming convention in that directory.
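The profile files themselves are elided from this diff. As a purely illustrative
sketch of what a custom profile could contain (the file name and every key below
are assumptions, not taken from the real files in that directory):

```yaml
# Hypothetical profiles/custom.yaml -- file name and keys are assumptions,
# not the actual schema used by airship-deploy-ucp.
keystone:
  replicas: 2
glance:
  replicas: 2
rabbitmq:
  replicas: 3
mariadb:
  replicas: 3
```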

-It is recommended to use at least three controller nodes for a highly available
-control plane for both Airship and OpenStack services. To add new controller nodes,
-the nodes must be running SUSE CaaS Platform v3.0, have been accepted into the
-cluster and bootstrapped using the Velum dashboard. After the nodes are bootstrapped,
-add the host entries to the 'airship-ucp-workers', 'airship-openstack-control-workers'
-and 'airship-kube-system-workers' group in your Ansible inventory in
-${WORKSPACE}/inventory/hosts.yaml.
+We recommend using at least three controller nodes for a highly available
+control plane for both Airship and OpenStack services. To add new controller
+nodes, the nodes must:
+
+- be running SUSE CaaS Platform v3.0
+- have been accepted into the cluster
+- be bootstrapped using the Velum dashboard.
+
+After the nodes are bootstrapped, add the host entries to the 'airship-ucp-workers',
+'airship-openstack-control-workers', and 'airship-kube-system-workers' group in
+your Ansible inventory in ${WORKSPACE}/inventory/hosts.yaml.
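As a sketch of the resulting inventory layout (host names are hypothetical; the
group names are those listed above), the relevant fragment of
${WORKSPACE}/inventory/hosts.yaml might look like:

```yaml
# Hypothetical fragment of ${WORKSPACE}/inventory/hosts.yaml.
# Host names are illustrative; each controller appears in all three groups.
airship-ucp-workers:
  hosts:
    controller-1:
    controller-2:
    controller-3:
airship-openstack-control-workers:
  hosts:
    controller-1:
    controller-2:
    controller-3:
airship-kube-system-workers:
  hosts:
    controller-1:
    controller-2:
    controller-3:
```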

To apply the changes, run the following command from the root of the socok8s directory:

@@ -231,15 +238,16 @@ Viewing Shipyard Logs
---------------------

The deployment of OpenStack components in SUSE Containerized OpenStack is
-directed by Shipyard, the Airship platform's DAG controller, So Shipyard is one
-of the best places to begin troubleshooting deployment problems. The Shipyard CLI
-client authenticates with Keystone, so the following environment variables must
-be set before running any commands:
+directed by Shipyard, the Airship platform's directed acyclic graph (DAG)
+controller, so Shipyard is one of the best places to begin troubleshooting
+deployment problems. The Shipyard CLI client authenticates with Keystone, so
+the following environment variables must be set before running any commands:

.. code-block:: console
export OS_USERNAME=shipyard
-export OS_PASSWORD=$(kubectl get secret -n ucp shipyard-keystone-user -o json | jq -r '.data.OS_PASSWORD' | base64 -d)
+export OS_PASSWORD=$(kubectl get secret -n ucp shipyard-keystone-user \
+  -o json | jq -r '.data.OS_PASSWORD' | base64 -d)
.. note::

@@ -349,7 +357,7 @@ Run the following to prevent a cluster from being updated:
systemctl --now disable transactional-update.timer
-Run the following if you only want to override once a week, instead of daily:
+If you want to override once a week, instead of daily, run the following:

.. code-block:: console
@@ -421,9 +429,9 @@ If either service has stopped, start it by running:
Docker should be restarted first.

These services should start automatically each time a node boots up and should
-be running at all times. If either has stopped, it may be useful to examine the
-system logs to determine the root cause of the failure. This can be done by using
-the journalctl command:
+be running at all times. If either service has stopped, examine the system logs
+to determine the root cause of the failure. This can be done by using the
+journalctl command:

.. code-block:: console
@@ -443,7 +451,7 @@ and events by running:
If the cause of the Pod evictions is determined to be resource exhaustion, such
as NodeHasDiskPressure or NodeHasMemoryPressure, it may be necessary to remove
the node from the cluster temporarily to perform maintenance. To gracefully
-remove all Pods from the affected node and mark it as not schedulable, run:
+remove all Pods from the affected node and mark it as `not schedulable`, run:

.. code-block:: console
@@ -478,11 +486,13 @@ Tips and Tricks
Display all images used by a component
--------------------------------------

-Use Neutron as an example:
+Using Neutron as an example:

.. code-block:: console
-kubectl get pods -n openstack -l application=neutron -o jsonpath="{.items[*].spec.containers[*].image}"|tr -s '[[:space:]]' '\n' | sort | uniq -c
+kubectl get pods -n openstack -l application=neutron -o \
+  jsonpath="{.items[*].spec.containers[*].image}"|tr -s '[[:space:]]' '\n' \
+  | sort | uniq -c
Remove dangling Docker images
4 changes: 2 additions & 2 deletions doc/source/user/teardown-socok8s.rst
@@ -2,7 +2,7 @@
Deleting SUSE Containerized OpenStack from OpenStack
====================================================

-If you have built SUSE Containerized OpenStack (SCO) on top of OpenStack, you can
+If you have built SUSE Containerized OpenStack on top of OpenStack, you can
delete your whole environment by running:

.. code-block:: console
@@ -21,5 +21,5 @@ If you want to delete your WORKDIR too, run:
.. warning::

-You will lose all of your SCO data, your overrides, your certificates,
+You will lose all of your SUSE Containerized OpenStack data, your overrides, your certificates,
your inventory.
