Installation Instructions

Jonathan Beakley edited this page Jan 17, 2019 · 2 revisions

Important Note: The procedure described on this page has been deprecated.

For installing Black Duck in Kubernetes and OpenShift environments, it is highly recommended that you use Synopsys Operator.

The documentation below is provided only for historical purposes, and should not be followed unless instructed to by Synopsys support personnel.

Deprecated: Black Duck Installation Instructions for Kubernetes/OpenShift

Prerequisites

All commands below assume:

  • you are using the namespace (or OpenShift project name) 'blackduck'.
  • you have a cluster with at least 10 cores / 20GB of allocatable memory (plus one additional core and 4GB of memory if you enable binary analysis).
  • you have administrative access to your cluster.
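
The sizing assumptions above can be totaled with a quick sketch (the numbers come straight from the prerequisites; the variable names are illustrative):

```shell
# Minimum allocatable resources per the prerequisites above.
base_cores=10; base_mem_gb=20
# Extra headroom if the separately licensed binary analysis feature is enabled.
binary_cores=1; binary_mem_gb=4
echo "Without binary analysis: ${base_cores} cores / ${base_mem_gb}GB"
echo "With binary analysis:    $((base_cores + binary_cores)) cores / $((base_mem_gb + binary_mem_gb))GB"
```

Compare these totals against the allocatable capacity your cluster reports (for example via kubectl describe nodes) before installing.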

Black Duck installation instructions

Before you start:

Clone this repository, and cd to install/hub to run these commands, so the files are local.

Step 0:

Create a namespace/project for Black Duck (if you already have a namespace from a prior version, you may want to keep using it):

  • For OpenShift: oc new-project blackduck
  • For Kubernetes: kubectl create ns blackduck

Step 1: Setting up service accounts (if you need them)

This may not be necessary for all users; feel free to skip to the next section if you don't need to set up any special service accounts (i.e., if you're running in a namespace that has administrative capabilities).

  • First, create your service account (OpenShift users, use oc):
kubectl create serviceaccount postgresapp -n blackduck
  • For OpenShift: You need to create a service account for Black Duck and allow that user to run processes as user 70. A generic version of these steps, which may work for you, is shown below:
oc adm policy add-scc-to-user anyuid system:serviceaccount:blackduck:postgres
  • Optional for Kubernetes: You may need to create RBAC bindings with your cluster administrator that allow pods to run as any UID. Consult your Kubernetes administrator and share your installation workflow (as defined below) to determine whether this is necessary in your cluster.

Step 2: Create your cfssl container, and the core Black Duck config map

Note: We may edit the configmap later for external postgres or other settings. For now, leave it as it is by default, and run these commands (OpenShift users: use oc instead of kubectl).

kubectl create -f 1-cfssl.yml -n blackduck
kubectl create -f 1-cm-hub.yml -n blackduck

Note on Binary Analysis

If you plan to enable Binary Analysis (a separately licensed feature), you'll need to update the config map in 1-cm-hub.yml by changing USE_BINARY_UPLOADS to "1":

USE_BINARY_UPLOADS: "1"

Upgrade note

If you are upgrading from a previous version of Black Duck, or from a version without Binary Analysis enabled, and you already have config maps from a previous installation, you'll need to replace the config map using this file to pick up the new property/value:

kubectl replace -f 1-cm-hub.yml -n blackduck

If there is no existing config map then this step can be skipped.
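
The create-vs-replace decision above can be sketched as a single shell check. Note the config map name "hub-config" here is an assumption; verify the actual metadata.name in your copy of 1-cm-hub.yml before running this:

```shell
# Replace the config map if it already exists, otherwise create it.
# ASSUMPTION: "hub-config" is the metadata.name defined in 1-cm-hub.yml --
# check your copy of the file for the real name.
if kubectl get cm hub-config -n blackduck >/dev/null 2>&1; then
  kubectl replace -f 1-cm-hub.yml -n blackduck
else
  kubectl create -f 1-cm-hub.yml -n blackduck
fi
```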

Step 3: Choose your Postgres database type, and then set up your Postgres database

There are two ways to run Black Duck's Postgres database; we refer to them as internal and external.

Choose internal if you don't want to maintain your own database and are able to run containers as any user in your cluster; otherwise, choose external.

Step 3 (INTERNAL database setup option)

If you are okay using an internal database, and are able to run containers as user 70, then you can (in most cases) just start the Hub using the snippet of kubectl create statements below.

  • Note: The default yaml files don't have persistent volumes. You will need to replace all emptyDir volumes with a persistentVolumeClaim (or Volume) of your choosing. 1G is enough for all volumes other than postgres. Postgres should have 100G, to ensure it will have plenty of storage even if you do thousands of scans early on.

  • Note: Postgres is known to have problems running in a container when writing to Gluster-based persistent volumes. (See here for details.) If you are using Gluster for your underlying file system, then you should use an external database.

  • Note: When installing an internal database, there is an initPod that runs as user 0 to set storage permissions. If you don't want to run it as user 0, and are sure your storage will be writeable by the postgres user, delete that initPod clause entirely.

kubectl create -f 2-postgres-db-internal.yml -n blackduck

That's it! Now skip ahead to step 4.

Step 3 (EXTERNAL database setup option)

Note: If you set up an internal database, please skip this step.

For a concrete example of setting up an external database, check the quickstart external db example.

  • Note that by 'external' we mean any postgres other than the official hub-postgres image that ships with the Black Duck containers. Our official hub-postgres image bootstraps its own schema and uses CFSSL for authentication; with an external database, you will have to set up auth and the schema yourself.

  • For simplicity, we use an example password below (blackduck123).

Now, let's do the external database setup in three steps:

  1. First, create secrets that match the passwords you will set in the external database.
kubectl create secret generic db-creds --from-literal=blackduck=blackduck123 --from-literal=blackduck_user=blackduck123 -n blackduck
  2. Then, create the blackduck and blackduck_user users in the database, set their passwords to the ones above, and run the external-postgres-init script on your database to set up the schema.

  3. Finally, edit the HUB_POSTGRES_HOST field in the hub-db-config configmap to match the DNS name or IP address of your external postgres host (advanced users may use a headless service instead). Use kubectl edit cm or oc edit cm to do this.
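
The user-creation part of the steps above can be sketched as follows. The role names match the secret keys and the example password from earlier; the psql invocation and host placeholder are illustrative, so adapt them to your database:

```shell
# Hedged sketch: create the two roles Black Duck expects, with the example
# password used above. Run this against your EXTERNAL database.
cat > external-db-users.sql <<'EOF'
CREATE USER blackduck WITH PASSWORD 'blackduck123';
CREATE USER blackduck_user WITH PASSWORD 'blackduck123';
EOF
# ASSUMPTION: your admin role is "postgres" and the host is reachable --
# substitute your own connection details:
#   psql -h <your-postgres-host> -U postgres -f external-db-users.sql
# ...then run the external-postgres-init script to set up the schema.
```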

Your external database is now set up. Move on to step 4 to install Black Duck.

Step 4: Adding Binary Analysis

If you plan to enable Binary Analysis (a separately licensed feature), you'll need to add an additional yaml file:

kubectl create -f 2-binary-analysis.yml -n blackduck

Step 5: Finally, create Black Duck's containers

You have now set up the main initial containers that Black Duck depends on, and set up its database; you can start the rest of the application. As mentioned earlier, for a full production deployment you'll want to replace emptyDirs with real storage based on your admin's recommendation. Then all you have to do is create the third yaml file, like so, and Black Duck will be up and running in a few minutes:

kubectl create -f 3-hub.yml -n blackduck

If all the above pods are properly scheduled and running, you can then expose the webserver endpoint, and start using Black Duck to scan projects.
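
As a sketch of that last step, assuming the webserver's service is named "webserver" and listens on 8443 (verify with kubectl get svc -n blackduck, as the actual name depends on your yaml):

```shell
# ASSUMPTION: service name "webserver" and port 8443 -- verify with:
#   kubectl get svc -n blackduck
# OpenShift: expose a route to the webserver service.
oc expose service webserver -n blackduck
# Kubernetes: a quick local tunnel for testing (then browse https://localhost:8443).
kubectl port-forward svc/webserver 8443:8443 -n blackduck
```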