Notes on CRC and Minishift
- For OpenShift 4, download Red Hat CodeReady Containers (crc) at https://cloud.redhat.com/openshift/install/crc/installer-provisioned
- An account is needed, as the pull secret is bound to the account.
- The secret expires and a new CRC needs to be downloaded from time to time.
- Setup crc with `crc setup` (by default the `.crc` directory is created).
- Start crc with `crc start` (the pull secret needs to be handed over; the binary bundle under `~/.crc` is unpacked).
  Note: the `oc` binary is at `~/.crc/bin/oc`.
  Note: to delete a previous crc installation completely, try `crc delete; crc cleanup` (then `crc setup` is needed again).
- To set up crc to work with the JBoss EAP registries and the EAP QE OpenShift testsuite, the necessary steps are documented at `~/scripts/crc-setup-xpaasqe-testsuite` (script authored by pkremens).
Note: to enable bash completion for oc:
yum -y install bash-completion
oc completion bash > /tmp/oc_completion
sudo cp /tmp/oc_completion /etc/bash_completion.d/
Note: to set more memory for crc, see the sketch below.
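A minimal sketch, assuming the standard crc configuration options (values are in MiB; adjust to your machine):
# persist the memory setting; it takes effect on the next crc start
crc config set memory 16384
# or pass it just for a single start
crc start --memory 16384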
WildFly S2I images and the Operator are at https://quay.io/organization/wildfly
Note: quickly download and start WildFly just to test (a deployment sketch follows below the note); useful commands:
oc get po
oc rsh <pod id>
oc logs <pod id>
oc delete deploy --all
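One way to spin up such a test deployment is an S2I build from the quay.io builder image; a rough sketch, assuming the wildfly-centos7 builder image and the upstream wildfly/quickstart repository (the quickstart URL, context dir and name are illustrative):
# S2I build: <builder image>~<source repository>
oc new-app quay.io/wildfly/wildfly-centos7~https://github.com/wildfly/quickstart.git --context-dir=helloworld --name=wildfly-test
# expose the service to reach the application from outside the cluster
oc expose svc/wildfly-test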
The WildFly Operator image is at quay.io/wildfly/wildfly-operator
The setup for running WildFly Operator can be found at https://github.com/wildfly/wildfly-operator/
First set up the operator via https://github.com/wildfly/wildfly-operator/blob/main/build/run-openshift.sh
git clone https://github.com/wildfly/wildfly-operator/; cd wildfly-operator
# kubeadmin password is at ${HOME}/.crc/cache/crc_libvirt_${CRC_VERSION}/kubeadmin-password
oc login -u kubeadmin -p ${KUBEADMIN_PASSWORD} https://api.crc.testing:6443
./build/run-openshift.sh
# the WildFly operator should be started now
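To verify the operator is running, a quick check (assuming run-openshift.sh creates a deployment named wildfly-operator):
oc get deployment,pods | grep wildfly-operator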
When all the definitions were uploaded and the operator was started by the run-openshift.sh script, we can deploy the WildFly quickstart application to the cluster. The Operator watches for the existence of the application definition and starts a WildFly StatefulSet.
# verify that ./build/run-openshift.sh script was run
# go to the directory where the operator code was cloned
oc apply -f deploy/crds/quickstart-cr.yaml
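To check that the Operator picked up the definition and created the StatefulSet, a quick sketch (resource names depend on the name set in quickstart-cr.yaml):
oc get wildflyserver
oc get statefulset
oc get pods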
Set up the CRC route to the registry
oc login --token=$KUBEADMIN_TOKEN
# creating the route to registry
oc get route image-registry --namespace=openshift-image-registry || oc create route passthrough --service=image-registry --namespace=openshift-image-registry
# allow all authenticated users to pull from the registry
oc create clusterrolebinding image-puller-authenticated --clusterrole=system:image-puller --group=system:authenticated || true
# obtaining registry hostname
# the registry route is most probably: image-registry-openshift-image-registry.apps-crc.testing:443
# --> YAML definitions, however, refer to the internal registry(!): image-registry.openshift-image-registry.svc:5000
REGISTRY=$(oc get route image-registry --namespace=openshift-image-registry --template={{.spec.host}}):443
To be able to push to the CRC registry, docker on the local machine has to be permitted to push to it as an insecure registry
su -
# add the registry (image-registry-openshift-image-registry.apps-crc.testing:443) to the insecure-registries field of the json
vim /etc/docker/daemon.json
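# a minimal /etc/docker/daemon.json sketch; assumption: no other daemon options are set,
# otherwise merge the insecure-registries entry into the existing file instead of overwriting it
cat << 'EOF' > /etc/docker/daemon.json
{
  "insecure-registries": ["image-registry-openshift-image-registry.apps-crc.testing:443"]
}
EOF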
sudo systemctl start docker
docker login -u $(oc whoami) -p $(oc whoami -t) image-registry-openshift-image-registry.apps-crc.testing:443 # content of $REGISTRY
(podman login -u $(oc whoami) -p $(oc whoami -t) image-registry-openshift-image-registry.apps-crc.testing:443 --tls-verify=false)
docker tag wildfly/wildfly-centos7:dev image-registry-openshift-image-registry.apps-crc.testing:443/$(oc project -q)/wildfly-centos7:dev
docker push image-registry-openshift-image-registry.apps-crc.testing:443/$(oc project -q)/wildfly-centos7:dev
If you then change e.g. the deploy/crds/quickstart-cr.yaml, set the applicationImage to the value "image-registry.openshift-image-registry.svc:5000/test/wildfly-centos7:dev".
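A sketch of that edit done from the shell (the "test" namespace comes from the push example above; adjust it to your project):
# point applicationImage at the image pushed to the internal registry, then re-apply the CR
sed -i 's|applicationImage:.*|applicationImage: "image-registry.openshift-image-registry.svc:5000/test/wildfly-centos7:dev"|' deploy/crds/quickstart-cr.yaml
oc apply -f deploy/crds/quickstart-cr.yaml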
Released EAP containers can be found in the Red Hat catalog at https://catalog.redhat.com/software/containers/search?q=%20jboss-eap-7&p=1 (https://catalog.redhat.com/software/containers/search?p=1&product_listings_names=JBoss%20Enterprise%20Application%20Platform%207)
The OpenShift EAP QE testsuite is at: https://gitlab.mw.lab.eng.bos.redhat.com/jbossqe-eap/openshift-eap-tests To find the latest image it's a good idea to check what the EAP QE tests with - i.e. the testsuite configuration properties are at https://gitlab.mw.lab.eng.bos.redhat.com/jbossqe-eap/openshift-eap-tests/blob/main/global-test.properties#L92
The WildFly docker image is built in the repository https://github.com/wildfly/wildfly-s2i. The README is nice and should be enough to get going. To take some of the modules from a different location, use an override.
cd wildfly-builder-image
# cekit -v build --overrides=dev-overrides.yaml docker
# cekit -v build docker
When using 'dev-overrides.yaml' it throws an error that there is no 'maven-repo.zip'. It will probably be necessary to build WildFly from sources and use cekit-cache to add the zip of the maven repository to the process (a rough sketch below).
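A rough sketch of that cekit-cache step, assuming a locally built maven-repo.zip (the path is hypothetical; check cekit-cache --help for the exact options):
# register the zip in the local cekit cache, keyed by its md5, so the image build can resolve maven-repo.zip
cekit-cache add /path/to/maven-repo.zip --md5 "$(md5sum /path/to/maven-repo.zip | cut -d' ' -f1)"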
The recommendation from Jean-Francois is to just change the image.yaml directly.
Or you can use the script tools/build-s2i-image.sh, which uses the dev-overrides.yaml, to build your WFLY distribution into the s2i image.
On OpenShift 3.x the WildFly Operator is not used; instead an older OpenShift template is used for deployment and s2i shell scripts for management (e.g. for transaction recovery).
OpenShift 3.x can be started locally with minishift (https://github.com/minishift/minishift/releases).
- Download the latest release and unpack it.
- Run minishift start.
- Consider changing the minishift config to add insecure registries. Minishift config:
minishift stop && minishift delete
cat << EOF > ~/.minishift/config/config.json
{
  "__vm_driver": "kvm",
  "_comment_for_xpaassqe_works_ADD___network-nameserver": [ "10.0.144.45" ],
  "cpus": 2,
  "disk-size": "75G",
  "insecure-registry": [
    "172.30.0.0/16",
    "172.30.1.1:5000",
    "127.0.0.0/8",
    "brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888",
    "docker-registry-default.192.168.99.100.nip.io:443",
    "docker-registry.engineering.redhat.com",
    "quay.io"
  ],
  "memory": "6000"
}
EOF
- To access the minishift internal registry from outside:
minishift addon apply registry-route
For minishift, to deploy the EAP s2i template go to the template definitions at https://github.com/jboss-container-images/jboss-eap-7-openshift-image/tree/EAP_731_CR2/templates
The trouble with the template is that the image stream is expected to be available directly in the image streams of the local OpenShift.
The EAP7 images (released ones) are at registry.redhat.io/jboss-eap-7/eap73-openjdk8-openshift-rhel7 (https://catalog.redhat.com/software/containers/search?product_listings_names=JBoss Enterprise Application Platform 7)
To get the image into the internal repository we need to either import the image (oc import-image <image-name> --from=<image-name-with-repo> --confirm) or pull the image with docker and then push it to the repository. My way is more often the "push" one.
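For completeness, a sketch of the "import" alternative, reusing the registry.redhat.io image and the image stream name from the tagging commands below (the tag is an assumption):
oc import-image jboss-eap73-openshift:7.3 --from=registry.redhat.io/jboss-eap-7/eap73-openjdk8-openshift-rhel7 --confirm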
# setup bash to use the docker from the minishift
# this step may not be necessary if the docker versions match etc., but I usually need it
eval $(minishift docker-env); eval $(minishift oc-env)
# login to the minishift repo
docker login -u $(oc whoami) -p $(oc whoami -t) $(minishift openshift registry)
# pull images (maybe `docker login registry.redhat.io` would be necessary)
docker pull registry.redhat.io/jboss-eap-7/eap73-openjdk8-openshift-rhel7
docker pull registry.redhat.io/jboss-eap-7/eap73-openjdk8-runtime-openshift-rhel7
# tag them based on the name used in the `json` template (see env variables EAP_IMAGE_NAME and EAP_RUNTIME_IMAGE_NAME)
docker tag registry.redhat.io/jboss-eap-7/eap73-openjdk8-openshift-rhel7 $(minishift openshift registry)/$(oc project -q)/jboss-eap73-openshift:7.3
docker tag registry.redhat.io/jboss-eap-7/eap73-openjdk8-runtime-openshift-rhel7 $(minishift openshift registry)/$(oc project -q)/jboss-eap73-runtime-openshift:7.3
# pushing to minishift registry
docker push $(minishift openshift registry)/$(oc project -q)/jboss-eap73-openshift:7.3
docker push $(minishift openshift registry)/$(oc project -q)/jboss-eap73-runtime-openshift:7.3
# then push the AMQ 72 as well...
Or (the easiest and preferred way) define the image streams from the template
# EAP image streams
oc create -f https://raw.githubusercontent.com/jboss-container-images/jboss-eap-7-openshift-image/EAP_731_CR2/templates/eap73-image-stream.json
# AMQ image streams
oc create -f https://raw.githubusercontent.com/jboss-container-images/jboss-amq-7-broker-openshift-image/72-1.2.GA/amq-broker-7-image-streams.yaml
Once the images are created in the registry (or imported), we can create the template under minishift and create the application from it
oc create -f https://raw.githubusercontent.com/jboss-container-images/jboss-eap-7-openshift-image/EAP_731_CR2/templates/eap73-tx-recovery-s2i.json
oc new-app --template=eap73-tx-recovery-s2i -p IMAGE_STREAM_NAMESPACE=$(oc project -q)
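To watch the s2i build and the rollout after oc new-app, a quick sketch (assuming the template's default application name eap-app, as in the scaling example below):
oc get bc,dc,pods
oc logs -f bc/eap-app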
Note:
# starting a different quickstart
oc new-app --template=eap73-tx-recovery-s2i -p IMAGE_STREAM_NAMESPACE=$(oc project -q) -p SOURCE_REPOSITORY_URL="https://github.com/jboss-developer/jboss-eap-quickstarts.git" -p SOURCE_REPOSITORY_REF="7.3.x-openshift" -p CONTEXT_DIR=kitchensink
# scaling
oc scale dc eap-app --replicas=1