From 6ee49874dd5937c9defd84972ff0cf5279c2278c Mon Sep 17 00:00:00 2001
From: Jose Roman Martin Gil
Date: Tue, 29 Jun 2021 12:13:57 +0200
Subject: [PATCH] :twisted_rightwards_arrows: Upgrade to latest Quarkus,
 Strimzi, Apicurio Registry Operators and others (#1)

Main changes included:

* :zap: Updated to Quarkus 1.13.7.Final
* :arrow_up: Upgrade to Strimzi 0.24.0 (Apache Kafka 2.7)
* :arrow_up: Upgrade to Apicurio Registry 2.0.1.Final
* :recycle: Refactored Kafka config to use latest version of Apicurio Serdes
* :arrow_up: Update JKube (kubernetes|openshift)-maven-plugin to 1.3.0
* :memo: How to deploy in Minikube or CodeReady Containers
* :memo: Fix minor typos in readme
* :memo: Changed Maven commands to use mvnw cli

Co-authored-by: manusa
---
 README.md                                     | 182 +++++++++++-------
 pom.xml                                       | 110 +++++++----
 .../apicurio/operator/subscription-k8s.yml    |   4 +-
 src/main/apicurio/operator/subscription.yml   |   2 +-
 src/main/apicurio/service-registry.yml        |  12 +-
 .../topics/kafkasql-journal-topic.yml         |  11 ++
 .../topics/kafkatopic-global-id-topic.yml     |  13 --
 .../topics/kafkatopic-storage-topic.yml       |  13 --
 src/main/docker/Dockerfile.jvm                |  54 ++++++
 src/main/docker/Dockerfile.legacy-jar         |  51 +++++
 src/main/docker/Dockerfile.native             |  27 +++
 src/main/docker/Dockerfile.native-distroless  |  23 +++
 .../kafka/config/KafkaConfig.java             |  33 ++--
 src/main/jkube/service.yml                    |  17 --
 src/main/k8s/role.yml                         |  24 +++
 src/main/resources/application.properties     |  22 ++-
 src/main/strimzi/kafka/kafka-ha.yml           | 158 ++-------------
 src/main/strimzi/kafka/kafka.yml              |  21 +-
 .../strimzi/operator/subscription-k8s.yml     |   2 +-
 .../strimzi/topics/kafkatopic-messages-ha.yml |   2 +-
 .../strimzi/topics/kafkatopic-messages.yml    |   2 +-
 .../strimzi/users/application-user-scram.yml  |   2 +-
 .../users/service-registry-user-scram.yml     |  89 +--------
 .../users/service-registry-user-tls.yml       |  89 +--------
 24 files changed, 459 insertions(+), 504 deletions(-)
 create mode 100644 src/main/apicurio/topics/kafkasql-journal-topic.yml
 delete mode 100644 src/main/apicurio/topics/kafkatopic-global-id-topic.yml
 delete mode 100644 src/main/apicurio/topics/kafkatopic-storage-topic.yml
 create mode 100644 src/main/docker/Dockerfile.jvm
 create mode 100644 src/main/docker/Dockerfile.legacy-jar
 create mode 100644 src/main/docker/Dockerfile.native
 create mode 100644 src/main/docker/Dockerfile.native-distroless
 delete mode 100644 src/main/jkube/service.yml
 create mode 100644 src/main/k8s/role.yml

diff --git a/README.md b/README.md
index 951e4a9..4e20151 100644
--- a/README.md
+++ b/README.md
@@ -14,13 +14,13 @@ The following components were refactored from Spring Boot to Quarkus Extensions:
 | spring-boot-starter-actuator | [Microprofile Health](https://quarkus.io/guides/microprofile-health) |
 | springdoc-openapi-ui | [OpenAPI and Swagger UI](https://quarkus.io/guides/openapi-swaggerui) |
 | spring-kafka | [Kafka with Reactive Messaging](https://quarkus.io/guides/kafka) |
-| avro | Not documented (Experimental extension) |
+| avro | [Kafka, Schema Registry and Avro](https://quarkus.io/blog/kafka-avro/) |
 | apicurio-registry-utils-serde | [Apicurio Registry](https://github.com/Apicurio/apicurio-registry) |
 
 This new version is really fast (less than 2 seconds) ... like a :rocket:
 
 ```text
-INFO: kafka-clients-quarkus-sample 1.0.0-SNAPSHOT on JVM (powered by Quarkus 1.10.5.Final) started in 3.357s. 
Listening on: http://0.0.0.0:8181
+2021-06-24 07:56:54,886 INFO [io.quarkus] (main) kafka-clients-quarkus-sample 2.0.0-SNAPSHOT on JVM (powered by Quarkus 1.13.7.Final) started in 1.756s. Listening on: http://0.0.0.0:8181
 ```
 
 ## :rocket: :sparkles: :rotating_light: QUARKUS EDITION :rotating_light: :sparkles: :rocket:
@@ -43,36 +43,41 @@ The example includes a simple REST API with the following operations:
 * Send messages to a Topic
 * Consume messages from a Topic
 
+To deploy this application into a Kubernetes/OpenShift environment, we use the amazing [JKube](https://www.eclipse.org/jkube/).
+
 ## Environment
 
-This project requires a Kubernetes or OpenShift platform available. If you do not have one, you could use
+This project requires a Kubernetes or OpenShift platform available. If you do not have one, you can use one of the following resources to deploy a Kubernetes or OpenShift Cluster locally:
 
 * [Red Hat CodeReady Containers - OpenShift 4 on your Laptop](https://github.com/code-ready/crc)
 * [Minikube - Running Kubernetes Locally](https://kubernetes.io/docs/setup/minikube/)
 
-> Notes for Minikube:
-> * In older versions you may hit an [issue](https://github.com/kubernetes/minikube/issues/8330)
->   with Persistent Volume Claims stuck in Pending status
-> * Operator Lifecycle Manager is needed to deploy operators. To enable it in Minikube:
->   - Option 1: ```minikube start --addons olm```
->   - Option 2: ```minikube addons enable olm```
-> * Others addons needed: ```ingress```, ```registry```
+This repo was tested with the following versions of Red Hat CodeReady Containers and Minikube:
+
+```shell
+❯ crc version
+CodeReady Containers version: 1.28.0+08de64bd
+OpenShift version: 4.7.13 (embedded in executable)
+❯ minikube version
+minikube version: v1.21.0
+commit: 76d74191d82c47883dc7e1319ef7cebd3e00ee11
+```
 
-> Note: Whatever the platform you are using (Kubernetes or OpenShift), you could use the
+> Note: Whichever platform you are using (Kubernetes or OpenShift), you can use the
 > Kubernetes CLI (```kubectl```) or OpenShift CLI (```oc```) to execute the commands described in this repo.
 > To reduce the length of this document, the commands displayed will use the Kubernetes CLI. When a specific
 > command is only valid for Kubernetes or OpenShift it will be identified.
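
For example, before running any of the commands in the next sections you can double-check which cluster and namespace your CLI currently points at; the same commands work with the ```oc``` binary:

```shell
# Show the API endpoint of the cluster the current context points at
❯ kubectl cluster-info
# Show the active context and its default namespace
❯ kubectl config get-contexts
```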
To deploy the resources, we will create a new ```amq-streams-demo``` namespace in the cluster when using Kubernetes:
 
-```shell script
+```shell
 ❯ kubectl create namespace amq-streams-demo
 ```
 
 If you are using OpenShift, then we will create a project:
 
-```shell script
+```shell
 ❯ oc new-project amq-streams-demo
 ```
 
@@ -81,16 +86,52 @@ If you are using OpenShift, then we will create a project:
 > 
 > In Kubernetes:
 > 
-> ```shell script
+> ```shell
 > ❯ kubectl config set-context --current --namespace=amq-streams-demo
 > ```
-> 
+>
 > In OpenShift:
 > 
-> ```shell script
+> ```shell
 > ❯ oc project amq-streams-demo
 > ```
 
+### Start Red Hat CodeReady Containers
+
+To start up your local OpenShift 4 cluster:
+
+```shell
+❯ crc setup
+❯ crc start -p /PATH/TO/your-pull-secret-file.json
+```
+
+You can promote the `developer` user to `cluster-admin` with the following command:
+
+```shell
+❯ oc adm policy add-cluster-role-to-user cluster-admin developer
+clusterrole.rbac.authorization.k8s.io/cluster-admin added: "developer"
+```
+
+### Start Minikube
+
+To start up your local Kubernetes cluster:
+
+```shell
+❯ minikube start
+❯ minikube addons enable ingress
+❯ minikube addons enable registry
+```
+
+To install OLM v0.18.2 in Kubernetes, execute the following commands:
+
+```shell
+❯ kubectl apply -f "https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.18.2/crds.yaml"
+❯ kubectl apply -f "https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.18.2/olm.yaml"
+```
+
+> Note: There is a Minikube addon to install OLM; however, at the time of writing this
+> repo, the latest addon version available does not include the latest version of the operators.
+
 ### Deploying Strimzi and Apicurio Operators
 
 > **NOTE**: Only *cluster-admin* users can deploy Kubernetes Operators. This section must
@@ -99,9 +140,9 @@
 To deploy the Strimzi and Apicurio Operators so that they only watch our namespace, we need to use
 an ```OperatorGroup```. An OperatorGroup is an OLM resource that provides multitenant configuration
 to OLM-installed Operators. For more information about this object, please review the
-official documentation [here](https://docs.openshift.com/container-platform/4.5/operators/understanding_olm/olm-understanding-operatorgroups.html).
+official documentation [here](https://docs.openshift.com/container-platform/4.7/operators/understanding/olm/olm-understanding-operatorgroups.html).
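
As a reference, the ```src/main/olm/operator-group.yml``` file applied below should look roughly like this sketch; the resource name matches the output shown below, while the remaining fields are assumptions based on the standard OLM ```OperatorGroup``` API:

```yaml
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: amq-streams-demo-og
  namespace: amq-streams-demo
spec:
  # Restricts the Operators in this group to watch only our namespace
  targetNamespaces:
    - amq-streams-demo
```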
-```shell script
+```shell
 ❯ kubectl apply -f src/main/olm/operator-group.yml
 operatorgroup.operators.coreos.com/amq-streams-demo-og created
 ```
 
 Now we are ready to deploy the Strimzi and Apicurio Operators:
 
 For Kubernetes use the following subscriptions:
 
-```shell script
+```shell
 ❯ kubectl apply -f src/main/strimzi/operator/subscription-k8s.yml
 subscription.operators.coreos.com/strimzi-kafka-operator created
 ❯ kubectl apply -f src/main/apicurio/operator/subscription-k8s.yml
 subscription.operators.coreos.com/apicurio-registry created
 ```
 
 For OpenShift use the following subscriptions:
 
-```shell script
+```shell
 ❯ oc apply -f src/main/strimzi/operator/subscription.yml
 subscription.operators.coreos.com/strimzi-kafka-operator created
 ❯ oc apply -f src/main/apicurio/operator/subscription.yml
 subscription.operators.coreos.com/apicurio-registry created
 ```
 
-You could check that operators are successfully registered with:
+You can check that the operators are successfully registered with the following command:
 
-```shell script
+```shell
 ❯ kubectl get csv
-NAME                                    DISPLAY                      VERSION              REPLACES   PHASE
-apicurio-registry.v0.0.3-v1.2.3.final   Apicurio Registry Operator   0.0.3-v1.2.3.final              Succeeded
-strimzi-cluster-operator.v0.19.0        Strimzi                      0.19.0                          Succeeded
+NAME                                             DISPLAY                      VERSION              REPLACES                           PHASE
+apicurio-registry-operator.v1.0.0-v2.0.0.final   Apicurio Registry Operator   1.0.0-v2.0.0.final                                      Succeeded
+strimzi-cluster-operator.v0.24.0                 Strimzi                      0.24.0               strimzi-cluster-operator.v0.23.0   Succeeded
 ```
 
 or verify the pods are running:
 
-```shell script
+```shell
 ❯ kubectl get pod
-NAME                                                READY   STATUS    RESTARTS   AGE
-apicurio-registry-operator-cbf6fcf57-d6shn          1/1     Running   0          3m2s
-strimzi-cluster-operator-v0.19.0-7555cff6d9-vlgwp   1/1     Running   0          3m7s
+NAME                                               READY   STATUS    RESTARTS   AGE
+apicurio-registry-operator-5b885fb47c-5dgw5        1/1     Running   0          2m36s
+strimzi-cluster-operator-v0.24.0-888d55ccb-q8cgl   1/1     Running   0          2m43s
 ```
 
 For more information about how to install Operators using the CLI, please review this [article](
-https://docs.openshift.com/container-platform/4.5/operators/olm-adding-operators-to-cluster.html#olm-installing-operator-from-operatorhub-using-cli_olm-adding-operators-to-a-cluster)
+https://docs.openshift.com/container-platform/4.7/operators/admin/olm-adding-operators-to-cluster.html#olm-installing-operator-from-operatorhub-using-cli_olm-adding-operators-to-a-cluster)
 
 ### Deploying Kafka
 
@@ -154,7 +195,7 @@ and some Kafka Topics using the Strimzi Operators.
 
 To deploy the Kafka Cluster:
 
-```shell script
+```shell
 ❯ kubectl apply -f src/main/strimzi/kafka/kafka.yml
 kafka.kafka.strimzi.io/my-kafka created
 ```
 
@@ -164,7 +205,7 @@
 To deploy the Kafka Topics:
 
-```shell script
+```shell
 ❯ kubectl apply -f src/main/strimzi/topics/kafkatopic-messages.yml
 kafkatopic.kafka.strimzi.io/messages created
 ```
 
@@ -174,7 +215,7 @@
 There is a set of different users to connect to the Kafka Cluster. We will deploy them now to be used later.
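
Each of these users is a Strimzi ```KafkaUser``` resource. As a rough sketch, assuming only fields visible elsewhere in this patch, the SCRAM ```application``` user applied below is declared more or less like this:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: application
  labels:
    # Binds the user to the Kafka cluster deployed above
    strimzi.io/cluster: my-kafka
spec:
  authentication:
    type: scram-sha-512
```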
-```shell script
+```shell
 ❯ kubectl apply -f src/main/strimzi/users/
 kafkauser.kafka.strimzi.io/application created
 kafkauser.kafka.strimzi.io/service-registry-scram created
 kafkauser.kafka.strimzi.io/service-registry-tls created
 ```
 
 After some minutes, the Kafka Cluster will be deployed:
 
-```shell script
+```shell
 ❯ kubectl get pod
 NAME                                                READY   STATUS    RESTARTS   AGE
 apicurio-registry-operator-cbf6fcf57-d6shn          1/1     Running   0          4m32s
 my-kafka-entity-operator-5dbdbdbc9c-fn8gt           3/3     Running   0          36s
 my-kafka-kafka-0                                    2/2     Running   0          60s
 my-kafka-zookeeper-0                                1/1     Running   0          93s
 strimzi-cluster-operator-v0.19.0-7555cff6d9-vlgwp   1/1     Running   0          4m37s
 ```
 
 ### Deploying Service Registry
 
 Service Registry needs a set of Kafka Topics to store schemas and their metadata. We need to
 execute the following commands to create the KafkaTopics and to deploy an instance of Service Registry:
 
-```shell script
+```shell
 ❯ kubectl apply -f src/main/apicurio/topics/
-kafkatopic.kafka.strimzi.io/global-id-topic created
-kafkatopic.kafka.strimzi.io/storage-topic created
+kafkatopic.kafka.strimzi.io/kafkasql-journal created
 ❯ kubectl apply -f src/main/apicurio/service-registry.yml
 apicurioregistry.apicur.io/service-registry created
 ```
 
-A new DeploymentConfig is created with the prefix ```service-registry-deployment-``` and a new route with
-the prefix ```service-registry-ingres-```. We must inspect it to get the route created to expose the Service Registry API.
+A new Deployment/DeploymentConfig is created with the prefix ```service-registry-deployment-``` and a new route with
+the prefix ```service-registry-ingress-```. We must inspect it to get the route created to expose the Service Registry API.
 
-In Kubernetes we will use an ingress entry based with ```NodePort```. To get the ingress entry:
+In Kubernetes, we will expose the Service Registry through a ```NodePort``` service. To create the service and get its URL:
 
-```shell script
+```shell
 ❯ kubectl get deployment | grep service-registry-deployment
-service-registry-deployment-m57cq
-❯ kubectl expose deployment service-registry-deployment-m57cq --type=NodePort --port=8080
-service/service-registry-deployment-m57cq exposed
-❯ kubectl get service/service-registry-deployment-m57cq
-NAME                                TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
-service-registry-deployment-m57cq   NodePort   10.101.44.116   <none>        8080:31216/TCP   39s
-❯ minikube service service-registry-deployment-m57cq --url -n amq-streams-demo
-http://192.168.39.227:31216
+service-registry-deployment
+❯ kubectl expose deployment service-registry-deployment --type=NodePort --port=8080
+service/service-registry-deployment exposed
+❯ kubectl get service/service-registry-deployment
+NAME                          TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
+service-registry-deployment   NodePort   10.110.228.232   <none>        8080:30957/TCP   12s
+❯ minikube service service-registry-deployment --url -n amq-streams-demo
+http://192.168.50.174:30957
 ```
 
 In OpenShift, we only need to check the ```host``` attribute from the OpenShift Route:
 
-```shell script
+```shell
 ❯ oc get route
-NAME                                   HOST/PORT                                       PATH   SERVICES                        PORT   TERMINATION   WILDCARD
-service-registry-ingress-rjmr4-8gxsd   service-registry.amq-streams.apps-crc.testing   /      service-registry-service-bfj4n          None
+NAME                             HOST/PORT                                            PATH   SERVICES                   PORT   TERMINATION   WILDCARD
+service-registry-ingress-48txm   service-registry.amq-streams-demo.apps-crc.testing   /      service-registry-service          None
 ```
 
 Wait a few minutes until your Service Registry is deployed.
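
If you want to block until the registry is ready, you can watch the rollout; this assumes the Deployment is named ```service-registry-deployment``` as in the Kubernetes example above (on OpenShift, inspect the DeploymentConfig instead):

```shell
# Waits until all replicas of the Deployment report ready
❯ kubectl rollout status deployment/service-registry-deployment
deployment "service-registry-deployment" successfully rolled out
```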
The Service Registry Web Console and API endpoints will be available from:
 
 * **Web Console**: http://KUBERNETES_OPENSHIFT_SR_ROUTE_SERVICE_HOST/ui/
-* **API REST**: http://KUBERNETES_OPENSHIFT_SR_ROUTE_SERVICE_HOST/api/
+* **API REST**: http://KUBERNETES_OPENSHIFT_SR_ROUTE_SERVICE_HOST/apis/registry/v2
 
 Set up the ```apicurio.registry.url``` property in the ```pom.xml``` file with the Service Registry URL
 before publishing the schemas used by this application:
 
-```shell script
-❯ oc get route service-registry-ingress-rjmr4-8gxsd -o jsonpath='{.spec.host}'
+```shell
+❯ oc get route -l app=service-registry -o jsonpath='{.items[0].spec.host}'
 ```
 
 To register the schemas in Service Registry running in Kubernetes:
 
 ```shell script
-❯ mvn clean generate-sources -Papicurio
+❯ ./mvnw clean generate-sources -Papicurio \
+   -Dapicurio.registry.url=$(minikube service service-registry-deployment --url -n amq-streams-demo)/apis/registry/v2
+```
+
+To register the schemas in Service Registry running in OpenShift:
+
+```shell
+❯ ./mvnw clean generate-sources -Papicurio
 ```
 
 The next screenshot shows the schemas registered in the Web Console:
@@ -275,7 +322,7 @@ secret to store the password. This secret must be checked to extract the passwor
 
 To extract the password of the KafkaUser and declare it as an Environment Variable:
 
-```shell script
+```shell
 ❯ export KAFKA_USER_PASSWORD=$(kubectl get secret application -o jsonpath='{.data.password}' | base64 -d)
 ```
 
@@ -302,41 +349,42 @@ by JKube to deploy our application in Kubernetes or OpenShift.
 validate the schemas.
 
 ```text
-apicurio.registry.url = http://service-registry.amq-streams-demo.apps-crc.testing/api
+apicurio.registry.url = http://service-registry.amq-streams-demo.apps-crc.testing/apis/registry/v2
 ```
 
 To build the application:
 
-```shell script
-❯ mvn clean package
+```shell
+❯ ./mvnw clean package
 ```
 
 To run locally:
 
-```shell script
+```shell
 ❯ export KAFKA_USER_PASSWORD=$(kubectl get secret application -o jsonpath='{.data.password}' | base64 -d)
-❯ mvn compile quarkus:dev
+❯ ./mvnw compile quarkus:dev
 ```
 
 Or you can deploy into a Kubernetes or OpenShift platform using [Eclipse JKube](https://github.com/eclipse/jkube) Maven Plug-ins:
 
 To deploy the application using the Kubernetes Maven Plug-In:
 
-```shell script
-❯ mvn package k8s:resource k8s:build k8s:push k8s:apply -Pkubernetes -Djkube.build.strategy=jib
+```shell
+❯ ./mvnw package k8s:resource k8s:build k8s:push k8s:apply -Pkubernetes -Djkube.build.strategy=jib
 ```
 
 To deploy the application using the OpenShift Maven Plug-In (only valid for OpenShift Platform):
 
 ```shell script
-❯ mvn package oc:resource oc:build oc:apply -Popenshift
+❯ ./mvnw package oc:resource oc:build oc:apply -Popenshift,native -Dquarkus.native.container-build=true
 ```
 
 To deploy the application in Minikube:
 
 ```shell script
 ❯ eval $(minikube docker-env)
-❯ mvn package k8s:resource k8s:build k8s:apply -Pkubernetes
+❯ kubectl create -f src/main/k8s/role.yml
+❯ ./mvnw package k8s:resource k8s:build k8s:apply -Pkubernetes
 ```
 
 # REST API
diff --git a/pom.xml b/pom.xml
index 952dfd4..308f6f6 100644
--- a/pom.xml
+++ b/pom.xml
@@ -4,7 +4,7 @@
 io.jromanmartin.kafka
 kafka-clients-quarkus-sample
- 1.0.0-SNAPSHOT
+ 2.0.0-SNAPSHOT
 jar
@@ -23,23 +23,23 @@
 UTF-8
 11
 11
+ true
+ 3.8.1
+ 3.0.0-M5
- 2.5.0
+ 2.7.0
- 1.10.1
+ 1.10.2
- 1.3.2.Final
-
-
- http://service-registry.amq-streams-demo.apps.selae.sandbox1805.opentlc.com/api
+ 2.0.1.Final
+
+ http://service-registry.amq-streams-demo.apps-crc.testing/apis/registry/v2
 NodePort 
${project.artifactId}:${project.version} 8181 - 1.10.5.Final - - 0.0.1 + 1.13.7.Final @@ -83,23 +83,11 @@ quarkus-avro - - io.quarkiverse.apicurio - quarkiverse-apicurio-registry-client - ${quarkiverse.apicurio.version} - - io.apicurio - apicurio-registry-utils-serde + apicurio-registry-serdes-avro-serde ${apicurio.version} - - - org.jboss.spec.javax.interceptor - jboss-interceptors-api_1.2_spec - - @@ -115,14 +103,32 @@ + build generate-code generate-code-tests - build + + maven-compiler-plugin + ${compiler-plugin.version} + + ${maven.compiler.parameters} + + + + maven-surefire-plugin + ${surefire-plugin.version} + + + org.jboss.logmanager.LogManager + ${maven.home} + + + + org.apache.avro @@ -153,9 +159,10 @@ org.codehaus.mojo build-helper-maven-plugin - 3.1.0 + 3.2.0 + add-source generate-sources @@ -174,6 +181,7 @@ + apicurio @@ -190,17 +198,42 @@ ${apicurio.registry.url} - AVRO - - ${project.basedir}/src/main/resources/schemas/message.avsc - - ${project.basedir}/src/main/resources/schemas/message.avsc + + default + messages + AVRO + + ${project.basedir}/src/main/resources/schemas/message.avsc + + RETURN_OR_UPDATE + true + + - ${project.basedir}/src/main/resources/schemas/message.avsc - - ${project.basedir}/src/main/resources/schemas/message.avsc + + default + messages-value + AVRO + + ${project.basedir}/src/main/resources/schemas/message.avsc + + RETURN_OR_UPDATE + true + + + + + io.jromanmartin.kafka.schema.avro + Message + AVRO + + ${project.basedir}/src/main/resources/schemas/message.avsc + + RETURN_OR_UPDATE + true + @@ -218,7 +251,7 @@ org.eclipse.jkube openshift-maven-plugin - 1.0.2 + 1.3.0 @@ -232,7 +265,7 @@ org.eclipse.jkube kubernetes-maven-plugin - 1.0.2 + 1.3.0 @@ -250,7 +283,7 @@ maven-failsafe-plugin - 3.0.0-M5 + ${surefire-plugin.version} @@ -262,9 +295,10 @@ ${project.build.directory}/${project.build.finalName}-runner - org.jboss.logmanager.LogManager + + org.jboss.logmanager.LogManager - + ${maven.home} diff --git a/src/main/apicurio/operator/subscription-k8s.yml b/src/main/apicurio/operator/subscription-k8s.yml index 27194f1..587aa11 100644 --- a/src/main/apicurio/operator/subscription-k8s.yml +++ b/src/main/apicurio/operator/subscription-k8s.yml @@ -3,9 +3,9 @@ apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: apicurio-registry - namespace: amq-streams-demo + namespace: operators spec: - channel: alpha + channel: 2.x name: apicurio-registry source: operatorhubio-catalog sourceNamespace: olm diff --git a/src/main/apicurio/operator/subscription.yml b/src/main/apicurio/operator/subscription.yml index 0f6d761..93f96fc 100644 --- a/src/main/apicurio/operator/subscription.yml +++ b/src/main/apicurio/operator/subscription.yml @@ -5,7 +5,7 @@ metadata: name: apicurio-registry namespace: amq-streams-demo spec: - channel: alpha + channel: 2.x installPlanApproval: Automatic name: apicurio-registry source: community-operators diff --git a/src/main/apicurio/service-registry.yml b/src/main/apicurio/service-registry.yml index c18b801..57249f7 100644 --- a/src/main/apicurio/service-registry.yml +++ b/src/main/apicurio/service-registry.yml @@ -1,13 +1,11 @@ -apiVersion: apicur.io/v1alpha1 +apiVersion: registry.apicur.io/v1 kind: ApicurioRegistry metadata: name: service-registry spec: configuration: - logLevel: INFO - persistence: "streams" - streams: - applicationId: "service-registry" + persistence: "kafkasql" + kafkasql: bootstrapServers: "my-kafka-kafka-bootstrap:9093" security: scram: @@ -15,10 +13,8 @@ spec: user: service-registry-scram 
passwordSecretName: service-registry-scram truststoreSecretName: my-kafka-cluster-ca-cert -# tls: -# keystoreSecretName: service-registry-tls -# truststoreSecretName: my-kafka-cluster-ca-cert ui: readOnly: false + logLevel: INFO deployment: replicas: 1 diff --git a/src/main/apicurio/topics/kafkasql-journal-topic.yml b/src/main/apicurio/topics/kafkasql-journal-topic.yml new file mode 100644 index 0000000..463349d --- /dev/null +++ b/src/main/apicurio/topics/kafkasql-journal-topic.yml @@ -0,0 +1,11 @@ +apiVersion: kafka.strimzi.io/v1beta2 +kind: KafkaTopic +metadata: + labels: + strimzi.io/cluster: my-kafka + name: kafkasql-journal +spec: + partitions: 1 + replicas: 1 + config: + cleanup.policy: compact diff --git a/src/main/apicurio/topics/kafkatopic-global-id-topic.yml b/src/main/apicurio/topics/kafkatopic-global-id-topic.yml deleted file mode 100644 index 268c29e..0000000 --- a/src/main/apicurio/topics/kafkatopic-global-id-topic.yml +++ /dev/null @@ -1,13 +0,0 @@ -apiVersion: kafka.strimzi.io/v1beta1 -kind: KafkaTopic -metadata: - labels: - strimzi.io/cluster: my-kafka - name: global-id-topic -spec: - partitions: 3 - replicas: 1 - # Uncomment this line if your Apache Kafka Cluster is deployed with an HA topology - #replicas: 3 - config: - cleanup.policy: compact diff --git a/src/main/apicurio/topics/kafkatopic-storage-topic.yml b/src/main/apicurio/topics/kafkatopic-storage-topic.yml deleted file mode 100644 index 48fd2c7..0000000 --- a/src/main/apicurio/topics/kafkatopic-storage-topic.yml +++ /dev/null @@ -1,13 +0,0 @@ -apiVersion: kafka.strimzi.io/v1beta1 -kind: KafkaTopic -metadata: - labels: - strimzi.io/cluster: my-kafka - name: storage-topic -spec: - partitions: 3 - replicas: 1 - # Uncomment this line if your Apache Kafka Cluster is deployed with an HA topology - #replicas: 3 - config: - cleanup.policy: compact diff --git a/src/main/docker/Dockerfile.jvm b/src/main/docker/Dockerfile.jvm new file mode 100644 index 0000000..978fff9 --- /dev/null +++ b/src/main/docker/Dockerfile.jvm @@ -0,0 +1,54 @@ +#### +# This Dockerfile is used in order to build a container that runs the Quarkus application in JVM mode +# +# Before building the container image run: +# +# ./mvnw package +# +# Then, build the image with: +# +# docker build -f src/main/docker/Dockerfile.jvm -t quarkus/kafka-clients-quarkus-sample . 
+# +# Then run the container using: +# +# docker run -i --rm -p 8080:8080 quarkus/kafka-clients-quarkus-sample-jvm +# +# If you want to include the debug port into your docker image +# you will have to expose the debug port (default 5005) like this : EXPOSE 8080 5005 +# +# Then run the container using : +# +# docker run -i --rm -p 8080:8080 -p 5005:5005 -e JAVA_ENABLE_DEBUG="true" quarkus/kafka-clients-quarkus-sample-jvm +# +### +FROM registry.access.redhat.com/ubi8/ubi-minimal:8.3 + +ARG JAVA_PACKAGE=java-11-openjdk-headless +ARG RUN_JAVA_VERSION=1.3.8 +ENV LANG='en_US.UTF-8' LANGUAGE='en_US:en' +# Install java and the run-java script +# Also set up permissions for user `1001` +RUN microdnf install curl ca-certificates ${JAVA_PACKAGE} \ + && microdnf update \ + && microdnf clean all \ + && mkdir /deployments \ + && chown 1001 /deployments \ + && chmod "g+rwX" /deployments \ + && chown 1001:root /deployments \ + && curl https://repo1.maven.org/maven2/io/fabric8/run-java-sh/${RUN_JAVA_VERSION}/run-java-sh-${RUN_JAVA_VERSION}-sh.sh -o /deployments/run-java.sh \ + && chown 1001 /deployments/run-java.sh \ + && chmod 540 /deployments/run-java.sh \ + && echo "securerandom.source=file:/dev/urandom" >> /etc/alternatives/jre/conf/security/java.security + +# Configure the JAVA_OPTIONS, you can add -XshowSettings:vm to also display the heap size. +ENV JAVA_OPTIONS="-Dquarkus.http.host=0.0.0.0 -Djava.util.logging.manager=org.jboss.logmanager.LogManager" +# We make four distinct layers so if there are application changes the library layers can be re-used +COPY --chown=1001 target/quarkus-app/lib/ /deployments/lib/ +COPY --chown=1001 target/quarkus-app/*.jar /deployments/ +COPY --chown=1001 target/quarkus-app/app/ /deployments/app/ +COPY --chown=1001 target/quarkus-app/quarkus/ /deployments/quarkus/ + +EXPOSE 8080 +USER 1001 + +ENTRYPOINT [ "/deployments/run-java.sh" ] diff --git a/src/main/docker/Dockerfile.legacy-jar b/src/main/docker/Dockerfile.legacy-jar new file mode 100644 index 0000000..290a6b1 --- /dev/null +++ b/src/main/docker/Dockerfile.legacy-jar @@ -0,0 +1,51 @@ +#### +# This Dockerfile is used in order to build a container that runs the Quarkus application in JVM mode +# +# Before building the container image run: +# +# ./mvnw package -Dquarkus.package.type=legacy-jar +# +# Then, build the image with: +# +# docker build -f src/main/docker/Dockerfile.legacy-jar -t quarkus/kafka-clients-quarkus-sample-legacy-jar . 
+# +# Then run the container using: +# +# docker run -i --rm -p 8080:8080 quarkus/kafka-clients-quarkus-sample-legacy-jar +# +# If you want to include the debug port into your docker image +# you will have to expose the debug port (default 5005) like this : EXPOSE 8080 5005 +# +# Then run the container using : +# +# docker run -i --rm -p 8080:8080 -p 5005:5005 -e JAVA_ENABLE_DEBUG="true" quarkus/kafka-clients-quarkus-sample-legacy-jar +# +### +FROM registry.access.redhat.com/ubi8/ubi-minimal:8.3 + +ARG JAVA_PACKAGE=java-11-openjdk-headless +ARG RUN_JAVA_VERSION=1.3.8 +ENV LANG='en_US.UTF-8' LANGUAGE='en_US:en' +# Install java and the run-java script +# Also set up permissions for user `1001` +RUN microdnf install curl ca-certificates ${JAVA_PACKAGE} \ + && microdnf update \ + && microdnf clean all \ + && mkdir /deployments \ + && chown 1001 /deployments \ + && chmod "g+rwX" /deployments \ + && chown 1001:root /deployments \ + && curl https://repo1.maven.org/maven2/io/fabric8/run-java-sh/${RUN_JAVA_VERSION}/run-java-sh-${RUN_JAVA_VERSION}-sh.sh -o /deployments/run-java.sh \ + && chown 1001 /deployments/run-java.sh \ + && chmod 540 /deployments/run-java.sh \ + && echo "securerandom.source=file:/dev/urandom" >> /etc/alternatives/jre/conf/security/java.security + +# Configure the JAVA_OPTIONS, you can add -XshowSettings:vm to also display the heap size. +ENV JAVA_OPTIONS="-Dquarkus.http.host=0.0.0.0 -Djava.util.logging.manager=org.jboss.logmanager.LogManager" +COPY target/lib/* /deployments/lib/ +COPY target/*-runner.jar /deployments/app.jar + +EXPOSE 8080 +USER 1001 + +ENTRYPOINT [ "/deployments/run-java.sh" ] diff --git a/src/main/docker/Dockerfile.native b/src/main/docker/Dockerfile.native new file mode 100644 index 0000000..ae58d79 --- /dev/null +++ b/src/main/docker/Dockerfile.native @@ -0,0 +1,27 @@ +#### +# This Dockerfile is used in order to build a container that runs the Quarkus application in native (no JVM) mode +# +# Before building the container image run: +# +# ./mvnw package -Pnative +# +# Then, build the image with: +# +# docker build -f src/main/docker/Dockerfile.native -t quarkus/kafka-clients-quarkus-sample . +# +# Then run the container using: +# +# docker run -i --rm -p 8080:8080 quarkus/kafka-clients-quarkus-sample +# +### +FROM registry.access.redhat.com/ubi8/ubi-minimal:8.3 +WORKDIR /work/ +RUN chown 1001 /work \ + && chmod "g+rwX" /work \ + && chown 1001:root /work +COPY --chown=1001:root target/*-runner /work/application + +EXPOSE 8080 +USER 1001 + +CMD ["./application", "-Dquarkus.http.host=0.0.0.0"] diff --git a/src/main/docker/Dockerfile.native-distroless b/src/main/docker/Dockerfile.native-distroless new file mode 100644 index 0000000..f1c0c0a --- /dev/null +++ b/src/main/docker/Dockerfile.native-distroless @@ -0,0 +1,23 @@ +#### +# This Dockerfile is used in order to build a distroless container that runs the Quarkus application in native (no JVM) mode +# +# Before building the container image run: +# +# ./mvnw package -Pnative +# +# Then, build the image with: +# +# docker build -f src/main/docker/Dockerfile.native-distroless -t quarkus/kafka-clients-quarkus-sample . 
+#
+# Then run the container using:
+#
+# docker run -i --rm -p 8080:8080 quarkus/kafka-clients-quarkus-sample
+#
+###
+FROM quay.io/quarkus/quarkus-distroless-image:1.0
+COPY target/*-runner /application
+
+EXPOSE 8080
+USER nonroot
+
+CMD ["./application", "-Dquarkus.http.host=0.0.0.0"]
diff --git a/src/main/java/io/jromanmartin/kafka/config/KafkaConfig.java b/src/main/java/io/jromanmartin/kafka/config/KafkaConfig.java
index 3ba5fc6..4680994 100644
--- a/src/main/java/io/jromanmartin/kafka/config/KafkaConfig.java
+++ b/src/main/java/io/jromanmartin/kafka/config/KafkaConfig.java
@@ -1,11 +1,10 @@
 package io.jromanmartin.kafka.config;
 
-import io.apicurio.registry.utils.serde.AbstractKafkaSerDe;
-import io.apicurio.registry.utils.serde.AbstractKafkaSerializer;
-import io.apicurio.registry.utils.serde.AvroKafkaDeserializer;
-import io.apicurio.registry.utils.serde.AvroKafkaSerializer;
-import io.apicurio.registry.utils.serde.avro.AvroDatumProvider;
-import io.apicurio.registry.utils.serde.strategy.TopicIdStrategy;
+import io.apicurio.registry.serde.SerdeConfig;
+import io.apicurio.registry.serde.avro.AvroKafkaDeserializer;
+import io.apicurio.registry.serde.avro.AvroKafkaSerdeConfig;
+import io.apicurio.registry.serde.avro.AvroKafkaSerializer;
+import io.apicurio.registry.serde.avro.strategy.RecordIdStrategy;
 import io.jromanmartin.kafka.schema.avro.Message;
 import org.apache.kafka.clients.admin.AdminClientConfig;
 import org.apache.kafka.clients.consumer.Consumer;
@@ -96,16 +95,16 @@ public Producer<String, Message> createProducer() {
         props.putIfAbsent(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, AvroKafkaSerializer.class.getName());
 
         // Service Registry
-        props.putIfAbsent(AbstractKafkaSerDe.REGISTRY_URL_CONFIG_PARAM, serviceRegistryUrl);
+        props.putIfAbsent(SerdeConfig.REGISTRY_URL, serviceRegistryUrl);
         // Artifact Id Strategies (implementations of ArtifactIdStrategy)
         // Simple Topic Id Strategy (schema = topicName)
-        //props.putIfAbsent(AbstractKafkaSerializer.REGISTRY_ARTIFACT_ID_STRATEGY_CONFIG_PARAM, SimpleTopicIdStrategy.class.getName());
+        //props.putIfAbsent(SerdeConfig.ARTIFACT_RESOLVER_STRATEGY, SimpleTopicIdStrategy.class.getName());
         // Topic Id Strategy (schema = topicName-(key|value)) - Default Strategy
-        props.putIfAbsent(AbstractKafkaSerializer.REGISTRY_ARTIFACT_ID_STRATEGY_CONFIG_PARAM, TopicIdStrategy.class.getName());
+        //props.putIfAbsent(SerdeConfig.ARTIFACT_RESOLVER_STRATEGY, TopicIdStrategy.class.getName());
         // Record Id Strategy (schema = full name of the schema (namespace.name))
-        //props.putIfAbsent(AbstractKafkaSerializer.REGISTRY_ARTIFACT_ID_STRATEGY_CONFIG_PARAM, RecordIdStrategy.class.getName());
+        props.putIfAbsent(SerdeConfig.ARTIFACT_RESOLVER_STRATEGY, RecordIdStrategy.class.getName());
         // Topic Record Id Strategy (schema = topic name and the full name of the schema (topicName-namespace.name))
-        //props.putIfAbsent(AbstractKafkaSerializer.REGISTRY_ARTIFACT_ID_STRATEGY_CONFIG_PARAM, TopicRecordIdStrategy.class.getName());
+        //props.putIfAbsent(SerdeConfig.ARTIFACT_RESOLVER_STRATEGY, TopicRecordIdStrategy.class.getName());
 
         // Global Id Strategies (implementations of GlobalIdStrategy)
         //props.putIfAbsent(AbstractKafkaSerializer.REGISTRY_GLOBAL_ID_STRATEGY_CONFIG_PARAM, FindLatestIdStrategy.class.getName());
@@ -113,6 +112,14 @@
         //props.putIfAbsent(AbstractKafkaSerializer.REGISTRY_GLOBAL_ID_STRATEGY_CONFIG_PARAM, GetOrCreateIdStrategy.class.getName());
         //props.putIfAbsent(AbstractKafkaSerializer.REGISTRY_GLOBAL_ID_STRATEGY_CONFIG_PARAM, 
AutoRegisterIdStrategy.class.getName());
 
+        // Auto-register Artifact into Service Registry
+        // If this property is `false` then you will be affected by the issue: https://github.com/Apicurio/apicurio-registry/issues/1592
+        // This property is declared as `false` at the moment to make it possible to follow that issue in Apicurio Registry
+        props.putIfAbsent(SerdeConfig.AUTO_REGISTER_ARTIFACT, false);
+
+        // Using JSON encoding (to help in debugging)
+        props.put(AvroKafkaSerdeConfig.AVRO_ENCODING, "JSON");
+
         // Acknowledgement
         props.putIfAbsent(ProducerConfig.ACKS_CONFIG, acks);
 
@@ -171,9 +178,9 @@ public Consumer<String, Message> createConsumer() {
         props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, offsetReset);
 
         // Service Registry Integration
-        props.put(AbstractKafkaSerDe.REGISTRY_URL_CONFIG_PARAM, serviceRegistryUrl);
+        props.put(SerdeConfig.REGISTRY_URL, serviceRegistryUrl);
         // Use Specific Avro classes instead of the GenericRecord class definition
-        props.put(AvroDatumProvider.REGISTRY_USE_SPECIFIC_AVRO_READER_CONFIG_PARAM, true);
+        props.put(AvroKafkaSerdeConfig.USE_SPECIFIC_AVRO_READER, true);
 
         return new KafkaConsumer<>(props);
     }
diff --git a/src/main/jkube/service.yml b/src/main/jkube/service.yml
deleted file mode 100644
index aba149c..0000000
--- a/src/main/jkube/service.yml
+++ /dev/null
@@ -1,17 +0,0 @@
-metadata:
-  name: ${project.artifactId}
-  labels:
-    app: ${project.artifactId}
-    group: ${project.groupId}
-    project: ${project.artifactId}
-    provider: jkube
-    expose: "true"
-spec:
-  type: ${jkube.enricher.jkube-service.type}
-  ports:
-    - name: http
-      port: 8181
-      protocol: TCP
-      targetPort: 8181
-  selector:
-    deploymentconfig: ${project.artifactId}
diff --git a/src/main/k8s/role.yml b/src/main/k8s/role.yml
new file mode 100644
index 0000000..88ff671
--- /dev/null
+++ b/src/main/k8s/role.yml
@@ -0,0 +1,24 @@
+---
+kind: Role
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  namespace: amq-streams-demo
+  name: namespace-reader
+rules:
+  - apiGroups: ["", "extensions", "apps"]
+    resources: ["configmaps", "pods", "services", "endpoints", "secrets"]
+    verbs: ["get", "list", "watch"]
+---
+kind: RoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  name: namespace-reader-binding
+  namespace: amq-streams-demo
+subjects:
+  - kind: ServiceAccount
+    name: default
+    apiGroup: ""
+roleRef:
+  kind: Role
+  name: namespace-reader
+  apiGroup: ""
diff --git a/src/main/resources/application.properties b/src/main/resources/application.properties
index f840dc1..74a223a 100644
--- a/src/main/resources/application.properties
+++ b/src/main/resources/application.properties
@@ -2,7 +2,7 @@
 # Kafka Clients Properties
 #
 # Kafka Bootstrap Servers
-#app.kafka.bootstrap-servers = localhost:9092
+%dev.app.kafka.bootstrap-servers = localhost:9092
 app.kafka.bootstrap-servers = my-kafka-kafka-bootstrap:9092
 # Kafka User Credentials
 app.kafka.user.name = application
@@ -33,12 +33,11 @@ app.consumer.offsetReset = earliest
 # Seconds
 app.consumer.poolTimeout = 10
 
+#
 # Service Registry
-#apicurio.registry.url = http://localhost:8080/api
-apicurio.registry.url = http://service-registry.amq-streams-demo.apps-crc.testing/api
-#apicurio.registry.url = http://service-registry-service-ljj9n:8080/api
-#apicurio.registry.cached = true
-#apicurio.registry.use-specific-avro-reader = true
+#
+%dev.apicurio.registry.url = http://localhost:8080/apis/registry/v2
+apicurio.registry.url = http://service-registry-service:8080/apis/registry/v2
 
 #
 # Quarkus Kafka Properties (SmallRye Kafka connector)
@@ -58,10 +57,11 @@ 
mp.messaging.incoming.messages.connector=smallrye-kafka mp.messaging.incoming.messages.group.id=${app.consumer.groupId}-mp-incoming-channel mp.messaging.incoming.messages.topic=messages mp.messaging.incoming.messages.key.deserializer=org.apache.kafka.common.serialization.StringDeserializer -mp.messaging.incoming.messages.value.deserializer=io.apicurio.registry.utils.serde.AvroKafkaDeserializer +mp.messaging.incoming.messages.value.deserializer=io.apicurio.registry.serde.avro.AvroKafkaDeserializer mp.messaging.incoming.messages.properties.partition.assignment.strategy=org.apache.kafka.clients.consumer.RoundRobinAssignor mp.messaging.incoming.messages.apicurio.registry.url=${apicurio.registry.url} -mp.messaging.incoming.messages.apicurio.registry.avro-datum-provider=io.apicurio.registry.utils.serde.avro.ReflectAvroDatumProvider +#mp.messaging.incoming.messages.apicurio.registry.avro-datum-provider=io.apicurio.registry.utils.serde.avro.ReflectAvroDatumProvider +mp.messaging.incoming.messages.apicurio.registry.use-specific-avro-reader=true # # Swagger UI Properties @@ -76,3 +76,9 @@ mp.openapi.extensions.smallrye.info.description=Sample Spring Boot REST service mp.openapi.extensions.smallrye.info.contact.email=jromanmartin@gmail.com mp.openapi.extensions.smallrye.info.license.name=Apache 2.0 mp.openapi.extensions.smallrye.info.license.url=http://www.apache.org/licenses/LICENSE-2.0.html + +# +# Quarkus Packaging properties +# +# Legacy build from Quarkus 1.12 +quarkus.package.type=uber-jar diff --git a/src/main/strimzi/kafka/kafka-ha.yml b/src/main/strimzi/kafka/kafka-ha.yml index b26f830..bea5681 100644 --- a/src/main/strimzi/kafka/kafka-ha.yml +++ b/src/main/strimzi/kafka/kafka-ha.yml @@ -1,4 +1,4 @@ -apiVersion: kafka.strimzi.io/v1beta1 +apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: labels: @@ -6,10 +6,10 @@ metadata: name: my-kafka spec: kafka: - version: 2.5.0 + version: 2.7.0 config: # Log message format - log.message.format.version: "2.5" + log.message.format.version: "2.7" # default replication factors for automatically created topics default.replication.factor: 3 # The default number of log partitions per topic @@ -41,117 +41,21 @@ spec: # How long are delete records retained?. 
Default value: 86400000 (24 hours) log.cleaner.delete.retention.ms: 86400000 listeners: - plain: + - name: plain + port: 9092 + tls: false + type: internal authentication: type: scram-sha-512 - tls: + - name: tls + port: 9093 + tls: true + type: internal authentication: type: scram-sha-512 -# # Only valid in OpenShift -# external: -# type: route -# authentication: -# type: scram-sha-512 livenessProbe: initialDelaySeconds: 90 timeoutSeconds: 5 - metrics: - # Inspired by config from Kafka 2.0.0 example rules: - # https://github.com/prometheus/jmx_exporter/blob/master/example_configs/kafka-2_0_0.yml - lowercaseOutputName: true - rules: - # Special cases and very specific rules - - pattern : kafka.server<>Value - name: kafka_server_$1_$2 - type: GAUGE - labels: - clientId: "$3" - topic: "$4" - partition: "$5" - - pattern : kafka.server<>Value - name: kafka_server_$1_$2 - type: GAUGE - labels: - clientId: "$3" - broker: "$4:$5" - # Some percent metrics use MeanRate attribute - # Ex) kafka.server<>MeanRate - - pattern: kafka.(\w+)<>MeanRate - name: kafka_$1_$2_$3_percent - type: GAUGE - # Generic gauges for percents - - pattern: kafka.(\w+)<>Value - name: kafka_$1_$2_$3_percent - type: GAUGE - - pattern: kafka.(\w+)<>Value - name: kafka_$1_$2_$3_percent - type: GAUGE - labels: - "$4": "$5" - # Generic per-second counters with 0-2 key/value pairs - - pattern: kafka.(\w+)<>Count - name: kafka_$1_$2_$3_total - type: COUNTER - labels: - "$4": "$5" - "$6": "$7" - - pattern: kafka.(\w+)<>Count - name: kafka_$1_$2_$3_total - type: COUNTER - labels: - "$4": "$5" - - pattern: kafka.(\w+)<>Count - name: kafka_$1_$2_$3_total - type: COUNTER - # Generic gauges with 0-2 key/value pairs - - pattern: kafka.(\w+)<>Value - name: kafka_$1_$2_$3 - type: GAUGE - labels: - "$4": "$5" - "$6": "$7" - - pattern: kafka.(\w+)<>Value - name: kafka_$1_$2_$3 - type: GAUGE - labels: - "$4": "$5" - - pattern: kafka.(\w+)<>Value - name: kafka_$1_$2_$3 - type: GAUGE - # Emulate Prometheus 'Summary' metrics for the exported 'Histogram's. - # Note that these are missing the '_sum' metric! 
- - pattern: kafka.(\w+)<>Count - name: kafka_$1_$2_$3_count - type: COUNTER - labels: - "$4": "$5" - "$6": "$7" - - pattern: kafka.(\w+)<>(\d+)thPercentile - name: kafka_$1_$2_$3 - type: GAUGE - labels: - "$4": "$5" - "$6": "$7" - quantile: "0.$8" - - pattern: kafka.(\w+)<>Count - name: kafka_$1_$2_$3_count - type: COUNTER - labels: - "$4": "$5" - - pattern: kafka.(\w+)<>(\d+)thPercentile - name: kafka_$1_$2_$3 - type: GAUGE - labels: - "$4": "$5" - quantile: "0.$6" - - pattern: kafka.(\w+)<>Count - name: kafka_$1_$2_$3_count - type: COUNTER - - pattern: kafka.(\w+)<>(\d+)thPercentile - name: kafka_$1_$2_$3 - type: GAUGE - labels: - quantile: "0.$4" readinessProbe: initialDelaySeconds: 60 timeoutSeconds: 5 @@ -168,50 +72,10 @@ spec: metadata: labels: custom-strimzi-label: my-kafka - terminationGracePeriodSeconds: 120 zookeeper: livenessProbe: initialDelaySeconds: 90 timeoutSeconds: 5 - metrics: - # Inspired by Zookeeper rules - # https://github.com/prometheus/jmx_exporter/blob/master/example_configs/zookeeper.yaml - lowercaseOutputName: true - rules: - # replicated Zookeeper - - pattern: "org.apache.ZooKeeperService<>(\\w+)" - name: "zookeeper_$2" - type: GAUGE - - pattern: "org.apache.ZooKeeperService<>(\\w+)" - name: "zookeeper_$3" - type: GAUGE - labels: - replicaId: "$2" - - pattern: "org.apache.ZooKeeperService<>(Packets.*)" - name: "zookeeper_$4" - type: COUNTER - labels: - replicaId: "$2" - memberType: "$3" - - pattern: "org.apache.ZooKeeperService<>(\\w+)" - name: "zookeeper_$4" - type: GAUGE - labels: - replicaId: "$2" - memberType: "$3" - - pattern: "org.apache.ZooKeeperService<>(\\w+)" - name: "zookeeper_$4_$5" - type: GAUGE - labels: - replicaId: "$2" - memberType: "$3" - # standalone Zookeeper - - pattern: "org.apache.ZooKeeperService<>(\\w+)" - type: GAUGE - name: "zookeeper_$2" - - pattern: "org.apache.ZooKeeperService<>(\\w+)" - type: GAUGE - name: "zookeeper_$2" readinessProbe: initialDelaySeconds: 60 timeoutSeconds: 5 diff --git a/src/main/strimzi/kafka/kafka.yml b/src/main/strimzi/kafka/kafka.yml index 2f2d516..02195dc 100644 --- a/src/main/strimzi/kafka/kafka.yml +++ b/src/main/strimzi/kafka/kafka.yml @@ -1,4 +1,4 @@ -apiVersion: kafka.strimzi.io/v1beta1 +apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: labels: @@ -6,10 +6,10 @@ metadata: name: my-kafka spec: kafka: - version: 2.5.0 + version: 2.7.0 config: # Log message format - log.message.format.version: "2.5" + log.message.format.version: "2.7" # default replication factors for automatically created topics default.replication.factor: 1 # The default number of log partitions per topic @@ -41,17 +41,18 @@ spec: # How long are delete records retained?. 
Default value: 86400000 (24 hours) log.cleaner.delete.retention.ms: 86400000 listeners: - plain: + - name: plain + port: 9092 + tls: false + type: internal authentication: type: scram-sha-512 - tls: + - name: tls + port: 9093 + tls: true + type: internal authentication: type: scram-sha-512 -# # Only valid in OpenShift -# external: -# type: route -# authentication: -# type: scram-sha-512 authorization: type: simple livenessProbe: diff --git a/src/main/strimzi/operator/subscription-k8s.yml b/src/main/strimzi/operator/subscription-k8s.yml index 44ca615..32cf042 100644 --- a/src/main/strimzi/operator/subscription-k8s.yml +++ b/src/main/strimzi/operator/subscription-k8s.yml @@ -3,7 +3,7 @@ apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: strimzi-kafka-operator - namespace: amq-streams-demo + namespace: operators spec: channel: stable name: strimzi-kafka-operator diff --git a/src/main/strimzi/topics/kafkatopic-messages-ha.yml b/src/main/strimzi/topics/kafkatopic-messages-ha.yml index 8972e24..2b8698a 100644 --- a/src/main/strimzi/topics/kafkatopic-messages-ha.yml +++ b/src/main/strimzi/topics/kafkatopic-messages-ha.yml @@ -1,4 +1,4 @@ -apiVersion: kafka.strimzi.io/v1beta1 +apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: labels: diff --git a/src/main/strimzi/topics/kafkatopic-messages.yml b/src/main/strimzi/topics/kafkatopic-messages.yml index 71038fd..739641a 100644 --- a/src/main/strimzi/topics/kafkatopic-messages.yml +++ b/src/main/strimzi/topics/kafkatopic-messages.yml @@ -1,4 +1,4 @@ -apiVersion: kafka.strimzi.io/v1beta1 +apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: labels: diff --git a/src/main/strimzi/users/application-user-scram.yml b/src/main/strimzi/users/application-user-scram.yml index a5101fa..ec209fb 100644 --- a/src/main/strimzi/users/application-user-scram.yml +++ b/src/main/strimzi/users/application-user-scram.yml @@ -1,5 +1,5 @@ --- -apiVersion: kafka.strimzi.io/v1beta1 +apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: application diff --git a/src/main/strimzi/users/service-registry-user-scram.yml b/src/main/strimzi/users/service-registry-user-scram.yml index 434e9b8..a156577 100644 --- a/src/main/strimzi/users/service-registry-user-scram.yml +++ b/src/main/strimzi/users/service-registry-user-scram.yml @@ -1,5 +1,5 @@ --- -apiVersion: kafka.strimzi.io/v1beta1 +apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: service-registry-scram @@ -15,86 +15,13 @@ spec: # Name equals to metadata.name property in ApicurioRegistry object - resource: type: group - name: service-registry + name: '*' + patternType: literal operation: Read - # Rules for the Global global-id-topic + # Rules for the kafkasql-journal topic - resource: type: topic - name: global-id-topic - operation: Read - - resource: - type: topic - name: global-id-topic - operation: Describe - - resource: - type: topic - name: global-id-topic - operation: Write - - resource: - type: topic - name: global-id-topic - operation: Create - # Rules for the Global storage-topic - - resource: - type: topic - name: storage-topic - operation: Read - - resource: - type: topic - name: storage-topic - operation: Describe - - resource: - type: topic - name: storage-topic - operation: Write - - resource: - type: topic - name: storage-topic - operation: Create - # Rules for the local topics created by our Service Registry instance - # Prefix value equals to metadata.name property in ApicurioRegistry object - - resource: - type: topic - name: 
service-registry- - patternType: prefix - operation: Read - - resource: - type: topic - name: service-registry- - patternType: prefix - operation: Describe - - resource: - type: topic - name: service-registry- - patternType: prefix - operation: Write - - resource: - type: topic - name: service-registry- - patternType: prefix - operation: Create - # Rules for the local transactionalsIds created by our Service Registry instance - # Prefix equals to metadata.name property in ApicurioRegistry object - - resource: - type: transactionalId - name: service-registry- - patternType: prefix - operation: Describe - - resource: - type: transactionalId - name: service-registry- - patternType: prefix - operation: Write - # Rules for internal Apache Kafka topics - - resource: - type: topic - name: __consumer_offsets - operation: Read - - resource: - type: topic - name: __transaction_state - operation: Read - # Rules for Cluster objects - - resource: - type: cluster - operation: IdempotentWrite + name: kafkasql-journal + patternType: literal + operation: All + diff --git a/src/main/strimzi/users/service-registry-user-tls.yml b/src/main/strimzi/users/service-registry-user-tls.yml index 0d3dc6d..cdadb84 100644 --- a/src/main/strimzi/users/service-registry-user-tls.yml +++ b/src/main/strimzi/users/service-registry-user-tls.yml @@ -1,5 +1,5 @@ --- -apiVersion: kafka.strimzi.io/v1beta1 +apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: service-registry-tls @@ -15,87 +15,12 @@ spec: # Name equals to metadata.name property in ApicurioRegistry object - resource: type: group - name: service-registry + name: '*' + patternType: literal operation: Read - # Rules for the Global global-id-topic + # Rules for the kafkasql-journal topic - resource: type: topic - name: global-id-topic - operation: Read - - resource: - type: topic - name: global-id-topic - operation: Describe - - resource: - type: topic - name: global-id-topic - operation: Write - - resource: - type: topic - name: global-id-topic - operation: Create - # Rules for the Global storage-topic - - resource: - type: topic - name: storage-topic - operation: Read - - resource: - type: topic - name: storage-topic - operation: Describe - - resource: - type: topic - name: storage-topic - operation: Write - - resource: - type: topic - name: storage-topic - operation: Create - # Rules for the local topics created by our Service Registry instance - # Prefix value equals to metadata.name property in ApicurioRegistry object - - resource: - type: topic - name: service-registry- - patternType: prefix - operation: Read - - resource: - type: topic - name: service-registry- - patternType: prefix - operation: Describe - - resource: - type: topic - name: service-registry- - patternType: prefix - operation: Write - - resource: - type: topic - name: service-registry- - patternType: prefix - operation: Create - # Rules for the local transactionalsIds created by our Service Registry instance - # Prefix equals to metadata.name property in ApicurioRegistry object - - resource: - type: transactionalId - name: service-registry- - patternType: prefix - operation: Describe - - resource: - type: transactionalId - name: service-registry- - patternType: prefix - operation: Write - # Rules for internal Apache Kafka topics - - resource: - type: topic - name: __consumer_offsets - operation: Read - - resource: - type: topic - name: __transaction_state - operation: Read - # Rules for Cluster objects - # Name/Prefix equals to ??? 
- - resource: - type: cluster - operation: IdempotentWrite + name: kafkasql-journal + patternType: literal + operation: All