The todo-backend quickstart demonstrates how to implement a backend that exposes an HTTP API with JAX-RS to manage a list of ToDo items that are persisted in a database with JPA.
This quickstart shows how to set up a local deployment of this backend as well as a deployment on OpenShift that connects to a PostgreSQL database also hosted on OpenShift.
- The backend exposes an HTTP API to manage a list of todos that complies with the specs defined at todobackend.com.
- It requires a connection to a PostgreSQL database to persist the todos.
- It uses the Bootable Jar for local and cloud deployment.
- It can be built with {productName} S2I images for cloud deployment.
- It is deployed on OpenShift using the Helm Chart for {productName}.
This backend is built and deployed as a Bootable Jar that provisions the {productName} application server and all the feature packs it needs for its features.
The layers are defined in the pom.xml file, in the <configuration> section of the org.wildfly.plugins:wildfly-jar-maven-plugin plugin:
<layers>
<layer>cloud-server</layer>
<layer>postgresql-datasource</layer>
</layers>
The cloud-server layer provides everything needed to run the backend on OpenShift. This also includes access to Jakarta EE APIs such as CDI, JAX-RS, JPA, etc. This layer comes from the {productName} feature pack provided by the Bootable Jar plugin:
<feature-pack>
<location>wildfly@maven(org.jboss.universe:community-universe)#${version.server.bootable-jar}</location>
</feature-pack>
The postgresql-datasource layer provides a JDBC driver and DataSource to connect to a PostgreSQL database. It is not provided by the {productName} feature pack but by an extra feature pack:
<feature-pack>
<groupId>org.wildfly</groupId>
<artifactId>wildfly-datasources-galleon-pack</artifactId>
<version>${version.wildfly-datasources-galleon-pack}</version>
</feature-pack>
The Git repository for this feature pack is hosted at https://github.com/wildfly-extras/wildfly-datasources-galleon-pack.
It provides JDBC drivers and datasources for different databases, but for this quickstart, we will only need the postgresql-datasource.
This backend is built using {productName} S2I Builder and Runtime images.
When the image is built, the org.wildfly.plugins:wildfly-maven-plugin plugin provisions the {productName} application server and all the feature packs it needs for its features.
The layers are defined in the pom.xml file, in the <configuration> section of the org.wildfly.plugins:wildfly-maven-plugin plugin:
<layers>
<layer>cloud-server</layer>
<layer>postgresql-datasource</layer>
</layers>
The cloud-server layer provides everything needed to run the backend on OpenShift. This also includes access to Jakarta EE APIs such as CDI, JAX-RS, JPA, etc. This layer comes from the {productName} feature pack provided in the {productName} S2I builder image.
The postgresql-datasource layer provides a JDBC driver and DataSource to connect to a PostgreSQL database. It is also provided by org.wildfly:wildfly-datasources-galleon-pack, which is included in the WildFly S2I image.
The Git repository for this feature pack is hosted at https://github.com/wildfly-extras/wildfly-datasources-galleon-pack.
It provides JDBC drivers and datasources for different databases, but for this quickstart, we will only need the postgresql-datasource.
As mentioned, the JDBC driver and datasource configuration that the backend uses to connect to the PostgreSQL database are provided by the org.wildfly:wildfly-datasources-galleon-pack feature pack.
By default, it exposes a single datasource.
In the backend, the name of this datasource is ToDos and it is specified in the persistence.xml to configure JPA:
<persistence-unit name="primary">
<jta-data-source>java:jboss/datasources/ToDos</jta-data-source>
</persistence-unit>
At runtime, we only need a few environment variables to establish the connection from {productName} to the external PostgreSQL database:
- POSTGRESQL_DATABASE - the name of the database (which will be called todos)
- POSTGRESQL_SERVICE_HOST - the host to connect to the database
- POSTGRESQL_SERVICE_PORT - the port to connect to the database
- POSTGRESQL_USER & POSTGRESQL_PASSWORD - the credentials to connect to the database
- POSTGRESQL_DATASOURCE - the name of the datasource (as mentioned above, it will be ToDos)
The Web frontend for this quickstart uses JavaScript calls to query the backend’s HTTP API.
We must enable Cross-Origin Resource Sharing (CORS) filters in the undertow
subsystem of {productName} to allow
these HTTP requests to succeed.
As we use the Bootable Jar to build the application, we provide a CLI script that contains all the commands to create and configure the CORS filters in Undertow. This script is located at src/scripts/cors_filters.cli.
This script is executed at build time and adds the following HTTP headers to enable CORS:
- Access-Control-Allow-Origin: *
- Access-Control-Allow-Methods: GET, POST, OPTIONS, PUT, DELETE, PATCH
- Access-Control-Allow-Headers: accept, authorization, content-type, x-requested-with
- Access-Control-Allow-Credentials: true
- Access-Control-Max-Age: 1
By default, the backend accepts requests from any origin (*). This is only for simplicity. It is possible to restrict the allowed origin using the CORS_ORIGIN environment variable at runtime.
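For reference, here is a minimal sketch of the kind of CLI commands such a script contains; the filter name below is illustrative, and the actual commands are in src/scripts/cors_filters.cli in the quickstart sources:
# add a response-header filter for one of the CORS headers in the undertow subsystem,
# with the CORS_ORIGIN environment variable as an optional override of the default "*"
/subsystem=undertow/configuration=filter/response-header=cors-allow-origin:add(header-name="Access-Control-Allow-Origin", header-value="${env.CORS_ORIGIN:*}")
# reference the filter from the default host so it is applied to every HTTP response
/subsystem=undertow/server=default-server/host=default-host/filter-ref=cors-allow-origin:add
The same pattern is repeated for the other Access-Control-* headers listed above.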
The backend is packaged as a Bootable Jar and configured to be deployable on OpenShift with the Maven profile bootable-jar-openshift:
$ mvn clean package -P bootable-jar-openshift
Before running the backend locally, we need a local PostgreSQL database that we can connect to.
We use the postgres docker image to create one:
$ docker run --name todo-backend-db \
-e POSTGRES_USER=todos \
-e POSTGRES_PASSWORD=mysecretpassword \
-p 5432:5432 \
postgres
This will create a database named todos that we can connect to on localhost:5432 with the credentials todos / mysecretpassword.
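If you want to make sure the database is up before starting the backend, one way is to open a psql session inside the container we just started (using the container name and credentials from the command above):
# list the databases from inside the todo-backend-db container
$ docker exec -it todo-backend-db psql -U todos -c '\l'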
With the PostgreSQL database running, we can start the backend by passing the required environment variables to connect to the database:
$ POSTGRESQL_DATABASE=todos \
POSTGRESQL_SERVICE_HOST=localhost \
POSTGRESQL_SERVICE_PORT=5432 \
POSTGRESQL_USER=todos \
POSTGRESQL_PASSWORD=mysecretpassword \
POSTGRESQL_DATASOURCE=ToDos \
java -jar target/todo-backend-bootable.jar
...
14:41:58,111 INFO [org.jboss.as.server] (Controller Boot Thread) WFLYSRV0010: Deployed "todo-backend.war" (runtime-name : "ROOT.war")
...
The backend is running, and we can use the HTTP API to manage a list of todos:
# get a list of todos
$ curl http://localhost:8080
[]
# create a todo with the title "This is my first todo item!"
$ curl -X POST -H "Content-Type: application/json" -d '{"title": "This is my first todo item!"}' http://localhost:8080/
{"completed":false,"id":1,"order":0,"title":"This is my first todo item!","url":"https://localhost:8080/1"}%
# get a list of todos with the one that was just created
$ curl http://localhost:8080
[{"completed":false,"id":1,"order":0,"title":"This is my first todo item!","url":"https://localhost:8080/1"}]
The backend is packaged and deployed on a provisioned server:
$ mvn clean package -Pprovisioned-server
Note: Executing the integration tests requires a running PostgreSQL server.
Before running the backend locally, we need a local PostgreSQL database that we can connect to.
We use the postgres docker image to create one:
$ docker run --name todo-backend-db \
-e POSTGRES_USER=todos \
-e POSTGRES_PASSWORD=mysecretpassword \
-p 5432:5432 \
postgres
This will create a database named todos that we can connect to on localhost:5432 with the credentials todos / mysecretpassword.
With the PostgreSQL database running, we can start the backend by passing the required environment variables to connect to the database:
$ JBOSS_HOME=./target/server \
POSTGRESQL_DATABASE=todos \
POSTGRESQL_SERVICE_HOST=localhost \
POSTGRESQL_SERVICE_PORT=5432 \
POSTGRESQL_USER=todos \
POSTGRESQL_PASSWORD=mysecretpassword \
POSTGRESQL_DATASOURCE=ToDos \
./target/server/bin/standalone.sh
...
14:41:58,111 INFO [org.jboss.as.server] (Controller Boot Thread) WFLYSRV0010: Deployed "todo-backend.war" (runtime-name : "todo-backend.war")
...
The backend is running, and we can use the HTTP API to manage a list of todos:
# get a list of todos
$ curl http://localhost:8080/todo-backend
[]
# create a todo with the title "This is my first todo item!"
$ curl -X POST -H "Content-Type: application/json" -d '{"title": "This is my first todo item!"}' http://localhost:8080/todo-backend
{"completed":false,"id":1,"order":0,"title":"This is my first todo item!","url":"https://localhost:8080/todo-backend/1"}%
# get a list of todos with the one that was just created
$ curl http://localhost:8080/todo-backend
[{"completed":false,"id":1,"order":0,"title":"This is my first todo item!","url":"https://localhost:8080/todo-backend/1"}]
Note: You may also execute those tests against a running bootable jar application, but you will need to add
- You must be logged in to OpenShift and have an oc client to connect to OpenShift.
- Helm must be installed to deploy the backend on OpenShift.
Once you have installed Helm, you need to add the repository that provides Helm Charts for {productName}:
$ helm repo add wildfly https://docs.wildfly.org/wildfly-charts/
"wildfly" has been added to your repositories
$ helm search repo wildfly
NAME CHART VERSION APP VERSION DESCRIPTION
wildfly/wildfly ... ... Build and Deploy WildFly applications on OpenShift
wildfly/wildfly-common ... ... A library chart for WildFly-based applications
$ oc new-app postgresql-ephemeral \
-p DATABASE_SERVICE_NAME=todo-backend-db \
-p POSTGRESQL_DATABASE=todos
This will create a PostgreSQL database named todos on OpenShift that can be accessed on port 5432 through the service todo-backend-db.
We don’t need to copy the credentials to connect to the database as we will retrieve them later from the todo-backend-db secret that was created when the database was deployed.
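If you want to inspect these generated credentials yourself, they can be read directly from that secret; the key names below (database-user and database-password) are the ones typically created by the postgresql-ephemeral template:
# decode the credentials stored in the todo-backend-db secret
$ oc get secret todo-backend-db -o jsonpath='{.data.database-user}' | base64 -d
$ oc get secret todo-backend-db -o jsonpath='{.data.database-password}' | base64 -d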
The backend will be built and deployed on OpenShift with a Helm Chart for {productName}.
$ helm install todo-backend --set build.ref={WildFlyQuickStartRepoTag} -f https://raw.githubusercontent.com/wildfly/wildfly-charts/main/examples/todo-backend/todo-backend-bootable-jar.yaml {helmChartName}
NAME: todo-backend
...
STATUS: deployed
REVISION: 1
The Helm Chart for this quickstart contains all the information to build an image from the source code using Bootable Jar:
build:
uri: https://github.com/wildfly/quickstart.git
mode: bootable-jar
This will create a new deployment on OpenShift and deploy the application.
If you want to see all the configuration elements to customize your deployment you can use the following command:
$ helm show readme {helmChartName}
Let’s wait for the application to be built and deployed:
$ oc get deployment {artifactId} -w
NAME READY UP-TO-DATE AVAILABLE AGE
{artifactId} 0/1 1 0 31s
...
{artifactId} 1/1 1 1 4m31s
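While the deployment is not yet available, you can follow the image build triggered by the Helm release; the chart typically creates a BuildConfig named after the release, so the name below is an assumption based on the helm install command above:
# list the builds and follow the logs of the latest one for the todo-backend release
$ oc get builds
$ oc logs -f buildconfig/todo-backend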
The backend will be built and deployed on OpenShift with a Helm Chart for {productName}.
Add the bitnami repository, which provides a Helm chart for PostgreSQL:
$ helm repo add bitnami https://charts.bitnami.com/bitnami
"bitnami" has been added to your repositories
Install the full application (database + backend).
$ helm dependency update todo-backend-chart/
$ helm install todo-backend todo-backend-chart/
NAME: todo-backend
...
STATUS: deployed
REVISION: 1
The Helm Chart for this quickstart contains all the information to build an image from the source code using S2I and install it with the database:
dependencies:
- name: postgresql
repository: https://charts.bitnami.com/bitnami
version: ...
- name: wildfly
repository: http://docs.wildfly.org/wildfly-charts/
version: ...
Any configuration specified by this chart is described in its README, which is displayed in the OpenShift Dev console or can be shown with the command:
$ helm show readme {helmChartName}
Let’s wait for the application to be built and deployed:
$ oc get deployment todo-backend -w
NAME READY UP-TO-DATE AVAILABLE AGE
todo-backend 0/1 1 0 31s
...
todo-backend 1/1 1 1 4m31s
The Helm Chart also contains the environment variables required to connect to the PostgreSQL database.
In local deployment the credentials were passed directly as the values of the environment variables.
For OpenShift, we rely on secrets so that the credentials are never copied outside OpenShift:
deploy:
env:
- name: POSTGRESQL_PASSWORD
valueFrom:
secretKeyRef:
key: database-password
name: todo-backend-db
When the application is deployed, the value for POSTGRESQL_PASSWORD will be taken from the key database-password in the secret todo-backend-db.
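A quick way to check that the value was actually injected (assuming the deployment is named todo-backend, as above) is to list the PostgreSQL-related environment variables inside the running pod:
# show the PostgreSQL environment variables resolved in the pod, including the password taken from the secret
$ oc exec deployment/todo-backend -- printenv | grep POSTGRESQL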
Once the backend is deployed on OpenShift, it can be accessed from the route todo-backend.
Let’s find the host that we can use to connect to this backend:
$ oc get route todo-backend -o jsonpath="{.spec.host}"
todo-backend-jmesnil1-dev.apps.sandbox.x8i5.p1.openshiftapps.com
This value will be different for every installation of the backend.
Warning: Make sure to prepend the host with https:// to access the backend.
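You can then query the backend through this route, for example to get the (initially empty) list of todos returned by a fresh deployment; the host is resolved with the same oc command as above, and the route is assumed to be exposed over TLS:
# call the HTTP API through the OpenShift route
$ curl https://$(oc get route todo-backend -o jsonpath="{.spec.host}")
[]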
We can verify that this application is properly working as a ToDo Backend by running its specs on it.
Once all tests pass, we can use the todobackend client to have a Web application connected to the backend.
Note: todobackend.com is an external service used to showcase this quickstart. It might not always be functional, but this does not impact the availability of this backend.
The backend can be deleted from OpenShift by running the command:
$ helm uninstall todo-backend
release "todo-backend" uninstalled
The PostgreSQL database can be deleted from OpenShift by running the commands:
$ oc delete all -l template=postgresql-ephemeral-template
replicationcontroller "todo-backend-db-1" deleted
service "todo-backend-db" deleted
deploymentconfig.apps.openshift.io "todo-backend-db" deleted
$ oc delete secret todo-backend-db
secret "todo-backend-db" deleted
This quickstart shows how the datasource feature pack provided by {productName} simplifies the deployment on OpenShift of a {productName} Jakarta EE backend that connects to an external database and exposes an HTTP API.
The use of a Bootable Jar deployment makes it seamless to move from a local deployment for development to a deployment on OpenShift.