---
layout: post
title: "Using the maven-wildfly-plugin to provision the WildFly to use the PostgreSQL datasource."
date: 2024-11-14
tags: wildfly wildfly-maven-plugin postgresql jberet glow
author: liweinan
description: Introducing the use of the wildfly-maven-plugin to provision WildFly with a PostgreSQL datasource.
---

Recently I needed to configure a WildFly server to use a PostgreSQL datasource for some testing, and I wanted to use the `wildfly-maven-plugin` to automate the server provisioning process, so I invested some time in the topic and finally made it work. To sum up what I learned, I have put a usage example in this PR:

* https://github.com/jberet/jberet-examples/pull/8[jberet-examples / add postgresql based repository example #8]

From the above pull request, we can see that the `wildfly-datasources-galleon-pack` feature packfootnote:[https://github.com/wildfly-extras/wildfly-datasources-galleon-pack[wildfly-extras/wildfly-datasources-galleon-pack]] is added to the plugin configuration:

[source,xml]
----
<feature-pack>
    <location>org.wildfly:wildfly-datasources-galleon-pack:8.0.1.Final</location>
</feature-pack>
----

And this layer is used:

[source,xml]
----
<layers>
    <layer>postgresql-driver</layer>
</layers>
----
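
For context, here is a minimal sketch of how these two pieces sit together inside the plugin's `<configuration>` section; the plugin version, the base WildFly feature pack location, and the extra base layer are placeholders, so the actual values in the example project may differ:

[source,xml]
----
<plugin>
    <groupId>org.wildfly.plugins</groupId>
    <artifactId>wildfly-maven-plugin</artifactId>
    <configuration>
        <feature-packs>
            <!-- The base WildFly feature pack (placeholder version) -->
            <feature-pack>
                <location>org.wildfly:wildfly-galleon-pack:34.0.0.Final</location>
            </feature-pack>
            <!-- The datasources feature pack that provides the PostgreSQL layers -->
            <feature-pack>
                <location>org.wildfly:wildfly-datasources-galleon-pack:8.0.1.Final</location>
            </feature-pack>
        </feature-packs>
        <layers>
            <!-- A base server layer (placeholder) plus the PostgreSQL driver layer -->
            <layer>jaxrs-server</layer>
            <layer>postgresql-driver</layer>
        </layers>
    </configuration>
</plugin>
----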

The above layer only adds the PostgreSQL driver to the provisioned WildFly server; it won't add any datasource by default. If I run the server provisioned with the above feature pack and layer using the following command in the example:

[source,bash]
----
$ mvn clean wildfly:dev -Ppostgres
----

I can see from the server log that the PostgreSQL driver is enabled during the server startup process:

[source,txt]
----
22:11:00,626 INFO [org.jboss.as.connector.subsystems.datasources] (ServerService Thread Pool -- 42) WFLYJCA0005: Deploying non-JDBC-compliant driver class org.postgresql.Driver (version 42.5)
22:11:00,631 INFO [org.jboss.as.connector.deployers.jdbc] (MSC service thread 1-6) WFLYJCA0018: Started Driver service with driver-name = postgresql
----

So the feature pack and the layer prepare the provisioned WildFly server with the PostgreSQL driver, and then we can configure the datasource in the server. In the above example, there is a CLI script that adds the PostgreSQL datasourcefootnote:[https://github.com/jberet/jberet-examples/blob/main/deployment/add-postgresql-ds.cli]:

[source,txt]
----
xa-data-source add --name=batch_db --enabled=true --use-java-context=true --use-ccm=true --jndi-name=java:jboss/jsr352/batch_db --xa-datasource-properties={"URL"=>"jdbc:postgresql://localhost:5432/batch_db"} --driver-name=postgresql --password=123 --user-name=batch_user --same-rm-override=false --no-recovery=true
/subsystem=batch-jberet/jdbc-job-repository=batch_db:add(data-source=batch_db)
/subsystem=batch-jberet/:write-attribute(name=default-job-repository,value=batch_db)
----

The above script adds the PostgreSQL datasource I need, and then configures the batch subsystem to use the datasource as its job repository. The script is referenced in the `wildfly-maven-plugin` configuration section:

[source,xml]
----
<scripts>
    <script>${configure.ds.script}</script>
</scripts>
----

The `configure.ds.script` property is configured in the `postgresql` profile:

[source,xml]
----
<profile>
    <id>postgres</id>
    <activation>
        <property>
            <name>postgres</name>
        </property>
    </activation>
    <properties>
        <configure.ds.script>${project.basedir}/add-postgresql-ds.cli</configure.ds.script>
    </properties>
</profile>
----

With the script executed during server startup, I can see from the server log that the datasource is bound:

[source,txt]
----
17:47:36,063 INFO [org.jboss.as.connector.subsystems.datasources] (MSC service thread 1-6) WFLYJCA0001: Bound data source [java:jboss/jsr352/batch_db]
----

Please note that the `scripts` section configuration can be used with the `provision` goal (and with other goals as well, such as `dev`):

[source,xml]
----
<execution>
    <goals>
        <goal>provision</goal>
    </goals>
</execution>
----

If you want to use a script with the `package` goal:

[source,xml]
----
<execution>
    <goals>
        <goal>package</goal>
    </goals>
</execution>
----

Then you should configure the script like this:

[source,xml]
----
<packaging-scripts>
    <packaging-script>
        <scripts>
            <script>${configure.ds.script}</script>
        </scripts>
        <!-- Expressions resolved during server execution -->
        <resolve-expressions>false</resolve-expressions>
    </packaging-script>
</packaging-scripts>
----

Here is the documentation that shows the above configuration method:

* https://docs.wildfly.org/wildfly-maven-plugin/releases/5.0/package-mojo.html#packagingScripts[WildFly Maven Plugin – wildfly:package#packagingScripts]

To sum up, if you want to play with the `jberet-examples` project noted above with a PostgreSQL back-end database, you can use `podman` (or Docker) to start a container that runs a PostgreSQL database:

[source,bash]
----
$ podman run -it -e POSTGRES_PASSWORD=123 -e POSTGRES_USER=batch_user -e POSTGRES_DB=batch_db -p 5432:5432 postgres
----

Then run:

[source,bash]
----
$ mvn clean wildfly:dev -Dpostgres
----

After that, you should be able to play with the example.

That covers the script-based configuration. Besides the `postgresql-driver` layer, the `wildfly-datasources-galleon-pack` also provides other layers, as described in the documentationfootnote:[https://github.com/wildfly-extras/wildfly-datasources-galleon-pack/blob/main/doc/postgresql/README.md]:

____
* `postgresql-default-datasource`: Provision the PostgreSQLDS non xa datasource and configures it as the default one. Depends on postgresql-datasource layer.
* `postgresql-datasource`: Provision the PostgreSQLDS non xa datasource. Depends on postgresql-driver layer.
* `postgresql-driver`: Provision the postgresql driver. This layer installs the JBoss Modules module org.postgresql.jdbc.
____

As the documentation describes, the `postgresql-datasource` and `postgresql-default-datasource` layers define a `PostgreSQLDS` datasource in the provisioned server by default. In addition, the `PostgreSQLDS` datasource uses some built-in parameters to connect to the underlying PostgreSQL database. Here are the parameters that must be defined:

* `POSTGRESQL_DATABASE`
* `POSTGRESQL_PASSWORD`
* `POSTGRESQL_URL`
* `POSTGRESQL_USER`


The example also contains a profile called `postgres-ds` that shows the usage of the `postgresql-datasource` layer:

[source,xml]
----
<layers>
    <layer>postgresql-datasource</layer>
</layers>
----

The profile defines the following environment variables for the `PostgreSQLDS` datasource:

[source,xml]
----
<env>
    <POSTGRESQL_JNDI>java:jboss/jsr352/batch_db</POSTGRESQL_JNDI>
    <POSTGRESQL_DATABASE>batch_db</POSTGRESQL_DATABASE>
    <POSTGRESQL_PASSWORD>123</POSTGRESQL_PASSWORD>
    <POSTGRESQL_URL>jdbc:postgresql://localhost:5432/batch_db</POSTGRESQL_URL>
    <POSTGRESQL_USER>batch_user</POSTGRESQL_USER>
</env>
----
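
To see how these pieces might be combined, here is a rough sketch of the `postgres-ds` profile, with the layer and the environment variables placed in the plugin configuration; this layout is an approximation, so check the example project in the PR for the exact structure:

[source,xml]
----
<profile>
    <id>postgres-ds</id>
    <build>
        <plugins>
            <plugin>
                <groupId>org.wildfly.plugins</groupId>
                <artifactId>wildfly-maven-plugin</artifactId>
                <configuration>
                    <layers>
                        <layer>postgresql-datasource</layer>
                    </layers>
                    <!-- Connection settings consumed by the PostgreSQLDS datasource -->
                    <env>
                        <POSTGRESQL_JNDI>java:jboss/jsr352/batch_db</POSTGRESQL_JNDI>
                        <POSTGRESQL_DATABASE>batch_db</POSTGRESQL_DATABASE>
                        <POSTGRESQL_PASSWORD>123</POSTGRESQL_PASSWORD>
                        <POSTGRESQL_URL>jdbc:postgresql://localhost:5432/batch_db</POSTGRESQL_URL>
                        <POSTGRESQL_USER>batch_user</POSTGRESQL_USER>
                    </env>
                </configuration>
            </plugin>
        </plugins>
    </build>
</profile>
----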

With the above configuration in place, if the provisioned server is run with `mvn clean wildfly:dev -Ppostgres-ds`, we can check the datasource by using the CLI tool to connect to the WildFly server:

[source,bash]
----
$ ./jboss-cli.sh --connect
[standalone@localhost:9990 /]
----

And then we can check the configured datasource in the server:

[source,txt]
----
[standalone@localhost:9990 /] /subsystem=datasources:read-resource
{
    "outcome" => "success",
    "result" => {
        "data-source" => {"PostgreSQLDS" => undefined},
        "jdbc-driver" => {"postgresql" => undefined},
        "xa-data-source" => undefined
    }
}
----

You can see that `PostgreSQLDS` is now the configured datasource. Then we can check the `default-job-repository` used by the `batch-jberet` subsystem:

[source,txt]
----
[standalone@localhost:9990 /] /subsystem=batch-jberet:read-resource
{
    "outcome" => "success",
    "result" => {
        "restart-jobs-on-resume" => true,
        "security-domain" => "ApplicationDomain",
        "default-job-repository" => "batch_db",
        "default-thread-pool" => "batch",
        "in-memory-job-repository" => {"in-memory" => undefined},
        "jdbc-job-repository" => {"batch_db" => undefined},
        "thread-factory" => undefined,
        "thread-pool" => {"batch" => undefined}
    }
}
----

As the output above shows, the `default-job-repository` is configured to use `batch_db`. Finally, we can check the definition of `batch_db`:

[source,txt]
----
[standalone@localhost:9990 /] /subsystem=batch-jberet/jdbc-job-repository=batch_db:read-resource
{
    "outcome" => "success",
    "result" => {
        "data-source" => "PostgreSQLDS",
        "execution-records-limit" => undefined
    }
}
----

It's clear that the `data-source` in use is `PostgreSQLDS`. This configuration is done by the `enable-jdbc-job-repo.cli` script:

[source,txt]
----
/subsystem=batch-jberet/jdbc-job-repository=batch_db:add(data-source=PostgreSQLDS)
/subsystem=batch-jberet/:write-attribute(name=default-job-repository,value=batch_db)
----

The above script is configured to be executed in the `postgres-ds` profile of the example:

[source,xml]
----
<properties>
    <configure.ds.script>${project.basedir}/enable-jdbc-job-repo.cli</configure.ds.script>
</properties>
----

If you check the configuration by running the example with the `-Ppostgres` profile instead, you can see the following output from the CLI:

[source,txt]
----
[standalone@localhost:9990 /] /subsystem=batch-jberet/jdbc-job-repository=batch_db:read-resource
{
    "outcome" => "success",
    "result" => {
        "data-source" => "batch_db",
        "execution-records-limit" => undefined
    }
}
----

This is expected, because with the `postgresql-driver` layer we added the `batch_db` datasource manually via the CLI script.

That covers the usage of the `wildfly-datasources-galleon-pack` with the `wildfly-maven-plugin`. An alternative way to do the configuration is to use the `add-resource` goal of the `wildfly-maven-plugin`. The WildFly Maven Plugin documentation shows its usage:

* https://docs.wildfly.org/wildfly-maven-plugin/releases/5.0/add-resource-example.html[WildFly Maven Plugin – Adding Resources Examples]

With that configuration, the datasource is added while the `add-resource` goal runs. Because this method depends on the `add-resource` goal, it requires the server to be running first, and the PostgreSQL driver to be deployed to it (the PostgreSQL driver needs to be added to the project's dependencies section so that it can be deployed).
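
For reference, a configuration following that documented approach looks roughly like the sketch below; the execution id, resource address, and the PostgreSQL property values here are illustrative assumptions, so check them against the linked documentation:

[source,xml]
----
<plugin>
    <groupId>org.wildfly.plugins</groupId>
    <artifactId>wildfly-maven-plugin</artifactId>
    <executions>
        <execution>
            <id>add-postgresql-datasource</id>
            <goals>
                <goal>add-resource</goal>
            </goals>
            <configuration>
                <!-- The datasource resource added to the running server -->
                <address>subsystem=datasources,data-source=batch_db</address>
                <resources>
                    <resource>
                        <properties>
                            <jndi-name>java:jboss/jsr352/batch_db</jndi-name>
                            <enabled>true</enabled>
                            <connection-url>jdbc:postgresql://localhost:5432/batch_db</connection-url>
                            <driver-name>postgresql</driver-name>
                            <user-name>batch_user</user-name>
                            <password>123</password>
                        </properties>
                    </resource>
                </resources>
            </configuration>
        </execution>
    </executions>
</plugin>
----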

In addition, there is another way to do the PostgreSQL datasource configuration: using WildFly Glowfootnote:[https://github.com/wildfly/wildfly-glow[wildfly/wildfly-glow: Galleon Layers Output from War: Automatic discovery of WildFly provisioning information from an application.]] in the `wildfly-maven-plugin`. Here is the relevant document describing its usage, followed by a rough sketch of the configuration:

* https://www.wildfly.org/guides/database-integrating-with-postgresql[Integrating with a PostgreSQL database]
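
With Glow, the plugin discovers the required Galleon layers by scanning the deployment itself. The guide above configures it roughly as in the following sketch; the `postgresql` add-on name and the element layout are taken from my reading of the guide, so treat them as assumptions and verify against it:

[source,xml]
----
<plugin>
    <groupId>org.wildfly.plugins</groupId>
    <artifactId>wildfly-maven-plugin</artifactId>
    <configuration>
        <!-- Let WildFly Glow scan the deployment and pick the layers -->
        <discover-provisioning-info>
            <!-- The add-on pulls in the PostgreSQL datasource support -->
            <add-ons>
                <add-on>postgresql</add-on>
            </add-ons>
        </discover-provisioning-info>
    </configuration>
</plugin>
----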

Please note that the `batch-processing` examplefootnote:[https://github.com/wildfly/quickstart/tree/main/batch-processing[quickstart/batch-processing at main · wildfly/quickstart]] in the WildFly Quickstarts uses the above Glow solution for the datasource configuration, and here is a related pull request that contains the discussion around the Glow usage:

* https://github.com/wildfly/quickstart/pull/973[WFLY-19790 Replaces -ds.xml deprecated filed with Jakarta’s DataSou… by emmartins · Pull Request #973 · wildfly/quickstart]

The above pull request also contains an example usage of the `jakarta.annotation.sql.DataSourceDefinition` annotation, which eliminates the need for a CLI script to configure the datasourcefootnote:[https://github.com/wildfly/quickstart/pull/973/files#diff-c2ef4683cc221a472071d924c4598af19d1152ad40f3b773cf4fe9c60fbc686d[WFLY-19790 Replaces -ds.xml deprecated filed with Jakarta’s DataSou… by emmartins · Pull Request #973 · wildfly/quickstart]].

Personally, I prefer to use the `wildfly-datasources-galleon-pack` to configure the datasource, because I can either use the default datasource layer configuration or configure it manually by choosing different layers, but you can always choose the solution that best fits your own project's requirements.

=== References