Merge remote-tracking branch 'origin/release-32.x' into develop
mershad-manesh committed Jun 29, 2023
2 parents 7a335b3 + 60c9a95 commit b291e37
Showing 33 changed files with 882 additions and 335 deletions.
16 changes: 16 additions & 0 deletions .circleci/scripts/structure-settings.xml
@@ -10,6 +10,10 @@
<id>opennms-repo</id>
<name>OpenNMS Mega-Repository</name>
<url>https://maven.opennms.org/repository/everything/</url>
<releases>
<enabled>true</enabled>
<updatePolicy>never</updatePolicy>
</releases>
<snapshots>
<enabled>false</enabled>
</snapshots>
@@ -18,6 +22,10 @@
<id>central</id>
<name>Maven Central</name>
<url>https://repo1.maven.org/maven2/</url>
<releases>
<enabled>true</enabled>
<updatePolicy>never</updatePolicy>
</releases>
<snapshots>
<enabled>false</enabled>
</snapshots>
@@ -28,6 +36,10 @@
<id>opennms-repo</id>
<name>OpenNMS Mega-Repository</name>
<url>https://maven.opennms.org/repository/everything/</url>
<releases>
<enabled>true</enabled>
<updatePolicy>never</updatePolicy>
</releases>
<snapshots>
<enabled>false</enabled>
</snapshots>
@@ -36,6 +48,10 @@
<id>central</id>
<name>Maven Central</name>
<url>https://repo1.maven.org/maven2/</url>
<releases>
<enabled>true</enabled>
<updatePolicy>never</updatePolicy>
</releases>
<snapshots>
<enabled>false</enabled>
</snapshots>
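The hunks above apply the same pattern to every repository definition in the settings file: release artifacts are resolved once and never re-checked, and snapshots are disabled. Assembled, each `<repository>` entry ends up looking like this sketch (the `id`, `name`, and `url` are taken from the diff above):

```xml
<repository>
  <id>opennms-repo</id>
  <name>OpenNMS Mega-Repository</name>
  <url>https://maven.opennms.org/repository/everything/</url>
  <releases>
    <!-- Resolve release artifacts, but never re-check them once cached -->
    <enabled>true</enabled>
    <updatePolicy>never</updatePolicy>
  </releases>
  <snapshots>
    <!-- Do not resolve snapshot artifacts from this repository -->
    <enabled>false</enabled>
  </snapshots>
</repository>
```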
773 changes: 578 additions & 195 deletions core/web-assets/package-lock.json

Large diffs are not rendered by default.

4 changes: 2 additions & 2 deletions debian/get-build-args.sh
@@ -9,9 +9,9 @@ if [ -f "$OPENNMS_SETTINGS_XML" ]; then
fi

if [ -z "$OPENNMS_ENABLE_SNAPSHOTS" ] || [ "$OPENNMS_ENABLE_SNAPSHOTS" = 1 ]; then
ARGS+=(-Denable.snapshots=true -DupdatePolicy=always)
ARGS+=(-Denable.snapshots=true)
else
ARGS+=(-Denable.snapshots=false -DupdatePolicy=never)
ARGS+=(-Denable.snapshots=false)
fi

case "${CIRCLE_BRANCH}" in
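Dropping the explicit `-DupdatePolicy` flags means the update policy is now fixed in the repository definitions rather than passed per build; the script only decides whether snapshots are enabled. The remaining branch logic can be sketched as a small standalone function (an illustration only, not the actual `get-build-args.sh`):

```shell
#!/bin/sh
# Mirror of the conditional above: snapshots are enabled unless
# OPENNMS_ENABLE_SNAPSHOTS is set to something other than 1.
get_snapshot_arg() {
  if [ -z "$OPENNMS_ENABLE_SNAPSHOTS" ] || [ "$OPENNMS_ENABLE_SNAPSHOTS" = 1 ]; then
    echo "-Denable.snapshots=true"
  else
    echo "-Denable.snapshots=false"
  fi
}

# Example: with the variable unset, snapshots default to enabled.
get_snapshot_arg
```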
7 changes: 7 additions & 0 deletions dependencies/jasper/pom.xml
@@ -113,6 +113,13 @@
<repository>
<id>jaspersoft-third-party</id>
<url>https://maven.opennms.org/repository/thirdparty/</url>
<releases>
<enabled>true</enabled>
<updatePolicy>never</updatePolicy>
</releases>
<snapshots>
<enabled>false</enabled>
</snapshots>
</repository>
</repositories>
</project>
1 change: 1 addition & 0 deletions dependencies/oia/pom.xml
@@ -59,6 +59,7 @@
</releases>
<snapshots>
<enabled>true</enabled>
<updatePolicy>${updatePolicy}</updatePolicy>
</snapshots>
</repository>
</repositories>
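Unlike the hard-coded `never` values elsewhere in this commit, the snapshot policy here is parameterized via the `${updatePolicy}` Maven property. This diff does not show where that property is defined; a plausible arrangement (hypothetical, for illustration) is a default in a parent POM that can be overridden on the command line:

```xml
<!-- Hypothetical parent-POM default; the actual location of this
     property in the OpenNMS build is not shown in this diff. -->
<properties>
  <updatePolicy>never</updatePolicy>
</properties>
```

With such a default in place, an individual build could still force re-checking with `mvn -DupdatePolicy=always …`.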
19 changes: 17 additions & 2 deletions docs/modules/deployment/pages/minion/docker/minion.adoc
@@ -28,7 +28,7 @@ services:
command: ["-c"]
volumes:
- ./minion-config.yaml:/opt/minion/minion-config.yaml<5>
- ./scv.jce:/opt/minion/scv.jce<6>
#- ./keystore/scv.jce:/opt/minion/scv.jce<6>
healthcheck:
test: "/health.sh"<7>
interval: 30s
@@ -45,6 +45,7 @@ services:
<4> Optional. Use to control the maximum Java heap size.
<5> Configuration file for connectivity and features
<6> Keystore file with encrypted credentials for authenticating broker endpoints.
This line should initially be commented out; uncomment it after the keystore file has been created on the local system.
<7> Run our health check to indicate the Minion is ready. It uses the `opennms:health-check` internally running in Karaf.
<8> Publish ports for Syslog, SNMP trap listener, and the SSH access to the Karaf shell.

@@ -88,10 +89,24 @@ netmgt:

TIP: To run with Apache Kafka or configure flow listeners, see the configuration reference in the link:https://github.com/OpenNMS/opennms/blob/master/opennms-container/minion/CONFD_README.md[Confd readme].

Next, we need to create the keystore file and add the credentials for connecting to the core {page-component-title} server.

.Initialize the keystore with credentials
[source, console]
----
docker-compose run -v $(pwd):/keystore minion -s
mkdir keystore
chown 10001:10001 keystore<1>
docker-compose run -v $(pwd)/keystore:/keystore minion -s<2>
----
<1> Assign ownership so the container user account can write within the directory.
<2> When prompted, enter the username and password for a `ROLE_MINION` user account on the {page-component-title} server.
You may be prompted for the same username and password twice.

.Edit the `docker-compose.yml` file to uncomment the keystore file
[source, diff]
----
- #- ./keystore/scv.jce:/opt/minion/scv.jce
+ - ./keystore/scv.jce:/opt/minion/scv.jce
----

.Validate your Docker Compose file
16 changes: 5 additions & 11 deletions docs/modules/deployment/pages/sentinel/introduction.adoc
@@ -1,17 +1,11 @@
= Sentinel

Sentinel provides scalability for data processing, including flows and streaming telemetry.
Sentinel provides scalability for data processing of flows and streaming telemetry received by one or more Minions.
It also supports thresholding for streaming telemetry if you are using OpenNMS xref:deployment:time-series-storage/newts/introduction.adoc#ga-opennms-operation-newts[Newts].
If you are using Minions or looking for scalable flow processing, you need Sentinel.
If you need to scale processing capacity for flow and/or streaming telemetry, you need Sentinel.

Sentinel is a Karaf container that handles data processing for OpenNMS and Minion, spawning new containers as necessary to deal with increased data volume.
As your flow and streaming telemetry volumes increase, additional Sentinel instances can be deployed to meet your processing needs.

This section describes how to install the Sentinel to scale individual components.
In most cases, you can disable adapters and listeners in {page-component-title} that are also run by a Sentinel instance.

Keep the following in mind when using Sentinel:

* Use only for distribution of Telemetryd functionality (such as processing flows, or use the existing telemetry adapters to store measurements data to OpenNMS Newts).
* Requires a Minion to work as a (message) producer.
* In most cases, you should disable adapters and listeners in {page-component-title} that are also run by a Sentinel instance.
NOTE: At the moment Sentinel can distribute only flows.
This section describes how to install Sentinel components.
23 changes: 12 additions & 11 deletions docs/modules/deployment/pages/sentinel/runtime/install.adoc
@@ -13,13 +13,14 @@

* Linux physical server or a virtual machine running a supported Linux operating system
* Internet access to download the installation packages
* Ensure DNS is configured, localhost and your servers host name is resolved properly
* Ensure DNS is configured so that `localhost` and your server's host name are resolved properly
* {page-component-title} Core instance runs on latest stable release
* Java installed {compatible-javajdk}
* System user with administrative permissions (sudo) to perform the installation tasks
* Java {compatible-javajdk} is installed
* System user with administrative permissions (`sudo`) to perform the installation tasks
ifeval::["{page-component-title}" == "Horizon"]
NOTE: If you run Debian, you have to install and configure `sudo` yourself.
A guide can be found in the link:https://wiki.debian.org/sudo/[Debian Wiki].
+
NOTE: If you run Debian, you may have to install and configure `sudo` yourself.
See https://wiki.debian.org/sudo/[Debian Wiki] for guidance.
endif::[]

include::../../time-sync.adoc[]
@@ -67,7 +68,7 @@ IMPORTANT: Change the default user/password _admin/admin_ for the Karaf shell an

[{tabs}]
====
CentOS/RHEL 7/8::
CentOS/RHEL 7/8/9::
+
--
include::centos-rhel/secure-karaf.adoc[]
@@ -83,13 +84,13 @@ endif::[]
====

TIP: Password or encryption algorithm changes happen immediately.
It is not required to restart the Sentinel
It is not required to restart the Sentinel.

TIP: By default the Karaf Shell is restricted to 127.0.0.1.
If you want to enable remote access, set `sshHost=0.0.0.0` in `org.apache.karaf.shell.cfg`.
The change is applied immediately and a Sentinel restart is not required.
If you have a firewall running on your host, allow `8301/tcp` to grant access to the Karaf Shell.

== Set up flow processing

To set up flow processing with Sentinel, see xref:operation:deep-dive/flows/sentinel/sentinel.adoc#flows-scaling[scale flows data].
Once you have Sentinel installed, see xref:operation:deep-dive/flows/sentinel/sentinel.adoc#flows-scaling[scale flows data] to set up flow processing.
@@ -5,7 +5,7 @@ The flow query engine supports rendering the top _N_ metrics from pre-aggregated
You can use these statistics to help alleviate computation load on the Elasticsearch cluster, particularly in environments with large volumes of flows (more than 10,000 per second).
To use this functionality, you must <<deep-dive/flows/basic.adoc#kafka-forwarder-config, enable the Kafka forwarder>> and set up the streaming analytics tool to process flows and persist aggregates in Elasticsearch.

Set the following properties in `$OPENNMS_HOME/etc/org.opennms.features.flows.persistence.elastic.cfg` to control the query engine to use aggregated flows:
Set the following properties in `$\{OPENNMS_HOME}/etc/org.opennms.features.flows.persistence.elastic.cfg` to control the query engine to use aggregated flows:

[options="autowidth"]
|===
35 changes: 19 additions & 16 deletions docs/modules/operation/pages/deep-dive/flows/basic.adoc
@@ -57,14 +57,23 @@ elasticIndexStrategy=daily

** See <<deep-dive/elasticsearch/introduction.adoc#ga-elasticsearch-integration-configuration, General Elasticsearch configuration>> for a complete set of configuration options.

== Enable protocols
== Multi-protocol listener

Follow these steps to enable one or more of the protocols that you want to handle.
With most tools, if you are monitoring multiple flow protocols, you must set up a separate listener on its own UDP port for each protocol.
However, {page-component-title} allows a multi-port listener option; this listener, named `Multi-UDP-9999`, is enabled by default and monitors multiple protocols on a single UDP port (`9999`).
The default configuration includes support for Netflow v5, Netflow v9, sFlow, and IPFIX.
You can edit `$\{OPENNMS_HOME}/etc/telemetryd-configuration.xml` to change the port number and add or remove protocols.

IMPORTANT: Make sure your firewall allow list includes the ports that you have configured to receive flow data.
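As a sketch, the `Multi-UDP-9999` listener definition follows the same `listener`/`parser` structure used elsewhere in `telemetryd-configuration.xml`. The parser names and class names below are assumptions that should be verified against the file shipped with your installation:

```xml
<!-- Sketch of a multi-protocol listener: one shared UDP port, one parser
     per protocol. Verify class names against your shipped configuration. -->
<listener name="Multi-UDP-9999"
          class-name="org.opennms.netmgt.telemetry.listeners.UdpListener"
          enabled="true">
    <parameter key="port" value="9999"/>
    <parser name="Netflow-5-Parser"
            class-name="org.opennms.netmgt.telemetry.protocols.netflow.parser.Netflow5UdpParser"
            queue="Netflow-5"/>
    <parser name="Netflow-9-Parser"
            class-name="org.opennms.netmgt.telemetry.protocols.netflow.parser.Netflow9UdpParser"
            queue="Netflow-9"/>
    <!-- sFlow and IPFIX parsers follow the same pattern -->
</listener>
```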

== Enable individual protocols

Follow these steps to enable one or more protocols that you want to handle individually or that are not included in the default multi-protocol listener.

NOTE: This example uses the NetFlow v5 protocol.
You can follow the same steps for any of the other flow-related protocols.

. Edit or create `$\{OPENNMS_HOME}/etc/telemetryd-configuration.xml` to enable protocols:
. Edit `$\{OPENNMS_HOME}/etc/telemetryd-configuration.xml` to enable protocols:
+
[source, xml]
----
@@ -79,21 +88,16 @@ You can follow the same steps for any of the other flow-related protocols.
</adapter>
</queue>
----
+
NOTE: The default configuration file provides example configurations for many protocols.
To enable one of these protocols, find the correct example `listener` and `adapter` elements and change their `enabled` attributes to `true`.

. Reload the daemons to apply your changes:
. Reload the daemon to apply your changes:
+
[source, console]
${OPENNMS_HOME}/bin/send-event.pl -p 'daemonName Telemetryd' uei.opennms.org/internal/reloadDaemonConfig

This configuration opens a UDP socket bound to `0.0.0.0:8877`; NetFlow v5 messages are forwarded to this socket (see <<deep-dive/admin/configuration/daemon-config-files.adoc#daemon-reload, Reload daemons by CLI>> for more information).

=== Multi-port listener

Normally, if you are monitoring multiple flow protocols, you must set up a listener on its own UDP port for each protocol.
By default, {page-component-title} allows a multi-port listener option; this monitors multiple protocols on a single UDP port (`9999`).
You can edit `$\{OPENNMS_HOME}/etc/telemetryd-configuration.xml` to change the port number and add or remove protocols.

IMPORTANT: Make sure your firewall allow list includes the ports that you have configured to receive flow data.
This configuration opens a UDP socket bound to `0.0.0.0:8877` to listen for and process NetFlow v5 messages that are forwarded to this port.

== Enable flows on your devices

@@ -163,10 +167,9 @@ admin@opennms()> config:update
After you set up basic flows monitoring, you may want to do some of the following tasks:

* Classify data flows.
+
{page-component-title} resolves flows to application names.
You can create rules to override the default classifications (see xref:deep-dive/flows/classification-engine.adoc[]).

* xref:deep-dive/flows/distributed.adoc[Enable remote flows data collection].
* xref:deep-dive/flows/sentinel/sentinel.adoc[Scale to manage large volumes of flows data].
* xref:deep-dive/flows/distributed.adoc[Enable remote flows data collection] with Minions.
* xref:deep-dive/flows/sentinel/sentinel.adoc[Scale to manage large volumes of flows data] with Sentinels.
* Add https://github.com/OpenNMS/nephron[OpenNMS Nephron] for aggregation and streaming analytics.
@@ -2,8 +2,8 @@
[[ga-flow-support-classification-engine]]
= Flows Classification

For improved flows management, OpenNMS uses a classification engine that applies user- and/or pre-defined rules to filter and classify flows.
(Pre-defined rules are inspired by the https://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.xhtml[IANA Service Name and Transport Protocol Port Number Registry].)
For improved flows management, {page-component-title} uses a classification engine that applies predefined and user-defined rules to filter and classify flows.
The included predefined rules are based on the https://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.xhtml[IANA Service Name and Transport Protocol Port Number Registry].

You can group flows by a combination of parameters: source/destination port, source/destination address, IP protocol, and exporter filter.
Once set up, you can view classification groups on the home page.
@@ -66,7 +66,7 @@ Assume that you define the following rules:
{page-component-title} receives the following flows, classified according to the rules defined above:

[options="header"]
[cols="1,3"]
[cols="3,1"]
|===
| Flow
| Classification
@@ -243,4 +243,4 @@ An example of an evaluation:
| group 3
| 2
| rule 3.2
|===
|===
@@ -1,5 +1,5 @@
[[ga-flow-support-data-collection]]
= Data collection of flow applications
= Data Collection of Flow Applications

Flows are categorized in applications using the <<deep-dive/flows/classification-engine.adoc#ga-flow-support-classification-engine, Classification Engine>>.
OpenNMS supports summing up the bytesIn/bytesOut data of flow records based on these flow applications and the collection of this data.
8 changes: 4 additions & 4 deletions docs/modules/operation/pages/deep-dive/flows/distributed.adoc
@@ -1,6 +1,6 @@

[[flows-remote]]
= Using Minions as a flow collector
= Using Minions as a Flow Collector

Beyond a basic flows setup, you may want to add a Minion to collect flows data from hard-to-reach or remote locations.

@@ -21,7 +21,7 @@ Make sure you do the following:
This example enables a generic listener for the NetFlow v5 protocol on Minion.

IMPORTANT: NetFlow v5 uses the generic UDP listener, but other protocols require a specific listener.
See the examples in `$OPENNMS_HOME/etc/telemetryd-configuration.xml`, or <<reference:telemetryd/listeners/introduction.adoc#ref-listener, Telemetryd Listener Reference>> for details.
See the examples in `$\{OPENNMS_HOME}/etc/telemetryd-configuration.xml`, or <<reference:telemetryd/listeners/introduction.adoc#ref-listener, Telemetryd Listener Reference>> for details.

To enable and configure a listener for NetFlow v5 on Minion, connect to the Karaf Console and set the following properties:

@@ -36,7 +36,7 @@ config:property-set parsers.0.class-name org.opennms.netmgt.telemetry.protocols.
config:update
----

TIP: If using a configuration management, you can create and use the properties file as startup configuration in `$MINION_HOME/etc/org.opennms.features.telemetry.listeners-udp-8877.cfg`.
TIP: If you are using configuration management, you can create and use the properties file as startup configuration in `$\{MINION_HOME}/etc/org.opennms.features.telemetry.listeners-udp-8877.cfg`.

[source, properties]
----
@@ -75,4 +75,4 @@ The value the lookup uses corresponds to the following fields from each protocol

| BMP
| bgpId
|===
|===
@@ -1,20 +1,20 @@
[source, karaf]
----
config:edit --alias ipfix --factory org.opennms.features.telemetry.listeners
config:edit --alias ipfix --factory org.opennms.features.telemetry.adapters
config:property-set name IPFIX<1>
config:property-set adapters.0.name IPFIX-Adapter<2>
config:property-set adapters.0.class-name org.opennms.netmgt.telemetry.protocols.netflow.adapter.ipfix.IpfixAdapter<3>
config:update
----

<1> Queue name from which Sentinel will fetch messages.
The Minion parser name for IPFIX must match this name.
By default, for {page-component-title} components, the queue name convention is `IPFIX`.
<2> Set a name for the IPFIX adapter.
<3> Assign an adapter to enrich IPFIX messages.

The configuration is persisted with the suffix specified as alias in `etc/org.opennms.features.telemetry.adapters-ipfix.cfg`.

TIP: To process multiple protocols, increase the index `0` in the adapters name and class name accordingly for additional protocols.
TIP: When processing multiple protocols from the same queue, include additional adapters by adding additional `name` and `class-name` properties, increasing the index `0` for each pair.
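Following that convention, the persisted `etc/org.opennms.features.telemetry.adapters-ipfix.cfg` produced by the commands above would contain one `name`/`class-name` pair per adapter, indexed from `0` (a sketch; the second pair is hypothetical, for illustration only):

```properties
# Result of the config:property-set commands above
name = IPFIX
adapters.0.name = IPFIX-Adapter
adapters.0.class-name = org.opennms.netmgt.telemetry.protocols.netflow.adapter.ipfix.IpfixAdapter

# A second adapter on the same queue would add the next index
# (names and class shown as placeholders):
#adapters.1.name = <adapter-name>
#adapters.1.class-name = <adapter-class>
```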

.Run health-check to verify adapter configuration
[source, karaf]
@@ -30,4 +30,3 @@ Verifying the health of the container
...
Verifying Adapter IPFIX-Adapter (org.opennms.netmgt.telemetry.protocols.netflow.adapter.ipfix.IpfixAdapter) [ Success ]
----
