Add integration with Elastic #248

Merged · 5 commits · Jun 5, 2024
1 change: 0 additions & 1 deletion integrations/.gitignore
@@ -1,4 +1,3 @@
-elastic
 opensearch
 splunk
 common
2 changes: 1 addition & 1 deletion integrations/amazon-security-lake/README.md
@@ -206,7 +206,7 @@ The Logstash pipeline requires access to the following secrets:

```console
sudo systemctl stop logstash
-sudo -E /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/indexer-to-s3.conf --path.settings /etc/logstash ----config.test_and_exit
+sudo -E /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/indexer-to-s3.conf --path.settings /etc/logstash --config.test_and_exit
```

2. After confirming that the configuration loads correctly without errors, run Logstash as a service.
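   For reference, that step typically looks like this (a sketch assuming Logstash is managed as a systemd service, as in the preceding commands):

   ```console
   sudo systemctl enable logstash
   sudo systemctl start logstash
   ```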
23 changes: 23 additions & 0 deletions integrations/docker/.env
@@ -0,0 +1,23 @@
# Password for the 'elastic' user (at least 6 characters)
ELASTIC_PASSWORD=elastic

# Password for the 'kibana_system' user (at least 6 characters)
KIBANA_PASSWORD=elastic

# Version of Elastic products
STACK_VERSION=8.6.2

# Set the cluster name
CLUSTER_NAME=elastic

# Set to 'basic' or 'trial' to automatically start the 30-day trial
LICENSE=basic

# Port to expose Elasticsearch HTTP API to the host
ES_PORT=9201

# Port to expose Kibana to the host
KIBANA_PORT=5602

# Increase or decrease based on the available host memory (in bytes)
MEM_LIMIT=1073741824
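Docker Compose substitutes these values into `elastic.yml`. To check the rendered configuration before starting the stack (a sketch; run it from the `integrations/docker/` directory so Compose picks up this `.env` automatically):

```console
docker compose -f elastic.yml config
```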
217 changes: 217 additions & 0 deletions integrations/docker/elastic.yml
@@ -0,0 +1,217 @@
name: "elastic-integration"

services:
events-generator:
image: wazuh/indexer-events-generator
build:
context: ../tools/events-generator
container_name: events-generator
depends_on:
wazuh.indexer:
condition: service_healthy
command: bash -c "python run.py -a wazuh.indexer"

wazuh.indexer:
image: opensearchproject/opensearch:2.12.0
container_name: wazuh.indexer
depends_on:
wazuh-certs-generator:
condition: service_completed_successfully
hostname: wazuh.indexer
ports:
- 9200:9200
environment:
- node.name=wazuh.indexer
- discovery.type=single-node
- bootstrap.memory_lock=true
- "DISABLE_INSTALL_DEMO_CONFIG=true"
- plugins.security.ssl.http.enabled=true
- plugins.security.allow_default_init_securityindex=true
- plugins.security.ssl.http.pemcert_filepath=/usr/share/opensearch/config/wazuh.indexer.pem
- plugins.security.ssl.transport.pemcert_filepath=/usr/share/opensearch/config/wazuh.indexer.pem
- plugins.security.ssl.http.pemkey_filepath=/usr/share/opensearch/config/wazuh.indexer-key.pem
- plugins.security.ssl.transport.pemkey_filepath=/usr/share/opensearch/config/wazuh.indexer-key.pem
- plugins.security.ssl.http.pemtrustedcas_filepath=/usr/share/opensearch/config/root-ca.pem
- plugins.security.ssl.transport.pemtrustedcas_filepath=/usr/share/opensearch/config/root-ca.pem
- plugins.security.authcz.admin_dn="CN=wazuh.indexer,OU=Wazuh,O=Wazuh,L=California, C=US"
- "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m"
ulimits:
memlock:
soft: -1
hard: -1
nofile:
soft: 65536
hard: 65536
healthcheck:
test: curl -sku admin:admin https://localhost:9200/_cat/health | grep -q docker-cluster
start_period: 10s
start_interval: 3s
volumes:
- data:/usr/share/opensearch/data
- ./certs/wazuh.indexer.pem:/usr/share/opensearch/config/wazuh.indexer.pem
- ./certs/wazuh.indexer-key.pem:/usr/share/opensearch/config/wazuh.indexer-key.pem
- ./certs/root-ca.pem:/usr/share/opensearch/config/root-ca.pem

wazuh.dashboard:
image: opensearchproject/opensearch-dashboards:2.12.0
container_name: wazuh.dashboard
depends_on:
- wazuh.indexer
hostname: wazuh.dashboard
ports:
- 5601:5601 # Map host port 5601 to container port 5601
expose:
- "5601" # Expose port 5601 for web access to OpenSearch Dashboards
environment:
OPENSEARCH_HOSTS: '["https://wazuh.indexer:9200"]' # Define the OpenSearch nodes that OpenSearch Dashboards will query

wazuh-certs-generator:
image: wazuh/wazuh-certs-generator:0.0.1
hostname: wazuh-certs-generator
container_name: wazuh-certs-generator
entrypoint: sh -c "/entrypoint.sh; chown -R 1000:999 /certificates; chmod 740 /certificates; chmod 440 /certificates/*"
volumes:
- ./certs/:/certificates/
- ./config/certs.yml:/config/certs.yml


# =================================
# Elasticsearch, Kibana and Logstash
# =================================
# https://www.elastic.co/guide/en/elastic-stack-get-started/current/get-started-docker.html

setup:
image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
volumes:
- es_certs:/usr/share/elasticsearch/config/certs
user: '0'
command: >
bash -c '
if [ x${ELASTIC_PASSWORD} == x ]; then
echo "Set the ELASTIC_PASSWORD environment variable in the .env file";
exit 1;
elif [ x${KIBANA_PASSWORD} == x ]; then
echo "Set the KIBANA_PASSWORD environment variable in the .env file";
exit 1;
fi;
if [ ! -f config/certs/ca.zip ]; then
echo "Creating CA";
bin/elasticsearch-certutil ca --silent --pem -out config/certs/ca.zip;
unzip config/certs/ca.zip -d config/certs;
fi;
if [ ! -f config/certs/certs.zip ]; then
echo "Creating certs";
echo -ne \
"instances:\n"\
" - name: es01\n"\
" dns:\n"\
" - es01\n"\
" - localhost\n"\
" ip:\n"\
" - 127.0.0.1\n"\
> config/certs/instances.yml;
bin/elasticsearch-certutil cert --silent --pem -out config/certs/certs.zip --in config/certs/instances.yml --ca-cert config/certs/ca/ca.crt --ca-key config/certs/ca/ca.key;
unzip config/certs/certs.zip -d config/certs;
fi;
echo "Setting file permissions"
chown -R 1000:1000 config/certs;
find . -type d -exec chmod 750 \{\} \;;
find . -type f -exec chmod 640 \{\} \;;
echo "Waiting for Elasticsearch availability";
until curl -s --cacert config/certs/ca/ca.crt https://es01:9200 | grep -q "missing authentication credentials"; do sleep 30; done;
echo "Setting kibana_system password";
until curl -s -X POST --cacert config/certs/ca/ca.crt -u "elastic:${ELASTIC_PASSWORD}" -H "Content-Type: application/json" https://es01:9200/_security/user/kibana_system/_password -d "{\"password\":\"${KIBANA_PASSWORD}\"}" | grep -q "^{}"; do sleep 10; done;
echo "All done!";
'
healthcheck:
test: ['CMD-SHELL', '[ -f config/certs/es01/es01.crt ]']
interval: 1s
timeout: 5s
retries: 120

es01:
depends_on:
setup:
condition: service_healthy
image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
volumes:
- es_certs:/usr/share/elasticsearch/config/certs
ports:
- ${ES_PORT}:9200
environment:
- node.name=es01
- cluster.name=${CLUSTER_NAME}
- cluster.initial_master_nodes=es01
- ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
- bootstrap.memory_lock=true
- xpack.security.enabled=true
- xpack.security.http.ssl.enabled=true
- xpack.security.http.ssl.key=certs/es01/es01.key
- xpack.security.http.ssl.certificate=certs/es01/es01.crt
- xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
- xpack.security.transport.ssl.enabled=true
- xpack.security.transport.ssl.key=certs/es01/es01.key
- xpack.security.transport.ssl.certificate=certs/es01/es01.crt
- xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
- xpack.security.transport.ssl.verification_mode=certificate
- xpack.license.self_generated.type=${LICENSE}
mem_limit: ${MEM_LIMIT}
ulimits:
memlock:
soft: -1
hard: -1
healthcheck:
test:
[
'CMD-SHELL',
"curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
]
interval: 10s
timeout: 10s
retries: 120

kibana:
depends_on:
es01:
condition: service_healthy
image: docker.elastic.co/kibana/kibana:${STACK_VERSION}
volumes:
- es_certs:/usr/share/kibana/config/certs
ports:
- ${KIBANA_PORT}:5601
environment:
- SERVERNAME=kibana
- ELASTICSEARCH_HOSTS=https://es01:9200
- ELASTICSEARCH_USERNAME=kibana_system
- ELASTICSEARCH_PASSWORD=${KIBANA_PASSWORD}
- ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES=config/certs/ca/ca.crt
mem_limit: ${MEM_LIMIT}
healthcheck:
test:
[
'CMD-SHELL',
"curl -s -I http://localhost:5601 | grep -q 'HTTP/1.1 302 Found'",
]
interval: 10s
timeout: 10s
retries: 120

logstash:
depends_on:
es01:
condition: service_healthy
image: logstash-oss:8.6.2
build:
context: ../elastic
environment:
LOG_LEVEL: info
MONITORING_ENABLED: false
volumes:
- ../elastic/logstash/pipeline:/usr/share/logstash/pipeline
- ./certs/root-ca.pem:/usr/share/logstash/root-ca.pem
- es_certs:/etc/certs/elastic
command: logstash -f /usr/share/logstash/pipeline/indexer-to-elastic.conf

volumes:
data:
es_certs:
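Once the project is running, a quick way to confirm that events flow from the Wazuh indexer through Logstash into Elasticsearch is to inspect the service states and follow the Logstash logs (a sketch, run from the `integrations/` directory):

```console
docker compose -f ./docker/elastic.yml ps
docker compose -f ./docker/elastic.yml logs -f logstash
```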
19 changes: 19 additions & 0 deletions integrations/elastic/Dockerfile
@@ -0,0 +1,19 @@
FROM opensearchproject/logstash-oss-with-opensearch-output-plugin:latest

ENV LOGSTASH_KEYSTORE_PASS "SecretPassword"
ENV LS_PATH "/usr/share/logstash"
USER logstash

# https://github.com/elastic/logstash/issues/6600
# Install plugin
RUN LS_JAVA_OPTS="-Xms1024m -Xmx1024m" logstash-plugin install logstash-input-opensearch

COPY --chown=logstash:logstash logstash/pipeline /usr/share/logstash/pipeline
# Copy and run the setup.sh script to create and configure a keystore for Logstash.
COPY --chown=logstash:logstash logstash/setup.sh /usr/share/logstash/bin/setup.sh
RUN bash /usr/share/logstash/bin/setup.sh

# Disable ECS compatibility
RUN echo "pipeline.ecs_compatibility: disabled" >> /usr/share/logstash/config/logstash.yml

WORKDIR /usr/share/logstash
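The `logstash/setup.sh` script referenced above is not shown in this view; it creates a Logstash keystore protected by `LOGSTASH_KEYSTORE_PASS` and stores the credentials the pipeline reads at runtime. A minimal sketch of such a script (the key names, values, and stdin piping are illustrative assumptions; the actual script in this PR may differ):

```bash
#!/bin/bash
set -e

# Create the keystore; it is encrypted with LOGSTASH_KEYSTORE_PASS,
# which is already exported in the Dockerfile.
echo "y" | /usr/share/logstash/bin/logstash-keystore create

# Add the secrets referenced by the pipeline configuration
# (hypothetical key names; values are normally provided interactively).
echo "admin" | /usr/share/logstash/bin/logstash-keystore add INDEXER_USERNAME
echo "admin" | /usr/share/logstash/bin/logstash-keystore add INDEXER_PASSWORD
```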
49 changes: 49 additions & 0 deletions integrations/elastic/README.md
@@ -0,0 +1,49 @@
# Wazuh to Elastic Integration Developer Guide

This document describes how to prepare a Docker Compose environment to test the integration between Wazuh and the Elastic Stack. For a detailed guide on how to integrate Wazuh with Elastic Stack, please refer to the [Wazuh documentation](https://documentation.wazuh.com/current/integrations-guide/elastic-stack/index.html).

## Requirements

- Docker and Docker Compose installed.

## Usage

1. Clone the Wazuh repository and navigate to the `integrations/` folder (see the example after this list).
2. Run the following command to start the environment:
```bash
docker compose -f ./docker/elastic.yml up -d
```
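
As referenced in step 1, cloning the repository and entering the integrations folder can look like this (a sketch assuming the `wazuh/wazuh-indexer` repository; adjust the URL or branch to your setup):

```bash
git clone https://github.com/wazuh/wazuh-indexer.git
cd wazuh-indexer/integrations
```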

The Docker Compose project will bring up the following services:

- 1x Events Generator (learn more in [wazuh-indexer/integrations/tools/events-generator](../tools/events-generator/README.md)).
- 1x Wazuh Indexer (OpenSearch).
- 1x Wazuh Dashboards (OpenSearch Dashboards).
- 1x Logstash.
- 1x Elasticsearch.
- 1x Kibana.

For custom configurations, you may need to modify these files:

- [docker/elastic.yml](../docker/elastic.yml): Docker Compose file.
- [docker/.env](../docker/.env): Environment variables file.
- [elastic/logstash/pipeline/indexer-to-elastic.conf](./logstash/pipeline/indexer-to-elastic.conf): Logstash Pipeline configuration file.

Check the files above for **credentials**, ports, and other configurations.

| Service | Address | Credentials |
| ---------------- | ---------------------- | --------------- |
| Wazuh Indexer | https://localhost:9200 | admin:admin |
| Wazuh Dashboards | https://localhost:5601 | admin:admin |
| Elastic | https://localhost:9201 | elastic:elastic |
| Kibana | https://localhost:5602 | elastic:elastic |
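
For example, once the stack is up you can check that both search engines respond, using the addresses and credentials from the table above (`-k` skips verification of the self-signed certificates):

```bash
curl -k -u admin:admin https://localhost:9200
curl -k -u elastic:elastic https://localhost:9201
```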

## Importing the dashboards

The dashboards for Elastic are included in [dashboards.ndjson](./dashboards.ndjson). The steps to import them into Kibana are the following:

- In Kibana, expand the left menu and go to `Stack Management`.
- Click on `Saved Objects`, select `Import`, click on the `Import` icon, and browse to the dashboards file.
- Click on `Import` and complete the process.

Imported dashboards will appear in the `Dashboards` app on the left menu.
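
Alternatively, the same file can be imported through Kibana's Saved Objects API. A sketch using the Kibana address and credentials from the table above (adjust the scheme and port if your Kibana is exposed differently):

```bash
curl -k -u elastic:elastic -X POST \
  "https://localhost:5602/api/saved_objects/_import?overwrite=true" \
  -H "kbn-xsrf: true" \
  --form file=@dashboards.ndjson
```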
9 changes: 9 additions & 0 deletions integrations/elastic/dashboards.ndjson

Large diffs are not rendered by default.
