Remove unused code and fix ASL documentation
AlexRuiz7 committed Aug 5, 2024
1 parent 13c620b commit bfe9cd1
Showing 3 changed files with 12 additions and 20 deletions.
10 changes: 4 additions & 6 deletions integrations/amazon-security-lake/CONTRIBUTING.md
@@ -16,20 +16,18 @@ This Docker Compose project will bring up these services:
- our [events generator](../tools/events-generator/README.md)
- an AWS Lambda Python container.

-On the one hand, the event generator will push events constantly to the indexer, to the `wazuh-alerts-4.x-sample` index by default (refer to the [events generator](../tools/events-generator/README.md) documentation for customization options). On the other hand, Logstash will query for new data and deliver it to output configured in the pipeline, which can be one of `indexer-to-s3` or `indexer-to-file`.
+On the one hand, the event generator will constantly push events to the indexer, to the `wazuh-alerts-4.x-sample` index by default (refer to the [events generator](../tools/events-generator/README.md) documentation for customization options). On the other hand, Logstash will query for new data and deliver it to the output configured in the `indexer-to-s3` pipeline. This pipeline delivers the data to an S3 bucket, where a Lambda function processes it and finally sends it to the Amazon Security Lake bucket in Parquet format.

-The `indexer-to-s3` pipeline is the method used by the integration. This pipeline delivers the data to an S3 bucket, from which the data is processed using a Lambda function, to finally be sent to the Amazon Security Lake bucket in Parquet format.
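
For context, you can check that the events generator is actually feeding the indexer before looking further down the chain. The snippet below is a minimal sketch in Python: the endpoint, credentials, and self-signed-certificate handling are assumptions about the demo environment, not values taken from this commit.

```python
# Minimal sketch: count documents in the sample alerts index.
# Endpoint and credentials are assumptions about the demo setup.
import json

import requests
import urllib3

urllib3.disable_warnings()  # the demo indexer typically uses self-signed certs

INDEXER_URL = "https://localhost:9200"  # assumed indexer endpoint
AUTH = ("admin", "admin")               # assumed demo credentials

resp = requests.get(
    f"{INDEXER_URL}/wazuh-alerts-4.x-sample/_count",
    auth=AUTH,
    verify=False,  # self-signed certificates in the demo environment
)
resp.raise_for_status()
print(json.dumps(resp.json(), indent=2))  # e.g. {"count": 1234, ...}
```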

-Attach a terminal to the container and start the integration by starting Logstash, as follows:
+The pipeline starts automatically, but if you need to start it manually, attach a terminal to the Logstash container and start the integration using the command below:

```console
-/usr/share/logstash/bin/logstash -f /usr/share/logstash/pipeline/indexer-to-s3.conf --path.settings /etc/logstash
+/usr/share/logstash/bin/logstash -f /usr/share/logstash/pipeline/indexer-to-s3.conf
```

After 5 minutes, the first batch of data will show up in http://localhost:9444/ui/wazuh-aws-security-lake-raw. You'll need to invoke the Lambda function manually, selecting the log file to process.

```bash
-bash amazon-security-lake/src/invoke-lambda.sh <file>
+bash amazon-security-lake/invoke-lambda.sh <file>
```
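
For illustration, the processing step described above can be pictured with the sketch below. This is not the integration's actual code: the event shape, bucket names, and use of boto3/pyarrow are assumptions, and the OCSF mapping itself is omitted.

```python
# Hypothetical sketch of the raw-to-Parquet step; NOT the integration's code.
# Assumes one JSON alert per line in the raw log file, and that boto3 and
# pyarrow are available in the Lambda image.
import json
import os

import boto3
import pyarrow as pa
import pyarrow.parquet as pq

RAW_BUCKET = "wazuh-aws-security-lake-raw"  # assumed, matches the UI URL above
DEST_BUCKET = os.environ.get("AWS_BUCKET", "wazuh-aws-security-lake-parquet")

def lambda_handler(event, context):
    s3 = boto3.client("s3")
    key = event["key"]  # assumed event shape: {"key": "<log file to process>"}

    # Read the raw log file and parse one JSON alert per line.
    body = s3.get_object(Bucket=RAW_BUCKET, Key=key)["Body"].read()
    alerts = [json.loads(line) for line in body.splitlines() if line.strip()]

    # Map the alerts to OCSF here (omitted), then serialize to Parquet.
    table = pa.Table.from_pylist(alerts)
    pq.write_table(table, "/tmp/alerts.parquet")

    # Upload the Parquet file to the destination bucket.
    s3.upload_file("/tmp/alerts.parquet", DEST_BUCKET, key + ".parquet")
    return {"processed": len(alerts)}
```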

Processed data will be uploaded to http://localhost:9444/ui/wazuh-aws-security-lake-parquet. Click on any file to download it, and check its content using `parquet-tools`. Just make sure to install the virtual environment first, through [requirements.txt](./requirements.txt).
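
If you would rather not set up `parquet-tools`, a few lines of Python can inspect a downloaded file just as well; pyarrow here is an assumed dependency, and any Parquet reader works.

```python
# Inspect a Parquet file downloaded from the parquet bucket.
import pyarrow.parquet as pq

table = pq.read_table("alerts.parquet")  # path to the downloaded file
print(table.schema)                      # column names and types
print(table.num_rows)                    # number of alert records
print(table.slice(0, 5).to_pylist())     # first five records as dicts
```
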
22 changes: 8 additions & 14 deletions integrations/docker/compose.amazon-security-lake.yml
@@ -1,4 +1,3 @@
-version: "3.8"
name: "amazon-security-lake"

services:
@@ -79,16 +78,17 @@ services:
      SERVER_SSL_CERTIFICATE: "/usr/share/opensearch-dashboards/config/certs/opensearch.pem"
      OPENSEARCH_SSL_CERTIFICATEAUTHORITIES: "/usr/share/opensearch-dashboards/config/certs/root-ca.pem"

-  wazuh.integration.security.lake:
-    image: wazuh/indexer-security-lake-integration
+  logstash:
+    depends_on:
+      - wazuh.indexer
+    # image: wazuh/indexer-security-lake-integration
+    image: logstash-oss:${LOGSTASH_OSS_VERSION}
    build:
      context: ../logstash
      args:
        - LOGSTASH_OSS_VERSION=${LOGSTASH_OSS_VERSION}
-    container_name: wazuh.integration.security.lake
-    depends_on:
-      - wazuh.indexer
-    hostname: wazuh.integration.security.lake
+    # container_name: wazuh.integration.security.lake
+    # hostname: wazuh.integration.security.lake
    environment:
      LOG_LEVEL: trace
      LOGSTASH_KEYSTORE_PASS: "SecretPassword"
@@ -104,10 +104,8 @@
- "5044:5044"
- "9600:9600"
volumes:
- ../amazon-security-lake/logstash/pipeline:/usr/share/logstash/pipeline # TODO has 1000:1000. logstash's uid is 999
- ../amazon-security-lake/logstash/pipeline:/usr/share/logstash/pipeline
- ./certs/root-ca.pem:/usr/share/logstash/root-ca.pem
- ../amazon-security-lake/src:/usr/share/logstash/amazon-security-lake # TODO use dedicated folder
# - ./credentials:/usr/share/logstash/.aws/credentials # TODO credentials are not commited (missing)

s3.ninja:
image: scireum/s3-ninja:latest
@@ -122,13 +120,9 @@
    image: wazuh/indexer-security-lake-integration:lambda
    build:
      context: ../amazon-security-lake
      dockerfile: ../amazon-security-lake/aws-lambda.dockerfile
    container_name: wazuh.integration.security.lake.aws.lambda
    hostname: wazuh.integration.security.lake.aws.lambda
    environment:
      AWS_ACCESS_KEY_ID: "AKIAIOSFODNN7EXAMPLE"
      AWS_SECRET_ACCESS_KEY: "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
      AWS_REGION: "us-east-1"
      REGION: "us-east-1"
      AWS_BUCKET: "wazuh-aws-security-lake-parquet"
      S3_BUCKET_OCSF: "wazuh-aws-security-lake-ocsf"
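
As a side note, S3 Ninja exposes an S3-compatible API on the same port as its web UI, so any S3 client can inspect the buckets named above. A minimal sketch, assuming port 9444 is published to the host and reusing the example credentials from this compose file:

```python
# Minimal sketch: list objects in a bucket served by the local S3 Ninja.
# Endpoint, port, and credentials mirror the demo compose file; adjust as needed.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:9444",  # S3 Ninja, assumed host port
    aws_access_key_id="AKIAIOSFODNN7EXAMPLE",
    aws_secret_access_key="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
    region_name="us-east-1",
)

objects = s3.list_objects_v2(Bucket="wazuh-aws-security-lake-parquet")
for obj in objects.get("Contents", []):
    print(obj["Key"], obj["Size"])
```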
