diff --git a/integrations/amazon-security-lake/CONTRIBUTING.md b/integrations/amazon-security-lake/CONTRIBUTING.md
index 4f288b961ca7c..7675aa03c7961 100644
--- a/integrations/amazon-security-lake/CONTRIBUTING.md
+++ b/integrations/amazon-security-lake/CONTRIBUTING.md
@@ -1,6 +1,6 @@
 # Wazuh to Amazon Security Lake Integration Development Guide

-### Deployment guide on Docker
+## Deployment guide on Docker

 A demo of the integration can be started using the content of this folder and Docker. Open a terminal in the `wazuh-indexer/integrations` folder and start the environment.

@@ -8,8 +8,15 @@ A demo of the integration can be started using the content of this folder and Do
 docker compose -f ./docker/amazon-security-lake.yml up -d
 ```

-This Docker Compose project will bring a _wazuh-indexer_ node, a _wazuh-dashboard_ node,
-a _logstash_ node, our event generator and an AWS Lambda Python container. On the one hand, the event generator will push events constantly to the indexer, to the `wazuh-alerts-4.x-sample` index by default (refer to the [events generator](./tools/events-generator/README.md) documentation for customization options). On the other hand, Logstash will query for new data and deliver it to output configured in the pipeline, which can be one of `indexer-to-s3` or `indexer-to-file`.
+This Docker Compose project will bring up these services:
+
+- a _wazuh-indexer_ node
+- a _wazuh-dashboard_ node
+- a _logstash_ node
+- our [events generator](./tools/events-generator/README.md)
+- an AWS Lambda Python container
+
+On the one hand, the events generator will constantly push events to the indexer, to the `wazuh-alerts-4.x-sample` index by default (refer to the [events generator](./tools/events-generator/README.md) documentation for customization options). On the other hand, Logstash will query for new data and deliver it to the output configured in the pipeline, which can be one of `indexer-to-s3` or `indexer-to-file`.

 The `indexer-to-s3` pipeline is the method used by the integration. This pipeline delivers the data to an S3 bucket, from which the data is processed using a Lambda function, to finally be sent to the Amazon Security Lake bucket in Parquet format.

@@ -32,7 +39,7 @@ Processed data will be uploaded to http://localhost:9444/ui/wazuh-aws-security-l
 parquet-tools show
 ```

-If the `` variable is set in the container running the AWS Lambda function, intermediate data in OCSF and JSON format will be written to a dedicated bucket. This is enabled by default, writing to the `wazuh-aws-security-lake-ocsf` bucket. Bucket names and additional environment variables can be configured editing the [amazon-security-lake.yml](./docker/amazon-security-lake.yml) file.
+If the `S3_BUCKET_OCSF` variable is set in the container running the AWS Lambda function, intermediate data in OCSF and JSON format will be written to a dedicated bucket. This is enabled by default, writing to the `wazuh-aws-security-lake-ocsf` bucket. Bucket names and additional environment variables can be configured by editing the [amazon-security-lake.yml](./docker/amazon-security-lake.yml) file.

 For development or debugging purposes, you may want to enable hot-reload, test or debug on these files, by using the `--config.reload.automatic`, `--config.test_and_exit` or `--debug` flags, respectively.

@@ -44,3 +51,9 @@ See [README.md](README.md).
 The instructions on that section have been based on
 - [Working with .zip file archives for Python Lambda functions](https://docs.aws.amazon.com/lambda/latest/dg/python-package.html)
 - [Best practices for working with AWS Lambda functions](https://docs.aws.amazon.com/lambda/latest/dg/best-practices.html)
+
+## Makefile
+
+**Docker is required**.
+
+The [Makefile](./Makefile) in this folder automates the generation of a zip deployment package containing the source code and the required dependencies for the AWS Lambda function. Simply run `make` and it will generate the `wazuh_to_amazon_security_lake.zip` file. The main target runs a Docker container to install the Python 3 dependencies locally and zips the source code and the dependencies together.
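A quick way to sanity-check the demo described in the CONTRIBUTING.md changes above is to count the documents the events generator has pushed and to watch Logstash ship them. This is a minimal sketch: the `admin:admin` credentials, port `9200`, and the `logstash` service name are assumptions based on typical Wazuh indexer demo environments, not values taken from the diff.

```bash
# Count events in the sample index (credentials and port are assumptions).
curl -sk -u admin:admin \
  "https://localhost:9200/wazuh-alerts-4.x-sample/_count?pretty"

# Follow the Logstash logs to watch the indexer-to-s3 pipeline at work
# (the service name "logstash" is an assumption).
docker compose -f ./docker/amazon-security-lake.yml logs -f logstash
```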
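The `--config.test_and_exit` and `--config.reload.automatic` flags mentioned in CONTRIBUTING.md can be exercised as follows. The pipeline path is hypothetical; adjust it to wherever the Compose file mounts the `indexer-to-s3` configuration inside the Logstash container.

```bash
# Validate the pipeline configuration and exit (the path is an assumption).
bin/logstash -f /usr/share/logstash/pipeline/indexer-to-s3.conf --config.test_and_exit

# Run Logstash and reload the pipeline automatically whenever the file changes.
bin/logstash -f /usr/share/logstash/pipeline/indexer-to-s3.conf --config.reload.automatic
```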
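The build and cleanup workflow automated by the Makefile described above boils down to the commands below, assuming `unzip` is available locally to inspect the archive:

```bash
# Build the Lambda deployment package (requires Docker).
make

# Inspect the resulting archive before uploading it.
unzip -l wazuh_to_amazon_security_lake.zip

# Remove the package and compiled Python artifacts.
make clean
```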
diff --git a/integrations/amazon-security-lake/Makefile b/integrations/amazon-security-lake/Makefile
index 9a6dd674b37e7..d1c11a0b01585 100644
--- a/integrations/amazon-security-lake/Makefile
+++ b/integrations/amazon-security-lake/Makefile
@@ -25,4 +25,6 @@ $(TARGET):

 clean:
 	@rm -rf $(TARGET)
-	@py3clean .
\ No newline at end of file
+	docker run -v `pwd`:/src -w /src \
+	python:3.12 \
+	py3clean .
\ No newline at end of file
diff --git a/integrations/amazon-security-lake/README.md b/integrations/amazon-security-lake/README.md
index 8025cb4feea56..7af236b61b6bb 100644
--- a/integrations/amazon-security-lake/README.md
+++ b/integrations/amazon-security-lake/README.md
@@ -90,7 +90,7 @@ Follow the [official documentation](https://docs.aws.amazon.com/lambda/latest/dg
 - Configure the runtime to have 512 MB of memory and 30 seconds timeout.
 - Configure a trigger so every object with `.txt` extension uploaded to the S3 bucket created previously invokes the Lambda.
   ![AWS Lambda trigger](./images/asl-lambda-trigger.jpeg)
-- Download the zip containing the integration's code (**insert url**) and upload it to the S3 bucket created previously as per [these instructions](https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-package.html#gettingstarted-package-zip). You can also generate the zip deployment package manually by using the [Makefile](./Makefile) for this purpose (see [CONTRIBUTING](./CONTRIBUTING.md)).
+- Use the [Makefile](./Makefile) to generate the zip package `wazuh_to_amazon_security_lake.zip`, and upload it to the S3 bucket created previously as per [these instructions](https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-package.html#gettingstarted-package-zip). See [CONTRIBUTING](./CONTRIBUTING.md) for details about the Makefile.
 - Configure the Lambda with at least the required _Environment Variables_ below:

   | Environment variable | Required | Value |
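For reference, a hypothetical AWS CLI equivalent of the console steps in the README.md hunk above could look like the sketch below. The function name `wazuh-to-security-lake` and the bucket `my-deployment-bucket` are placeholders, not names from the repository.

```bash
# Match the documented runtime settings: 512 MB of memory, 30-second timeout.
aws lambda update-function-configuration \
  --function-name wazuh-to-security-lake \
  --memory-size 512 \
  --timeout 30

# Upload the package built by the Makefile and point the function at it.
aws s3 cp wazuh_to_amazon_security_lake.zip s3://my-deployment-bucket/
aws lambda update-function-code \
  --function-name wazuh-to-security-lake \
  --s3-bucket my-deployment-bucket \
  --s3-key wazuh_to_amazon_security_lake.zip
```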