diff --git a/doc/loadtestscenario/CrawlUrl.png b/doc/loadtestscenario/CrawlUrl.png
new file mode 100644
index 0000000..738bb10
Binary files /dev/null and b/doc/loadtestscenario/CrawlUrl.png differ
diff --git a/doc/loadtestscenario/IntellijAutomatorApplication.png b/doc/loadtestscenario/IntellijAutomatorApplication.png
new file mode 100644
index 0000000..b3a9846
Binary files /dev/null and b/doc/loadtestscenario/IntellijAutomatorApplication.png differ
diff --git a/doc/loadtestscenario/IntellijCLIExecution.png b/doc/loadtestscenario/IntellijCLIExecution.png
new file mode 100644
index 0000000..2a2bc66
Binary files /dev/null and b/doc/loadtestscenario/IntellijCLIExecution.png differ
diff --git a/doc/loadtestscenario/ProcessAutomatorScenario.png b/doc/loadtestscenario/ProcessAutomatorScenario.png
new file mode 100644
index 0000000..ca62a15
Binary files /dev/null and b/doc/loadtestscenario/ProcessAutomatorScenario.png differ
diff --git a/doc/loadtestscenario/README.md b/doc/loadtestscenario/README.md
new file mode 100644
index 0000000..f67516b
--- /dev/null
+++ b/doc/loadtestscenario/README.md
@@ -0,0 +1,36 @@
+# Load Test scenario
+
+A load test scenario is different from a unit test. In a load test, the goal is to mimic production:
+
+* in terms of process instance creation
+* in terms of service tasks, using the real service tasks or simulating them
+
+A load test scenario sets up an environment, runs it for a fixed duration (for example, 30 minutes), and checks
+throughput objectives. For example, a load test scenario may be:
+
+* Create 10 process instances per minute in the process "Loan application"
+* Simulate the service task "GetCreditScore": each execution needs 2 minutes to complete.
+Run 2 workers on this service task, each with 10 threads.
+In 80% of executions, the worker returns a credit score lower than 500.
+* Check the objective that 10 process instances can be completed every minute
+
+* Because GetCreditScore takes 2 minutes to execute, the throughput during the first two minutes will be 0
+(no process instance will be completed), so a warmup of 3 minutes is necessary before monitoring
+the objective
+
+Process Automator executes processes on a platform. Some services may already be available on the platform, so
+only the missing pieces need to be simulated. Conversely, if all services are available, the
+scenario will just create process instances and check objectives.
+
+## Tutorial
+A complete example showing how to run a scenario, and how to change the Zeebe parameters to reach the expected throughput, is
+available in the [Tutorial](Tutorial.md)
+
+## Create process instance
+
+## Simulate a service task
+
+## Simulate a user task
+
+## Objectives
+
diff --git a/doc/loadtestscenario/RunCrawlUrl-1.png b/doc/loadtestscenario/RunCrawlUrl-1.png
new file mode 100644
index 0000000..877e3a6
Binary files /dev/null and b/doc/loadtestscenario/RunCrawlUrl-1.png differ
diff --git a/doc/loadtestscenario/StartCheckInOperate.png b/doc/loadtestscenario/StartCheckInOperate.png
new file mode 100644
index 0000000..de06dcd
Binary files /dev/null and b/doc/loadtestscenario/StartCheckInOperate.png differ
diff --git a/doc/loadtestscenario/StartFromModeler.png b/doc/loadtestscenario/StartFromModeler.png
new file mode 100644
index 0000000..3677587
Binary files /dev/null and b/doc/loadtestscenario/StartFromModeler.png differ
diff --git a/doc/loadtestscenario/TheoricalCalculation.xlsx b/doc/loadtestscenario/TheoricalCalculation.xlsx
new file mode 100644
index 0000000..a2908e8
Binary files /dev/null and b/doc/loadtestscenario/TheoricalCalculation.xlsx differ
diff --git a/doc/loadtestscenario/Tutorial.md b/doc/loadtestscenario/Tutorial.md
new file mode 100644
index 0000000..eaa365a
--- /dev/null
+++ b/doc/loadtestscenario/Tutorial.md
@@ -0,0 +1,605 @@
+# Load Test Tutorial
+
+The goal of this tutorial is to create
a load test, execute it, check the metrics on Zeebe, and
+change the different parameters to find the correct configuration for the platform.
+
+## Requirements
+A Zeebe cluster is mandatory to run the test. This cluster may be deployed locally, on a cloud, or
+in the SaaS environment. The Grafana page is a plus for understanding how the Zeebe platform reacts.
+
+The first step in the tutorial is to run the Process-Automator locally. The second part runs the Process-Automator inside a cluster, simulating workers and creating pods.
+
+For the first step, a Zeebe cluster is started, accessible from localhost:26500, and Operate is
+accessible on localhost:8081.
+
+## Specifications
+The specifications are the following:
+
+![CrawlUrl](CrawlUrl.png)
+
+In this requirement, each service task needs some time to execute. The maximum time is provided in the comment.
+The Search task fails 10% of the time, and a user must then check the request. To stay in the "load situation",
+the user will decide to process the URL.
+
+An order contains multiple sub-searches. This number may vary from 200 to 1000.
+
+To test the peak, the scenario considers that the number of sub-searches is 1000. The user task will be simulated to accept each request in less than 2 seconds.
+
+The expected throughput is 10 orders per minute.
+
+This process is accessible in `src/main/resources/loadtest/C8CrawlUrl.bpmn`
+
+
+## Deploy the process and check it
+
+Deploy the process in your C8 server. Start a process instance with this information:
+
+````json
+{
+  "loopSearch": [
+    1,
+    2,
+    3,
+    4,
+    5
+  ],
+  "urlNotFound": true,
+  "processAcceptable": true
+}
+````
+
+![Start from Modeler](StartFromModeler.png)
+
+Check via Operate: one process instance is created.
+
+![Check in Operate](StartCheckInOperate.png)
+
+## The scenario
+
+The scenario created is:
+
+![Process Automator Scenario](ProcessAutomatorScenario.png)
+
+**STARTEVENT**
+Two start events are declared: one for the main flow (5 process instances every 30 seconds) and a second one,
+which sets `urlNotFound: true`, to exercise the user task path (one process instance per minute).
+
+
+````json
+[
+  {
+    "type": "STARTEVENT",
+    "taskId": "StartEvent",
+    "processId": "CrawlUrl",
+    "Frequency": "PT30S",
+    "numberOfExecutions": "5",
+    "nbWorkers": "1",
+    "variables": {
+      "urlNotFound": false
+    },
+    "variablesOperation": {
+      "loopcrawl": "generaterandomlist(1000)"
+    }
+  },
+  {
+    "type": "STARTEVENT",
+    "taskId": "StartEvent",
+    "processId": "CrawlUrl",
+    "Frequency": "PT1M",
+    "numberOfExecutions": "1",
+    "nbWorkers": "1",
+    "variables": {
+      "urlNotFound": true
+    },
+    "variablesOperation": {
+      "loopcrawl": "generaterandomlist(1000)"
+    }
+  }
+]
+````
+
+Then, one service task simulator is declared per service task, plus one for the user task.
+
+````json
+[
+  {
+    "type": "SERVICETASK",
+    "topic": "crawl-retrieve",
+    "waitingTime": "PT2S",
+    "modeExecution": "ASYNCHRONOUS"
+  },
+  {
+    "type": "SERVICETASK",
+    "topic": "crawl-search",
+    "waitingTime": "PT10S",
+    "modeExecution": "ASYNCHRONOUS"
+  },
+  {
+    "type": "USER TASK",
+    "taskId": "Activity_Verify",
+    "waitingTime": "PT10S",
+    "modeExecution": "ASYNCHRONOUS",
+    "variables": {
+      "processAcceptable": true
+    }
+  },
+  {
+    "type": "SERVICETASK",
+    "topic": "crawl-add",
+    "waitingTime": "PT5S",
+    "modeExecution": "ASYNCHRONOUS"
+  },
+  {
+    "type": "SERVICETASK",
+    "topic": "crawl-message",
+    "waitingTime": "PT0S",
+    "modeExecution": "ASYNCHRONOUS"
+  },
+  {
+    "type": "SERVICETASK",
+    "topic": "crawl-filter",
+    "waitingTime": "PT1S",
+    "modeExecution": "ASYNCHRONOUS"
+  },
+  {
+    "type": "SERVICETASK",
+    "topic": "crawl-store",
+    "waitingTime": "PT1S",
+    "modeExecution": "ASYNCHRONOUS"
+  }
+]
+````
+
+## Run the scenario
+
+
+### Via the CLI
+
+The CLI tool starts one scenario and stops at the end of the execution.
Because this is a Flow scenario,
+it starts all the commands and, at the end of the test (specified by the duration), checks whether
+the objective has been fulfilled, then stops.
+
+To execute the scenario locally, use this command:
+
+````
+cd target
+java -cp *.jar org.camunda.automator.AutomatorCLI -s Camunda8Local -v -l MAIN -x run src/main/resources/loadtest/C8CrawlUrlScn.json
+````
+
+Camunda8Local is defined in the application.yaml file:
+
+````yaml
+automator.servers:
+  camunda8:
+    name: "Camunda8Local"
+    zeebeGatewayAddress: "127.0.0.1:26500"
+    operateUserName: "demo"
+    operateUserPassword: "demo"
+    operateUrl: "http://localhost:8081"
+    taskListUrl: ""
+    workerExecutionThreads: 500
+    workerMaxJobsActive2: 500
+````
+
+
+
+On Intellij, run this command:
+![Intellij CLI Execution](IntellijCLIExecution.png)
+
+
+### Via the application
+
+Specify in the application parameters what you want to run.
+
+`````yaml
+automator.startup:
+  scenarioPath: ./src/main/resources/loadtest
+  # List of scenarios separated by ;
+  scenarioAtStartup: C8CrawlUrl.json;
+  # DEBUG, INFO, MONITORING, MAIN, NOTHING
+  logLevel: MAIN
+  # string composed with DEPLOYPROCESS, WARMINGUP, CREATION, SERVICETASK (ex: "CREATION", "DEPLOYPROCESS|CREATION|SERVICETASK")
+  policyExecution: DEPLOYPROCESS|WARMINGUP|CREATION|SERVICETASK|USERTASK
+`````
+
+
+Run the command
+
+````
+mvn spring-boot:run
+````
+
+or via Intellij:
+![Intellij Automator Execution](IntellijAutomatorApplication.png)
+
+Note: the application starts the scenario automatically but does not stop by itself.
+
+
+### Run as a pod in the cluster
+
+To be close to the final platform, let's run the Process-Automator not locally but as a pod in the cluster.
+
+The main point is that the pod must embed the scenario to execute it.
+To do that, a specific image must be built. The scenario is saved under src/
+
+
+Build the docker image via the build command.
Replace `pierreyvesmonnet` with your docker user ID.
+
+````
+docker build -t pierreyvesmonnet/processautomator:1.0.0 .
+
+docker push pierreyvesmonnet/processautomator:1.0.0
+````
+
+
+Then, deploy and start the docker image with
+````
+kubectl create -f ku-c8CrawlUrl.yaml
+````
+
+### Run with multiple pods
+
+The idea is to deploy one pod per service task, to be very close to the target architecture and to simulate the same number of connections to Zeebe.
+
+A pod that executes a given task must run only that worker. This is done
+via the parameters `policyExecution` and `filterService`; for example, for the store worker:
+
+```
+-Dautomator.startup.scenarioAtStartup=C8CrawlUrlScn250.json
+-Dautomator.startup.policyExecution=SERVICETASK
+-Dautomator.startup.filterService=crawl-store
+```
+
+
+## Follow the progress
+
+````log
+o.c.a.engine.flow.RunScenarioFlows : ------------ Log advancement at Tue Sep 05 17:16:21 PDT 2023 ----- 85 %, end in 59 s
+o.c.a.engine.flow.RunScenarioFlows : [STARTEVENT CrawlUrl-StartEvent-main#0] RUNNING currentNbThreads[0] PI[{CrawlUrl=org.camunda.automator.engine.RunResult$RecordCreationPI@c94bd18}] delta[86]
+o.c.a.engine.flow.RunScenarioFlows : [STARTEVENT CrawlUrl-StartEvent-main#0] RUNNING currentNbThreads[0] PI[{CrawlUrl=org.camunda.automator.engine.RunResult$RecordCreationPI@71fb8301}] delta[-85]
+o.c.a.engine.flow.RunScenarioFlows : [SERVICETASK crawl-retrieve-main#0] RUNNING currentNbThreads[4] StepsExecuted[99] delta [5] StepsErrors[0]
+o.c.a.engine.flow.RunScenarioFlows : [SERVICETASK crawl-search-main#0] RUNNING currentNbThreads[4] StepsExecuted[1939] delta [100] StepsErrors[0]
+o.c.a.engine.flow.RunScenarioFlows : [SERVICETASK crawl-add-main#0] RUNNING currentNbThreads[4] StepsExecuted[0] delta [0] StepsErrors[0]
+o.c.a.engine.flow.RunScenarioFlows : [SERVICETASK crawl-message-main#0] RUNNING currentNbThreads[4] StepsExecuted[1778] delta [100] StepsErrors[0]
+o.c.a.engine.flow.RunScenarioFlows : [SERVICETASK crawl-filter-main#0] RUNNING currentNbThreads[4] StepsExecuted[1757] delta
[99] StepsErrors[0]
+o.c.a.engine.flow.RunScenarioFlows : [SERVICETASK crawl-store-main#0] RUNNING currentNbThreads[4] StepsExecuted[1777] delta [100] StepsErrors[0]
+`````
+
+
+
+## Check the result
+Via the CLI or via the application, the first execution gives:
+
+![First execution](RunCrawlUrl-1.png)
+
+Also check the Grafana page during the execution.
+
+
+Looking at the result, it is visible that the "search" activity is the bottleneck.
+To improve the performance, the number of workers on this task must be increased.
+
+# Conduct a load test
+
+## Theoretical calculation
+The requirement is 200 process instances every 30 seconds. Let's base the calculation on one minute:
+this is 400 process instances/minute.
+
+
+The first task needs 2 seconds to execute. Executing it for 400 process instances requires 2*400=800 s of work.
+Because this throughput is required per minute, multiple workers must run in parallel.
+One worker delivers 60 s of work per 60 s, so 800/60=13.3 workers (14, rounded up) are required to absorb 800 s.
+
+This can be done in different ways:
+* one application (pod) manages multiple threads: a worker with 14 threads (one thread = one worker)
+* or multiple applications (pods) with one thread each (14 applications/pods)
+* or a mix of the two approaches; the adjustment is made according to the resources.
+
+If the worker's treatment is heavy (processing a movie, for example), one pod can maybe deal with only two or three workers at the same time.
+So, to handle 14 workers, 14/3=5 pods may be necessary.
+
+From the Zeebe client's point of view, a pod can manage up to about 200 threads; beyond that, multithreading is less efficient.
+
+Our scenario is a simulation, so the only limit is about 200 threads per pod.
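As a cross-check, the sizing arithmetic above can be sketched in a few lines of Python. This is only an illustration of the reasoning, not part of the Process-Automator tool; the durations and loop counts come from the CrawlUrl specification, and the 400 PI/minute target comes from the 200-per-30-seconds requirement:

```python
# Worker sizing for a target of 200 process instances / 30 s = 400 PI/minute.
PI_PER_MINUTE = 400

tasks = {            # name: (duration in seconds, executions per process instance)
    "retrieve": (2, 1),
    "search":   (10, 10),
    "message":  (1, 10),
    "add":      (5, 10),
    "filter":   (1, 10),
    "store":    (1, 10),
}

def workers_needed(duration_s, loops, pi_per_minute=PI_PER_MINUTE):
    load_s = duration_s * loops * pi_per_minute  # seconds of work generated per minute
    return load_s / 60                           # one worker delivers 60 s of work per minute

for name, (duration, loops) in tasks.items():
    print(f"{name:8s} -> {workers_needed(duration, loops):6.1f} workers")

# Jobs per second: 1 retrieve + 10 loops * 4 tasks = 41 jobs per process instance
jobs_per_second = 41 * PI_PER_MINUTE / 60
print(f"jobs/second = {jobs_per_second:.0f}")
```

Note that "search" alone needs roughly 667 worker threads, which is why it is the first candidate for extra replicas later in this tutorial.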
The theoretical calculation is:
+
+| Name          | Duration (s) | Loop | Load (s) | Workers |
+|---------------|-------------:|-----:|---------:|--------:|
+| Retrieve Work |            2 |    1 |      800 |    13.3 |
+| Search        |           10 |   10 |    40000 |   666.7 |
+| Message       |            1 |   10 |     4000 |    66.7 |
+| Add           |            5 |   10 |    20000 |   333.3 |
+| Filter        |            1 |   10 |     4000 |    66.7 |
+| Store         |            1 |   10 |     4000 |    66.7 |
+
+Regarding the number of tasks per second, there are 1+(10*4)=41 service tasks per process instance.
+Creating 200 process instances / 30 seconds means (200*2)*41=16400 jobs/minute, i.e. **273 jobs/second**.
+
+
+## Context
+The execution is processed in the cloud with `ku-c8CrawlUrlMultiple.yaml`.
+This deployment creates one pod per service task.
+
+For each change:
+* the change must be applied in the file
+* the deployment must be run:
+````
+kubectl create -f ku-c8CrawlUrlMultiple.yaml
+````
+
+After the test, the deployment must be deleted:
+`````
+kubectl delete -f ku-c8CrawlUrlMultiple.yaml
+`````
+
+## Check the basics
+
+During the load test, access the Grafana page.
+
+**Throughput / process instance creation per second**
+
+This is the first indicator. Are enough process instances created per second?
+
+In our example, the scenario creates 200 process instances / 30 seconds, so the graph should move to about 7 process instances per second.
+
+![Process Instance creation per second ](graphana/ThroughputProcessInstanceCreationPerSecond.png)
+
+**Throughput / process instance completion per second**
+
+This is the last indicator: if the scenario's objective consists of completing
+process instances, it should move to the same level as the creation. Executing a process may need time,
+so this graph should be symmetric to the creation graph but may start later.
+
+![Process Instance completion per second ](graphana/ThroughputProcessInstanceCompletionPerSecond.png)
+
+**Job creation per second**
+
+Job creation and job completion are the second key factors. Creating process instances is generally not a big deal for Zeebe.
Executing jobs (service tasks) is more challenging.
+In our example, a process instance executes 1+(10*4)=41 service tasks.
+Creating 200 process instances / 30 seconds means 16400 jobs/minute, 273 jobs/second.
+
+
+![Job Creation per second ](graphana/ThroughputJobCreationPerSecond.png)
+
+When job completion is lower than job creation, this typically means the workers cannot handle the
+throughput.
+
+![Job Completion per second ](graphana/ThroughputJobCompletionPerSecond.png)
+
+**CPU Usage**
+CPU and memory usage reflect the health of the platform. Elasticsearch is, in general, the biggest CPU consumer.
+If Zeebe is close to the allocated value, it's time to increase it or to create new partitions.
+
+![CPU Usage](graphana/CPU.png)
+
+
+![Memory Usage](graphana/MemoryUsage.png)
+
+**Gateway**
+
+The gateway is the worker's first point of contact. Each worker communicates with a gateway, which queries the Zeebe brokers.
+If the response time is bad, the first option is to increase the number of gateways. However, the issue may come from the number of partitions: there may not be enough partitions, and Zeebe needs time to process the request.
+
+![Grafana Gateway](graphana/Gateway.png)
+
+**GRPC**
+
+The gRPC graph is essential to see how the network is doing and whether all the traffic gets a correct response time.
+If the response time is high, consider increasing the number of gateways or the number of partitions.
+
+![Grafana GRPC](graphana/GRPC.png)
+
+**GRPC Jobs Latency**
+Job latency is essential. This metric gives, when a worker asks for a job or submits a result, the time Zeebe takes to acknowledge the request. If the response time is high, consider increasing the number of gateways or the number of partitions.
+
+![Jobs Latency](graphana/GRPCJobsLatency.png)
+
+**Partitions**
+The number of partitions is a very important metric.
If there is a lot of traffic (a lot of tasks
+to be executed per second), then this is the first scaling parameter.
+
+Attention: too many partitions are counterproductive. A worker connects to the Zeebe Gateway to search
+for new jobs, and the Zeebe Gateway contacts all partitions. When there are too many partitions (over 50), the delay to fetch new jobs increases.
+
+![Partitions](graphana/OverviewPartitions.png)
+
+**Exporter position**
+
+Zeebe maintains a stream to execute a process instance. In this stream, two pointers are running:
+* one for the next command to execute (a gateway request, a worker submission)
+* one for the exporter to Elasticsearch
+
+When there is a lot of data to process, the Elasticsearch pointer may lag behind the execution pointer:
+the stream grows. This may not be a big deal if the flow slows down at some point: the second pointer
+will then catch up. But if it does not, the stream may reach the PVC limit. If this happens,
+the first pointer slows down, and the Zeebe engine stops accepting new jobs: the speed drops to the slowest limit.
+
+In the case of a high-throughput test, keep an eye on this indicator. If the positions differ a lot, you should extend the test period to check the performance when the stream is full, because at that
+moment the speed will drop.
+
+![Last Exporter Position](GraphanaLastExporterPosition.png)
+
+
+
+
+## First execution (Test 1)
+
+As expected, the goal can't be reached.
+
+The creation is close to the objective, but the goal failed:
+````
+ ERROR 1 --- [AutomatorSetup1] org.camunda.automator.engine.RunResult : Objective: FAIL Creation type CREATED processId [CrawlUrl] Objective Creation: ObjectiveCreation[4000] Created(zeebeAPI)[3912] Create(AutomatorRecord)[3884 (97 % ) CreateFail(AutomatorRecord)[1]
+````
+
+Looking at the Grafana overview, one partition gets backpressure:
+![Back pressure](test_1/test-1-Backpressure.png)
+
+Looking at Operate, we can identify which service task was the bottleneck.
+
+![Operate](test_1/test-1-operate.png)
+
+
+Attention: when the test is finished, stop the cluster as soon as possible, because
+multiple pods were created to execute service tasks. If you don't stop these workers, they will continue to process jobs.
+
+Note: to access this log after the creation, run
+````
+kubectl get pods
+````
+
+Find the pod (due to the deployment, it got a prefix). Then run:
+````
+kubectl logs -f ku-processautomator-creation-679b6f64b5-vl69t
+````
+
+
+## Test 2
+
+**What changed**
+To improve the platform, we adjust the workers to the expected values.
+
+For example, for the worker `ku-processautomator-retrievework`, we moved the value to 15 (the theory says 13.3).
+````
+ -Dautomator.servers.camunda8.workerExecutionThreads=15
+````
+
+For the Search, we move to 3 replicas, each with 250 threads:
+
+````
+replicas: 3
+ -Dautomator.servers.camunda8.workerExecutionThreads=250
+````
+
+
+**Execution**
+During the execution, this log shows up:
+````
+STARTEVENT Step #1-STARTEVENT CrawlUrl-StartEvent-CrawlUrl-01#0 Error at creation: [Can't create in process [CrawlUrl] :Expected to execute the command on one of the partitions, but all failed; there are no more partitions available to retry. Please try again.
If the error persists contact your zeebe operator]
+````
+Looking at the Grafana overview, one partition gets backpressure:
+![Back pressure](test_2/test-2-Backpressure.png)
+
+The job latency is high, too:
+![Job Latency](test_2/test-2-JobsLatency.png)
+
+There are more workers now, putting more pressure on the engine.
+The number of jobs per second is better now, close to 200.
+![Job Per seconds](test_2/test-2-JobsPerSecond.png)
+
+The CPU is at 1 for Elasticsearch (which is the maximum limit in the cluster definition)
+and 512 for Zeebe (the maximum limit too).
+![CPU](test_2/test-2-CPU.png)
+
+Note that Elasticsearch is not a bottleneck for Zeebe. To verify that, the "latest position exported" is compared to the
+"latest position executed", and they are close.
+But the issue will be visible in Operate: the dashboard may lag behind reality.
+![Exporter position](test_2/test-2-ExporterPosition.png)
+
+The final result in Operate shows no major bottleneck:
+![Operate](test_2/test-2-Operate.png)
+
+
+In the end, the creation objective is not reached:
+````
+Objective: FAIL Creation type CREATED processId [CrawlUrl] Objective Creation: ObjectiveCreation[4000] Created(zeebeAPI)[1235] Create(AutomatorRecor
+````
+
+
+The next move consists of increasing the number of partitions.
+
+## Test 3: Increase the platform
+
+**What changed**
+A ten-partition platform is used. Elasticsearch will have 5 CPUs to run.
+There is no change in the application cluster.
+
+`````yaml
+zeebe:
+  clusterSize: 10
+  partitionCount: 10
+  replicationFactor: 3
+
+elasticsearch:
+  replicas: 1
+  resources:
+    requests:
+      cpu: "5"
+      memory: "512M"
+    limits:
+      cpu: "5"
+      memory: "2Gi"
+`````
+
+
+**Execution**
+
+The execution went correctly.
+
+The backpressure is very low:
+![back pressure](test_3/test-3-Backpressure.png)
+
+CPU stays at a normal level:
+
+![](test_3/test-3-CPU.png)
+
+Job latency stays at a reasonable level:
+
+![](test_3/test-3-JobsLatency.png)
+
+Jobs per second can now reach 400 as a peak and then run at 300 per second. This level is stable.
+
+![](test_3/test-3-JobsPerSeconds.png)
+
+At the end, Operate shows that almost all processes are completed. Just some tasks are pending in
+a worker (this node may have stopped before the others).
+
+![](test_3/test-3-Operate.png)
+
+Objectives are mainly reached: 3800 process instances were processed.
+
+The remainder comes from the startup of the different pods: the cluster starts the different pods of the scenario at different times.
+
+````log
+2023-09-07 19:10:53.400 INFO 1 --- [AutomatorSetup1] o.c.a.engine.flow.RunScenarioFlows : Objective: SUCCESS type CREATED label [Creation} processId[CrawlUrl] reach 4010 (objective is 4000 ) analysis [Objective Creation: ObjectiveCreation[4000] Created(zeebeAPI)[4010] Create(AutomatorRecord)[4010 (100 % ) CreateFail(AutomatorRecord)[0]}
+2023-09-07 19:10:53.500 ERROR 1 --- [AutomatorSetup1] org.camunda.automator.engine.RunResult : Objective: FAIL Ended type ENDED processId [CrawlUrl] Fail: Ended : 4000 ended expected, 3800 created (95 %),
+````
+Note: the parameters for the Google Cloud are here: [doc/loadtestscenario/test_3/camunda-values-2.yaml](test_3/camunda-values-2.yaml)
+
+## Test 4
+
+To ensure the sizing is correct, we run a new test with more load:
+* increase the warmup to 3 minutes to pass the peak
+* increase the running time to 15 minutes
+* set the number of process instances to 250 / 30 seconds (the requirement is 200 / 30 seconds), a 25% increase
+* increase the number of threads in each worker by 25%
+
+The scenario used is `src/main/resources/loadtest/C8CrawlUrlScn250.json`
+
+Jobs per second is stable:
+
+![Jobs per second](test_4/test-4-JobsPerSeconds.png)
+
+and Operate shows that at the end of the test there is no bottleneck:
+![Operate](test_4/test-4-Operate.png)
+
+The cluster can handle this extra overload.
Objectives can be reached:
+````log
+Objective: SUCCESS type CREATED label [Creation} processId[CrawlUrl] reach 7515 (objective is 4000 ) analysis [Objective Creation: ObjectiveCreation[4000] Created(zeebeAPI)[7727] Create(AutomatorRecord)[7515 (187 % ) CreateFail(AutomatorRecord)[0]}
+Objective: SUCCESS type ENDED label [Ended} processId[CrawlUrl] reach 7467 (objective is 4000 ) analysis [}
+````
+
+# Conclusion
+
+Using the scenario and the Process-Automator tool helps to determine the correct sizing for the platform.
+Analysis tools (Grafana, Operate) are essential to qualify the platform.
diff --git a/doc/loadtestscenario/graphana/CPU.png b/doc/loadtestscenario/graphana/CPU.png
new file mode 100644
index 0000000..8a74b0f
Binary files /dev/null and b/doc/loadtestscenario/graphana/CPU.png differ
diff --git a/doc/loadtestscenario/graphana/GRPC.png b/doc/loadtestscenario/graphana/GRPC.png
new file mode 100644
index 0000000..a8468d5
Binary files /dev/null and b/doc/loadtestscenario/graphana/GRPC.png differ
diff --git a/doc/loadtestscenario/graphana/GRPCJobsLatency.png b/doc/loadtestscenario/graphana/GRPCJobsLatency.png
new file mode 100644
index 0000000..323ac8a
Binary files /dev/null and b/doc/loadtestscenario/graphana/GRPCJobsLatency.png differ
diff --git a/doc/loadtestscenario/graphana/Gateway.png b/doc/loadtestscenario/graphana/Gateway.png
new file mode 100644
index 0000000..2d740d3
Binary files /dev/null and b/doc/loadtestscenario/graphana/Gateway.png differ
diff --git a/doc/loadtestscenario/graphana/MemoryUsage.png b/doc/loadtestscenario/graphana/MemoryUsage.png
new file mode 100644
index 0000000..d9c75cf
Binary files /dev/null and b/doc/loadtestscenario/graphana/MemoryUsage.png differ
diff --git a/doc/loadtestscenario/graphana/OverviewPartitions.png b/doc/loadtestscenario/graphana/OverviewPartitions.png
new file mode 100644
index 0000000..f56f6d7
Binary files /dev/null and b/doc/loadtestscenario/graphana/OverviewPartitions.png differ
diff --git
a/doc/loadtestscenario/graphana/ProcessingLastExporterPosition.png b/doc/loadtestscenario/graphana/ProcessingLastExporterPosition.png new file mode 100644 index 0000000..6d267f7 Binary files /dev/null and b/doc/loadtestscenario/graphana/ProcessingLastExporterPosition.png differ diff --git a/doc/loadtestscenario/graphana/ThroughputJobCompletionPerSecond.png b/doc/loadtestscenario/graphana/ThroughputJobCompletionPerSecond.png new file mode 100644 index 0000000..5cfd170 Binary files /dev/null and b/doc/loadtestscenario/graphana/ThroughputJobCompletionPerSecond.png differ diff --git a/doc/loadtestscenario/graphana/ThroughputJobCreationPerSecond.png b/doc/loadtestscenario/graphana/ThroughputJobCreationPerSecond.png new file mode 100644 index 0000000..2020dae Binary files /dev/null and b/doc/loadtestscenario/graphana/ThroughputJobCreationPerSecond.png differ diff --git a/doc/loadtestscenario/graphana/ThroughputProcessInstanceCompletionPerSecond.png b/doc/loadtestscenario/graphana/ThroughputProcessInstanceCompletionPerSecond.png new file mode 100644 index 0000000..40db49c Binary files /dev/null and b/doc/loadtestscenario/graphana/ThroughputProcessInstanceCompletionPerSecond.png differ diff --git a/doc/loadtestscenario/graphana/ThroughputProcessInstanceCreationPerSecond.png b/doc/loadtestscenario/graphana/ThroughputProcessInstanceCreationPerSecond.png new file mode 100644 index 0000000..db52e12 Binary files /dev/null and b/doc/loadtestscenario/graphana/ThroughputProcessInstanceCreationPerSecond.png differ diff --git a/doc/loadtestscenario/ku-c8CrawlUrl.yaml b/doc/loadtestscenario/ku-c8CrawlUrl.yaml new file mode 100644 index 0000000..d0100e8 --- /dev/null +++ b/doc/loadtestscenario/ku-c8CrawlUrl.yaml @@ -0,0 +1,44 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: ku-processautomator + labels: + app: ku-processautomator +spec: + selector: + matchLabels: + app: ku-processautomator + replicas: 1 + template: + metadata: + labels: + app: ku-processautomator + 
annotations: + prometheus.io/scrape: "true" + prometheus.io/port: "8088" + prometheus.io/path: "/actuator/prometheus" + spec: + containers: + - name: ku-processautomator + image: pierreyvesmonnet/processautomator:1.0.0 + imagePullPolicy: Always + env: + - name: JAVA_TOOL_OPTIONS + value: >- + -Dautomator.servers.camunda8.zeebeGatewayAddress=camunda-zeebe-gateway:26500 + -Dautomator.servers.camunda8.operateUserName=demo + -Dautomator.servers.camunda8.operateUserPassword=demo + -Dautomator.servers.camunda8.operateUrl=http://camunda-operate:80 + -Dautomator.servers.camunda8.taskListUrl= + -Dautomator.servers.camunda8.workerExecutionThreads=1 + -Dautomator.startup.scenarioPath=/app/scenarii/loadtest + -Dautomator.startup.scenarioAtStartup=C8CrawlUrlScn.json + -Dautomator.startup.policyExecution=DEPLOYPROCESS|WARMINGUP|CREATION|SERVICETASK|USERTASK + -Dautomator.startup.logLevel=MONITORING + resources: + limits: + cpu: "1" + memory: 2Gi + requests: + cpu: "1" + memory: 1Gi \ No newline at end of file diff --git a/doc/loadtestscenario/test_1/test-1-c8CrawlUrlMultiple.yaml b/doc/loadtestscenario/test_1/test-1-c8CrawlUrlMultiple.yaml index 6f2e1ce..4b53d7e 100644 --- a/doc/loadtestscenario/test_1/test-1-c8CrawlUrlMultiple.yaml +++ b/doc/loadtestscenario/test_1/test-1-c8CrawlUrlMultiple.yaml @@ -37,11 +37,11 @@ spec: -Dautomator.startup.logLevel=MONITORING resources: limits: - cpu: 1 - memory: 2Gi + cpu: "500m" + memory: "512m" requests: - cpu: 1 - memory: 1Gi + cpu: "500m" + memory: "512m" --- apiVersion: apps/v1 kind: Deployment @@ -83,11 +83,11 @@ spec: -Dautomator.startup.logLevel=MONITORING resources: limits: - cpu: 1 - memory: 2Gi + cpu: "500m" + memory: "512m" requests: - cpu: 1 - memory: 1Gi + cpu: "500m" + memory: "512m" --- apiVersion: apps/v1 kind: Deployment @@ -129,11 +129,11 @@ spec: -Dautomator.startup.logLevel=MONITORING resources: limits: - cpu: 1 - memory: 2Gi + cpu: "500m" + memory: "512m" requests: - cpu: 1 - memory: 1Gi + cpu: "500m" + memory: "512m" 
--- apiVersion: apps/v1 kind: Deployment @@ -175,11 +175,11 @@ spec: -Dautomator.startup.logLevel=MONITORING resources: limits: - cpu: 1 - memory: 2Gi + cpu: "500m" + memory: "512m" requests: - cpu: 1 - memory: 1Gi + cpu: "500m" + memory: "512m" --- apiVersion: apps/v1 @@ -221,11 +221,11 @@ spec: -Dautomator.startup.logLevel=MONITORING resources: limits: - cpu: 1 - memory: 2Gi + cpu: "500m" + memory: "512m" requests: - cpu: 1 - memory: 1Gi + cpu: "500m" + memory: "512m" --- apiVersion: apps/v1 kind: Deployment @@ -267,12 +267,11 @@ spec: -Dautomator.startup.logLevel=MONITORING resources: limits: - cpu: 1 - memory: 2Gi + cpu: "500m" + memory: "512m" requests: - cpu: 1 - memory: 1Gi - + cpu: "500m" + memory: "512m" --- apiVersion: apps/v1 kind: Deployment @@ -314,11 +313,11 @@ spec: -Dautomator.startup.logLevel=MONITORING resources: limits: - cpu: 1 - memory: 2Gi + cpu: "500m" + memory: "512m" requests: - cpu: 1 - memory: 1Gi + cpu: "500m" + memory: "512m" --- apiVersion: apps/v1 kind: Deployment @@ -360,8 +359,8 @@ spec: -Dautomator.startup.logLevel=MONITORING resources: limits: - cpu: 1 - memory: 2Gi + cpu: "500m" + memory: "512m" requests: - cpu: 1 - memory: 1Gi \ No newline at end of file + cpu: "500m" + memory: "512m" diff --git a/doc/loadtestscenario/test_2/test-2-c8CrawlUrlMultiple.yaml b/doc/loadtestscenario/test_2/test-2-c8CrawlUrlMultiple.yaml index e7f8610..ab3919e 100644 --- a/doc/loadtestscenario/test_2/test-2-c8CrawlUrlMultiple.yaml +++ b/doc/loadtestscenario/test_2/test-2-c8CrawlUrlMultiple.yaml @@ -37,11 +37,12 @@ spec: -Dautomator.startup.logLevel=MONITORING resources: limits: - cpu: 1 - memory: 2Gi + cpu: "100m" + memory: "512M" requests: - cpu: 1 - memory: 1Gi + cpu: "100m" + memory: "512M" + --- apiVersion: apps/v1 kind: Deployment @@ -83,11 +84,12 @@ spec: -Dautomator.startup.logLevel=MONITORING resources: limits: - cpu: 1 - memory: 2Gi + cpu: "100m" + memory: "512M" requests: - cpu: 1 - memory: 1Gi + cpu: "100m" + memory: "512M" + --- 
apiVersion: apps/v1 kind: Deployment @@ -129,11 +131,12 @@ spec: -Dautomator.startup.logLevel=MONITORING resources: limits: - cpu: 1 - memory: 2Gi + cpu: "100m" + memory: "512M" requests: - cpu: 1 - memory: 1Gi + cpu: "100m" + memory: "512M" + --- apiVersion: apps/v1 kind: Deployment @@ -175,11 +178,12 @@ spec: -Dautomator.startup.logLevel=MONITORING resources: limits: - cpu: 1 - memory: 2Gi + cpu: "100m" + memory: "512M" requests: - cpu: 1 - memory: 1Gi + cpu: "100m" + memory: "512M" + --- apiVersion: apps/v1 @@ -221,11 +225,12 @@ spec: -Dautomator.startup.logLevel=MONITORING resources: limits: - cpu: 1 - memory: 2Gi + cpu: "100m" + memory: "512M" requests: - cpu: 1 - memory: 1Gi + cpu: "100m" + memory: "512M" + --- apiVersion: apps/v1 kind: Deployment @@ -267,11 +272,12 @@ spec: -Dautomator.startup.logLevel=MONITORING resources: limits: - cpu: 1 - memory: 2Gi + cpu: "100m" + memory: "512M" requests: - cpu: 1 - memory: 1Gi + cpu: "100m" + memory: "512M" + --- apiVersion: apps/v1 @@ -314,11 +320,12 @@ spec: -Dautomator.startup.logLevel=MONITORING resources: limits: - cpu: 1 - memory: 2Gi + cpu: "100m" + memory: "512M" requests: - cpu: 1 - memory: 1Gi + cpu: "100m" + memory: "512M" + --- apiVersion: apps/v1 kind: Deployment @@ -360,8 +367,8 @@ spec: -Dautomator.startup.logLevel=MONITORING resources: limits: - cpu: 1 - memory: 2Gi + cpu: "100m" + memory: "512M" requests: - cpu: 1 - memory: 1Gi \ No newline at end of file + cpu: "100m" + memory: "512M" diff --git a/doc/loadtestscenario/test_3/camunda-values-2.yaml b/doc/loadtestscenario/test_3/camunda-values-2.yaml new file mode 100644 index 0000000..8dd03a5 --- /dev/null +++ b/doc/loadtestscenario/test_3/camunda-values-2.yaml @@ -0,0 +1,105 @@ +# Chart values for the Camunda Platform 8 Helm chart. +# This file deliberately contains only the values that differ from the defaults. 
+# For changes and documentation, use your favorite diff tool to compare it with: + https://github.com/camunda/camunda-platform-helm/blob/main/charts/camunda-platform/values.yaml + +# This is a very small cluster useful for running locally and for development + +global: + image: + tag: 8.3.0-alpha2 + identity: + auth: + # Disable the Identity authentication + # it will fall back to basic-auth: demo/demo as default user + enabled: false + +identity: + enabled: true + +operate: + # default is 3 + env: + - name: CAMUNDA_OPERATE_IMPORTER_THREADSCOUNT + value: "5" + - name: CAMUNDA_OPERATE_IMPORTER_READERTHREADSCOUNT + value: "5" + +optimize: + enabled: true + +connectors: + enabled: false + inbound: + mode: credentials + resources: + requests: + cpu: "100m" + memory: "512M" + limits: + cpu: "100m" + memory: "512M" + env: + - name: CAMUNDA_OPERATE_CLIENT_USERNAME + value: demo + - name: CAMUNDA_OPERATE_CLIENT_PASSWORD + value: demo + + +prometheusServiceMonitor: + enabled: true + + +# - name: ZEEBE_BROKER_PROCESSING_MAXCOMMANDSINBATCH +# value: "100" +zeebe: + clusterSize: 10 + partitionCount: 10 + replicationFactor: 3 + pvcSize: 5Gi + env: + - name: ZEEBE_BROKER_EXECUTION_METRICS_EXPORTER_ENABLED + value: "true" + - name: ZEEBE_BROKER_PROCESSING_MAXCOMMANDSINBATCH + value: "5000" + resources: + requests: + cpu: "1" + memory: "512M" + limits: + cpu: "1" + memory: "2Gi" + +zeebe-gateway: + replicas: 1 + env: + - name: ZEEBE_GATEWAY_MONITORING_ENABLED + value: "true" + - name: ZEEBE_GATEWAY_THREADS_MANAGEMENTTHREADS + value: "3" + + resources: + requests: + cpu: "100m" + memory: "512M" + limits: + cpu: "1000m" + memory: "1Gi" + + logLevel: ERROR + +elasticsearch: + enabled: true + # imageTag: 7.17.3 + replicas: 1 + minimumMasterNodes: 1 + # Allow no backup for single node setups + clusterHealthCheckParams: "wait_for_status=yellow&timeout=1s" + + resources: + requests: + cpu: "5" + memory: "512M" + limits: + cpu: "5" + memory: "2Gi" diff --git 
a/doc/loadtestscenario/test_3/test-3-Backpressure.png b/doc/loadtestscenario/test_3/test-3-Backpressure.png new file mode 100644 index 0000000..0744957 Binary files /dev/null and b/doc/loadtestscenario/test_3/test-3-Backpressure.png differ diff --git a/doc/loadtestscenario/test_3/test-3-CPU.png b/doc/loadtestscenario/test_3/test-3-CPU.png new file mode 100644 index 0000000..eaf33f9 Binary files /dev/null and b/doc/loadtestscenario/test_3/test-3-CPU.png differ diff --git a/doc/loadtestscenario/test_3/test-3-JobsLatency.png b/doc/loadtestscenario/test_3/test-3-JobsLatency.png new file mode 100644 index 0000000..5ca05f0 Binary files /dev/null and b/doc/loadtestscenario/test_3/test-3-JobsLatency.png differ diff --git a/doc/loadtestscenario/test_3/test-3-JobsPerSeconds.png b/doc/loadtestscenario/test_3/test-3-JobsPerSeconds.png new file mode 100644 index 0000000..c88d2c2 Binary files /dev/null and b/doc/loadtestscenario/test_3/test-3-JobsPerSeconds.png differ diff --git a/doc/loadtestscenario/test_3/test-3-Operate.png b/doc/loadtestscenario/test_3/test-3-Operate.png new file mode 100644 index 0000000..5753f4c Binary files /dev/null and b/doc/loadtestscenario/test_3/test-3-Operate.png differ diff --git a/doc/loadtestscenario/test_3/test-3-c8CrawlUrlMultiple.yaml b/doc/loadtestscenario/test_3/test-3-c8CrawlUrlMultiple.yaml new file mode 100644 index 0000000..ab3919e --- /dev/null +++ b/doc/loadtestscenario/test_3/test-3-c8CrawlUrlMultiple.yaml @@ -0,0 +1,374 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: ku-processautomator-creation + labels: + app: ku-processautomator-creation +spec: + selector: + matchLabels: + app: ku-processautomator-creation + replicas: 1 + template: + metadata: + labels: + app: ku-processautomator-creation + annotations: + prometheus.io/scrape: "true" + prometheus.io/port: "8088" + prometheus.io/path: "/actuator/prometheus" + spec: + containers: + - name: ku-processautomator-creation + image: pierreyvesmonnet/processautomator:1.0.0 + 
imagePullPolicy: Always + env: + - name: JAVA_TOOL_OPTIONS + value: >- + -Dautomator.servers.camunda8.zeebeGatewayAddress=camunda-zeebe-gateway:26500 + -Dautomator.servers.camunda8.operateUserName=demo + -Dautomator.servers.camunda8.operateUserPassword=demo + -Dautomator.servers.camunda8.operateUrl=http://camunda-operate:80 + -Dautomator.servers.camunda8.taskListUrl= + -Dautomator.servers.camunda8.workerExecutionThreads=1 + -Dautomator.startup.scenarioPath=/app/scenarii/loadtest + -Dautomator.startup.scenarioAtStartup=C8CrawlUrlScn.json + -Dautomator.startup.policyExecution=DEPLOYPROCESS|WARMINGUP|CREATION + -Dautomator.startup.logLevel=MONITORING + resources: + limits: + cpu: "100m" + memory: "512M" + requests: + cpu: "100m" + memory: "512M" + +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: ku-processautomator-retrievework + labels: + app: ku-processautomator-retrievework +spec: + selector: + matchLabels: + app: ku-processautomator-retrievework + replicas: 1 + template: + metadata: + labels: + app: ku-processautomator-retrievework + annotations: + prometheus.io/scrape: "true" + prometheus.io/port: "8088" + prometheus.io/path: "/actuator/prometheus" + spec: + containers: + - name: ku-processautomator-retrievework + image: pierreyvesmonnet/processautomator:1.0.0 + imagePullPolicy: Always + env: + - name: JAVA_TOOL_OPTIONS + value: >- + -Dautomator.servers.camunda8.zeebeGatewayAddress=camunda-zeebe-gateway:26500 + -Dautomator.servers.camunda8.operateUserName=demo + -Dautomator.servers.camunda8.operateUserPassword=demo + -Dautomator.servers.camunda8.operateUrl=http://camunda-operate:80 + -Dautomator.servers.camunda8.taskListUrl= + -Dautomator.servers.camunda8.workerExecutionThreads=15 + -Dautomator.startup.scenarioPath=/app/scenarii/loadtest + -Dautomator.startup.scenarioAtStartup=C8CrawlUrlScn.json + -Dautomator.startup.policyExecution=SERVICETASK + -Dautomator.startup.filterService=crawl-retrieve + -Dautomator.startup.logLevel=MONITORING + resources: 
+ limits: + cpu: "100m" + memory: "512M" + requests: + cpu: "100m" + memory: "512M" + +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: ku-processautomator-search + labels: + app: ku-processautomator-search +spec: + selector: + matchLabels: + app: ku-processautomator-search + replicas: 3 + template: + metadata: + labels: + app: ku-processautomator-search + annotations: + prometheus.io/scrape: "true" + prometheus.io/port: "8088" + prometheus.io/path: "/actuator/prometheus" + spec: + containers: + - name: ku-processautomator-search + image: pierreyvesmonnet/processautomator:1.0.0 + imagePullPolicy: Always + env: + - name: JAVA_TOOL_OPTIONS + value: >- + -Dautomator.servers.camunda8.zeebeGatewayAddress=camunda-zeebe-gateway:26500 + -Dautomator.servers.camunda8.operateUserName=demo + -Dautomator.servers.camunda8.operateUserPassword=demo + -Dautomator.servers.camunda8.operateUrl=http://camunda-operate:80 + -Dautomator.servers.camunda8.taskListUrl= + -Dautomator.servers.camunda8.workerExecutionThreads=250 + -Dautomator.startup.scenarioPath=/app/scenarii/loadtest + -Dautomator.startup.scenarioAtStartup=C8CrawlUrlScn.json + -Dautomator.startup.policyExecution=SERVICETASK + -Dautomator.startup.filterService=crawl-search + -Dautomator.startup.logLevel=MONITORING + resources: + limits: + cpu: "100m" + memory: "512M" + requests: + cpu: "100m" + memory: "512M" + +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: ku-processautomator-message + labels: + app: ku-processautomator-message +spec: + selector: + matchLabels: + app: ku-processautomator-message + replicas: 1 + template: + metadata: + labels: + app: ku-processautomator-message + annotations: + prometheus.io/scrape: "true" + prometheus.io/port: "8088" + prometheus.io/path: "/actuator/prometheus" + spec: + containers: + - name: ku-processautomator-message + image: pierreyvesmonnet/processautomator:1.0.0 + imagePullPolicy: Always + env: + - name: JAVA_TOOL_OPTIONS + value: >- + 
-Dautomator.servers.camunda8.zeebeGatewayAddress=camunda-zeebe-gateway:26500 + -Dautomator.servers.camunda8.operateUserName=demo + -Dautomator.servers.camunda8.operateUserPassword=demo + -Dautomator.servers.camunda8.operateUrl=http://camunda-operate:80 + -Dautomator.servers.camunda8.taskListUrl= + -Dautomator.servers.camunda8.workerExecutionThreads=70 + -Dautomator.startup.scenarioPath=/app/scenarii/loadtest + -Dautomator.startup.scenarioAtStartup=C8CrawlUrlScn.json + -Dautomator.startup.policyExecution=SERVICETASK + -Dautomator.startup.filterService=crawl-message + -Dautomator.startup.logLevel=MONITORING + resources: + limits: + cpu: "100m" + memory: "512M" + requests: + cpu: "100m" + memory: "512M" + + +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: ku-processautomator-verify + labels: + app: ku-processautomator-verify +spec: + selector: + matchLabels: + app: ku-processautomator-verify + replicas: 1 + template: + metadata: + labels: + app: ku-processautomator-verify + annotations: + prometheus.io/scrape: "true" + prometheus.io/port: "8088" + prometheus.io/path: "/actuator/prometheus" + spec: + containers: + - name: ku-processautomator-verify + image: pierreyvesmonnet/processautomator:1.0.0 + imagePullPolicy: Always + env: + - name: JAVA_TOOL_OPTIONS + value: >- + -Dautomator.servers.camunda8.zeebeGatewayAddress=camunda-zeebe-gateway:26500 + -Dautomator.servers.camunda8.operateUserName=demo + -Dautomator.servers.camunda8.operateUserPassword=demo + -Dautomator.servers.camunda8.operateUrl=http://camunda-operate:80 + -Dautomator.servers.camunda8.taskListUrl=http://camunda-tasklist:80 + -Dautomator.servers.camunda8.workerExecutionThreads=1 + -Dautomator.startup.scenarioPath=/app/scenarii/loadtest + -Dautomator.startup.scenarioAtStartup=C8CrawlUrlScn.json + -Dautomator.startup.policyExecution=USERTASK + -Dautomator.startup.logLevel=MONITORING + resources: + limits: + cpu: "100m" + memory: "512M" + requests: + cpu: "100m" + memory: "512M" + +--- 
+apiVersion: apps/v1 +kind: Deployment +metadata: + name: ku-processautomator-add + labels: + app: ku-processautomator-add +spec: + selector: + matchLabels: + app: ku-processautomator-add + replicas: 2 + template: + metadata: + labels: + app: ku-processautomator-add + annotations: + prometheus.io/scrape: "true" + prometheus.io/port: "8088" + prometheus.io/path: "/actuator/prometheus" + spec: + containers: + - name: ku-processautomator-add + image: pierreyvesmonnet/processautomator:1.0.0 + imagePullPolicy: Always + env: + - name: JAVA_TOOL_OPTIONS + value: >- + -Dautomator.servers.camunda8.zeebeGatewayAddress=camunda-zeebe-gateway:26500 + -Dautomator.servers.camunda8.operateUserName=demo + -Dautomator.servers.camunda8.operateUserPassword=demo + -Dautomator.servers.camunda8.operateUrl=http://camunda-operate:80 + -Dautomator.servers.camunda8.taskListUrl= + -Dautomator.servers.camunda8.workerExecutionThreads=150 + -Dautomator.startup.scenarioPath=/app/scenarii/loadtest + -Dautomator.startup.scenarioAtStartup=C8CrawlUrlScn.json + -Dautomator.startup.policyExecution=SERVICETASK + -Dautomator.startup.filterService=crawl-add + -Dautomator.startup.logLevel=MONITORING + resources: + limits: + cpu: "100m" + memory: "512M" + requests: + cpu: "100m" + memory: "512M" + + +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: ku-processautomator-filter + labels: + app: ku-processautomator-filter +spec: + selector: + matchLabels: + app: ku-processautomator-filter + replicas: 1 + template: + metadata: + labels: + app: ku-processautomator-filter + annotations: + prometheus.io/scrape: "true" + prometheus.io/port: "8088" + prometheus.io/path: "/actuator/prometheus" + spec: + containers: + - name: ku-processautomator-filter + image: pierreyvesmonnet/processautomator:1.0.0 + imagePullPolicy: Always + env: + - name: JAVA_TOOL_OPTIONS + value: >- + -Dautomator.servers.camunda8.zeebeGatewayAddress=camunda-zeebe-gateway:26500 + -Dautomator.servers.camunda8.operateUserName=demo + 
-Dautomator.servers.camunda8.operateUserPassword=demo + -Dautomator.servers.camunda8.operateUrl=http://camunda-operate:80 + -Dautomator.servers.camunda8.taskListUrl= + -Dautomator.servers.camunda8.workerExecutionThreads=70 + -Dautomator.startup.scenarioPath=/app/scenarii/loadtest + -Dautomator.startup.scenarioAtStartup=C8CrawlUrlScn.json + -Dautomator.startup.policyExecution=SERVICETASK + -Dautomator.startup.filterService=crawl-filter + -Dautomator.startup.logLevel=MONITORING + resources: + limits: + cpu: "100m" + memory: "512M" + requests: + cpu: "100m" + memory: "512M" + +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: ku-processautomator-store + labels: + app: ku-processautomator-store +spec: + selector: + matchLabels: + app: ku-processautomator-store + replicas: 1 + template: + metadata: + labels: + app: ku-processautomator-store + annotations: + prometheus.io/scrape: "true" + prometheus.io/port: "8088" + prometheus.io/path: "/actuator/prometheus" + spec: + containers: + - name: ku-processautomator-store + image: pierreyvesmonnet/processautomator:1.0.0 + imagePullPolicy: Always + env: + - name: JAVA_TOOL_OPTIONS + value: >- + -Dautomator.servers.camunda8.zeebeGatewayAddress=camunda-zeebe-gateway:26500 + -Dautomator.servers.camunda8.operateUserName=demo + -Dautomator.servers.camunda8.operateUserPassword=demo + -Dautomator.servers.camunda8.operateUrl=http://camunda-operate:80 + -Dautomator.servers.camunda8.taskListUrl= + -Dautomator.servers.camunda8.workerExecutionThreads=70 + -Dautomator.startup.scenarioPath=/app/scenarii/loadtest + -Dautomator.startup.scenarioAtStartup=C8CrawlUrlScn.json + -Dautomator.startup.policyExecution=SERVICETASK + -Dautomator.startup.filterService=crawl-store + -Dautomator.startup.logLevel=MONITORING + resources: + limits: + cpu: "100m" + memory: "512M" + requests: + cpu: "100m" + memory: "512M" diff --git a/doc/loadtestscenario/test_4/test-4-JobsPerSeconds.png b/doc/loadtestscenario/test_4/test-4-JobsPerSeconds.png new 
file mode 100644 index 0000000..fee12a5 Binary files /dev/null and b/doc/loadtestscenario/test_4/test-4-JobsPerSeconds.png differ diff --git a/doc/loadtestscenario/test_4/test-4-Operate.png b/doc/loadtestscenario/test_4/test-4-Operate.png new file mode 100644 index 0000000..fa6a662 Binary files /dev/null and b/doc/loadtestscenario/test_4/test-4-Operate.png differ diff --git a/doc/loadtestscenario/test_4/test-4-c8CrawlUrlMultiple.yaml b/doc/loadtestscenario/test_4/test-4-c8CrawlUrlMultiple.yaml new file mode 100644 index 0000000..b6e6b46 --- /dev/null +++ b/doc/loadtestscenario/test_4/test-4-c8CrawlUrlMultiple.yaml @@ -0,0 +1,374 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: ku-processautomator-creation + labels: + app: ku-processautomator-creation +spec: + selector: + matchLabels: + app: ku-processautomator-creation + replicas: 1 + template: + metadata: + labels: + app: ku-processautomator-creation + annotations: + prometheus.io/scrape: "true" + prometheus.io/port: "8088" + prometheus.io/path: "/actuator/prometheus" + spec: + containers: + - name: ku-processautomator-creation + image: pierreyvesmonnet/processautomator:1.0.0 + imagePullPolicy: Always + env: + - name: JAVA_TOOL_OPTIONS + value: >- + -Dautomator.servers.camunda8.zeebeGatewayAddress=camunda-zeebe-gateway:26500 + -Dautomator.servers.camunda8.operateUserName=demo + -Dautomator.servers.camunda8.operateUserPassword=demo + -Dautomator.servers.camunda8.operateUrl=http://camunda-operate:80 + -Dautomator.servers.camunda8.taskListUrl= + -Dautomator.servers.camunda8.workerExecutionThreads=1 + -Dautomator.startup.scenarioPath=/app/scenarii/loadtest + -Dautomator.startup.scenarioAtStartup=C8CrawlUrlScn250.json + -Dautomator.startup.policyExecution=DEPLOYPROCESS|WARMINGUP|CREATION + -Dautomator.startup.logLevel=MONITORING + resources: + limits: + cpu: "100m" + memory: "512M" + requests: + cpu: "100m" + memory: "512M" + +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: 
ku-processautomator-retrievework + labels: + app: ku-processautomator-retrievework +spec: + selector: + matchLabels: + app: ku-processautomator-retrievework + replicas: 1 + template: + metadata: + labels: + app: ku-processautomator-retrievework + annotations: + prometheus.io/scrape: "true" + prometheus.io/port: "8088" + prometheus.io/path: "/actuator/prometheus" + spec: + containers: + - name: ku-processautomator-retrievework + image: pierreyvesmonnet/processautomator:1.0.0 + imagePullPolicy: Always + env: + - name: JAVA_TOOL_OPTIONS + value: >- + -Dautomator.servers.camunda8.zeebeGatewayAddress=camunda-zeebe-gateway:26500 + -Dautomator.servers.camunda8.operateUserName=demo + -Dautomator.servers.camunda8.operateUserPassword=demo + -Dautomator.servers.camunda8.operateUrl=http://camunda-operate:80 + -Dautomator.servers.camunda8.taskListUrl= + -Dautomator.servers.camunda8.workerExecutionThreads=20 + -Dautomator.startup.scenarioPath=/app/scenarii/loadtest + -Dautomator.startup.scenarioAtStartup=C8CrawlUrlScn250.json + -Dautomator.startup.policyExecution=SERVICETASK + -Dautomator.startup.filterService=crawl-retrieve + -Dautomator.startup.logLevel=MONITORING + resources: + limits: + cpu: "100m" + memory: "512M" + requests: + cpu: "100m" + memory: "512M" + +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: ku-processautomator-search + labels: + app: ku-processautomator-search +spec: + selector: + matchLabels: + app: ku-processautomator-search + replicas: 3 + template: + metadata: + labels: + app: ku-processautomator-search + annotations: + prometheus.io/scrape: "true" + prometheus.io/port: "8088" + prometheus.io/path: "/actuator/prometheus" + spec: + containers: + - name: ku-processautomator-search + image: pierreyvesmonnet/processautomator:1.0.0 + imagePullPolicy: Always + env: + - name: JAVA_TOOL_OPTIONS + value: >- + -Dautomator.servers.camunda8.zeebeGatewayAddress=camunda-zeebe-gateway:26500 + -Dautomator.servers.camunda8.operateUserName=demo + 
-Dautomator.servers.camunda8.operateUserPassword=demo + -Dautomator.servers.camunda8.operateUrl=http://camunda-operate:80 + -Dautomator.servers.camunda8.taskListUrl= + -Dautomator.servers.camunda8.workerExecutionThreads=320 + -Dautomator.startup.scenarioPath=/app/scenarii/loadtest + -Dautomator.startup.scenarioAtStartup=C8CrawlUrlScn250.json + -Dautomator.startup.policyExecution=SERVICETASK + -Dautomator.startup.filterService=crawl-search + -Dautomator.startup.logLevel=MONITORING + resources: + limits: + cpu: "100m" + memory: "512M" + requests: + cpu: "100m" + memory: "512M" + +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: ku-processautomator-message + labels: + app: ku-processautomator-message +spec: + selector: + matchLabels: + app: ku-processautomator-message + replicas: 1 + template: + metadata: + labels: + app: ku-processautomator-message + annotations: + prometheus.io/scrape: "true" + prometheus.io/port: "8088" + prometheus.io/path: "/actuator/prometheus" + spec: + containers: + - name: ku-processautomator-message + image: pierreyvesmonnet/processautomator:1.0.0 + imagePullPolicy: Always + env: + - name: JAVA_TOOL_OPTIONS + value: >- + -Dautomator.servers.camunda8.zeebeGatewayAddress=camunda-zeebe-gateway:26500 + -Dautomator.servers.camunda8.operateUserName=demo + -Dautomator.servers.camunda8.operateUserPassword=demo + -Dautomator.servers.camunda8.operateUrl=http://camunda-operate:80 + -Dautomator.servers.camunda8.taskListUrl= + -Dautomator.servers.camunda8.workerExecutionThreads=90 + -Dautomator.startup.scenarioPath=/app/scenarii/loadtest + -Dautomator.startup.scenarioAtStartup=C8CrawlUrlScn250.json + -Dautomator.startup.policyExecution=SERVICETASK + -Dautomator.startup.filterService=crawl-message + -Dautomator.startup.logLevel=MONITORING + resources: + limits: + cpu: "100m" + memory: "512M" + requests: + cpu: "100m" + memory: "512M" + + +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: ku-processautomator-verify + labels: + app: 
ku-processautomator-verify +spec: + selector: + matchLabels: + app: ku-processautomator-verify + replicas: 1 + template: + metadata: + labels: + app: ku-processautomator-verify + annotations: + prometheus.io/scrape: "true" + prometheus.io/port: "8088" + prometheus.io/path: "/actuator/prometheus" + spec: + containers: + - name: ku-processautomator-verify + image: pierreyvesmonnet/processautomator:1.0.0 + imagePullPolicy: Always + env: + - name: JAVA_TOOL_OPTIONS + value: >- + -Dautomator.servers.camunda8.zeebeGatewayAddress=camunda-zeebe-gateway:26500 + -Dautomator.servers.camunda8.operateUserName=demo + -Dautomator.servers.camunda8.operateUserPassword=demo + -Dautomator.servers.camunda8.operateUrl=http://camunda-operate:80 + -Dautomator.servers.camunda8.taskListUrl=http://camunda-tasklist:80 + -Dautomator.servers.camunda8.workerExecutionThreads=1 + -Dautomator.startup.scenarioPath=/app/scenarii/loadtest + -Dautomator.startup.scenarioAtStartup=C8CrawlUrlScn250.json + -Dautomator.startup.policyExecution=USERTASK + -Dautomator.startup.logLevel=MONITORING + resources: + limits: + cpu: "100m" + memory: "512M" + requests: + cpu: "100m" + memory: "512M" + +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: ku-processautomator-add + labels: + app: ku-processautomator-add +spec: + selector: + matchLabels: + app: ku-processautomator-add + replicas: 2 + template: + metadata: + labels: + app: ku-processautomator-add + annotations: + prometheus.io/scrape: "true" + prometheus.io/port: "8088" + prometheus.io/path: "/actuator/prometheus" + spec: + containers: + - name: ku-processautomator-add + image: pierreyvesmonnet/processautomator:1.0.0 + imagePullPolicy: Always + env: + - name: JAVA_TOOL_OPTIONS + value: >- + -Dautomator.servers.camunda8.zeebeGatewayAddress=camunda-zeebe-gateway:26500 + -Dautomator.servers.camunda8.operateUserName=demo + -Dautomator.servers.camunda8.operateUserPassword=demo + -Dautomator.servers.camunda8.operateUrl=http://camunda-operate:80 + 
-Dautomator.servers.camunda8.taskListUrl= + -Dautomator.servers.camunda8.workerExecutionThreads=190 + -Dautomator.startup.scenarioPath=/app/scenarii/loadtest + -Dautomator.startup.scenarioAtStartup=C8CrawlUrlScn250.json + -Dautomator.startup.policyExecution=SERVICETASK + -Dautomator.startup.filterService=crawl-add + -Dautomator.startup.logLevel=MONITORING + resources: + limits: + cpu: "100m" + memory: "512M" + requests: + cpu: "100m" + memory: "512M" + + +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: ku-processautomator-filter + labels: + app: ku-processautomator-filter +spec: + selector: + matchLabels: + app: ku-processautomator-filter + replicas: 1 + template: + metadata: + labels: + app: ku-processautomator-filter + annotations: + prometheus.io/scrape: "true" + prometheus.io/port: "8088" + prometheus.io/path: "/actuator/prometheus" + spec: + containers: + - name: ku-processautomator-filter + image: pierreyvesmonnet/processautomator:1.0.0 + imagePullPolicy: Always + env: + - name: JAVA_TOOL_OPTIONS + value: >- + -Dautomator.servers.camunda8.zeebeGatewayAddress=camunda-zeebe-gateway:26500 + -Dautomator.servers.camunda8.operateUserName=demo + -Dautomator.servers.camunda8.operateUserPassword=demo + -Dautomator.servers.camunda8.operateUrl=http://camunda-operate:80 + -Dautomator.servers.camunda8.taskListUrl= + -Dautomator.servers.camunda8.workerExecutionThreads=100 + -Dautomator.startup.scenarioPath=/app/scenarii/loadtest + -Dautomator.startup.scenarioAtStartup=C8CrawlUrlScn250.json + -Dautomator.startup.policyExecution=SERVICETASK + -Dautomator.startup.filterService=crawl-filter + -Dautomator.startup.logLevel=MONITORING + resources: + limits: + cpu: "100m" + memory: "512M" + requests: + cpu: "100m" + memory: "512M" + +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: ku-processautomator-store + labels: + app: ku-processautomator-store +spec: + selector: + matchLabels: + app: ku-processautomator-store + replicas: 1 + template: + metadata: + 
labels: + app: ku-processautomator-store + annotations: + prometheus.io/scrape: "true" + prometheus.io/port: "8088" + prometheus.io/path: "/actuator/prometheus" + spec: + containers: + - name: ku-processautomator-store + image: pierreyvesmonnet/processautomator:1.0.0 + imagePullPolicy: Always + env: + - name: JAVA_TOOL_OPTIONS + value: >- + -Dautomator.servers.camunda8.zeebeGatewayAddress=camunda-zeebe-gateway:26500 + -Dautomator.servers.camunda8.operateUserName=demo + -Dautomator.servers.camunda8.operateUserPassword=demo + -Dautomator.servers.camunda8.operateUrl=http://camunda-operate:80 + -Dautomator.servers.camunda8.taskListUrl= + -Dautomator.servers.camunda8.workerExecutionThreads=100 + -Dautomator.startup.scenarioPath=/app/scenarii/loadtest + -Dautomator.startup.scenarioAtStartup=C8CrawlUrlScn250.json + -Dautomator.startup.policyExecution=SERVICETASK + -Dautomator.startup.filterService=crawl-store + -Dautomator.startup.logLevel=MONITORING + resources: + limits: + cpu: "100m" + memory: "512M" + requests: + cpu: "100m" + memory: "512M" diff --git a/src/main/resources/loadtest/C8CrawlUrl.bpmn b/src/main/resources/loadtest/C8CrawlUrl.bpmn new file mode 100644 index 0000000..4fbb9de --- /dev/null +++ b/src/main/resources/loadtest/C8CrawlUrl.bpmn @@ -0,0 +1,445 @@ + [445 lines of BPMN 2.0 XML for the CrawlUrl process; the XML markup was lost in extraction. The surviving text nodes are sequence-flow ids (Flow_17n6ju8, Flow_03a245y, Flow_191cul5, ...), timer durations (0 s, 1 s, 5 s, 10 s, PT20M) and the annotations "Loop 20", "2 s" and "200 PI/day".] diff --git a/src/main/resources/loadtest/C8CrawlUrlScn.json b/src/main/resources/loadtest/C8CrawlUrlScn.json new file mode 100644 index 0000000..aa03d8b --- /dev/null +++ b/src/main/resources/loadtest/C8CrawlUrlScn.json @@ -0,0 +1,127 @@ +{ + "name": "C8CrawlUrl", + "processId": "CrawlUrl", + "type": "FLOW", + "serverType": "Camunda_8", + "deployments": [ + { + "serverType": "CAMUNDA_8", + "type": "PROCESS", + "processFile": "C8CrawlUrl.bpmn", + "policy": "ONLYNOTEXIST" + } + ], + "flowControl": { + "duration": "PT10M", + "objectives": [ + { + "label": "Creation", + "processId": "CrawlUrl", + "type": "CREATED", + "value": 4000, + "comment": "100/30 s. 
Duration=10M => 2000" + }, + { + "label": "Ended", + "processId": "CrawlUrl", + "type" : "ENDED", + "value": 4000, + "comment": "Same as creation" + } + ] + }, + "warmingUp" : { + "duration": "PT2M", + "useServiceTasks" : true, + "useUserTasks" : true, + "operations": [ + { + "type": "STARTEVENT", + "taskId": "StartEvent", + "processId": "CrawlUrl", + "variables": {"urlNotFound": false}, + "variablesOperation": { + "loopcrawl": "generaterandomlist(10)" + }, + "frequency": "PT30S", + "numberOfExecutions": "200", + "endWarmingUp": "EndEventThreshold(EndEvent,1)" + } + ] + }, + "flows": [ + { + "name": "Start event normal 200/30s", + "type": "STARTEVENT", + "taskId": "StartEvent", + "processId": "CrawlUrl", + "frequency": "PT30S", + "numberOfExecutions": "200", + "nbWorkers": "1", + "variables": {"urlNotFound": false}, + "variablesOperation": { + "loopcrawl": "generaterandomlist(10)" + } + }, + { + "name": "Start event error 1/1mn", + "type": "STARTEVENT", + "taskId": "StartEvent", + "processId": "CrawlUrl", + "frequency": "PT1M", + "numberOfExecutions": "1", + "nbWorkers": "1", + "variables": {"urlNotFound": true}, + "variablesOperation": { + "loopcrawl": "generaterandomlist(5)" + }, + "modeExecution": "CLASSICAL" + }, + { + "type": "SERVICETASK", + "topic": "crawl-retrieve", + "waitingTime": "PT2S", + "modeExecution": "ASYNCHRONOUS" + }, + { + "type": "SERVICETASK", + "topic": "crawl-search", + "waitingTime": "PT10S", + "modeExecution": "ASYNCHRONOUS" + }, + { + "type": "USERTASK", + "taskId": "Activity_Verify", + "waitingTime": "PT10S", + "modeExecution": "ASYNCHRONOUS", + "variables" : { + "processAcceptable": true + } + }, + { + "type": "SERVICETASK", + "topic": "crawl-add", + "waitingTime": "PT5S", + "modeExecution": "ASYNCHRONOUS" + }, + { + "type": "SERVICETASK", + "topic": "crawl-message", + "waitingTime": "PT0S", + "modeExecution": "ASYNCHRONOUS" + }, + { + "type": "SERVICETASK", + "topic": "crawl-filter", + "waitingTime": "PT1S", + "modeExecution": 
"ASYNCHRONOUS" + }, + { + "type": "SERVICETASK", + "topic": "crawl-store", + "waitingTime": "PT1S", + "modeExecution": "ASYNCHRONOUS" + } + + ] +} \ No newline at end of file diff --git a/src/main/resources/loadtest/C8CrawlUrlScn250.json b/src/main/resources/loadtest/C8CrawlUrlScn250.json new file mode 100644 index 0000000..33283d4 --- /dev/null +++ b/src/main/resources/loadtest/C8CrawlUrlScn250.json @@ -0,0 +1,126 @@ +{ + "name": "C8CrawlUrl", + "processId": "CrawlUrl", + "type": "FLOW", + "serverType": "Camunda_8", + "deployments": [ + { + "serverType": "CAMUNDA_8", + "type": "PROCESS", + "processFile": "C8CrawlUrl.bpmn", + "policy": "ONLYNOTEXIST" + } + ], + "flowControl": { + "duration": "PT15M", + "objectives": [ + { + "label": "Creation", + "processId": "CrawlUrl", + "type": "CREATED", + "value": 4000, + "comment": "100/30 s. Duration=10M => 2000" + }, + { + "label": "Ended", + "processId": "CrawlUrl", + "type" : "ENDED", + "value": 4000, + "comment": "Same as creation" + } + ] + }, + "warmingUp" : { + "duration": "PT3M", + "useServiceTasks" : true, + "useUserTasks" : true, + "operations": [ + { + "type": "STARTEVENT", + "taskId": "StartEvent", + "processId": "CrawlUrl", + "variables": {"urlNotFound": false}, + "variablesOperation": { + "loopcrawl": "generaterandomlist(10)" + }, + "frequency": "PT30S", + "numberOfExecutions": "250", + "endWarmingUp": "EndEventThreshold(EndEvent,1)" + } + ] + }, + "flows": [ + { + "name": "Start event normal 250/30s", + "type": "STARTEVENT", + "taskId": "StartEvent", + "processId": "CrawlUrl", + "frequency": "PT30S", + "numberOfExecutions": "250", + "nbWorkers": "1", + "variables": {"urlNotFound": false}, + "variablesOperation": { + "loopcrawl": "generaterandomlist(10)" + } + }, + { + "name": "Start event error 1/1mn", + "type": "STARTEVENT", + "taskId": "StartEvent", + "processId": "CrawlUrl", + "frequency": "PT1M", + "numberOfExecutions": "1", + "nbWorkers": "1", + "variables": {"urlNotFound": true}, + "variablesOperation": 
{ + "loopcrawl": "generaterandomlist(5)" + } + }, + { + "type": "SERVICETASK", + "topic": "crawl-retrieve", + "waitingTime": "PT2S", + "modeExecution": "ASYNCHRONOUS" + }, + { + "type": "SERVICETASK", + "topic": "crawl-search", + "waitingTime": "PT10S", + "modeExecution": "ASYNCHRONOUS" + }, + { + "type": "USERTASK", + "taskId": "Activity_Verify", + "waitingTime": "PT10S", + "modeExecution": "ASYNCHRONOUS", + "variables" : { + "processAcceptable": true + } + }, + { + "type": "SERVICETASK", + "topic": "crawl-add", + "waitingTime": "PT5S", + "modeExecution": "ASYNCHRONOUS" + }, + { + "type": "SERVICETASK", + "topic": "crawl-message", + "waitingTime": "PT0S", + "modeExecution": "ASYNCHRONOUS" + }, + { + "type": "SERVICETASK", + "topic": "crawl-filter", + "waitingTime": "PT1S", + "modeExecution": "ASYNCHRONOUS" + }, + { + "type": "SERVICETASK", + "topic": "crawl-store", + "waitingTime": "PT1S", + "modeExecution": "ASYNCHRONOUS" + } + + ] +} \ No newline at end of file
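
The `workerExecutionThreads` and `replicas` values used in the manifests above can be sanity-checked with Little's Law: the number of jobs in flight for a topic is the instance creation rate multiplied by the simulated service time. The sketch below is an illustration, not part of the scenario files; the creation rate and service times are copied from C8CrawlUrlScn.json (200 creations per 30 s, the `waitingTime` of each service task), and the 1.5x safety margin is an assumption.

```python
import math

# Little's Law sizing sketch: required concurrency = arrival rate x service time.
# Rates and simulated service times are taken from C8CrawlUrlScn.json;
# the 1.5x safety margin is an assumption, not a measured value.

CREATION_RATE = 200 / 30  # process instances per second (200 every 30 s)

# topic -> simulated service time in seconds ("waitingTime" in the scenario)
SERVICE_TIMES = {
    "crawl-retrieve": 2,
    "crawl-search": 10,
    "crawl-add": 5,
    "crawl-message": 0,
    "crawl-filter": 1,
    "crawl-store": 1,
}

def required_threads(rate_per_s, service_time_s, margin=1.5):
    """Minimum concurrent job executions needed to keep up with the load."""
    return max(1, math.ceil(rate_per_s * service_time_s * margin))

for topic, service_time in SERVICE_TIMES.items():
    print(f"{topic}: at least {required_threads(CREATION_RATE, service_time)} concurrent executions")
```

For crawl-search (10 s of simulated work) this predicts on the order of 100 concurrent executions; the test-3 manifest provisions 3 replicas with `workerExecutionThreads=250` for that topic, which leaves headroom for gateway latency, retries, and the error flow.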