diff --git a/README.md b/README.md index e99d95a2590..c8a4a8ada7b 100644 --- a/README.md +++ b/README.md @@ -1,5 +1,7 @@ # Dapr documentation +[![GitHub License](https://img.shields.io/github/license/dapr/docs?style=flat&label=License&logo=github)](https://github.com/dapr/docs/blob/v1.13/LICENSE) [![GitHub issue custom search in repo](https://img.shields.io/github/issues-search/dapr/docs?query=type%3Aissue%20is%3Aopen%20label%3A%22good%20first%20issue%22&label=Good%20first%20issues&style=flat&logo=github)](https://github.com/dapr/docs/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22) [![Discord](https://img.shields.io/discord/778680217417809931?label=Discord&style=flat&logo=discord)](http://bit.ly/dapr-discord) [![YouTube Channel Views](https://img.shields.io/youtube/channel/views/UCtpSQ9BLB_3EXdWAUQYwnRA?style=flat&label=YouTube%20views&logo=youtube)](https://youtube.com/@daprdev) [![X (formerly Twitter) Follow](https://img.shields.io/twitter/follow/daprdev?logo=x&style=flat)](https://twitter.com/daprdev) + If you are looking to explore the Dapr documentation, please go to the documentation website: [**https://docs.dapr.io**](https://docs.dapr.io) diff --git a/daprdocs/content/en/contributing/daprbot.md b/daprdocs/content/en/contributing/daprbot.md index 14fb29373f0..2e35fd4914b 100644 --- a/daprdocs/content/en/contributing/daprbot.md +++ b/daprdocs/content/en/contributing/daprbot.md @@ -12,7 +12,7 @@ Dapr bot is triggered by a list of commands that helps with common tasks in the | Command | Target | Description | Who can use | Repository | | ---------------- | --------------------- | -------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------- | -------------------------------------- | -| `/assign` | Issue | Assigns an issue to a user or group of users | Anyone | `dapr`, `docs`, `quickstarts`, `cli`, `components-contrib`, `go-sdk`, `js-sdk`, `java-sdk`, `python-sdk`, `dotnet-sdk` | +| `/assign` | Issue | Assigns an issue to a user or group of users | Anyone | `dapr`, `docs`, `quickstarts`, `cli`, `components-contrib`, `go-sdk`, `js-sdk`, `java-sdk`, `python-sdk`, `dotnet-sdk`, `rust-sdk` | | `/ok-to-test` | Pull request | `dapr`: trigger end to end tests
`components-contrib`: trigger conformance and certification tests | Users listed in the [bot](https://github.com/dapr/dapr/blob/master/.github/scripts/dapr_bot.js) | `dapr`, `components-contrib` | | `/ok-to-perf` | Pull request | Trigger performance tests. | Users listed in the [bot](https://github.com/dapr/dapr/blob/master/.github/scripts/dapr_bot.js) | `dapr` | | `/make-me-laugh` | Issue or pull request | Posts a random joke | Users listed in the [bot](https://github.com/dapr/dapr/blob/master/.github/scripts/dapr_bot.js) | `dapr`, `components-contrib` | diff --git a/daprdocs/content/en/developing-applications/building-blocks/service-invocation/howto-invoke-non-dapr-endpoints.md b/daprdocs/content/en/developing-applications/building-blocks/service-invocation/howto-invoke-non-dapr-endpoints.md index ec48330e35c..28c3cb8f1ec 100644 --- a/daprdocs/content/en/developing-applications/building-blocks/service-invocation/howto-invoke-non-dapr-endpoints.md +++ b/daprdocs/content/en/developing-applications/building-blocks/service-invocation/howto-invoke-non-dapr-endpoints.md @@ -70,7 +70,7 @@ There are two ways to invoke a non-Dapr endpoint when communicating either to Da ``` ### Using appId when calling Dapr enabled applications -AppIDs are always used to call Dapr applications with the `appID` and `my-method``. Read the [How-To: Invoke services using HTTP]({{< ref howto-invoke-discover-services.md >}}) guide for more information. For example: +AppIDs are always used to call Dapr applications with the `appID` and `my-method`. Read the [How-To: Invoke services using HTTP]({{< ref howto-invoke-discover-services.md >}}) guide for more information. For example: ```sh localhost:3500/v1.0/invoke//method/ diff --git a/daprdocs/content/en/developing-applications/building-blocks/state-management/howto-get-save-state.md b/daprdocs/content/en/developing-applications/building-blocks/state-management/howto-get-save-state.md index fad6afbfdac..fd492d188b6 100644 --- a/daprdocs/content/en/developing-applications/building-blocks/state-management/howto-get-save-state.md +++ b/daprdocs/content/en/developing-applications/building-blocks/state-management/howto-get-save-state.md @@ -537,7 +537,7 @@ Try getting state again. Note that no value is returned. Below are code examples that leverage Dapr SDKs for saving and retrieving multiple states. 
-{{< tabs Dotnet Java Python Javascript "HTTP API (Bash)" "HTTP API (PowerShell)">}} +{{< tabs Dotnet Java Python Go Javascript "HTTP API (Bash)" "HTTP API (PowerShell)">}} {{% codetab %}} @@ -656,6 +656,56 @@ dapr run --app-id orderprocessing --app-port 6001 --dapr-http-port 3601 --dapr-g {{% codetab %}} +```go +// dependencies +import ( + "context" + "log" + "math/rand" + "strconv" + "time" + + dapr "github.com/dapr/go-sdk/client" +) + +// code +func main() { + const STATE_STORE_NAME = "statestore" + rand.Seed(time.Now().UnixMicro()) + for i := 0; i < 10; i++ { + orderId := rand.Intn(1000-1) + 1 + client, err := dapr.NewClient() + if err != nil { + panic(err) + } + defer client.Close() + ctx := context.Background() + err = client.SaveState(ctx, STATE_STORE_NAME, "order_1", []byte(strconv.Itoa(orderId)), nil) + if err != nil { + panic(err) + } + keys := []string{"key1", "key2", "key3"} + items, err := client.GetBulkState(ctx, STATE_STORE_NAME, keys, nil, 100) + if err != nil { + panic(err) + } + for _, item := range items { + log.Println("Item from GetBulkState:", string(item.Value)) + } + } +} +``` + +To launch a Dapr sidecar for the above example application, run a command similar to the following: + +```bash +dapr run --app-id orderprocessing --app-port 6001 --dapr-http-port 3601 --dapr-grpc-port 60001 go run OrderProcessingService.go +``` + +{{% /codetab %}} + +{{% codetab %}} + ```javascript //dependencies import { DaprClient, HttpMethod, CommunicationProtocolEnum } from '@dapr/dapr'; @@ -738,7 +788,7 @@ State transactions require a state store that supports multi-item transactions. Below are code examples that leverage Dapr SDKs for performing state transactions. -{{< tabs Dotnet Java Python Javascript "HTTP API (Bash)" "HTTP API (PowerShell)">}} +{{< tabs Dotnet Java Python Go Javascript "HTTP API (Bash)" "HTTP API (PowerShell)">}} {{% codetab %}} @@ -893,6 +943,79 @@ dapr run --app-id orderprocessing --app-port 6001 --dapr-http-port 3601 --dapr-g {{% codetab %}} +```go +// dependencies +package main + +import ( + "context" + "log" + "math/rand" + "strconv" + "time" + + dapr "github.com/dapr/go-sdk/client" +) + +// code +func main() { + const STATE_STORE_NAME = "statestore" + rand.Seed(time.Now().UnixMicro()) + for i := 0; i < 10; i++ { + orderId := rand.Intn(1000-1) + 1 + client, err := dapr.NewClient() + if err != nil { + panic(err) + } + defer client.Close() + ctx := context.Background() + err = client.SaveState(ctx, STATE_STORE_NAME, "order_1", []byte(strconv.Itoa(orderId)), nil) + if err != nil { + panic(err) + } + result, err := client.GetState(ctx, STATE_STORE_NAME, "order_1", nil) + if err != nil { + panic(err) + } + + ops := make([]*dapr.StateOperation, 0) + data1 := "data1" + data2 := "data2" + + op1 := &dapr.StateOperation{ + Type: dapr.StateOperationTypeUpsert, + Item: &dapr.SetStateItem{ + Key: "key1", + Value: []byte(data1), + }, + } + op2 := &dapr.StateOperation{ + Type: dapr.StateOperationTypeDelete, + Item: &dapr.SetStateItem{ + Key: "key2", + Value: []byte(data2), + }, + } + ops = append(ops, op1, op2) + meta := map[string]string{} + err = client.ExecuteStateTransaction(ctx, STATE_STORE_NAME, meta, ops) + + log.Println("Result after get:", string(result.Value)) + time.Sleep(2 * time.Second) + } +} +``` + +To launch a Dapr sidecar for the above example application, run a command similar to the following: + +```bash +dapr run --app-id orderprocessing --app-port 6001 --dapr-http-port 3601 --dapr-grpc-port 60001 go run OrderProcessingService.go +``` + +{{% /codetab 
%}} + +{{% codetab %}} + ```javascript //dependencies import { DaprClient, HttpMethod, CommunicationProtocolEnum } from '@dapr/dapr'; diff --git a/daprdocs/content/en/developing-applications/building-blocks/state-management/howto-outbox.md b/daprdocs/content/en/developing-applications/building-blocks/state-management/howto-outbox.md index 2831802729a..59dcf3c3711 100644 --- a/daprdocs/content/en/developing-applications/building-blocks/state-management/howto-outbox.md +++ b/daprdocs/content/en/developing-applications/building-blocks/state-management/howto-outbox.md @@ -28,8 +28,10 @@ The diagram below is an overview of how the outbox feature works: The outbox feature can be used with using any [transactional state store]({{< ref supported-state-stores >}}) supported by Dapr. All [pub/sub brokers]({{< ref supported-pubsub >}}) are supported with the outbox feature. +[Learn more about the transactional methods you can use.]({{< ref "howto-get-save-state.md#perform-state-transactions" >}}) + {{% alert title="Note" color="primary" %}} -Message brokers that work with the competing consumer pattern (for example, [Apache Kafka]({{< ref setup-apache-kafka>}}) are encouraged to reduce the chances of duplicate events. +Message brokers that work with the competing consumer pattern (for example, [Apache Kafka]({{< ref setup-apache-kafka>}})) are encouraged to reduce the chances of duplicate events. {{% /alert %}} ## Usage diff --git a/daprdocs/content/en/developing-applications/building-blocks/workflow/workflow-patterns.md b/daprdocs/content/en/developing-applications/building-blocks/workflow/workflow-patterns.md index f7865f55e9c..fe6f69b63c2 100644 --- a/daprdocs/content/en/developing-applications/building-blocks/workflow/workflow-patterns.md +++ b/daprdocs/content/en/developing-applications/building-blocks/workflow/workflow-patterns.md @@ -586,7 +586,45 @@ The key takeaways from this example are: - The number of parallel tasks can be static or dynamic - The workflow itself is capable of aggregating the results of parallel executions -While not shown in the example, it's possible to go further and limit the degree of concurrency using simple, language-specific constructs. Furthermore, the execution of the workflow is durable. If a workflow starts 100 parallel task executions and only 40 complete before the process crashes, the workflow restarts itself automatically and only schedules the remaining 60 tasks. +Furthermore, the execution of the workflow is durable. If a workflow starts 100 parallel task executions and only 40 complete before the process crashes, the workflow restarts itself automatically and only schedules the remaining 60 tasks. + +It's possible to go further and limit the degree of concurrency using simple, language-specific constructs. The sample code below illustrates how to restrict the degree of fan-out to just 5 concurrent activity executions: + +{{< tabs ".NET" >}} + +{{% codetab %}} + +```csharp + +//Revisiting the earlier example... +// Get a list of N work items to process in parallel. 
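+// Note: the loop below caps fan-out by keeping at most MaxParallelism
+// activity calls in flight, awaiting Task.WhenAny before scheduling more.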
+object[] workBatch = await context.CallActivityAsync("GetWorkBatch", null); + +const int MaxParallelism = 5; +var results = new List(); +var inFlightTasks = new HashSet>(); +foreach(var workItem in workBatch) +{ + if (inFlightTasks.Count >= MaxParallelism) + { + var finishedTask = await Task.WhenAny(inFlightTasks); + results.Add(finishedTask.Result); + inFlightTasks.Remove(finishedTask); + } + + inFlightTasks.Add(context.CallActivityAsync("ProcessWorkItem", workItem)); +} +results.AddRange(await Task.WhenAll(inFlightTasks)); + +var sum = results.Sum(t => t); +await context.CallActivityAsync("PostResults", sum); +``` + +{{% /codetab %}} + +{{< /tabs >}} + +Limiting the degree of concurrency in this way can be useful for limiting contention against shared resources. For example, if the activities need to call into external resources that have their own concurrency limits, like a databases or external APIs, it can be useful to ensure that no more than a specified number of activities call that resource concurrently. ## Async HTTP APIs diff --git a/daprdocs/content/en/developing-applications/local-development/multi-app-dapr-run/multi-app-template.md b/daprdocs/content/en/developing-applications/local-development/multi-app-dapr-run/multi-app-template.md index e5c526da43f..606ae9fbe8c 100644 --- a/daprdocs/content/en/developing-applications/local-development/multi-app-dapr-run/multi-app-template.md +++ b/daprdocs/content/en/developing-applications/local-development/multi-app-dapr-run/multi-app-template.md @@ -42,7 +42,7 @@ dapr run -f ```cmd -dapr run -f -k +dapr run -f -k ``` {{% /codetab %}} @@ -67,7 +67,7 @@ dapr run -f ./path/to/.yaml ```cmd -dapr run -f -k ./path/to/.yaml +dapr run -f ./path/to/.yaml -k ``` {{% /codetab %}} @@ -77,10 +77,27 @@ dapr run -f -k ./path/to/.yaml Once the multi-app template is running, you can view the started applications with the following command: +{{< tabs Self-hosted Kubernetes>}} + +{{% codetab %}} + + ```cmd dapr list ``` +{{% /codetab %}} + +{{% codetab %}} + + +```cmd +dapr list -k +``` +{{% /codetab %}} + +{{< /tabs >}} + ## Stop the multi-app template Stop the multi-app run template anytime with either of the following commands: @@ -109,12 +126,12 @@ dapr stop -f ./path/to/.yaml ```cmd # the template file needs to be called `dapr.yaml` by default if a directory path is given -dapr stop -f -k +dapr stop -f -k ``` or: ```cmd -dapr stop -f -k ./path/to/.yaml +dapr stop -f ./path/to/.yaml -k ``` {{% /codetab %}} diff --git a/daprdocs/content/en/operations/components/component-secrets.md b/daprdocs/content/en/operations/components/component-secrets.md index b7e1f0a51f7..d894865251b 100644 --- a/daprdocs/content/en/operations/components/component-secrets.md +++ b/daprdocs/content/en/operations/components/component-secrets.md @@ -75,15 +75,14 @@ spec: type: bindings.azure.servicebusqueues version: v1 metadata: - -name: connectionString - secretKeyRef: + - name: connectionString + secretKeyRef: name: asbNsConnString key: asbNsConnString - -name: queueName - value: servicec-inputq + - name: queueName + value: servicec-inputq auth: secretStore: - ``` The above "Secret is a string" case yaml tells Dapr to extract a connection string named `asbNsConnstring` from the defined `secretStore` and assign the value to the `connectionString` field in the component since there is no key embedded in the "secret" from the `secretStore` because it is a plain string. This requires the secret `name` and secret `key` to be identical. 
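As a minimal sketch of the plain-string case above (an illustration, assuming a Kubernetes secret store and a Service Bus connection string supplied inline), the secret is created with an identical name and key:

```bash
# Name and key are identical, matching the component's secretKeyRef above
kubectl create secret generic asbNsConnString --from-literal=asbNsConnString="<your-service-bus-connection-string>"
```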
@@ -95,7 +94,7 @@ The following example shows you how to create a Kubernetes secret to hold the co 1. First, create the Kubernetes secret: ```bash - kubectl create secret generic eventhubs-secret --from-literal=connectionString=********* + kubectl create secret generic eventhubs-secret --from-literal=connectionString=********* ``` 2. Next, reference the secret in your binding: diff --git a/daprdocs/content/en/operations/components/pluggable-components-registration.md b/daprdocs/content/en/operations/components/pluggable-components-registration.md index 10f6a057715..7aecca49458 100644 --- a/daprdocs/content/en/operations/components/pluggable-components-registration.md +++ b/daprdocs/content/en/operations/components/pluggable-components-registration.md @@ -52,7 +52,7 @@ Since you are running Dapr in the same host as the component, verify that this f ### Component discovery and multiplexing -A pluggable component accessible through a [Unix Domain Socket][UDS] (UDS) can host multiple distinct component APIs . During the components' initial discovery process, Dapr uses reflection to enumerate all the component APIs behind a UDS. The `my-component` pluggable component in the example above can contain both state store (`state`) and a pub/sub (`pubsub`) component APIs. +A pluggable component accessible through a [Unix Domain Socket][UDS] (UDS) can host multiple distinct component APIs. During the components' initial discovery process, Dapr uses reflection to enumerate all the component APIs behind a UDS. The `my-component` pluggable component in the example above can contain both state store (`state`) and a pub/sub (`pubsub`) component APIs. Typically, a pluggable component implements a single component API for packaging and deployment. However, at the expense of increasing its dependencies and broadening its security attack surface, a pluggable component can have multiple component APIs implemented. This could be done to ease the deployment and monitoring burden. Best practice for isolation, fault tolerance, and security is a single component API implementation for each pluggable component. diff --git a/daprdocs/content/en/operations/observability/tracing/setup-tracing.md b/daprdocs/content/en/operations/observability/tracing/setup-tracing.md index 4fd3f40bca6..9a04f6bc90d 100644 --- a/daprdocs/content/en/operations/observability/tracing/setup-tracing.md +++ b/daprdocs/content/en/operations/observability/tracing/setup-tracing.md @@ -20,7 +20,7 @@ spec: tracing: samplingRate: "1" otel: - endpointAddress: "https://..." + endpointAddress: "myendpoint.cluster.local:4317" zipkin: endpointAddress: "https://..." @@ -32,10 +32,10 @@ The following table lists the properties for tracing: |--------------|--------|-------------| | `samplingRate` | string | Set sampling rate for tracing to be enabled or disabled. | `stdout` | bool | True write more verbose information to the traces -| `otel.endpointAddress` | string | Set the Open Telemetry (OTEL) server address. +| `otel.endpointAddress` | string | Set the Open Telemetry (OTEL) target hostname and optionally port. If this is used, you do not need to specify the 'zipkin' section. | `otel.isSecure` | bool | Is the connection to the endpoint address encrypted. | `otel.protocol` | string | Set to `http` or `grpc` protocol. -| `zipkin.endpointAddress` | string | Set the Zipkin server address. If this is used, you do not need to specify the `otel` section. +| `zipkin.endpointAddress` | string | Set the Zipkin server URL. 
If this is used, you do not need to specify the `otel` section. To enable tracing, use a configuration file (in self hosted mode) or a Kubernetes configuration object (in Kubernetes mode). For example, the following configuration object changes the sample rate to 1 (every span is sampled), and sends trace using OTEL protocol to the OTEL server at localhost:4317 @@ -66,7 +66,7 @@ turns on tracing for the sidecar. | Environment Variable | Description | |----------------------|-------------| -| `OTEL_EXPORTER_OTLP_ENDPOINT` | Sets the Open Telemetry (OTEL) server address, turns on tracing | +| `OTEL_EXPORTER_OTLP_ENDPOINT` | Sets the Open Telemetry (OTEL) server hostname and optionally port, turns on tracing | | `OTEL_EXPORTER_OTLP_INSECURE` | Sets the connection to the endpoint as unencrypted (true/false) | | `OTEL_EXPORTER_OTLP_PROTOCOL` | Transport protocol (`grpc`, `http/protobuf`, `http/json`) | diff --git a/daprdocs/content/en/operations/support/support-release-policy.md b/daprdocs/content/en/operations/support/support-release-policy.md index 51e11749b8a..46543b49f52 100644 --- a/daprdocs/content/en/operations/support/support-release-policy.md +++ b/daprdocs/content/en/operations/support/support-release-policy.md @@ -45,6 +45,7 @@ The table below shows the versions of Dapr releases that have been tested togeth | Release date | Runtime | CLI | SDKs | Dashboard | Status | Release notes | |--------------------|:--------:|:--------|---------|---------|---------|------------| +| May 21st 2024 | 1.13.3
 | 1.13.0 | Java 1.11.0 </br>Go 1.10.0 </br>PHP 1.2.0 </br>Python 1.13.0 </br>.NET 1.13.0 </br>JS 3.3.0 | 0.14.0 | Supported (current) | [v1.13.3 release notes](https://github.com/dapr/dapr/releases/tag/v1.13.3) |
 | April 3rd 2024 | 1.13.2</br> | 1.13.0 | Java 1.11.0 </br>Go 1.10.0 </br>PHP 1.2.0 </br>Python 1.13.0 </br>.NET 1.13.0 </br>JS 3.3.0 | 0.14.0 | Supported (current) | [v1.13.2 release notes](https://github.com/dapr/dapr/releases/tag/v1.13.2) |
 | March 26th 2024 | 1.13.1</br> | 1.13.0 | Java 1.11.0 </br>Go 1.10.0 </br>PHP 1.2.0 </br>Python 1.13.0 </br>.NET 1.13.0 </br>JS 3.3.0 | 0.14.0 | Supported (current) | [v1.13.1 release notes](https://github.com/dapr/dapr/releases/tag/v1.13.1) |
 | March 6th 2024 | 1.13.0</br> | 1.13.0 | Java 1.11.0 </br>Go 1.10.0 </br>PHP 1.2.0 </br>Python 1.13.0 </br>.NET 1.13.0 </br>
JS 3.3.0 | 0.14.0 | Supported (current) | [v1.13.0 release notes](https://github.com/dapr/dapr/releases/tag/v1.13.0) | @@ -136,6 +137,7 @@ General guidance on upgrading can be found for [self hosted mode]({{< ref self-h | 1.11.0 | N/A | 1.11.4 | | 1.12.0 | N/A | 1.12.4 | | 1.13.0 | N/A | 1.13.2 | +| 1.13.0 | N/A | 1.13.3 | ## Upgrade on Hosting platforms diff --git a/daprdocs/content/en/reference/api/pubsub_api.md b/daprdocs/content/en/reference/api/pubsub_api.md index 627ca7780b5..9c24e076cae 100644 --- a/daprdocs/content/en/reference/api/pubsub_api.md +++ b/daprdocs/content/en/reference/api/pubsub_api.md @@ -177,13 +177,14 @@ Example: { "pubsubname": "pubsub", "topic": "newOrder", - "route": { + "routes": { "rules": [ { "match": "event.type == order", "path": "/orders" } ] + default: /otherorders }, "metadata": { "rawPayload": "true" diff --git a/daprdocs/content/en/reference/cli/dapr-init.md b/daprdocs/content/en/reference/cli/dapr-init.md index b40af670116..9e517cfa95c 100644 --- a/daprdocs/content/en/reference/cli/dapr-init.md +++ b/daprdocs/content/en/reference/cli/dapr-init.md @@ -74,7 +74,7 @@ dapr init -s You can also specify a specific runtime version. Be default, the latest version is used. ```bash -dapr init --runtime-version 1.13.0 +dapr init --runtime-version 1.13.3 ``` **Install with image variant** diff --git a/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-azure-cosmosdb.md b/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-azure-cosmosdb.md index d7ee723eaa0..e33f603617f 100644 --- a/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-azure-cosmosdb.md +++ b/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-azure-cosmosdb.md @@ -28,6 +28,9 @@ spec: value: - name: collection value: + # Uncomment this if you wish to use Azure Cosmos DB as a state store for actors (optional) + #- name: actorStateStore + # value: "true" ``` {{% alert title="Warning" color="warning" %}} @@ -49,7 +52,7 @@ If you wish to use Cosmos DB as an actor store, append the following to the yam | masterKey | Y* | The key to authenticate to the Cosmos DB account. Only required when not using Microsoft Entra ID authentication. | `"key"` | database | Y | The name of the database | `"db"` | collection | Y | The name of the collection (container) | `"collection"` -| actorStateStore | N | Consider this state store for actors. Defaults to `"false"` | `"true"`, `"false"` +| actorStateStore | N | Consider this state store for actors. Defaults to `"false"` | `"true"`, `"false"` ### Microsoft Entra ID authentication @@ -172,7 +175,9 @@ az cosmosdb sql role assignment create \ --role-definition-id "$ROLE_ID" ``` -## Optimizing Cosmos DB for bulk operation write performance +## Optimizations + +### Optimizing Cosmos DB for bulk operation write performance If you are building a system that only ever reads data from Cosmos DB via key (`id`), which is the default Dapr behavior when using the state management API or actors, there are ways you can optimize Cosmos DB for improved write speeds. This is done by excluding all paths from indexing. By default, Cosmos DB indexes all fields inside of a document. On systems that are write-heavy and run little-to-no queries on values within a document, this indexing policy slows down the time it takes to write or update a document in Cosmos DB. This is exacerbated in high-volume systems. 
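As a rough sketch of what such an indexing policy can look like (an illustrative assumption; verify the exact syntax against the Azure Cosmos DB indexing-policy documentation), excluding all paths from indexing is configured on the container like this:

```json
{
    "indexingMode": "consistent",
    "automatic": true,
    "includedPaths": [],
    "excludedPaths": [
        {
            "path": "/*"
        }
    ]
}
```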
@@ -208,6 +213,18 @@ This optimization comes at the cost of queries against fields inside of document {{% /alert %}} +### Optimizing Cosmos DB for cost savings + +If you intend to use Cosmos DB only as a key-value pair, it may be in your interest to consider converting your state object to JSON and compressing it before persisting it to state, and subsequently decompressing it when reading it out of state. This is because Cosmos DB bills your usage based on the maximum number of RU/s used in a given time period (typically each hour). Furthermore, RU usage is calculated as 1 RU per 1 KB of data you read or write. Compression helps by reducing the size of the data stored in Cosmos DB and subsequently reducing RU usage. + +This savings is particularly significant for Dapr actors. While the Dapr State Management API does a base64 encoding of your object before saving, Dapr actor state is saved as raw, formatted JSON. This means multiple lines with indentations for formatting. Compressing can signficantly reduce the size of actor state objects. For example, if you have an actor state object that is 75KB in size when the actor is hydrated, you will use 75 RU/s to read that object out of state. If you then modify the state object and it grows to 100KB, you will use 100 RU/s to write that object to Cosmos DB, totalling 175 RU/s for the I/O operation. Let's say your actors are concurrently handling 1000 requests per second, you will need at least 175,000 RU/s to meet that load. With effective compression, the size reduction can be in the region of 90%, which means you will only need in the region of 17,500 RU/s to meet the load. + +{{% alert title="Note" color="primary" %}} + +This particular optimization only makes sense if you are saving large objects to state. The performance and memory tradeoff for performing the compression and decompression on either end need to make sense for your use case. Furthermore, once the data is saved to state, it is not human readable, nor is it queryable. You should only adopt this optimization if you are saving large state objects as key-value pairs. + +{{% /alert %}} + ## Related links - [Basic schema for a Dapr component]({{< ref component-schema >}}) diff --git a/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-dynamodb.md b/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-dynamodb.md index a3b1781de70..32150cafe62 100644 --- a/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-dynamodb.md +++ b/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-dynamodb.md @@ -36,6 +36,9 @@ spec: value: "expiresAt" # Optional - name: partitionKey value: "ContractID" # Optional + # Uncomment this if you wish to use AWS DynamoDB as a state store for actors (optional) + #- name: actorStateStore + # value: "true" ``` {{% alert title="Warning" color="warning" %}} @@ -58,6 +61,7 @@ In order to use DynamoDB as a Dapr state store, the table must have a primary ke | sessionToken | N |AWS session token to use. A session token is only required if you are using temporary security credentials. | `"TOKEN"` | ttlAttributeName | N |The table attribute name which should be used for TTL. | `"expiresAt"` | partitionKey | N |The table primary key or partition key attribute name. This field is used to replace the default primary key attribute name `"key"`. See the section [Partition Keys]({{< ref "setup-dynamodb.md#partition-keys" >}}). 
| `"ContractID"` +| actorStateStore | N | Consider this state store for actors. Defaults to "false" | `"true"`, `"false"` {{% alert title="Important" color="warning" %}} When running the Dapr sidecar (daprd) with your application on EKS (AWS Kubernetes), if you're using a node/pod that has already been attached to an IAM policy defining access to AWS resources, you **must not** provide AWS access-key, secret-key, and tokens in the definition of the component spec you're using. diff --git a/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-etcd.md b/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-etcd.md index 63ee61ac2d0..6b40c5241d2 100644 --- a/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-etcd.md +++ b/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-etcd.md @@ -34,6 +34,9 @@ spec: value: # Optional. Required if tlsEnable is `true`. - name: key value: # Optional. Required if tlsEnable is `true`. + # Uncomment this if you wish to use Etcd as a state store for actors (optional) + #- name: actorStateStore + # value: "true" ``` {{% alert title="Warning" color="warning" %}} @@ -59,6 +62,7 @@ If you are using `v1`, you should continue to use `v1` until you create a new Et | `ca` | N | CA certificate for connecting to Etcd, PEM-encoded. Can be `secretKeyRef` to use a [secret reference]({{< ref component-secrets.md >}}).| `"-----BEGIN CERTIFICATE-----\nMIIC9TCCA..."` | `cert` | N | TLS certificate for connecting to Etcd, PEM-encoded. Can be `secretKeyRef` to use a [secret reference]({{< ref component-secrets.md >}}).| `"-----BEGIN CERTIFICATE-----\nMIIDUTCC..."` | `key` | N | TLS key for connecting to Etcd, PEM-encoded. Can be `secretKeyRef` to use a [secret reference]({{< ref component-secrets.md >}}).| `"-----BEGIN PRIVATE KEY-----\nMIIEpAIB..."` +| `actorStateStore` | N | Consider this state store for actors. Defaults to `"false"` | `"true"`, `"false"` ## Setup Etcd diff --git a/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-inmemory.md b/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-inmemory.md index 17d8cc4be3e..50d20f5000e 100644 --- a/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-inmemory.md +++ b/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-inmemory.md @@ -21,7 +21,10 @@ metadata: spec: type: state.in-memory version: v1 - metadata: [] + metadata: + # Uncomment this if you wish to use In-memory as a state store for actors (optional) + #- name: actorStateStore + # value: "true" ``` > Note: While in-memory does not require any specific metadata for the component to work, `spec.metadata` is a required field. diff --git a/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-mongodb.md b/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-mongodb.md index db3fec2ed77..11b9d5a479a 100644 --- a/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-mongodb.md +++ b/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-mongodb.md @@ -41,6 +41,10 @@ spec: value: # Optional. default: "5s" - name: params value: # Optional. 
Example: "?authSource=daprStore&ssl=true" + # Uncomment this if you wish to use MongoDB as a state store for actors (optional) + #- name: actorStateStore + # value: "true" + ``` {{% alert title="Warning" color="warning" %}} @@ -72,6 +76,7 @@ If you wish to use MongoDB as an actor store, add this metadata option to your C | readConcern | N | The read concern to use | `"majority"`, `"local"`,`"available"`, `"linearizable"`, `"snapshot"` | operationTimeout | N | The timeout for the operation. Defaults to `"5s"` | `"5s"` | params | N2 | Additional parameters to use | `"?authSource=daprStore&ssl=true"` +| actorStateStore | N | Consider this state store for actors. Defaults to `"false"` | `"true"`, `"false"` > [1] The `server` and `host` fields are mutually exclusive. If neither or both are set, Dapr returns an error. @@ -85,7 +90,7 @@ If you wish to use MongoDB as an actor store, add this metadata option to your C You can run a single MongoDB instance locally using Docker: ```sh -docker run --name some-mongo -d mongo +docker run --name some-mongo -d -p 27017:27017 mongo ``` You can then interact with the server at `localhost:27017`. If you do not specify a `databaseName` value in your component definition, make sure to create a database named `daprStore`. diff --git a/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-mysql.md b/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-mysql.md index 65000b4d142..29867aa1e98 100644 --- a/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-mysql.md +++ b/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-mysql.md @@ -35,6 +35,9 @@ spec: value: "" - name: pemContents # Required if pemPath not provided. Pem value. value: "" +# Uncomment this if you wish to use MySQL & MariaDB as a state store for actors (optional) + #- name: actorStateStore + # value: "true" ``` {{% alert title="Warning" color="warning" %}} @@ -59,6 +62,7 @@ If you wish to use MySQL as an actor store, append the following to the yaml. | `pemPath` | N | Full path to the PEM file to use for [enforced SSL Connection](#enforced-ssl-connection) required if pemContents is not provided. Cannot be used in K8s environment | `"/path/to/file.pem"`, `"C:\path\to\file.pem"` | | `pemContents` | N | Contents of PEM file to use for [enforced SSL Connection](#enforced-ssl-connection) required if pemPath is not provided. Can be used in K8s environment | `"pem value"` | | `cleanupIntervalInSeconds` | N | Interval, in seconds, to clean up rows with an expired TTL. Default: `3600` (that is 1 hour). Setting this to values <=0 disables the periodic cleanup. | `1800`, `-1` +| `actorStateStore` | N | Consider this state store for actors. 
Defaults to `"false"` | `"true"`, `"false"` ## Setup MySQL diff --git a/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-oracledatabase.md b/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-oracledatabase.md index b023a2ef75d..51cf0a02066 100644 --- a/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-oracledatabase.md +++ b/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-oracledatabase.md @@ -28,6 +28,9 @@ spec: value: "" # Optional, no default - name: tableName value: "" # Optional, defaults to STATE + # Uncomment this if you wish to use Oracle Database as a state store for actors (optional) + #- name: actorStateStore + # value: "true" ``` {{% alert title="Warning" color="warning" %}} The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}). @@ -40,6 +43,7 @@ The above example uses secrets as plain strings. It is recommended to use a secr | connectionString | Y | The connection string for Oracle Database | `"oracle://user/password@host:port/servicename"` for example `"oracle://demo:demo@localhost:1521/xe"` or for Autonomous Database `"oracle://states_schema:State12345pw@adb.us-ashburn-1.oraclecloud.com:1522/k8j2agsqjsw_daprdb_low.adb.oraclecloud.com"` | oracleWalletLocation | N | Location of the contents of an Oracle Wallet file (required to connect to Autonomous Database on OCI)| `"/home/app/state/Wallet_daprDB/"` | tableName | N | Name of the database table in which this instance of the state store records the data default `"STATE"`| `"MY_APP_STATE_STORE"` +| actorStateStore | N | Consider this state store for actors. Defaults to `"false"` | `"true"`, `"false"` ## What Happens at Runtime? When the state store component initializes, it connects to the Oracle Database and checks if a table with the name specified with `tableName` exists. If it does not, it creates this table (with columns Key, Value, Binary_YN, ETag, Creation_Time, Update_Time, Expiration_time). diff --git a/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-redis.md b/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-redis.md index 366bbde0d44..da1f2bf04a3 100644 --- a/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-redis.md +++ b/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-redis.md @@ -74,6 +74,9 @@ spec: value: # Optional - name: queryIndexes value: # Optional + # Uncomment this if you wish to use Redis as a state store for actors (optional) + #- name: actorStateStore + # value: "true" ``` {{% alert title="Warning" color="warning" %}} @@ -119,6 +122,7 @@ If you wish to use Redis as an actor store, append the following to the yaml. | actorStateStore | N | Consider this state store for actors. Defaults to `"false"` | `"true"`, `"false"` | ttlInSeconds | N | Allows specifying a default Time-to-live (TTL) in seconds that will be applied to every state store request unless TTL is explicitly defined via the [request metadata]({{< ref "state-store-ttl.md" >}}). | `600` | queryIndexes | N | Indexing schemas for querying JSON objects | see [Querying JSON objects](#querying-json-objects) +| actorStateStore | N | Consider this state store for actors. 
Defaults to `"false"` | `"true"`, `"false"` ## Setup Redis diff --git a/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-sqlserver.md b/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-sqlserver.md index 96d79ac9d64..23b9a10750b 100644 --- a/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-sqlserver.md +++ b/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-sqlserver.md @@ -52,6 +52,9 @@ spec: value: "" - name: cleanupIntervalInSeconds value: "3600" + # Uncomment this if you wish to use Microsoft SQL Server as a state store for actors (optional) + #- name: actorStateStore + # value: "true" ``` {{% alert title="Warning" color="warning" %}} diff --git a/daprdocs/layouts/shortcodes/dapr-latest-version.html b/daprdocs/layouts/shortcodes/dapr-latest-version.html index 108b35c0726..7920173103f 100644 --- a/daprdocs/layouts/shortcodes/dapr-latest-version.html +++ b/daprdocs/layouts/shortcodes/dapr-latest-version.html @@ -1 +1 @@ -{{- if .Get "short" }}1.13{{ else if .Get "long" }}1.13.0{{ else if .Get "cli" }}1.13.0{{ else }}1.13.0{{ end -}} \ No newline at end of file +{{- if .Get "short" }}1.13{{ else if .Get "long" }}1.13.3{{ else if .Get "cli" }}1.13.3{{ else }}1.13.3{{ end -}} \ No newline at end of file
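For reference, a hedged sketch of how a docs page might consume this shortcode (the attribute names `short`, `long`, and `cli` are the ones handled in the template above):

```markdown
Install the Dapr CLI version {{% dapr-latest-version cli="true" %}} and initialize runtime version {{% dapr-latest-version long="true" %}}.
```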