Upmerge 10/17 (#4392)
* initial freshness, start with overview

Signed-off-by: Hannah Hunter <[email protected]>

* continue freshness pass

Signed-off-by: Hannah Hunter <[email protected]>

* continue freshness

Signed-off-by: Hannah Hunter <[email protected]>

* finish freshness pass

Signed-off-by: Hannah Hunter <[email protected]>

* typo

Signed-off-by: Hannah Hunter <[email protected]>

* add note about default app-max-concurrency

Signed-off-by: Hannah Hunter <[email protected]>

* add to args and annotations doc

Signed-off-by: Hannah Hunter <[email protected]>

* Update config.toml

Signed-off-by: Hannah Hunter <[email protected]>

* fix 1.15 link

Signed-off-by: Hannah Hunter <[email protected]>

* fix release

Signed-off-by: Hannah Hunter <[email protected]>

* other versions

Signed-off-by: Hannah Hunter <[email protected]>

* Add AWS IAM authentication fields in PostgreSQL components (#4311)

* Add AWS IAM authentication fields in PostgreSQL components

Signed-off-by: Anton Troshin <[email protected]>

* change wording

Signed-off-by: Anton Troshin <[email protected]>

---------

Signed-off-by: Anton Troshin <[email protected]>
Co-authored-by: Hannah Hunter <[email protected]>

* Adds clientCert and clientKey fields to spec Redis metadata fields (#4312)

* Adds clientCert and clientKey fields to spec Redis metadata fields

Signed-off-by: Elena Kolevska <[email protected]>

* remove

Signed-off-by: Elena Kolevska <[email protected]>

* removes empty line

Signed-off-by: Elena Kolevska <[email protected]>

---------

Signed-off-by: Elena Kolevska <[email protected]>
Co-authored-by: Mark Fussell <[email protected]>

* Fix: Scheduler Actor Reminders Wording (#4320)

* update phrasing to scheduler actor reminders because the Jobs API has nothing to do with it

Signed-off-by: Cassandra Coyle <[email protected]>

* Update daprdocs/content/en/operations/support/support-preview-features.md

Co-authored-by: Mark Fussell <[email protected]>
Signed-off-by: Cassie Coyle <[email protected]>

---------

Signed-off-by: Cassandra Coyle <[email protected]>
Signed-off-by: Cassie Coyle <[email protected]>
Co-authored-by: Mark Fussell <[email protected]>

* add storageClass example for s3 metadata (#4308)

* add storageClass example to docs

Signed-off-by: Cassandra Coyle <[email protected]>

* add field to table

Signed-off-by: Cassandra Coyle <[email protected]>

* update data indentation for example curl

Signed-off-by: Cassandra Coyle <[email protected]>

* Update daprdocs/content/en/reference/components-reference/supported-bindings/s3.md

Co-authored-by: Mark Fussell <[email protected]>
Signed-off-by: Cassie Coyle <[email protected]>

* tweaks

Signed-off-by: Cassandra Coyle <[email protected]>

* add to template example

Signed-off-by: Cassandra Coyle <[email protected]>

* add doc link for storage class

Signed-off-by: Cassandra Coyle <[email protected]>

* Update daprdocs/content/en/reference/components-reference/supported-bindings/s3.md

Signed-off-by: Mark Fussell <[email protected]>

* Update daprdocs/content/en/reference/components-reference/supported-bindings/s3.md

Signed-off-by: Mark Fussell <[email protected]>

* Update daprdocs/content/en/reference/components-reference/supported-bindings/s3.md

Signed-off-by: Mark Fussell <[email protected]>

---------

Signed-off-by: Cassandra Coyle <[email protected]>
Signed-off-by: Cassie Coyle <[email protected]>
Signed-off-by: Mark Fussell <[email protected]>
Co-authored-by: Mark Fussell <[email protected]>

* Add Prometheus auto service discovery instructions to
prometheus.md

- Updated prometheus.md with instructions for setting up Prometheus auto service discovery for Dapr and sidecar targets.

* Add Prometheus auto service discovery instructions to
prometheus.md

- Updated prometheus.md with instructions for setting up Prometheus auto service discovery for Dapr and sidecar targets.

Signed-off-by: Maulin Desai <[email protected]>

* clarify/correct quickstarts

Signed-off-by: Hannah Hunter <[email protected]>

* clarify docs (#4324)

Signed-off-by: Hannah Hunter <[email protected]>

* Format service discovery instructions

Updated the navigation instructions to use bold text for "Status" and "Service Discovery" for better visual clarity.

Signed-off-by: Maulin Desai <[email protected]>

* add note about implicit retries (#4325)

Signed-off-by: Hannah Hunter <[email protected]>

* Add Jobs API to Dapr slidedeck

Signed-off-by: Marc Duiker <[email protected]>

* Update daprdocs/content/en/operations/configuration/configuration-overview.md

Co-authored-by: Mark Fussell <[email protected]>
Signed-off-by: Hannah Hunter <[email protected]>

* Update daprdocs/content/en/operations/configuration/configuration-overview.md

Co-authored-by: Mark Fussell <[email protected]>
Signed-off-by: Hannah Hunter <[email protected]>

* Update daprdocs/content/en/operations/configuration/configuration-overview.md

Co-authored-by: Mark Fussell <[email protected]>
Signed-off-by: Hannah Hunter <[email protected]>

* Update daprdocs/content/en/operations/configuration/configuration-overview.md

Co-authored-by: Mark Fussell <[email protected]>
Signed-off-by: Hannah Hunter <[email protected]>

* Update daprdocs/content/en/operations/configuration/configuration-overview.md

Co-authored-by: Mark Fussell <[email protected]>
Signed-off-by: Hannah Hunter <[email protected]>

* last update per mark review

Signed-off-by: Hannah Hunter <[email protected]>

* docs: init scheduler in the docker compose example

Signed-off-by: Mike Nguyen <[email protected]>

* update per mark, pt 2

Signed-off-by: Hannah Hunter <[email protected]>

* fixed yaml syntax for v2alpha1 example (#4335)

Signed-off-by: Adrian Hristov <[email protected]>

* Bump actions/download-artifact from 3 to 4.1.7 in /.github/workflows

Bumps [actions/download-artifact](https://github.com/actions/download-artifact) from 3 to 4.1.7.
- [Release notes](https://github.com/actions/download-artifact/releases)
- [Commits](actions/download-artifact@v3...v4.1.7)

---
updated-dependencies:
- dependency-name: actions/download-artifact
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <[email protected]>

* add notes about namespacing

Signed-off-by: Hannah Hunter <[email protected]>

* update latest version (#4341)

Signed-off-by: Hannah Hunter <[email protected]>

* Add Kafka escapeHeaders documentation (#4332)

* Add Kafka escapeHeaders documentation

Signed-off-by: Anton Troshin <[email protected]>

* update the escapeHeaders setting docs

Signed-off-by: Anton Troshin <[email protected]>

* review fixes

Signed-off-by: Anton Troshin <[email protected]>

---------

Signed-off-by: Anton Troshin <[email protected]>
Co-authored-by: Yaron Schneider <[email protected]>

* Update roadmap.md (#4340)

* Update roadmap.md

Signed-off-by: Yaron Schneider <[email protected]>

* Update roadmap.md

Signed-off-by: Yaron Schneider <[email protected]>

---------

Signed-off-by: Yaron Schneider <[email protected]>

* conductor update (#4344)

* fix job api http reference (#4343)

Signed-off-by: yaron2 <[email protected]>
Co-authored-by: Mark Fussell <[email protected]>

* update alias (#4347)

* Updated workflow to reflect deprecation of Workflow methods on client (#4336)

Signed-off-by: Whit Waldo <[email protected]>
Co-authored-by: Hannah Hunter <[email protected]>

* rm escape (#4348)

Signed-off-by: Cassandra Coyle <[email protected]>

* clarify per josh comment

Signed-off-by: Hannah Hunter <[email protected]>

* Fixed cron schedule table

Signed-off-by: Whit Waldo <[email protected]>

* Tweaked the endpoint description and example to reflect that the protocol may or may not be necessary based on the provider used.

Signed-off-by: Whit Waldo <[email protected]>

* Updated to reflect the need for the protocol on the zipkin endpoint though it's not necessary on the otel endpoint.

Signed-off-by: Whit Waldo <[email protected]>

* update latest version to 1.14.2 (#4352)

Signed-off-by: Hannah Hunter <[email protected]>
Co-authored-by: Mark Fussell <[email protected]>

* Update Job HTTP request API (#4349)

According to dapr/dapr#8083

Signed-off-by: joshvanl <[email protected]>
Co-authored-by: Yaron Schneider <[email protected]>

* Helm: Revert Scheduler storage quota size to `1Gi` (#4354)

In v1.14.3, the storage quota size for the Scheduler volume was
increased from `1Gi` to `16Gi`. This is because users were encountering
disk exhaustion fatal errors on the Scheduler under normal usage.
Because the volume size request field is protected from updates, Dapr
version upgrades to v1.14.3 failed without manual intervention.

Reverts the Scheduler storage quota size back to `1Gi`, and adds
warnings that the volume size may need to be increased for production
deployments.

See: dapr/dapr#8107

Signed-off-by: joshvanl <[email protected]>

* Reflecting valid value of 0-6, not 0-7 in jobs schedule

Signed-off-by: Whit Waldo <[email protected]>

* Clarified the need for the actorStateStore property in docs, regardless of whether the actor actually stores any state.

Signed-off-by: Whit Waldo <[email protected]>

* Reworded slightly

Signed-off-by: Whit Waldo <[email protected]>

* Update workflow-patterns.md

Make monitor code samples consistent between python/go and all other examples.

* Python and Go are using seconds
* Everything else is in minutes.

Signed-off-by: Vasily Chekalkin <[email protected]>

* update latest and recalled versions (#4360)

Signed-off-by: Hannah Hunter <[email protected]>

* Update setup-azure-servicebus-topics.md

Signed-off-by: Andrew Riddlestone <[email protected]>

* Update howto-invoke-non-dapr-endpoints.md (#4369)

Update service invocation steps according to diagram

Signed-off-by: Michael Klich <[email protected]>

* Update daprdocs/content/en/reference/components-reference/supported-pubsub/setup-azure-servicebus-topics.md

Signed-off-by: Mark Fussell <[email protected]>

* Workflow limitations change (#4367)

* workflow limitations change

Signed-off-by: yaron2 <[email protected]>

* Update daprdocs/content/en/developing-applications/building-blocks/workflow/workflow-overview.md

Co-authored-by: Hannah Hunter <[email protected]>
Signed-off-by: Mark Fussell <[email protected]>

* Update daprdocs/content/en/developing-applications/building-blocks/workflow/workflow-overview.md

Co-authored-by: Hannah Hunter <[email protected]>
Signed-off-by: Mark Fussell <[email protected]>

---------

Signed-off-by: yaron2 <[email protected]>
Signed-off-by: Mark Fussell <[email protected]>
Co-authored-by: Mark Fussell <[email protected]>
Co-authored-by: Hannah Hunter <[email protected]>

* rm decoding (#4373)

Signed-off-by: Cassandra Coyle <[email protected]>

* fix misleading wording (#4379)

Signed-off-by: yaron2 <[email protected]>

* [Jobs API] Describe Triggered Job Handling Assumptions (#4376)

* add specific logic for what assumptions are made for triggered jobs for http, grpc, sdks

Signed-off-by: Cassandra Coyle <[email protected]>

* rm space

Signed-off-by: Cassandra Coyle <[email protected]>

* add a note about this applying to all programming languages to avoid confusion

Signed-off-by: Cassandra Coyle <[email protected]>

* Update howto-schedule-and-handle-triggered-jobs.md

Signed-off-by: Yaron Schneider <[email protected]>

---------

Signed-off-by: Cassandra Coyle <[email protected]>
Signed-off-by: Yaron Schneider <[email protected]>
Co-authored-by: Yaron Schneider <[email protected]>

* add roadmap to main page (#4386)

Signed-off-by: yaron2 <[email protected]>

* Update support (#4387)

* Fixed typo (#4389)

* Update daprdocs/config.toml

Signed-off-by: Hannah Hunter <[email protected]>

---------

Signed-off-by: Hannah Hunter <[email protected]>
Signed-off-by: Hannah Hunter <[email protected]>
Signed-off-by: Anton Troshin <[email protected]>
Signed-off-by: Elena Kolevska <[email protected]>
Signed-off-by: Cassandra Coyle <[email protected]>
Signed-off-by: Cassie Coyle <[email protected]>
Signed-off-by: Mark Fussell <[email protected]>
Signed-off-by: Maulin Desai <[email protected]>
Signed-off-by: Marc Duiker <[email protected]>
Signed-off-by: Mike Nguyen <[email protected]>
Signed-off-by: Adrian Hristov <[email protected]>
Signed-off-by: dependabot[bot] <[email protected]>
Signed-off-by: Yaron Schneider <[email protected]>
Signed-off-by: yaron2 <[email protected]>
Signed-off-by: Whit Waldo <[email protected]>
Signed-off-by: joshvanl <[email protected]>
Signed-off-by: Vasily Chekalkin <[email protected]>
Signed-off-by: Andrew Riddlestone <[email protected]>
Signed-off-by: Michael Klich <[email protected]>
Co-authored-by: Anton Troshin <[email protected]>
Co-authored-by: Elena Kolevska <[email protected]>
Co-authored-by: Mark Fussell <[email protected]>
Co-authored-by: Cassie Coyle <[email protected]>
Co-authored-by: Maulin Desai <[email protected]>
Co-authored-by: Marc Duiker <[email protected]>
Co-authored-by: Mike Nguyen <[email protected]>
Co-authored-by: Adrian Hristov <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Yaron Schneider <[email protected]>
Co-authored-by: Whit Waldo <[email protected]>
Co-authored-by: Cassie Coyle <[email protected]>
Co-authored-by: Josh van Leeuwen <[email protected]>
Co-authored-by: Vasily Chekalkin <[email protected]>
Co-authored-by: Andrew Riddlestone <[email protected]>
Co-authored-by: Michael Klich <[email protected]>
17 people authored Oct 17, 2024
1 parent 1bf6bd7 commit 95f394e
Showing 52 changed files with 1,054 additions and 392 deletions.
7 changes: 7 additions & 0 deletions daprdocs/content/en/_index.md
@@ -87,6 +87,13 @@ you tackle the challenges that come with building microservices and keeps your c
<a href="{{< ref contributing >}}" class="stretched-link"></a>
</div>
</div>
<div class="card">
<div class="card-body">
<h5 class="card-title"><b>Roadmap</b></h5>
<p class="card-text">Learn about Dapr's roadmap and change process.</p>
<a href="{{< ref roadmap.md >}}" class="stretched-link"></a>
</div>
</div>
</div>


26 changes: 19 additions & 7 deletions daprdocs/content/en/concepts/configuration-concept.md
@@ -6,9 +6,13 @@ weight: 400
description: "Change the behavior of Dapr application sidecars or globally on Dapr control plane system services"
---

Dapr configurations are settings and policies that enable you to change both the behavior of individual Dapr applications, or the global behavior of the Dapr control plane system services. For example, you can set an ACL policy on the application sidecar configuration which indicates which methods can be called from another application, or on the Dapr control plane configuration you can change the certificate renewal period for all certificates that are deployed to application sidecar instances.
With Dapr configurations, you use settings and policies to change:
- The behavior of individual Dapr applications
- The global behavior of the Dapr control plane system services

Configurations are defined and deployed as a YAML file. An application configuration example is shown below, which demonstrates an example of setting a tracing endpoint for where to send the metrics information, capturing all the sample traces.
For example, set an ACL policy on the application sidecar configuration to indicate which methods can be called from another application. If you set a policy on the Dapr control plane configuration, you can change the certificate renewal period for all certificates that are deployed to application sidecar instances.

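As a minimal sketch of what such an ACL policy can look like (the application ID, method name, and verbs below are illustrative placeholders; see the access control configuration reference for the full schema):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig
spec:
  accessControl:
    defaultAction: deny
    trustDomain: "public"
    policies:
    - appId: orders-app          # hypothetical calling application
      defaultAction: deny
      operations:
      - name: /checkout          # only this method may be invoked
        httpVerb: ["POST"]
        action: allow
```
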
Configurations are defined and deployed as a YAML file. In the following application configuration example, a tracing endpoint is set, specifying where to send trace information and capturing all sampled traces.

```yaml
apiVersion: dapr.io/v1alpha1
@@ -23,9 +27,11 @@ spec:
endpointAddress: "http://localhost:9411/api/v2/spans"
```
This configuration configures tracing for metrics recording. It can be loaded in local self-hosted mode by editing the default configuration file called `config.yaml` file in your `.dapr` directory, or by applying it to your Kubernetes cluster with kubectl/helm.
The above YAML configures tracing, specifying where trace data is sent. You can load it in local self-hosted mode by either:
- Editing the default configuration file, called `config.yaml`, in your `.dapr` directory, or
- Applying it to your Kubernetes cluster with `kubectl/helm`.

Here is an example of the Dapr control plane configuration called `daprsystem` in the `dapr-system` namespace.
The following example shows the Dapr control plane configuration called `daprsystem` in the `dapr-system` namespace.

```yaml
apiVersion: dapr.io/v1alpha1
@@ -40,8 +46,14 @@ spec:
allowedClockSkew: "15m"
```

Visit [overview of Dapr configuration options]({{<ref "configuration-overview.md">}}) for a list of the configuration options.
By default, there is a single configuration file called `daprsystem` installed with the Dapr control plane system services. This configuration file applies global control plane settings and is set up when Dapr is deployed to Kubernetes.

[Learn more about configuration options.]({{< ref "configuration-overview.md" >}})

{{% alert title="Note" color="primary" %}}
Dapr application and control plane configurations should not be confused with the configuration building block API that enables applications to retrieve key/value data from configuration store components. Read the [Configuration building block]({{< ref configuration-api-overview >}}) for more information.
{{% alert title="Important" color="warning" %}}
Dapr application and control plane configurations should not be confused with the [configuration building block API]({{< ref configuration-api-overview >}}), which enables applications to retrieve key/value data from configuration store components.
{{% /alert %}}

## Next steps

{{< button text="Learn more about configuration" page="configuration-overview" >}}
2 changes: 1 addition & 1 deletion daprdocs/content/en/concepts/overview.md
@@ -108,7 +108,7 @@ Deploying and running a Dapr-enabled application into your Kubernetes cluster is

### Clusters of physical or virtual machines

The Dapr control plane services can be deployed in high availability (HA) mode to clusters of physical or virtual machines in production. In the diagram below, the Actor `Placement` and security `Sentry` services are started on three different VMs to provide HA control plane. In order to provide name resolution using DNS for the applications running in the cluster, Dapr uses [Hashicorp Consul service]({{< ref setup-nr-consul >}}), also running in HA mode.
The Dapr control plane services can be deployed in high availability (HA) mode to clusters of physical or virtual machines in production. In the diagram below, the Actor `Placement` and security `Sentry` services are started on three different VMs to provide an HA control plane. To provide name resolution using DNS for the applications running in the cluster, Dapr uses multicast DNS by default, but can optionally use the [Hashicorp Consul service]({{< ref setup-nr-consul >}}).

<img src="/images/overview-vms-hosting.png" width=1200 alt="Architecture diagram of Dapr control plane and Consul deployed to VMs in high availability mode">

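As a hedged sketch, switching name resolution from the default mDNS to Consul is done through the `nameResolution` section of the Dapr Configuration; the component name and `selfRegister` option below follow the Consul name resolution reference and should be verified against it:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig
spec:
  nameResolution:
    component: "consul"
    configuration:
      selfRegister: true   # let the Dapr sidecar register the app with Consul
```
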
42 changes: 2 additions & 40 deletions daprdocs/content/en/contributing/roadmap.md
@@ -2,47 +2,9 @@
type: docs
title: "Dapr Roadmap"
linkTitle: "Roadmap"
description: "The Dapr Roadmap is a tool to help with visibility into investments across the Dapr project"
description: "The Dapr Roadmap gives the community visibility into the different priorities of the project"
weight: 30
no_list: true
---


Dapr encourages the community to help with prioritization. A GitHub project board is available to view and provide feedback on proposed issues and track them across development.

[<img src="/images/roadmap.png" alt="Screenshot of the Dapr Roadmap board" width=500 >](https://aka.ms/dapr/roadmap)

{{< button text="View the backlog" link="https://aka.ms/dapr/roadmap" color="primary" >}}
<br />

Please vote by adding a 👍 on the GitHub issues for the feature capabilities you would most like to see Dapr support. This will help the Dapr maintainers understand which features will provide the most value.

Contributions from the community is also welcomed. If there are features on the roadmap that you are interested in contributing to, please comment on the GitHub issue and include your solution proposal.

{{% alert title="Note" color="primary" %}}
The Dapr roadmap includes issues only from the v1.2 release and onwards. Issues closed and released prior to v1.2 are not included.
{{% /alert %}}

## Stages

The Dapr Roadmap progresses through the following stages:

{{< cardpane >}}
{{< card title="**[📄 Backlog](https://github.com/orgs/dapr/projects/52#column-14691591)**" >}}
Issues (features) that need 👍 votes from the community to prioritize. Updated by Dapr maintainers.
{{< /card >}}
{{< card title="**[⏳ Planned (Committed)](https://github.com/orgs/dapr/projects/52#column-14561691)**" >}}
Issues with a proposal and/or targeted release milestone. This is where design proposals are discussed and designed.
{{< /card >}}
{{< card title="**[👩‍💻 In Progress (Development)](https://github.com/orgs/dapr/projects/52#column-14561696)**" >}}
Implementation specifics have been agreed upon and the feature is under active development.
{{< /card >}}
{{< /cardpane >}}
{{< cardpane >}}
{{< card title="**[☑ Done](https://github.com/orgs/dapr/projects/52#column-14561700)**" >}}
The feature capability has been completed and is scheduled for an upcoming release.
{{< /card >}}
{{< card title="**[✅ Released](https://github.com/orgs/dapr/projects/52#column-14659973)**" >}}
The feature is released and available for use.
{{< /card >}}
{{< /cardpane >}}
See [this document](https://github.com/dapr/community/blob/master/roadmap.md) to view the Dapr project's roadmap.
@@ -104,7 +104,7 @@ The Dapr actor runtime provides a simple turn-based access model for accessing a

### State

Transactional state stores can be used to store actor state. To specify which state store to use for actors, specify value of property `actorStateStore` as `true` in the state store component's metadata section. Actors state is stored with a specific scheme in transactional state stores, allowing for consistent querying. Only a single state store component can be used as the state store for all actors. Read the [state API reference]({{< ref state_api.md >}}) and the [actors API reference]({{< ref actors_api.md >}}) to learn more about state stores for actors.
Transactional state stores can be used to store actor state. Regardless of whether your actor actually stores any state, you must set the `actorStateStore` property to `true` in the state store component's metadata section. Actor state is stored with a specific scheme in transactional state stores, allowing for consistent querying. Only a single state store component can be used as the state store for all actors. Read the [state API reference]({{< ref state_api.md >}}) and the [actors API reference]({{< ref actors_api.md >}}) to learn more about state stores for actors.

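For example, a Redis state store component flagged as the actor state store might look like the following minimal sketch (the component name and connection values are placeholders):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.redis
  version: v1
  metadata:
  - name: redisHost
    value: localhost:6379
  - name: redisPassword
    value: ""
  - name: actorStateStore    # required for actors, even if the actor stores no state
    value: "true"
```
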
### Actor timers and reminders

@@ -94,6 +94,75 @@ In this example, at trigger time, which is `@every 1s` according to the `Schedul

At the trigger time, the `prodDBBackupHandler` function is called, executing the desired business logic for this job at trigger time. For example:

#### HTTP

When you create a job using Dapr's Jobs API, Dapr will automatically assume there is an endpoint available at
`/job/<job-name>`. For instance, if you schedule a job named `test`, Dapr expects your application to listen for job
events at `/job/test`. Ensure your application has a handler set up for this endpoint to process the job when it is
triggered. For example:

*Note: The following example is in Go but applies to any programming language.*

```go

func main() {
    ...
    http.HandleFunc("/job/", handleJob)
    http.HandleFunc("/job/<job-name>", specificJob)
    ...
}

func specificJob(w http.ResponseWriter, r *http.Request) {
    // Handle specific triggered job
}

func handleJob(w http.ResponseWriter, r *http.Request) {
    // Handle the triggered jobs
}
```

#### gRPC

When a job reaches its scheduled trigger time, the triggered job is sent back to the application via the following
callback function:

*Note: The following example is in Go but applies to any programming language with gRPC support.*

```go
import rtv1 "github.com/dapr/dapr/pkg/proto/runtime/v1"
...
func (s *JobService) OnJobEventAlpha1(ctx context.Context, in *rtv1.JobEventRequest) (*rtv1.JobEventResponse, error) {
    // Handle the triggered job
}
```

This function processes the triggered jobs within the context of your gRPC server. When you set up the server, ensure that
you register the callback server, which will invoke this function when a job is triggered:

```go
...
js := &JobService{}
rtv1.RegisterAppCallbackAlphaServer(server, js)
```

In this setup, you have full control over how triggered jobs are received and processed, as they are routed directly
through this gRPC method.

#### SDKs

For SDK users, handling triggered jobs is simpler. When a job is triggered, Dapr will automatically route the job to the
event handler you set up during the server initialization. For example, in Go, you'd register the event handler like this:

```go
...
if err = server.AddJobEventHandler("prod-db-backup", prodDBBackupHandler); err != nil {
    log.Fatalf("failed to register job event handler: %v", err)
}
```

Dapr takes care of the underlying routing. When the job is triggered, your `prodDBBackupHandler` function is called with
the triggered job data. Here’s an example of handling the triggered job:

```go
// ...

@@ -103,11 +172,9 @@ func prodDBBackupHandler(ctx context.Context, job *common.JobEvent) error {
if err := json.Unmarshal(job.Data, &jobData); err != nil {
// ...
}
decodedPayload, err := base64.StdEncoding.DecodeString(jobData.Value)
// ...

var jobPayload api.DBBackup
if err := json.Unmarshal(decodedPayload, &jobPayload); err != nil {
if err := json.Unmarshal(job.Data, &jobPayload); err != nil {
// ...
}
fmt.Printf("job %d received:\n type: %v \n typeurl: %v\n value: %v\n extracted payload: %v\n", jobCount, job.JobType, jobData.TypeURL, jobData.Value, jobPayload)
@@ -146,4 +213,4 @@ dapr run --app-id=distributed-scheduler \
## Next steps

- [Learn more about the Scheduler control plane service]({{< ref "concepts/dapr-services/scheduler.md" >}})
- [Jobs API reference]({{< ref jobs_api.md >}})
- [Jobs API reference]({{< ref jobs_api.md >}})
@@ -38,11 +38,9 @@ The diagram below is an overview of how Dapr's service invocation works when inv
<img src="/images/service-invocation-overview-non-dapr-endpoint.png" width=800 alt="Diagram showing the steps of service invocation to non-Dapr endpoints">

1. Service A makes an HTTP call targeting Service B, a non-Dapr endpoint. The call goes to the local Dapr sidecar.
2. Dapr discovers Service B's location using the `HTTPEndpoint` or FQDN URL.
3. Dapr forwards the message to Service B.
4. Service B runs its business logic code.
5. Service B sends a response to Service A's Dapr sidecar.
6. Service A receives the response.
2. Dapr discovers Service B's location using the `HTTPEndpoint` or FQDN URL then forwards the message to Service B.
3. Service B sends a response to Service A's Dapr sidecar.
4. Service A receives the response.

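In step 2, the target is resolved from either a declared `HTTPEndpoint` resource or a plain FQDN URL. As a minimal sketch (the resource name and base URL are placeholders), an `HTTPEndpoint` resource looks like this:

```yaml
apiVersion: dapr.io/v1alpha1
kind: HTTPEndpoint
metadata:
  name: external-api
spec:
  baseUrl: "http://api.example.com"
```

Service A can then invoke it by name through its sidecar, for example at `http://localhost:3500/v1.0/invoke/external-api/method/orders`, assuming the default Dapr HTTP port of 3500.
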
## Using an HTTPEndpoint resource or FQDN URL for non-Dapr endpoints
There are two ways to invoke a non-Dapr endpoint when communicating either to Dapr applications or non-Dapr applications. A Dapr application can invoke a non-Dapr endpoint by providing one of the following:
@@ -106,8 +106,25 @@ Want to skip the quickstarts? Not a problem. You can try out the workflow buildi

## Limitations

- **State stores:** As of the 1.12.0 beta release of Dapr Workflow, using the NoSQL databases as a state store results in limitations around storing internal states. For example, CosmosDB has a maximum single operation item limit of only 100 states in a single request.
- **Horizontal scaling:** As of the 1.12.0 beta release of Dapr Workflow, if you scale out Dapr sidecars or your application pods to more than 2, then the concurrency of the workflow execution drops. It is recommended to test with 1 or 2 instances, and no more than 2.
- **State stores:** Due to underlying constraints in some database choices, most commonly NoSQL databases, you might run into limitations around storing internal states. For example, CosmosDB has a maximum single-operation item limit of 100 states in a single request.
- **Horizontal scaling:** As of the 1.12.0 beta release of Dapr Workflow, it is recommended to use a maximum of two instances of Dapr per workflow application. This limitation is resolved in Dapr 1.14.x when enabling the scheduler service.

To enable the scheduler service to work for Dapr Workflows, make sure you're using Dapr 1.14.x or later and assign the following configuration to your app:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: schedulerconfig
spec:
  tracing:
    samplingRate: "1"
  features:
    - name: SchedulerReminders
      enabled: true
```
See more info about [enabling preview features]({{<ref preview-features>}}).
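
On Kubernetes, a sidecar picks up this configuration when the deployment references it by name through the `dapr.io/config` annotation; a minimal sketch (the app ID is a placeholder):

```yaml
annotations:
  dapr.io/enabled: "true"
  dapr.io/app-id: "workflow-app"
  dapr.io/config: "schedulerconfig"
```
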
## Watch the demo
@@ -749,7 +749,7 @@ def status_monitor_workflow(ctx: wf.DaprWorkflowContext, job: JobStatus):
ctx.call_activity(send_alert, input=f"Job '{job.job_id}' is unhealthy!")
next_sleep_interval = 5 # check more frequently when unhealthy
yield ctx.create_timer(fire_at=ctx.current_utc_datetime + timedelta(seconds=next_sleep_interval))
yield ctx.create_timer(fire_at=ctx.current_utc_datetime + timedelta(minutes=next_sleep_interval))
# restart from the beginning with a new JobStatus input
ctx.continue_as_new(job)
@@ -896,7 +896,7 @@ func StatusMonitorWorkflow(ctx *workflow.WorkflowContext) (any, error) {
}
if status == "healthy" {
job.IsHealthy = true
sleepInterval = time.Second * 60
sleepInterval = time.Minute * 60
} else {
if job.IsHealthy {
job.IsHealthy = false
@@ -905,7 +905,7 @@ func StatusMonitorWorkflow(ctx *workflow.WorkflowContext) (any, error) {
return "", err
}
}
sleepInterval = time.Second * 5
sleepInterval = time.Minute * 5
}
if err := ctx.CreateTimer(sleepInterval).Await(nil); err != nil {
return "", err
@@ -26,6 +26,4 @@ By studying past resource behavior, recommend application resource optimization

The application graph facilitates collaboration between dev and ops by providing a dynamic overview of your services and infrastructure components.

Try out [Conductor Free](https://www.diagrid.io/pricing), ideal for individual developers building and testing Dapr applications on Kubernetes.

{{< button text="Learn more about Diagrid Conductor" link="https://www.diagrid.io/conductor" >}}
27 changes: 12 additions & 15 deletions daprdocs/content/en/getting-started/quickstarts/jobs-quickstart.md
@@ -273,23 +273,20 @@ func deleteJob(ctx context.Context, in *common.InvocationEvent) (out *common.Con
// Handler that handles job events
func handleJob(ctx context.Context, job *common.JobEvent) error {
var jobData common.Job
if err := json.Unmarshal(job.Data, &jobData); err != nil {
return fmt.Errorf("failed to unmarshal job: %v", err)
}
decodedPayload, err := base64.StdEncoding.DecodeString(jobData.Value)
if err != nil {
return fmt.Errorf("failed to decode job payload: %v", err)
}
var jobPayload JobData
if err := json.Unmarshal(decodedPayload, &jobPayload); err != nil {
return fmt.Errorf("failed to unmarshal payload: %v", err)
}
var jobData common.Job
if err := json.Unmarshal(job.Data, &jobData); err != nil {
return fmt.Errorf("failed to unmarshal job: %v", err)
}
fmt.Println("Starting droid:", jobPayload.Droid)
fmt.Println("Executing maintenance job:", jobPayload.Task)
var jobPayload JobData
if err := json.Unmarshal(job.Data, &jobPayload); err != nil {
return fmt.Errorf("failed to unmarshal payload: %v", err)
}
return nil
fmt.Println("Starting droid:", jobPayload.Droid)
fmt.Println("Executing maintenance job:", jobPayload.Task)
return nil
}
```
