From 6c69aecf3d7b9f14262e51a61502246eb2ba4b14 Mon Sep 17 00:00:00 2001 From: Alejandro Pedraza Date: Tue, 15 Aug 2023 09:00:51 -0500 Subject: [PATCH] Port 2.13 changes that were missed in 2-edge Ported #1648, #1642 to the 2-edge docs --- linkerd.io/content/2-edge/features/cni.md | 8 +- .../content/2-edge/features/http-grpc.md | 26 +-- linkerd.io/content/2-edge/features/nft.md | 8 +- .../2-edge/features/request-routing.md | 24 ++ .../content/2-edge/features/server-policy.md | 2 +- .../content/2-edge/features/traffic-split.md | 10 +- .../configuring-dynamic-request-routing.md | 2 +- .../tasks/configuring-per-route-policy.md | 6 +- .../content/2-edge/tasks/using-ingress.md | 205 +++++++++--------- 9 files changed, 165 insertions(+), 126 deletions(-) create mode 100644 linkerd.io/content/2-edge/features/request-routing.md diff --git a/linkerd.io/content/2-edge/features/cni.md b/linkerd.io/content/2-edge/features/cni.md index a70dd80fad..999e5443fb 100644 --- a/linkerd.io/content/2-edge/features/cni.md +++ b/linkerd.io/content/2-edge/features/cni.md @@ -9,10 +9,10 @@ every meshed pod to its proxy. (See the without the application being aware. By default, this rewiring is done with an [Init -Container](../../reference/architecture/#linkerd-init-container) that uses iptables -to install routing rules for the pod, at pod startup time. However, this requires -the `CAP_NET_ADMIN` capability; and in some clusters, this capability is not -granted to pods. +Container](../../reference/architecture/#linkerd-init-container) that uses +iptables to install routing rules for the pod, at pod startup time. However, +this requires the `CAP_NET_ADMIN` capability; and in some clusters, this +capability is not granted to pods. 
To handle this, Linkerd can optionally run these iptables rules in a [CNI plugin](https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/) diff --git a/linkerd.io/content/2-edge/features/http-grpc.md b/linkerd.io/content/2-edge/features/http-grpc.md index 74ddd707ce..bb32e81916 100644 --- a/linkerd.io/content/2-edge/features/http-grpc.md +++ b/linkerd.io/content/2-edge/features/http-grpc.md @@ -4,18 +4,18 @@ description = "Linkerd will automatically enable advanced features (including me weight = 1 +++ -Linkerd can proxy all TCP connections, and will automatically enable advanced -features (including metrics, load balancing, retries, and more) for HTTP, -HTTP/2, and gRPC connections. (See -[TCP Proxying and Protocol Detection](../protocol-detection/) for details of how -this detection happens). +Linkerd can proxy all TCP connections. For HTTP connections (including HTTP/1.0, +HTTP/1.1, HTTP/2, and gRPC connections), it will automatically enable advanced +L7 features including [request-level metrics](../telemetry/), [latency-aware +load balancing](../load-balancing/), [retries](../retries-and-timeouts/), and +more. -## Notes +(See [TCP Proxying and Protocol Detection](../protocol-detection/) for details of +how this detection happens automatically, and how it can sometimes fail.) -* gRPC applications that use [grpc-go][grpc-go] must use version 1.3 or later due - to a [bug](https://github.com/grpc/grpc-go/issues/1120) in earlier versions. -* gRPC applications that use [@grpc/grpc-js][grpc-js] must use version 1.1.0 or later - due to a [bug](https://github.com/grpc/grpc-node/issues/1475) in earlier versions. - -[grpc-go]: https://github.com/grpc/grpc-go -[grpc-js]: https://github.com/grpc/grpc-node/tree/master/packages/grpc-js +Note that while Linkerd does [zero-config mutual TLS](../automatic-mtls/), it +cannot decrypt TLS connections initiated by the outside world. 
For example, if +you have a TLS connection from outside the cluster, or if your application does +HTTP/2 plus TLS, Linkerd will treat these connections as raw TCP streams. To +take advantage of Linkerd's full array of L7 features, communication between +meshed pods must be TLS'd by Linkerd, not by the application itself. diff --git a/linkerd.io/content/2-edge/features/nft.md b/linkerd.io/content/2-edge/features/nft.md index 502b7053fd..beb0a29504 100644 --- a/linkerd.io/content/2-edge/features/nft.md +++ b/linkerd.io/content/2-edge/features/nft.md @@ -1,6 +1,6 @@ +++ -title = "Proxy Init Iptables Modes" -description = "Linkerd's init container can run in two separate modes, nft or legacy." +title = "Iptables-nft Support" +description = "Linkerd's init container can use iptables-nft on systems that require it." +++ To transparently route TCP traffic through the proxy, without any awareness @@ -8,7 +8,7 @@ from the application, Linkerd will configure a set of [firewall rules](../../reference/iptables/) in each injected pod. Configuration can be done either through an [init container](../../reference/architecture/#linkerd-init-container) or through a -[CNI plugin](../cni/) +[CNI plugin](../cni/). Linkerd's init container can be run in two separate modes: `legacy` or `nft`. The difference between the two modes is what variant of `iptables` they will use @@ -26,7 +26,7 @@ two, is which binary they will call into: This is the default mode that `linkerd-init` runs in, and is supported by most operating systems and distributions. 2. `nft` mode will call into `iptables-nft`, which uses the newer `nf_tables` - kernel API. The [`nftables`] utilities are used by newer operating systems to + kernel API. The `nftables` utilities are used by newer operating systems to configure firewalls by default. 
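The ported nft page could show mode selection concretely. As a sketch (the `proxyInit.iptablesMode` value is an assumption based on recent Linkerd releases; verify against the chart's values reference for your version), `nft` mode can be selected at install time:

```yaml
# Helm values fragment (sketch): run linkerd-init in nft mode instead of
# the default legacy mode. `proxyInit.iptablesMode` is an assumed key;
# confirm it exists in your chart version before applying.
proxyInit:
  iptablesMode: "nft"
```

The equivalent CLI form would presumably be `linkerd install --set proxyInit.iptablesMode=nft | kubectl apply -f -`.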
[`iptables-legacy`]: https://manpages.debian.org/bullseye/iptables/iptables-legacy.8.en.html diff --git a/linkerd.io/content/2-edge/features/request-routing.md b/linkerd.io/content/2-edge/features/request-routing.md new file mode 100644 index 0000000000..6daef28b0f --- /dev/null +++ b/linkerd.io/content/2-edge/features/request-routing.md @@ -0,0 +1,24 @@ ++++ +title = "Dynamic Request Routing" +description = "Linkerd can route individual HTTP requests based on their properties." ++++ + +Linkerd's dynamic request routing allows you to control routing of HTTP and gRPC +traffic based on properties of the request, including verb, method, query +parameters, and headers. For example, you can route all requests that match +a specific URL pattern to a given backend; or you can route traffic with a +particular header to a different service. + +This is an example of _client-side policy_, i.e. ways to dynamically configure +Linkerd's behavior when it is sending requests from a meshed pod. + +Dynamic request routing is built on Kubernetes's Gateway API types, especially +[HTTPRoute](https://gateway-api.sigs.k8s.io/api-types/httproute/). + +This feature extends Linkerd's traffic routing capabilities beyond those of +[traffic splits](../traffic-split/), which only provide percentage-based +splits. + +## Learning more + +- [Guide to configuring routing policy](../../tasks/configuring-dynamic-request-routing/) diff --git a/linkerd.io/content/2-edge/features/server-policy.md b/linkerd.io/content/2-edge/features/server-policy.md index 2bad7ae96b..eb688edc33 100644 --- a/linkerd.io/content/2-edge/features/server-policy.md +++ b/linkerd.io/content/2-edge/features/server-policy.md @@ -130,5 +130,5 @@ result in an abrupt termination of those connections. 
## Learning more

-- [Policy reference](../../reference/authorization-policy/)
+- [Authorization policy reference](../../reference/authorization-policy/)
- [Guide to configuring per-route policy](../../tasks/configuring-per-route-policy/)
diff --git a/linkerd.io/content/2-edge/features/traffic-split.md b/linkerd.io/content/2-edge/features/traffic-split.md
index c6b9582eb9..725bbce8e3 100644
--- a/linkerd.io/content/2-edge/features/traffic-split.md
+++ b/linkerd.io/content/2-edge/features/traffic-split.md
@@ -13,8 +13,14 @@ for example, by slowly easing traffic off of an older version of a service and
onto a newer version.

{{< note >}}
-If working with headless services, traffic splits cannot be retrieved. Linkerd
-reads service discovery information based off the target IP address, and if that
+This feature will eventually be supplanted by the newer [dynamic request
+routing](../request-routing/) capabilities, which do not require the SMI
+extension.
+{{< /note >}}
+
+{{< note >}}
+TrafficSplits cannot be used with headless services. Linkerd reads
+service discovery information based on the target IP address, and if that
 happens to be a pod IP address then it cannot tell which service the pod
 belongs to.
 {{< /note >}}
diff --git a/linkerd.io/content/2-edge/tasks/configuring-dynamic-request-routing.md b/linkerd.io/content/2-edge/tasks/configuring-dynamic-request-routing.md
index 319f5aa749..ec7d32ed47 100644
--- a/linkerd.io/content/2-edge/tasks/configuring-dynamic-request-routing.md
+++ b/linkerd.io/content/2-edge/tasks/configuring-dynamic-request-routing.md
@@ -24,7 +24,7 @@ request routing, by deploying in the cluster two backend and one frontend
podinfo pods. Traffic will flow to just one backend, and then we'll switch
traffic to the other one just by adding a header to the frontend requests.
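The header-based switch this intro describes can be sketched with an HTTPRoute along the following lines (the resource names, namespace, port, header, and the `policy.linkerd.io/v1beta3` API version are assumptions drawn from the 2.13-era guide; the manifests in the task itself are authoritative):

```yaml
apiVersion: policy.linkerd.io/v1beta3
kind: HTTPRoute
metadata:
  name: backend-router          # illustrative name
  namespace: test
spec:
  parentRefs:                   # attach the route to the first backend's Service
    - name: backend-a-podinfo
      kind: Service
      group: core
      port: 9898
  rules:
    - matches:                  # requests carrying this header...
        - headers:
            - name: x-request-id
              value: alternative
      backendRefs:              # ...are steered to the second backend
        - name: backend-b-podinfo
          port: 9898
    - backendRefs:              # all other traffic keeps flowing to the first
        - name: backend-a-podinfo
          port: 9898
```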
-## Set Up +## Setup First we create the `test` namespace, annotated by linkerd so all pods that get created there get injected with the linkerd proxy: diff --git a/linkerd.io/content/2-edge/tasks/configuring-per-route-policy.md b/linkerd.io/content/2-edge/tasks/configuring-per-route-policy.md index 881126f092..e724da6418 100644 --- a/linkerd.io/content/2-edge/tasks/configuring-per-route-policy.md +++ b/linkerd.io/content/2-edge/tasks/configuring-per-route-policy.md @@ -1,12 +1,12 @@ +++ -title = "Configuring Per-Route Policy" +title = "Configuring Fine-grained Authorization Policy" description = "Fine-grained authorization policies can be configured for individual HTTP routes." aliases = [] +++ -In addition to [enforcing authorization policies at the service +In addition to [enforcing authorization at the service level](../restricting-access/), finer-grained authorization policies can also be configured for individual HTTP routes. In this example, we'll use the Books demo app to demonstrate how to control which clients can access particular routes on @@ -16,7 +16,7 @@ This is an advanced example that demonstrates more complex policy configuration. For a basic introduction to Linkerd authorization policy, start with the [Restricting Access to Services](../restricting-access/) example. For more comprehensive documentation of the policy resources, see the -[Policy reference docs](../../reference/authorization-policy/). +[Authorization policy reference](../../reference/authorization-policy/). ## Prerequisites diff --git a/linkerd.io/content/2-edge/tasks/using-ingress.md b/linkerd.io/content/2-edge/tasks/using-ingress.md index 73a5c65fce..667371e65e 100644 --- a/linkerd.io/content/2-edge/tasks/using-ingress.md +++ b/linkerd.io/content/2-edge/tasks/using-ingress.md @@ -1,38 +1,71 @@ +++ -title = "Ingress traffic" -description = "Linkerd works alongside your ingress controller of choice." 
+title = "Handling ingress traffic" +description = "Linkerd can work alongside your ingress controller of choice." +++ -For reasons of simplicity and composability, Linkerd doesn't provide a built-in -ingress. Instead, Linkerd is designed to work with existing Kubernetes ingress -solutions. +Ingress traffic refers to traffic that comes into your cluster from outside the +cluster. For reasons of simplicity and composability, Linkerd itself doesn't +provide a built-in ingress solution for handling traffic coming into the +cluster. Instead, Linkerd is designed to work with the many existing Kubernetes +ingress options. -Combining Linkerd and your ingress solution requires two things: +Combining Linkerd and your ingress solution of choice requires two things: -1. Configuring your ingress to support Linkerd. -2. Meshing your ingress pods so that they have the Linkerd proxy installed. +1. Configuring your ingress to support Linkerd (if necessary). +2. Meshing your ingress pods. -Meshing your ingress pods will allow Linkerd to provide features like L7 -metrics and mTLS the moment the traffic is inside the cluster. (See -[Adding your service](../adding-your-service/) for instructions on how to mesh -your ingress.) +Strictly speaking, meshing your ingress pods is not required to allow traffic +into the cluster. However, it is recommended, as it allows Linkerd to provide +features like L7 metrics and mutual TLS the moment the traffic enters the +cluster. -Note that, as explained below, some ingress options need to be meshed in -"ingress" mode, which means injecting with the `linkerd.io/inject: ingress` -annotation rather than the default `enabled`. It's possible to use this -annotation at the namespace level, but it's recommended to do it at the -individual workload level instead. 
The reason is that many ingress
-implementations also place other types of workloads under the same namespace for
-tasks other than routing and therefore you'd rather inject them using the
-default `enabled` mode (or some you wouldn't want to inject at all, such as
-Jobs).
+## Handling external TLS
+
+One common job for ingress controllers is to terminate TLS from the outside
+world, e.g. HTTPS calls.
+
+Like all pods, traffic to a meshed ingress has both an inbound and an outbound
+component. If your ingress terminates TLS, Linkerd will treat this inbound TLS
+traffic as an opaque TCP stream, and will only be able to provide byte-level
+metrics for this side of the connection.
+
+Once the ingress controller terminates the TLS connection and issues the
+corresponding HTTP or gRPC traffic to internal services, these outbound calls
+will have the full set of metrics and mTLS support.
+
+## Ingress mode {#ingress-mode}
+
+Most ingress controllers can be meshed like any other service, i.e. by
+applying the `linkerd.io/inject: enabled` annotation at the appropriate level.
+(See [Adding your services to Linkerd](../adding-your-service/) for more.)
+
+However, some ingress options need to be meshed in a special "ingress" mode,
+using the `linkerd.io/inject: ingress` annotation.
+
+The instructions below will describe, for each ingress, whether it requires this
+mode of operation.
+
+If you're using "ingress" mode, we recommend that you set this ingress
+annotation at the workload level rather than at the namespace level, so that
+other resources in the ingress namespace can be meshed normally.

{{< warning id=open-relay-warning >}}
-When an ingress is meshed in `ingress` mode by using `linkerd.io/inject:
-ingress`, the ingress _must_ be configured to remove the `l5d-dst-override`
-header to avoid creating an open relay to cluster-local and external endpoints.
+When an ingress is meshed in ingress mode, you _must_ configure it to remove +the `l5d-dst-override` header to avoid creating an open relay to cluster-local +and external endpoints. {{< /warning >}} +{{< note >}} +Linkerd versions 2.13.0 through 2.13.4 had a bug whereby the `l5d-dst-override` +header was *required* in ingress mode, or the request would fail. This bug was +fixed in 2.13.5, and was not present prior to 2.13.0. +{{< /note >}} + +For more on ingress mode and why it's necessary, see [Ingress +details](#ingress-details) below. + +## Common ingress options for Linkerd + Common ingress options that Linkerd has been used with include: - [Ambassador (aka Emissary)](#ambassador) @@ -46,23 +79,15 @@ Common ingress options that Linkerd has been used with include: - [Kong](#kong) - [Haproxy](#haproxy) - [EnRoute](#enroute) -- [Ingress details](#ingress-details) For a quick start guide to using a particular ingress, please visit the section -for that ingress. If your ingress is not on that list, never fear—it likely -works anyways. See [Ingress details](#ingress-details) below. - -{{< note >}} -If your ingress terminates TLS, this TLS traffic (e.g. HTTPS calls from outside -the cluster) will pass through Linkerd as an opaque TCP stream and Linkerd will -only be able to provide byte-level metrics for this side of the connection. The -resulting HTTP or gRPC traffic to internal services, of course, will have the -full set of metrics and mTLS support. -{{< /note >}} +for that ingress below. If your ingress is not on that list, never fear—it +likely works anyways. See [Ingress details](#ingress-details) below. -## Ambassador (aka Emissary) {#ambassador} +## Emissary-Ingress (aka Ambassador) {#ambassador} -Ambassador can be meshed normally. An example manifest for configuring the +Emissary-Ingress can be meshed normally: it does not require the [ingress +mode](#ingress-mode) annotation. 
An example manifest for configuring Ambassador / Emissary is as follows: ```yaml @@ -77,15 +102,18 @@ spec: service: http://web-svc.emojivoto.svc.cluster.local:80 ``` -For a more detailed guide, we recommend reading [Installing the Emissary -ingress with the Linkerd service +For a more detailed guide, we recommend reading [Installing the Emissary ingress +with the Linkerd service mesh](https://buoyant.io/2021/05/24/emissary-and-linkerd-the-best-of-both-worlds/). ## Nginx -Nginx can be meshed normally, but the +Nginx can be meshed normally: it does not require the [ingress +mode](#ingress-mode) annotation. + +The [`nginx.ingress.kubernetes.io/service-upstream`](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#service-upstream) -annotation should be set to `"true"`. +annotation should be set to `"true"`. For example: ```yaml # apiVersion: networking.k8s.io/v1beta1 # for k8s < v1.19 @@ -105,13 +133,11 @@ spec: number: 80 ``` -If using [this Helm chart](https://artifacthub.io/packages/helm/ingress-nginx/ingress-nginx), -note the following. - -The `namespace` containing the ingress controller (when using the above -Helm chart) should NOT be annotated with `linkerd.io/inject: enabled`. -Rather, annotate the `kind: Deployment` (`.spec.template.metadata.annotations`) -of the Nginx by setting `values.yaml` like this: +If using [the ingress-nginx Helm +chart](https://artifacthub.io/packages/helm/ingress-nginx/ingress-nginx), note +that the namespace containing the ingress controller should NOT be annotated +with `linkerd.io/inject: enabled`. Instead, you should annotate the `kind: +Deployment` (`.spec.template.metadata.annotations`). For example: ```yaml controller: @@ -120,37 +146,25 @@ controller: ... ``` -The reason is as follows. 
-
-That Helm chart defines (among other things) two Kubernetes resources:
+The reason is that this Helm chart defines (among other things) two
+Kubernetes resources:

1) `kind: ValidatingWebhookConfiguration`. This creates a short-lived pod named
-   something like `ingress-nginx-admission-create-t7b77` which terminates in 1
-   or 2 seconds.
+   something like `ingress-nginx-admission-create-XXXXX` which quickly terminates.

2) `kind: Deployment`. This creates a long-running pod named something like
-`ingress-nginx-controller-644cc665c9-5zmrp` which contains the Nginx docker
+`ingress-nginx-controller-XXXX` which contains the Nginx docker
 container.

-However, had we set `linkerd.io/inject: enabled` at the `namespace` level,
-a long-running sidecar would be injected into the otherwise short-lived
-pod in (1). This long-running sidecar would prevent the pod as a whole from
-terminating naturally (by design a few seconds after creation) even if the
-original base admission container had terminated.
-
-Without (1) being considered "done", the creation of (2) would wait forever
-in an infinite timeout loop.
-
-The above analysis only applies to that particular Helm chart. Other charts
-may have a different behaviour and different file structure for `values.yaml`.
-Be sure to check the nginx chart that you are using to set the annotation
-appropriately, if necessary.
+Setting the injection annotation at the namespace level would mesh the
+short-lived pod, which would prevent it from terminating as designed.

## Traefik

-Traefik should be meshed with ingress mode enabled([*](#open-relay-warning)),
-i.e. with the `linkerd.io/inject: ingress` annotation rather than the default
-`enabled`. Instructions differ for 1.x and 2.x versions of Traefik.
+Traefik should be meshed with [ingress mode enabled](#ingress-mode), i.e. with
+the `linkerd.io/inject: ingress` annotation rather than the default `enabled`.
+
+Instructions differ for 1.x and 2.x versions of Traefik.
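Whichever Traefik version is in use, the ingress-mode annotation itself goes on the workload's pod template rather than on the namespace. A minimal sketch (the deployment name and namespace are illustrative, and the fragment omits the selector and container spec a real Deployment needs):

```yaml
# Deployment fragment (sketch): only the annotation is the point here.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: traefik        # illustrative
  namespace: traefik   # illustrative
spec:
  template:
    metadata:
      annotations:
        linkerd.io/inject: ingress   # ingress mode, not the default "enabled"
```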
### Traefik 1.x {#traefik-1x}

@@ -263,8 +277,8 @@ spec:

## GCE

-The GCE ingress should be meshed with ingress mode
-enabled([*](#open-relay-warning)), i.e. with the `linkerd.io/inject: ingress`
+The GCE ingress should be meshed with [ingress mode
+enabled](#ingress-mode), i.e. with the `linkerd.io/inject: ingress`
 annotation rather than the default `enabled`.

 This example shows how to use a
 [Google Cloud Static External IP
@@ -308,9 +322,8 @@ certificate is provisioned, the ingress should be visible to the Internet.

 ## Gloo

-Gloo should be meshed with ingress mode enabled([*](#open-relay-warning)), i.e.
-with the `linkerd.io/inject: ingress` annotation rather than the default
-`enabled`.
+Gloo should be meshed with [ingress mode enabled](#ingress-mode), i.e. with the
+`linkerd.io/inject: ingress` annotation rather than the default `enabled`.

 As of Gloo v0.13.20, Gloo has native integration with Linkerd, so that the
 required Linkerd headers are added automatically. Assuming you installed Gloo
@@ -332,9 +345,8 @@ glooctl add route --path-prefix=/ --dest-name booksapp-webapp-7000

 ## Contour

-Contour should be meshed with ingress mode enabled([*](#open-relay-warning)),
-i.e. with the `linkerd.io/inject: ingress` annotation rather than the default
-`enabled`.
+Contour should be meshed with [ingress mode enabled](#ingress-mode), i.e. with
+the `linkerd.io/inject: ingress` annotation rather than the default `enabled`.

 The following example uses the
 [Contour getting started](https://projectcontour.io/getting-started/) documentation
@@ -424,9 +436,8 @@ the `l5d-dst-override` headers will be set automatically.

 ### Kong

-Kong should be meshed with ingress mode enabled([*](#open-relay-warning)), i.e.
-with the `linkerd.io/inject: ingress` annotation rather than the default
-`enabled`.
+Kong should be meshed with [ingress mode enabled](#ingress-mode), i.e. with the
+`linkerd.io/inject: ingress` annotation rather than the default `enabled`.
This example will use the following elements:

@@ -513,9 +524,8 @@ haproxytech](https://www.haproxy.com/documentation/kubernetes/latest/) and not
 the [haproxy-ingress controller](https://haproxy-ingress.github.io/).
 {{< /note >}}

-Haproxy should be meshed with ingress mode enabled([*](#open-relay-warning)),
-i.e. with the `linkerd.io/inject: ingress` annotation rather than the default
-`enabled`.
+Haproxy should be meshed with [ingress mode enabled](#ingress-mode), i.e. with
+the `linkerd.io/inject: ingress` annotation rather than the default `enabled`.

 The simplest way to use Haproxy as an ingress for Linkerd is to configure a
 Kubernetes `Ingress` resource with the
@@ -553,8 +563,7 @@ in an ingress manifest as each one needs their own

 ## EnRoute OneStep {#enroute}

-Meshing EnRoute with linkerd involves only setting one
-flag globally:
+Meshing EnRoute with Linkerd involves only setting one flag globally:

 ```yaml
 apiVersion: enroute.saaras.io/v1
@@ -574,14 +583,14 @@ spec:
 ```

 EnRoute can now be meshed by injecting Linkerd proxy in EnRoute pods.
-Using the ```linkerd``` utility, we can update the EnRoute deployment
+Using the `linkerd` utility, we can update the EnRoute deployment
 to inject Linkerd proxy.

 ```bash
 kubectl get -n enroute-demo deploy -o yaml | linkerd inject - | kubectl apply -f -
 ```

-The ```linkerd_enabled``` flag automatically sets `l5d-dst-override` header.
+The `linkerd_enabled` flag automatically sets the `l5d-dst-override` header.
 The flag also delegates endpoint selection for routing to linkerd.

 More details and customization can be found in,
@@ -593,22 +602,22 @@ Linkerd](https://getenroute.io/blog/end-to-end-encryption-mtls-linkerd-enroute/)

 In this section we cover how Linkerd interacts with ingress controllers in
 general.

-In general, Linkerd can be used with any ingress controller. In order for
-Linkerd to properly apply features such as route-based metrics and traffic
-splitting, Linkerd needs the IP/port of the Kubernetes Service.
However, by -default, many ingresses do their own endpoint selection and pass the IP/port of -the destination Pod, rather than the Service as a whole. +In order for Linkerd to properly apply L7 features such as route-based metrics +and dynamic traffic routing, Linkerd needs the ingress controller to connect +to the IP/port of the destination Kubernetes Service. However, by default, +many ingresses do their own endpoint selection and connect directly to the +IP/port of the destination Pod, rather than the Service. Thus, combining an ingress with Linkerd takes one of two forms: -1. Configure the ingress to pass the IP and port of the Service as the +1. Configure the ingress to connect to the IP and port of the Service as the destination, i.e. to skip its own endpoint selection. (E.g. see [Nginx](#nginx) above.) -2. If this is not possible, then configure the ingress to pass the Service - IP/port in a header such as `l5d-dst-override`, `Host`, or `:authority`, and - configure Linkerd in *ingress* mode. In this mode, it will read from one of - those headers instead. +2. Alternatively, configure the ingress to pass the Service IP/port in a + header such as `l5d-dst-override`, `Host`, or `:authority`, and configure + Linkerd in *ingress* mode. In this mode, it will read from one of those + headers instead. The most common approach in form #2 is to use the explicit `l5d-dst-override` header.
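In form #2, the header value is the fully-qualified authority of the destination Service. For instance (service name and port are illustrative, following the emojivoto examples used elsewhere in this page), an ingress would set something like:

```
l5d-dst-override: web-svc.emojivoto.svc.cluster.local:80
```

With this header present and the proxy in ingress mode, Linkerd routes the request to that Service rather than to whatever pod IP the ingress selected.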