k8s-keystone-auth

Kubernetes webhook authentication and authorization for OpenStack Keystone. With k8s-keystone-auth, the Kubernetes cluster administrator only needs to know the relevant OpenStack project names or roles, while user management remains the responsibility of the OpenStack project admin. As a result, OpenStack users can be given access to the Kubernetes cluster.

The k8s-keystone-auth service can run either as a static pod (controlled by kubelet) or as a normal Kubernetes service.

Prerequisites

  • You already have a Kubernetes cluster (version >= 1.9.3) up and running and you have admin permissions for the cluster.
  • You have an OpenStack environment and admin credentials.

If you run the k8s-keystone-auth service as a static pod, the pod creation can be part of the Kubernetes cluster initialization process.

Deploy k8s-keystone-auth webhook server

Prepare the authorization policy (optional)

The authorization feature is optional: you can choose to deploy the k8s-keystone-auth webhook server for authentication only and rely on Kubernetes RBAC for authorization. See more details here. However, k8s-keystone-auth authorization provides more flexible configuration than Kubernetes native RBAC.

The authorization policy can be specified through the name of an existing ConfigMap in the cluster. This way, the policy can be changed dynamically without restarting the k8s-keystone-auth service. The ConfigMap needs to be created before running the k8s-keystone-auth service.

Note that after changing the authorization policy, the new policy may not take effect immediately, because the kube-apiserver caches authorization decisions for the duration configured by --authorization-webhook-cache-authorized-ttl (default 5m).
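If faster policy propagation is needed during testing, this cache TTL can be lowered on the kube-apiserver; the value below is only an illustrative example:

```shell
--authorization-webhook-cache-authorized-ttl=30s
```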

The k8s-keystone-auth service supports two versions of the policy definition. Version 2 is recommended because of its better flexibility; however, both versions are described in this guide. You can find more information about version 2 in Authorization policy definition (version 2).

For testing purposes, in the following ConfigMap we only allow users in the project demo with the member role in OpenStack to query Pod information in the default namespace. We create the ConfigMap in the kube-system namespace because we will also run the k8s-keystone-auth service there.

Version 1:

$ cat <<EOF > /etc/kubernetes/keystone-auth/policy-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: k8s-auth-policy
  namespace: kube-system
data:
  policies: |
    [
      {
        "resource": {
          "verbs": ["get", "list", "watch"],
          "resources": ["pods"],
          "version": "*",
          "namespace": "default"
        },
        "match": [
          {
            "type": "role",
            "values": ["member"]
          },
          {
            "type": "project",
            "values": ["demo"]
          }
        ]
      }
    ]
EOF
$ kubectl apply -f /etc/kubernetes/keystone-auth/policy-config.yaml
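The evaluation semantics of a version 1 rule can be sketched roughly as follows. This is an illustrative Python sketch, not the actual webhook code; the field names mirror the policy JSON above, and the simplified user attributes mirror the Keystone roles/project of the demo user. A request is allowed when it fits the rule's resource block and all of the rule's match entries hold.

```python
# Illustrative sketch of version 1 policy evaluation (not the real webhook code).
def rule_allows(rule, user, request):
    """Return True if a single version 1 policy rule permits the request."""
    res = rule["resource"]
    if request["verb"] not in res["verbs"]:
        return False
    if request["resource"] not in res["resources"]:
        return False
    if res["namespace"] not in ("*", request["namespace"]):
        return False
    # ALL match entries must hold; each entry passes if the user has
    # at least one of the listed values for that attribute type.
    for m in rule["match"]:
        if not set(m["values"]) & set(user.get(m["type"], [])):
            return False
    return True

# The demo rule from the ConfigMap above.
demo_rule = {
    "resource": {
        "verbs": ["get", "list", "watch"],
        "resources": ["pods"],
        "version": "*",
        "namespace": "default",
    },
    "match": [
        {"type": "role", "values": ["member"]},
        {"type": "project", "values": ["demo"]},
    ],
}

# Simplified user attributes for the demo user.
user = {"role": ["member", "load-balancer_member"], "project": ["demo"]}

print(rule_allows(demo_rule, user,
                  {"verb": "get", "resource": "pods", "namespace": "default"}))    # True
print(rule_allows(demo_rule, user,
                  {"verb": "create", "resource": "pods", "namespace": "default"}))  # False
```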

Version 2:

Please refer to configmap for the definition of the policy ConfigMap.

$ kubectl apply -f examples/webhook/keystone-policy-configmap.yaml

As you can see, the version 2 policy definition is much simpler and more succinct.

Non-resource permission

For many scenarios clients require access to non-resource paths. Non-resource paths include: /api, /apis, /metrics, /resetMetrics, /logs, /debug, /healthz, /swagger-ui/, /swaggerapi/, /ui, and /version. Clients require access to /api, /api/*, /apis, /apis/*, and /version to discover what resources and versions are present on the server. Access to other non-resource paths can be disallowed without restricting access to the REST API.

Sub-resource permission

To describe a subresource (e.g. logs or exec) of a certain resource (e.g. pod), use / to combine the resource and subresource. This is similar to the way resources are described in the rules list of a Kubernetes Role object.

For an example of using subresources as well as non-resource paths, see the policy below. With this policy we only want to allow the client to kubectl exec into pods, and only in the utility namespace. For this purpose we define the resource as "resources": ["pods/exec"]. But in order for the client to be able to discover pods and versions, as mentioned above, we also need to allow read access to the non-resource paths /api, /api/*, /apis and /apis/*. At the moment only one path (type string) is supported per nonresource JSON object, which is why there is a separate entry for each of them.

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: k8s-auth-policy
  namespace: kube-system
data:
  policies: |
    [
      {
        "nonresource": {
          "verbs": ["get"],
          "path": "/api"
        },
        "match": [
          {
            "type": "role",
            "values": ["utility_exec"]
          }
        ]
      },
      {
        "nonresource": {
          "verbs": ["get"],
          "path": "/api/*"
        },
        "match": [
          {
            "type": "role",
            "values": ["utility_exec"]
          }
        ]
      },
      {
        "nonresource": {
          "verbs": ["get"],
          "path": "/apis"
        },
        "match": [
          {
            "type": "role",
            "values": ["utility_exec"]
          }
        ]
      },
      {
        "nonresource": {
          "verbs": ["get"],
          "path": "/apis/*"
        },
        "match": [
          {
            "type": "role",
            "values": ["utility_exec"]
          }
        ]
      },
      {
        "resource": {
          "verbs": ["create"],
          "resources": ["pods/exec"],
          "version": "*",
          "namespace": "utility"
        },
        "match": [
          {
            "type": "role",
            "values": ["utility_exec"]
          }
        ]
      }
    ]
EOF

Prepare the service certificates

For security reasons, the k8s-keystone-auth service runs as an HTTPS service, so TLS certificates need to be configured. This example uses a self-signed certificate, but for a production cluster it is important to use certificates signed by a trusted issuer.

$ openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365 -nodes -subj /CN=k8s-keystone-auth.kube-system/
$ kubectl --namespace kube-system create secret tls keystone-auth-certs --cert=cert.pem --key=key.pem

Create service account for k8s-keystone-auth

In order to support dynamic policy configuration, the k8s-keystone-auth service needs to talk to the API server to query ConfigMap resources. You can either specify a kubeconfig file or rely on the in-cluster configuration capability to instantiate the Kubernetes client; the latter approach is recommended.

Next, we create a new service account keystone-auth and grant the cluster admin role to it. Please refer to rbac for the definition of the RBAC resources such as ClusterRoles and RoleBindings.

$ kubectl apply -f examples/webhook/keystone-rbac.yaml

Deploy k8s-keystone-auth

Now we are ready to create the k8s-keystone-auth deployment and expose it as a service. There are several things to note in the deployment manifest:

  • We are using the image registry.k8s.io/provider-os/k8s-keystone-auth:v1.28.0.
  • We use the k8s-auth-policy ConfigMap created above.
  • The pod uses the service account keystone-auth created above.
  • We use the keystone-auth-certs secret created above to inject the certificates into the pod.
  • The value of keystone_auth_url needs to be changed according to your environment.
$ kubectl apply -f examples/webhook/keystone-deployment.yaml
$ kubectl apply -f examples/webhook/keystone-service.yaml

Test k8s-keystone-auth service

  • Check k8s-keystone-auth webhook pod.

    First check if the k8s-keystone-auth pod is up and running:

    $ kubectl -n kube-system get deployment k8s-keystone-auth
    NAME                READY   UP-TO-DATE   AVAILABLE   AGE
    k8s-keystone-auth   2/2     2            2           94m
    $ kubectl -n kube-system get svc
    NAME                            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
    k8s-keystone-auth-service       ClusterIP   10.103.122.254   <none>        8443/TCP                 9m50s
    kube-dns                        ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP,9153/TCP   67d

    Before we continue to configure the kube-apiserver, we can test the k8s-keystone-auth service by sending HTTP requests directly to make sure the service works as expected.

  • Authentication

    Get a token of an OpenStack user from the demo project, then send a request to the k8s-keystone-auth service. Since this service is only exposed within the cluster, run a temporary pod within the kube-system namespace to access the webhook endpoint.

    $ token=... # Get token from Keystone
    $ kubectl run curl --rm -it --restart=Never --image curlimages/curl -- \
      -k -XPOST https://k8s-keystone-auth-service.kube-system:8443/webhook -d '
    {
      "apiVersion": "authentication.k8s.io/v1",
      "kind": "TokenReview",
      "metadata": {
        "creationTimestamp": null
      },
      "spec": {
        "token": "'$token'"
      }
    }'

    You should see the response from the k8s-keystone-auth service if it is configured correctly. You may notice that besides the user's Keystone group, the user's project ID is also included in the groups field, so the cluster admin can configure RBAC role bindings based on the groups without involving webhook authorization.

    {
        "apiVersion": "authentication.k8s.io/v1",
        "kind": "TokenReview",
        "metadata": {
            "creationTimestamp": null
        },
        "spec": {
            "token": "<truncated>"
        },
        "status": {
            "authenticated": true,
            "user": {
                "extra": {
                    "alpha.kubernetes.io/identity/project/id": [
                        "423d41d3a02f4b77b4a9bbfbc3a1b3c6"
                    ],
                    "alpha.kubernetes.io/identity/project/name": [
                        "demo"
                    ],
                    "alpha.kubernetes.io/identity/roles": [
                        "member",
                        "load-balancer_member"
                    ],
                    "alpha.kubernetes.io/identity/user/domain/id": [
                        "default"
                    ],
                    "alpha.kubernetes.io/identity/user/domain/name": [
                        "Default"
                    ]
                },
                "groups": [
                    "mygroup",
                    "423d41d3a02f4b77b4a9bbfbc3a1b3c6"
                ],
                "uid": "ff369be2cbb14ee9bb775c0bcf2a1061",
                "username": "demo"
            }
        }
    }
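    Because the project ID shows up as a group, a cluster admin could, for example, bind the built-in view ClusterRole to everyone in the demo project with a plain RBAC RoleBinding. This is an illustrative sketch; the group name is the project ID from the response above, and the binding name is arbitrary:

    ```yaml
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: demo-project-view   # arbitrary name for this example
      namespace: default
    subjects:
    - kind: Group
      name: "423d41d3a02f4b77b4a9bbfbc3a1b3c6"  # Keystone project ID group
      apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: ClusterRole
      name: view
      apiGroup: rbac.authorization.k8s.io
    ```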
  • Authorization (optional)

    Please skip this validation if you are using Kubernetes RBAC for authorization.

    From the above response, we know the demo user in the demo project does have the member role associated. Based on the authorization policy defined in examples/webhook/keystone-policy-configmap.yaml, the user has read access to pods:

    $ kubectl run curl --rm -it --restart=Never --image curlimages/curl -- \
      -k -XPOST https://k8s-keystone-auth-service.kube-system:8443/webhook -d '
    {
      "apiVersion": "authorization.k8s.io/v1",
      "kind": "SubjectAccessReview",
      "spec": {
        "resourceAttributes": {
          "namespace": "default",
          "verb": "get",
          "groups": "",
          "resource": "pods",
          "name": "pod1"
        },
        "user": "demo",
        "groups": ["423d41d3a02f4b77b4a9bbfbc3a1b3c6"],
        "extra": {
            "alpha.kubernetes.io/identity/project/id": ["423d41d3a02f4b77b4a9bbfbc3a1b3c6"],
            "alpha.kubernetes.io/identity/project/name": ["demo"],
            "alpha.kubernetes.io/identity/roles": ["load-balancer_member","member"]
        }
      }
    }'

    Response:

    {
        "apiVersion": "authorization.k8s.io/v1",
        "kind": "SubjectAccessReview",
        "status": {
            "allowed": true
        }
    }

    However, pod creation should fail:

    $ kubectl run curl --rm -it --restart=Never --image curlimages/curl -- \
      -k -XPOST https://k8s-keystone-auth-service.kube-system:8443/webhook -d '
    {
      "apiVersion": "authorization.k8s.io/v1",
      "kind": "SubjectAccessReview",
      "spec": {
        "resourceAttributes": {
          "namespace": "default",
          "verb": "create",
          "groups": "",
          "resource": "pods",
          "name": "pod1"
        },
        "user": "demo",
        "groups": ["423d41d3a02f4b77b4a9bbfbc3a1b3c6"],
        "extra": {
            "alpha.kubernetes.io/identity/project/id": ["423d41d3a02f4b77b4a9bbfbc3a1b3c6"],
            "alpha.kubernetes.io/identity/project/name": ["demo"],
            "alpha.kubernetes.io/identity/roles": ["load-balancer_member","member"]
        }
      }
    }'

    Response:

    {
        "apiVersion": "authorization.k8s.io/v1",
        "kind": "SubjectAccessReview",
        "status": {
            "allowed": false
        }
    }

Now that the k8s-keystone-auth service works as expected, we can go ahead and configure the Kubernetes API server to use the k8s-keystone-auth service as a webhook for both authentication and authorization. In fact, the k8s-keystone-auth service can be used for authentication only, authorization only, or both, depending on your requirements. In this example, 10.109.16.219 is the cluster IP of the k8s-keystone-auth service.

Configuration on K8S master for authentication and/or authorization

  • Create the webhook config file.

    keystone_auth_service_addr=10.109.16.219
    mkdir /etc/kubernetes/webhooks
    cat <<EOF > /etc/kubernetes/webhooks/webhookconfig.yaml
    ---
    apiVersion: v1
    kind: Config
    preferences: {}
    clusters:
      - cluster:
          insecure-skip-tls-verify: true
          server: https://${keystone_auth_service_addr}:8443/webhook
        name: webhook
    users:
      - name: webhook
    contexts:
      - context:
          cluster: webhook
          user: webhook
        name: webhook
    current-context: webhook
    EOF
  • Modify kube-apiserver config file to use the webhook service for authentication and/or authorization.

    Authentication:

    --authentication-token-webhook-config-file=/etc/kubernetes/webhooks/webhookconfig.yaml
    

    Authorization (optional):

    --authorization-webhook-config-file=/etc/kubernetes/webhooks/webhookconfig.yaml
    --authorization-mode=Node,RBAC,Webhook
    

    Also mount the new webhooks directory:

    containers:
    ...
      volumeMounts:
      ...
      - mountPath: /etc/kubernetes/webhooks
        name: webhooks
        readOnly: true
    volumes:
    ...
    - hostPath:
        path: /etc/kubernetes/webhooks
        type: DirectoryOrCreate
      name: webhooks
    
  • Wait for the API server to restart successfully, until you can see that all the pods in the kube-system namespace are running.

Authorization policy definition(version 2)

The version 2 definition can be used together with version 1, but takes precedence over version 1 if both are defined. The version 1 definition is still supported but may be deprecated in the future.

The authorization policy definition is based on a whitelist: the operation is allowed if ANY rule defined in the permissions is satisfied.

  • "users" defines which projects the OpenStack users belong to and which roles they have. You can define multiple projects or roles; if the project of the target user is included in the projects, the permissions are checked.
  • "resource_permissions" is a map whose keys define namespaces and resources and whose values define the allowed operations. / is used as the separator between namespace and resource. ! and * are supported for both namespaces and resources, see the examples below.
  • "nonresource_permissions" is a map whose keys define non-resource endpoints such as /healthz and whose values define the allowed operations.
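Putting these fields together, a version 2 policy equivalent to the version 1 demo policy earlier in this guide could look like the following sketch (illustrative only; refer to examples/webhook/keystone-policy-configmap.yaml for the authoritative format):

```json
[
  {
    "users": {
      "projects": ["demo"],
      "roles": ["member"]
    },
    "resource_permissions": {
      "default/pods": ["get", "list", "watch"]
    }
  }
]
```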

Some examples:

  • Any operation is allowed on any resource in any namespace.

    "resource_permissions": {
      "*/*": ["*"]
    }
  • Only "get" and "list" are allowed for Pods in the "default" namespace.

    "resource_permissions": {
      "default/pods": ["get", "list"]
    }
  • "create" is allowed for any resource except Secrets and ClusterRoles in the "default" namespace.

    "resource_permissions": {
      "default/!['secrets', 'clusterroles']": ["create"]
    }
  • Any operation is allowed for any resource in any namespace except "kube-system".

    "resource_permissions": {
      "!kube-system/*": ["*"]
    }
  • Any operation is allowed for any resource except Secrets and ClusterRoles in any namespace except "kube-system".

    "resource_permissions": {
      "!kube-system/!['secrets', 'clusterroles']": ["*"]
    }
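The `!` and `*` key semantics sketched in the examples above can be illustrated with a small Python sketch. This is a rough, illustrative interpretation of the matching rules, not the actual k8s-keystone-auth implementation; all function names here are made up for the example:

```python
# Illustrative interpretation of version 2 permission-key matching.
def part_matches(pattern, value):
    """Match one side of a "namespace/resource" permission key."""
    if pattern == "*":
        return True  # wildcard matches anything
    if pattern.startswith("!"):
        body = pattern[1:]
        if body.startswith("[") and body.endswith("]"):
            # Negated list, e.g. !['secrets', 'clusterroles']
            excluded = [x.strip().strip("'\"") for x in body[1:-1].split(",")]
            return value not in excluded
        return value != body  # negated single name, e.g. !kube-system
    return pattern == value

def key_matches(key, namespace, resource):
    """Check a "namespace/resource" key against a request's attributes."""
    ns_pat, res_pat = key.split("/", 1)
    return part_matches(ns_pat, namespace) and part_matches(res_pat, resource)

print(key_matches("*/*", "default", "pods"))                                      # True
print(key_matches("!kube-system/*", "kube-system", "pods"))                       # False
print(key_matches("default/!['secrets', 'clusterroles']", "default", "secrets"))  # False
print(key_matches("default/!['secrets', 'clusterroles']", "default", "pods"))     # True
```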

Client(kubectl) configuration

If the k8s-keystone-auth service is configured for both authentication and authorization, make sure the OpenStack user in the following steps has the member role in Keystone as defined above, otherwise the pod listing operation will fail.

The recommended way of client authentication is to use exec mode with the client-keystone-auth binary.

To configure the client do the following:

  • Download the client-keystone-auth binary from the cloud-provider-openstack release page. In this example, we download the latest version.

    repo=kubernetes/cloud-provider-openstack
    version=$(curl --silent "https://api.github.com/repos/${repo}/releases/latest" | grep '"tag_name":' | awk -F '"' '{print $4}')
    curl -L https://github.com/kubernetes/cloud-provider-openstack/releases/download/${version}/client-keystone-auth -o ~/client-keystone-auth
    sudo chmod u+x ~/client-keystone-auth
    
  • Run kubectl config set-credentials openstackuser. This command creates the following entry in the ~/.kube/config file:

    - name: openstackuser
      user: {}
    
  • Configure kubectl to use the client-keystone-auth binary for the user openstackuser. Replace cluster_name with your own cluster name.

    $ sed -i '/user: {}/ d' ~/.kube/config
    $ cat <<EOF >> ~/.kube/config
      user:
        exec:
          command: "/home/ubuntu/client-keystone-auth"
          apiVersion: "client.authentication.k8s.io/v1beta1"
    EOF
    $ kubectl config set-context --cluster=$cluster_name --user=openstackuser openstackuser@$cluster_name
    

Now, your kubeconfig file should look like below:

apiVersion: v1
clusters:
- cluster:
    certificate-authority: /tmp/certs/ca.pem
    server: https://172.24.4.6:6443
  name: mycluster
contexts:
- context:
    cluster: mycluster
    user: admin
  name: default
- context:
    cluster: mycluster
    user: openstackuser
  name: openstackuser@mycluster
current-context: openstackuser@mycluster
kind: Config
preferences: {}
users:
- name: admin
  user:
    client-certificate: /tmp/certs/cert.pem
    client-key: /tmp/certs/key.pem
- name: openstackuser
  user:
    exec:
      command: "/home/ubuntu/client-keystone-auth"
      apiVersion: "client.authentication.k8s.io/v1beta1"

In the above kubeconfig, the cluster name is mycluster and the kube API server address is https://172.24.4.6:6443. The kubeconfig file contains two contexts: one for normal certificate auth, and one for Keystone auth.

Next, you have several ways to specify additional auth parameters:

  1. Source your environment variables (recommended). Make sure you include OS_DOMAIN_NAME, otherwise the client will fall back to Keystone V2, which is not supported by the webhook.

    export OS_AUTH_URL="https://keystone.example.com:5000/v3"
    export OS_DOMAIN_NAME="default"
    export OS_PASSWORD="mysecret"
    export OS_USERNAME="username"
    export OS_PROJECT_NAME="demo"
    
  2. Specify auth parameters in the ~/.kube/config file. For more information read the client keystone auth configuration doc and the credential plugins documentation.

  3. Use the interactive mode. If auth parameters are not specified initially, either as environment variables or in the ~/.kube/config file, the user will be prompted to enter them from the keyboard during the interactive session.

To test that everything works as expected try:

kubectl --context openstackuser@mycluster get pods

If you are using this webhook just for authentication, you should get an authorization error:

Error from server (Forbidden): pods is forbidden: User "username" cannot list pods in the namespace "default"

You need to configure RBAC role bindings to be authorized to do something, for example:

kubectl create rolebinding username-view --clusterrole view --user username --namespace default

Now try again and you should see the pods.