Published
June 22, 2022

Secured Access to Kubernetes from Anywhere with Zero Trust

Tenry Fu
CEO & Co-Founder

Controlling access, authentication and authorization for K8s clusters is vitally important, but it remains a daunting task despite the growing popularity of Kubernetes for cloud native workloads.

In this article, we’ll break down the problem, showing how you can achieve the ideal of zero-trust security in your Kubernetes clusters through applying fundamental best practices and the addition of some popular open-source tools.

Step 1: Zero-Trust Authentication with Tokens and MFA

In a zero-trust model, all user and service accounts tied to K8s clusters must be authenticated before executing an API call — there’s no implicit trust that the account has a right to access.

Out of the box, Kubernetes supports multiple authentication strategies:

  • Credential-based authentication (HTTP basic auth, LDAP, etc.)
  • Certificate-based authentication (Client certificates)
  • Token-based authentication (Bearer/OAuth2 tokens, OIDC tokens, Webhook tokens, etc.)

Both credential-based and certificate-based authentication are simple to implement, but share one major weakness: they require the user to put their credentials or client certificates in the kubeconfig file, which effectively shifts the trust boundary and security attack surface to the kubeconfig file itself. If User-A’s kubeconfig file is leaked or accessible by a malicious User-B, User-B can fully access User-A’s K8s cluster and resources. Many infamous data breaches have been caused by misconfigured permissions on config or certificate files.

Another drawback of credential-based and certificate-based authentication is that it may be hard to add multifactor authentication (MFA), a common security best practice.

Token-based authentication is different. It involves talking to an external Identity Provider (IDP) such as Okta, Azure AD or PingIdentity for authentication. Once authenticated, the client (in this case kubectl) receives an access token and, in some cases, a refresh token as well. These tokens have a configurable expiration period (typically less than one hour) and can be refreshed before they expire. Importantly, most IDPs have comprehensive MFA support without the client needing to do any additional work. Another advantage of using an external IDP for authentication and authorization is that all access logs can easily be collected on the IDP side.
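For example, on clusters where you control the control plane, OIDC token authentication is switched on with kube-apiserver flags. Here is a minimal sketch, assuming a kubeadm-style static pod manifest; the issuer URL and client ID are hypothetical placeholders:

```yaml
# Excerpt from a kube-apiserver static pod manifest
# (e.g., /etc/kubernetes/manifests/kube-apiserver.yaml on kubeadm clusters).
# Issuer URL and client ID below are hypothetical placeholders.
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    - --oidc-issuer-url=https://idp.example.com   # the IDP's OIDC discovery endpoint
    - --oidc-client-id=kubernetes                 # client ID registered with the IDP
    - --oidc-username-claim=email                 # token claim used as the K8s username
    - --oidc-groups-claim=groups                  # token claim carrying group membership
```

Managed Kubernetes services typically expose equivalent OIDC settings through their own configuration surfaces rather than raw API server flags.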

The downside? Token-based authentication is more arduous to set up initially than credential-based or certificate-based authentication, entailing the installation of client-side plugins and some kubeconfig configuration changes.
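To give a flavor of those kubeconfig changes, here is a sketch of a user entry that delegates token acquisition to a client-go credential plugin (in this case kubelogin, which we return to later in this article; the issuer URL and client ID are hypothetical):

```yaml
# kubeconfig user entry that fetches an OIDC token on demand via an exec plugin.
users:
- name: oidc-user
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: kubectl
      args:
      - oidc-login      # the kubelogin kubectl plugin
      - get-token
      - --oidc-issuer-url=https://idp.example.com
      - --oidc-client-id=kubernetes
```

On first use, the plugin opens a browser for the IDP login (including any MFA challenge), then caches and refreshes tokens transparently.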

[Diagram: Secured access to Kubernetes from anywhere with zero trust]

Recommendation: Adopting token-based authentication takes more work to implement and creates a dependency on an external IDP, but it’s worth the effort. It’s the only way to bring true zero-trust security to Kubernetes.

Step 2: Zero-Trust Authorization with Group-Based Permissions

Authentication gives a user access to the K8s cluster; authorization governs what each user is allowed to do once they have access.

In a zero-trust security model, an authenticated user is only authorized if they have the necessary permissions to perform the requested operation. For each request, this model requires specifying the username, the action, and the objects in the Kubernetes cluster affected by that action.
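Kubernetes captures exactly this triple (user, action, object) in its SubjectAccessReview API, which an administrator can also use to test whether a given request would be authorized. A sketch with hypothetical user and group names:

```yaml
# SubjectAccessReview: "would this user be allowed to perform this action?"
apiVersion: authorization.k8s.io/v1
kind: SubjectAccessReview
spec:
  user: jane@example.com        # the authenticated username
  groups: ["dev-team"]          # group membership resolved at authentication time
  resourceAttributes:
    verb: get                   # the action
    resource: pods              # the object type affected
    namespace: default
```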

Kubernetes supports multiple methods for authorization:

  • Attribute-based access control (ABAC), which authorizes access based on a combination of user and resource attributes. This is sometimes also referred to as an access control list (ACL).
  • Role-based access control (RBAC), which authorizes access based on the user’s role in the organization, with the role representing a collection of permissions a user is allowed to perform.

These two authorization models are not mutually exclusive. Many enterprises actually use RBAC to control generic permissions based on roles and augment those controls with additional ACLs on specific resource objects if necessary.

Regardless of which authorization model you adopt, a common best practice is to base access control on groups instead of specific users. Roles and ACLs are assigned to one or more user groups; users, in turn, are allocated to one or more groups; at runtime, authorizations are resolved to the effective roles and ACLs. This way, user and group membership can be managed separately from the roles and ACL policies, making group membership the single source of truth for authorization and access control. It works particularly well with token-based authentication, as external IDPs natively support user-group membership management.
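To make this concrete, here is a minimal RBAC sketch that grants read-only pod access to an IDP-managed group (the role and group names are hypothetical):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-reader
rules:
- apiGroups: [""]                # "" is the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: pod-reader-dev-team
subjects:
- kind: Group
  name: dev-team                 # must match a value in the token's groups claim
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Adding or removing a user from dev-team on the IDP side immediately changes what they can do in the cluster, with no RBAC objects to touch.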

[Diagram: Zero-trust authorization with group-based permissions]

Recommendation: Choose ABAC, RBAC, or a combination of both — but stick with a group-based access control policy with an external IDP as the single source of truth for both authentication and authorization.

Step 3: Secured Kubernetes API Server Access with a TCP Reverse Proxy

You may have implemented zero-trust authentication and authorization, but you still need to secure the Kubernetes API server access from the network perspective.

In certain setups, such as when K8s is running in a virtual private cloud, on-premises behind a firewall, or at an edge location behind a firewall and NAT (network address translation) device, it’s almost impossible to expose Kubernetes access directly to the public internet. But those setups don’t cover the majority of K8s deployments: today, a huge number of K8s API servers are publicly accessible, as you may have heard in recent news coverage.

Theoretically, you could deliberately leave access open to anyone in this way, since Kubernetes itself is already secured with authentication and authorization. But few K8s admins would feel safe exposing K8s access directly to the public internet, relying only on authentication and authorization to secure it. It’s wise to take further precautions to limit the attack surface and avoid exploits that target potential zero-day vulnerabilities in K8s itself. This has happened before (e.g., CVE-2018-1002105) and will surely happen again.

To enable secure remote access to a Kubernetes cluster from anywhere, you can set up a publicly accessible TCP reverse proxy server.

This reverse proxy tunnel is established by having an agent on the K8s cluster initiate a TLS connection to the reverse proxy server, binding the local API server’s port to a remote vhost URL on the reverse proxy server. Since the TLS connection is initiated by the agent behind the firewall, it only requires an outgoing internet connection; no firewall rules need to be modified to open any port. If the TCP reverse proxy uses a well-known port such as 443 (the HTTPS port) for its endpoints, it typically has no problem traversing the firewall, just like a browser accessing an HTTPS website. The K8s API server can now be accessed via the reverse proxy server’s vhost URL.
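As a sketch of what the agent side might look like with frp, the open source reverse proxy used in the example later in this article, the configuration below binds the local API server port to a vhost on the proxy. It assumes a recent frp release that accepts YAML configuration; the host names and ports are hypothetical:

```yaml
# frpc.yaml: runs next to the cluster and initiates the outbound tunnel.
serverAddr: proxy.example.com     # public FRPS endpoint
serverPort: 443                   # well-known HTTPS port traverses most firewalls
transport:
  tls:
    enable: true                  # the agent initiates the TLS connection outbound
proxies:
- name: cluster-c1-apiserver
  type: https                     # vhost-style routing on the proxy side
  localIP: 127.0.0.1
  localPort: 6443                 # the local K8s API server port
  customDomains:
  - cluster-c1.proxy.example.com  # the vhost URL remote clients will use
```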

To add an extra layer of security, the reverse proxy server’s vhost URL can be configured to require a cluster-specific client certificate to establish the connection. This client cert can be embedded in the kubeconfig file used to access the K8s endpoint. Note that because we’re using token-based authentication and authorization, the client cert is only used to securely establish the TLS connection to the reverse proxy server’s vhost URL, not for the actual authentication and authorization to the K8s API server. The attack surface is therefore limited to users who hold the client cert or kubeconfig file and can thus connect to the reverse proxy server’s vhost URL. It can be reduced further by requiring the K8s cluster admin to manually turn on the reverse proxy tunnel for a remote troubleshooting session and to close the tunnel once the session ends or after a fixed period of time.

Another way to further reduce the attack surface is to avoid exposing the client cert and kubeconfig at all. Instead, the user can access the K8s cluster via a web-based terminal which enables the user to run kubectl commands — the client cert is never exposed to the user.

Note that when using a TCP reverse proxy server for remote access, access logs can easily be retained on the reverse proxy server instead of being stored on the cluster. Additional audit logging can be enabled on the Kubernetes cluster to provide an audit trail of the actions performed within it.
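A minimal audit policy sketch, recording request metadata (who, what, when) without request bodies, wired into the API server via its --audit-policy-file and --audit-log-path flags:

```yaml
# audit-policy.yaml: log metadata for every request to the API server.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata   # records user, verb, resource and timestamp, not payloads
```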

[Diagram: Secured Kubernetes API server access with a TCP reverse proxy]

Recommendation: Never directly expose the K8s API server to the public internet. If remote troubleshooting is needed, consider the TCP reverse proxy setup with controlled access.

Putting Things Together with kubelogin and frp

Let’s work through an example of these best practices in a little more detail. In the following setup, we use two open source projects, kubelogin and frp, together with a standard external OIDC provider such as Okta or Azure Active Directory, to achieve end-to-end secured access to a K8s cluster from anywhere with zero-trust authentication and authorization.

[Diagram: End-to-end secured access to a K8s cluster with kubelogin and frp]

The data flow diagram shows the following steps:

  1. The FRPC client initiates a remote binding to the FRPS server component. FRPS is hosted in the public cloud and can be accessed via the public internet using its endpoint URL proxy.example.com. To ensure security, the frps.cert (self-signed) is packaged with the FRPC client. The Cluster C1 API server is now accessible via the FRPS vhost URL: cluster-c1.proxy.example.com.
  2. The user runs a command with their local kubectl.
  3. Kubectl in turn passes the command to kubelogin as a client-go credential plugin.
  4. Kubelogin opens the browser to connect to the external IDP for authentication.
  5. The browser sends an authentication request to the OpenID Connect (OIDC)-enabled IDP. The IDP performs authentication and may further enforce MFA.
  6. If authenticated, the IDP returns the authentication response.
  7. The browser passes the authentication response back to kubelogin.
  8. Kubelogin extracts the id_token (and refresh_token) from the authentication response.
  9. Kubectl can now send the request along with the id_token to the FRPS vhost URL, which is listed as the K8s endpoint in the kubeconfig file. Kubeconfig also embeds the cluster-c1-specific client cert to allow it to connect to the FRPS vhost endpoint.
  10. The request is passed along to the FRPC client.
  11. The request is passed on to the kube-apiserver.
  12. If the kube-apiserver does not have the cached IDP cert, it retrieves the cert from the external IDP.
  13. The IDP returns its cert upon request.
  14. The kube-apiserver uses the IDP cert to verify the id_token and its expiration time. If valid, the user is authenticated.
  15. The kube-apiserver checks the user’s roles and permissions against the requested action and resources. Once authorized, it performs the action and returns the result.
  16. The results are passed to the FRPC client.
  17. The FRPC client forwards the results to the FRPS server.
  18. The FRPS server returns the results back to kubectl.
  19. Kubectl displays the result to the user.
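Putting steps 1 and 9 together, the resulting kubeconfig might look like the sketch below. Note the separation of concerns: the client cert only establishes the tunnel connection to the FRPS vhost, while the OIDC id_token obtained by kubelogin authenticates the user to the API server. All names and paths are hypothetical:

```yaml
apiVersion: v1
kind: Config
clusters:
- name: cluster-c1
  cluster:
    server: https://cluster-c1.proxy.example.com   # the FRPS vhost URL, not the API server itself
    certificate-authority: /path/to/frps-ca.crt    # trust anchor for the proxy's self-signed cert
contexts:
- name: cluster-c1
  context:
    cluster: cluster-c1
    user: oidc-user
current-context: cluster-c1
users:
- name: oidc-user
  user:
    client-certificate: /path/to/cluster-c1.crt    # authenticates the tunnel connection only
    client-key: /path/to/cluster-c1.key
    exec:                                          # kubelogin supplies the id_token for K8s authn
      apiVersion: client.authentication.k8s.io/v1beta1
      command: kubectl
      args: ["oidc-login", "get-token", "--oidc-issuer-url=https://idp.example.com", "--oidc-client-id=kubernetes"]
```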

Multicluster Management for Security at Scale

If securing K8s access with zero-trust authentication and authorization is complicated enough for a single K8s cluster, imagine the challenge you’d face when managing hundreds or even thousands of such clusters — especially if all those clusters are in different edge locations, such as one cluster per retail store.

This is where a modern enterprise multicluster management platform is needed. Our Palette platform focuses on solving exactly these challenges of management at scale. It simplifies not only security but every aspect of full-stack cluster lifecycle management and Day 2 operations, such as operating system patching, governance, monitoring and more. To find out more, visit spectrocloud.com.

Tags:
Security
Best Practices
Thought Leadership