Kubernetes Secrets are designed to store sensitive data such as passwords, tokens, and keys. Rather than hard-coding these credentials in a pod definition or baking them directly into a container image, Kubernetes Secrets let you manage credentials separately from application pods, giving you more control over how they are distributed and used and reducing the risk of exposure.
However, if you think putting credentials into a secret store is all it takes to keep them safe, you may be surprised to learn that a default Kubernetes setup writes your secret data into etcd base64-encoded but otherwise in clear text. Anyone who obtains a backup of your etcd data gets your credentials. So what do you do?
Encryption by APIServer
The solution, of course, is encryption. Encryption at rest was introduced in Kubernetes v1.7.0 as an alpha feature and became stable in v1.13.0. Once enabled by pointing the APIServer at a configuration file via the `--encryption-provider-config` flag, the APIServer encrypts data before writing it to etcd. The following example configuration tells the APIServer to encrypt all resources of type ‘secrets’ using the ‘aescbc’ algorithm; other resource types can be encrypted with different algorithms.
```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: c3BlY3Ryb2Nsb3VkIGlzIGF3ZXNvbWUK
```
With this, credentials are encrypted, so even if someone gets to the etcd data they will not be able to get the raw credentials. Problem solved, right?
Unfortunately, the answer is “No”.
The local encryption configuration shown above contains the raw key used to encrypt the data, and that configuration file sits on the APIServer machine’s disk. If someone compromises the APIServer, or simply gets a backup of it, they have the raw key, can decrypt the data in etcd, and, once again, have their hands on your credentials.
Envelope encryption using cloud Key Management Services (KMS)
The solution to the local-encryption issue is envelope encryption with a cloud KMS. Envelope encryption works as follows:
- APIServer generates a new Data Encryption Key (DEK) for each encryption
- APIServer encrypts data with the DEK
- APIServer sends the DEK to the kms-plugin running locally alongside it; the kms-plugin forwards the DEK to the cloud KMS
- Cloud KMS encrypts DEK with a Key Encryption Key (KEK)
- Cloud KMS sends back encrypted DEK to kms-plugin, then to APIServer
- APIServer stores encrypted DEK together with encrypted data inside etcd.
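On the APIServer side, enabling envelope encryption again comes down to the EncryptionConfiguration file, this time with a `kms` provider pointing at the kms-plugin’s local Unix socket. A sketch (the plugin name and socket path are assumptions; they depend on the kms-plugin you deploy):

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - kms:
          name: myKmsPlugin          # must match the name the kms-plugin registers (assumed)
          endpoint: unix:///var/run/kmsplugin/socket.sock  # assumed socket path
          cachesize: 1000            # number of decrypted DEKs cached in memory
          timeout: 3s                # how long APIServer waits for the plugin
      - identity: {}                 # fallback so existing unencrypted data stays readable
```

The `identity` provider at the end lets the APIServer still read resources written before encryption was turned on.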
This way data is encrypted by DEK, which is itself encrypted by KEK. Although etcd has both encrypted data and the DEK, the KEK is never exposed outside of the cloud KMS. So even if both APIServer and etcd were compromised at the same time, your credentials are still safe, right?
Well, yes and no.
Yes, if your APIServer is running in the same cloud as your cloud KMS. In this case, no credential needs to be stored on APIServer’s disk for the kms-plugin to talk to the cloud KMS. Public clouds can use IAMRoles to authenticate kms-plugin to cloud KMS.
However, if your APIServer is running in a different environment, the answer is no. In that case, the APIServer needs to keep credentials locally to authenticate to the KMS, which has the same problem as local encryption. Let’s set this case aside for the moment; we’ll get back to it, I promise (see Solutions for on-prem).
How do you secretly deliver the secrets to APIServer?
In this section, we’ll assume that the APIServer is running in the same cloud as your cloud KMS. As we saw, we can keep credentials safe once the APIServer has them, but the input to the APIServer is still clear text. How do you securely deliver credentials to the APIServer in the first place?
Again, encryption can help. We can encrypt data out of band and deliver the encrypted data to the APIServer. Then a customized controller running within Kubernetes can automatically decrypt and create secrets for placement into APIServer.
The sealed-secrets project is one good example. After you encrypt secrets, you can safely put them inside your application YAMLs. Only the controller running within Kubernetes is able to decrypt them and create Kubernetes secrets from them. The decryption and secret creation are transparent; applications consume the secret just as they would any native Kubernetes secret.
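For illustration, a SealedSecret manifest looks roughly like this (the name and the ciphertext blob are placeholders; the real ciphertext is produced by running the `kubeseal` CLI against your cluster’s public sealing key):

```yaml
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: db-credentials      # placeholder name
  namespace: default
spec:
  encryptedData:
    password: AgB4f2...     # ciphertext, truncated; generated by kubeseal
```

This manifest is safe to commit to source control; the controller decrypts it in-cluster and creates a plain Secret of the same name in the same namespace.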
(Diagram from bitnami)
Another project ‘kubernetes-external-secrets’ uses a custom controller to automatically fetch secrets from an external secret management system.
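An ExternalSecret manifest from that project looks roughly like this (the backend choice and key names are hypothetical and depend on your external store):

```yaml
apiVersion: kubernetes-client.io/v1
kind: ExternalSecret
metadata:
  name: db-credentials      # placeholder name
spec:
  backendType: secretsManager   # e.g. AWS Secrets Manager
  data:
    - key: prod/db/password     # hypothetical key in the external store
      name: password            # key name in the resulting Kubernetes Secret
```

The controller fetches the referenced values from the backend and materializes them as an ordinary Kubernetes Secret.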
(Diagram from godaddy)
All these projects utilize Kubernetes CRD to automatically and securely deliver secrets into APIServer. Together with envelope encryption and using cloud KMS, you’ve got an end to end solution to keep your credentials safe.
Solutions for on-prem
Now we have an end to end solution for a cloud environment using a cloud KMS. What about on-prem setups? Can you use Vault to do envelope encryption for on-prem Kubernetes? Unfortunately, there is no solution out there using Vault as a KMS-plugin to do envelope encryption.
Why is that? Because unlike a cloud environment where you can rely on a cloud IAM system to authenticate master nodes to KMS, on-prem you’ll have to put the Vault credentials on master nodes for the KMS-plugin to authenticate to Vault. And that has the same local encryption issue: once a master node gets compromised, Vault is open to the hacker.
Bypass Kubernetes Secrets
Although an on-prem setup won’t be able to use Vault as a KMS-plugin, Vault can still help secure Kubernetes secrets in a different way.
As the de facto secret management system in enterprises, Vault usually already has most of the credentials under management within the organization. Can the applications running in Kubernetes directly talk to Vault to get credentials, without relying on Kubernetes Secrets?
Vault natively supports Kubernetes applications using service account tokens to authenticate to Vault. Once authenticated, applications can directly talk to Vault to retrieve credentials. No Kubernetes Secrets involved.
If you still want to use Kubernetes Secrets to make your application more portable, Hashicorp recently announced vault-k8s, which uses a mutating webhook and a sidecar injector to retrieve secrets from Vault and mount them into application containers. The secret data only exists in memory within the application pod and never goes into etcd, since no Kubernetes Secret object is created.
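With vault-k8s, enabling injection is mostly a matter of pod annotations. A sketch (the Vault role, secret path, and image are assumptions for your Vault setup):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  annotations:
    vault.hashicorp.com/agent-inject: "true"
    vault.hashicorp.com/role: "my-app-role"    # assumed Vault role bound to this service account
    vault.hashicorp.com/agent-inject-secret-db-creds: "secret/data/db-creds"  # assumed Vault path
spec:
  serviceAccountName: my-app   # service account token used to authenticate to Vault
  containers:
    - name: app
      image: my-app:latest     # placeholder image
```

The injected Vault agent authenticates with the pod’s service account token, fetches the secret, and writes it to an in-memory volume inside the pod.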
What enterprise solutions should do for you
As you can see, keeping your Kubernetes secrets secret requires careful integration and configuration. Take envelope encryption as an example: you’ll need to 1) run an additional static pod on the master nodes, 2) inject a configuration file into that static pod, 3) create an EncryptionConfiguration for the APIServer, 4) mount the kms-plugin’s Unix socket into the APIServer container, and 5) change the APIServer command line options to enable the feature. As a bonus, when you upgrade Kubernetes to a newer version, you’ll get to do all this all over again.
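Steps 4 and 5 above, for example, mean editing the kube-apiserver static pod manifest along these lines (all paths are assumptions; they depend on where your kms-plugin exposes its socket and where you place the configuration file):

```yaml
# Excerpt of the kube-apiserver static pod manifest
# (typically /etc/kubernetes/manifests/kube-apiserver.yaml)
spec:
  containers:
    - name: kube-apiserver
      command:
        - kube-apiserver
        - --encryption-provider-config=/etc/kubernetes/enc/encryption-config.yaml
        # ...existing flags unchanged...
      volumeMounts:
        - name: kms-socket           # the kms-plugin's Unix socket directory
          mountPath: /var/run/kmsplugin
        - name: enc-config           # the EncryptionConfiguration file
          mountPath: /etc/kubernetes/enc
          readOnly: true
  volumes:
    - name: kms-socket
      hostPath:
        path: /var/run/kmsplugin     # assumed socket directory on the host
        type: DirectoryOrCreate
    - name: enc-config
      hostPath:
        path: /etc/kubernetes/enc    # assumed config directory on the host
        type: DirectoryOrCreate
```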
We believe that a secure enterprise Kubernetes solution has to provide fully automated integration for all the security solutions mentioned above, both for initial deployment and for continuous upgrades. You should be able to pick and choose the technologies that meet your specific requirements with just a click, and know that your Kubernetes solution guarantees to keep your secrets in Kubernetes, well… a secret.