Secure distribution of secrets is a problem affecting many who run automatic provisioning systems, often to the point that the (re-)distribution of secrets to stages and/or environments becomes the major obstacle to (not even necessarily rapid) deployment.
For fear of an in(de)finite rant-loop, I do not wish to delve into the security impacts resulting directly therefrom - think of secrets compromised but not revoked because “there is no room for twenty-one story points in the next three sprints” - but instead suggest a methodical and structured way out.
With examples of how to consume PKI certificates from Hashicorp’s Vault, both generically and by leveraging Kubernetes primitives, I hope to introduce the broader principles more rigorously than the many blog posts which focus on usage in a specific scenario.
Provisioning Systems Automatically
When providing and/or provisioning machines and/or applications with an arbitrary automaton, the automaton applies some logic (viz., the how) to a given configuration (viz., the what).
Concretely, consider an arbitrary web server:
There, the logic comprises the syntactical and semantic structure of configuration files, how to restart or reload the server and how to functionally ensure the availability of the service.
Thinking along that pattern, the logic may be expressed as a template file.
The parameters to render the configuration file with may then be expressed in any structured data representation format.
Both logic (templates, renderers, etc.) and parameters are usually checked into a version control system for source code, a repository.
Problem: Secret Parameters
Extending the example from above to a more lifelike situation, consider the case when the traffic from that server is encrypted using HTTPS.
While the location in terms of file names is not exactly secret, the files’ content, the certificate itself or the key, certainly is.
The same holds true for ssh-keys or login credentials to databases and/or machines to name but a few.
It stands to reason that such secrets still may be parameters, but cannot be stored alongside publicly accessible non-secret parameters (well, they can, but they should not).
Storage and Distribution of Secrets
What sets secrets apart from public information is that access to secrets should be prohibited by default and only be granted to a tightly controlled group of authorized personnel on a need-to-know basis.
When everyone has access to secrets, they are not secret, but public.
We can easily agree that, even if it solves the access control problem, memorizing secrets and not telling anyone is not feasible for a large and possibly rapidly changing number of secrets.
The number of people with photographic memories is finite.
So, a mechanism needs to be established which stores information protected against loss, against technical errors, and against accidental or deliberate disclosure.
This mechanism needs to be accessible and needs to allow for fine-grained access and modification control.
Ideally, it should be accessible via network to allow for automated secret consumption akin to access to “normal”, non-secret parameters.
In the absence of such a mechanism, secrets need to be transported and installed by hand, thus bypassing automation.
In practice, secrets are often distributed manually, either directly or by proxy, viz., manually feeding them into a provisioning automaton.
Storage in that case is left to the discretion of the authorized developer and most often, the local workstation or notebook will end up the dedicated storage location.
Inadvertently, secrets will spill over to locations not intended for storage of non-public information, such as backup systems, or will be checked into repositories for source code or binary artifacts by mistake.
In that state, firstly, the reasons why automation was introduced in the first place stand in contradiction to the realities of human labour: slow, difficult to reproduce, near impossible to audit, etc.
Secondly, with such realities of secret storage, the so-called secrets will not be secret very long and could then well be stored permissively alongside the code and non-secret parameters in the source code repository, conveniently saving all that hassle on the way.
Dedicated Secret Store and Automatic Secret Distribution
To be able to automate machines or applications in the presence of secrets, these need to be checked into a secret store by authorized personnel.
Access to that store needs to be restrictive, yet available to automation.
Ideally, secrets should be secret, viz., not known to anybody.
Then, secrets should not be touched by humans and not be stored on machines less secure than the dedicated secret store.
They should be short-lived, as over time secrets could end up in long-lived backups or accidentally logged.
A dedicated system to manage secrets in large and possibly dynamic environments exists with Hashicorp’s Vault (vaultproject.io).
While the “Getting Started” tour Hashicorp offers (Vault: Getting Started) is truly excellent, I would like to take a different angle, which I believe is better suited to learning by experimentation.
A very common scenario is to run vault with Hashicorp’s distributed key-value store Consul (consul.io) as its storage backend.
A garden-variety docker-compose setup may be easiest for a start; I suggest using cault (github.com/tolitius/cault).
Installing and starting vault then boils down to a simple docker-compose up -d.
Depending on the local setup, it may be advisable to change local-port redirection to prevent port collisions with local instances - I use consul for service discovery in my local LAN.
I also advise enforcing pulls of the latest releases and suggest noting where local files will be available in the container.
Because vault then runs inside a container, it may be handy to set a shell alias:
alias vault='docker exec -it cault_vault_1 vault'
Increasingly, applications communicating without TLS are of limited value, so I suggest practicing and demonstrating with HTTPS endpoints from the start, for instance using Let’s Encrypt certificates.
When testing locally and without having certificates with IP Subject Alternative Names, providing the necessary name resolution can easily be accomplished using /etc/hosts.
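For instance, assuming the certificate was issued for a name like vault.example.com (a placeholder here), a single line suffices:

```
127.0.0.1   vault.example.com
```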
The corresponding configuration of vault is easy to supply.
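A minimal sketch of such a configuration, assuming certificate and key were placed where the cault compose setup mounts local files (paths and file names are illustrative):

```hcl
storage "consul" {
  address = "consul:8500"
  path    = "vault/"
}

listener "tcp" {
  address       = "0.0.0.0:8200"
  tls_cert_file = "/vault/config/ssl/vault.example.com.pem"
  tls_key_file  = "/vault/config/ssl/vault.example.com-key.pem"
}
```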
Vault needs to be initialized before operation.
To fulfill the requirement of protecting secret data against loss, data is encrypted before being written to any kind of persistent storage.
Keys are generated upon initialization and split using Shamir’s Secret Sharing method so that unlocking the vault, and thus access to the secrets, requires a quorum of authorized personnel. (Remember hunting Red October: “The reason for having two missile keys is so that no one man may fire the missiles.”)
Because the vault’s operator, the person actually performing the initialization, would otherwise know the key material, the keys may be passed through the operator transparently, asymmetrically encrypted using PGP.
The root token should be used for further vault operations and the unseal keys securely stored.
Apart from shell operations, vault may be initialized from the web UI, which, for the sake of reproducibility, I advise against.
The vault may then be unsealed by issuing vault operator unseal, and a session may be established by logging in with the root token.
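Sketched as a transcript against a running vault (key shares, threshold and key file names are chosen for illustration; the PGP variant passes one public key per key holder):

```shell
# initialize, encrypting each unseal key to one key holder's PGP key
vault operator init -key-shares=3 -key-threshold=2 \
    -pgp-keys="alice.asc,bob.asc,carol.asc"

# any two key holders decrypt and submit their share
vault operator unseal    # prompts for one unseal key; repeat until unsealed

# establish a session with the root token
vault login <root-token>
```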
Vault is structured around backends for secrets and authentication.
Secret backends provide or generate secrets for different consumers and vary in their degree of specialization.
For instance, one backend is responsible for simple key-value pairs and another backend generates and injects login credentials into database management systems.
Yet another backend generates short lived certificates on a preconfigured certificate authority, which consumers may be set to trust.
Secrets are released to consumers on authentication and authorization only.
Authentication backends provide different methods for different usage patterns.
These may consist of “classical” username-password credentials or web tokens, but may also cater to the needs of specific application orchestration frameworks.
For instance, it is possible to register the authentication engine of a Kubernetes cluster at a vault instance, which may then provide authentication credentials to applications orchestrated by the cluster as consumers.
Using the PKI Secret Engine to Generate Certificates
An example relatively straightforward to demonstrate and grasp in practice is Vault’s certificate engine (Vault: PKI).
Vault may be set up as a root or intermediate certificate authority and will then generate certificates against this CA, so that authorized members of a communication system may cryptographically verify and secure their interactions based on mutually shared trust.
Enabling and configuring PKI engine
For vault, engines must be enabled and pre-configured.
Separate instances of the same backend type may be created by supplying the -path=&lt;name&gt; parameter, which allows separating instances by secret configuration and/or usage policies.
At least in relatively volatile container environments, where applications scale horizontally on changes in load, certificates are short-lived.
When exiting, applications will revoke their certificates, and when certificates are short-lived, CRLs may be pruned periodically and thus kept short.
Also, left-overs from failed revocations at application shutdown may then time out instead of polluting validity checks for eternity.
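Against a running vault, enabling and pre-configuring the engine might look like the following sketch; the TTLs and common name are assumptions for illustration:

```shell
# enable a PKI engine instance at path pki/
vault secrets enable -path=pki pki

# cap certificate lifetimes issued by this engine instance
vault secrets tune -max-lease-ttl=8760h pki

# generate the root CA certificate and key inside vault
vault write pki/root/generate/internal \
    common_name="example.com" ttl=8760h
```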
Being a modern application for web usage, vault can emit its output as JSON, allowing easy filtering with tools such as jq to generate a human-readable representation of the certificate just generated.
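For instance, assuming the JSON output of such a call was saved to a file cert.json, a human-readable dump might be produced like this:

```shell
jq -r '.data.certificate' cert.json \
  | openssl x509 -noout -text
```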
Configuring parameters for issuer endpoints
Certificates should include the endpoints where certificates are issued and where to check whether certificates presented by an untrusted party have been revoked at the CA, i.e., are invalid.
Secret “roles” (not to be confused with authentication roles) represent configuration and match an issuer path, so that for instance pki/roles/hb22 corresponds to pki/issue/hb22.
They govern what certificate parameters the PKI-backend should enforce when called via the corresponding issuer endpoint.
Parameters set at the endpoint of the engine may be altered, but not exceeded, so that the maximum TTL at the role configuration may not exceed the maximum TTL at the secret backend.
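A sketch of both steps, with URLs and role parameters as illustrative assumptions:

```shell
# advertise issuing and CRL endpoints inside issued certificates
vault write pki/config/urls \
    issuing_certificates="https://vault.example.com:8200/v1/pki/ca" \
    crl_distribution_points="https://vault.example.com:8200/v1/pki/crl"

# a role governing what pki/issue/hb22 may issue
vault write pki/roles/hb22 \
    allowed_domains="example.com" allow_subdomains=true \
    max_ttl="72h"
```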
Configuring Authentication for Consumers of Secrets
Authentication endpoints need to be enabled just as secret backends are (Vault: Authentication).
For instance, the approle authenticator is well suited for applications, i.e., non-interactive secret consumption by e.g. an application and/or machine provisioning automaton.
Policies may be in place which set permissions governing how to interact with the authentication endpoint.
Authentication roles may be bound to policies and other restrictions, such as CIDR addresses.
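As a sketch, a policy granting issuance on the role’s endpoint and an approle bound to it; the policy name and the CIDR are illustrative:

```shell
# enable the approle authentication backend
vault auth enable approle

# policy permitting certificate issuance via pki/issue/hb22
vault policy write hb22-issue - <<'EOF'
path "pki/issue/hb22" {
  capabilities = ["create", "update"]
}
EOF

# authentication role bound to that policy and a CIDR restriction
vault write auth/approle/role/hb22 \
    token_policies="hb22-issue" \
    secret_id_bound_cidrs="10.0.0.0/24"
```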
“Logging in” and obtaining certificates
Then, the role can be used to collect a web token which will grant permissions on the PKI secret endpoint.
Practically, the role-id and a corresponding secret-id are gathered first and then used to obtain a web token representing the login session:
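Sketched with curl and jq against a running vault (the vault address is a placeholder):

```shell
# gather role-id and a fresh secret-id
vault read -format=json auth/approle/role/hb22/role-id \
  | jq -r '.data.role_id' > role-id
vault write -format=json -f auth/approle/role/hb22/secret-id \
  | jq -r '.data.secret_id' > secret-id

# exchange both for a client token, saved as login.json
curl --request POST \
    --data "{\"role_id\": \"$(cat role-id)\", \"secret_id\": \"$(cat secret-id)\"}" \
    https://vault.example.com:8200/v1/auth/approle/login \
  | tee login.json
```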
The client token then maps to the permissions present in the auth dictionary and can be used to read a secret from the issuer endpoint pki/issue/hb22.
$ curl \
    --header "X-Vault-Token: $(jq -r '.auth.client_token' login.json)" \
    --request POST \
    --data '{"common_name": "<fqdn allowed by the role>"}' \
    https://<vault-address>:8200/v1/pki/issue/hb22 \
  | jq '.' \
  | tee cert.json
Again, the certificate may be examined and extracted for later use by calling jq.
Authenticating with Kubernetes Service Account Tokens
Vault backends may be very general, or they may be specifically tailored to the needs of one application.
An example for such a backend is the Kubernetes authentication backend.
Here, “service accounts”, which represent “technical users” in Kubernetes, may be granted the permissions of the system:auth-delegator role.
Then, applications running with this technical user’s permissions may consume web tokens for authentication from Kubernetes, which will be accepted by the corresponding Vault authentication endpoint.
Configuring Kubernetes and the Kubernetes Authentication Backend
Having created a service account, a backend can be configured with the account’s token and the Kubernetes certificate authority, yielding a backend which, by means of the policy passed, can grant certificates to applications running under this service account.
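The configuration might be sketched as follows; the role name, service account name, namespace, file names and the policy (here assumed to be one named hb22-issue granting issuance) are illustrative assumptions:

```shell
# enable the Kubernetes authentication backend
vault auth enable kubernetes

# configure it with the service account's token and the cluster CA
vault write auth/kubernetes/config \
    token_reviewer_jwt="$(cat reviewer-token.jwt)" \
    kubernetes_host="https://kubernetes.default.svc:443" \
    kubernetes_ca_cert=@k8s-ca.crt

# bind service account and namespace to the issuing policy
vault write auth/kubernetes/role/hb22 \
    bound_service_account_names="vault-auth" \
    bound_service_account_namespaces="default" \
    token_policies="hb22-issue"
```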
Using Kubernetes Service Account Tokens to Obtain Certificates
Assuming a base image with curl and jq installed, this may be tested manually in the context of a trivial pod, first extracting the service account’s token, always present at /var/run/secrets/kubernetes.io/serviceaccount/token, and then passing that token to the Kubernetes authentication endpoint to obtain a login.
Passing the client token contained therein in the X-Vault-Token header to the issuer endpoint will grant the permissions listed in the token’s .auth.policies and .auth.token_policies sections and will obtain a certificate signed by the Vault PKI CA.
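Both steps sketched from inside such a pod; the vault address, role name and common name are placeholders:

```shell
# read the service account's token ...
JWT="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)"

# ... and exchange it for a vault client token
curl --request POST \
    --data "{\"jwt\": \"${JWT}\", \"role\": \"hb22\"}" \
    https://vault.example.com:8200/v1/auth/kubernetes/login \
  | tee login.json

# then obtain a certificate as before
curl --header "X-Vault-Token: $(jq -r '.auth.client_token' login.json)" \
    --request POST \
    --data '{"common_name": "pod.example.com"}' \
    https://vault.example.com:8200/v1/pki/issue/hb22 \
  | tee cert.json
```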
The certificate can again be examined by calling jq and openssl as before.
Automating Certificate Distribution
The idea is to have two simple scripts present in all containers run in Kubernetes, which obtain secrets and unmarshal the JSON payload returned by vault.
The idea is as simple as the manual demonstration:
Pass the service account’s token to the login endpoint, obtain a client token and then obtain an X509 certificate.
Store the certificate at a predefined location and then run the application which requires the certificate.
To get the desired sequence and to pass the secret to the consumer, we use two Kubernetes primitives: the initContainer, which runs before the main containers of a pod are started, and the emptyDir volume, which represents a directory shared between a pod’s containers.
Here, the mountpoint /var/secrets is shared by both containers, and the init container will place a freshly generated certificate therein.
The consuming application will then just need to unmarshal the certificate and key from the JSON payload and may start running.
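A minimal, self-contained sketch of that unmarshalling step; the field names follow vault’s pki/issue response format, and the inlined sample payload stands in for a real cert.json written by the init container:

```shell
#!/bin/sh
set -eu

# stand-in for the response the init container would write to /var/secrets
SECRETS_DIR="$(mktemp -d)"
cat > "$SECRETS_DIR/cert.json" <<'EOF'
{"data":{"certificate":"CERT-PEM","private_key":"KEY-PEM","issuing_ca":"CA-PEM"}}
EOF

# unmarshal certificate, key and CA into separate PEM files
jq -r '.data.certificate' "$SECRETS_DIR/cert.json" > "$SECRETS_DIR/tls.crt"
jq -r '.data.private_key' "$SECRETS_DIR/cert.json" > "$SECRETS_DIR/tls.key"
jq -r '.data.issuing_ca'  "$SECRETS_DIR/cert.json" > "$SECRETS_DIR/ca.crt"

# private keys should not be world-readable
chmod 0600 "$SECRETS_DIR/tls.key"
```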
To get that orchestration, it suffices to amend the pod specs.
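A sketch of such an amended pod spec; the image names and the getter script are assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  serviceAccountName: vault-auth
  volumes:
    - name: secrets
      emptyDir:
        medium: Memory                # keep secrets off disk
  initContainers:
    - name: cert-getter
      image: getter:latest            # assumed image with curl and jq
      command: ["/usr/local/bin/get-cert.sh"]   # login + issue, writes cert.json
      volumeMounts:
        - name: secrets
          mountPath: /var/secrets
  containers:
    - name: app
      image: app:latest
      volumeMounts:
        - name: secrets
          mountPath: /var/secrets
```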
Because certificates are short-lived and may be valid shorter than the application’s instance will live, a certificate needs to be refreshed regularly.
In many cases, this may be as trivial as running the certificate getter and unmarshalling code in a sleep-loop, SIGHUP-ing the application on new certificates, although sending kill to a secret consumer in the fashion common on physical hosts requires the alpha-state PodShareProcessNamespace feature to be enabled (Kubernetes > v1.10: Share Process Namespace between Containers in a Pod).
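The sleep-loop variant might be sketched like this; get-cert.sh, the pidfile location and the interval are assumptions:

```shell
# refresh the certificate well within its TTL and nudge the consumer
while true; do
    /usr/local/bin/get-cert.sh            # re-login and re-issue into /var/secrets
    kill -HUP "$(cat /var/run/app.pid)"   # ask the application to reload
    sleep 3600                            # chosen well below the certificate TTL
done
```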
A pattern which is, in my opinion, more elegant is to use a so-called sidecar container to continuously watch login leases and certificates and refresh either when necessary.
On the consuming pod, it then suffices to set an inotify watch on the JSON secrets package and unmarshal and SIGHUP there.
Regardless of the secret consumed, certificate, database credentials or others, having secrets provisioned with human intervention thus causes the first leak in the handling process. Having secrets handled exclusively by machines and meticulously logging each interaction greatly enhances the overall security of the system protected.
Concepts, mechanisms and technologies behind such automated handling are straightforward and, reduced to the necessary, easy to apply.
So, not primarily speed, but reproducibility and auditability may be reaped by eliminating human intervention from the deployment process, even when the management of secrets is considered.
Managing Secrets in Automated Environments - September 18, 2018 - Christopher J. Ruwe