Vault with AD
Vault
Vault is an identity-based secrets and encryption management system. A secret is anything that you want to tightly control access to, such as API encryption keys, passwords, or certificates. Vault provides encryption services that are gated by authentication and authorization methods. Using Vault's UI, CLI, or HTTP API, secrets and other sensitive data can be securely stored and managed, tightly controlled (restricted), and audited.
First setup for back-end
Copy vault.hcl to /etc/vault.d/ and start the back-end:
- Manually...
- ...or as a systemd service
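A sketch of both options, assuming the config lives at /etc/vault.d/vault.hcl and the packaged vault.service systemd unit is installed:

```bash
# Manually, in the foreground
vault server -config=/etc/vault.d/vault.hcl

# ...or as a systemd service
sudo systemctl enable --now vault
```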
Then initialize the host and NOTE DOWN THE GENERATED KEYS AND ROOT TOKEN:
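With the default key shares, this might look like:

```bash
# Prints the unseal keys and the initial root token - note them down!
vault operator init
```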
Vault starts in a sealed state; to unseal it, run the following command three times, each time with a different unseal key:
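For example:

```bash
vault operator unseal   # prompts for one unseal key; repeat with three different keys
```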
The back-end should now be running and Vault unsealed.
Write policies
Create the hpc-default policy, applied to all logins with the AD group HPC Centre:
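Assuming the policy is saved as hpc-default.hcl, writing it might look like:

```bash
vault policy write hpc-default hpc-default.hcl
```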
Azure AD authentication method setup
NOTE: users with over 200 groups might run into problems; additional setup is needed to accommodate them. Check the relevant chapter in the official Vault docs.
Authenticate as root with the generated root token:
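For example:

```bash
vault login <ROOT-TOKEN>
```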
Enable OIDC auth:
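Typically:

```bash
vault auth enable oidc
```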
Configure OIDC for Azure AD with the default role. Then configure the default role, linking the role and the policy for users with the AD group HPC Centre:
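A minimal sketch, assuming an Azure app registration; the tenant ID, client ID/secret, and Vault address are placeholders, and the exact group-to-policy mapping depends on your identity setup:

```bash
vault write auth/oidc/config \
    oidc_discovery_url="https://login.microsoftonline.com/<TENANT-ID>/v2.0" \
    oidc_client_id="<CLIENT-ID>" \
    oidc_client_secret="<CLIENT-SECRET>" \
    default_role="default"

vault write auth/oidc/role/default \
    user_claim="email" \
    groups_claim="groups" \
    allowed_redirect_uris="https://<VAULT-ADDR>:8250/oidc/callback,https://<VAULT-ADDR>:8200/ui/vault/auth/oidc/oidc/callback" \
    token_policies="hpc-default"
```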
Key/Value version 1 engine setup
The kv secrets engine is used to store arbitrary secrets within the configured physical storage for Vault. Writing to a key in the kv backend will replace the old value; sub-fields are not merged together.
Enable a version 1 kv for personal secrets:
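For example:

```bash
vault secrets enable -version=1 -path=personal kv
```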
According to the hpc-default policy:
- Every HPC Centre group user can store personal secrets that only they can modify and read, at the templated path personal/{{identity.entity.aliases.$AUTH0_ACCESSOR.name}}/*
- Every HPC Centre group user can create write-only secrets under any other user's path personal/* (existing values can not be modified or overwritten).
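A hypothetical excerpt of hpc-default.hcl implementing those two rules (the accessor ID is a placeholder); granting only create on personal/* is what makes those secrets write-only, since existing keys cannot be updated:

```hcl
path "personal/{{identity.entity.aliases.<AUTH-ACCESSOR>.name}}/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}

path "personal/*" {
  capabilities = ["create"]
}
```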
Enable a version 1 kv for general secrets:
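Assuming the mount path general:

```bash
vault secrets enable -version=1 -path=general kv
```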
General secrets should follow a defined structure that is agreed upon beforehand. The more granular the paths are, the easier these secrets will be to manage as their number grows. In essence, paths should look something like the following example:
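A hypothetical layout and write:

```bash
# <mount>/<service>/<environment>/<item>
vault kv put general/gitlab/production/runner-token value=<TOKEN>
```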
Userpass authentication method setup
The userpass auth method allows users to authenticate with Vault using a username and password combination. userpass users have to be created manually and can then be linked with other entities, such as the one generated by logging in via the OIDC Azure AD method.
Enable the userpass auth method:
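Typically:

```bash
vault auth enable userpass
```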
Then create a new user with a random password:
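A sketch, assuming openssl is available for password generation; the username is a placeholder:

```bash
vault write auth/userpass/users/<USERNAME> \
    password="$(openssl rand -base64 24)" \
    token_policies="hpc-default"
```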
Then link it with the user's entity manually via the GUI from Access > Entities > USERS-ENTITY > Add alias, with the Name field set to the same name entered in the previous step and Auth Backend set to userpass/ (userpass).
Then, according to the hpc-default policy, the user can change their password with:
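For example:

```bash
vault write auth/userpass/users/<USERNAME>/password password="<NEW-PASSWORD>"
```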
Client SSH key signing setup
Mount the ssh-client-signer secrets engine:
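Typically:

```bash
vault secrets enable -path=ssh-client-signer ssh
```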
Configure Vault with a CA for signing client keys using the /config/ca endpoint. If you do not have an internal CA, Vault can generate a keypair for you:
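For example:

```bash
vault write ssh-client-signer/config/ca generate_signing_key=true
```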
If you already have a keypair, specify the public and private key parts as part of the payload:
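A sketch, assuming the key files are named id_rsa and id_rsa.pub:

```bash
vault write ssh-client-signer/config/ca \
    private_key="$(cat id_rsa)" \
    public_key="$(cat id_rsa.pub)"
```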
Host machine(s) setup
Add the public key to every target host's SSH configuration. This process can be manual or automated using a configuration management tool. The public key is accessible via the API and does not require authentication.
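Two equivalent ways to fetch it, as a sketch (the address and output path are placeholders):

```bash
# Unauthenticated, via the API
curl -o /etc/ssh/trusted-user-ca-keys.pem http://<VAULT-ADDR>:8200/v1/ssh-client-signer/public_key

# ...or via the CLI (requires a token)
vault read -field=public_key ssh-client-signer/config/ca > /etc/ssh/trusted-user-ca-keys.pem
```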
Add the path where the public key contents are stored to the SSH configuration file at /etc/ssh/sshd_config as the TrustedUserCAKeys option:
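Assuming the key was saved to the path above:

```text
TrustedUserCAKeys /etc/ssh/trusted-user-ca-keys.pem
```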
Create the AuthorizedPrincipalsFile file structure:
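A common layout (the directory name is an assumption):

```bash
sudo mkdir -p /etc/ssh/auth_principals
```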
Create principals files listing the users that can authenticate as them. In this example, ubuntu@host:
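A sketch for the ubuntu local user, assuming the directory from the previous step:

```bash
# Principals listed in this file may log in as the ubuntu user
echo "ubuntu" | sudo tee /etc/ssh/auth_principals/ubuntu
```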
Add the path where the principals are stored to the SSH configuration file at /etc/ssh/sshd_config as the AuthorizedPrincipalsFile option:
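Assuming the layout above, where %u expands to the local username:

```text
AuthorizedPrincipalsFile /etc/ssh/auth_principals/%u
```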
It is good practice to also disable password auth via SSH in the configuration file at /etc/ssh/sshd_config:
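For example:

```text
PasswordAuthentication no
```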
Restart the sshd service:
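For example:

```bash
sudo systemctl restart sshd
```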
Then create separate roles for each user, for granular per-user management. NOTE: setting algorithm_signer is especially important (valid values are ssh-rsa, rsa-sha2-256, and rsa-sha2-512), as ssh-rsa is the default but is now considered insecure. Note that allowed_users controls the principals the Vault user is allowed to sign and connect with. If you need to restrict someone's SSH access, just change their role's allowed_users field. Example:
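A hypothetical role for a user alice; allowed_users must match the principal(s) she is allowed to sign for:

```bash
vault write ssh-client-signer/roles/alice -<<EOF
{
  "algorithm_signer": "rsa-sha2-512",
  "allow_user_certificates": true,
  "allowed_users": "alice",
  "default_user": "alice",
  "allowed_extensions": "permit-pty",
  "default_extensions": {
    "permit-pty": ""
  },
  "key_type": "ca",
  "ttl": "30m0s"
}
EOF
```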
It is beneficial for SSH key signing roles to use templating, to avoid manual policy creation. At the time of writing, hpc-default.hcl used ssh-client-signer/sign/{{identity.entity.aliases.<USERPASS-ACCESSOR-ID-HERE>.name}} for templating, which means SSH key signing role names must match userpass auth method usernames (which also have to be created manually).
Client side setup
Ask Vault to sign your public key. The valid_principals field MUST match the ssh-client-signer/sign/<YOUR-ROLE-HERE> role's allowed_users:
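A sketch, assuming an RSA keypair at ~/.ssh/id_rsa:

```bash
vault write -field=signed_key ssh-client-signer/sign/<YOUR-ROLE-HERE> \
    public_key=@$HOME/.ssh/id_rsa.pub \
    valid_principals=<USERNAME> > ~/.ssh/signed-cert.pub
```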
To customize the signing options, use a JSON payload:
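For example:

```bash
vault write ssh-client-signer/sign/<YOUR-ROLE-HERE> -<<EOF
{
  "public_key": "$(cat ~/.ssh/id_rsa.pub)",
  "valid_principals": "<USERNAME>",
  "ttl": "30m0s"
}
EOF
```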
(Optional) View enabled extensions, principals, and metadata of the signed key:
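Assuming the signed key from the previous step:

```bash
ssh-keygen -Lf ~/.ssh/signed-cert.pub
```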
SSH into the host machine using the signed key. You must supply both the signed public key from Vault and the corresponding private key as authentication to the SSH call:
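With the files from the previous steps:

```bash
ssh -i ~/.ssh/signed-cert.pub -i ~/.ssh/id_rsa ubuntu@host
```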
Host SSH key signing setup (not tested as of writing)
Mount the ssh-host-signer secrets engine:
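Typically:

```bash
vault secrets enable -path=ssh-host-signer ssh
```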
Configure Vault with a CA for signing host keys using the /config/ca endpoint. If you do not have an internal CA, Vault can generate a keypair for you:
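For example:

```bash
vault write ssh-host-signer/config/ca generate_signing_key=true
```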
If you already have a keypair, specify the public and private key parts as part of the payload:
Extend host key certificate TTLs:
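A sketch, using ten years as an example maximum:

```bash
vault secrets tune -max-lease-ttl=87600h ssh-host-signer
```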
Create a role for signing host keys. Be sure to fill in the list of allowed domains:
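A sketch with placeholder domains and a hypothetical role name hostrole:

```bash
vault write ssh-host-signer/roles/hostrole \
    key_type=ca \
    algorithm_signer=rsa-sha2-512 \
    ttl=87600h \
    allow_host_certificates=true \
    allowed_domains="localdomain,example.com" \
    allow_subdomains=true
```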
Sign the host's SSH public key:
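Assuming the hostrole role from the previous step and an RSA host key:

```bash
vault write ssh-host-signer/sign/hostrole \
    cert_type=host \
    public_key=@/etc/ssh/ssh_host_rsa_key.pub
```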
Set the resulting signed certificate as HostCertificate in the SSH configuration on the host machine:
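One way to capture just the certificate, as a sketch:

```bash
vault write -field=signed_key ssh-host-signer/sign/hostrole \
    cert_type=host \
    public_key=@/etc/ssh/ssh_host_rsa_key.pub \
    | sudo tee /etc/ssh/ssh_host_rsa_key-cert.pub
```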
Set permissions on the certificate to 0640:
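For example:

```bash
sudo chmod 0640 /etc/ssh/ssh_host_rsa_key-cert.pub
```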
Add the host key and host certificate to the SSH configuration file at /etc/ssh/sshd_config:
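Assuming the paths above:

```text
HostKey /etc/ssh/ssh_host_rsa_key
HostCertificate /etc/ssh/ssh_host_rsa_key-cert.pub
```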
Restart the sshd service:
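For example:

```bash
sudo systemctl restart sshd
```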
Client-Side Host Verification
Retrieve the host signing CA public key to validate the host signature of target machines.
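Two equivalent ways to fetch it, as a sketch:

```bash
# Via the API
curl http://<VAULT-ADDR>:8200/v1/ssh-host-signer/public_key

# ...or via the CLI
vault read -field=public_key ssh-host-signer/config/ca
```

On the client, the retrieved key is then added to ~/.ssh/known_hosts as a line of the form `@cert-authority *.example.com <CA-PUBLIC-KEY>`.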
Backup setup
This assumes that Raft is used as the storage back-end.
Create a backup:
- Manually:
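For example:

```bash
vault operator raft snapshot save backup.snap
```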
- Automatically, on a schedule, via the CLI (e.g. from a cron job):
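A hypothetical cron entry, assuming VAULT_ADDR and a valid token are available in the cron environment:

```bash
# crontab: daily snapshot at 03:00
0 3 * * * vault operator raft snapshot save /var/backups/vault-$(date +\%F).snap
```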
Restore with:
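For example:

```bash
vault operator raft snapshot restore backup.snap
```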
Other secret engines and use cases
Vault provides more secret engines and use cases that may be of use but were not covered in this test/demo.
Using Key/Value engine as a way to track and renew SSL certs
Certificates get issued after Let's Encrypt validates that users control the domain names in those certificates, using the ACME API and "challenges". The most popular ones are HTTP-01 and DNS-01. The first requires users to fetch a particular file and serve it via HTTP or HTTPS so that the Let's Encrypt servers are able to retrieve it. The latter instead uses DNS records, so that Let's Encrypt can validate domain ownership via queries. Many clients already ease both of these processes, with EFF's Certbot being the most prominent one.
Certbot supports certificate creation and renewal using both challenge types. For dealing with multiple domain names from one server, HTTP-01 challenges tend to be cumbersome: Certbot must serve some traffic on ports 80 and 443 for the Let's Encrypt servers to validate the domains. DNS-01 challenges are better in this respect, but this is still not to be considered cloud-ready for two reasons:
- The certificate state is stored locally on the server
- The renewal process depends on a running cronjob of the same server
To tackle the first point, one could still use Certbot, but store certificates, tokens, etc. in Vault.
Vault engines not covered in test
The database secrets engine generates database credentials dynamically based on configured roles. It works with a number of different databases through a plugin interface. There are a number of built-in database types, and an exposed framework for running custom database types for extendability. This means that services that need to access a database no longer need to hardcode credentials: they can request them from Vault, and use Vault's leasing mechanism to more easily roll keys. These are referred to as "dynamic roles" or "dynamic secrets".
Since every service is accessing the database with unique credentials, it makes auditing much easier when questionable data access is discovered. You can track it down to the specific instance of a service based on the SQL username.
The PKI secrets engine generates dynamic X.509 certificates. With this secrets engine, services can get certificates without going through the usual manual process of generating a private key and CSR, submitting to a CA, and waiting for a verification and signing process to complete. Vault's built-in authentication and authorization mechanisms provide the verification functionality.
By keeping TTLs relatively short, revocations are less likely to be needed, keeping CRLs short and helping the secrets engine scale to large workloads. This in turn allows each instance of a running application to have a unique certificate, eliminating sharing and the accompanying pain of revocation and rollover.
In addition, by allowing revocation to mostly be forgone, this secrets engine allows for ephemeral certificates. Certificates can be fetched and stored in memory upon application startup and discarded upon shutdown, without ever being written to disk.
The transit secrets engine handles cryptographic functions on data in-transit. Vault doesn't store the data sent to the secrets engine. It can also be viewed as "cryptography as a service" or "encryption as a service". The transit secrets engine can also sign and verify data; generate hashes and HMACs of data; and act as a source of random bytes.
The primary use case for transit is to encrypt data from applications while still storing that encrypted data in some primary data store. This relieves the burden of proper encryption/decryption from application developers and pushes the burden onto the operators of Vault.
Key derivation is supported, which allows the same key to be used for multiple purposes by deriving a new key based on a user-supplied context value. In this mode, convergent encryption can optionally be supported, which allows the same input values to produce the same ciphertext.
Datakey generation allows processes to request a high-entropy key of a given bit length be returned to them, encrypted with the named key. Normally this will also return the key in plaintext to allow for immediate use, but this can be disabled to accommodate auditing requirements.
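A minimal sketch of the transit encrypt path, with a hypothetical key name:

```bash
vault secrets enable transit
vault write -f transit/keys/demo-key
vault write transit/encrypt/demo-key plaintext="$(echo -n 'sensitive data' | base64)"
# Returns a ciphertext of the form vault:v1:...; decrypt it via transit/decrypt/demo-key
```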
Security/Threat model
Some risks:
- Vault traffic might be eavesdropped on
- Solution: Disable unencrypted HTTP; only use TLS-encrypted traffic.
- A user with a signed SSH key might insert their own key into the authorized keys list.
- Solution: Restrict machine user permissions.
- Solution: Schedule automatic SSH configuration deployment that clears dirs and inserts up-to-date configs to machines.
- A signed SSH key will most likely have TTL left after SSH signing permissions are revoked.
- Solution: Admin generates a new CA key.
- An authenticated user's token might have TTL left after auth permissions are revoked.
- Solution: Disable/delete the auth entity.