Deploy and Run Hashicorp Vault With TLS Security in AWS Cloud | by Krishnadutt Panchagnula | Nov, 2022

Security and AWS

Deploying a Production-Grade Vault in AWS

Often in software engineering, when we are developing new features, it is fairly common for our code to rely on some sensitive information, such as passwords, secret keys, or tokens, to perform the functionality we want. Different professionals in the IT sector handle these secrets in different ways, such as the following:

  • Developers access secrets such as API tokens, database credentials, or other sensitive information within the code.
  • DevOps engineers may need to export some values as environment variables or write them to a YAML file for the CI/CD pipeline to run efficiently.
  • Cloud engineers may need to pass credentials, secret tokens, and other secret information to access their respective cloud. (In the case of AWS, even if we keep credentials in a .credentials file, we still need to pass the file name to the Terraform block, which indicates that the credentials are available locally on the machine.)
  • System administrators may need to send employees different logins and passwords to access different services.

But writing or sharing these secrets in plain text is a serious security problem, as anyone with access to the codebase can read the secret or pull off a man-in-the-middle attack. To combat this, in the development world we have options such as importing secrets from another file (YAML, .py, etc.) or exporting them as environment variables. But both of these still have a problem: a person who has access to a single configuration file or machine can echo (read: print) the password. Given these problems, it would be very helpful if we could deploy one solution that serves all the IT professionals mentioned above. This is the perfect place to introduce Vault.
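The environment-variable problem above is easy to demonstrate; a minimal sketch, using a hypothetical secret value:

```shell
# Export a (hypothetical) secret the way a CI job or a developer might
export DB_PASSWORD='hunter2'
# Anyone with access to the same shell or process environment can read it back
echo "$DB_PASSWORD"   # prints: hunter2
```

Nothing here is encrypted or access-controlled; any process spawned from this shell inherits the value, which is exactly the gap a secrets manager closes.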

HashiCorp Vault is an identity-based secrets and encryption management system. If we have to compare it with AWS, it is like an IAM-style, user-based resource (read: Vault) management system that keeps your sensitive information secure. This sensitive information can be API keys, encryption keys, passwords, and certificates.

Its deployment options can be seen as follows:

  • Local hosting: This method is usually chosen when the secrets are to be accessed only by local users or during the development phase, and has to be discarded if the secrets engines are to be shared with others. As it runs within the local development environment, there is no additional investment for deployment: Vault can be hosted directly on a local machine, either via the official Docker image or the binary.
  • Public cloud hosting (EC2 in AWS/Virtual machines in Azure): If the idea is to set up a vault to be shared with people in different regions, hosting it on a public cloud is a good idea. Although we can achieve this with on-premises servers, the upfront cost and scalability are troublesome. In the case of AWS, we can easily secure the endpoint by hosting Vault in an EC2 instance and creating a security group that restricts which IPs can access the instance. If you feel more adventurous, you can map it to a domain name and route it through Route 53 to serve Vault on that domain for end users. In the case of EC2 hosting with an AWS-defined domain, the cost is $0.0116/hr.
  • Vault cloud hosting (HashiCorp Cloud Platform): If you don’t want to set up infrastructure in a public cloud environment, there is an option to use Vault’s own hosted cloud. We can think of it as a SaaS platform that lets us use Vault as a Service on a subscription basis. Since HashiCorp manages the cloud itself, we can expect a consistent user experience. For cost, it has three production-grade tiers: Starter at $0.50/hr, Standard at $1.58/hr, and Plus at $1.84/hr (as of July 2022).

Our goal in this project is to create a Vault instance in EC2 and store static secrets in the Key-Value secrets engine. These secrets are later retrieved by a Terraform script that, when applied, pulls them from the Vault secrets engine and uses them to build infrastructure in AWS.

To create a ready-to-use vault, we are going to follow these steps:

  1. Create an EC2 Linux instance with ssh keys to access it.
  2. SSH into the instance, then install and run Vault
  3. Configure the Vault secrets engine

Step 1: Create an EC2 Linux instance with ssh keys to access it

To create an EC2 instance and access it remotely via SSH, we need to generate a key pair. First, let’s create an SSH key through the AWS console.

Once the keys are generated and downloaded to the local workspace, we create an EC2 (t2.micro) Linux instance and associate it with the above keys. The EC2 size can be chosen based on your needs, but usually, a t2.micro is more than sufficient.

Step 2: SSH into the instance, then install and run Vault

Once the EC2 status changes to Running, open the directory in which you saved the SSH (.pem) key, open a terminal, and type ssh -i <key-file>.pem ec2-user@<public-ip>. Once we have established a successful SSH session to our EC2 instance, we can install Vault using the following commands:

wget -O- https://apt.releases.hashicorp.com/gpg | gpg --dearmor | sudo tee /usr/share/keyrings/hashicorp-archive-keyring.gpg

echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list

sudo apt update && sudo apt install vault

The above commands install Vault in the EC2 environment. The second command is known to sometimes throw an error; in that case, replace $(lsb_release -cs) with your release codename, e.g., “jammy”. (This entire process can be automated by copying the above commands into the EC2 user data while creating the instance.)
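If you go the user-data route, the same commands can be bundled into a boot script. A minimal sketch that only generates the script file locally so it can be inspected (the filename userdata.sh is an arbitrary choice; nothing is installed by this block itself):

```shell
# Write the install commands to a script suitable for EC2 user data.
# The quoted 'EOF' keeps $(lsb_release -cs) literal so it expands at boot,
# not here. User data runs as root, so sudo is unnecessary inside it.
cat > userdata.sh <<'EOF'
#!/bin/bash
wget -O- https://apt.releases.hashicorp.com/gpg | gpg --dearmor | tee /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | tee /etc/apt/sources.list.d/hashicorp.list
apt-get update && apt-get install -y vault
EOF
chmod +x userdata.sh
```

The resulting file can be pasted into the “User data” field of the EC2 launch wizard or passed to the instance at creation time.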

Step 3: Configure HashiCorp Vault

Before initializing Vault, make sure it is properly installed by running the following command:

vault

Let’s also make sure no environment variable named VAULT_TOKEN is set. To do this, use the following command:

$ unset VAULT_TOKEN

Once Vault is installed, we need to configure it, which is done using HCL files. These HCL files contain settings such as the storage backend, the listener, the cluster address, UI options, etc. As we discussed in Vault’s architecture, the backend where the data is stored is distinct from the Vault engine itself and has to persist even when Vault is sealed (it is a stateful resource). Additionally, we need to specify the following details:

  • Listener Ports: The port/s on which Vault listens for API requests.
  • API Address: Specifies the address to advertise for routing client requests.
  • Cluster Address: Indicates the address and port used for communication between Vault nodes in a cluster. To make the setup more secure, we can use TLS-based communication. This step is optional and is only needed if you want to further secure your environment. A TLS certificate can be generated using openssl on Linux:
# Install openssl
sudo apt install openssl

# Generate a self-signed TLS certificate and private key, valid for one year
openssl req -newkey rsa:4096 -x509 -sha512 -days 365 -nodes -out certificate.pem -keyout privatekey.pem
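Before wiring the certificate into Vault, it can help to confirm what was generated. A quick check, assuming the filenames above (the -subj flag pre-fills the interactive prompts; "localhost" is a hypothetical common name for this sketch):

```shell
# Generate the certificate non-interactively for illustration
openssl req -newkey rsa:4096 -x509 -sha512 -days 365 -nodes \
  -subj "/CN=localhost" -out certificate.pem -keyout privatekey.pem
# Print the subject and expiry date to confirm the certificate is as expected
openssl x509 -in certificate.pem -noout -subject -enddate
```

In a real deployment the common name (or a subject alternative name) should match the hostname clients use to reach Vault, or TLS verification will fail.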

Insert the TLS certificate and private key file path into their respective arguments in the listener “tcp” block.

  • tls_cert_file: specifies the path to the certificate for TLS in PEM encoded file format.
  • tls_key_file: specifies the path to the private key for the certificate in PEM-encoded file format.
# Configuration in config.hcl file

storage "raft" {
  path    = "./vault/data"
  node_id = "node1"
}

listener "tcp" {
  address       = "127.0.0.1:8200"
  tls_disable   = "true" # set to "false" to actually serve TLS with the files below
  tls_cert_file = "certificate.pem"
  tls_key_file  = "privatekey.pem"
}

disable_mlock = true
api_addr      = "http://127.0.0.1:8200"
cluster_addr  = "https://127.0.0.1:8201"
ui            = true

Once that is created, we create the folder where our backend will rest: vault/data.

mkdir -p ./vault/data

Once done, we can start Vault Server using the following command:

vault server -config=config.hcl

The server now runs with the backend and all the settings from the configuration file. Keep it running, open a second terminal session, point the Vault CLI at the server address, and initialize the instance:

export VAULT_ADDR='http://127.0.0.1:8200'

vault operator init

After initialization, Vault prints five unseal keys, produced by Shamir’s secret sharing (three of which, under the default settings, are needed to unseal the vault), and an initial root token. This is the only time Vault ever displays all of this data, and these details have to be saved securely in order for the vault to be opened later. In practice, the Shamir key shares should be distributed among the major stakeholders in the project, with the key threshold set so that the vault can only be opened when a majority consensus exists.
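For automation, the init command can also emit JSON (vault operator init -format=json), which makes it easy to capture the shares programmatically. A minimal parsing sketch using an inline sample with placeholder values in place of real keys (the field names unseal_keys_b64 and unseal_threshold follow the JSON shape of recent Vault versions):

```shell
# Sample of the JSON shape emitted by `vault operator init -format=json`
# (placeholder values, NOT real keys or a real token)
cat > init.json <<'EOF'
{
  "unseal_keys_b64": ["key1", "key2", "key3", "key4", "key5"],
  "unseal_threshold": 3,
  "root_token": "hvs.placeholder"
}
EOF
# Print each unseal key on its own line so the shares can be handed out
python3 -c 'import json; [print(k) for k in json.load(open("init.json"))["unseal_keys_b64"]]'
```

Each extracted share would then be delivered to a different stakeholder over a secure channel, and the JSON file deleted.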

Once we have these keys and the initial root token, we need to unseal the vault:

vault operator unseal

Here we need to run the command once per key until the threshold number of unseal keys (three by default) has been supplied. Once the threshold is reached, the sealed status changes to false.

Then we log in to Vault using the initial root token.

vault login

Once successfully authenticated, you can easily explore the different secrets engines, such as the Transit secrets engine, which encrypts data in transit, or the Key-Value (KV) secrets engine, which securely stores key-value pairs such as passwords and credentials.

As seen from the process, Vault is strong in terms of encryption, and as long as the Shamir keys and the initial root token are handled sensitively, we can ensure security and integrity.

And you have a very secure Vault engine (protected by your own Shamir keys) running on a free-tier AWS EC2 instance (which, in turn, is protected by security groups)!
