Keep Your Application Secrets Secret

Handling application secrets is an integral part of the job for any backend developer nowadays. Let me show you how we should tackle these challenges!

There is a common problem most backend developers face at least once in their careers: where should we store our secrets? It appears simple enough: we have plenty of services focusing on this very issue; we just need to pick one and move on to the next task. Sounds easy, but how can we pick the right solution for our needs? We should evaluate our options to see more clearly.

The Test
For the demonstration, we can take a simple Spring Boot application as an example. This will be perfect for us because that is one of the most popular technology choices on the backend today. In our example, we will assume we need to use a MySQL database over JDBC; therefore, our secrets will be the connection URL, driver class name, username, and password. This is only a proof of concept; any dependency would do as long as it uses secrets. We can easily generate such a project using Spring Initializr. We will get the DataSource auto-configured and then create a bean that will perform the connection test. The test can look like this:
Java
@Component
public class MySqlConnectionCheck {
    private final DataSource dataSource;
    @Autowired
    public MySqlConnectionCheck(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public void verifyConnectivity() throws SQLException {
        try (final Connection connection = dataSource.getConnection()) {
            query(connection);
        }
    }

    private void query(Connection connection) throws SQLException {
        final String sql = "SELECT CONCAT(@@version_comment, ' - ', VERSION()) FROM DUAL";
        try (final ResultSet resultSet = connection.prepareStatement(sql).executeQuery()) {
            resultSet.next();
            final String value = resultSet.getString(1);
            //write something that will be visible on the Gradle output
            System.err.println(value);
        }
    }
}

This class will establish a connection to MySQL and make sure we are, in fact, using MySQL, as it will print the MySQL version comment and version. This way, we would notice our mistake even if an auto-configured H2 instance was used by the application. Furthermore, if we generate a random password for our MySQL Docker container, we can make sure we are using the instance we wanted, validating that the whole configuration works properly.
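The article does not show how verifyConnectivity() gets invoked; one simple option (a sketch, with an assumed class name) is a CommandLineRunner that triggers the check at start-up:

Java
@Component
public class StartupConnectionCheckRunner implements CommandLineRunner {
    private final MySqlConnectionCheck check;

    @Autowired
    public StartupConnectionCheckRunner(MySqlConnectionCheck check) {
        this.check = check;
    }

    @Override
    public void run(String... args) throws SQLException {
        // Fail fast at start-up if the configured secrets are wrong.
        check.verifyConnectivity();
    }
}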

Back to the problem, shall we? 

Storing Secrets 
The Easy Way 
The most trivial option is to store the secrets together with the code, either hard-coded or as a configuration property, using profiles to support separate environments (dev/test/staging/prod), as in the sketch below.
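For illustration, the hard-coded variant could look like the hypothetical sketch below; every name and credential here is made up, which is exactly the problem: they live in the repository in plain text.

Java
@Configuration
public class HardCodedDataSourceConfig {
    @Bean
    public DataSource dataSource() {
        // Anyone with read access to the repository now has the production credentials.
        return DataSourceBuilder.create()
                .url("jdbc:mysql://prod-db.example.com:3306/app")
                .driverClassName("com.mysql.cj.jdbc.Driver")
                .username("app_user")
                .password("super-secret-password")
                .build();
    }
}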
 
As simple as it is, this is a horrible idea, as many popular sites had to learn the hard way over the years. These “secrets” are anything but secret. As soon as someone gets access to the repository, they will have the credentials to the production database. Adding insult to injury, we won’t even know about it! This is among the most common causes of data breaches. A good indicator of the seriousness of the situation is how common secret scanning offerings have become on GitHub, GitLab, Bitbucket, and other services hosting git repositories.

The Right Way
Now that we see what the problem is, we can start to look for better options. There is one common thing we will notice in all the solutions we can use: they want us to store our secrets in an external service that will keep them secure. This comes with a lot of benefits these services can provide, such as: 
• Solid access control. 
• Encrypted secrets (and sometimes more, like certificates, keys). 
• Auditable access logs. 
• A way to revoke access/rotate secrets in case of a suspected breach. 
• Natural separation of environments as they are part of the stack (one secrets manager per env). 
Sounds great, did we solve everything? Well, it is not that simple. We have some new questions we need to answer first: 
• Who will host and maintain these? 
• Where should we put the secrets we need for authentication when we want to access the secrets manager? 
• How will we run our code locally on the developer laptops? 
• How will we run our tests on CI? 
• Will it cost anything? 

These are not trivial, and their answers depend very much on the solution we want to use. Let us review them one by one in the next section.
Examples of Secrets Managers 
In all cases below, we will introduce the secrets manager as a new component of our stack: if we had an application and a database, the application would now fetch its secrets from the secrets manager first and use them to connect to the database.
HashiCorp Vault 
If we go for the popular open-source option, HashiCorp Vault, we can either self-host or use their managed service, HCP Vault. Depending on the variant we select, we may or may not have some maintenance effort, but that answers the first question. Answering the rest should be easy as well. Regarding authentication, we can use, for example, the AppRole auth method, providing the necessary credentials to our application instances in each environment via environment variables, as the sketch below illustrates.
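An AppRole-based clientAuthentication() override in Spring Vault might look like this minimal sketch; the environment variable names are assumptions:

Java
@Override
public ClientAuthentication clientAuthentication() {
    // The role ID and secret ID arrive as environment variables, set per environment.
    final AppRoleAuthenticationOptions options = AppRoleAuthenticationOptions.builder()
            .roleId(AppRoleAuthenticationOptions.RoleId.provided(System.getenv("VAULT_ROLE_ID")))
            .secretId(AppRoleAuthenticationOptions.SecretId.provided(System.getenv("VAULT_SECRET_ID")))
            .build();
    return new AppRoleAuthentication(options, restOperations());
}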

Regarding local and CI execution, we can simply configure and run a Vault instance in dev server mode on the machine where the app should run, passing the necessary credentials using environment variables, similarly to the live app instances. As these are local Vaults providing access to throw-away dev databases, we should not worry too much about their security, as we should avoid storing meaningful data in them.

To avoid spending a lot of effort on maintaining these local/CI Vault instances, it can be a clever idea to store their contents in a central location and let each developer update their Vault using a single command every now and then. Regarding cost, it depends on a few things: if you can go with the self-hosted open-source option, you should worry only about the VM cost (and the time spent on maintenance); otherwise, you might need to figure out how to optimize the license/support cost.
Cloud-Based Solutions 
If we are hosting our application with one of the three big cloud providers, we have even more options. AWS, Azure, and Google Cloud all offer a managed secrets manager service. Probably because of the nature of the problem, AWS Secrets Manager, Azure Key Vault, and Google Cloud Secret Manager share many similarities. For example, each of them:

• Stores versioned secrets. 
• Logs access to the service and its contents. 
• Uses solid authentication and authorization features. 
• Is well integrated with other managed services of the same provider. 
• Provides an SDK for developers in some popular languages. 

At the same time, we should keep in mind that these are still hugely different services. Some of the obvious differences are the API they are using for communication, and the additional features they provide. For example, Azure Key Vault can store secrets, keys, and certificates, while AWS and GCP provide separate managed services for these additional features. 

Thinking about the questions we wanted to answer, these services answer the first two the same way. All of them are managed services, and the managed identity solution of the cloud provider they belong to is the most convenient, secure way to access them. Thanks to this, we do not need to bother storing secrets/tokens in our application configuration, just the URL of the secrets manager, which is not considered a secret. Regarding cost, AWS and GCP charge by the number of secrets and the number of API calls, while Azure only charges for the latter. In general, they are very reasonably priced, and we can sleep better at night knowing our security posture is a bit better.

Trouble starts when we try to answer the remaining two questions dealing with the local and CI use cases. All three solutions can be accessed from the outside world (given the proper network configuration), but simply punching holes in a firewall and sharing the same secrets manager credentials is not ideal. There are situations when doing so is simply not practical, such as the following cases:

• Our team is scattered around the globe, working from home, and we would not be able to use strong IP restrictions, or we would need a constant VPN connection just to build/test the code. Needing an internet connection for tests is bad enough, but using a VPN constantly while at work can put additional stress on the infrastructure and the team at the same time. 
• When our CI instances spawn with random IPs from an unknown range, we cannot set proper IP restrictions. A similar case to the previous one. 
• We cannot trust the whole team with the secrets of the shared secrets manager. For example, in the case of open-source projects, we cannot run around and share a secrets manager instance with the rest of the world. 
• We need to change the contents of the secrets manager during tests. When this happens, we risk isolation problems between developers and CI instances. We cannot launch a different secrets manager instance for each person and process (or test case), as that would not scale. 
• We do not want to pay extra for the additional secrets managers used in these cases. 
Can We Fake It Locally?
Usually, this would be the moment when I start to search for a suitable test double and formulate plans about using that instead of the real service locally and on CI. What do we expect from such a test double? It should:
• Behave like the real service would, including in exceptional situations. 
• Be actively maintained to reduce the risk of lagging behind in case of API version changes in the real service. 
• Have a way to initialize the contents of the secrets manager double on start-up, so we do not need additional code in the application. 
• Allow us to synchronize the secret values between the team and CI instances to reduce maintenance cost. 
• Be simple to start and throw away, both locally and on CI. 
• Not use a lot of resources. 
• Not introduce additional dependencies to our application, if possible. 
I know about third-party solutions ticking all the boxes for AWS and Azure, while I have failed to locate one for GCP.
Solving the Local Use Case for Each Secrets Manager in Practice 
It is finally time for us to roll up our sleeves and get our hands dirty. How should we modify our test project to be able to use our secrets manager integrations locally? Let us see for each of them:
HashiCorp Vault 
Since we can run the real thing locally, getting a test double is pointless. We can simply integrate Vault using the Spring Vault module by adding a property source:

Java
@Component("SecretPropertySource")
@VaultPropertySource(value = "secret/datasource", propertyNamePrefix = "spring.datasource.")
public class SecretPropertySource {
    private String url;
    private String username;
    private String password;
    private String driverClassName;
    // ... getters and setters ...  
}

As well as a configuration for the “dev” profile:
Java
@Configuration
@Profile("dev")
public class DevClientConfig extends AbstractVaultConfiguration {
    @Override
    public VaultEndpoint vaultEndpoint() {
        final String uri = getEnvironment().getRequiredProperty("app.secrets.url");
        return VaultEndpoint.from(URI.create(uri));
    }

    @Override
    public ClientAuthentication clientAuthentication() {
        final String token = getEnvironment().getRequiredProperty("app.secrets.token");
        return new TokenAuthentication(token);
    }
    @Override
    public VaultTemplate vaultTemplate() {
        final VaultTemplate vaultTemplate = super.vaultTemplate();
        final SecretPropertySource datasourceProperties = new SecretPropertySource();
        datasourceProperties.setUrl("jdbc:mysql://localhost:15306/");
        datasourceProperties.setDriverClassName("com.mysql.cj.jdbc.Driver");
        datasourceProperties.setUsername("root");
        datasourceProperties.setPassword("16276ec1-a682-4022-b859-38797969abc6");
        vaultTemplate.write("secret/datasource", datasourceProperties);
        return vaultTemplate;
    }
}

We need to be careful: each bean that depends on the fetched secret values (or the DataSource) must be marked with @DependsOn("SecretPropertySource") to make sure it is not populated earlier during start-up, while the Vault-backed PropertySource is not yet registered.
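For example, marking our connectivity check bean would look like this:

Java
@Component
@DependsOn("SecretPropertySource")
public class MySqlConnectionCheck {
    // ... unchanged, but now guaranteed to be created after the secrets are registered ...
}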

As for the reason we used a “dev”-specific profile, it was necessary because of two things:
1. The additional initialization of the Vault contents on start-up.
2. The simplified authentication, as we are using a simple token instead of the aforementioned AppRole. 
Performing the initialization here addresses the worries about maintaining the Vault contents, as the code takes care of it, and we did not need any additional dependencies either. Of course, it would have been even better to use some Docker magic to add those values without ever needing to touch Java; this might be an improvement for later. 
Speaking of Docker, the Docker Compose file is simple as seen below:

YAML
version: "3"
services:
  vault:
    container_name: self-hosted-vault-example
    image: vault
    ports:
      - '18201:18201'
    restart: always
    cap_add:
      - IPC_LOCK
    entrypoint:
      vault server -dev-kv-v1 -config=/vault/config/vault.hcl
    volumes:
      - config-import:/vault/config:ro
    environment:
      VAULT_DEV_ROOT_TOKEN_ID: 00000000-0000-0000-0000-000000000000
      VAULT_TOKEN: 00000000-0000-0000-0000-000000000000
  # ... MySQL config ...
volumes:
  config-import:
    driver: local
    driver_opts:
      type: "none"
      o: "bind"
      device: "vault"

The key points to remember are the dev mode in the entry point, the volume config that will allow us to add the configuration file, and the environment variables baking in the dummy credentials we will use in the application. As for the configuration, we need to set in-memory mode and configure an HTTP endpoint without TLS:

HCL
disable_mlock = true
storage "inmem" {}

listener "tcp" {
  address     = "0.0.0.0:18201"
  tls_disable = 1
}

ui                = true
max_lease_ttl     = "7200h"
default_lease_ttl = "7200h"
api_addr          = "http://127.0.0.1:18201"

The complexity of the application might require some changes in the Vault configuration or the Docker Compose content; however, for this simple example, we should be fine. 

Running the project should produce the expected output: 
• MySQL Community Server - GPL - 8.0.32
We are done with configuring Vault for local use. Setting it up for tests should be even simpler using the things we have learned here. Also, we can simplify some of the steps there if we decide to use the relevant Testcontainers module. 
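A minimal sketch of that Testcontainers variant, assuming the org.testcontainers:vault module and reusing the dummy token and values from our Docker Compose setup, could be:

Java
// Start a throw-away Vault container and seed it with our datasource secrets.
final VaultContainer<?> vault = new VaultContainer<>("hashicorp/vault:1.13.3")
        .withVaultToken("00000000-0000-0000-0000-000000000000")
        .withSecretInVault("secret/datasource",
                "url=jdbc:mysql://localhost:15306/",
                "driverClassName=com.mysql.cj.jdbc.Driver",
                "username=root",
                "password=16276ec1-a682-4022-b859-38797969abc6");
vault.start();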

Google Cloud Secret Manager 
As there is no readily available test double for Google Cloud Secret Manager, we need to make a trade-off. We can choose from the following three options: 
1. We can fall back to the easy option in the local/CI case, disabling the logic that fetches the secrets for us in any real environment. In this case, we will not know whether the integration works until we deploy the application somewhere. 
2. We can decide to use some shared Secret Manager instances, or even let every developer create one for themselves. This can solve the problem locally, but it is inconvenient compared to the solution we wanted, and we would need to avoid running our CI tests in parallel and clean up perfectly whenever the contents of the Secret Manager must change on CI. 
3. We can try mocking/stubbing the necessary endpoints of the Secret Manager ourselves. WireMock can be a good start for the HTTP API, or we can even start from scratch. It is a worthy endeavor for sure, but it will take a lot of time to do well. Also, if we do this, we must consider the ongoing maintenance effort. 

As the decision will require quite different solutions for each, there is not much we can solve in general.
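Still, to make option 3 a bit more tangible, a hedged WireMock sketch stubbing the "access latest secret version" REST endpoint could start like this; the project ID, secret name, and value are assumptions:

Java
// Assumes WireMock 2.x and its static DSL imports (com.github.tomakehurst.wiremock.client.WireMock.*).
final WireMockServer server = new WireMockServer(WireMockConfiguration.options().dynamicPort());
server.start();
// Stub the endpoint of the public REST API and return a Base64-encoded payload.
server.stubFor(get(urlEqualTo(
        "/v1/projects/demo-project/secrets/database-password/versions/latest:access"))
        .willReturn(okJson("{\"payload\": {\"data\": \""
                + Base64.getEncoder().encodeToString("s3cr3t".getBytes(StandardCharsets.UTF_8))
                + "\"}}")));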
AWS Secrets Manager 
Things are better in case of AWS, where LocalStack is a tried-and-true test double with many features. Chances are that if you are using other AWS managed services in your application, you will be using LocalStack already, making this even more appealing. Let us make some changes to our demo application to demonstrate how simple it is to implement the AWS Secrets Manager integration as well as using LocalStack locally. 
Fetching the Secrets 
First, we need a class that will know the names of the secrets in the Secrets Manager:

Java
@Configuration
@ConfigurationProperties(prefix = "app.secrets.key.db")
public class SecretAccessProperties {
    private String url;
    private String username;
    private String password;
    private String driver;
    // ... getters and setters ...
}

This will read the configuration and let us conveniently access the names of each secret by a simple method call.
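A hedged example of the matching configuration, mapping each property to the secret names we will create in the initialization script later, might be:

Properties files

app.secrets.key.db.url=database-connection-url
app.secrets.key.db.username=database-username
app.secrets.key.db.password=database-password
app.secrets.key.db.driver=database-driver

Next, we need to implement a class that will handle communication with the Secrets Manager: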
Java
@Component("SecretPropertySource")
public class SecretPropertySource extends EnumerablePropertySource<Map<String, String>> {
    private final AWSSecretsManager client;
    private final Map<String, String> mapping;
    private final Map<String, String> cache;

    @Autowired
    public SecretPropertySource(SecretAccessProperties properties,
                                final AWSSecretsManager client,
                                final ConfigurableEnvironment environment) {
        super("aws-secrets");
        this.client = client;
        mapping = Map.of(
                "spring.datasource.driver-class-name", properties.getDriver(),
                "spring.datasource.url", properties.getUrl(),
                "spring.datasource.username", properties.getUsername(),
                "spring.datasource.password", properties.getPassword()
        );
        environment.getPropertySources().addFirst(this);
        cache = new ConcurrentHashMap<>();
    }

    @Override
    public String[] getPropertyNames() {
        return mapping.keySet()
                .toArray(new String[0]);
    }
    @Override
    public String getProperty(String property) {
        if (!Arrays.asList(getPropertyNames()).contains(property)) {
            return null;
        }

        final String key = mapping.get(property);
        //not using computeIfAbsent to avoid locking map while the value is resolved
        if (!cache.containsKey(key)) {
            cache.put(key, client
                      .getSecretValue(new GetSecretValueRequest().withSecretId(key))
                      .getSecretString());
        }
        return cache.get(key);
    }

}

This PropertySource implementation knows how each secret name translates to the Spring Boot configuration properties used for the DataSource configuration, registers itself as the first property source, and caches the result whenever a known property is fetched. We need to use the @DependsOn annotation, same as in the Vault example, to make sure the properties are fetched in time. 
As we need to use basic authentication with LocalStack, we need to implement one more class, which will only run in the “dev” profile:

Java
@Configuration
@Profile("dev")
public class DevClientConfig {
    @Value("${app.secrets.url}")
    private String managerUrl;
    @Value("${app.secrets.accessKey}")
    private String managerAccessKey;
    @Value("${app.secrets.secretKey}")
    private String managerSecretKey;

    @Bean
    public AWSSecretsManager secretClient() {
        final EndpointConfiguration endpointConfiguration =
                new EndpointConfiguration(managerUrl, Regions.DEFAULT_REGION.getName());
        final BasicAWSCredentials credentials =
                new BasicAWSCredentials(managerAccessKey, managerSecretKey);
        return AWSSecretsManagerClientBuilder.standard()
                .withEndpointConfiguration(endpointConfiguration)
                .withCredentials(new AWSStaticCredentialsProvider(credentials))
                .build();
    }

}

Our only goal with this class is to set up a suitable AWSSecretsManager bean just for local use. 
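For contrast, the non-dev counterpart is not shown in the article; a sketch assuming the default credentials provider chain picks up the instance role/environment credentials could be as simple as:

Java
@Configuration
@Profile("!dev")
public class LiveClientConfig {
    @Bean
    public AWSSecretsManager secretClient() {
        // Resolves credentials from the environment, instance profile, etc.
        return AWSSecretsManagerClientBuilder.defaultClient();
    }
}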
Setting Up the Test Double 
With the coding done, we need to make sure LocalStack will be started using Docker Compose whenever we start our Spring Boot app locally and stop it when we are done. 
Starting with the Docker Compose part, we need it to start LocalStack and use the built-in mechanism for running an initialization script when the container starts, following the approach shared here. To do so, we need a script that can add the secrets:

Shell
#!/bin/bash

echo "########### Creating profile ###########"
aws configure set aws_access_key_id default_access_key --profile=localstack
aws configure set aws_secret_access_key default_secret_key --profile=localstack
aws configure set region us-west-2 --profile=localstack

echo "########### Listing profile ###########"
aws configure list --profile=localstack

echo "########### Creating secrets ###########"
aws secretsmanager create-secret --endpoint-url=http://localhost:4566 --name database-connection-url --secret-string "jdbc:mysql://localhost:13306/" --profile=localstack || echo "ERROR"
aws secretsmanager create-secret --endpoint-url=http://localhost:4566 --name database-driver --secret-string "com.mysql.cj.jdbc.Driver" --profile=localstack || echo "ERROR"
aws secretsmanager create-secret --endpoint-url=http://localhost:4566 --name database-username --secret-string "root" --profile=localstack || echo "ERROR"
aws secretsmanager create-secret --endpoint-url=http://localhost:4566 --name database-password --secret-string "e8ce8764-dad6-41de-a2fc-ef905bda44fb" --profile=localstack || echo "ERROR"

echo "########### Secrets created ###########"

This will configure the bundled AWS CLI inside the container and perform the necessary HTTP calls to port 4566 where the container listens. To let LocalStack use our script, we will need to start our container with a volume attached. We can do so using the following Docker Compose configuration: 

YAML
version: "3"
services:
  localstack:
    container_name: aws-example-localstack
    image: localstack/localstack:latest
    ports:
      - "14566:4566"
    environment:
      LAMBDA_DOCKER_NETWORK: 'my-local-aws-network'
      LAMBDA_REMOTE_DOCKER: 0
      SERVICES: 'secretsmanager'
      DEFAULT_REGION: 'us-west-2'
    volumes:
      - secrets-import:/docker-entrypoint-initaws.d:ro
  # ... MySQL config ...
volumes:
  secrets-import:
    driver: local
    driver_opts:
      type: "none"
      o: "bind"
      device: "localstack"

This will set up the volume, start LocalStack with the “secretsmanager” feature active, and map port 4566 from the container to port 14566 on the host so that our AWSSecretsManager client can access it using the following configuration: 
Properties files

app.secrets.url=http://localhost:14566
app.secrets.accessKey=none
app.secrets.secretKey=none

If we run the project, we will see the expected output: 
• MySQL Community Server - GPL - 8.0.32
Well done, we have successfully configured our local environment. We can easily replicate these steps for the tests as well. We can even create multiple throw-away containers from our tests, for example, using Testcontainers. 
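A minimal sketch of that Testcontainers option, assuming the LocalStack module and the image tag below, could be:

Java
// Start a throw-away LocalStack container with only the Secrets Manager feature.
final LocalStackContainer localstack = new LocalStackContainer(
        DockerImageName.parse("localstack/localstack:latest"))
        .withServices(LocalStackContainer.Service.SECRETSMANAGER);
localstack.start();
// This endpoint is what we would pass as app.secrets.url.
final String endpoint = localstack
        .getEndpointOverride(LocalStackContainer.Service.SECRETSMANAGER)
        .toString();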
Azure Key Vault 
Implementing the Azure Key Vault solution will look like a cheap copy-paste job after the AWS Secrets Manager example we have just implemented above. 
Fetching the Secrets 
We have the same SecretAccessProperties class for the same reason. The only meaningful difference in SecretPropertySource is that we are using the Azure SDK. The changed method is this: 

Java
    @Override
    public String getProperty(String property) {
        if (!Arrays.asList(getPropertyNames()).contains(property)) {
            return null;
        }
        final String key = mapping.get(property);
        //not using computeIfAbsent to avoid locking map while the value is resolved
        if (!cache.containsKey(key)) {
            cache.put(key, client.getSecret(key).getValue());
        }
        return cache.get(key);
    }

The only missing piece is the “dev”-specific client configuration that creates a dummy token and an Azure Key Vault SecretClient for us:
Java

@Configuration
@Profile("dev")
public class DevClientConfig {
    @Value("${app.secrets.url}")
    private String vaultUrl;
    @Value("${app.secrets.user}")
    private String vaultUser;
    @Value("${app.secrets.pass}")
    private String vaultPass;

    @Bean
    public SecretClient secretClient() {
        return new SecretClientBuilder()
                .credential(new BasicAuthenticationCredential(vaultUser, vaultPass))
                .vaultUrl(vaultUrl)
                .disableChallengeResourceVerification()
                .buildClient();
    }
}

With this, the Java-side changes are complete; we can add the missing configuration, and the application is ready:
Properties files

app.secrets.url=https://localhost:10443
app.secrets.user=dummy
app.secrets.pass=dummy

The file contents are self-explanatory: we have some dummy credentials for the simulated authentication and a URL for accessing the vault. 
Setting Up the Test Double 
Although setting up the test double will be similar to the LocalStack solution we implemented above, it will not be the same. We will use Lowkey Vault, a fake that implements the API endpoints we need, and more. As Lowkey Vault provides a way to import the vault contents using an attached volume, we can start by creating an import file containing the properties we will need:
{
  "vaults": [
    {
      "attributes": {
        "baseUri": "https://{{host}}:{{port}}",
        "recoveryLevel": "Recoverable+Purgeable",
        "recoverableDays": 90,
        "created": {{now 0}},
        "deleted": null
      },
      "keys": {
      },
      "secrets": {
        "database-connection-url": {
          "versions": [
            {
              "vaultBaseUri": "https://{{host}}:{{port}}",
              "entityId": "database-connection-url",
              "entityVersion": "00000000000000000000000000000001",
              "attributes": {
                "enabled": true,
                "created": {{now 0}},
                "updated": {{now 0}},
                "recoveryLevel": "Recoverable+Purgeable",
                "recoverableDays": 90
              },
              "tags": {},
              "managed": false,
              "value": "jdbc:mysql://localhost:23306/",
              "contentType": "text/plain"
            }
          ]
        },
        "database-username": {
          "versions": [
            {
              "vaultBaseUri": "https://{{host}}:{{port}}",
              "entityId": "database-username",
              "entityVersion": "00000000000000000000000000000001",
              "attributes": {
                "enabled": true,
                "created": {{now 0}},
                "updated": {{now 0}},
                "recoveryLevel": "Recoverable+Purgeable",
                "recoverableDays": 90
              },
              "tags": {},
              "managed": false,
              "value": "root",
              "contentType": "text/plain"
            }
          ]
        },
        "database-password": {
          "versions": [
            {
              "vaultBaseUri": "https://{{host}}:{{port}}",
              "entityId": "database-password",
              "entityVersion": "00000000000000000000000000000001",
              "attributes": {
                "enabled": true,
                "created": {{now 0}},
                "updated": {{now 0}},
                "recoveryLevel": "Recoverable+Purgeable",
                "recoverableDays": 90
              },
              "tags": {},
              "managed": false,
              "value": "5b8538b6-2bf1-4d38-94f0-308d4fbb757b",
              "contentType": "text/plain"
            }
          ]
        },
        "database-driver": {
          "versions": [
            {
              "vaultBaseUri": "https://{{host}}:{{port}}",
              "entityId": "database-driver",
              "entityVersion": "00000000000000000000000000000001",
              "attributes": {
                "enabled": true,
                "created": {{now 0}},
                "updated": {{now 0}},
                "recoveryLevel": "Recoverable+Purgeable",
                "recoverableDays": 90
              },
              "tags": {},
              "managed": false,
              "value": "com.mysql.cj.jdbc.Driver",
              "contentType": "text/plain"
            }
          ]
        }
      }
    }
  ]
}

This is a Handlebars template that allows us to use placeholders for the host name, port, and the created/updated timestamp fields. We must use the {{port}} placeholder, as we want to make sure we can use any port when we start our container; the rest of the placeholders are optional, and we could have just written a literal there. See the quick start documentation for more information. 
Starting the container has similar complexity to the AWS example: 

YAML
version: "3"
services:
  lowkey-vault:
    container_name: akv-example-lowkey-vault
    image: nagyesta/lowkey-vault:1.18.0
    ports:
      - "10443:10443"
    volumes:
      - vault-import:/import/:ro
    environment:
      LOWKEY_ARGS: >
        --server.port=10443
        --LOWKEY_VAULT_NAMES=- 
        --LOWKEY_IMPORT_LOCATION=/import/keyvault.json.hbs
  # ... MySQL config ...
volumes:
  vault-import:
    driver: local
    driver_opts:
      type: "none"
      o: "bind"
      device: "lowkey-vault/import"

We need to notice almost the same things as before: the port number is set, and the Handlebars template will use the server.port parameter and localhost by default, so the import should work once we have attached the volume using the same approach as before. 
The only remaining step is configuring our application to trust the self-signed certificate of the test double, which is used to provide an HTTPS connection. This can be done by using the PKCS#12 store from the Lowkey Vault repository and telling Java that it should be trusted: 

Groovy
bootRun {
    systemProperty("javax.net.ssl.trustStore", file("${projectDir}/local/local-certs.p12"))
    systemProperty("javax.net.ssl.trustStorePassword", "changeit")
    systemProperty("spring.profiles.active", "dev")
    dependsOn tasks.composeUp
    finalizedBy tasks.composeDown
}

Running the project will log the expected string as before:
• MySQL Community Server - GPL - 8.0.32
Congratulations, we can run our app without the real Azure Key Vault. Same as before, we can use Testcontainers for our tests; but, in this case, the Lowkey Vault module is third-party, provided by the Lowkey Vault project, so it is not on the list maintained by the Testcontainers project. 
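If we prefer to avoid the third-party module, a plain GenericContainer sketch, reusing the LOWKEY_ARGS from our Docker Compose file, might be enough:

Java
// Start Lowkey Vault on the same port we used in Docker Compose.
final GenericContainer<?> lowkeyVault = new GenericContainer<>(
        DockerImageName.parse("nagyesta/lowkey-vault:1.18.0"))
        .withEnv("LOWKEY_ARGS", "--server.port=10443")
        .withExposedPorts(10443);
lowkeyVault.start();
// Build the vault URL from the mapped port for use as app.secrets.url.
final String vaultUrl = "https://" + lowkeyVault.getHost() + ":" + lowkeyVault.getMappedPort(10443);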
Summary 
We have established that keeping secrets in the repository defeats the purpose. Then, we have seen multiple options for solving the problem we identified in the beginning, so we can select the best secrets manager depending on our context. Also, we can tackle the local and CI use cases using the examples shown above.


The full example projects can be found on GitHub here. 
