Arguments and variables in Docker
Maybe you’re new to Docker and wondering “How do I get my variables into the build process?” or “How do I get my secrets to the running application in my container?”, or maybe you’re just genuinely curious about how everything fits together. If so, this article just might be what you’re looking for.
Build arguments vs Runtime arguments, which is right for you? Maybe both?
Build arguments exist to allow you to pass in arguments at build time that manifest as environment variables for use in your docker image build process:
$ docker build --build-arg HTTP_PROXY=http://10.20.30.2:1234 .
Some uses may be: to pass a value to one or more of your build steps, changing how things are run; another may be to bake a value into the image so that it’s accessible both to your build steps and when it comes time to run the container.
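As a sketch of that second use, a hypothetical dockerfile could consume a build argument in a RUN step and, optionally, bake it into the image as an environment variable ( the base image, argument names, and commands here are illustrative, not from any particular project ):

```dockerfile
FROM alpine:3

# Declare the build argument; it is available to RUN steps from this point on
ARG HTTP_PROXY

# Use it during a build step only — it is not persisted in the final image
RUN echo "building behind proxy: $HTTP_PROXY"

# Optionally bake a ( non-secret! ) value into the image for runtime use
ARG VERSION=dev
ENV APP_VERSION=$VERSION
```

Note that a plain ARG is only visible during the build, while the ENV line persists the value into the running container.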
Runtime arguments are passed in when you docker run or start your container:
$ docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG…]
They allow you to send variables to your application that will be running in your container as defined in your dockerfile by your CMD or ENTRYPOINT definitions.
Though you can use build arguments to bake in values that will be accessible at runtime, it’s best to keep them solely for build purposes, as baking in secret values can lead to a security breach if anyone ever gets ahold of your image. It also adds complexity to how you manage your runtime secrets and values, because you then have to rebuild your image(s) to update them.
Keeping that in mind, let’s dive into how we can use build arguments in a safe way.
Say we have an application, and we want to supply a build version each time we do a new image build, so that we can internally track which version the image corresponds to. To do this we could specify a build argument in the following way:
$ docker build --build-arg VER=0.0.1 .
$ docker build --build-arg VER=0.0.2 .
The VER value will then be accessible as an environment variable in the RUN commands defined in our dockerfile, so we can use that variable or embed it in our image in some form ( which is fine as long as it’s not a secret ).
For each build-arg that you pass, you will need to make sure you update your dockerfile to declare it, with an optional default value, like so:
ARG <name>[=<default value>]
In our case this could be:
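A minimal dockerfile fragment for the VER example ( the alpine base image and the RUN step are my own illustration ):

```dockerfile
FROM alpine:3

# Declare the build argument, with an optional default
ARG VER=0.0.1

# VER is now available to RUN steps as an environment variable
RUN echo "built version $VER" > /build-version.txt
```
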
Great! But what if we have a lot of values we want to supply, values that differ per build but are required to build our application:
$ docker build --build-arg VER=0.0.3 --build-arg LIBRARY_GIT_HASH=1bdb374cf477ecb8e7c6dc338a4e4ea3d4838fd7 --build-arg GCC_VERSION=7.2.1 --build-arg MULTI_THREADING=1 --build-arg EXPERIMENTAL_FEATURE=1 ADDITIONAL_ARGS… .
At some point this becomes a lot to manage, and if you are using the same variables across other builds it can be daunting to keep everything in sync. Luckily there are a few tools out there; if you can forgive my plug and entertain my suggestion, the Manifold CLI can help solve this problem, and I’ll show you how at the end of the article.
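Since docker build has no equivalent of an env file for build arguments, one common workaround ( a sketch; the function and file names are my own ) is a small shell helper that expands a file of KEY=VALUE lines into repeated --build-arg flags:

```shell
# expand_build_args: turn KEY=VALUE lines from a file into --build-arg flags
expand_build_args() {
  args=""
  while IFS= read -r line; do
    # skip blank lines and comments
    case "$line" in ''|'#'*) continue ;; esac
    args="$args --build-arg $line"
  done < "$1"
  printf '%s' "$args"
}

# Usage ( sketch ): docker build $(expand_build_args build.env) .
```

This keeps all your per-build values in one file that can be shared across builds.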
Runtime Arguments vs Environment Variables
So now that you understand the use cases for build arguments, and a bit about why baking them in can cause both management and security issues, let’s talk about why you might prefer to pass some of those arguments in at runtime.
There are two strong cases for using runtime arguments: Security and Management.
Maybe your arguments contain sensitive information that shouldn’t be baked into your image, maybe they relate to how or where an image is run, or maybe they don’t exist yet or simply can’t be passed in at build time.
Here’s an example of how they work:
$ docker run my-awesome-image $SECRET_KEY $BACKEND_URL
The only downside here is that your arguments are going to be passed directly to your ENTRYPOINT program as command line arguments, not environment variables like we saw with build args.
These command line arguments can be hard to sort through, and they are order dependent unless you are doing some advanced parsing; but it’s possible you are running someone else’s program that has a nice definition of command line arguments, and this will be sufficient. You can read more about defining the ENTRYPOINT here: https://docs.docker.com/engine/reference/builder/#entrypoint
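To illustrate how order-dependent this is, a hypothetical entrypoint script consuming the two positional arguments from the docker run example above might look like this ( the script and program names are my own ):

```shell
#!/bin/sh
# entrypoint.sh ( sketch ) — positional arguments arrive in the exact
# order they were given on the `docker run` command line, so $1 and $2
# map to $SECRET_KEY and $BACKEND_URL from the example above.
run_app() {
  secret_key="$1"
  backend_url="$2"
  echo "connecting to $backend_url"
  # exec my-real-program --key "$secret_key" --backend "$backend_url"
}

run_app "$@"
```

Swap the two arguments on the command line and the script silently uses the secret as a URL, which is exactly why flag-style parsing is nicer when you control the program.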
Please be aware, though, that CMD is not ENTRYPOINT. If you have a CMD definition and no ENTRYPOINT, and you assume my docker run example above will pass the arguments to the program defined in your CMD definition, you could be in for a surprise: the first argument will replace your CMD entirely, and docker will try to run a program matching its string. You can read more about defining CMD in the same Dockerfile reference linked above.
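To make the difference concrete, here is a sketch ( program names are illustrative ): with only CMD, arguments to docker run replace the whole CMD; with an ENTRYPOINT, they are appended to it instead.

```dockerfile
FROM alpine:3

# CMD only: `docker run img foo` REPLACES this entirely and tries to run `foo`
# CMD ["my-program", "--default-flag"]

# ENTRYPOINT + CMD: `docker run img foo` runs `my-program foo`,
# while `docker run img` with no arguments runs `my-program --default-flag`
ENTRYPOINT ["my-program"]
CMD ["--default-flag"]
```
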
Now maybe command line arguments just aren’t for you; maybe you want to pass things along to more than just your entrypoint through the lifetime of your container; maybe you need environment variables. Just as build arguments are used as environment variables at build time, you can also pass environment variables to containers at runtime:
$ docker run -e VAR_FROM_ENV -e "MYVAR=2" my-awesome-image
There are a few ways to do this, so I recommend checking out the documentation here: https://docs.docker.com/engine/reference/commandline/run/#set-environment-variables--e-env-env-file
But once you have a few set, these variables will be available to your program and its sub-processes ( when shared ), which is super handy. You can also use the --env-file argument to docker run, which allows you to specify all your environment variables from a file so you don’t end up with a big inline list, similar to the build args case I showed you in the previous section.
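For example ( the file name and variable values here are my own ), an env file and the corresponding run command might look like:

```shell
# Create an env file with one VAR=value pair per line
cat > app.env <<'EOF'
SECRET_KEY=s3cr3t
BACKEND_URL=https://api.example.com
EOF

# Then pass the whole file at once ( note: options go before the image name )
# docker run --env-file app.env my-awesome-image
```
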
Eventually you may want to share and manage these variables and secrets outside the application code, so your ops team can rotate keys more easily, or so you can use the same application code for different deployments. If so, then the Manifold CLI might be the tool for you.
Using the Manifold CLI
I hope you now understand a few different approaches for getting your variables into your docker images.
If you want to stick around a bit longer and get started with the Manifold CLI you can start by downloading it here:
After you have it installed, check out our quick-start guide to get a little more familiar with the tool: https://docs.manifold.co/docs/cli-quickstart-6JMEw1CD6wguwIYymUuAQ6
I recommend you at least run manifold signup; after your account has been verified you can start creating resources.
Resources and variables
Let’s first get started with a custom resource to hold your variables and secrets:
$ manifold create -c
After running the above you’ll be prompted to select a project ( as long as one isn’t set in your current context ). For now let’s select No Project, to create the resource at your account level. You’ll then be prompted to enter a resource name; resource names should be all lowercase with hyphens to split words, so my-first-resource, for example, is a valid name.
Once your custom resource is created we can then add some data to it:
$ manifold config set -r my-first-resource VER=0.0.1
This will add a key called VER to the custom resource with a value of 0.0.1. You can then run:
$ manifold export
Which will show you all your user level resources with all their keys and values. Play around with this for a bit and set some variables that might be more meaningful to you. You can use manifold config unset to clear any variables out of your config.
Now let’s talk about projects and teams. Running either:
$ manifold export --help
$ manifold run --help
Will show you that each takes optional team and project arguments. Both teams and projects function as containers for resources in Manifold, though teams differ from projects in that they allow for access control. Teams have members, and each member has a role which defines what they’re allowed to do. Teams, like users, can also have projects, allowing for further organization of resources.
Teams and Tokens
Each access token ( MANIFOLD_API_TOKEN ) can be created at the user or team level, giving it access to resources from the perspective of that user or team.
For now, let’s get started by creating a team, creating a new custom resource as that team, and then creating an access token to read the resource values.
$ manifold teams create
This will prompt you to enter a team name; as with resource names, use lowercase letters with hyphens to separate words ( my-first-team is a valid name ).
You can now run the following to create a custom resource in the team:
$ manifold create -c -t my-first-team
Then like before enter a name; I’ll call mine my-team-resource. Then let’s store a secret value on the resource:
$ manifold config set -t my-first-team -r my-team-resource SECRET=42
$ manifold export -t my-first-team
Will show only the SECRET specified in the team resource.
You’re now ready to create an access token:
$ manifold tokens create -t my-first-team
Just enter a description, then select a role. For our case of consuming this key in docker or elsewhere, let’s use read-credentials; this role is required ( rather than read ) for our use case, otherwise you may see errors when trying to read your credentials.
The token will then be printed to your terminal; you can copy and paste it wherever you want to save it. The description is how you can remember which key is used where:
$ manifold tokens list -t my-first-team
Will show you all the tokens in your team; you can also remove them with manifold tokens delete.
Now you can set the MANIFOLD_API_TOKEN environment variable to your newly created token. Then you will be able to run:
**manifold export -t my-first-team** and **manifold run -t my-first-team**
without having to manifold login.
You can use manifold logout or a new user context on your machine to test it out!
For more information please refer to the documentation on our website: https://docs.manifold.co/docs/cli-quickstart-6JMEw1CD6wguwIYymUuAQ6 or the CLI tool’s built-in help:
**manifold --help**, or **manifold <subcommand> --help** for additional info.
The Manifold CLI with Docker
Now that you know the basics of using the Manifold CLI to set and use your variables and secrets, let’s dive into a couple of suggestions for how you could use it with docker.
$ docker build --build-arg "MANIFOLD_API_TOKEN=abc123def456ghi" .
The above command passes in a token, generated with the Manifold CLI, that grants access to a specific set of variables and secrets you’ve defined through Manifold. This way you can manage all your keys and variables in one central place, even when they’re replicated in multiple build processes. You just need to supply the MANIFOLD_API_TOKEN as a build argument and wrap each command you wish to have access to your variables with manifold run in your dockerfile:
RUN manifold run mycommand
This also allows you to keep any secrets away from other build commands that could have undesired side-effects, which should make it a little more comfortable to use your secrets when needed in your build process.
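Putting that together, a dockerfile for this pattern might look roughly like the following sketch ( the base image is my own choice, the install step is a placeholder — check the Manifold docs for the current install method — and mycommand stands in for your real build step ):

```dockerfile
FROM alpine:3

# Install the Manifold CLI here ( see the Manifold docs for install steps )

# Receive the token as a build argument; it is then available as an
# environment variable to the RUN steps below
ARG MANIFOLD_API_TOKEN

# Only this wrapped command sees the values Manifold injects
RUN manifold run mycommand
```
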
However, be wary that anything passed in through the build process runs the risk of being cached in the image or its layers; even if the variables come from the Manifold CLI, the program they’re being passed to could cache them in some way.
As described in the Using the Manifold CLI section above, manifold can also be used to group different sets of credentials in custom resources, projects, and teams, which is useful for separating values out to different processes in the same build or to different images.
To run the Manifold CLI when the image starts, you just need to add it to your ENTRYPOINT, wrapping the execution of your program. For example, in your dockerfile you can do the following:
ENTRYPOINT manifold run amazing-program
Then to run the image, just make sure you pass in your MANIFOLD_API_TOKEN:
$ docker run -e "MANIFOLD_API_TOKEN=abc123def456…" my-awesome-image
And your amazing-program will now have all the environment variables you configured through Manifold.
I hope you now see how this all fits together, and have a better understanding of managing your variables and secrets, getting them into your docker builds, and having them available at runtime when needed. More importantly, I hope you have also gained an understanding of when your secrets may be at risk along the way, and have started thinking about how you could better manage them.
Happy credential sharing!