Rationale
You are using the Dev Containers extension to develop your application within a Docker image, so that you don’t have surprises once you reach the deployment stage (this dependency isn’t available on Linux, surprise!). However, you need a persistent database server to test user interactions, perform load testing, etc.
You don’t have access to any cloud storage solution.
Generating another container within your current container is cumbersome.
Besides, what if you have separate repositories for different parts of the project? Say, a data ingestion pipeline that writes to the database, whose contents are then retrieved by your application and displayed to users. In that case, you can't keep a db container inside your app repo; you need a setup that both can share.
In this blog post, I'll explain how to give a Dev Containers setup access to external containers, an approach that transfers well to other develop-inside-a-container strategies.
Dev Containers
Dev Containers is a VSCode extension that allows you to use a container as a development environment. You specify the configuration in a devcontainer.json file, which gives you the flexibility to build from a docker compose file, from a Dockerfile, or even natively from the instructions given in devcontainer.json itself. Once the container is built, your project folder is mounted into it, so any files you create inside the container while developing will also appear in the same folder on the host machine, and vice versa.
It is a huge win for any developer who wants to avoid the embarrassing “It works on my machine” moment (though be aware that the phrase will still come up).
However, this article is pretty specific to setting up access to external containers (i.e., ones that have not been defined in your folder's Dev Container setup), so I will assume some base knowledge of how dev containers work.
How to set up a dev container for external access
I have usually set up dev containers using just devcontainer.json and a Dockerfile. However, if you are stuck trying to set up a database for a project, you will need a Docker network to connect your app container to your external (not repo-specific) database. This is where a docker-compose.yml file comes in handy.
First, let’s create the external database container. In my case, I’ll be using PostgreSQL with the pgvector extension installed since our app is for a GenAI use case.
Creating a database container with external access
Before creating the container, we need to create a Docker network, which will make the db container reachable by any dev container that requires it.
We can do this by typing:
docker network create shared_network_db
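If you want to confirm the network was created (and, later on, see which containers are attached to it), you can inspect it:
docker network inspect shared_network_db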
Once the network's been created, we run our db container. This example uses postgres with pgvector, but you could use MySQL, MariaDB or any other database server of your choice:
docker run -d \
--name postgres_db_project \
--network shared_network_db \
-e POSTGRES_USER=postgres \
-e POSTGRES_PASSWORD=postgres \
-e POSTGRES_DB=test_database \
-p 5433:5432 \
-v shared_pgdata:/var/lib/postgresql/data \
pgvector/pgvector:pg16
With the --network flag, we've attached this container to the shared_network_db network.
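As a side note, if you already had a database container running without this flag, you don't need to recreate it: Docker lets you attach a running container to a network after the fact (here assuming the same container name as above):
docker network connect shared_network_db postgres_db_project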
The port mapping is optional, but you can use it if you want to access the database from your host machine.
The volume flag -v is necessary if you want the database data to persist, which is the case for our project.
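Docker creates the named volume shared_pgdata automatically the first time it's referenced; if you want to check that it exists (and where it lives on disk), you can inspect it:
docker volume inspect shared_pgdata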
Remember that, after creating the db with docker run, you only need docker stop and docker start to stop or start it; there's no need to run docker run again.
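For example, with the container we just created:
docker stop postgres_db_project
docker start postgres_db_project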
Great! Now that you've got a database container attached to a network, let's set up the app's dev container.
Creating a dev container
Imagine we have a Python backend app with the following Dockerfile:
FROM python:3.12-slim-bookworm
# Set working directory in the container
WORKDIR /backend_app
# Upgrade pip to the latest version
RUN pip install --no-cache-dir --upgrade pip setuptools wheel
# Install Python dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Add application code
COPY app.py .
COPY src ./src
COPY .env ./.env
CMD ["python", "app.py"]
This is relevant for the CI/CD process, which uses the Dockerfile. Locally, however, Dev Containers can override the CMD instruction so that the container stays up and you can work in it for as long as you like. Hence, for a long time I used the following configuration in devcontainer.json:
{
"name": "Python 3.12 Development",
"build": {
"dockerfile": "../Dockerfile",
"context": ".."
},
"customizations": {
"vscode": {
"extensions": [
"ms-python.python",
"ms-python.vscode-pylance"
],
"settings": {
"python.defaultInterpreterPath": "/usr/local/bin/python"
}
}
},
"forwardPorts": []
}
Here the devcontainer.json file lives in the .devcontainer folder, whereas the Dockerfile stays at the root folder level. This is the method I used to build the container. However, this setup doesn't have access to the external network we created before. To fix that, we'll create a docker-compose.yml file.
Creating a dev container with access to an external container
Here’s our docker compose file:
services:
  backend:
    build:
      context: ..
      dockerfile: Dockerfile
      # note that here we don't need the parent folder reference ../Dockerfile,
      # because compose resolves the dockerfile path relative to the build
      # context, whereas devcontainer.json resolves it relative to its own folder
    command: sleep infinity
    volumes:
      - ..:/backend_app:cached
    ports:
      - "8080:8080"
    networks:
      - shared_network_db

networks:
  shared_network_db:
    name: shared_network_db
    external: true
Here we define the network and attach it to the backend service. We declare shared_network_db as external to indicate that it has already been created outside this compose file. We also map ports for the backend application, mount the repo as a volume (so all local files are available in the container), and set a sleep infinity command so that the CMD we defined in the Dockerfile doesn't take effect.
But there’s something else missing, we need to modify our devcontainer.json
or these changes won’t take effect:
{
"name": "Python 3.12 Development",
"dockerComposeFile": "docker-compose.yml",
"service": "backend",
"workspaceFolder": "/backend_app", // this must be the same as the Dockerfile WORKDIR
"customizations": {
"vscode": {
"extensions": [
"ms-python.python",
"ms-python.vscode-pylance",
],
"settings": {
"python.defaultInterpreterPath": "/usr/local/bin/python"
}
}
},
"forwardPorts": ["8080:8080"]
}
After rebuilding and reopening the IDE inside the container (e.g., with the Dev Containers: Rebuild and Reopen in Container command), your app should now be able to access the database container.
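If you want to double-check from the host that both containers really ended up on the same network, you can inspect it again; something like the following should print both the database container and the Compose-generated name of your dev container:
docker network inspect shared_network_db --format '{{range .Containers}}{{.Name}} {{end}}'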
Try the following to test DB access from your dev container in Python:
import psycopg

# The host is the service/container name when connecting from other containers
# in the network. If you're connecting from your host machine, it's just localhost.
DB_HOST = "postgres_db_project"
DB_NAME = "test_database"
# Note that, for access from the host machine, you'd need 5433 in this example,
# because of the port mapping -p 5433:5432.
DB_PORT = "5432"
DB_USER = "postgres"
DB_PASSWORD = "postgres"

psycopg_connection_string = (
    f"postgresql://{DB_USER}:{DB_PASSWORD}@{DB_HOST}:{DB_PORT}/{DB_NAME}"
)

try:
    with psycopg.connect(psycopg_connection_string) as conn:
        with conn.cursor() as cur:
            cur.execute("SELECT 1;")
            print("✅ Database reachable:", cur.fetchone()[0])
except Exception as e:
    print("❌ Could not connect to the database:", e)
Here’s the result of the current execution:
python test_db_access.py
✅ Database reachable: 1
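Since we picked the pgvector image for a GenAI use case, here's one optional extra check. The extension ships with the image, but it still has to be enabled once per database, which you can do from the host (assuming the container and database names used throughout this post):
docker exec postgres_db_project psql -U postgres -d test_database -c "CREATE EXTENSION IF NOT EXISTS vector;"
docker exec postgres_db_project psql -U postgres -d test_database -c "SELECT '[1,2,3]'::vector;"
If both commands succeed, the database is ready to store embeddings.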
Conclusion
In this tutorial, we've seen how to create a database container and a Docker network, how to create a (very minimal) dev container, and how to modify it so that it can reach the external database container.
In future blog posts I’m planning on showing alternatives to this setup. Thanks for reading!