Controlling Docker from Python

· 5 min read · Updated March 7, 2026 · intermediate
docker devops containers docker-sdk

The Docker SDK for Python gives you programmatic control over Docker containers. Instead of running docker commands from the shell, you use Python code to manage images, containers, volumes, and networks. This is essential for building CI/CD pipelines, deployment scripts, and development automation.

Installing the Docker SDK

Install the Docker SDK with pip:

pip install docker

The SDK communicates with the Docker daemon through its REST API. Your user needs permission to access the Docker socket, typically by being in the docker group.

Connecting to Docker

Create a client connection to your local Docker daemon:

import docker

# Connect to the local Docker daemon
client = docker.from_env()

# Verify the connection
print(client.version())

The from_env() method reads your Docker host configuration from environment variables such as DOCKER_HOST. For remote hosts, pass the base URL explicitly (note that plain TCP on port 2375 is unencrypted; use TLS for anything beyond a trusted network):

client = docker.DockerClient(base_url='tcp://remote-host:2375')

Listing Containers

List all containers, including stopped ones:

# List all containers (running and stopped)
containers = client.containers.list(all=True)

for container in containers:
    print(f"{container.short_id}: {container.status} - {container.name}")

Filter containers by status:

# Only running containers
running = client.containers.list()

# Only stopped (exited) containers, filtered server-side
stopped = client.containers.list(all=True, filters={'status': 'exited'})

Get detailed container information:

container = client.containers.get('my-container')
print(container.attrs['Config']['Image'])
print(container.attrs['NetworkSettings']['IPAddress'])
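The attrs dictionary mirrors the output of docker inspect. As a sketch of navigating that structure, here is a small helper that finds the host port a container port is published on; the sample dict below is abridged from real inspect output:

```python
def host_port(attrs, container_port='80/tcp'):
    """Return the first host port bound to container_port, or None."""
    bindings = attrs.get('NetworkSettings', {}).get('Ports', {}).get(container_port)
    return bindings[0]['HostPort'] if bindings else None

# Abridged sample of the structure Docker returns:
sample = {'NetworkSettings': {'Ports': {'80/tcp': [{'HostIp': '0.0.0.0', 'HostPort': '8080'}]}}}
print(host_port(sample))  # 8080
```

With a live client you would pass container.attrs instead of the sample dict.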

Running Containers

Start a new container from an image:

# Run a simple container
nginx = client.containers.run(
    'nginx:latest',
    name='my-nginx',
    ports={'80/tcp': 8080},
    detach=True
)
print(f"Started {nginx.short_id}")

Run with environment variables and volume mounts:

container = client.containers.run(
    'postgres:15',
    name='my-db',
    environment={
        'POSTGRES_USER': 'appuser',
        'POSTGRES_PASSWORD': 'secret',
        'POSTGRES_DB': 'appdb'
    },
    volumes={'/host/data': {'bind': '/var/lib/postgresql/data', 'mode': 'rw'}},
    detach=True
)

Run and wait for completion (for batch jobs):

result = client.containers.run(
    'python:3.11',
    'python -c "print(2 + 2)"',
    remove=True
)
# Without detach=True, run() blocks until the container exits
# and returns its output as bytes
print(result.decode())

Managing Containers

Start, stop, restart, and remove containers:

container = client.containers.get('my-container')

# Start a stopped container
container.start()

# Stop a running container
container.stop(timeout=10)

# Restart a container
container.restart()

# Remove a container
container.remove(force=True)  # force=True stops if running

Execute commands inside a running container:

# Run a command and get output
result = container.exec_run('ls -la /app')
print(result.output.decode())

# Stream output as it is produced (with stream=True, output is a generator)
_, stream = container.exec_run('tail -f /var/log/nginx/access.log', stream=True)
for chunk in stream:
    print(chunk.decode(), end='')

Copy files to and from containers. Both methods work with tar archives rather than raw files:

# Copy from container to host: get_archive returns a tar stream and file stats
bits, stat = container.get_archive('/path/in/container')
with open('archive.tar', 'wb') as f:
    for chunk in bits:
        f.write(chunk)

# Copy from host to container: put_archive expects the bytes of a tar archive
with open('archive.tar', 'rb') as f:
    container.put_archive('/path/in/container', f.read())
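Since put_archive expects a tar archive rather than a raw file, it is often convenient to build one in memory with the standard library. A minimal sketch (the file name and contents here are made up):

```python
import io
import tarfile

def make_tar(name, data):
    """Pack a single file into an in-memory tar archive and return its bytes."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode='w') as tar:
        info = tarfile.TarInfo(name=name)
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))
    return buf.getvalue()

archive = make_tar('config.txt', b'debug=true\n')
# container.put_archive('/app', archive)  # requires a running container
```

This avoids writing a temporary .tar file to disk just to copy one file in.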

Working with Images

List and pull images:

# List local images
images = client.images.list()
for img in images:
    print(f"{img.short_id}: {img.tags}")

# Pull the latest image
client.images.pull('nginx')

# Pull a specific tag
client.images.pull('python', tag='3.11-slim')

Build images from a Dockerfile:

# Build an image
image, logs = client.images.build(
    path='.',
    dockerfile='Dockerfile',
    tag='my-app:latest',
    rm=True
)

# Stream build logs
for log in logs:
    if 'stream' in log:
        print(log['stream'], end='')
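Each entry in the build log is a dict: output text lives under the 'stream' key and failures under 'error'. (build() itself raises BuildError on failure, but the same dict shape appears if you stream builds through the low-level API.) A small helper to flatten those entries, shown here against sample dicts rather than a live build:

```python
def render_build_logs(entries):
    """Join 'stream' text from build log dicts; raise if an 'error' entry appears."""
    parts = []
    for entry in entries:
        if 'error' in entry:
            raise RuntimeError(entry['error'])
        parts.append(entry.get('stream', ''))
    return ''.join(parts)

# Sample entries in the shape the SDK yields:
sample_logs = [{'stream': 'Step 1/2 : FROM python:3.11\n'},
               {'stream': ' ---> 0123abcd\n'}]
print(render_build_logs(sample_logs), end='')
```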

Tag and push images to a registry:

# Tag an image for your registry (tagging happens on the Image object)
image = client.images.get('my-app:latest')
image.tag('registry.io/my-app', tag='latest')

# Push to the registry; decode=True yields progress dicts instead of raw JSON lines
for line in client.images.push('registry.io/my-app:latest', stream=True, decode=True):
    print(line)
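Pushing to a private registry usually requires authenticating first with client.login(). A sketch that reads credentials from environment variables rather than hard-coding them; the variable names are made up, and the docker-py calls are commented out since they need a reachable registry:

```python
import os

# Hypothetical environment variable names for registry credentials
registry = os.environ.get('REGISTRY_URL', 'registry.io')
username = os.environ.get('REGISTRY_USER', '')
password = os.environ.get('REGISTRY_PASSWORD', '')

# client.login(username=username, password=password, registry=registry)
# client.images.push(f'{registry}/my-app:latest')
```

Keeping credentials out of source code also keeps them out of version control.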

Practical Automation Examples

Clean up old containers

from datetime import datetime, timedelta, timezone

def cleanup_stopped_containers(older_than_hours=24):
    """Remove all stopped containers older than the specified age."""
    # Docker's Created timestamps are UTC, so compare against an aware datetime
    cutoff = datetime.now(timezone.utc) - timedelta(hours=older_than_hours)

    for container in client.containers.list(all=True):
        if container.status != 'running':
            created = container.attrs['Created'].replace('Z', '+00:00')
            if datetime.fromisoformat(created) < cutoff:
                print(f"Removing {container.name}")
                container.remove()

cleanup_stopped_containers(older_than_hours=24)
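One caveat: Docker reports Created with nanosecond precision (for example 2024-05-01T12:00:00.123456789Z), which datetime.fromisoformat rejects on Python versions before 3.11. A defensive parser that truncates the fraction to microseconds first:

```python
import re
from datetime import datetime, timezone

def parse_docker_timestamp(ts):
    """Parse Docker's RFC 3339 timestamps into timezone-aware datetimes."""
    ts = ts.replace('Z', '+00:00')
    ts = re.sub(r'(\.\d{1,6})\d*', r'\1', ts)  # trim fraction to at most 6 digits
    return datetime.fromisoformat(ts)

print(parse_docker_timestamp('2024-05-01T12:00:00.123456789Z'))
```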

Restart unhealthy containers

def restart_unhealthy():
    """Restart containers that are in an unhealthy state."""
    for container in client.containers.list():
        health = container.attrs['State'].get('Health', {})
        if health.get('Status') == 'unhealthy':
            print(f"Restarting {container.name}")
            container.restart()

restart_unhealthy()

Batch update images

def update_images(image_names):
    """Pull latest versions of specified images."""
    for name in image_names:
        print(f"Updating {name}")
        client.images.pull(name)
        print(f"Updated {name}")

update_images(['nginx', 'postgres', 'redis'])

When to Use the Docker SDK

Use the Docker SDK when you need:

  • Programmatic control over containers in scripts or applications
  • CI/CD pipeline integration (Jenkins, GitHub Actions, GitLab CI)
  • Automated testing environments that need container orchestration
  • Deployment and infrastructure automation
  • Monitoring and health check scripts

Use the Docker CLI when:

  • You need quick, one-off commands in the terminal
  • The operation is simpler to express in shell syntax
  • You’re not integrating with other code

The SDK lets you treat Docker as a library rather than an external process, which means better error handling, tighter integration with your Python code, and the ability to build complex automation workflows.

Working with Volumes

Volumes persist data beyond the lifetime of a container. The SDK lets you create, list, and manage them:

# Create a volume
volume = client.volumes.create(name='my-data', driver='local')

# List all volumes
volumes = client.volumes.list()

# Inspect a volume
volume = client.volumes.get('my-data')
print(volume.attrs)

# Remove a volume
volume.remove()

Working with Networks

Create isolated networks for multi-container applications:

# Create a network
network = client.networks.create(name='my-network', driver='bridge')

# Connect a container to a network
container = client.containers.get('my-container')
network.connect(container)

# Disconnect a container
network.disconnect(container)

# Remove a network
network.remove()

Handling Errors

The SDK raises exceptions when operations fail. Handle them gracefully:

import docker
from docker.errors import NotFound, APIError

try:
    container = client.containers.get('nonexistent')
except NotFound:
    print("Container not found")
except APIError as e:
    print(f"Docker API error: {e}")

Logging and Debugging

Enable debug logging to see API calls:

import docker
import logging

logging.basicConfig()
logging.getLogger('docker').setLevel(logging.DEBUG)
logging.getLogger('urllib3').setLevel(logging.DEBUG)  # the SDK's HTTP layer

client = docker.from_env()

This outputs HTTP requests and responses, useful for debugging authentication issues or understanding API behavior.

Conclusion

The Docker SDK transforms Docker from an external tool into an integrated part of your Python applications. Whether you’re building deployment scripts, CI/CD pipelines, or development environments, the SDK provides the control and flexibility you need. Start with the basics (listing and running containers) and gradually add more complex operations as your automation needs grow.