AWS Automation with Boto3
Boto3 is the official AWS SDK for Python. It provides Pythonic interfaces for AWS services, making it easy to automate cloud infrastructure, manage resources, and build cloud-native applications.
Installing Boto3
Install Boto3 with pip:
pip install boto3
That is it. pip pulls in Boto3's own dependencies (such as botocore) automatically, so nothing else is required.
Configuring AWS Credentials
Before using Boto3, you need to configure your AWS credentials. There are several ways to do this:
Using the AWS CLI
The recommended way is to use the AWS CLI:
aws configure
This creates two configuration files: ~/.aws/credentials (your keys) and ~/.aws/config (region and output settings).
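Both files use a simple INI layout. A sketch of what aws configure writes (the values here are placeholders):

```ini
# ~/.aws/credentials (keys, per profile)
[default]
aws_access_key_id = YOUR_ACCESS_KEY
aws_secret_access_key = YOUR_SECRET_KEY

# ~/.aws/config (region and output settings, per profile)
[default]
region = us-east-1
output = json
```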
Environment Variables
You can also set credentials via environment variables:
import os
import boto3
os.environ["AWS_ACCESS_KEY_ID"] = "your-access-key"
os.environ["AWS_SECRET_ACCESS_KEY"] = "your-secret-key"
os.environ["AWS_DEFAULT_REGION"] = "us-east-1"
s3 = boto3.client("s3")
Using a Specific Profile
If you have multiple AWS accounts, use named profiles:
import boto3
# Use a specific profile
session = boto3.Session(profile_name="production")
s3 = session.client("s3")
Working with S3
S3 is one of the most commonly used AWS services. Boto3 makes it easy to manage buckets and objects.
Listing Buckets
import boto3
s3 = boto3.client("s3")
response = s3.list_buckets()
for bucket in response["Buckets"]:
    print(f"Bucket: {bucket['Name']} — Created: {bucket['CreationDate']}")
Uploading a File
import boto3
s3 = boto3.client("s3")
# Upload a file
s3.upload_file(
    Filename="local_file.txt",
    Bucket="my-bucket",
    Key="remote/path/file.txt",
)
# Upload with custom settings
s3.upload_file(
    Filename="image.png",
    Bucket="my-bucket",
    Key="images/photo.png",
    ExtraArgs={"ContentType": "image/png", "ACL": "public-read"},
)
The upload_file method automatically handles multipart uploads for large files. Note that the public-read ACL only takes effect on buckets with ACLs enabled; newly created buckets block ACLs by default.
Downloading a File
import boto3
s3 = boto3.client("s3")
# Download a file
s3.download_file(
    Bucket="my-bucket",
    Key="remote/path/file.txt",
    Filename="local_file.txt",
)
Listing Objects in a Bucket
import boto3
s3 = boto3.client("s3")
# List all objects
response = s3.list_objects_v2(Bucket="my-bucket")
if "Contents" in response:
    for obj in response["Contents"]:
        print(f"Key: {obj['Key']} — Size: {obj['Size']} bytes")
Using Paginators for Large Listings
For buckets with many objects, use paginators:
import boto3
s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="my-bucket"):
    if "Contents" in page:
        for obj in page["Contents"]:
            print(obj["Key"])
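Because paginator pages are plain dictionaries, per-page logic is ordinary Python and can be unit-tested with stub data instead of a live bucket. A sketch with a hypothetical keys_with_suffix helper:

```python
def keys_with_suffix(pages, suffix):
    """Collect object keys ending with `suffix` from list_objects_v2 pages."""
    keys = []
    for page in pages:
        # Pages for an empty prefix have no "Contents" key at all
        for obj in page.get("Contents", []):
            if obj["Key"].endswith(suffix):
                keys.append(obj["Key"])
    return keys

# Stub pages shaped like real list_objects_v2 responses:
pages = [
    {"Contents": [{"Key": "a.csv", "Size": 10}, {"Key": "b.txt", "Size": 5}]},
    {"Contents": [{"Key": "c.csv", "Size": 7}]},
    {},  # an empty page
]
print(keys_with_suffix(pages, ".csv"))  # ['a.csv', 'c.csv']
```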
Deleting Objects
import boto3
s3 = boto3.client("s3")
# Delete a single object
s3.delete_object(Bucket="my-bucket", Key="old/file.txt")
# Delete multiple objects
objects_to_delete = [
    {"Key": "file1.txt"},
    {"Key": "file2.txt"},
    {"Key": "file3.txt"},
]
s3.delete_objects(
    Bucket="my-bucket",
    Delete={"Objects": objects_to_delete},
)
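One caveat worth knowing: delete_objects accepts at most 1,000 keys per request, so larger deletions must be batched. A sketch of a batching helper (chunk_keys is a hypothetical name):

```python
def chunk_keys(keys, batch_size=1000):
    """Split keys into delete_objects-sized batches (the API caps each request at 1000 keys)."""
    for i in range(0, len(keys), batch_size):
        yield [{"Key": k} for k in keys[i:i + batch_size]]

# Hypothetical usage against a real client:
# for batch in chunk_keys(all_keys):
#     s3.delete_objects(Bucket="my-bucket", Delete={"Objects": batch})

print(list(chunk_keys(["a", "b", "c"], batch_size=2)))
# [[{'Key': 'a'}, {'Key': 'b'}], [{'Key': 'c'}]]
```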
Working with EC2
Boto3 can manage EC2 instances, security groups, and other compute resources.
Listing Instances
import boto3
ec2 = boto3.client("ec2")
response = ec2.describe_instances()
for reservation in response["Reservations"]:
    for instance in reservation["Instances"]:
        print(f"Instance ID: {instance['InstanceId']}")
        print(f"  Type: {instance['InstanceType']}")
        print(f"  State: {instance['State']['Name']}")
        print(f"  Public IP: {instance.get('PublicIpAddress', 'N/A')}")
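The nested Reservations/Instances layout is easy to flatten with a small helper, which can then be tested against a stub response shaped like the real API output (summarize_instances is a hypothetical name):

```python
def summarize_instances(response):
    """Flatten a describe_instances response into (id, type, state) tuples."""
    rows = []
    for reservation in response.get("Reservations", []):
        for instance in reservation.get("Instances", []):
            rows.append((
                instance["InstanceId"],
                instance["InstanceType"],
                instance["State"]["Name"],
            ))
    return rows

# Stub response mirroring the describe_instances structure:
stub = {"Reservations": [{"Instances": [
    {"InstanceId": "i-1", "InstanceType": "t2.micro", "State": {"Name": "running"}},
]}]}
print(summarize_instances(stub))  # [('i-1', 't2.micro', 'running')]
```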
Starting and Stopping Instances
import boto3
ec2 = boto3.client("ec2")
# Stop an instance
ec2.stop_instances(InstanceIds=["i-1234567890abcdef0"])
# Wait for it to stop
ec2.get_waiter("instance_stopped").wait(InstanceIds=["i-1234567890abcdef0"])
# Start an instance
ec2.start_instances(InstanceIds=["i-1234567890abcdef0"])
# Wait for it to run
ec2.get_waiter("instance_running").wait(InstanceIds=["i-1234567890abcdef0"])
Waiters automatically poll AWS until the desired state is reached.
Creating an Instance
import boto3
ec2 = boto3.client("ec2")
response = ec2.run_instances(
    ImageId="ami-0c55b159cbfafe1f0",
    InstanceType="t2.micro",
    KeyName="my-key-pair",
    MaxCount=1,
    MinCount=1,
    SecurityGroupIds=["sg-0123456789abcdef0"],
    SubnetId="subnet-0123456789abcdef0",
)
instance = response["Instances"][0]
print(f"Created instance: {instance['InstanceId']}")
Managing Security Groups
import boto3
ec2 = boto3.client("ec2")
# Create a security group
response = ec2.create_security_group(
    GroupName="my-web-sg",
    Description="Security group for web servers",
    VpcId="vpc-0123456789abcdef0",
)
sg_id = response["GroupId"]
# Add inbound rules for HTTP and HTTPS
ec2.authorize_security_group_ingress(
    GroupId=sg_id,
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 80,
            "ToPort": 80,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
        },
        {
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
        },
    ],
)
Using Resource Objects
Boto3 provides two levels of API: low-level clients, which map one-to-one onto AWS service operations, and resources, which offer a higher-level, object-oriented interface. (AWS has placed the resource interface in maintenance mode, but it remains available and widely used.)
S3 Resource Example
import boto3
s3 = boto3.resource("s3")
# Get a bucket
bucket = s3.Bucket("my-bucket")
# Upload a file
bucket.upload_file("local.txt", "remote.txt")
# Iterate over objects
for obj in bucket.objects.all():
    print(obj.key, obj.size)
# Delete an object
obj = bucket.Object("old.txt")
obj.delete()
EC2 Resource Example
import boto3
ec2 = boto3.resource("ec2")
# Get an instance by ID
instance = ec2.Instance("i-1234567890abcdef0")
# Start it
instance.start()
# Wait for running state
instance.wait_until_running()
# Get instance details
print(f"Public IP: {instance.public_ip_address}")
Handling Errors
Always handle AWS errors gracefully:
import boto3
from botocore.exceptions import ClientError
s3 = boto3.client("s3")
def delete_bucket(bucket_name):
    try:
        # First, delete all objects (a bucket must be empty before it can be deleted)
        response = s3.list_objects_v2(Bucket=bucket_name)
        if "Contents" in response:
            objects = [{"Key": obj["Key"]} for obj in response["Contents"]]
            s3.delete_objects(Bucket=bucket_name, Delete={"Objects": objects})
        # Then delete the bucket itself
        s3.delete_bucket(Bucket=bucket_name)
        print(f"Deleted bucket: {bucket_name}")
    except ClientError as e:
        error_code = e.response["Error"]["Code"]
        if error_code == "NoSuchBucket":
            print(f"Bucket {bucket_name} does not exist")
        else:
            print(f"AWS Error: {e}")
Using Waiters
Waiters automatically poll AWS and wait for a specific state:
import boto3
s3 = boto3.client("s3")
ec2 = boto3.client("ec2")
# Wait for an S3 bucket to exist
s3.get_waiter("bucket_exists").wait(Bucket="my-bucket")
# Wait for an EC2 instance to be running
ec2.get_waiter("instance_running").wait(InstanceIds=["i-1234567890abcdef0"])
# Wait for EC2 instance termination
ec2.get_waiter("instance_terminated").wait(InstanceIds=["i-1234567890abcdef0"])
Using Paginators
For operations that return many results, use paginators:
import boto3
s3 = boto3.client("s3")
# Use a paginator to list all objects
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="large-bucket"):
    if "Contents" in page:
        print(f"Page with {len(page['Contents'])} objects")
Best Practices
- Use IAM roles when possible — Avoid embedding credentials in code
- Specify regions explicitly — Do not rely on default region
- Use waiters instead of sleep — They handle timing correctly
- Use paginators for large listings — Avoid memory issues
- Handle rate limiting — AWS throttles requests; implement retries
- Use resource objects for cleaner code — They handle pagination automatically
See Also
- paramiko — SSH automation for server management
- subprocess — Running external commands
- AWS Official Documentation — Full Boto3 reference
Next Steps
You now know how to automate AWS services with Boto3. Combined with Paramiko for SSH and subprocess for shell commands, you have a complete toolkit for infrastructure automation. Try combining these tools to build deployment pipelines that provision infrastructure, deploy code, and manage servers—all in Python.