All-in-One Django Deployment to AWS

19 February, 2025

In this post, we’ll take a Django project and fully deploy it to AWS with:

  • A Docker image running on EC2
  • A connected PostgreSQL database on RDS
  • A custom domain on Route53 with https enabled

This is a complete setup to run a Django project in production, essentially for free. With that, let’s get started.

Step 1: Start a Django project

First, let’s install Django and start a project.

python3.12 -m pip install django
python3.12 -m django startproject django_ec2_complete
cd django_ec2_complete

Now that we have a Django project, let’s set up a virtual environment and install the necessary dependencies.

# Set up and activate the virtual environment
python3.12 -m venv venv
source venv/bin/activate
pip install --upgrade pip
pip install django gunicorn psycopg2-binary
pip freeze > requirements.txt

  • django is the web framework itself
  • gunicorn is the WSGI server that runs Django in production
  • psycopg2-binary connects Django to PostgreSQL

Our Django project is up and we have the correct dependencies installed. Now let’s run Django.

python manage.py runserver

Django should now be running at http://127.0.0.1:8000/ - perfect!

Next, we need to dockerize this project so it can be deployed onto any AWS server.

Step 2: Dockerize your Django project

First, we need to create a Dockerfile with the steps to build and run this Django project.

We can use the following code:

# Use your own Python version
FROM python:3.12.2

# Send Python output straight to the terminal instead of buffering it
ENV PYTHONUNBUFFERED=1

# Run on port 8080
ENV PORT=8080

WORKDIR /app

# Copy the app into the working directory
COPY . /app/

# Install dependencies
RUN pip install --upgrade pip
RUN pip install -r requirements.txt

# Open port 8080 on the container
EXPOSE ${PORT}

# Run the app with the gunicorn command
CMD gunicorn django_ec2_complete.wsgi:application --bind 0.0.0.0:"${PORT}"
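
Since COPY . /app/ copies the entire project directory, it’s also worth adding a .dockerignore next to the Dockerfile so the local virtual environment and other clutter stay out of the image (optional, but it keeps the image small):

# .dockerignore
venv/
__pycache__/
*.pyc
.git/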

Our Dockerfile will properly build our Django project. Let’s build and run it now.

docker build -t django-ec2-complete:latest .
docker run -p 8000:8080 django-ec2-complete:latest

The container should now be serving your Django project on port 8000. Go to http://localhost:8000/ - perfect!
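
If you’d rather check from the terminal, a quick request should come back with HTTP/1.1 200 OK from Django’s welcome page:

curl -i http://localhost:8000/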

To stop the container, just run:

docker ps
docker kill <container_id>

We now have a working Docker image. Let’s push it to Amazon ECR.

Step 3: Push your Docker image to AWS ECR

ECR will host our built image. This way, the server can pull the image and run it in production.

First, you’ll need to create an account or log in at AWS, and then we should be good to set up ECR. We’ll also need to install the AWS CLI v2.

Now that we have an AWS account and the CLI, let’s push this image to ECR.

# Create an ECR repository
aws ecr create-repository --repository-name django-ec2-complete

# Boom! We just made a repository!
# Copy "repositoryUri": "620457613573.dkr.ecr.us-east-1.amazonaws.com/django-ec2-complete" value

# Login to your ECR registry
docker login -u AWS -p $(aws ecr get-login-password --region us-east-1) 620457613573.dkr.ecr.us-east-1.amazonaws.com

# Tag your built image
docker tag django-ec2-complete:latest 620457613573.dkr.ecr.us-east-1.amazonaws.com/django-ec2-complete:latest

# Push to your ECR registry
docker push 620457613573.dkr.ecr.us-east-1.amazonaws.com/django-ec2-complete:latest
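
As a quick sanity check, you can confirm the image actually landed in the repository:

aws ecr describe-images --repository-name django-ec2-complete --region us-east-1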

Step 4: Put our ECR Image on a Server

To run Django in production, we need to set up:

  • A Virtual Private Cloud (VPC) to network and connect all our resources
  • Two subnets to host our resources in two availability zones (for reliability, and because AWS requires two zones for the RDS subnet group we add later)
  • Security groups to define what web traffic is allowed
  • Our server to run the Django image
  • IAM roles to give our server access to ECR

Let’s define all of this in a main.tf file:

# Define AWS provider and set the region for resource provisioning
provider "aws" {
  region = "us-east-1"
}

# Create a Virtual Private Cloud to isolate the infrastructure
resource "aws_vpc" "default" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_support   = true
  enable_dns_hostnames = true
  tags = {
    Name = "Django_EC2_VPC"
  }
}

# Internet Gateway to allow internet access to the VPC
resource "aws_internet_gateway" "default" {
  vpc_id = aws_vpc.default.id
  tags = {
    Name = "Django_EC2_Internet_Gateway"
  }
}

# Route table for controlling traffic leaving the VPC
resource "aws_route_table" "default" {
  vpc_id = aws_vpc.default.id
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.default.id
  }
  tags = {
    Name = "Django_EC2_Route_Table"
  }
}

# Subnet within VPC for resource allocation, in availability zone us-east-1a
resource "aws_subnet" "subnet1" {
  vpc_id                  = aws_vpc.default.id
  cidr_block              = "10.0.1.0/24"
  map_public_ip_on_launch = false
  availability_zone       = "us-east-1a"
  tags = {
    Name = "Django_EC2_Subnet_1"
  }
}

# Another subnet for redundancy, in availability zone us-east-1b
resource "aws_subnet" "subnet2" {
  vpc_id                  = aws_vpc.default.id
  cidr_block              = "10.0.2.0/24"
  map_public_ip_on_launch = false
  availability_zone       = "us-east-1b"
  tags = {
    Name = "Django_EC2_Subnet_2"
  }
}

# Associate subnets with route table for internet access
resource "aws_route_table_association" "a" {
  subnet_id      = aws_subnet.subnet1.id
  route_table_id = aws_route_table.default.id
}
resource "aws_route_table_association" "b" {
  subnet_id      = aws_subnet.subnet2.id
  route_table_id = aws_route_table.default.id
}



# Security group for EC2 instance
resource "aws_security_group" "ec2_sg" {
  vpc_id = aws_vpc.default.id
  ingress {
    from_port   = 22
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"] # Only allow HTTPS traffic from everywhere
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
  tags = {
    Name = "EC2_Security_Group"
  }
}

# Define variable for the Django secret key to avoid hardcoding secrets
variable "secret_key" {
  description = "The Secret Key for Django"
  type        = string
  sensitive   = true
}

# EC2 instance for the local web app
resource "aws_instance" "web" {
  ami                    = "ami-0c101f26f147fa7fd" # Amazon Linux
  instance_type          = "t3.micro"
  subnet_id              = aws_subnet.subnet1.id # Place this instance in one of the subnets
  vpc_security_group_ids = [aws_security_group.ec2_sg.id]

  associate_public_ip_address = true # Assigns a public IP address to your instance
  user_data_replace_on_change = true # Replace the user data when it changes

  iam_instance_profile = aws_iam_instance_profile.ec2_profile.name

  user_data = <<-EOF
    #!/bin/bash
    set -ex
    yum update -y
    yum install -y yum-utils

    # Install Docker
    yum install -y docker
    service docker start

    # Install AWS CLI
    yum install -y aws-cli

    # Authenticate to ECR
    docker login -u AWS -p $(aws ecr get-login-password --region us-east-1) 620457613573.dkr.ecr.us-east-1.amazonaws.com

    # Pull the Docker image from ECR
    docker pull 620457613573.dkr.ecr.us-east-1.amazonaws.com/django-ec2-complete:latest

    # Run the Docker image
    docker run -d -p 80:8080 \
    --env SECRET_KEY=${var.secret_key} \
    620457613573.dkr.ecr.us-east-1.amazonaws.com/django-ec2-complete:latest
    EOF

  tags = {
    Name = "Django_EC2_Complete_Server"
  }
}

# IAM role for EC2 instance to access ECR
resource "aws_iam_role" "ec2_role" {
  assume_role_policy = jsonencode({
    Version = "2012-10-17",
    Statement = [{
      Action = "sts:AssumeRole",
      Principal = {
        Service = "ec2.amazonaws.com",
      },
      Effect = "Allow",
    }],
  })
}

# Attach the AmazonEC2ContainerRegistryReadOnly policy to the role
resource "aws_iam_role_policy_attachment" "ecr_read" {
  role       = aws_iam_role.ec2_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
}

# IAM instance profile for EC2 instance
resource "aws_iam_instance_profile" "ec2_profile" {
  name = "django_ec2_complete_profile"
  role = aws_iam_role.ec2_role.name
}

Don’t forget to lock the SSH ingress in the EC2 security group down to your own IP address. Remember to add a /32 at the end to be CIDR compatible.

Now run:

terraform init
terraform apply
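
If you don’t want to dig the instance’s public IP out of the EC2 console later, you can optionally add an output block to main.tf (a small addition, not part of the file above) so Terraform prints it after every apply:

output "instance_public_ip" {
  description = "Public IP of the Django EC2 instance"
  value       = aws_instance.web.public_ip
}

Then terraform output instance_public_ip prints the address whenever you need it.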

Step 5: Connect a PostgreSQL Database on AWS

Next, let’s set up a new Terraform file to define the database. We’ll need to create:

  • A Subnet Group to connect Postgres to our subnets
  • Another security group to allow SQL traffic
  • A database to host PostgreSQL

Let’s create a database.tf file to define the following:

# DB subnet group for RDS instances, using the created subnets
resource "aws_db_subnet_group" "default" {
  subnet_ids = [aws_subnet.subnet1.id, aws_subnet.subnet2.id]
  tags = {
    Name = "Django_EC2_Subnet_Group"
  }
}

# Security group for RDS, allows PostgreSQL traffic
resource "aws_security_group" "rds_sg" {
  vpc_id      = aws_vpc.default.id
  name        = "DjangoRDSSecurityGroup"
  description = "Allow PostgreSQL traffic"
  ingress {
    from_port   = 5432
    to_port     = 5432
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"] # Updated to "10.0.0.0/16"
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"] # Updated to "10.0.0.0/16"
  }
  tags = {
    Name = "RDS_Security_Group"
  }
}

variable "db_password" {
  description = "The password for the database"
  type        = string
  sensitive   = true
}

# RDS instance for Django backend, now privately accessible
resource "aws_db_instance" "default" {
  allocated_storage      = 20
  storage_type           = "gp2"
  engine                 = "postgres"
  engine_version         = "16.1"
  instance_class         = "db.t3.micro"
  identifier             = "my-django-rds"
  db_name                = "djangodb"
  username               = "adam"
  password               = var.db_password
  db_subnet_group_name   = aws_db_subnet_group.default.name
  vpc_security_group_ids = [aws_security_group.rds_sg.id]
  skip_final_snapshot    = true
  publicly_accessible    = true # Public for the initial setup; change to false once you've migrated from your local machine
  multi_az               = false
  tags = {
    Name = "Django_RDS_Instance"
  }
}

Run the following to set environment variables for db_password and secret_key:

export TF_VAR_secret_key=pass1234
export TF_VAR_db_password=pass1234

Re-apply the Terraform changes:

terraform apply --auto-approve

We can see RDS is deployed! Go to the RDS dashboard and see for yourself.
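
You can also check from the CLI, which is handy because we’ll need the endpoint address in a moment:

aws rds describe-db-instances --db-instance-identifier my-django-rds \
  --query "DBInstances[0].[DBInstanceStatus,Endpoint.Address]" --output text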

Now, let’s connect Django to this new database. Go into settings.py and add/replace the following settings:

import os

SECRET_KEY = os.getenv('SECRET_KEY')

DEBUG = False

ALLOWED_HOSTS = ['*']

# TODO: Add your domains
# If you have no domains, don't add this line
CSRF_TRUSTED_ORIGINS = ['https://lamorre.com', 'https://www.lamorre.com']

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': os.getenv('DB_NAME'),
        'USER': os.getenv('DB_USER_NM'),
        "PASSWORD": os.getenv('DB_USER_PW'),
        "HOST": os.getenv('DB_IP'),
        "PORT": os.getenv('DB_PORT'),
    }
}

Be sure to replace the domains in CSRF_TRUSTED_ORIGINS with your own, keeping the scheme (https://) in each entry.

We should set these environment variables at the end of venv/bin/activate:

export SECRET_KEY=pass1234
export DB_NAME=djangodb
export DB_USER_NM=adam
export DB_USER_PW=pass1234
export DB_IP=my-django-rds.cb2u6sse4azd.us-east-1.rds.amazonaws.com # Your DB endpoint from the AWS console
export DB_PORT=5432

Run the following commands to reactivate the virtual environment with these variables active.

deactivate
source venv/bin/activate
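
Before migrating, an optional sanity check is to list migrations; this makes Django open a connection to RDS, so it fails fast if the host, credentials, or security group are wrong:

python manage.py showmigrations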

Run the following commands to migrate the database and make an admin user:

python manage.py migrate
python manage.py createsuperuser

To lock everything down, you can now remove your IP address from main.tf.

Finally, let’s re-deploy this new Django code to ECR and put it on the server.

# Re-build the image (with changes)
docker build -t django-ec2-complete:latest .

# Tag the new image
docker tag django-ec2-complete:latest 620457613573.dkr.ecr.us-east-1.amazonaws.com/django-ec2-complete:latest

# Push the new image
docker push 620457613573.dkr.ecr.us-east-1.amazonaws.com/django-ec2-complete:latest

Lastly, edit the user_data parameter in main.tf to run Django with the proper environment variables:

...
resource "aws_instance" "web" {
    ...
    user_data = <<-EOF
        ...
        # Run the Docker image
        docker run -d -p 80:8080 \
        --env SECRET_KEY=${var.secret_key} \
        --env DB_NAME=djangodb \
        --env DB_USER_NM=adam \
        --env DB_USER_PW=${var.db_password} \
        --env DB_IP=${aws_db_instance.default.address} \
        --env DB_PORT=5432 \
        620457613573.dkr.ecr.us-east-1.amazonaws.com/django-ec2-complete:latest
        EOF
}

Re-apply your terraform:

terraform apply --auto-approve

Boom! We should be able to see Django running at the instance’s public IP address (shown in the EC2 dashboard)!

Go to /admin and log in with the superuser we created.

Step 6: Connect a Custom Domain (Optional)

If you want to connect a custom domain, all we need to do is buy a domain on Route 53 in AWS and connect it with a domain.tf Terraform file.

In our domain.tf file, we’ll set up:

  • An Application Load Balancer to direct public HTTPS traffic to our server
  • An ACM certificate for https://
  • A Route 53 zone (the domain we bought)
  • Route 53 records to map the domain and its www subdomain to the load balancer

With that, let’s add the domain.tf file:

# Request a certificate for the domain and its www subdomain
resource "aws_acm_certificate" "cert" {
  domain_name       = "lamorre.com"
  validation_method = "DNS"

  subject_alternative_names = ["www.lamorre.com"]

  tags = {
    Name = "my_domain_certificate"
  }

  lifecycle {
    create_before_destroy = true
  }
}

# Declare the Route 53 zone for the domain
data "aws_route53_zone" "selected" {
  name = "lamorre.com"
}

# Define the Route 53 records for certificate validation
resource "aws_route53_record" "cert_validation" {
  for_each = {
    for dvo in aws_acm_certificate.cert.domain_validation_options : dvo.domain_name => {
      name   = dvo.resource_record_name
      record = dvo.resource_record_value
      type   = dvo.resource_record_type
    }
  }

  zone_id = data.aws_route53_zone.selected.zone_id
  name    = each.value.name
  type    = each.value.type
  records = [each.value.record]
  ttl     = 60
}

# Define the Route 53 records for the domain and its www subdomain
resource "aws_route53_record" "root_record" {
  zone_id = data.aws_route53_zone.selected.zone_id
  name    = "lamorre.com"
  type    = "A"

  alias {
    name                   = aws_lb.default.dns_name
    zone_id                = aws_lb.default.zone_id
    evaluate_target_health = true
  }
}

resource "aws_route53_record" "www_record" {
  zone_id = data.aws_route53_zone.selected.zone_id
  name    = "www.lamorre.com"
  type    = "A"

  alias {
    name                   = aws_lb.default.dns_name
    zone_id                = aws_lb.default.zone_id
    evaluate_target_health = true
  }
}

# Define the certificate validation resource
resource "aws_acm_certificate_validation" "cert" {
  certificate_arn         = aws_acm_certificate.cert.arn
  validation_record_fqdns = [for record in aws_route53_record.cert_validation : record.fqdn]
}

# Security group for ALB, allows HTTPS traffic
resource "aws_security_group" "alb_sg" {
  vpc_id      = aws_vpc.default.id
  name        = "alb-https-security-group"
  description = "Allow all inbound HTTPS traffic"

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# Application Load Balancer for HTTPS traffic
resource "aws_lb" "default" {
  name               = "django-ec2-alb-https"
  internal           = false
  load_balancer_type = "application"
  security_groups    = [aws_security_group.alb_sg.id]
  subnets            = [aws_subnet.subnet1.id, aws_subnet.subnet2.id]

  enable_deletion_protection = false
}

# Target group for the ALB to route traffic from ALB to VPC
resource "aws_lb_target_group" "default" {
  name     = "django-ec2-tg-https"
  port     = 443
  protocol = "HTTP" # Protocol used between the load balancer and targets
  vpc_id   = aws_vpc.default.id
}

# Attach the EC2 instance to the target group
resource "aws_lb_target_group_attachment" "default" {
  target_group_arn = aws_lb_target_group.default.arn
  target_id        = aws_instance.web.id # Your EC2 instance ID
  port             = 80                  # Port the EC2 instance listens on; adjust if different
}


# HTTPS listener for the ALB to route traffic to the target group
resource "aws_lb_listener" "default" {
  load_balancer_arn = aws_lb.default.arn
  port              = 443
  protocol          = "HTTPS"
  ssl_policy        = "ELBSecurityPolicy-2016-08" # Default policy, adjust as needed
  certificate_arn   = aws_acm_certificate.cert.arn

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.default.arn
  }
}
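
As written, the load balancer only listens on port 443. If you also want plain http:// requests redirected to https://, one option (a sketch; it assumes you also add a port 80 ingress rule to the ALB security group) is an extra listener:

resource "aws_lb_listener" "http_redirect" {
  load_balancer_arn = aws_lb.default.arn
  port              = 80
  protocol          = "HTTP"

  # Permanently redirect all HTTP traffic to HTTPS
  default_action {
    type = "redirect"
    redirect {
      port        = "443"
      protocol    = "HTTPS"
      status_code = "HTTP_301"
    }
  }
}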

Re-run terraform apply:

terraform apply --auto-approve

Boom, you should be able to see a Django project running on your new domain, connected to PostgreSQL and served over https://!
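
Once the DNS records have propagated (this can take a few minutes), you can also verify from the terminal; replace lamorre.com with your own domain:

curl -I https://lamorre.com
curl -I https://www.lamorre.com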