
A Practical PostgreSQL Backup Setup Using the 3-2-1 Rule

A real-world PostgreSQL backup setup using the 3-2-1 backup rule with pg_dump, S3-compatible storage, cleanup scripts, and local restores.


Backups are important. Not in a theoretical “one day this might matter” way, but in a very real “something will break eventually” kind of way. Disks fail, servers get deleted, credentials leak, and sometimes we simply make mistakes.

This post walks through how I’ve set up backups for my PostgreSQL databases using a simple, reliable approach that follows the well-known 3-2-1 backup rule. Nothing fancy, no enterprise tooling — just scripts that run consistently and are easy to restore from.

What is the 3-2-1 Backup Rule?

The 3-2-1 backup rule is a simple guideline that dramatically improves your odds of surviving data loss:

  • 3 copies of your data – the original plus two backups
  • 2 different storage types – for example disk and object storage
  • 1 offsite copy – stored somewhere physically separate

The goal is redundancy across both location and technology. If a server dies, a provider goes down, or ransomware hits, you still have at least one clean copy available.

Step 1: Creating PostgreSQL Data Dumps

The first part of my setup runs directly on the database server. A Python script uses pg_dump to create daily SQL dumps for multiple databases. I only dump data (not schema), as schema is managed through migrations.


import os
import subprocess
from datetime import datetime
from dotenv import load_dotenv

load_dotenv()

DB_HOST = os.getenv("DB_HOST", "localhost")
DB_PORT = os.getenv("DB_PORT", "5432")
DB_USER = os.getenv("DB_USER")
DB_PASSWORD = os.getenv("DB_PASSWORD")

databases = [
    "mycooldatabase1",
    "mycooldatabase2",
]

date_str = datetime.now().strftime("%Y-%m-%d")

for database in databases:
    print(f"Backing up {database}...")

    backup_dir = os.path.join("backups", database)
    os.makedirs(backup_dir, exist_ok=True)

    output_file = os.path.join(
        backup_dir,
        f"{database}_{date_str}.sql"
    )

    env = os.environ.copy()
    # pg_dump reads PGPASSWORD from the environment; subprocess rejects None values
    env["PGPASSWORD"] = DB_PASSWORD or ""

    cmd = [
        "pg_dump",
        "--data-only",         # schema lives in migrations, so dump data only
        "--disable-triggers",  # avoid foreign-key ordering issues on restore
        "-h", DB_HOST,
        "-p", DB_PORT,
        "-U", DB_USER,
        database,
    ]

    with open(output_file, "w") as f:
        subprocess.run(cmd, stdout=f, env=env, check=True)

    print(f"✔ Saved to {output_file}")
    

At this point, the dumps exist only on the database server — which is not enough on its own.
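Both this dump script and the upload script in the next step load their settings via python-dotenv. A hypothetical `.env` file might look like this (every value here is a placeholder):

```shell
# .env — loaded with python-dotenv; all values are placeholders
DB_HOST=localhost
DB_PORT=5432
DB_USER=backup_user
DB_PASSWORD=change-me

# S3-compatible storage credentials (used in step 2)
ACCESS_KEY=...
SECRET_KEY=...
ENDPOINT_URL=https://us-east-1.linodeobjects.com  # use your provider/region's endpoint
```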

Step 2: Uploading Dumps to S3-Compatible Storage

The next step is pushing those dumps off the server. I use S3-compatible object storage (Linode Object Storage in this example), but the same code works with AWS S3, Backblaze B2, or similar services. The bucket also lives in a different geographic region, which covers the offsite requirement.


import boto3
import dotenv
import glob
import os

dotenv.load_dotenv()

linode_obj_config = {
    "aws_access_key_id": os.environ.get("ACCESS_KEY"),
    "aws_secret_access_key": os.environ.get("SECRET_KEY"),
    "endpoint_url": os.environ.get("ENDPOINT_URL"),
}

s3 = boto3.client("s3", **linode_obj_config)

# Only touch files inside the backups directory, and strip that
# prefix so the bucket keys start with the database name.
for filename in glob.glob("backups/**/*.sql", recursive=True):
    key = os.path.relpath(filename, "backups")
    print(f"Uploading {filename} -> {key}")
    s3.upload_file(filename, "db", key)
    # upload_file raises on failure, so we only delete after a successful upload
    os.remove(filename)
    

After upload, the local SQL files are deleted. This keeps disk usage on the database server low and ensures that the offsite copy is always the source of truth.

Step 3: Cleanup and Local Offline Backups

Daily dumps pile up quickly, so I also run a scheduled cleanup script that removes older backups based on retention rules.
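The cleanup script itself isn't shown above, but a minimal sketch could look like the following. It assumes the `db` bucket and the `<database>_<YYYY-MM-DD>.sql` naming from step 1, plus a hypothetical `RETENTION_DAYS` setting:

```python
import os
import re
from datetime import date, datetime, timedelta

RETENTION_DAYS = 30  # hypothetical retention window


def is_expired(key: str, today: date, retention_days: int = RETENTION_DAYS) -> bool:
    """Return True if the dump's filename date falls outside the retention window."""
    match = re.search(r"_(\d{4}-\d{2}-\d{2})\.sql$", key)
    if not match:
        return False  # leave anything we can't parse alone
    dumped = datetime.strptime(match.group(1), "%Y-%m-%d").date()
    return today - dumped > timedelta(days=retention_days)


if __name__ == "__main__":
    import boto3

    s3 = boto3.client(
        "s3",
        aws_access_key_id=os.environ.get("ACCESS_KEY"),
        aws_secret_access_key=os.environ.get("SECRET_KEY"),
        endpoint_url=os.environ.get("ENDPOINT_URL"),
    )
    today = date.today()
    # Walk every object in the bucket and delete the expired ones
    for page in s3.get_paginator("list_objects_v2").paginate(Bucket="db"):
        for obj in page.get("Contents", []):
            if is_expired(obj["Key"], today):
                print(f"Deleting {obj['Key']}")
                s3.delete_object(Bucket="db", Key=obj["Key"])
```

Keying retention off the date embedded in the filename (rather than object metadata) keeps the logic trivially testable and independent of the storage provider.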

Finally, I sync new files from the S3 bucket down to a local server. This gives me a physically accessible backup that lives outside the cloud entirely — ticking the final box of the 3-2-1 rule.
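The sync step can be done with any S3 client. A rough sketch, assuming the same `db` bucket and credentials as above plus a hypothetical `LOCAL_DIR` target, that downloads only keys missing locally:

```python
import os

BUCKET = "db"
LOCAL_DIR = "mirror"  # hypothetical local target directory


def keys_to_download(remote_keys, local_dir=LOCAL_DIR):
    """Return the remote keys that don't exist locally yet."""
    return [k for k in remote_keys if not os.path.exists(os.path.join(local_dir, k))]


if __name__ == "__main__":
    import boto3

    s3 = boto3.client(
        "s3",
        aws_access_key_id=os.environ.get("ACCESS_KEY"),
        aws_secret_access_key=os.environ.get("SECRET_KEY"),
        endpoint_url=os.environ.get("ENDPOINT_URL"),
    )
    remote = [
        obj["Key"]
        for page in s3.get_paginator("list_objects_v2").paginate(Bucket=BUCKET)
        for obj in page.get("Contents", [])
    ]
    for key in keys_to_download(remote):
        target = os.path.join(LOCAL_DIR, key)
        os.makedirs(os.path.dirname(target) or ".", exist_ok=True)
        print(f"Downloading {key}")
        s3.download_file(BUCKET, key, target)
```

Because dump filenames are date-stamped and never rewritten, "exists locally" is a safe enough freshness check; no checksum comparison is needed.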

The result is:

  • Production database
  • Offsite object storage backup
  • Local offline copy

Restoring from this setup is fast, predictable, and stress-free — which is exactly what backups should be.
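For completeness: restoring one of these data-only dumps is just replaying the SQL file through psql. A minimal sketch mirroring step 1 (the database and file names here are hypothetical examples):

```python
import os
import subprocess


def restore_cmd(database, dump_file, host="localhost", port="5432", user="postgres"):
    """Build the psql invocation that replays a data-only dump into an existing schema."""
    return [
        "psql",
        "-h", host,
        "-p", port,
        "-U", user,
        "-d", database,
        "-f", dump_file,
    ]


if __name__ == "__main__":
    env = os.environ.copy()
    env["PGPASSWORD"] = os.environ.get("DB_PASSWORD", "")
    # The target database must already have its schema applied via migrations,
    # since the dumps contain data only.
    subprocess.run(
        restore_cmd("mycooldatabase1", "mycooldatabase1_2024-01-01.sql"),
        env=env,
        check=True,
    )
```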