
I Stopped Letting Prisma Write My SQL. Lemme do it myself.

13 min read
prisma · postgresql · sql · engineering

If you apply a migration and something is wrong, you want to roll it back immediately. Prisma does not support down migrations. That's it. That's the core problem. The rest of this post is what I built instead.

TLDR

I inverted Prisma's default workflow. Instead of editing schema.prisma and letting Prisma generate SQL, I write raw SQL migrations by hand, lint them with Squawk, apply them, then pull the schema back from the database. Prisma's schema file is always generated, never hand-edited. This gives me rollbacks, full control over DDL, and reviewable SQL in every PR.

Why I Don't Trust the Default Flow

There are three angles to this.

You can't undo anything. Prisma has no migrate down. If you deploy a migration and something breaks, you're writing rollback SQL on the spot, under pressure, in production. I'd rather write it beforehand, when I'm calm and thinking clearly.

If I'm reviewing the SQL anyway, why not write it myself? For every migration, I have to make sure nothing bullshit gets past my eye, so I review the generated SQL line by line. At that point, why am I not the one writing it? The honest answer: I don't quite trust the auto-generated DDL, even though the Prisma team worked hard on it.

Prisma's schema language has real gaps. At some point, schema-first breaks down. I wanted a CHECK constraint because Prisma enums in PostgreSQL are a pain in the ass. And I fact-checked this: Prisma literally cannot create PostgreSQL CHECK constraints. There's been an open feature request for this since December 2019 (GitHub issue #3388). So in Prisma, if you have a column that should only accept specific string values like 'MONTH', 'YEAR', 'LIFETIME', it's just a TEXT field with no enforcement at the database level. You have to write the migration SQL yourself to add the CHECK. And PostgreSQL enums through Prisma have their own problems. There's a long-standing issue where ALTER TYPE ... ADD VALUE for enums can't run inside a transaction block (PostgreSQL 12 relaxed this, but the new value still can't be used within the same transaction), and Prisma wraps migrations in transactions. So it just fails.
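The raw-SQL fix is small once you own the migration file. A sketch, with a table and constraint name that are illustrative, not from a real schema:

```sql
-- The constraint Prisma's schema language cannot express:
-- reject any interval outside the known set, at the database level.
ALTER TABLE "subscriptions"
  ADD CONSTRAINT "subscriptions_interval_check"
  CHECK ("interval" IN ('MONTH', 'YEAR', 'LIFETIME'));
```

After a later prisma db pull, the constraint won't show up in schema.prisma (Prisma can't represent it), but the database enforces it regardless.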

So, well, here we go.

Where This Comes From

Honestly, from the start of my journey as a software engineer. AI didn't exist back then. I wrote schemas by hand and evolved them over time through migration files, and I used that as a chance to practice writing SQL. Sometimes I needed to migrate data with one-off scripts that would run on the production database, so I had to make sure each one executed perfectly on the dev database first, and only then run it on production.

I've maintained tables with more than a dozen million records. That is a really big database. You have to write migration scripts by hand and run them very carefully: don't lock the whole table, don't make the app unusable. You have to find ways to execute DDL without locking everybody out.

I already write so much SQL by hand. Why not migrations? I trust myself.

The Core Workflow

write SQL → lint with Squawk → apply to DB → pull schema → regenerate client

The Prisma schema is an artifact, not an input. I never touch it. This gives me full control over every DDL statement that touches production, reviewable SQL in PRs, real rollback capability via down.sql, and SQL linting to catch dangerous migrations before they run.

Project Layout

apps/api/
├── prisma/
│   ├── schema.prisma              ← generated, never hand-edited
│   └── migrations/
│       ├── 0_init/
│       │   ├── migration.sql
│       │   └── down.sql
│       ├── 20260226131653_user-pref/
│       │   ├── migration.sql
│       │   └── down.sql
│       └── 20260331125854_payment/
│           ├── migration.sql
│           └── down.sql
└── scripts/
    ├── migrate-new.sh
    ├── migrate-apply.sh
    ├── migrate-down.sh
    ├── db-reset.sh
    ├── dump-schema.sh
    ├── migrate-squash.sh
    └── migrate-squash-resolve.sh

Every migration folder has exactly two files: migration.sql (the up migration) and down.sql (the rollback). They ship together in the same commit.

The Scripts

Create a migration

#!/usr/bin/env bash
set -euo pipefail

NAME="${1:?"Usage: ./scripts/migrate-new.sh <migration_name>"}"
TIMESTAMP=$(date +%Y%m%d%H%M%S)
DIR="prisma/migrations/${TIMESTAMP}_${NAME}"

mkdir -p "$DIR"
touch "$DIR/migration.sql"
echo "-- Rollback for: ${NAME}" > "$DIR/down.sql"

echo ""
echo "Created migration: $DIR"
echo "  UP:   ${DIR}/migration.sql"
echo "  DOWN: ${DIR}/down.sql"
echo ""
echo "Write your SQL, then run: pnpm migrate:apply"

Usage: pnpm migrate:new add_payment_tables. Creates a timestamped directory with two empty files. You write the SQL by hand.

Lint, apply, sync

#!/usr/bin/env bash
set -euo pipefail

LATEST=$(ls -1d prisma/migrations/*/ | sort -r | head -1)

echo "Linting migration: ${LATEST}migration.sql"
npx squawk "${LATEST}migration.sql"

echo ""
echo "Applying migration..."
npx prisma migrate deploy

echo ""
echo "Syncing schema.prisma from database..."
npx prisma db pull

echo ""
echo "Regenerating Prisma client..."
npx prisma generate

echo ""
echo "✅ Migration applied, schema synced, client regenerated."

Four things in one command:

  1. Lint. Squawk checks the latest migration for unsafe patterns: missing CONCURRENTLY on index creation, missing lock timeouts, etc.
  2. Apply. prisma migrate deploy runs the SQL and records it in _prisma_migrations.
  3. Sync. prisma db pull regenerates schema.prisma from the live database.
  4. Regenerate. prisma generate rebuilds the typed client.

The schema and client are always derived from what is actually in the database, not from what I think is there.

Roll back

#!/usr/bin/env bash
set -euo pipefail

LATEST=$(ls -1d prisma/migrations/*/ | sort -r | head -1)
MIGRATION_NAME=$(basename "$LATEST")
DOWN_FILE="${LATEST}down.sql"

if [ ! -f "$DOWN_FILE" ]; then
  echo "❌ No down.sql found at $DOWN_FILE"
  exit 1
fi

echo "Latest migration: $MIGRATION_NAME"
read -r -p "Roll back this migration? (y/N): " confirmation

if [ "${confirmation,,}" != "y" ]; then
  echo "Rollback cancelled."
  exit 0
fi

echo ""
echo "Running down migration..."
npx prisma db execute --file "$DOWN_FILE"

echo ""
echo "Removing migration record from _prisma_migrations..."
npx prisma db execute --stdin <<SQL
DELETE FROM "_prisma_migrations"
WHERE "migration_name" = '${MIGRATION_NAME}';
SQL

echo ""
echo "Syncing schema.prisma from database..."
npx prisma db pull

echo ""
echo "Regenerating Prisma client..."
npx prisma generate

echo ""
echo "✅ Rolled back: $MIGRATION_NAME"

Finds the latest migration, prompts for confirmation, runs the down.sql, marks it rolled back in Prisma's migration table, then re-syncs. No custom migration runner. Just raw SQL and prisma db execute.

In dev mode on my local machine, I do write a wrong migration once or twice or an infinite number of times. But it's local, so no one sees it. I just write the down SQL, run it, done. It's actually really pleasant to have all these commands at your fingertips.

Full local reset

#!/usr/bin/env bash
set -euo pipefail

echo "⚠️  This will destroy all local data. Ctrl+C to cancel."
read -r -p "Continue? (y/N): " confirmation

if [ "${confirmation,,}" != "y" ]; then
  echo "Reset cancelled."
  exit 0
fi

docker compose down -v
docker compose up -d db shadow-db

until docker compose exec db pg_isready -U postgres -d myapp > /dev/null 2>&1; do
  sleep 1
done

npx prisma migrate deploy
npx prisma db pull
npx prisma generate

echo "✅ Fresh database ready."

Tears down Docker volumes, waits for Postgres, replays all migrations from scratch. Useful for testing that your migration history is clean and complete.

What a Migration File Looks Like

From the payment tables migration:

-- squawk-ignore-file
SET LOCK_TIMEOUT = '4s';
SET STATEMENT_TIMEOUT = '60s';

CREATE TABLE IF NOT EXISTS "plans" (
  "id"          TEXT PRIMARY KEY,
  "name"        TEXT NOT NULL,
  "tier"        TEXT NOT NULL,
  "interval"    TEXT NOT NULL CHECK ("interval" IN ('MONTH', 'YEAR', 'LIFETIME', 'NONE')),
  "price_cents" INTEGER NOT NULL,
  "seat_limit"  INTEGER,
  "is_active"   BOOLEAN NOT NULL DEFAULT TRUE,
  "created_at"  TIMESTAMPTZ NOT NULL DEFAULT CURRENT_TIMESTAMP,
  "updated_at"  TIMESTAMPTZ NOT NULL DEFAULT CURRENT_TIMESTAMP
);

CREATE UNIQUE INDEX CONCURRENTLY IF NOT EXISTS "subscriptions_uid"
  ON "subscriptions" ("uid");

INSERT INTO "plans" ("id", "name", "tier", "interval", "price_cents")
VALUES
  ('free',     'Free',     'free', 'NONE',     0),
  ('plus',     'Plus',     'plus', 'MONTH',  600),
  ('annual',   'Annual',   'plus', 'YEAR',  6000),
  ('lifetime', 'Lifetime', 'plus', 'LIFETIME', 19900);

The rollback:

-- squawk-ignore-file
DROP TABLE IF EXISTS "webhook_events";
DROP TABLE IF EXISTS "customer_provider_ids";
DROP TABLE IF EXISTS "subscriptions";
DROP TABLE IF EXISTS "plan_provider_products";
DROP TABLE IF EXISTS "plans";

A few things worth noting:

  • Lock and statement timeouts at the top. Any DDL that takes too long fails fast rather than holding locks.
  • CONCURRENTLY on every index. Never blocks reads or writes during index builds.
  • IF NOT EXISTS / IF EXISTS. Idempotent by default. Safe to re-run.
  • Inline comments explaining non-obvious decisions. Why TEXT PK instead of BIGINT, why a partial index. This is the place to capture design reasoning that won't survive a schema diff.
  • Seed data in the same migration. Static reference data ships with the table that owns it.

That CHECK constraint on the interval column? That's the thing Prisma can't generate. I wrote it in 10 seconds. In schema-first mode, you'd have a naked TEXT column with no enforcement at the database level.
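To make the partial-index bullet concrete, here's the shape such an index takes. The table, column, and status value are hypothetical, and the inline comment carries the reasoning:

```sql
-- Partial index: the hot path only looks up active subscriptions by uid.
-- Indexing the inactive majority would just add write overhead, hence
-- the WHERE clause. CONCURRENTLY keeps the build from blocking writes.
CREATE INDEX CONCURRENTLY IF NOT EXISTS "subscriptions_active_uid_idx"
  ON "subscriptions" ("uid")
  WHERE "status" = 'ACTIVE';
```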

When Down Migrations Get Serious

The easy down.sql is DROP TABLE. The hard one is when you have to populate old data back.

Here's a real example. Say you want to rename a column on a table with millions of rows. You can't just ALTER TABLE ... RENAME COLUMN because the app code still references the old name, and on a large table you need to be careful about locks. So you do the expand-and-contract pattern:

migration.sql (up):

SET LOCK_TIMEOUT = '4s';
SET STATEMENT_TIMEOUT = '60s';

-- Step 1: Add the new column
ALTER TABLE "users" ADD COLUMN "display_name" TEXT;

-- Step 2: Backfill from old column (batch on large tables)
UPDATE "users" SET "display_name" = "username"
  WHERE "display_name" IS NULL;

-- Step 3: Enforce NOT NULL after backfill
ALTER TABLE "users" ALTER COLUMN "display_name" SET NOT NULL;

-- Step 4: Drop the old column
ALTER TABLE "users" DROP COLUMN "username";

down.sql (rollback):

SET LOCK_TIMEOUT = '4s';
SET STATEMENT_TIMEOUT = '60s';

-- Recreate old column
ALTER TABLE "users" ADD COLUMN "username" TEXT;

-- Copy data back
UPDATE "users" SET "username" = "display_name"
  WHERE "username" IS NULL;

ALTER TABLE "users" ALTER COLUMN "username" SET NOT NULL;

-- Drop new column
ALTER TABLE "users" DROP COLUMN "display_name";

The up migration destroyed the old column. Your rollback has to recreate it AND repopulate the data. If someone inserted rows after the up migration with a display_name that never had a corresponding username, you now have data that never existed in the original schema. And if you batched the backfill in the up but the rollback runs in one shot, you might lock the table for minutes on a 12M+ row table.

You create a new column, you change the name, you migrate the data, you delete the old column. The down SQL for that gets serious real fast.
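A batched backfill can be sketched like this. You run the statement repeatedly, from a script or a psql loop, until it reports UPDATE 0, so each batch commits and releases its locks before the next one starts. The id column and the batch size are assumptions:

```sql
-- One batch of the rollback backfill. Each execution commits on its own,
-- so row locks are held for ~5000 rows at a time instead of the whole table.
UPDATE "users"
SET "username" = "display_name"
WHERE "id" IN (
  SELECT "id" FROM "users"
  WHERE "username" IS NULL
  ORDER BY "id"
  LIMIT 5000  -- batch size: tune for your table and traffic
);

-- Repeat until UPDATE 0. Only then:
-- ALTER TABLE "users" ALTER COLUMN "username" SET NOT NULL;
```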

Squawk: Linting for the Team

Squawk is for enforcing standards across the team. You can vouch for yourself, but you can't be sure everyone else is on the same page. You need a unified tool to standardize the whole team.

Squawk catches things like creating an index without CONCURRENTLY (which locks the table against writes), adding a NOT NULL column without a default (which fails outright on a table with existing rows), or missing lock timeouts. Even if it only ever catches errors in development and never in production, that's the point: dev is where these errors should die.

"lint:migrations":         "squawk 'prisma/migrations/*/migration.sql'",
"lint:migrations:changed": "squawk $(git diff --name-only origin/main -- 'prisma/migrations/*/migration.sql')"

How PRs Changed

A new PR now contains a new migration folder: a migration.sql and a down.sql. You can see the exact SQL that will run, not just a schema.prisma diff. The team reviews the actual SQL line by line, so everyone understands what the fuck is happening. And the rollback script is right there in the same PR.

Before: a diff of schema.prisma tells you what changed but not what SQL will run. Now: the reviewer sees lock timeouts, CONCURRENTLY usage, naming conventions, and whether the rollback actually undoes what the up does.

Why Prisma at All?

Because Prisma is exceptional at two things: TypeScript typing and migration tracking. It's state of the art when it comes to ORM typing. The db pull command is really pleasant because it keeps the database schema and the typed client in sync. It just syncs. And that is the best type system I have ever had.

Prisma's migration engine is not why I use Prisma. The typed client is. This setup lets me use the client without being held hostage by the migration workflow. I get the best of both: hand-written SQL for control, Prisma Client for type safety.

On Squashing

I haven't needed to squash yet. My migration count isn't there. But I know it's coming. Past 30-50 migrations, db:reset gets slow and the history becomes noise. When that happens, you collapse everything into a single baseline using prisma migrate diff to generate the full DDL from the current schema.

The trap I'm already watching for: prisma migrate diff generates DDL from the Prisma schema, and the Prisma schema does not represent CHECK constraints, views, triggers, or custom functions. So a squashed baseline would silently drop every CHECK constraint I wrote by hand. That CHECK ("interval" IN ('MONTH', 'YEAR', 'LIFETIME', 'NONE')) I carefully added? Gone after squashing. Same for seed data, lock timeouts, CONCURRENTLY, and inline comments.

The plan when I get there: grep the old migrations for everything Prisma can't represent and manually add it back to the baseline. I'll write about that when I've actually done it.

Who This Is For (and Not For)

If your app has real users and real data that you cannot lose, writing everything by hand is the correct move. Even where the process is semi-automated, you must review the output and make sure it works.

This is also for teams with tables large enough that an ALTER can lag for minutes. In my case, more than a dozen million records: updating or altering a table like that already lags for a good few minutes.

If you're just prototyping, or at a hackathon, or building a toy project, just use schema-first. You don't need all this.

"This Is Over-Engineered"

Yeah, it's over-engineered until you need it. Rollbacks, linting, lock timeouts. None of these matter until they do. And when they do, you're fucked.

The friction is the feature. You write more, yes. But that friction makes you think about what you are doing to your database, the one sacred thing in the system. Schema-first makes it too easy to not think.

The extra time per migration is maybe 5 minutes. And I have to know PostgreSQL DDL syntax instead of just the Prisma schema language. But I don't want to learn the Prisma schema language. I want to learn SQL. SQL is the fundamental. And I will start every new project with this setup from now on.