February 22, 2026
Building SoundSignal in 5 Days with Claude Code
How one developer and an AI pair-programmed a full-stack permit intelligence platform — 143 commits, 95 PRs, and ~29,500 lines of code in under a week.
I built SoundSignal — a building permit intelligence platform — in 5 days using Claude Code as my primary development partner. This is what that actually looked like: the workflow, the numbers, the friction, and what shipped.
The raw numbers
| Metric | Value |
|---|---|
| Calendar days | 5 |
| Claude Code sessions | 56 |
| Total messages | 414 |
| Commits | 143 |
| Pull requests | 95 |
| Lines of code | ~29,500 |
| Tests | 506 (464 backend + 42 frontend) |
| Estimated cost (Claude Code) | ~$200 |
Every commit went through a PR. No direct pushes to main. Claude Code handled the full implement → commit → push → PR → merge loop.
What SoundSignal does
You give it a property address on Bainbridge Island, WA, and it produces a building permit risk report. The pipeline resolves the address to a parcel via the county’s SmartGov portal, scrapes every associated building permit, downloads all the permit documents (PDFs — blueprints, inspection reports, applications), then runs a 3-layer AI extraction:
- Layer 1: Extract structured data from each individual document
- Layer 2: Aggregate those into per-permit summaries
- Layer 3: Synthesize a final parcel report with risk flags
The layered approach keeps any single Claude call from blowing up the context window. Each layer summarizes before passing data up. Total cost is about $1.30 per parcel on claude-sonnet-4-5-20250929.
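The three layers can be sketched as ordinary function composition. This is a minimal shape sketch, not the actual pipeline code: the function names are hypothetical stand-ins for what would be Claude calls, each with its own prompt, in the real system.

```python
def extract_document(doc_text: str) -> dict:
    """Layer 1: structured data from one document (stand-in for a Claude call)."""
    return {"summary": doc_text[:100]}

def summarize_permit(doc_records: list[dict]) -> dict:
    """Layer 2: aggregate document records into one permit summary."""
    return {"documents": len(doc_records)}

def synthesize_report(permit_summaries: list[dict]) -> dict:
    """Layer 3: final parcel report with risk flags."""
    return {"permits": len(permit_summaries), "risk_flags": []}

def run_pipeline(permits: dict[str, list[str]]) -> dict:
    # Each layer condenses its input before passing it up, so no single
    # call ever sees the full document corpus.
    summaries = []
    for permit_id, documents in permits.items():
        records = [extract_document(d) for d in documents]  # Layer 1
        summaries.append(summarize_permit(records))         # Layer 2
    return synthesize_report(summaries)                     # Layer 3

report = run_pipeline({"BLD-1": ["doc a", "doc b"], "BLD-2": ["doc c"]})
```

The key property is that the output of each layer is bounded regardless of how many documents a permit has, which is what keeps any single call inside the context window.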
The stack
- Backend: Python 3.12, FastAPI, Celery, PostgreSQL, Redis
- Frontend: Next.js 14, TypeScript, Tailwind CSS
- AI: Claude API (Bedrock in production — IAM auth, no API keys)
- Scraping: Playwright for browser automation
- Infrastructure: ECS Fargate (API, worker, dashboard), RDS PostgreSQL, ElastiCache Redis, CloudFront, S3, ALB, Cognito, SES, API Gateway, Route 53, Secrets Manager — all Terraform, all in a VPC with proper subnets and security groups
How I actually used Claude Code
The loop was the same across all 56 sessions: I describe what I want; Claude Code reads the relevant code, proposes an approach (I'd redirect maybe 20% of the time), writes the code, creates a branch, commits, pushes, and opens a PR. I review the diff and either request changes or merge.
On --dangerously-skip-permissions
I ran with this flag the whole time. It lets Claude Code execute commands, write files, and run git operations without confirming each one. The name is intentionally scary — yes, it could rm -rf your repo. It never did anything destructive, but you’re trusting the model for every file write and shell command.
I was OK with this because everything was in git with frequent commits, I reviewed every PR before merging, and it was a greenfield project with nothing sensitive. Would I do this on a production codebase with secrets? No.
The worktree pattern
By day 2, we’d settled into a pattern where Claude Code creates a git worktree for each task:
```sh
git worktree add .claude/worktrees/feat/my-feature -b feat/my-feature
# ... make changes ...
git push -u origin feat/my-feature
gh pr create --title "..." --body "..."
# merge via PR
```
This kept main clean and runnable at all times. I could use the app from main while Claude Code worked on something in a worktree.
What went well
Getting from “I want X” to “X is deployed” often took under 10 minutes for straightforward features. Claude Code would read the relevant code, pick up the patterns, write the implementation with tests, and open a PR.
Once a pattern existed — like the docs system using MDX, or the API endpoint structure — it replicated faithfully. It also wrote tests without being asked, which I appreciated. The suite grew to 506 tests covering models, endpoints, scraper parsing, downloader logic, newsletter generation, rate limiting, and frontend utilities.
The Terraform was probably the most impressive part. The entire AWS deployment — 15+ services, 4,400+ lines of infrastructure code — came out of Claude Code. CI/CD builds Docker images, runs migrations, applies Terraform, and deploys to ECS on every merge to main.
Where it got frustrating
Claude Code sometimes spent too long reading files before writing any code. For a 5-day sprint I needed it to move fast, so I added “Limit exploration/planning to 2-3 min max before producing code” to CLAUDE.md.
The most annoying recurring issue: it would add a feature that needs an environment variable (API key, S3 bucket), update the application code correctly, but forget to add the variable to the ECS task definition in Terraform. This caused several production debugging sessions where the code was fine but the container didn’t have the config. Not hard to fix once you spot it, but easy to miss.
Long sessions filled up the context window and degraded quality. Shorter, focused sessions — one feature per session — worked much better.
There was also a PyMuPDF thing that tripped us both up: page.widgets() returns a generator that’s always truthy, so you can’t just do if page.widgets(). You need any(w for page in doc for w in page.widgets()). Small, but the kind of thing that wastes 20 minutes.
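The pitfall is general to Python generators, not specific to PyMuPDF. A minimal reproduction with a plain generator (no PyMuPDF required):

```python
def widgets():
    # Mimics an API like PyMuPDF's page.widgets(): returns a generator
    # object even when there is nothing to yield.
    yield from ()

gen = widgets()
# A generator object is always truthy, even if it yields nothing:
assert bool(gen) is True

# The correct emptiness check consumes the generator:
assert not any(widgets())
```

The same rule applies to any lazily evaluated iterable: truthiness tells you the object exists, not that it contains anything.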
Everything that shipped
Beyond the core pipeline:
- Next.js dashboard with real-time job progress and structured results
- Landing page with feature overview and newsletter signup
- REST API with OpenAPI docs, API key auth, and rate limiting
- Chrome extension for Zillow integration — run a permit check from a listing page
- Contractor scoring from permit history
- Property watchlist for ongoing monitoring
- Admin panel for user management and approval
- Newsletter system — weekly AI-generated building activity digests via SES, with privacy rules that never expose residential addresses
- CloudFront CDN with separate caching strategies for static assets vs. SSR/API routes
- Redis-backed rate limiting distributed across Fargate tasks
- Full CI/CD — lint, test, build, migrate, deploy on every merge, with self-hosted GitHub Actions runners on ECS Fargate
- 506 tests across backend and frontend
- Per-job cost tracking with token usage breakdowns
- Lightweight database migration runner
The $1.30/parcel breakdown
- Layer 1 (document extraction): ~$0.80 — the expensive part, especially architectural drawings that go through the vision path
- Layer 2 (permit aggregation): ~$0.30
- Layer 3 (parcel report): ~$0.20
Vision-path documents (blueprints, scanned forms) cost 5-20x more than text documents. There’s a PDF classifier in downloader.py that routes each document to the cheapest viable extraction method. Prompt caching on system prompts helps a lot for repeat calls within the same job.
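The classifier in downloader.py isn't reproduced here, but the routing decision it makes can be sketched as a pure function over pre-extracted PDF stats. The thresholds and path names below are hypothetical; the idea is just that scanned blueprints and image-only forms yield almost no text layer, so they get routed to the expensive vision path only when the cheap paths aren't viable.

```python
def choose_extraction_path(chars_per_page: float, has_form_fields: bool) -> str:
    """Route a PDF to the cheapest viable extraction method (heuristic sketch)."""
    if has_form_fields:
        return "form"          # structured fields can be read directly
    if chars_per_page >= 200:  # enough of a text layer to trust
        return "text"          # cheap: send extracted text to the model
    return "vision"            # scanned/image-only: 5-20x more expensive

assert choose_extraction_path(1500.0, False) == "text"
assert choose_extraction_path(12.0, False) == "vision"
```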
What I took away from this
I was actively directing every session — choosing what to build, reviewing every PR, catching the stuff Claude Code missed (like those env var gaps), making architecture calls. It handled implementation. I handled product and architecture.
CLAUDE.md turned out to be the most important file in the repo. It’s the project instructions file that Claude Code reads at the start of every session — architecture, data models, deployment pipeline, conventions. Without it, each session starts cold. With it, Claude Code picks up exactly where you left off.
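A trimmed-down CLAUDE.md in this spirit might look like the following. The contents are illustrative, not the actual file:

```markdown
# SoundSignal project instructions

## Architecture
- FastAPI backend, Celery workers, PostgreSQL, Redis
- Next.js 14 dashboard; Claude via Bedrock in production

## Conventions
- Every change goes through a worktree branch and a PR; never push to main
- Write tests alongside new endpoints and models
- New env vars must also be added to the ECS task definition in Terraform

## Workflow
- Limit exploration/planning to 2-3 min max before producing code
```

Note how the conventions encode lessons from earlier sessions (the env var rule, the planning time limit), which is what makes each new session start warm.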
The PR workflow was essential. Every change went through a branch and PR, so main was always deployable. If something went wrong, I could just not merge.
Five days isn’t a magic number. I was working on this full-time and knew what I wanted to build. Claude Code removed the implementation bottleneck, but the thinking — what to build, how to structure it, what tradeoffs to accept — was all me.
The short version: Claude Code is a very good pair programmer. It keeps context across a complex codebase, follows established patterns, and executes multi-file changes reliably. It doesn’t replace you. It makes you a lot faster.