March 22, 2026
Building a Medical Research System for Our Daughter with Claude Code
Using Claude Code to help our family understand complex chronic illness research and ask better questions at appointments.
Our daughter has ten diagnoses, fourteen daily medications, and a care team spread across three states. She’s fifteen and mostly housebound. Each specialist sees their slice - the autonomic doctor manages her POTS, the gastroenterologist handles her GI issues, the rheumatologist screens for autoimmune overlap. The connections between domains are hard to see when you’re focused on one.
I’m a software engineer, not a doctor. My wife handles most of the direct care coordination and does deep research of her own. Between us, we can read a paper and I can build tools. Over two days with Claude Code, we built a system that queries medical research databases, synthesizes findings across domains, and cross-references everything against her specific conditions, medications, and labs. It gives us a way to keep up.
The Problem with Complex Chronic Illness
When your kid has ten overlapping conditions, the full picture is hard for anyone to hold. Her mast cell activation affects her autonomic nervous system. Her autonomic dysfunction worsens her GI symptoms. Her GI medications deplete nutrients that worsen her fatigue. Her care team works hard to coordinate, but each specialist is managing their piece of a puzzle that spans five or six disciplines.
We were spending hours every week in PubMed, ClinicalTrials.gov, and Reddit communities, trying to build that full picture ourselves. So we built a system to help.
What We Built
A Python script queries five public APIs - PubMed, ClinicalTrials.gov, OpenFDA adverse events, medRxiv preprints, and Reddit - using search terms derived from a YAML profile of her conditions and medications. All HTTP goes through stdlib urllib; the only external dependency is pyyaml.
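A minimal sketch of what one of those five queries looks like, using the real NCBI E-utilities endpoint for PubMed but with illustrative function and variable names (the actual script's internals may differ):

```python
import json
import urllib.parse
import urllib.request

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def build_pubmed_url(condition_terms, max_results=20):
    # OR together the MeSH term and aliases for one condition.
    term = " OR ".join(f'"{t}"' for t in condition_terms)
    params = urllib.parse.urlencode({
        "db": "pubmed",
        "term": term,
        "retmode": "json",
        "retmax": max_results,
        "sort": "pub_date",  # newest first, for the weekly scout
    })
    return f"{EUTILS}?{params}"

def fetch_pmids(url):
    # Returns the list of PubMed IDs matching the query.
    with urllib.request.urlopen(url, timeout=30) as resp:
        data = json.load(resp)
    return data["esearchresult"]["idlist"]

url = build_pubmed_url(
    ["postural orthostatic tachycardia syndrome", "POTS"])
# pmids = fetch_pmids(url)  # network call; run deliberately
```

The other four sources follow the same shape: build a URL from profile terms, fetch JSON, normalize to a common result record.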
The profile is the source of truth: ten confirmed diagnoses with MeSH terms and aliases, fourteen medications with dosages, her age and location for trial filtering, her care team, and her latest lab results. Every query traces back to this file.
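The shape of that profile, with invented field names and values (only the March 2nd draw date comes from this post):

```yaml
# Illustrative structure only - fields and values are examples, not her data.
patient:
  age: 15
  location: "US"              # used to filter ClinicalTrials.gov results
conditions:
  - name: "Postural orthostatic tachycardia syndrome"
    mesh_term: "Postural Orthostatic Tachycardia Syndrome"
    aliases: ["POTS", "postural tachycardia syndrome"]
medications:
  - name: "omeprazole"
    dose: "20 mg daily"
    started: 2026-03-10
labs:
  - test: "Chromogranin A"
    value: 0                  # placeholder
    units: "ng/mL"
    drawn: 2026-03-02
```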
The Weekly Scout
We designed a Cowork skill that runs the research scout on a weekly schedule - fetching new publications and trials, deduplicating against a JSON ledger of previously seen results, and synthesizing a narrative digest. Sunday morning, a new digest lands in the repo. Each run surfaces only what’s genuinely new.
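The dedup step is plain bookkeeping. A sketch, assuming a JSON ledger of previously seen result IDs keyed by source (names here are illustrative):

```python
import json
from pathlib import Path

LEDGER = Path("seen.json")

def load_seen(path=LEDGER):
    # Ledger maps source name -> set of stable IDs (PMIDs, NCT numbers, ...).
    if path.exists():
        return {src: set(ids)
                for src, ids in json.loads(path.read_text()).items()}
    return {}

def keep_new(source, results, seen):
    # Return only results not in the ledger, and record them as seen.
    known = seen.setdefault(source, set())
    fresh = [r for r in results if r["id"] not in known]
    known.update(r["id"] for r in fresh)
    return fresh

def save_seen(seen, path=LEDGER):
    path.write_text(json.dumps(
        {src: sorted(ids) for src, ids in seen.items()}))

seen = {"pubmed": {"111", "222"}}
fresh = keep_new("pubmed", [{"id": "222"}, {"id": "333"}], seen)
```

Because only the fresh items reach the digest, a quiet week genuinely produces a short digest.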
The Cowork approach means the digest is conversational. After it generates, we can follow up in the same session - “tell me more about that POTS trial,” “skip fibromyalgia this week,” “make this a Word doc for the autonomic specialist” - without switching tools.
Parallel Research Agents
We used Claude Code’s subagent capability to run five specialist researchers in parallel, each focused on a domain: autonomic dysfunction, pain and cervical instability, mast cell immunology, GI and nutrition, and integrative medicine. Each agent reads the full profile and produces graded findings, with every claim tagged Level I through V using the Oxford evidence hierarchy.
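The findings themselves live in markdown, but the grading contract can be sketched as a data structure (this is an illustration of the convention, not code from the system):

```python
from dataclasses import dataclass, field
from enum import Enum

class Oxford(Enum):
    I = "systematic review / meta-analysis"
    II = "randomized controlled trial"
    III = "cohort study"
    IV = "case series / case-control"
    V = "expert opinion / mechanistic reasoning"

ORDER = ["I", "II", "III", "IV", "V"]

@dataclass
class Claim:
    text: str
    level: Oxford
    citation: str  # e.g. a PMID - every claim must cite a source

@dataclass
class Finding:
    domain: str
    claims: list = field(default_factory=list)

    def weakest_evidence(self):
        # Useful for ordering a digest: a finding is only as strong
        # as its weakest supporting claim.
        return max(self.claims,
                   key=lambda c: ORDER.index(c.level.name)).level

f = Finding("autonomic", [
    Claim("Exercise protocols improve POTS outcomes", Oxford.I, "PMID:x"),
    Claim("Possible catestatin link", Oxford.V, "PMID:y"),
])
```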
A lead agent reads all findings and produces a cross-domain synthesis: where do the domains connect? What did one researcher find that changes another’s recommendations?
Twenty-one findings in a single session. The synthesis surfaced connections that are hard to spot when you’re reading papers one at a time.
A Question Worth Asking
One of her March labs showed elevated Chromogranin A - a neuroendocrine biomarker. Her medical record noted it was “likely elevated due to PPI” (a stomach acid medication). A reasonable assumption - PPIs commonly cause this.
The system flagged a timing detail we hadn’t noticed: her blood was drawn on March 2nd. The PPI wasn’t started until March 10th. The elevation predated the medication.
Digging further, the system found that one of her other medications increases peripheral catecholamines by 952%. Chromogranin A is co-released with catecholamines on a 1:1 basis. That same medication’s catecholamine surge produces a fragment called catestatin, which triggers mast cell degranulation - potentially worsening the condition another medication was trying to treat.
Three independent researcher agents flagged the same medication from different angles - autonomic, immunological, and pharmacological. That’s the kind of cross-domain question that’s hard to formulate without seeing the full picture. Now we can bring it to her care team with the evidence organized and the question clearly framed.
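The timing flag is also the kind of check that can be made deterministic rather than left to a model noticing it. A sketch, assuming the profile's labs and medications carry ISO dates and an attribution field (all field names invented):

```python
from datetime import date

def labs_predating_med(labs, med):
    # Flag labs attributed to a medication but drawn before it started.
    start = date.fromisoformat(med["started"])
    return [lab for lab in labs
            if med["name"] in lab.get("attributed_to", "")
            and date.fromisoformat(lab["drawn"]) < start]

labs = [{"test": "Chromogranin A",
         "drawn": "2026-03-02",
         "attributed_to": "PPI"}]
flags = labs_predating_med(labs, {"name": "PPI", "started": "2026-03-10"})
```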
The Viewer
Twenty-one research findings in markdown aren’t useful if you can’t navigate them. We built a static site generator that transforms them into an interactive viewer with three levels of progressive disclosure.
The hub page shows safety alerts at the top, actionable recommendations grouped by timeline, and cross-domain connection pills. Each finding page keeps its lead section always visible, with collapsible detail below. Medical terms get dotted underlines - click for a definition sourced from the corpus. Citations link to PubMed.
Vanilla HTML, CSS, and JS. Built in one session with subagent-driven development - each task dispatched to a fresh agent, reviewed, and committed.
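With no JS framework, the collapsible sections map naturally onto native `<details>` elements. A sketch of the kind of transform the generator performs (function name and markup are assumptions, not the site's actual code):

```python
import html

def finding_page(lead, sections):
    # Lead paragraph is always visible; each detail section collapses
    # behind a native <details>/<summary> pair - no JS required.
    parts = [f"<section class='lead'>{html.escape(lead)}</section>"]
    for title, body in sections:
        parts.append(
            f"<details><summary>{html.escape(title)}</summary>"
            f"<div>{html.escape(body)}</div></details>"
        )
    return "\n".join(parts)

page = finding_page(
    "Elevated CgA predates the PPI.",
    [("Mechanism", "CgA is co-released with catecholamines.")],
)
```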
The viewer is for us on the couch, not during a fifteen-minute appointment.
The Research Query Skill
The latest addition lets us paste a URL, a chunk of text, or a question, and get an answer cross-referenced against the entire corpus. We pasted a link about sodium-potassium pump dysfunction in ME/CFS - a new drug that’s five years from market. The system fetched the article, connected it to our daughter’s exercise intolerance and mitochondrial issues, found that magnesium (a required cofactor for the pump) is being depleted by her PPI, and suggested IV magnesium through the in-home infusion provider already on her care team.
External research comes in, gets grounded in her specific situation, and produces questions worth asking at the next appointment.
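In the real skill the model reads the corpus directly, but the retrieval half of "cross-referenced against the entire corpus" can be approximated deterministically. A toy stand-in that ranks findings by shared-term counts (everything here is illustrative):

```python
import re
from collections import Counter

def tokens(text):
    # Crude tokenizer: lowercase words of four-plus letters.
    return Counter(re.findall(r"[a-z]{4,}", text.lower()))

def rank_findings(query_text, corpus):
    # corpus: {filename: markdown text}; score by overlapping term counts.
    q = tokens(query_text)
    scores = {name: sum((tokens(body) & q).values())
              for name, body in corpus.items()}
    return sorted(scores, key=scores.get, reverse=True)

corpus = {
    "autonomic.md": "sodium potassium pump magnesium cofactor",
    "gi.md": "proton pump inhibitor depletes magnesium",
    "pain.md": "cervical instability headache",
}
ranked = rank_findings("magnesium pump dysfunction", corpus)
```

A real version would weight rare terms more heavily; the point is that the relevant GI and autonomic findings both surface for a pasted article about pump dysfunction and magnesium.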
What Worries Us
This system does not give medical advice. It produces evidence-graded research summaries that help us prepare for appointments. Every claim must cite a source, speculation is flagged explicitly, and treatment suggestions always say “discuss with care team.”
The risk of hallucination in medical contexts is real. A confident wrong answer about a drug interaction isn’t a minor bug. Evidence grading is the main defense - Level I (systematic review) means something different from Level V (expert opinion), and the system makes that visible.
We also think about the emotional weight. When you’re building this for your kid, the line between research tool and anxious doom-scrolling is thin. The progressive disclosure design is partly about pacing - recommendations first, mechanism details only if you expand them.
What’s Next
Over time, we want to evolve the research team and scout to find patients who present like our daughter - similar constellations of conditions, similar ages - and surface what’s worked for them. The pattern-matching across case reports and cohort studies is exactly the kind of work that’s tedious for humans and natural for this system.
What Shipped
Claude Code skills and Python scripts. Everything runs locally.
- Research scout querying five medical databases with condition-specific search terms
- Parallel researcher agents producing evidence-graded findings
- Cross-domain synthesis identifying connections between conditions
- Static site generator with three-layer progressive disclosure
- Ad-hoc research query skill with full corpus cross-referencing
- Auto-saved research notes for future reference
Built in two days. The hardest part wasn’t the code - it was learning enough about mast cell biochemistry to know whether the output made sense.
Closing
This won’t fix her. We know that. None of this is going to measurably change her prognosis or lead to a recovery. Her conditions are chronic, complex, and mostly without cures.
But understanding what’s happening to her body, having the right questions ready when we walk into an appointment, and feeling like we’re doing something with the hours we’d otherwise spend worrying - that matters to us. It makes us better advocates. It’s how we cope.
The pattern - structured medical profile, automated research discovery, evidence-graded synthesis, progressive disclosure for review - feels like something that shouldn’t require a software engineer parent to exist. For now, it does.
Built with Claude Code and Claude Cowork. ~30 commits across two sessions.