How AI Can Cut the Cost of Personalized Gene Therapy
Key takeaways:
A critical cost component of personalized gene therapy is senior scientist and senior regulator time. Baby KJ Muldoon's custom CRISPR therapy required 45+ scientists working for six months.
Frontier AI models with biological reasoning capabilities can now take on key tasks in therapeutic design and regulatory drafting, but they remain unintegrated into the development workflow and therefore underutilized.
Manufacturing, preclinical studies, and clinical execution remain fully human. As AI shrinks the upstream slices, these operational layers become the dominant cost, and the dominant bottleneck.
Every personalized therapy starts with a bill measured in human hours, not dollars. The team that designed and dosed Baby KJ Muldoon's custom CRISPR base-editing therapy proved that. More than 45 scientists across Penn, CHOP, the Innovative Genomics Institute, Harvard, MGH, Jackson Lab, Aldevron, IDT, and Acuitas worked flat out for six months to treat one child.
Fyodor Urnov, scientific director of the Innovative Genomics Institute at Berkeley, put it well in the New York Times:
"Scientists burned a vat of midnight oil on this the size of San Francisco Bay."
That is the key bottleneck for personalized medicine. Not the reagent cost. The human capital. Forty-five scientists, six months, one child. The science works. But we cannot burn a vat that size for every patient. We do not have that many vats.
The question worth asking now, as large language models cross the 90th percentile on biological reasoning benchmarks, is which slices of that senior scientist time these tools actually take off the bill.
Here is how we think about it at Nome.
What AI changes in therapeutic study design
In KJ's program, the design phase was roughly eight weeks of senior computational biology and molecular biology work. Iterating guide RNA candidates. Running off-target predictions across the genome. Selecting the right base editor for the specific substitution. Tuning LNP chemistry against the cargo. Reading hundreds of papers to triangulate decisions.
Properly trained and harnessed AI models compress the entire study design phase by reasoning across the full landscape of what is known about a gene, variant, and disease mechanism, then mapping it against every available experimental tool, model system, and regulatory precedent. The result is a complete study plan that is grounded in clear precedents and highly specific to that patient's case. Wasted experimental time and cost shrink sharply, because every experiment is chosen for a reason at the start.
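One of those design steps, filtering guide RNA candidates by off-target risk, can be sketched in a few lines. This is a deliberately simplified illustration: the sequences are toy 8-mers (real guides are ~20 nt), and a mismatch-count heuristic stands in for the genome-wide prediction tools a real pipeline would use.

```python
# Hypothetical sketch of guide RNA triage. Sequences and the
# mismatch-count scoring are illustrative stand-ins for real
# genome-wide off-target prediction tools.

def mismatches(guide: str, site: str) -> int:
    """Count positional mismatches between a guide and a genomic site."""
    return sum(a != b for a, b in zip(guide, site))

def rank_guides(guides, offtarget_sites, min_mismatches=3):
    """Keep guides whose closest off-target site differs by at least
    `min_mismatches` bases, ranked by that closest distance (descending)."""
    scored = []
    for g in guides:
        closest = min(mismatches(g, s) for s in offtarget_sites)
        if closest >= min_mismatches:
            scored.append((closest, g))
    return [g for _, g in sorted(scored, reverse=True)]

# Toy example: the first guide is only 1 mismatch from an off-target
# site, so it is filtered out; the survivors rank by safety margin.
guides = ["ACGTACGT", "ACGTTTTT", "GGGGACGT"]
sites = ["ACGTACGA", "TTTTTTTT"]
print(rank_guides(guides, sites, min_mismatches=2))
# → ['GGGGACGT', 'ACGTTTTT']
```

The point is not the heuristic itself but the shape of the work: a model that can run this triage across the whole candidate space, with the real prediction tools plugged in, replaces weeks of manual iteration.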
As somebody who has personally sat through years(!) of manual work mapping this out for my own program, this is a massive change. These tools also have a democratizing effect: more scientists can run this work successfully, expanding total capacity from a few academic centers to the scale required to address millions of patients with unmet need.
What AI changes in regulatory preparation
The regulatory preparation for KJ's program was roughly ten weeks of senior FDA-experienced labor, much of it absorbed inside Penn and CHOP. Drafting the single-patient IND. Building the pre-IND briefing book. Writing the IRB protocol for in vivo base editing in an infant, which had no direct precedent. Mapping prior CRISPR INDs to build the safety case. Responding to FDA questions during the seven-day approval window.
This is not a task where an off-the-shelf chatbot helps. Regulatory drafting for personalized therapeutics is one of the most technically exacting workflows in medicine. Every data point and sentence in the filing matters. A misstatement in an IND can delay a filing by months. A poorly mapped precedent can sink a pre-IND meeting. The models that are useful here need to be specially engineered, trained on regulatory precedent data, and harnessed within workflows that enforce accuracy at every step.
When that engineering is done right, the impact is large. A purpose-built model grounded in prior INDs, FDA guidance, and IRB precedent can produce a strong first-draft IND, briefing book, and IRB protocol, with citations, in hours. The senior regulator's time shifts from drafting on a blank page to refining and pressure-testing.
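One piece of "enforce accuracy at every step" can be made concrete: a gate that refuses to pass any drafted section whose citations do not resolve to real documents in the precedent corpus. The corpus IDs and section structure below are hypothetical, a minimal sketch of the idea rather than any actual filing system.

```python
# Illustrative sketch of one accuracy-enforcing harness step: every
# drafted section must cite documents that actually exist in the
# precedent corpus before it moves to human review. IDs are invented.

PRECEDENT_CORPUS = {"IND-2019-014", "FDA-GT-GUID-2024", "IRB-CHOP-2023"}

def validate_citations(draft_sections):
    """Return titles of sections whose citations are missing or not in
    the corpus; an empty list means the draft passes this gate."""
    failures = []
    for section in draft_sections:
        cites = section.get("citations", [])
        if not cites or any(c not in PRECEDENT_CORPUS for c in cites):
            failures.append(section["title"])
    return failures

draft = [
    {"title": "Safety case", "citations": ["IND-2019-014"]},
    {"title": "Dosing rationale", "citations": ["IND-9999-001"]},  # unknown source
]
print(validate_citations(draft))
# → ['Dosing rationale']
```

Gates like this are what separate a purpose-built regulatory model from a chatbot: the draft cannot advance with an unverifiable claim, so the senior regulator's review starts from a grounded document.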
What AI changes in operational coordination
A personalized therapeutic program is not just science and regulation. It is a web of coordination across institutions: the academic lab running the design work, the CRO handling preclinical studies, the CDMO manufacturing the drug product, the clinical site preparing for dosing, the regulatory team filing in parallel, and the family waiting for updates. In KJ's program, that coordination was absorbed by senior people at Penn and CHOP who held the full picture in their heads and ran it through emails, spreadsheets, and meetings.
That coordination tax scales badly. Every new program requires someone to rebuild the project plan, re-identify the right vendors, re-negotiate timelines, and re-learn lessons that the last program already paid for. The institutional knowledge compounds inside individuals, not inside systems.
AI agents built for program orchestration can take on the structured coordination work: tracking milestones across vendors, flagging timeline conflicts before they cascade, generating status updates for families and collaborators, and surfacing relevant precedent from prior programs. The senior program lead's time shifts from chasing updates to making decisions about where to intervene.
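The conflict-flagging piece of that orchestration work reduces to checking milestone dependencies against due dates. The milestones, vendors, and dependency structure below are invented for illustration; a real orchestration agent would pull these from live project systems.

```python
# Minimal sketch of milestone conflict flagging across vendors.
# All names and dates are hypothetical.
from datetime import date

milestones = {
    "tox_study_complete": {"owner": "CRO",        "due": date(2025, 9, 15)},
    "gmp_batch_released": {"owner": "CDMO",       "due": date(2025, 8, 15)},
    "ind_filed":          {"owner": "Regulatory", "due": date(2025, 9, 10)},
}
# Each dependency: the downstream milestone cannot close before the upstream one.
dependencies = [
    ("tox_study_complete", "ind_filed"),
    ("gmp_batch_released", "ind_filed"),
]

def timeline_conflicts(milestones, dependencies):
    """Flag dependencies where the upstream due date lands after the
    downstream one, i.e. a conflict that will cascade if unaddressed."""
    return [(up, down) for up, down in dependencies
            if milestones[up]["due"] > milestones[down]["due"]]

print(timeline_conflicts(milestones, dependencies))
# → [('tox_study_complete', 'ind_filed')]
```

The check itself is trivial; the leverage is in keeping the dependency graph in a system rather than in one senior person's head, so the next program inherits it instead of rebuilding it.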
What still requires human time, today
Animal studies still require bench scientists to dose, monitor, harvest, and analyze. Off-target safety verification still runs on real wet-lab cycles. GMP manufacturing of a custom therapeutic runs on physical reagents and trained operators, with QC and documentation that must match the regulatory filing line for line. The clinical apparatus, the IRB, the dosing team, the monitoring board, the family consent conversations. All human, all required.
But the physical layer is not standing still. Labs are beginning to deploy high-throughput cell and organoid screening platforms that run largely via robotic automation. As these robotic systems mature and physical AI becomes more capable, the leverage point shifts: the limiting factor is no longer whether a step can be automated, but whether there is an operating system that can coordinate the automation across every step in the right sequence, with the right data flowing between them.
What's possible with Nome
At Nome, we see the possibility for the cost of personalized medicine to collapse to the cost of compute plus manufacturing. That will only happen through a series of well-coordinated AI agents working together to mesh in silico predictions, operational actions, data analysis, and physical infrastructure.
As we achieve this, the “vat of oil” gets much smaller.
Sources
Musunuru et al., NEJM, May 2025
Gina Kolata, "Baby Is Healed With World's First Personalized Gene-Editing Treatment," New York Times, May 15, 2025 (Urnov quote)
Innovative Genomics Institute, "First Patient Treated with On-Demand CRISPR Therapy," May 15, 2025
MIT Technology Review, "This baby boy was treated with the first personalized gene-editing drug," May 15, 2025 (45+ scientists)
Genetic Engineering News, ASGCT 2025 plenary coverage, May 16, 2025
Nature's 10 profile, KJ Muldoon, December 2025
Matt Wilsey, "Zero Days to Lose," February 2025
About Nome
Nome Therapeutics is the Operating System for Personalized Therapeutics, turning every rare disease patient's genome into an actionable treatment plan and helping execute on it at the lowest cost and highest speed in the industry.