Beyond the Cage: How the FDA Modernization Act is Accelerating the Shift Toward Human-Based Models

For nearly a century, animal testing has been the gatekeeper of drug development — a necessary but imperfect bridge between the lab bench and the clinic. Mice, monkeys, and other animals have long served as the proxies through which safety and efficacy were judged before any human could take a new compound. But in 2022, a quiet revolution began.

That year, the FDA Modernization Act 2.0 rewrote a rule that had stood since 1938 — a rule born from a mass poisoning caused by an untested sulfanilamide formulation. In response to that tragedy, Congress mandated toxicity testing in animals. Now, the updated Act states that animal testing is no longer legally required for new drug applications. It’s a technical change with far-reaching consequences for the drug discovery industry — and, above all, a recognition that science has advanced beyond a one-size-fits-all reliance on animal models.

Fast forward to 2025, and the transformation is gaining momentum. The FDA recently released its Roadmap to Reducing Animal Testing in Preclinical Safety Studies, outlining a path toward what regulators call New Approach Methodologies (NAMs) — innovative systems that can replicate human biology in silico or in vitro, often more accurately than animals ever could. Meanwhile, the NIH launched ORIVA — the Office of Research Innovation, Validation, and Application — tasked with scaling up human-relevant research models nationwide.

Together, these initiatives mark the dawn of a new era in biomedical science: one that’s human by design.

The FDA Modernization Act 2.0 — More Than a Legal Update

The original Federal Food, Drug, and Cosmetic Act of 1938 made animal testing a legal prerequisite for drug approval — a reaction to a public health disaster involving untested sulfa drugs. For decades, that requirement seemed unassailable. Even the FDA Modernization Act of 1997, which streamlined clinical research regulations, left it untouched.

But as failures mounted — nine out of ten drugs that look promising in animals still fail in human trials — it became increasingly clear that the old model wasn’t working. Animal studies often mispredict human responses, especially for biologics like monoclonal antibodies, where interspecies immune differences obscure key safety signals.

By 2022, the science — and the sentiment — had shifted. Lawmakers recognized that animal tests were not only slow and expensive but frequently misleading. The FDA Modernization Act 2.0 removed the mandatory animal-testing clause and explicitly opened the door for validated alternatives: organoids, organs-on-chips, induced pluripotent stem cell (iPSC)-derived assays, and computational models.

Crucially, the law doesn’t ban animal studies — it simply stops requiring them. Sponsors can now choose the most scientifically appropriate models, human or otherwise, to demonstrate safety and efficacy.

The National Academy of Medicine, which has long championed “human-relevant science,” praised the move as “a catalyst for innovation that aligns ethical progress with scientific rigor.”

In essence, the FDA has given researchers permission — and encouragement — to rethink what “preclinical” really means.

From Petri Dishes to Mini-Organs

If you’ve seen an organoid under a microscope, you understand the fascination. These 3D clusters of cells self-organize into miniature versions of tissues — liver, lung, intestine, even brain — that mimic key aspects of human physiology. They can metabolize drugs, secrete hormones, and exhibit disease phenotypes.

Organs-on-chips take it one step further. These microfluidic devices recreate the architecture and mechanical forces of real organs — a “lung-on-a-chip” literally breathes, while a “liver-on-a-chip” processes drugs under continuous flow. Companies like 28bio, Emulate, Nagi Bioscience, CN Bio, and Mimetas have developed validated systems that can predict human toxicities better than traditional animal models. In fact, the FDA’s ISTAND pilot program recently accepted a human Liver-Chip into its qualification process for predicting drug-induced liver injury; in validation studies, the chip correctly identified 87% of hepatotoxic drugs, far outperforming conventional rodent assays.

But NAMs don’t stop at organoids and chips. Human cell-based assays and high-throughput screening (HTS) have quietly become some of the most impactful tools reshaping early decision-making. Robotic platforms can now evaluate thousands of compounds across panels of human iPSC-derived cells — cardiomyocytes, hepatocytes, neurons — each capturing facets of human physiology and genetic diversity. Programs like Tox21 have screened over 10,000 chemicals this way, revealing endocrine disruptors and off-target effects that animal models routinely missed.

These cell-based platforms are especially powerful for biologics. Cytokine-release assays using human blood, for example, are now standard for assessing immunotoxicity — a direct response to the TGN1412 disaster, in which monkey studies failed to predict a life-threatening cytokine storm in humans. HTS-enabled imaging systems can monitor subtle phenotypes such as mitochondrial stress, lysosomal swelling, or synaptic activity, giving developers a granular, human-specific picture of a drug’s liabilities long before it reaches clinical testing.

And then there are ex vivo human tissues — precision-cut slices of donated liver, heart, or kidney that retain native cellular architecture, extracellular matrix, and metabolic pathways. These tissues often reveal toxicities invisible in animal models. Human heart slices, for instance, have detected arrhythmogenic effects of kinase inhibitors that rodents completely failed to flag. When combined with organs-on-chips, ex vivo tissues even allow multi-organ interaction studies, such as liver-to-heart metabolic cross-talk.

Taken together, these platforms represent a spectrum of human-based systems — from high-throughput functional assays to architecturally intact tissues — that can be deployed strategically depending on the scientific question at hand. They aren’t just incremental improvements; they fundamentally redefine what “preclinical evidence” can look like.

Figure 1. Key impacts of the FDA Modernization Act 2.0 on clinical research laboratories.

When Machine Learning Meets Molecular Biology

If the first revolution is biological, the second is digital. Artificial intelligence (AI) is rapidly transforming how we discover, design, and validate drugs — and it’s doing so without a single cage in sight.

Imagine training a neural network on millions of molecular structures, clinical outcomes, and toxicity data. The algorithm begins to “learn” what makes a molecule safe or dangerous, active or inert. These models can now flag potential risks — or generate entirely new, optimized chemotypes — in silico, without touching a test tube.
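To make that concrete, here is a minimal sketch of the idea in Python, assuming RDKit and scikit-learn are available. The molecules, labels, and the random forest (standing in for a neural network) are purely illustrative, not a real training set or production toxicity model.

```python
# Minimal sketch: a fingerprint-based toxicity classifier.
# Assumes RDKit and scikit-learn are installed; the molecules, labels,
# and the random forest (standing in for a neural network) are toy
# placeholders, not a real training set or model.
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier

def featurize(smiles: str, n_bits: int = 2048) -> np.ndarray:
    """Convert a SMILES string into a Morgan fingerprint bit vector."""
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=n_bits)
    arr = np.zeros((n_bits,), dtype=np.int8)
    DataStructs.ConvertToNumpyArray(fp, arr)
    return arr

# Toy training data: 1 = toxic, 0 = non-toxic (illustrative labels only)
train_smiles = ["CCO", "c1ccccc1", "CC(=O)Oc1ccccc1C(=O)O", "C(Cl)(Cl)Cl"]
train_labels = [0, 0, 0, 1]

X = np.stack([featurize(s) for s in train_smiles])
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X, train_labels)

# Score a new candidate entirely in silico, before any wet-lab work.
candidate = "CCN(CC)CC"  # illustrative structure
prob_toxic = model.predict_proba([featurize(candidate)])[0, 1]
print(f"Predicted toxicity risk: {prob_toxic:.2f}")
```

Real platforms train on millions of curated structure and outcome pairs and use far richer molecular representations, but the workflow (featurize, train, predict before any synthesis) is the same.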

A few striking examples:

- In 2023, Insilico Medicine advanced the first AI-designed drug for idiopathic pulmonary fibrosis into Phase II clinical trials — achieving this milestone in record time.
- DeepMind’s AlphaFold cracked the protein-folding problem, allowing scientists to predict 3D protein structures with near-experimental accuracy — a leap that fuels rational drug design.
- Companies like Schrödinger and Atomwise use generative AI to create molecules that bind targets predicted from human data, sidestepping much of the trial-and-error that once relied on animal validation.

Even for biologics, AI is finding its footing. Machine learning models can now analyze antibody sequences to predict immunogenicity — identifying variants likely to provoke unwanted immune responses in humans. Computational platforms like physiologically based pharmacokinetic (PBPK) and quantitative systems pharmacology (QSP) models simulate how a drug will behave in the human body — absorption, distribution, metabolism, excretion — all before the first dose is ever given.
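Full PBPK models track dozens of physiological compartments, but the core idea can be sketched with a one-compartment model with first-order absorption and elimination. Every parameter value below is illustrative, not drawn from any real drug.

```python
# Minimal sketch: one-compartment oral PK model with first-order
# absorption (ka) and elimination (ke). Real PBPK models add many
# physiological compartments; every parameter value here is illustrative.
import numpy as np

def concentration(t, dose_mg, f_oral, v_l, ka, ke):
    """Plasma concentration (mg/L) at time t (hours) after one oral dose."""
    return (f_oral * dose_mg * ka) / (v_l * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

t = np.linspace(0, 48, 481)  # hours
c = concentration(t, dose_mg=100, f_oral=0.8, v_l=40.0, ka=1.0, ke=0.1)

cmax = c.max()
tmax = t[c.argmax()]
auc = float(np.sum((c[:-1] + c[1:]) * np.diff(t)) / 2)  # trapezoidal AUC, mg*h/L
half_life = np.log(2) / 0.1

print(f"Cmax = {cmax:.2f} mg/L at t = {tmax:.1f} h")
print(f"AUC(0-48 h) = {auc:.1f} mg*h/L, elimination half-life = {half_life:.1f} h")
```

Exposure metrics like Cmax, Tmax, and AUC are exactly what these simulations feed into dose-selection decisions before a first-in-human study.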

These are still predictions that often require validation with lab-based techniques, but the advantages they offer are unquestionable. Going forward, creating feedback loops — where experimental results are fed back into algorithms to refine future predictions — will likely be key. After all, the main limitation of any model is the quality and diversity of the data it learns from.
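One way to picture that feedback loop is as an active-learning cycle: the model nominates the compounds it is least certain about, those get tested experimentally, and the results retrain the model. The sketch below simulates this with scikit-learn; `run_assay` is a hypothetical stand-in for a real wet-lab readout, and the data are synthetic.

```python
# Minimal active-learning sketch: the model nominates compounds it is
# least certain about, those are "tested" (run_assay simulates a wet-lab
# readout), and the new results retrain the model. Everything here is
# a toy stand-in, not a real assay or screening library.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def run_assay(features):
    """Hypothetical stand-in for an experimental readout (1 = hit, 0 = clean)."""
    noise = 0.1 * rng.standard_normal(len(features))
    return (features[:, 0] + noise > 0.5).astype(int)

pool = rng.random((500, 16))          # features of untested compounds
labeled_x, pool = pool[:20], pool[20:]
labeled_y = run_assay(labeled_x)      # initial experimental batch

for round_idx in range(5):
    model = RandomForestClassifier(n_estimators=100, random_state=round_idx)
    model.fit(labeled_x, labeled_y)

    # Uncertainty sampling: probabilities closest to 0.5 are most informative.
    probs = model.predict_proba(pool)[:, 1]
    pick = np.argsort(np.abs(probs - 0.5))[:10]

    new_y = run_assay(pool[pick])     # feed lab results back into the model
    labeled_x = np.vstack([labeled_x, pool[pick]])
    labeled_y = np.concatenate([labeled_y, new_y])
    pool = np.delete(pool, pick, axis=0)

print(f"Labeled compounds after 5 rounds: {len(labeled_y)}")
```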

Taken together, these digital methods are turning drug development into a data-driven science of prediction rather than extrapolation. By integrating human clinical and omics data early, researchers can design safer, more effective drugs — and do it faster.

Building Trust in New Systems

None of this progress comes without challenges. Regulators, scientists, and companies alike are still working out how to validate and standardize these new models.

The FDA’s Roadmap to Reducing Animal Testing lays out a cautious plan. Over the next three years, the agency aims to:

- Pilot NAMs alongside animal studies to compare predictive performance directly.
- Create open-access toxicity databases combining animal, human, and NAM data to fuel future modeling.
- Shorten animal testing timelines — for instance, reducing six-month primate toxicity studies for antibodies to just three months when supported by strong NAM evidence.
- Implement regulatory relief for companies submitting validated NAM data, encouraging broad adoption.
- Track progress biannually, monitoring metrics such as testing costs, time to approval, and rates of safety issues identified by NAMs vs. animals.

Ultimately, the goal is clear: make animal studies the exception rather than the norm. Within five years, it’s conceivable that animal testing for biologics — starting with monoclonal antibodies — will be greatly reduced.

Validation is essential. For NAMs to be accepted, they must prove not only reproducibility across labs but also correlation with real-world human outcomes. That’s why agencies are investing heavily in retrospective analyses (re-testing old drugs in NAM systems to see how well they would have predicted human toxicity) and prospective validation (running new drugs in both systems and comparing outcomes).
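In practice, a retrospective analysis boils down to a concordance table: for a panel of drugs with known clinical outcomes, how often did the NAM call them correctly? The toy numbers below are invented purely to show the calculation behind figures like the 87% sensitivity cited earlier.

```python
# Toy sketch of a retrospective concordance check: NAM calls versus
# known clinical outcomes for a panel of previously studied drugs.
# The arrays are invented purely to show the calculation.
import numpy as np

clinical_outcome = np.array([1, 1, 1, 0, 0, 1, 0, 0, 1, 0])  # 1 = human toxicity occurred
nam_prediction   = np.array([1, 1, 0, 0, 0, 1, 0, 1, 1, 0])  # 1 = NAM flagged the drug

tp = np.sum((nam_prediction == 1) & (clinical_outcome == 1))
fp = np.sum((nam_prediction == 1) & (clinical_outcome == 0))
fn = np.sum((nam_prediction == 0) & (clinical_outcome == 1))
tn = np.sum((nam_prediction == 0) & (clinical_outcome == 0))

sensitivity = tp / (tp + fn)  # share of truly toxic drugs the NAM caught
specificity = tn / (tn + fp)  # share of safe drugs it correctly cleared
print(f"Sensitivity: {sensitivity:.0%}, Specificity: {specificity:.0%}")
```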

To support this, the FDA and NIH are investing in cross-platform validation — comparing organ-on-chip results with ex vivo tissue data, benchmarking cell-based HTS assays against clinical outcomes, and building large public datasets that merge animal, human, and computational readouts. These efforts aim to standardize protocols, reduce variability, and ensure that NAM-generated evidence is reproducible across labs.

These efforts are already paying off. In some cases, organ-on-chip results have matched clinical toxicity outcomes more closely than historical animal data, strengthening confidence that we’re heading in the right direction.

A New Definition of Preclinical

This transition isn’t just about replacing one model with another; it’s about redefining the very logic of preclinical science. The term “preclinical” once meant “before we test in humans.” In the coming decade, it may mean “human data before clinical trials.”

Consider this emerging workflow (sketched in code after the list):

  1. A candidate drug is screened through AI models for toxicity and immunogenicity.

  2. High-throughput human cell-based assays identify off-target effects and prioritize leads.

  3. Validated human organoid, chip, and ex vivo tissue systems test safety, metabolism, and tissue-specific vulnerabilities.

  4. PBPK simulations estimate safe starting doses for human trials.

  5. Microdosing or imaging studies confirm distribution in real volunteers.
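Conceptually, the workflow behaves like a gating pipeline: each stage records its evidence and returns pass or fail, and a candidate only advances if every earlier gate passes. The sketch below is hypothetical; the stage names, thresholds, and results are placeholders rather than any real platform's API.

```python
# Hypothetical sketch of the workflow above as a gating pipeline: each
# stage records its evidence and returns pass/fail, and a candidate only
# advances if every earlier gate passes. Stage names, thresholds, and
# results are placeholders, not any real platform's API.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Candidate:
    name: str
    results: dict = field(default_factory=dict)

def ai_screen(c: Candidate) -> bool:
    c.results["predicted_tox_risk"] = 0.12     # from in silico models
    return c.results["predicted_tox_risk"] < 0.30

def cell_assays(c: Candidate) -> bool:
    c.results["off_target_hits"] = 1           # from human HTS panels
    return c.results["off_target_hits"] <= 2

def organ_systems(c: Candidate) -> bool:
    c.results["liver_chip_injury"] = False     # organoid / chip / ex vivo data
    return not c.results["liver_chip_injury"]

def pbpk_dose(c: Candidate) -> bool:
    c.results["starting_dose_mg"] = 5.0        # simulated first-in-human dose
    return c.results["starting_dose_mg"] > 0

PIPELINE: List[Callable[[Candidate], bool]] = [
    ai_screen, cell_assays, organ_systems, pbpk_dose,
]

def evaluate(c: Candidate) -> bool:
    for stage in PIPELINE:
        if not stage(c):
            print(f"{c.name} stopped at {stage.__name__}")
            return False
    print(f"{c.name} cleared all preclinical gates: {c.results}")
    return True

evaluate(Candidate("mAb-001"))
```

In a real program, each gate would be backed by validated assays and regulator-agreed acceptance criteria; only after all of them pass would microdosing or imaging studies in volunteers (step 5) begin.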

By the time the first full clinical trial begins, the molecule will have already undergone extensive “humanized” evaluation — without relying on a single rodent or primate.

That’s not just more ethical; it’s more efficient. The average cost to develop a monoclonal antibody, currently around $700 million, could drop significantly if NAMs reduce late-stage attrition. Shorter development times mean faster access to life-saving therapies — and potentially lower drug prices.

Beyond Ethics — Toward Efficiency and Accuracy

Animal welfare has long been the moral driver of this conversation. But today, the scientific and economic arguments are equally compelling.

Animal immunogenicity rarely predicts human immune reactions. Stress, microbiome differences, and metabolic variation all add noise that obscures true signals. In contrast, human-based systems — whether engineered or computational — offer data directly relevant to patient biology.

And as animal testing costs soar (non-human primates can exceed $50,000 each), even pragmatic voices in pharma are turning toward NAMs as both an ethical and economic necessity.

Human cell-based assays and ex vivo tissues capture nuances of human metabolism, immune reactivity, and genetic diversity that animal systems simply cannot, making them indispensable complements to engineered microphysiological platforms.

This isn’t about rejecting animal research wholesale — it’s about acknowledging that it was always a proxy. Now, we finally have tools precise enough to study us on our own biological terms.

The Road Ahead

Integrating biology, bioengineering, and AI is transforming the scientific landscape. It’s not hard to imagine a future where “preclinical” means a dynamic ecosystem of human cells, computational twins, and data-rich simulations that mirror the diversity of real patients.

The shift away from animal testing isn’t just a regulatory milestone — it’s a philosophical one. It redefines what it means to understand human biology.

From organoids to algorithms, the path forward is clear: the best model for a human is, and always has been, another human.

Would you like to stay updated on the latest breakthroughs in biomedical science? 
Subscribe to my blog and join me in exploring the next frontier of medicine!