Jason Stiltner

ML Engineer • Applied Research • Production AI Systems

I work at the intersection of frontier ML research and real-world AI systems, turning ideas like continual learning, clinical RAG, and agentic workflows into working software.

Currently at HCA Healthcare, serving 180+ hospitals with 44M+ patient encounters annually.

Recent Work

Featured
Research Implementation · +89% improvement

Extended Google's NeurIPS 2025 Nested Learning paper with bidirectional knowledge bridges, improving continual learning performance by 89% at high regularization settings. I implemented the full system in a single day, demonstrating that rapid research-to-production work is feasible when deep domain knowledge is combined with production engineering discipline.

The work addresses a critical challenge in production ML: how to enable models to learn new information without forgetting established knowledge. This matters particularly in safety-critical domains like healthcare, where preserving baseline capabilities while adapting to new protocols is essential.
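The trade-off is easiest to see in a regularization-based continual learner: a penalty that anchors weights near previously learned values preserves old knowledge, but at high strength it also throttles new learning, which is exactly the regime where the bridge extension helped most. The sketch below is a generic EWC-style illustration of that tension under assumed names, not the actual bridge implementation.

```python
import torch

def ewc_penalty(model, anchor_params, fisher, reg_strength):
    """Quadratic penalty pulling weights toward values learned on earlier
    tasks, scaled per-parameter by Fisher importance."""
    penalty = 0.0
    for name, p in model.named_parameters():
        penalty = penalty + (fisher[name] * (p - anchor_params[name]).pow(2)).sum()
    return reg_strength * penalty

def train_step(model, optimizer, loss_fn, batch, anchor_params, fisher, reg_strength):
    # Illustrative only: high reg_strength protects established knowledge
    # but blocks adaptation -- the setting the bridge extension targets.
    optimizer.zero_grad()
    task_loss = loss_fn(model(batch["x"]), batch["y"])
    total = task_loss + ewc_penalty(model, anchor_params, fisher, reg_strength)
    total.backward()
    optimizer.step()
    return total.item()
```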

Research Implementation at Production Scale

I specialize in translating frontier research into production-ready systems. This capability comes from 20 years building integration layers across domains—from language instruction in Paris to voice-first field applications for beekeeping to cross-platform gaming APIs. When I encounter a new ML architecture, I recognize patterns I've implemented before in other contexts.

This pattern recognition enables rapid implementation without sacrificing production quality: comprehensive testing, proper error handling, and systems designed for reliability in safety-critical environments.

Learn More About My Background

What Drives Me

I'm driven by learning—both how machines learn and how I can keep learning myself. The most important technical challenge of our time is building AI systems that are genuinely helpful, harmless, and honest. I believe this requires both rigorous safety research and production engineering discipline—understanding how models behave in theory and ensuring they behave reliably in practice.

My work in clinical AI has taught me that safety isn't a constraint on capability—it's a design requirement. Systems that refuse to answer when uncertain are more trustworthy than systems that always produce output. This philosophy extends beyond healthcare to all frontier AI development.
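As a toy illustration of that abstention principle, here is a minimal sketch assuming a standard softmax classifier and a single input; the threshold and names are illustrative, not a production clinical system.

```python
import torch
import torch.nn.functional as F

def classify_or_abstain(model, x, threshold=0.9):
    """Return a prediction only when the top-class probability clears the
    threshold; otherwise abstain explicitly instead of guessing."""
    with torch.no_grad():
        probs = F.softmax(model(x), dim=-1)  # assumes x is a batch of one
    confidence, label = probs.max(dim=-1)
    if confidence.item() < threshold:
        return None  # an explicit "I don't know" beats a confident wrong answer
    return label.item()
```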

Explore Further

See what I've built, learn my story, or get in touch