Background
ML engineer who ships research.
Research: multi-agent coordination, continual learning
Production: ML systems, cloud infrastructure
Research
Multi-agent coordination
Grounded Commitment Learning: verifiable behavioral contracts for AI coordination, applying Hart-Moore incomplete-contract theory. 36.8% reduction in hold-up, validated across 382 statistical tests.
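A minimal sketch of the core idea, not the paper's implementation: a commitment is a predicate over an agent's observable action trace, so it can be verified without access to private intent. All names here (`Commitment`, `settle`, `responds_within`) are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, Sequence

Action = dict  # whatever the environment logs per step


@dataclass
class Commitment:
    """A behavioral contract: a predicate checked against an observable
    action trace rather than against private intent."""
    holder: str
    check: Callable[[Sequence[Action]], bool]


def settle(commitments: Sequence[Commitment],
           traces: dict[str, Sequence[Action]]) -> dict[str, bool]:
    """Verify each agent's commitments against its logged trace; downstream
    reward shaping or partner selection can condition on the result."""
    return {c.holder: c.check(traces.get(c.holder, [])) for c in commitments}


def responds_within(window: int) -> Callable[[Sequence[Action]], bool]:
    """Example contract: every received request is answered within `window` steps."""
    def _check(trace: Sequence[Action]) -> bool:
        pending = []
        for t, act in enumerate(trace):
            if act.get("type") == "request_received":
                pending.append(t)
            elif act.get("type") == "response_sent" and pending:
                if t - pending.pop(0) > window:
                    return False
        return not pending
    return _check


flags = settle([Commitment("a", responds_within(3))],
               {"a": [{"type": "request_received"}, {"type": "response_sent"}]})
print(flags)  # {'a': True}
```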
Continual learning
Collaborative Nested Learning: multi-timescale optimization with non-adjacent knowledge bridges. +89% accuracy at high regularization; Pareto-dominant across the retention-accuracy tradeoff.
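An illustrative sketch of the multi-timescale piece only (the non-adjacent bridge mechanism is specific to the paper): fast and slow parameter groups receive gradients from the same losses, but the slow group steps once per window on its accumulated gradient. The module split, learning rates, and period are assumptions.

```python
import torch
from torch import nn

# Toy split into a slow backbone and a fast adapter (illustrative sizes).
backbone = nn.Linear(32, 32)
adapter = nn.Linear(32, 2)

fast_opt = torch.optim.SGD(adapter.parameters(), lr=1e-2)
slow_opt = torch.optim.SGD(backbone.parameters(), lr=1e-3)

SLOW_PERIOD = 8  # slow level updates once per 8 fast steps

for step in range(64):
    x = torch.randn(16, 32)
    y = torch.randint(0, 2, (16,))
    loss = nn.functional.cross_entropy(adapter(backbone(x)), y)
    loss.backward()           # gradients land on both groups

    fast_opt.step()           # fast timescale: every step
    fast_opt.zero_grad()      # clear only the adapter's gradients

    if (step + 1) % SLOW_PERIOD == 0:
        slow_opt.step()       # slow timescale: gradient accumulated over the window
        slow_opt.zero_grad()
```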
Methods
- Empirical validation with statistical rigor (effect sizes, confidence intervals, multiple hypothesis correction); see the sketch after this list
- Formal proofs where applicable (convergence guarantees, conservation laws)
- Production-quality implementation (95% test coverage, CI/CD, documentation)
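A toy illustration of the first bullet on synthetic data: Cohen's d for effect size, a bootstrap confidence interval on the mean difference, and Benjamini-Hochberg correction over a family of tests. Sample sizes and distributions are placeholders.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
treated = rng.normal(0.6, 1.0, 200)   # synthetic stand-ins for experiment runs
control = rng.normal(0.0, 1.0, 200)

# Effect size: Cohen's d with pooled standard deviation.
pooled_sd = np.sqrt((treated.var(ddof=1) + control.var(ddof=1)) / 2)
d = (treated.mean() - control.mean()) / pooled_sd

# 95% bootstrap confidence interval on the mean difference.
diffs = [rng.choice(treated, 200).mean() - rng.choice(control, 200).mean()
         for _ in range(2000)]
ci_low, ci_high = np.percentile(diffs, [2.5, 97.5])

# Multiple-hypothesis correction across a (toy) family of tests.
pvals = [stats.ttest_ind(rng.normal(0.3, 1.0, 100), rng.normal(0.0, 1.0, 100)).pvalue
         for _ in range(5)]
reject, p_adj, *_ = multipletests(pvals, alpha=0.05, method="fdr_bh")

print(f"d={d:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}], "
      f"rejections after FDR: {reject.sum()}/{len(pvals)}")
```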
Production Experience
Document understanding pipeline
Multi-provider LLM routing with confidence-based escalation. HIPAA-compliant. Human-in-the-loop review for low-confidence outputs; RLHF for continuous improvement.
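A schematic of the escalation logic only; provider callables, thresholds, and the `Extraction` type are placeholders, and real vendor SDK calls and HIPAA controls are out of scope here.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Extraction:
    fields: dict
    confidence: float  # calibrated score in [0, 1]


# A provider is any callable that maps a document to an Extraction.
Provider = Callable[[str], Extraction]


def route(document: str,
          providers: list[tuple[str, Provider, float]],
          human_queue: list) -> Extraction:
    """Try providers in cost order; escalate while confidence stays below
    each provider's threshold. Results that never clear a threshold go to a
    human review queue, whose corrections can later feed RLHF-style updates."""
    result = None
    for _name, call, threshold in providers:  # _name would be kept for audit logs
        result = call(document)
        if result.confidence >= threshold:
            return result
    human_queue.append((document, result))    # human-in-the-loop fallback
    return result
```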
Edge ML
TensorFlow.js models under 25KB. Voice-first field application using the Web Speech API and AWS Polly, with GPT-powered NLU.
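A sketch of the size-budgeting step only: keep the parameter count small enough that serialized float32 weights fit the 25KB budget before conversion with the TensorFlow.js converter. The architecture below is illustrative, not the deployed model, and format overhead and quantization are ignored.

```python
import tensorflow as tf

SIZE_BUDGET_BYTES = 25 * 1024  # the <25KB edge budget

# Tiny classifier sized to fit the budget (illustrative architecture).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(10,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(4, activation="softmax"),
])

# Rough serialized size: float32 weights at 4 bytes per parameter.
estimated_bytes = model.count_params() * 4
assert estimated_bytes < SIZE_BUDGET_BYTES, estimated_bytes

# The browser-loadable artifact is then produced with the tensorflowjs
# converter and loaded client-side alongside the Web Speech API front end.
```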
Cross-platform APIs
Minecraft/Roblox/Fortnite integration. State coordination across systems with incompatible assumptions.
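A schematic of the coordination pattern (all field names are hypothetical): each platform gets an adapter into a canonical record, and synchronization runs over canonical records rather than platform payloads, so incompatible naming and ID assumptions stay contained in the adapters.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class CanonicalItem:
    """Platform-neutral representation of one unit of shared state."""
    player_id: str
    item_slug: str
    quantity: int


# Hypothetical per-platform adapters: each platform identifies players and
# items differently, so the mapping lives here and nowhere else.
def from_minecraft(raw: dict) -> CanonicalItem:
    return CanonicalItem(player_id=raw["uuid"],
                         item_slug=raw["item"].removeprefix("minecraft:"),
                         quantity=raw["count"])


def from_roblox(raw: dict) -> CanonicalItem:
    return CanonicalItem(player_id=str(raw["UserId"]),
                         item_slug=raw["AssetName"].lower().replace(" ", "_"),
                         quantity=raw["Stack"])

# A sync service reconciles CanonicalItem records, so adding a platform
# means writing one adapter in each direction rather than N pairwise bridges.
```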
Technical Stack
Languages: Python, TypeScript, SQL
ML: PyTorch, TensorFlow, TensorFlow.js
Infrastructure: GCP, Terraform, Docker, CI/CD
Data: PostgreSQL, BigQuery, vector databases
Deployment: Edge (< 25KB models), cloud, hybrid
Education
M.A. Université de Paris VII — Denis Diderot (French-language program)
Literature, Languages, and Civilizations of English-Speaking Countries (Littérature, Langues, et Civilisations des Pays Anglophones)
National Merit Scholar (Florida State, Emory)
Research Interests
Pre-representational computation—the operations that exist before and enable representation (whether human language, embeddings, or any other representational layer).
- Which compositions of projection, attention, and regularization preserve structure, and which destroy it?
- How do systems learn priors over which operations to apply when?
- Verifiable behavior grounded in observable actions (scalable oversight)
- Alignment for systems whose operations are themselves learned