About
Building AI/ML Solutions for Problems Without Off-the-Shelf Answers
I build AI/ML solutions for problems that don't have off-the-shelf answers—particularly where uncertainty quantification and probabilistic reasoning create competitive advantage.
What drew me to this work? Curiosity about uncertainty—and a journey that took me from Jabalpur to Bhopal, then to Nantes on an Erasmus Mundus scholarship where I first fell in love with stochastic modeling, through a joint PhD between IIT Bombay and Monash, and finally to Melbourne where I now get to build the kind of systems I once only read about in papers.
My path has crossed disciplines—civil engineering to statistics to ML—and industries—rail, oil & gas, quantum sensing. Each transition driven by following the most interesting problems I could find. That cross-pollination is what lets me see solutions others miss.
What I'm Building Now: Quantum Sensing Meets ML
At Nomad Atomics, I'm part of something genuinely new: a deep-tech company using quantum gravity sensors to map what's beneath the Earth's surface. My role? Architecting the ML infrastructure that turns noisy quantum sensor data into 3D subsurface maps.
I built a high-performance geophysical inversion library in JAX that serves as the core ML engine for our analytical platform. I designed signal processing pipelines that clean time-series data from quantum sensors, isolating the anomaly signals that matter. And I created a full-stack AI application powered by a LangGraph stateful agent for autonomous document generation.
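The library itself is proprietary, but the core idea behind gradient-based geophysical inversion can be sketched in a few lines of JAX: differentiate a physics forward model and descend on the data misfit. Everything below (the toy linear forward model, the names, the numbers) is illustrative, not taken from the actual codebase:

```python
import jax
import jax.numpy as jnp

# Toy forward model: predicted sensor signal as a linear function of
# subsurface density parameters (a stand-in for the real physics).
def forward(density, kernel):
    return kernel @ density

def loss(density, kernel, observed):
    # Mean-squared misfit between modeled and observed data.
    return jnp.mean((forward(density, kernel) - observed) ** 2)

def invert(kernel, observed, steps=1000, lr=0.1):
    """Recover density parameters by gradient descent on the misfit."""
    density = jnp.zeros(kernel.shape[1])
    grad_fn = jax.jit(jax.grad(loss))  # autodiff through the forward model
    for _ in range(steps):
        density = density - lr * grad_fn(density, kernel, observed)
    return density

key = jax.random.PRNGKey(0)
kernel = jax.random.normal(key, (20, 5))       # synthetic sensitivity kernel
true_density = jnp.array([1.0, -0.5, 0.3, 0.0, 2.0])
observed = kernel @ true_density               # noise-free synthetic data
recovered = invert(kernel, observed)
```

The same pattern scales to nonlinear forward models: swap in the real physics, and JAX's autodiff and JIT compilation do the heavy lifting.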
This is the work I've been building toward my whole career: novel problems that require first-principles thinking, where probabilistic reasoning and production engineering have to work together. No tutorials to follow. No existing solutions to adapt. Just interesting problems and the freedom to solve them properly.
The Journey to Here: Building Across Industries
Before Nomad Atomics, I spent years at Monash University leading data science projects for major infrastructure clients. Each project taught me something different about building ML systems that actually work in production:
Queensland Rail: I got fascinated by a question: how do you predict bridge degradation when your data is sparse and uncertain? I built a probabilistic rare-event forecasting model that updates its predictions as new inspection data arrives. Result: $60M in maintenance cost savings and a 40% increase in operational safety.
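The "update as new data arrives" part is classic Bayesian updating. Here is a minimal sketch using a Beta-Binomial model for a rare defect rate; the prior, the inspection counts, and the interpretation are all illustrative, not the actual Queensland Rail model:

```python
# Conjugate Beta-Binomial updating: start with a prior belief about a rare
# defect rate, then refine it each time an inspection batch arrives.
def update(alpha, beta, defects, inspected):
    # Beta posterior: successes add to alpha, non-defects add to beta.
    return alpha + defects, beta + (inspected - defects)

alpha, beta = 1.0, 99.0  # weak prior centered near a 1% defect rate
inspections = [(0, 50), (2, 60), (1, 40)]  # (defects found, components inspected)

for defects, n in inspections:
    alpha, beta = update(alpha, beta, defects, n)

posterior_mean = alpha / (alpha + beta)  # updated estimate of the defect rate
```

Each inspection batch shifts the posterior a little; sparse data moves it slowly, which is exactly the behavior you want when evidence is scarce.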
Level Crossing Removal Project: The question: can a data-driven approach optimize the next generation of bridges? The problem seemed intractable, with millions of design and operational choices at every step. Using machine learning and Bayesian statistics, I built tailored operational hazard and vulnerability models that quantified the risks, then applied rare-event forecasting to help optimize bridge designs for operational safety. Result: 20% improvement in design efficiency while maintaining safety standards for hundreds of bridges yet to be built across Victoria.
Oil & Gas: Sometimes the most valuable work is unglamorous. I developed a probabilistic corrosion prediction model (90% accuracy) that transformed inspection schedules from time-based to condition-based—a simple idea that required careful feature engineering and domain understanding to get right.
The Technical Journey: From First Principles to Production ML
My PhD was a deep dive into a deceptively simple question: How much is data worth? Not in abstract terms, but in dollars. When should Queensland spend money on bridge sensors versus just replacing the bridge? When does collecting more data actually improve decisions?
I developed what was recognized as the first Value of Information framework of its kind in India and Australia—a mathematical approach to quantify the economic value of data acquisition. VicRoads used it to justify a multi-million-dollar project, demonstrating $800K in benefit per use case.
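The simplest building block of any Value of Information analysis is the expected value of perfect information (EVPI): how much better could your decision be if uncertainty were resolved before acting? A toy monitor-vs-replace decision makes the idea concrete; all states, actions, and costs below are invented for illustration:

```python
# Two states (bridge OK / degraded) and two actions, with costs in $k.
# All numbers are illustrative, not from any real project.
p_degraded = 0.2
cost = {  # cost[action][state]
    "nothing": {"ok": 0, "degraded": 1000},   # risk cost if degradation is missed
    "replace": {"ok": 300, "degraded": 300},  # replacement cost either way
}

def expected_cost(action, p):
    return (1 - p) * cost[action]["ok"] + p * cost[action]["degraded"]

# Best decision using only the prior (no sensor data):
prior_cost = min(expected_cost(a, p_degraded) for a in cost)

# With perfect information we choose the cheapest action in each state:
perfect_cost = (1 - p_degraded) * min(c["ok"] for c in cost.values()) \
             + p_degraded * min(c["degraded"] for c in cost.values())

evpi = prior_cost - perfect_cost  # upper bound on what the data is worth, in $k
```

Real sensors deliver imperfect information, so the practical framework computes the expected value of *imperfect* information via preposterior analysis, but EVPI gives the ceiling: never pay more for data than this.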
But building that framework required mastering uncertainty quantification from the ground up:
- Probabilistic modeling: Monte Carlo simulation, Bayesian inference, sensitivity analysis
- Metamodeling: Polynomial Chaos Expansions, Kriging, neural network surrogates
- Causal inference: Using Bayesian Networks to estimate the impact of operational decisions
- High-performance computing: Scaling thousands of simulations on SLURM-managed HPC clusters
- Physics-based understanding: Structural performance, material degradation, quantum sensing principles
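The first bullet, Monte Carlo simulation, is the workhorse that ties these together. In structural reliability it often reduces to estimating the probability that a limit-state function goes negative; a minimal sketch with invented distributions:

```python
import numpy as np

# Monte Carlo estimate of a failure probability for the classic limit-state
# function g(R, S) = R - S (resistance minus load). Failure when g < 0.
# Distributions and parameters are illustrative only.
rng = np.random.default_rng(42)
n = 1_000_000

resistance = rng.normal(loc=5.0, scale=0.5, size=n)  # capacity R
load = rng.normal(loc=3.0, scale=0.8, size=n)        # demand S

p_fail = np.mean(resistance - load < 0.0)  # fraction of failed samples
```

For rare events this brute-force estimator needs enormous sample counts, which is where the other bullets come in: surrogate models (PCE, Kriging) replace expensive limit-state evaluations, and SLURM-managed HPC clusters run the batches in parallel.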
Now I apply those same foundations to machine learning and predictive analytics. The difference is that I don't just tune hyperparameters—I understand the mathematical principles underlying the algorithms, which means I can build models that are robust, interpretable, and actually work in production.
Open Source & Code Quality
I'm a core developer of PySTRA, an open-source Python library for structural reliability analysis, where I've contributed over 3,500 lines of code. This taught me that research code and production code are fundamentally different animals. The former needs to work once for a paper; the latter needs to work reliably for thousands of users.
My development toolkit:
- Python: 10+ years, professional-grade code with Pandas, NumPy, Scikit-learn, PyMC
- DevOps: Git/GitHub (averaging 232 yearly contributions), Docker, Linux, CI/CD via GitHub Actions
- Scalability: SLURM for HPC parallelization, comfortable working in AWS/GCP environments
I also built a custom OCR solution in Python to extract material strength data from client images—sometimes the most valuable ML work happens before you even get to modeling.
What Drives Me
I love the intellectual challenge of solving hard problems, but what actually gets me out of bed is knowing the work matters. Here's what that looks like:
First-principles thinking: I don't blindly apply frameworks. I need to understand why something works before I trust it. That PhD training is the difference between "I ran XGBoost" and "I understand why this particular model structure fits this problem."
Pragmatic excellence: I care about elegant solutions, but only if they ship. A 99% accurate model that takes three months to deploy is worse than a 95% model that's live next week. Perfection is the enemy of impact.
Clear communication: The best model is useless if stakeholders don't trust it. I've spent years translating complex technical findings into boardroom presentations—if I can't explain why this matters, I haven't finished the job.
Genuine curiosity: I transitioned from civil engineering to data science because I'm endlessly curious. I taught myself Python, DevOps, and ML through projects. Right now I'm deep into LLMs and causal ML. There's always something new to learn.
Beyond the Resume
When I'm not building models or writing code, I'm teaching and mentoring. As a Teaching Associate at Monash, I've developed curriculum and delivered lectures for courses like Bridge Engineering and Structural Reliability. I've co-supervised five Bachelor's and one Master's research project, and I frequently mentor junior PhD students on research ethics and collaboration.
I'm also passionate about mentoring early-career professionals from under-represented backgrounds in STEM. I co-founded Qalam Youth Collective, a non-profit that provides mentorship and skill-building workshops for students from marginalized communities in India.
I also serve as an active peer reviewer for high-impact scientific journals.
What Excites Me Next
After eight years building probabilistic models and deploying ML in high-stakes environments, I've developed a pretty clear sense of what energizes me most—and I'm fortunate to be doing much of it already at Nomad Atomics:
- Production ML: There's something deeply satisfying about deploying models that make real-time decisions at scale—watching something you built actually work in the wild.
- Probabilistic ML: I love blending uncertainty quantification with machine learning to build models that aren't just accurate, but trustworthy.
- Physics-informed ML: Combining first-principles understanding with data-driven methods to create hybrid models that leverage the best of both worlds.
- Causal ML: I'm increasingly drawn to moving beyond correlation to understand interventions. What happens if we actually change something?
- MLOps: Building robust pipelines that make good ML practices the default, not the exception.
- Hard problems with real stakes: I do my best work when failure matters—when there's genuine pressure to get it right.
I'm particularly interested in domains where deep understanding beats shallow automation—FinTech, HealthTech, Climate Tech, Infrastructure Analytics—anywhere that first-principles thinking creates real advantage.
Let's Connect
If you're building something interesting and think my background could help, I'd love to hear from you. Whether it's an ML role, a research collaboration, or just chatting about Bayesian networks over coffee—I'm always up for a good conversation about hard problems.
This page was last updated in January 2025. For a detailed CV including publications and projects, see my CV page.