Health coach for a weight-management startup
A RAG-grounded coaching agent aligned with the client's nutritionists — answering from approved protocols only, matching each nutritionist's communication style, handling 2,000+ conversations daily.
Coaching doesn't scale.
The client is a health-coaching company with 60+ nutritionists managing weight-loss programs across India and the Middle East. Each patient gets a personalised diet plan, WhatsApp check-ins, and ongoing adjustments. The business model works — retention is high, outcomes are strong.
The problem: each nutritionist can handle about 40 active patients. At that ratio, scaling means hiring proportionally. A team of 60 covers 2,400 patients. To reach 10,000, they'd need 250 nutritionists — and the hiring pipeline doesn't move that fast.
They'd tried a basic chatbot before. It lasted two weeks. The bot gave generic advice ("eat more vegetables"), ignored patient history, and on one occasion suggested a meal plan that conflicted with a patient's medication. The nutritionists pulled the plug.
"We don't need a chatbot. We need something that answers exactly the way Dr. Mehra would answer — and never, ever goes off-script on medical advice."
RAG with a structured context window.
We designed a retrieval layer that ensures the bot only answers from approved content. No general knowledge, no improvisation. Every response traces back to a source document that the client's medical team has signed off on.
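As a minimal sketch of that constraint — the in-memory store, field names, and fallback message below are illustrative assumptions, not the client's actual stack:

```python
from dataclasses import dataclass

FALLBACK = "Let me check with your nutritionist and get back to you."

@dataclass
class Chunk:
    text: str
    source_doc_id: str  # traces to a document the medical team signed off on
    approved: bool      # set at ingestion time, never by the model

def retrieve_approved(query: str, store: list[Chunk], k: int = 5) -> list[Chunk]:
    # Toy lexical match standing in for the real vector search.
    hits = [c for c in store if c.approved and query.lower() in c.text.lower()]
    return hits[:k]

def answer(query: str, store: list[Chunk]) -> dict:
    chunks = retrieve_approved(query, store)
    if not chunks:
        # No approved source covers this question: decline, don't improvise.
        return {"text": FALLBACK, "sources": []}
    context = "\n".join(c.text for c in chunks)
    # The LLM call is elided; the prompt would restrict the model to `context`.
    return {"text": context, "sources": [c.source_doc_id for c in chunks]}
```

The key property is the empty-retrieval branch: when no approved document matches, the bot declines rather than falling back to general knowledge.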
The context window for each patient conversation is explicitly structured (a code sketch follows the list):
- Past conversations — to match the assigned nutritionist's tone
- Medical history and key risks
- Current prescriptions
- Weight and body metrics (weekly updates)
- Goals and time horizon
- Previous recommendations and adherence notes
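A rough sketch of how that assembled context might look in code — field names and rendering are assumptions, not the production schema:

```python
from dataclasses import dataclass, field

@dataclass
class PatientContext:
    past_conversations: list[str]  # recent turns, to mirror the nutritionist's tone
    medical_history: str           # conditions and key risks
    prescriptions: list[str]       # current medications (hard escalation triggers)
    body_metrics: dict             # weight and body metrics, refreshed weekly
    goals: str                     # target and time horizon
    adherence_notes: list[str] = field(default_factory=list)  # prior advice, follow-through

def to_prompt_block(ctx: PatientContext) -> str:
    """Render the context as clearly delimited sections of the system prompt."""
    return "\n\n".join([
        "## Medical history\n" + ctx.medical_history,
        "## Current prescriptions\n" + ", ".join(ctx.prescriptions),
        "## Metrics\n" + "\n".join(f"{k}: {v}" for k, v in ctx.body_metrics.items()),
        "## Goals\n" + ctx.goals,
        "## Adherence notes\n" + "; ".join(ctx.adherence_notes),
        "## Recent conversation\n" + "\n".join(ctx.past_conversations),
    ])
```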
Knowledge sources include: the client's proprietary diet protocols, a regional diet library covering South Asian, Middle Eastern, and Mediterranean cuisines, approved FAQ responses, and escalation SOPs for medical situations the bot should never handle.
What Preflight monitors on every response.
The core risk isn't hallucination in the traditional sense — it's scope creep. The bot knows a lot about nutrition and will confidently answer questions about medication, exercise physiology, or medical conditions if you let it. That's the failure mode Preflight is designed to catch.
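For illustration only — and emphatically not Preflight's implementation — a toy version of that scope gate could look like the following, with keyword patterns standing in for what would really be a trained classifier plus policy rules:

```python
import re

# Topic names and regexes are made up for illustration.
OUT_OF_SCOPE = {
    "medication": re.compile(r"\b(dose|dosage|drug|tablet|interaction)s?\b", re.I),
    "medical_condition": re.compile(r"diagnos|symptom|thyroid|diabet", re.I),
    "exercise_injury": re.compile(r"injur|sprain|strain", re.I),
}

def check_scope(response: str) -> tuple[bool, str | None]:
    """Return (allowed, blocked_topic); block anything reserved for humans."""
    for topic, pattern in OUT_OF_SCOPE.items():
        if pattern.search(response):
            return False, topic  # blocked: never reaches the patient
    return True, None
```

The check runs on the generated response, not just the incoming question — the failure mode is the model volunteering medical advice, not only the patient asking for it.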
In the first 30 days, Preflight blocked 47 responses — mostly medication-related scope violations where the LLM tried to be helpful about drug interactions. None of those responses reached patients. The model is not perfect. The system is.
The numbers after 90 days.
The bot handles the first 2-3 turns of most conversations — answering diet questions, logging meals, adjusting portions based on weekly weigh-ins. When a patient asks something outside scope (medication, exercise injury, emotional distress), the bot escalates to the human nutritionist with full context.
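A sketch of what that handoff could carry, with hypothetical field names:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Escalation:
    """Handoff package for the human nutritionist; fields are illustrative."""
    patient_id: str
    reason: str            # e.g. "medication", "exercise_injury", "emotional_distress"
    transcript: list[str]  # the full conversation so far
    context_summary: str   # the rendered patient context, so nothing is re-asked
    created_at: datetime

def escalate(patient_id: str, reason: str, transcript: list[str],
             context_summary: str) -> Escalation:
    # The bot goes silent on this thread; the nutritionist replies with full context.
    return Escalation(patient_id, reason, transcript, context_summary,
                      datetime.now(timezone.utc))
```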
Each nutritionist now manages 120 patients instead of 40. The company is scaling to 10,000 patients without proportional hiring. Nutritionists spend their time on complex cases, not answering "Can I eat rice at dinner?"