Designing Medical Chatbots where Accuracy and Acceptability are in Conflict: An Exploratory, Vignette-based Study in Urban India

arXiv

When medical chatbots provide advice that conflicts with users’ lived care experiences, users are left to interpret, negotiate, and evaluate the legitimacy of that guidance. In India, the widespread overuse of antibiotics, antidiarrheals, and injections has shifted patient expectations away from the guideline-aligned advice that chatbots are trained to provide. We present a mixed-methods, vignette-based study with 200 urban Indian adults examining preferences for and against guideline-aligned, norm-divergent advice in chatbot transcripts. We find that a majority of users reject such advice, drawing on diverse rationales grounded in their lived expectations. By designing and introducing context-aware nudges, we support expectation alignment, shifting preferences towards transcripts containing guideline-aligned advice. In doing so, we surface key tensions in the equitable design of medical chatbots in the Global South.