Most physicians I know are already using AI tools like ChatGPT or Claude in their practice — looking up drug interactions, summarizing notes, drafting patient letters. But very few have been taught how to talk to these systems effectively. That gap matters more than most people realize.
In a 2024 paper I co-authored with Rajvardhan Patil and Vijay Bhuse, published in Electronics, we tackled a surprisingly overlooked problem: medical education offers almost no formal training in prompt engineering, the skill of crafting the right input to get clinically useful output from a large language model (LLM).
The core finding is straightforward: how you ask matters as much as what you ask. A vague prompt gets a generic answer. A well-structured prompt — with context, role, constraints, and a clear task — gets something you can actually use at the bedside. We outlined specific strategies tailored to primary care: how to frame a differential diagnosis request, how to get a patient-friendly explanation of a complex condition, how to use AI for documentation without losing clinical nuance.
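To make that concrete, here is a minimal sketch of the structure described above: assembling a prompt from an explicit role, clinical context, constraints, and task before handing it to an LLM. The helper function, the patient details, and the wording are illustrative assumptions of mine, not examples taken from the paper.

```python
def build_prompt(role: str, context: str, constraints: str, task: str) -> str:
    """Combine the four components into one clearly labeled prompt."""
    return "\n\n".join([
        f"Role: {role}",
        f"Context: {context}",
        f"Constraints: {constraints}",
        f"Task: {task}",
    ])

# A vague prompt invites a generic answer.
vague_prompt = "What could cause fatigue?"

# A structured prompt spells out who the model is assisting, the clinical
# picture, the limits on the answer, and the exact task. (Hypothetical case.)
structured_prompt = build_prompt(
    role="You are assisting a primary care physician.",
    context=(
        "58-year-old woman with 3 months of progressive fatigue, "
        "unintentional 5 kg weight loss, and a normal CBC; history of "
        "hypothyroidism on a stable levothyroxine dose."
    ),
    constraints=(
        "List no more than 6 diagnoses, ordered by likelihood. For each, "
        "give one distinguishing feature and one reasonable next test. "
        "Flag any diagnosis that must not be missed."
    ),
    task="Generate a differential diagnosis for this presentation.",
)

print(structured_prompt)
```

The same scaffold adapts to the other use cases mentioned above: swap the task for a patient-friendly explanation at a plain-language reading level, or for drafting documentation, while keeping the context and constraints explicit.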
This isn't just an efficiency play. Done right, prompt engineering can reduce cognitive load, surface considerations a busy clinician might miss, and make AI a genuine thought partner rather than a glorified search engine. Done poorly, it produces confident-sounding nonsense.
The good news: this is a learnable skill. It doesn't require a computer science background. It requires intentionality — thinking clearly about what you need before you type. That's something physicians are already trained to do. We just need to apply it to a new tool.
If you're using AI in your practice and haven't thought much about how you're prompting it, this paper is a good place to start. The full text is open access at MDPI.