
Researchers at the Johns Hopkins Applied Physics Laboratory (APL) in Laurel, Maryland, are working on a proof of concept for a conversational artificial intelligence agent that can guide untrained soldiers through medical care in plain English, applying knowledge gleaned from established care procedures.
Known as Clinical Practice Guideline-driven AI, or CPG-AI, the project is built on a large language model (LLM), the type of AI behind ChatGPT.
AI methods for clinical decision support tend to be highly structured, requiring precisely calibrated rules and meticulously labeled training data. That approach works well for delivering alerts and reminders to experts in a relatively calm environment; coaching untrained novices, or even trained medics, as they provide care in a chaotic environment is far more challenging.
“There might be 20 or 30 individual components running behind the scenes to enable a conversational agent to help soldiers assist their buddies on the battlefield — everything from search components to deciding which information from the search is relevant to managing the structure of the dialogue,” said Sam Barham, a computer scientist in APL’s Research and Exploratory Development Department who leads the CPG-AI project. “In the past, to enable a system like this, you’d have had to train a bespoke neural network on each very specific task.”
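To make that architecture concrete, here is a minimal Python sketch of the kind of chained pipeline Barham describes: a search component over guideline text, a relevance filter, and a simple dialogue manager. The component names and logic are illustrative assumptions, not APL's actual CPG-AI implementation.

```python
# Hypothetical sketch of a multi-component conversational pipeline:
# search -> relevance filtering -> dialogue management. Illustrative only.
from dataclasses import dataclass, field


@dataclass
class DialogueState:
    """Tracks the conversation so far (a real dialogue manager does far more)."""
    history: list[str] = field(default_factory=list)


def search_guidelines(query: str, guidelines: dict[str, str]) -> list[str]:
    """Naive keyword search over clinical practice guideline passages."""
    terms = set(query.lower().split())
    return [text for text in guidelines.values()
            if terms & set(text.lower().split())]


def filter_relevant(passages: list[str], query: str, top_k: int = 2) -> list[str]:
    """Keep the passages that share the most terms with the query."""
    terms = set(query.lower().split())
    scored = sorted(passages,
                    key=lambda p: len(terms & set(p.lower().split())),
                    reverse=True)
    return scored[:top_k]


def respond(query: str, guidelines: dict[str, str], state: DialogueState) -> str:
    """Chain the components together for one turn of dialogue."""
    state.history.append(query)
    relevant = filter_relevant(search_guidelines(query, guidelines), query)
    # In a real system, an LLM would turn the retrieved guidance into
    # conversational instructions; here we simply return the best match.
    return relevant[0] if relevant else "No matching guidance found."


state = DialogueState()
guidelines = {"bleeding": "Apply direct pressure to the wound and elevate the limb."}
print(respond("heavy bleeding from a wound", guidelines, state))
```

Historically, each of those stages would have been its own bespoke neural network; the appeal of an LLM is collapsing much of that specialization into one adaptable model.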
A large language model, on the other hand, is trained on vast amounts of unlabeled data (in this case, text) and is not specialized for any particular task. That means it can, in theory, adapt to any situation that can be described in words, given text prompts that supply the situational context and relevant information.
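For example, a prompt for such a system might pack the situational context, retrieved guideline text, and the user's question into a single piece of text. The sketch below shows one way to assemble such a prompt; the field names and wording are assumptions for illustration, not drawn from the CPG-AI project.

```python
# Illustrative only: assembling situational context and guideline text into
# one prompt, so a general-purpose LLM can adapt to the task without retraining.
def build_prompt(situation: str, guideline_excerpt: str, question: str) -> str:
    """Combine context, reference material, and the question into one prompt."""
    return (
        "You are assisting an untrained responder providing medical care.\n"
        f"Situation: {situation}\n"
        f"Relevant guideline: {guideline_excerpt}\n"
        f"Question: {question}\n"
        "Answer in plain English with short, step-by-step instructions."
    )


prompt = build_prompt(
    situation="Casualty with heavy bleeding from the lower leg",
    guideline_excerpt="Apply a tourniquet two to three inches above the wound.",
    question="What do I do first?",
)
print(prompt)
```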
“LLMs have this incredible ability to adapt to whatever task you set for them, virtually anything that’s in the realm of natural language,” said Barham. “So instead of training a neural network on all these different capabilities, you can train a single neural network to respond fluidly to the situation.”