ECRI and the Institute for Safe Medication Practices PSO knows that thousands of patient safety events from 2021 will never be addressed.
The patient safety organization is one of approximately 96 nationwide, and it collects data on errors that resulted in patient harm or near misses. According to Director Sheila Rossi, participating hospitals have sent ECRI more than 800,000 such reports this year.
Federal agencies and PSOs can review only a small fraction of the events reported each year. Failing to review every report has consequences, even though such review is not required by law. But there is a growing movement among practitioners, advocacy organizations and the federal government to adopt technologies that can improve safety.
Even a small sample of safety reports can yield useful insights. The Agency for Healthcare Research and Quality last month analyzed about 300 reports of safety events involving COVID-19 patients during the first seven months of the pandemic. Even that small sample showed that falls among COVID-19 patients were a problem.
“It takes a long time to collect and analyze the data, so that was months of COVID-19 patients falling, and perhaps if we had gotten this information sooner, staff could have developed strategies to reduce falls among COVID-19 patients,” Rossi said.
These delays largely stem from the nature of the reports. There is structured data – the patient’s age and the location of the event, for example – but there is also unstructured data, in which workers write a narrative of the event and why it happened. Until recently, most insights for patient safety organizations came from analysts manually reading each report.
“They have to read hundreds of reports, track them in something like Microsoft Excel, and then they rely on their memory to make connections,” said Raj Ratwani, vice president of scientific affairs at MedStar Health Research Institute. … “There’s a really big need for some computational support.”
Natural language processing has the potential to transform safety improvement, allowing PSOs and hospitals to quickly query millions of events, identify patient risks sooner and act faster. The approach involves building an algorithm that is trained to understand keywords the way a safety analyst does.
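The article does not describe any specific tool, but the idea of an algorithm that screens free-text reports for keywords can be illustrated with a minimal sketch. Everything here is an assumption: the category names, the keyword patterns and the sample narratives are invented for illustration, and a production PSO system would use trained models and far richer lexicons.

```python
import re
from collections import Counter

# Hypothetical keyword lexicon; a real PSO tool would use trained NLP
# models and institution-specific vocabularies, not a fixed pattern list.
CATEGORY_KEYWORDS = {
    "fall": [r"\bfell\b", r"\bfall\b", r"\bslipped\b"],
    "medication_error": [r"\bwrong dose\b", r"\boverdose\b", r"\bwrong medication\b"],
}

def flag_report(narrative: str) -> list[str]:
    """Return the event categories whose keywords appear in a free-text narrative."""
    text = narrative.lower()
    return [cat for cat, patterns in CATEGORY_KEYWORDS.items()
            if any(re.search(p, text) for p in patterns)]

def tally(reports: list[str]) -> Counter:
    """Count flagged categories across a batch of reports to surface trends."""
    counts = Counter()
    for report in reports:
        counts.update(flag_report(report))
    return counts

# Invented sample narratives, standing in for the unstructured part of a report.
reports = [
    "COVID-19 patient fell while walking to the bathroom unassisted.",
    "Nurse administered wrong dose of heparin; patient monitored, no harm.",
    "Patient slipped on wet floor near bed 4.",
]
print(tally(reports))  # Counter({'fall': 2, 'medication_error': 1})
```

Tallying categories across thousands of reports is what would let an analyst spot a trend – such as the rise in COVID-19 patient falls – in minutes rather than months.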
“We want to reduce the cycle time from identifying an issue to notifying our members [hospitals]. Ultimately, this can improve patient safety in the long term,” Rossi said.
AHRQ on Wednesday released a report to Congress with recommendations for improving patient safety. The agency said it is actively studying natural language processing to help analyze unstructured narratives.
“Technology solutions … that can reduce the workload and accelerate data collection and analysis, where possible, will be the preferred approach to accelerating collaborative learning opportunities at the national level,” AHRQ wrote.
The caveat is that while natural language processing is promising, the algorithms are not yet mature enough for widespread use.
“There are many NLP algorithms out there ready for use by patient safety organizations,” said Ratwani, whose organization has developed some of the tools. But the algorithms are complex, and no one has created a user-friendly way for PSOs to access the information. It’s like handing a GPS user only the back end, without a map, he said.
“Our safety analysts [inside PSOs and health systems] are not necessarily trained in data science, so we have to create the right layer for them to interact with,” Ratwani said. “As a community of researchers and practitioners, we have to really push this forward.”
Health systems can also use NLP themselves. Boston Children’s Hospital has been using the technology in clinical practice for over a decade. Rather than only examining existing safety reports, the hospital is trying to identify unreported errors. For example, most emergency departments don’t know how often procedures fail.
When a doctor performs a lumbar puncture but does not obtain fluid, they are not required to document the failed procedure. According to Dr. Amir Kimia, an emergency pediatrician and NLP researcher at Boston Children’s, this occurs in “a significant number” of lumbar puncture attempts, but the failure is still reflected in medical records and consent forms.
Kimia and his team used data on failed lumbar punctures to develop the NLP method in 2010. When a child came to the emergency room with a seizure, doctors usually performed a lumbar puncture to rule out meningitis. Using NLP, Boston Children’s was able to find that in almost all cases the children did not have meningitis – most of those lumbar punctures were unnecessary. After the hospital’s study, the American Academy of Pediatrics changed its guidelines, which had recommended the procedure.
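The Boston Children’s approach – finding failed procedures that appear in clinical notes but never in safety reports – can be sketched in a few lines. The phrase patterns below are purely hypothetical; the source does not describe the actual system, and a real tool would learn institution-specific wording from labeled notes rather than rely on hand-written rules.

```python
import re

# Hypothetical phrasings a clinician might use when a lumbar puncture fails.
# These patterns are illustrative assumptions, not the hospital's actual rules.
FAILED_LP_PATTERNS = [
    r"\bunsuccessful (?:lumbar puncture|lp)\b",
    r"\b(?:lumbar puncture|lp)\b.*\b(?:attempted|unable|aborted)\b",
    r"\bunable to obtain (?:csf|cerebrospinal fluid)\b",
]

def mentions_failed_lp(note: str) -> bool:
    """Return True if a free-text clinical note likely describes a failed lumbar puncture."""
    text = note.lower()
    return any(re.search(pattern, text) for pattern in FAILED_LP_PATTERNS)

# Invented example notes: one failed attempt, one routine procedure.
notes = [
    "LP attempted x2, unable to obtain CSF; procedure aborted.",
    "Lumbar puncture performed without complication; CSF clear.",
]
print([mentions_failed_lp(n) for n in notes])  # [True, False]
```

Screening every note this way is what surfaces the “unreported bugs” the article describes: events that clinicians document in passing but never file as safety reports.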
Boston Children’s patient safety work is funded primarily by grants, not its operating budget. Kimia and others used grants to build the algorithms. Each institution has its own keywords, which can vary with its procedures, local dialect and specialties.
Although companies sell NLP products, configuring the algorithms can be daunting for a hospital without technical expertise on staff. NLP products also need to be integrated into electronic health record and patient safety event reporting software, which has not yet happened.
“Until this fully evolves into a platform that’s widely available to everyone, implementation will be slow,” Ratwani said.