
Northwell tackles maternal health disparities with AI chatbot

Northwell Health is trying to address disparities in maternal health with the help of an artificial intelligence chatbot.

Northwell Health's Pregnancy Chats tool, developed in collaboration with Conversa Health, guides patients through their prenatal and postpartum journeys while assessing social barriers and mental health issues.

The tool is part of an initiative within the Northwell Maternal Health Center that aims to reduce the maternal mortality rate, particularly among Black women. A major barrier is addressing gaps in behavioral health, education and community resources, said Dr. Zenobia Brown, senior vice president of population health and associate medical director for the New Hyde Park, N.Y.-based health system.

Employing a “high-tech, high-touch” approach, the chatbot helps Northwell providers manage high-risk pregnant patients by implementing personalized education and patient assessments. The tool offers patients relevant information for each stage of pregnancy, such as blood pressure monitoring, prenatal tests, birth plans and breastfeeding support, and regularly assesses them for mental health and social needs.

The chatbot is integrated with Northwell’s care management team and can direct patients to relevant resources and alert providers if interventions are needed. When a patient tells the chatbot that they have medical complications, the tool triggers a call from a Northwell representative or directs the patient to visit the emergency department.

“You could have someone call the moms three times a week and ask how they’re doing. But it allowed us to implement a lot more touches using technology than we could with people,” Brown said.

Since its launch earlier this year, the AI chatbot has shown promising preliminary results, according to the health system. An internal survey revealed that 96% of users expressed satisfaction with their experience. Additionally, the chatbot effectively identified patients experiencing complications and guided them to appropriate care, Brown said.

For example, the chatbot identified a woman who suffered from postpartum depression, despite the fact that she had not disclosed her symptoms during a previous mental health evaluation with her doctor. The patient confided in the chatbot that she was having suicidal thoughts, prompting a response from the care team with psychiatric and mental health support.

Using AI-powered chatbots in healthcare has been shown to improve interactions, delivering more detailed and empathetic conversations compared with traditional doctor-patient exchanges, according to a study by University of California San Diego researchers published in JAMA Internal Medicine in April.

“These chatbots never get tired,” said John Ayers, vice chief of innovation in the division of infectious diseases and global public health at the UC San Diego School of Medicine, who co-authored the study. The findings suggest that AI chatbots have the potential to increase patient satisfaction while easing administrative burdens on clinicians.

“We’re using these really cool, sophisticated tools to get back to what we know absolutely works in healthcare, which is listening to your patient, letting them ask a lot of questions and getting them to commit to their care,” Brown said.

The approach could also increase the amount of money doctors can earn from insurers by responding to more patient emails, Ayers said. However, to fully harness the technology's potential, the tools must be tailored to patients' individual needs. Many chatbots on the market, for example, are designed to alleviate worker burnout and streamline patient management; for patients, those tools can feel like phone trees, he said. A chatbot must be linked to a real person when a patient needs more complicated assistance, he said.

Bioethicists warn against considering AI-powered chatbots as a definitive solution for patient engagement and have called for stricter oversight.

“Regulation has to come in some form,” said Bryn Williams-Jones, a bioethicist at the University of Montreal. “It’s not clear what form it will take because what you’re trying to regulate is evolving incredibly fast.”

To responsibly implement the technology now, healthcare providers need to clearly understand the methodology behind the software, verify its work and create accountability mechanisms to respond when something goes wrong, Williams-Jones said. These tools should be designed in accordance with standards of care and avoid overuse, he said.
