New Tool May Flag Signs of Pandemic-Related Anxiety and Depression in Health Care Workers

Source: DALL·E 3 via Bing Images
Copyright: N/A (AI generated)
URL: https://www.bing.com/images/create/aa-healthcare-worker-wearing-mask-she-is-l…
License: Public Domain (CC0)


An artificial intelligence tool effectively detected distress in hospital workers’ conversations with their therapists early in the COVID-19 pandemic, a new study shows, pointing to a potential new technology for screening for depression and anxiety.

As the COVID-19 pandemic forced many hospitals to operate beyond capacity, medical workers faced overwhelming numbers of work shifts, limited rest, and an increased risk of COVID-19 infection. At the same time, quarantine policies and fear of infecting family members reduced their access to social support, a combination that increased the risk of medical errors and burnout.

As a result, virtual psychotherapy, which offers treatment access without leaving home, boomed during this period. The researchers took advantage of the resulting flood of digitized session transcripts to identify common phrases used by patients and link those terms to mental illness using a technique called natural language processing (NLP). In this method, a computer algorithm combs through data to pinpoint keywords that capture the meaning of a body of text. All identifying information about each patient was removed to protect their privacy.
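
The published report describes the approach as structural topic modeling. The snippet below is a minimal, illustrative sketch of the general idea of extracting topic keywords from text, using scikit-learn’s latent Dirichlet allocation as a generic stand-in rather than the study’s actual pipeline; the transcript snippets are invented, not study data.

```python
# Illustrative only: a generic topic-keyword extraction sketch, not the
# study's structural topic modeling pipeline. Transcript snippets are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

transcripts = [
    "I have been working double shifts on the hospital floor and cannot sleep",
    "I am afraid of bringing the virus home to my family",
    "My manager keeps changing the team schedule and it is stressful",
]

# Convert the text into word counts, dropping common English stop words
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(transcripts)

# Fit a small topic model and print the top keywords for each topic
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(counts)

words = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top_terms = [words[j] for j in topic.argsort()[-5:][::-1]]
    print(f"Topic {i}: {', '.join(top_terms)}")
```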

Led by researchers at NYU (New York University) Grossman School of Medicine, the analysis involved treatment session transcripts from more than 800 physicians, nurses, and emergency medical staff. Also included were transcripts from 820 people who were receiving psychotherapy during the first wave of COVID-19 in the United States but did not work in health care.

Study results revealed that among health care workers, those who spoke to their therapists specifically about working in a hospital unit, lack of sleep, or mood issues were more likely to be diagnosed with anxiety and depression than those who did not discuss these topics.

By contrast, such risks were not seen in workers from other fields who discussed the pandemic or their jobs (with terms such as “team,” “manager,” and “boss”).

“Our findings show that those working on the hospital floor during the most intense moment of the pandemic faced unique challenges on top of their regular job-related stressors, which put them at high risk for serious mental health concerns,” said study lead author Matteo Malgaroli, PhD, research assistant professor, Department of Psychiatry, NYU Langone Health.

Published October 24, 2023, in JMIR AI, the report is the first application of NLP to identify markers of psychological distress in health care workers, according to Dr Malgaroli.

For the study, the team collected data from men and women throughout the United States who sought teletherapy between March and July 2020. The researchers then used an NLP program to review session transcripts from the first 3 weeks of treatment.

Among the findings, health care workers shared 4 conversation themes around practicing medicine: virus-related fears, working on the hospital floor and in intensive care units, patients and masks, and health care roles. Meanwhile, therapy session transcripts from those working in other fields contained only one topic about the pandemic and one related to their jobs.

Although the overall heightened risk for anxiety and depression among those who discussed working in a hospital was small (3.6%), the study authors say they expect the model to capture additional signs of distress as more data are added.

“These results suggest that natural language processing may one day become an effective screening tool for detecting and tracking anxiety and depression symptoms,” said study senior author psychiatrist Naomi Simon, MD, professor, Department of Psychiatry, NYU Langone Health.

Also a vice chairperson in the Department of Psychiatry at NYU Langone Health, Dr Simon notes that another potential future direction for this approach could be to give health care workers a way to confidentially record themselves answering brief questions. These responses could then be analyzed with NLP algorithms to calculate risk for mental health conditions, such as depression or anxiety disorders. The feedback would then be provided confidentially to the health care worker using the tool, who might be prompted to seek help.
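
As a purely hypothetical illustration of the workflow Dr Simon describes (record a brief response, score it with an NLP model, and return the result only to the worker), the sketch below uses an invented keyword-based scorer; the function names, distress terms, and threshold are assumptions for illustration, not part of any published tool.

```python
# Hypothetical sketch only: a toy stand-in for the screening workflow described
# above. The terms, scoring, and threshold are invented for illustration.
from dataclasses import dataclass

@dataclass
class ScreeningResult:
    anxiety_risk: float      # 0-1, illustrative score
    depression_risk: float   # 0-1, illustrative score
    suggest_followup: bool   # whether to prompt the worker to seek help

DISTRESS_TERMS = {"exhausted", "hopeless", "panic", "can't sleep", "overwhelmed"}

def screen_response(text: str, threshold: float = 0.5) -> ScreeningResult:
    """Toy keyword-count stand-in for an NLP risk model."""
    lowered = text.lower()
    hits = sum(term in lowered for term in DISTRESS_TERMS)
    score = min(1.0, hits / 3)
    return ScreeningResult(anxiety_risk=score,
                           depression_risk=score,
                           suggest_followup=score >= threshold)

# Example use; per the article, the result would be shown only to the worker.
print(screen_response("I feel exhausted and overwhelmed and can't sleep"))
```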

The researchers caution that the report only captured the mental state of patients early in their treatment. As a result, the team next plans to explore how the discussion topics change over time as therapy progresses.

Funding for the study was provided by the National Institutes of Health (grants KL2TR001446, R44MH124334, and R01MH125179). Further research support was provided by the American Foundation for Suicide Prevention, the U.S. Department of Defense, and the Patient-Centered Outcomes Research Institute. Talkspace, a mobile psychotherapy company, provided the data for the analysis but was not otherwise involved in the study.

Dr Simon consults for biotechnology companies Axovant Sciences and Genomind, as well as for pharmaceutical companies Springworks Therapeutics, Praxis Therapeutics, and Aptinyx, and the information services company Wolters Kluwer. She also has spousal equity in G1 Therapeutics, which develops cancer treatments. The terms and conditions of these arrangements are being managed in accordance with the policies of NYU Langone.

In addition to Drs Malgaroli and Simon, another NYU investigator involved in the study was Emma Jennings, BS. Other study investigators included Emily Tseng, MS, and Tanzeem Choudhury, PhD (Cornell University, Ithaca, New York), and Thomas Hull, PhD (Talkspace, New York City).

Media Inquiries:

Shira Polan

Phone: 212-404-4279

shira.polan@nyulangone.org

Original article:

Malgaroli M, Tseng E, Hull TD, Jennings E, Choudhury TK, Simon NM. Association of Health Care Work With Anxiety and Depression During the COVID-19 Pandemic: Structural Topic Modeling Study.

JMIR AI 2023.

DOI: 10.2196/47223

URL: https://ai.jmir.org/2023/1/e47223/

About JMIR Publications
JMIR Publications is a leading, born-digital, open-access publisher of 30+ academic journals and other innovative scientific communication products that focus on the intersection of health and technology. Its flagship journal, the Journal of Medical Internet Research, is the leading digital health journal globally in content breadth and visibility, and it is the largest journal in the medical informatics field.

To learn more about JMIR Publications, please visit https://www.jmirpublications.com or connect with us via YouTube, Facebook, Twitter, LinkedIn, or Instagram.

If you are interested in learning more about promotional opportunities, please contact us at communications@jmir.org

The content of this communication is licensed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, published by JMIR Publications, is properly cited. JMIR is a registered trademark of JMIR Publications.