Symposia
Technology/Digital Health
Torrey Creed, Ph.D. (she/her/hers)
Associate Professor
University of Pennsylvania
Philadelphia, PA, United States
Roisin Slevin, BS
Research Project Manager
Lyssn.io
Seattle, WA, United States
Amber Calloway, Ph.D. (she/her/hers)
Assistant Professor
University of Pennsylvania
Philadelphia, PA, United States
Brian Pace, Ph.D. (he/him/his)
Director of Clinical AI
Lyssn.io
Seattle, WA, United States
Jordan Pruett, Ph.D.
Data Scientist
Lyssn.io
Seattle, WA, United States
Zac Imel, Ph.D.
Chief Science Officer
Lyssn.io
Salt Lake City, UT, United States
David Atkins, Ph.D. (he/him/his)
CEO
Lyssn.io
Seattle, WA, United States
Research has demonstrated the efficacy and cost-effectiveness of Cognitive Behavioral Therapy (CBT) for problems ranging from depression to psychosis to pain. Despite billions of dollars spent to disseminate CBT, community access remains limited. Without ongoing performance-based feedback, the effects of training on delivery of evidence-based practice wear off over time, yet access to expert-led training and feedback is rare. This limitation may be most acute for providers in under-resourced settings, whose clients have less access to social supports and higher rates of adverse social determinants of health, and who may themselves have particularly limited access to training and performance feedback. Using technology rather than human experts to train clinicians and provide feedback could promote scalability, sustainability, and more effective allocation of limited resources.
In collaboration with subject matter experts, we developed a series of video didactics, sample dialogues, and other instructional materials to create a virtual CBT training platform. At the end of each skill module, a video vignette of a standardized patient presents a scenario resembling a key therapy moment. The learner's task is to verbally model the skill from that module in response to the vignette. AI provides immediate feedback on whether the skill was on- or off-target, along with tips for high-quality responses, and learners can retry until they have successfully demonstrated the skill.
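The feedback loop described above can be sketched in code. This is a minimal illustration under stated assumptions, not the platform's implementation: the real system scores responses with trained AI models, whereas here a hypothetical keyword heuristic stands in for the classifier, and all skill names, cues, and tips are invented for the example.

```python
# Minimal sketch of the on-/off-target feedback loop. A keyword heuristic
# stands in for the trained model; skill names, cues, and tips are invented.

SKILL_CUES = {
    # hypothetical lexical cues per CBT skill module
    "catching_a_thought": {"thought", "through your mind", "telling yourself"},
    "checking_a_thought": {"evidence", "how true", "another way to look"},
}

TIPS = {
    "catching_a_thought": "Ask what was going through the client's mind in that moment.",
    "checking_a_thought": "Invite the client to weigh evidence for and against the thought.",
}

def score_response(skill: str, transcript: str) -> dict:
    """Judge a learner's spoken response (as text) and return feedback."""
    text = transcript.lower()
    hits = sum(cue in text for cue in SKILL_CUES[skill])
    on_target = hits >= 1  # placeholder threshold; the real system is model-based
    return {
        "skill": skill,
        "on_target": on_target,
        "tip": None if on_target else TIPS[skill],  # tip supports a retry
    }

# Learner attempt at "catching a thought" after a vignette:
feedback = score_response("catching_a_thought",
                          "What was going through your mind just then?")
```

In a real pipeline, the learner's verbal response would first pass through speech-to-text before scoring, and an off-target result would return the tip and re-present the vignette for another attempt.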
To create and train the AI models, 1,509 unique learner responses to the skill modules were gathered. In the first round of coding to develop the initial models, responses demonstrated presenting the cognitive model (n=336), as well as catching (n=303), checking (n=270), and changing (n=262) a thought. All responses were coded by CBT experts, and a subset (n=144) were double coded (ICCs ranging from .49 to .92). Iterative model improvements are underway and will be discussed.
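As one concrete illustration of the reliability statistic reported for the double-coded subset, a two-way random-effects, absolute-agreement, single-rater ICC (ICC(2,1) in Shrout and Fleiss's notation) can be computed as below. This is a generic sketch of the standard formula, not the authors' analysis code, and the sample ratings are invented.

```python
# Sketch of ICC(2,1): two-way random effects, absolute agreement, single rater.
# Generic implementation of the standard Shrout & Fleiss formula; the sample
# ratings at the bottom are invented for illustration.

def icc2_1(ratings):
    """ratings: list of n subjects, each a list of k raters' scores."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    col_means = [sum(ratings[i][j] for i in range(n)) / n for j in range(k)]

    ss_total = sum((x - grand) ** 2 for row in ratings for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)   # between subjects
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)   # between raters
    ss_error = ss_total - ss_rows - ss_cols                  # residual

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))

    return (ms_rows - ms_error) / (
        ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
    )

# Two coders rate three responses; closer agreement pushes the ICC toward 1.
print(round(icc2_1([[1, 2], [3, 3], [5, 4]]), 3))  # → 0.857
```

Values in the .49 to .92 range reported above would correspond, by common conventions, to agreement from moderate to excellent across the different skill codes.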
Findings indicate that machine learning models can reliably identify skillful delivery of fundamental CBT skills, offering the potential for scalable, efficient, and accessible training in CBT and other evidence-based practices.