Dissemination & Implementation Science
Developing Scalable Knowledge Resources for Effective Implementation of Evidence-Based Tools
Eliza Swindell, B.A.
Project Manager
University of California Los Angeles
Los Angeles, California, United States
When assessing and treating youth with anxiety and depression, mental health providers have access to more evidence-based approaches than ever before. For example, Higa-McMillan et al. (2016) identified thirty well-supported evidence-based treatments for youth with anxiety, and Weersing et al. (2017) found nine for youth with depression, with most treatments focusing on core principles of Cognitive Behavioral Therapy. The number of evidence-based assessments of these disorders in youth is greater still (Spence, 2017).
As more and better assessment and treatment approaches emerge, there has not been an equivalent growth in resources to support clinicians’ effective implementation of these tools. Treatment manuals and assessment instruments are highly complex resources, and their appropriate use often requires extensive training and ongoing user support or consultation.
Our current strategies to support the scaling of these evidence-based practices (e.g., user guides, websites, human staff and volunteers, and subject matter experts) are generally insufficient to meet growing demands due to fragmentation, high monetary and temporal costs, and a relative scarcity of domain experts. In response to this problem, our research team is developing scalable knowledge resources that are machine-driven yet highly accurate in answering user questions in implementation scenarios. Current AI approaches using large language models are insufficient for these purposes, as user questions are often too detailed to be answered without a curated data source. We therefore see the potential of data-driven approaches such as those involved in Computable Biomedical Knowledge (CBK) resources (McCusker et al., 2022) to support implementation of evidence-based assessment and treatment.
In this study, we report on the early-phase design of a CBK resource for assessing anxiety and depression in youth. Utilizing data collected from the NIMH-funded project supporting worldwide implementation of the Revised Child Anxiety and Depression Scale (RCADS), we began with an analysis of user inquiries, seeking to classify their answerability by human or machine supports.
Our sample consisted of over 1,000 emails from users of the RCADS received over 2.5 years. Human coders will classify each inquiry by the resource required to respond to it: 1 (information on the RCADS website), 2 (RCADS staff member), or 3 (expert consultation), and by the use case(s) mentioned in the inquiry: clinical use, research use, translation, derivative creation, scoring, and norms. This coding will support empirical examination of questions such as: Which resources are most commonly required, and which use cases most commonly mentioned? Do inquiries with more use cases tend to require more resources to answer?
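Once coding is complete, the empirical questions above reduce to simple tallies and cross-tabulations. The sketch below illustrates one way this analysis could look in Python; the data shown are hypothetical examples invented for illustration, not actual RCADS inquiries, and the variable names are our own.

```python
from collections import Counter
from statistics import mean

# Hypothetical coded inquiries (illustrative only; not real RCADS data).
# "resource": 1 = RCADS website, 2 = staff member, 3 = expert consultation.
coded_inquiries = [
    {"resource": 1, "use_cases": ["scoring"]},
    {"resource": 2, "use_cases": ["clinical use", "translation"]},
    {"resource": 3, "use_cases": ["research use", "norms", "derivative creation"]},
    {"resource": 1, "use_cases": ["norms"]},
    {"resource": 2, "use_cases": ["clinical use", "scoring"]},
]

# Q1: Which resources are most commonly required, and which use cases
# are most commonly mentioned?
resource_counts = Counter(item["resource"] for item in coded_inquiries)
use_case_counts = Counter(uc for item in coded_inquiries for uc in item["use_cases"])

# Q2: Do inquiries mentioning more use cases tend to require more
# resources? Compare the mean resource code by number of use cases.
by_n_use_cases = {}
for item in coded_inquiries:
    by_n_use_cases.setdefault(len(item["use_cases"]), []).append(item["resource"])
mean_resource = {n: mean(codes) for n, codes in sorted(by_n_use_cases.items())}

print(resource_counts.most_common())
print(use_case_counts.most_common(3))
print(mean_resource)
```

In practice the coded data would come from the coding spreadsheet rather than an in-script list, and a formal test (e.g., a rank correlation between use-case count and resource code) would replace the descriptive comparison of means.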
Additionally, there is an emerging need for guidance on administering and scoring the RCADS with gender-diverse youth. To document this need, we will conduct exploratory analyses to determine how many inquiries mention this issue.
Here, we demonstrate a first step toward building scalable knowledge resources to support implementation of evidence-based tools: characterizing the use cases and questions for which users are telling us resources need to be built.