Assessment
Akram J. Yusuf, B.S.
Graduate Student
Clinical Psychology Program, University of Maryland, College Park
Greenbelt, Maryland, United States
Hide Okuno, M.A.
Graduate Student
Clinical Child Psychology Program, University of Kansas
Kansas City, Kansas, United States
Andres De Los Reyes, Ph.D.
Professor
Clinical Psychology Program, University of Maryland, College Park
Washington, DC, United States
Objective:
Best practices when assessing youth mental health involve administering instruments to multiple informants, typically youth, their parents, and their teachers. These instruments may inform high-stakes decisions about the services youth receive, including diagnosis and treatment planning. However, these practices routinely produce discrepant results: any two instruments rarely align in their conclusions about youth and the services they may need. These discrepancies create challenges for clinicians who must integrate data and make evidence-based decisions, largely because the most commonly used integration strategies (e.g., composite scores, latent variable models) treat all discrepant results as if they have no value, even when the evidence indicates otherwise (De Los Reyes, 2024). To address these challenges, the Satellite Model guides users to strategically select informants whose discrepant results contain valid data (Kraemer et al., 2003), and emerging work supports the validity of this strategy (Makol et al., 2025). Yet users of the Satellite Model require guidance on how to interpret the scores it produces to facilitate decision-making. We applied receiver operating characteristic (ROC; Youngstrom, 2014) procedures to data from the Adolescent Brain Cognitive Development (ABCD) Study to detect clinical cutoffs for interpreting the integrated scores the Satellite Model produces.
Methods:
The ABCD Study includes 8,212 youth (Mage = 12.5, SD = 0.51; 52.5% male) at the 1-year follow-up assessment with complete data on parent, youth, and teacher forms of the Achenbach System of Empirically Based Assessment (ASEBA). Each form produces standardized T scores that the Satellite Model integrates into three scores: a trait score (i.e., variance common among informants), a context score (i.e., behavior specific to home vs. school), and a perspective score (i.e., behavior specific to what observers rate about youth vs. what youth self-rate). ROC procedures involve detecting specific trait, context, and perspective scores that distinguish youth on indices relevant to clinical decision-making (e.g., tests of executive functioning, mental health diagnoses, school grades). The ABCD Study includes a host of such variables, and ROC procedures involve identifying validity criteria amenable to creating binary, discrete categories that differentiate youth and yield clinical cutoffs. With clinical cutoffs for the Satellite Model's scores, users would be able to detect youth with elevated overall clinical severity (i.e., trait score), as well as youth whose elevated clinical severity is specific to home or school (i.e., context score) or to a given vantage point (i.e., perspective score).
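The two computational steps described above can be sketched in code. This is a minimal illustration, not the study's analysis code: it assumes the orthogonal-contrast formulation often associated with Kraemer et al. (2003), in which trait, context, and perspective scores are unit-normed sums and differences of the three informants' T scores, and it uses Youden's J as one common criterion for selecting an ROC cutoff. The function names and contrast weights are illustrative assumptions.

```python
import math

def satellite_scores(parent, teacher, youth):
    """Decompose three informants' T scores into orthogonal contrasts
    (an assumed formulation of the Satellite Model):
    trait = shared elevation across informants,
    context = home (parent) vs. school (teacher),
    perspective = observers (parent + teacher) vs. self (youth)."""
    trait = (parent + teacher + youth) / math.sqrt(3)
    context = (parent - teacher) / math.sqrt(2)
    perspective = (parent + teacher - 2 * youth) / math.sqrt(6)
    return trait, context, perspective

def youden_cutoff(scores, criterion):
    """Sweep candidate thresholds over an integrated score and return the
    one maximizing Youden's J = sensitivity + specificity - 1, given a
    binary validity criterion (1 = case, 0 = non-case)."""
    pos = [s for s, c in zip(scores, criterion) if c == 1]
    neg = [s for s, c in zip(scores, criterion) if c == 0]
    best_cut, best_j = None, -1.0
    for cut in sorted(set(scores)):
        sens = sum(s >= cut for s in pos) / len(pos)
        spec = sum(s < cut for s in neg) / len(neg)
        j = sens + spec - 1
        if j > best_j:
            best_cut, best_j = cut, j
    return best_cut, best_j
```

In this sketch, equal ratings across informants yield context and perspective scores of zero, so only the trait score is elevated; discrepant ratings load onto the context or perspective contrasts instead of being discarded as error.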
Conclusion:
By detecting clinical cutoffs grounded in well-established criterion variables and a validated integrative strategy, this study contributes knowledge about decision-making in youth mental health assessment. In turn, professionals will have clinical cutoffs to guide their application of the Satellite Model when making decisions about youth mental health services.