We’re excited to have Adi Ganesan, a PhD researcher at Stony Brook University, Penn, and Vanderbilt, on the show. We talk about how large language models (LLMs) are being tested and used in psychology, citing examples from mental health research. Fun fact: Adi was Sid's research partner during his Ph.D. program.
The intersection of artificial intelligence and mental health represents one of the most promising but challenging areas of research in computational social science. In this discussion, Adi Ganesan shares critical insights from his research about the role language models can play in psychological research and therapeutic settings.
Contrary to the hyperbolic claims that language models can solve virtually any problem, Ganesan emphasizes that large language models (LLMs) have specific limitations when applied to mental health contexts. These systems excel at certain tasks but struggle with others in ways that mirror human clinical limitations. For instance, LLMs demonstrate difficulty detecting psychomotor retardation or agitation from text alone – a limitation that parallels challenges human clinicians face when restricted to written communication without visual or auditory cues. Interestingly, the models also tend to be oversensitive in detecting suicidality from language, potentially flagging concerning content that may not represent genuine risk.
One of the most significant challenges with current language models in therapeutic contexts is their eagerness to jump directly to problem-solving rather than spending time understanding the patient's situation. As Ganesan points out, citing research from Tim Althoff's group at the University of Washington, these models often bypass the critical exploration phase that human therapists prioritize. Instead of taking time to comprehend the patient's perspective and develop rapport, AI systems frequently rush to provide solutions – a behavior that undermines the therapeutic process and potentially prevents patients from developing their own problem-solving skills.
Despite these limitations, Ganesan identifies several promising applications for language models in mental health settings. Cognitive reframing, a therapeutic technique that helps patients identify and challenge negative thought patterns, presents a particularly valuable opportunity. Current practice typically involves patients completing worksheet exercises independently, which can be cognitively demanding during periods of distress. LLMs could assist by helping patients identify thought traps and by suggesting alternative perspectives, functioning as an accessible tool between therapy sessions. Additionally, these systems show potential as note-taking assistants during clinical sessions, freeing therapists to focus more fully on patient interaction rather than documentation.
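To make the idea concrete, here is a minimal sketch of what an LLM-assisted reframing helper might look like. The prompt wording, the `complete` callable standing in for any LLM client, and the list of thought traps are illustrative assumptions, not a tested clinical tool or Ganesan's actual method.

```python
from typing import Callable

# Common cognitive distortions ("thought traps") drawn from standard CBT
# worksheets; this particular list is illustrative, not clinically validated.
THOUGHT_TRAPS = [
    "all-or-nothing thinking",
    "catastrophizing",
    "mind reading",
    "overgeneralization",
    "labeling",
]

REFRAMING_PROMPT = """You are assisting with a cognitive reframing exercise.
First, spend time understanding the situation; do not jump to solutions.
Automatic thought: "{thought}"
1. Which of these thought traps, if any, does it resemble: {traps}?
2. Offer one gentler, evidence-based alternative perspective the person
   could consider. Phrase it as a suggestion, not a prescription."""


def suggest_reframe(thought: str, complete: Callable[[str], str]) -> str:
    """Build the reframing prompt and delegate to any LLM completion
    function `complete(prompt) -> str` (hypothetical interface)."""
    prompt = REFRAMING_PROMPT.format(thought=thought, traps=", ".join(THOUGHT_TRAPS))
    return complete(prompt)


if __name__ == "__main__":
    # Stub completion function so the sketch runs without any API access.
    echo = lambda prompt: "[model response would appear here]"
    print(suggest_reframe("I failed one exam, so I'm going to fail everything.", echo))
```

Keeping the model call behind a generic `complete` callable is a deliberate choice here: the scaffolding (worksheet structure, thought-trap vocabulary, the instruction to understand before solving) matters more than which LLM provider sits behind it.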
The discussion also explores the complex challenge of evaluating language models for mental health applications. Unlike many commercial deployments that prioritize engagement metrics, responsible implementation requires multifaceted evaluation frameworks addressing effectiveness, privacy, security, and appropriate levels of engagement. Ganesan highlights the importance of staged evaluation approaches, drawing parallels to the development of self-driving technologies with clearly defined capability levels. Before deploying systems directly with patients, researchers should first evaluate models against archived data, then with expert clinicians simulating patient interactions, and finally in carefully controlled trials with real patients.
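One way to picture the staged evaluation Ganesan describes is as an explicit gate between levels, loosely analogous to self-driving capability levels. The stage names and single-score thresholds below are assumptions for illustration; real gating criteria would be multifaceted, covering effectiveness, privacy, security, and engagement.

```python
from enum import IntEnum


class EvalStage(IntEnum):
    """Illustrative deployment gates, ordered from lowest to highest risk."""
    ARCHIVED_DATA = 1         # retrospective evaluation on archived, de-identified data
    CLINICIAN_SIMULATION = 2  # expert clinicians role-playing patient interactions
    CONTROLLED_TRIAL = 3      # carefully controlled trial with real patients


# Hypothetical pass thresholds per stage, tightening as risk increases.
PASS_THRESHOLDS = {
    EvalStage.ARCHIVED_DATA: 0.80,
    EvalStage.CLINICIAN_SIMULATION: 0.90,
    EvalStage.CONTROLLED_TRIAL: 0.95,
}


def may_advance(stage: EvalStage, score: float) -> bool:
    """A system only moves past a stage after clearing that stage's gate."""
    return score >= PASS_THRESHOLDS[stage]


if __name__ == "__main__":
    print(may_advance(EvalStage.ARCHIVED_DATA, 0.85))         # True: proceed to clinician simulation
    print(may_advance(EvalStage.CLINICIAN_SIMULATION, 0.70))  # False: do not advance to real patients
```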
Perhaps most intriguing is the exploration of theory of mind – the ability to understand others' mental states and perspectives – in language models. While current systems can pass simple theory of mind tests, they struggle with the rich contextual understanding required in therapeutic settings. Developing models that can truly comprehend a patient's unique perspective, history, and needs remains a significant challenge but represents a crucial direction for future research.
As we continue to explore the potential of AI in mental health, responsible development requires us to acknowledge both the promising capabilities and inherent limitations of these systems, ensuring they augment rather than replace the human connection at the heart of effective therapy.
Connect with them to comment on your favorite topics: