Tundra Space

Clinical Research Directory

Browse clinical research sites, groups, and studies.

NOT YET RECRUITING
NCT07022769
Phase: N/A

Testing an AI Large Language Model Tool for Cognitive Debiasing in Musculoskeletal Care

Sponsor: University of Texas at Austin


Summary

The goal of this clinical trial is to find out whether using an artificial intelligence (AI) tool called a Large Language Model (LLM) can help patients think more clearly about their symptoms and improve their trust and experience during a visit to a musculoskeletal specialist.

The study will answer two main questions:

1. Does an LLM-guided checklist that encourages patients to reflect on their beliefs about their symptoms improve their trust in the clinician (measured using the TRECS-7 scale)?
2. Does the checklist improve how patients feel about their consultation overall?

Participants will be randomly assigned to one of two groups:

* One group will receive an LLM-guided checklist that helps them think more flexibly about their condition.
* The other group will receive an LLM-generated likely diagnosis and a brief explanation of their symptoms.

In both groups, the information from the AI tool will be shared with both the patient and the clinician before the consultation.

Patients in the debiasing (intervention) group will:

* Complete a short set of questions with help from a researcher
* Receive a simple summary from the AI that reflects their beliefs and gently challenges any unhelpful thinking
* Attend their regular specialist appointment
* Complete a short survey afterwards capturing their thoughts, experience, and basic demographics

Patients in the diagnosis-only (control) group will:

* Describe their symptoms to the AI LLM
* Receive a likely diagnosis and a short explanation based on this description
* Attend their regular specialist appointment
* Complete a short survey afterwards capturing their thoughts, experience, and basic demographics
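The 1:1 random assignment between the two groups can be sketched as follows. This is a minimal illustration, not the study's actual allocation procedure; the arm labels, function name, and the assumption of simple permuted 1:1 allocation are mine, not the protocol's.

```python
import random

# Hypothetical arm labels based on the group descriptions above.
ARMS = ("debiasing", "diagnosis-only")  # intervention vs. control

def assign_arms(participant_ids, seed=None):
    """Assign each participant to one of the two study arms.

    Sketches a simple 1:1 permuted allocation (an assumed scheme):
    shuffle the roster, then give the first half to one arm and the
    rest to the other.
    """
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {pid: (ARMS[0] if i < half else ARMS[1])
            for i, pid in enumerate(ids)}
```

With the planned enrollment of 150 participants, this yields 75 per arm.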

Official title: Comparison of a Large Language Model (LLM)-Facilitated Cognitive Debiasing Strategy Versus LLM-Generated Diagnostic Feedback Alone in Musculoskeletal Specialty Care: A Randomized Controlled Trial

Key Details

Gender

All

Age Range

18 Years and older

Study Type

INTERVENTIONAL

Enrollment

150

Start Date

2025-06-23

Completion Date

2025-12-31

Last Updated

2025-06-26

Healthy Volunteers

No

Interventions

BEHAVIORAL

LLM-facilitated cognitive debiasing aid

As part of the intervention:

1. Patients first respond to a series of questions about their beliefs regarding their symptoms (e.g., "What's usually behind these symptoms?"), with responses transcribed verbatim via tablet.
2. These responses are input into a Large Language Model (LLM), which generates a brief, supportive summary of the patient's beliefs; this summary is shared back with the patient to encourage self-awareness and reflection.
3. Patients are then invited to consider prompts such as, "What emotions or circumstances might be influencing your thinking?", with their reflections again transcribed.
4. The LLM analyzes these reflections to identify potential signs of emotional distress or maladaptive beliefs, and this output is again provided to the patient.
5. The LLM summary of identified maladaptive beliefs is also shown to the clinician ahead of the consultation to support more tailored, empathetic communication.
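The two-pass workflow above (summarize beliefs, then screen reflections and brief the clinician) can be sketched in code. This is a hypothetical illustration only: `summarize_beliefs` and `flag_maladaptive_beliefs` are stand-ins for the real LLM calls, and the keyword screen inside the stub is my invention, not part of the study protocol.

```python
from dataclasses import dataclass, field

@dataclass
class DebiasingSession:
    """Holds one patient's transcribed responses and the two LLM outputs."""
    belief_responses: list = field(default_factory=list)
    reflection_responses: list = field(default_factory=list)
    patient_summary: str = ""
    clinician_summary: str = ""

def summarize_beliefs(responses):
    # Placeholder for the LLM call that produces a brief, supportive
    # summary of the patient's stated beliefs.
    return "You described your symptoms as: " + "; ".join(responses) + "."

def flag_maladaptive_beliefs(reflections):
    # Placeholder for the LLM call that screens reflections for signs of
    # distress. A naive keyword check stands in for the model here.
    return [r for r in reflections
            if "never" in r.lower() or "always" in r.lower()]

def run_session(beliefs, reflections):
    session = DebiasingSession(beliefs, reflections)
    # Pass 1: reflect the patient's beliefs back to encourage reflection.
    session.patient_summary = summarize_beliefs(beliefs)
    # Pass 2: screen the reflections and prepare the clinician-facing
    # summary shared ahead of the consultation.
    flagged = flag_maladaptive_beliefs(reflections)
    session.clinician_summary = (
        f"Possible maladaptive beliefs: {flagged}"
        if flagged else "No maladaptive beliefs flagged."
    )
    return session
```

The key design point the sketch captures is that both outputs go to the patient, while only the second (the maladaptive-belief summary) is additionally routed to the clinician before the visit.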

Locations (1)

Dell Medical School, University of Texas at Austin

Austin, Texas, United States