Tundra Space

Clinical Research Directory

Status: RECRUITING
NCT ID: NCT07328815
Phase: N/A

Mitigating Automation Bias in Physician-LLM Diagnostic Reasoning Using Behavioral Nudges

Sponsor: Lahore University of Management Sciences

View on ClinicalTrials.gov

Summary

The goal of this randomized controlled trial is to evaluate whether behavioral nudges can reduce automation bias, the uncritical acceptance of automated output, in physicians using large language models (LLMs) such as ChatGPT-5.1 for clinical decision-making. The main question it aims to answer is: does a dual-mechanism behavioral nudge intervention (baseline accuracy anchoring plus case-specific, color-coded confidence signals) reduce physicians' uncritical acceptance of incorrect LLM recommendations? Researchers will compare physicians who receive LLM recommendations with the behavioral nudge to those who receive LLM recommendations without it. Participants will:

* Evaluate six clinical vignettes accompanied by LLM-generated recommendations, half of which contain deliberate, clinically significant errors.
* Control group: view LLM recommendations in a standard format without the nudge.
* Treatment group: view ChatGPT's diagnostic accuracy on standard medical datasets as an initial anchor, then receive a color-coded confidence signal alongside each recommendation (e.g., red for low confidence).
* Have their responses evaluated by blinded reviewers using an expert-developed assessment rubric to detect uncritical acceptance of erroneous information.
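As a rough, hypothetical sketch of the two display conditions described above: every identifier and display string here is an illustrative assumption, not part of the registered study protocol or its software.

```python
import random
from dataclasses import dataclass

# Illustrative sketch only: names, structure, and display strings are
# assumptions, not the study's actual materials or software.

@dataclass
class Vignette:
    case_id: int
    llm_recommendation: str
    contains_deliberate_error: bool  # true for half of the six vignettes

def assign_arm() -> str:
    """Randomize a participating physician to one of the two arms."""
    return random.choice(["control", "treatment"])

def render_panel(v: Vignette, arm: str, baseline_accuracy: float,
                 confidence_color: str) -> str:
    """Control sees the recommendation in standard format; treatment also
    sees the baseline-accuracy anchor and a color-coded confidence signal."""
    if arm == "control":
        return v.llm_recommendation
    return (f"ChatGPT baseline diagnostic accuracy: {baseline_accuracy:.0%}\n"
            f"[{confidence_color.upper()}] {v.llm_recommendation}")
```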

Key Details

Gender

All

Age Range

Any - Any

Study Type

INTERVENTIONAL

Enrollment

50

Start Date

2026-01-17

Completion Date

2026-08

Last Updated

2026-03-31

Healthy Volunteers

Yes

Conditions

Interventions

OTHER

Behavioral Nudge Intervention

Participants in the treatment group will receive a behavioral nudge intervention embedded in the LLM recommendation interface. When the LLM panel is expanded, it presents two synchronized cognitive cues: (1) an anchoring cue at the top of the panel displaying ChatGPT's baseline diagnostic accuracy on standard medical datasets, to set realistic expectations before the specific recommendation is viewed; and (2) a selective attention cue immediately below, which shows the LLM recommendation alongside a case-specific, color-coded confidence signal. The signal is categorized as:

* Red when the mean ensemble confidence falls below the established baseline accuracy, flagging high-uncertainty cases that demand critical evaluation.
* Orange when confidence meets or exceeds the baseline but remains below 100%, intended to prevent complacency and maintain active clinical scrutiny.
* Green for 100% ensemble consensus, although standard cautionary warnings still apply to guard against uncritical acceptance.
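A minimal sketch of how the color categorization described above could be computed; the function name, signature, and 0-1 confidence scale are assumptions, not details from the study record.

```python
def confidence_color(mean_ensemble_confidence: float,
                     baseline_accuracy: float) -> str:
    """Map a case's mean ensemble confidence to the color-coded signal.

    Thresholds follow the study description; the name, signature, and
    0-1 scale are illustrative assumptions.
    """
    if mean_ensemble_confidence >= 1.0:
        # Full ensemble consensus; standard cautionary warnings still apply.
        return "green"
    if mean_ensemble_confidence >= baseline_accuracy:
        # At or above baseline accuracy but below 100%: maintain scrutiny.
        return "orange"
    # Below baseline accuracy: high-uncertainty case, flag for critical review.
    return "red"
```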

Locations (1)

Lahore University of Management Sciences

Lahore, Punjab Province, Pakistan