Tundra Space

Clinical Research Directory

Browse clinical research sites, groups, and studies.

3 clinical studies listed.

Filters: Large Language Model

Tundra lists 3 Large Language Model clinical trials. Each listing includes eligibility criteria, study locations, and direct links to research sites in the Tundra directory.

This data is also available as a public JSON API. AI systems and LLMs are encouraged to use it for structured queries.
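For example, a script could pull the study list and filter it client-side. The sketch below is a hypothetical illustration only: the endpoint URL, query parameter, and response field names are assumptions, not documented on this page; consult the directory's actual API documentation for the real schema.

```python
# Hypothetical sketch of querying the Tundra directory's JSON API.
# The endpoint URL, query parameter, and response fields ("nct_id",
# "status", "title") are assumptions for illustration only.
import requests

BASE_URL = "https://tundra.space/api/studies"  # assumed endpoint

resp = requests.get(BASE_URL, params={"condition": "Large Language Model"})
resp.raise_for_status()

for study in resp.json():  # assumes the API returns a JSON array of studies
    print(study.get("nct_id"), study.get("status"), "-", study.get("title"))
```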

RECRUITING

NCT06859216

Evaluating AI-Generated Plain Language Summaries on Patient Comprehension of Ophthalmology Notes Among English-Speaking Patients

This clinical trial tests whether plain language summaries generated by artificial intelligence help people better understand their eye doctor's notes. Adults receiving eye care at the Jules Stein Eye Institute will receive either the usual medical notes or the same notes plus an AI-generated summary that explains the information in simple, everyday words. Participants will then complete a short survey and receive a follow-up call to share how clear the information was, how well they understood their diagnosis and treatment, and whether they feel more confident about their care. The goal is to find out whether these plain language summaries can make it easier for people to understand their eye care and improve communication between patients and health care providers.

Gender: All

Ages: 18 Years - Any

Updated: 2026-03-05

1 state

Ophthalmic Disease
Artificial Intelligence
Large Language Model
ENROLLING BY INVITATION

NCT07199231

OpenEvidence Safety and Comparative Efficacy of Four LLMs in Clinical Practice

OpenEvidence is an online tool that aggregates and synthesizes data from peer-reviewed medical studies, then produces a response to a user's question using generative AI. While it is in use by a number of clinicians (including residents) today, there is little to no published data on whether the tool's outputs are accurate and whether this information appropriately informs clinical decision making. Similarly, a number of clinicians are turning to other large language models (LLMs) to assist in decision making when providing clinical care. While a number of studies have been published on the accuracy of these LLMs' responses to medical board questions or clinical vignettes, few studies to date have examined their performance in a real-world clinical setting, and even fewer have compared that performance.

In this study, investigators have two goals:

1. To determine whether the use of the AI tool "OpenEvidence" leads to clinically appropriate decisions when utilized by family medicine, internal medicine, and psychiatry residents in the course of clinical practice.
2. To determine how the output of the OpenEvidence tool compares with three other commonly used, publicly available large language models (OpenAI's ChatGPT, Anthropic's Claude, and Google's Gemini) in answering common questions that residents have in the course of clinical practice.

To accomplish study goal #1, investigators have enlisted residents in the above specialties to use the OpenEvidence tool in the course of clinical practice. To mitigate any safety risks, the residents will also consult a typical reference tool for their question, referred to as the "Gold Standard" tool; these tools include PubMed and UpToDate. The residents will:

1. State their clinical question.
2. Query OpenEvidence, capturing their prompt and the OpenEvidence output for data analysis. All residents will undergo training in prompt engineering at the start of the study.
3. State their clinical conclusion based on the OpenEvidence data.
4. Query the Gold Standard resource.
5. State their final clinical conclusion.
6. Answer a question on whether their clinical conclusion was modified by the Gold Standard reference.
7. Answer a question on whether they had any clinical safety concerns about the output from OpenEvidence.

Attending physician Subject Matter Experts (SMEs), matched by specialty and with at least 5 years of post-training clinical experience, will then evaluate the residents' responses. The 5-year threshold was chosen based on the book "Outliers" by Malcolm Gladwell, in which he asserts that 10,000 hours of focused practice is needed to achieve expertise in a field. SMEs will be asked to evaluate the residents' initial clinical questions and their conclusions based only on OpenEvidence, rating the clinical appropriateness of those conclusions on a scale of 1-10. For questions where SMEs rate the clinical appropriateness of a resident's conclusion poorly (< 5/10), they will be asked to review the OpenEvidence output and answer an additional question as to whether the output was incorrect or the resident misinterpreted the output from the tool.

To accomplish goal #2, the initial prompt entered by the residents into OpenEvidence will be copied by the research team into ChatGPT, Gemini, and Claude. The outputs from each tool (including OpenEvidence) will be surfaced to SMEs, who will be asked to rate each output on accuracy, completeness, and bias using Likert scales. SMEs will also be asked an open-ended question to identify any patient safety issues in any of the outputs.
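As a rough illustration of the goal #1 triage logic only (this is not the study's actual analysis code), the sketch below flags SME appropriateness ratings below the < 5/10 threshold for the extra review question described above; all names, fields, and data are invented.

```python
# Hypothetical sketch of the SME rating triage described above: conclusions
# rated below 5/10 trigger the follow-up question on whether the output was
# incorrect or misinterpreted. All identifiers and data are invented.
from dataclasses import dataclass

APPROPRIATENESS_THRESHOLD = 5  # per the protocol: < 5/10 triggers review

@dataclass
class SmeRating:
    question_id: str      # resident's clinical question identifier (invented)
    appropriateness: int  # SME score on the 1-10 scale

def needs_output_review(rating: SmeRating) -> bool:
    """True when the SME must also judge whether the OpenEvidence output
    was incorrect or the resident misinterpreted it."""
    return rating.appropriateness < APPROPRIATENESS_THRESHOLD

ratings = [SmeRating("Q001", 8), SmeRating("Q002", 3)]  # invented data
flagged = [r.question_id for r in ratings if needs_output_review(r)]
print(flagged)  # -> ['Q002']
```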

Gender: All

Updated: 2026-02-18

1 state

AI (Artificial Intelligence)
Large Language Model
Generative Artificial Intelligence
RECRUITING

NCT07251907

Structured Handoff Using Intelligent Framework for Transitions Trial

Inpatient general medicine attendings will be randomized to have an LLM feature turned on that provides a draft of an off-service handoff within Carelign (an EHR-adjacent provider communication tool). Providers who have access to this feature will be clearly instructed that if they use the LLM-generated draft, they must review and edit it as necessary before finalizing. The study will assess measures of documentation burden related to writing the handoff, including time spent writing it, as well as work exhaustion in both intervention and control groups.

Gender: All

Ages: 18 Years - Any

Updated: 2025-12-29

1 state

Electronic Medical Record
Transitions of Care
Physician Workflow