Mirror Speech Entrainment: A Novel Technique for Voice Personalized Speech Entrainment for Nonfluent Aphasia
Sponsor: University of South Florida
Summary
The goal of this clinical trial is to test the use of voice personalization through artificial intelligence (AI) voice cloning on speech entrainment tasks to improve language production of persons with aphasia (PWA). The main question the study aims to answer is:

- What is the impact of personalized voice on speech entrainment in PWA compared to traditional speech entrainment?

Speech entrainment is a technique used by speech-language pathologists to improve the speech production of PWA. Traditionally, speech therapists act as the model for participants to speak along with. This study instead proposes the use of one's own voice (digitally altered) to improve speech production. The study uses a mobile health approach to administer speech entrainment treatment through a mobile app:

- Smartphones with the mobile app pre-installed will be mailed to participants at no cost.
- Participants will complete treatment in the comfort of their homes.
- The experimental treatments involve mirror speech entrainment (speaking along to one's own voice) and traditional speech entrainment (speaking along to someone else's voice).
Key Details
Gender
All
Age Range
18 Years - 70 Years
Study Type
INTERVENTIONAL
Enrollment
20
Start Date
2025-05-01
Completion Date
2026-03-01
Last Updated
2025-04-16
Healthy Volunteers
No
Conditions
Aphasia
Interventions
Mirror speech entrainment
Speech entrainment with the user's own voice using auditory-only and auditory-visual modalities.
Traditional speech entrainment: auditory-only
Traditional speech entrainment (speech entrainment using an external agent's voice) using the auditory-only modality (users only listen and speak along to auditory stimuli)
Traditional speech entrainment: auditory-visual
Traditional speech entrainment (speech entrainment using an external agent's voice) using the auditory-visual modality (users listen and speak along to both auditory and visual stimuli, including mouth movements)
Locations (1)
University of South Florida
Tampa, Florida, United States