Tundra Space

Clinical Research Directory

Browse clinical research sites, groups, and studies.

NOT YET RECRUITING
NCT06081569

Multimodal Deep Learning for the Diagnosis and Assessment of Alzheimer's Disease

Sponsor: First Hospital of China Medical University

View on ClinicalTrials.gov

Summary

Alzheimer's disease (AD) is the most common form of dementia and one of the most costly and lethal diseases. As the population ages rapidly, AD will impose a growing burden on society and the economy. It manifests as progressive loss of memory, language and visuospatial function, executive function, and the ability to carry out daily living activities. The pathophysiological changes of AD begin 10-20 years before clinical symptoms appear, yet there is still no effective strategy for early diagnosis. Mild cognitive impairment (MCI) is considered a transitional state between healthy aging and a clinical diagnosis of dementia and has received increasing attention as a separate diagnostic entity. To make a diagnosis, clinicians must comprehensively consider multimodal medical information, including clinical symptoms, neuroimaging, neuropsychological tests, laboratory examinations, etc.

Multimodal deep learning has risen to this challenge: it can integrate diverse modalities of biological information and capture the relationships among them, contributing to higher accuracy and efficiency. It has been widely applied in imaging, tumor pathology, genomics, and other fields. To date, however, deep learning studies of AD have focused mainly on multimodal neuroimaging, whereas the full range of multimodal medical information still requires comprehensive integration and intelligent analysis. Moreover, studies reveal that subtle symptoms in MCI and early-stage AD, such as gait disorder, impaired facial expression recognition, and speech and language impairment, may also play an effective role in diagnosis and assessment. Clinicians can hardly detect these slight and complex changes, which instead call for thorough mining of video and audio information by multimodal deep learning.
In conclusion, we aim to explore the features of gait disorder, impaired facial expression recognition, and speech and language impairment in MCI and AD, and to analyze their diagnostic efficiency. We will identify how strongly diagnosis depends on each kind of multimodal medical information and ultimately build an optimal multimodal diagnostic method that uses the most convenient and economical information. In addition, based on follow-up observation of how multimodal medical information changes as AD and MCI progress, we expect to establish an effective and convenient diagnostic strategy.
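As an illustration of what "integrating the modalities" can mean in the simplest case, the sketch below shows a toy late-fusion step: each modality (gait video, speech audio, facial expression video) is assumed to have its own model producing a risk score, and the scores are combined by a weighted average. This is a hypothetical sketch for intuition only, not the study's actual method; the modality names mirror the study design, but the scoring functions and weights are invented.

```python
# Toy late-fusion sketch (illustrative only, not the study's model).
# Assumes each single-modality model outputs a risk score in [0, 1].

def fuse_modality_scores(scores, weights=None):
    """Combine per-modality risk scores into a single fused score.

    scores  -- dict mapping modality name -> that modality's model output
    weights -- optional dict of relative weights; defaults to equal weighting
    """
    if weights is None:
        weights = {name: 1.0 for name in scores}
    total_weight = sum(weights[name] for name in scores)
    return sum(scores[name] * weights[name] for name in scores) / total_weight

# Hypothetical outputs of three single-modality models for one participant.
modality_scores = {
    "gait_video": 0.62,
    "speech_audio": 0.71,
    "facial_expression_video": 0.55,
}

fused = fuse_modality_scores(modality_scores)
print(round(fused, 3))  # equal-weight average of the three scores
```

Real multimodal deep learning models typically fuse learned feature representations rather than final scores, so that cross-modal relationships can be captured; the weighted-average form here only conveys the idea that cheaper modalities can be up- or down-weighted when building a convenient, economical diagnostic method.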

Key Details

Gender

All

Age Range

50 Years - 85 Years

Study Type

OBSERVATIONAL

Enrollment

300

Start Date

2023-10-15

Completion Date

2026-10-15

Last Updated

2023-10-13

Healthy Volunteers

Yes

Interventions

DIAGNOSTIC_TEST

Gait video; speech video; facial expression video

Videos of participants' gait, facial expressions, and speech will be recorded and analyzed further. Other routine diagnostic tests, such as MRI imaging and cognitive scales, will also be performed.