
Synthesizing images of tau pathology from cross-modal neuroimaging using deep learning

Jeyeon Lee, Brian J. Burkett, Hoon-Ki Min, Matthew L. Senjem, Ellen Dicks, Nick Corriveau-Lecavalier, Carly T. Mester, Heather J. Wiste, Emily S. Lundt, Melissa E. Murray, Aivi T. Nguyen, Ross R. Reichard, Hugo Botha, Jonathan Graff-Radford, Leland R. Barnard, Jeffrey L. Gunter, Christopher G. Schwarz, Kejal Kantarci, David S. Knopman, Bradley F. Boeve, Val J. Lowe, Ronald C. Petersen, Clifford R. Jack, David T. Jones*
*Corresponding author for this work
  • Mayo Clinic, Rochester, MN
  • Hanyang University
  • Mayo Clinic College of Medicine and Science

Research output: Contribution to journal › Article › Academic › peer-review


Abstract

Given the prevalence of dementia and the development of pathology-specific disease-modifying therapies, high-value biomarker strategies to inform medical decision-making are critical. In vivo tau-PET is an ideal target as a biomarker for Alzheimer’s disease diagnosis and as a treatment outcome measure. However, tau-PET is not currently widely accessible to patients compared to other neuroimaging methods. In this study, we present a convolutional neural network (CNN) model that imputes tau-PET images from more widely available cross-modality imaging inputs. Participants (n = 1192) with brain T1-weighted MRI (T1w), fluorodeoxyglucose (FDG)-PET, amyloid-PET and tau-PET were included. We found that a CNN model can impute tau-PET images with high accuracy, the highest being for the FDG-based model, followed by amyloid-PET and T1w. In testing the implications of artificial intelligence-imputed tau-PET, only the FDG-based model significantly improved performance in classifying tau positivity and diagnostic groups compared to the original input data, suggesting that application of the model could enhance the utility of the metabolic images. The interpretability experiment revealed that the FDG- and T1w-based models utilized non-local input from physically remote regions of interest to estimate the tau-PET, but this was not the case for the Pittsburgh compound B-based model. This implies that the model can learn a biological relationship between FDG-PET, T1w and tau-PET that is distinct from the relationship between amyloid-PET and tau-PET. Our study suggests that extending neuroimaging’s use with artificial intelligence to predict protein-specific pathologies has great potential to inform emerging care models.
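The cross-modal imputation task described in the abstract can be sketched as an image-to-image 3D CNN that maps an input volume (e.g., FDG-PET) to an estimated tau-PET volume. The following is a minimal, hypothetical PyTorch sketch of such an encoder-decoder network; the abstract does not specify the paper's actual architecture, layer sizes, or training procedure, so all names and dimensions here are illustrative assumptions.

```python
# Hypothetical sketch of a 3D encoder-decoder CNN for cross-modal
# imputation (e.g., FDG-PET -> tau-PET). Not the paper's architecture;
# layer counts and channel widths are illustrative only.
import torch
import torch.nn as nn

class TauImputationCNN(nn.Module):
    def __init__(self, in_channels: int = 1):
        super().__init__()
        # Encoder: strided 3D convolutions downsample the volume
        # while widening the channel dimension.
        self.encoder = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(16, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        # Decoder: transposed convolutions restore the original
        # resolution; one output channel holds the imputed tau-PET
        # intensity map.
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(32, 16, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose3d(16, 1, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

model = TauImputationCNN()
fdg = torch.randn(2, 1, 32, 32, 32)  # toy batch of FDG-PET volumes
tau_hat = model(fdg)                 # imputed tau-PET volumes
print(tau_hat.shape)                 # same spatial shape as the input
```

In practice such a model would be trained voxel-wise against measured tau-PET (e.g., with an L1 or L2 reconstruction loss), with a separate model per input modality, mirroring the FDG-, amyloid- and T1w-based variants compared in the abstract.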
Original language: English
Pages (from-to): 980-995
Number of pages: 16
Journal: Brain
Volume: 147
Issue number: 3
DOIs
Publication status: Published - 1 Mar 2024
Externally published: Yes

Keywords

  • Alzheimer’s disease
  • FDG PET
  • cross-modality imputation
  • deep learning
  • tau PET

