Deep image reconstruction from human brain activity

Sprague Lab

About the Project

We leverage recent progress in text-to-image generation for our task of deep image reconstruction. After training a joint-embedding model that maps a participant's fMRI responses into a shared image-text space, we use a pre-trained decoder (LAFITE) to reconstruct the stimulus image from the encoded vector.

Student Team

• James Du
• Michael La
• Connor Levenson
• Elise Nguyen

Mentors

• Tommy Sprague, Sponsor
• Sikun Lin, TA

Presentation

About UCSB's Sprague Lab

In the Perception, Cognition, and Action Lab (PCA Lab), we seek to understand how the brain encodes and transforms neural representations of sensory information in service of dynamic behavioral demands. Sometimes different parts of the same scene are more important than others – we may be looking for a friend in the park, or instead watching birds. How do neural representations of an identical scene differ based on our current goals and actions? We tackle this problem with a multi-pronged approach: we design tasks probing different types of visual cognition – such as visual attention, working memory, and decision-making – while quantifying neural representations of stimulus features using computational neuroimaging techniques applied to complementary neural measurements, including fMRI and EEG. Our hope is to better understand how the healthy human brain functions in carefully controlled situations. We believe this will lead to new avenues for improving normal cognitive function, as well as novel targets for diagnosing and treating psychiatric and neurological conditions that impact visual cognition, including schizophrenia, autism, Alzheimer's disease, and generalized anxiety disorder.