
Philippe Albouy

Disciplines: Cognitive Science, Neuroscience
Title and home institution: Associate professor at Université Laval, Quebec, Canada.
Category of Fellowship: Annual Residency
Chair: Language, communication and the brain – ILCB/Iméra Chair
Research program: Interdisciplinary Explorations
Residency length: September 2025 – July 2026
Currently a resident fellow at Iméra

Research project

Multi-Modal and Multi-Scale Dynamics of Speech/Music Perception and Memory

Project abstract

Language and music represent the most important and cognitively complex uses of sound by the human nervous system, serving as essential elements for communication (Zatorre, Belin et al. 2002). Numerous studies have investigated whether the encoding and memory of music and language rely on separate or overlapping mechanisms (Norman-Haignere, Kanwisher et al. 2015, Flinker, Doyle et al. 2019, Albouy, Benjamin et al. 2020, Giroud, Trebuchon et al. 2020). One proposed mechanism for their separate processing is hemispheric lateralization, where language is predominantly processed by the left hemisphere, and music by the right hemisphere (Zatorre and Belin 2001). However, the precise mechanisms for this specialization are still unclear for several reasons:

Acoustic Cues vs. Categories 


First, it is unclear whether the brain processes speech and music based on domain-general acoustic cues or domain-specific categories. Our prior data (Albouy, Benjamin et al. 2020) suggest that the left and right auditory cortices are sensitive to different ranges of spectrotemporal modulations, favoring an acoustical explanation, while others emphasize cognitively defined categories (Norman-Haignere, Kanwisher et al. 2015). This debate has yet to be resolved, in part because few experiments use the same stimuli and data to test the predictions of each model.
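
For readers less familiar with spectrotemporal modulations, the sketch below illustrates one common way to estimate them: taking a 2D Fourier transform of a log-magnitude spectrogram, which yields the joint spectral and temporal modulation content of a sound. This is a minimal Python (numpy/scipy) illustration under generic assumptions, not the analysis pipeline of the cited studies; the input waveform is a synthetic placeholder.

    # Minimal sketch: spectrotemporal modulation spectrum of a sound.
    # The waveform below is a synthetic placeholder, not real stimulus data.
    import numpy as np
    from scipy.signal import spectrogram

    def modulation_spectrum(wav, fs, nperseg=512, noverlap=384):
        # 1) Time-frequency representation (log-magnitude spectrogram).
        f, t, sxx = spectrogram(wav, fs=fs, nperseg=nperseg, noverlap=noverlap)
        log_sxx = np.log(sxx + 1e-10)
        # 2) The 2D Fourier transform of the spectrogram gives the joint
        #    spectral (cycles/Hz) and temporal (Hz) modulation content.
        mps = np.abs(np.fft.fftshift(np.fft.fft2(log_sxx)))
        spec_mod = np.fft.fftshift(np.fft.fftfreq(len(f), d=f[1] - f[0]))  # cycles/Hz
        temp_mod = np.fft.fftshift(np.fft.fftfreq(len(t), d=t[1] - t[0]))  # Hz
        return spec_mod, temp_mod, mps

    # Example with a synthetic input (1 s of noise at 16 kHz).
    fs = 16000
    wav = np.random.randn(fs)
    spec_mod, temp_mod, mps = modulation_spectrum(wav, fs)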

Artificial vs. Realistic Stimuli


The second limitation is the reliance on artificial stimuli in previous studies, which may not accurately reflect how the brain processes complex sounds in real-life situations, where speech and music often occur together. This limits the generalizability of findings and may oversimplify our models.

Learning and Memory – Single unit vs LFP data


The third limitation relates to the types of electrophysiological signals used to study perception, learning and memory for speech and music in the human brain. So far, electrophysiological research has primarily focused on recording local field potentials (LFPs) from distributed cortical networks (EEG/MEG) and deeper brain regions (SEEG). However, this measure is relatively insensitive to neuronal firing activity (single-unit activity) (Agopyan-Miu, Merricks et al. 2023). While several studies have proposed that neural firing is positively correlated with the amplitude of high-frequency oscillations (gamma, > 40 Hz) (Buzsaki, Anastassiou et al. 2012), recent work suggests that LFPs and neural firing (single units) might instead carry complementary information (Agopyan-Miu, Merricks et al. 2023). In the context of auditory learning and memory, our understanding of the relationship between single-unit and LFP activity is thus limited, which constrains our ability to fully comprehend how auditory sequences (such as speech and music) are learned and stored in the human brain.
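
To make this comparison concrete, the sketch below shows one simple way to test the proposed gamma-firing relationship: band-pass the LFP in the high-gamma range, take its Hilbert amplitude envelope, and correlate it with binned single-unit spike counts. This is a minimal Python (numpy/scipy) illustration with synthetic placeholder inputs, not the analysis planned for the project.

    # Minimal sketch: correlate high-gamma LFP amplitude with binned
    # single-unit spike counts. `lfp` and `spike_times` are placeholders.
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert
    from scipy.stats import pearsonr

    def gamma_spike_correlation(lfp, fs, spike_times, band=(40.0, 120.0), bin_s=0.05):
        # High-gamma band-pass filter and Hilbert amplitude envelope.
        b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
        envelope = np.abs(hilbert(filtfilt(b, a, lfp)))
        # Average the envelope and count spikes in matching time bins.
        n_bins = int(len(lfp) / fs / bin_s)
        edges = np.arange(n_bins + 1) * bin_s
        env_binned = [envelope[int(round(e0 * fs)):int(round(e1 * fs))].mean()
                      for e0, e1 in zip(edges[:-1], edges[1:])]
        counts, _ = np.histogram(spike_times, bins=edges)
        # A strong positive r supports the classical gamma/firing link;
        # a weak r is consistent with complementary information.
        return pearsonr(env_binned, counts)

    # Example with synthetic data (10 s at 1 kHz, 200 random spike times).
    fs = 1000.0
    lfp = np.random.randn(int(10 * fs))
    spike_times = np.sort(np.random.uniform(0, 10, 200))
    r, p = gamma_spike_correlation(lfp, fs, spike_times)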

Objectives of the current project


With two complementary research axes, we will address these limitations using a unified experimental approach to study the perception and memory of speech and music. In Axis I, we will investigate the perception of speech and music by employing naturalistic stimuli and extracting their spectrotemporal acoustic modulations and linguistic/musical categorical features over time. To obtain a complete picture of the brain mechanisms underlying these features, we will conduct a multi-institutional, multi-modal project to acquire electrophysiological recordings of fast oscillatory processes, both intracranially in epileptic patients and with OPM-MEG for full cortical mapping in healthy individuals. We will analyze electrophysiological signals recorded through stereo-electroencephalography (SEEG) at CHU de Québec-Université Laval/CERVO (my home institutions) and OPM-MEG data in healthy adults recorded at the Institut de Neurosciences des Systèmes (UMR1106, AMU/Inserm) in collaboration with Dr. Benjamin Morillon and Dr. Christian Bénar, thereby providing a unique and complementary dataset.
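
One common way to relate continuous neural recordings (SEEG or MEG) to time-resolved acoustic and categorical stimulus features is a lagged linear encoding model fit with ridge regression. The sketch below is a minimal Python (numpy) illustration under generic assumptions; the feature matrix, lag range and regularization value are hypothetical placeholders, not the project's actual pipeline.

    # Minimal sketch: lagged linear encoding model (ridge regression)
    # relating stimulus features X (time x features) to a neural signal y.
    import numpy as np

    def lagged_design(X, max_lag):
        # Stack time-lagged copies of the features (lags 0 .. max_lag samples).
        lagged = [np.roll(X, lag, axis=0) for lag in range(max_lag + 1)]
        for lag, arr in enumerate(lagged):
            arr[:lag] = 0.0  # zero out samples rolled in from the end
        return np.hstack(lagged)

    def fit_ridge(X, y, alpha=1.0):
        # Closed-form ridge solution: w = (X'X + alpha*I)^-1 X'y.
        return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ y)

    # Example: 60 s at 100 Hz, 8 stimulus features, lags up to 300 ms.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((6000, 8))
    y = X @ rng.standard_normal(8) + rng.standard_normal(6000)
    Xl = lagged_design(X, max_lag=30)
    w = fit_ridge(Xl, y)
    pred = Xl @ w  # predicted neural response from the stimulus features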

In Axis II, we will investigate the multi-scale mechanisms supporting learning and memory of auditory sequences in the human hippocampus using hybrid macro/micro SEEG (data that will be recorded at CHU de Québec-Université Laval/CERVO during my stay in Marseille). More specifically, we aim to study the role of the hippocampus in associative learning and auditory working memory (WM), a cognitive function supporting the short-term storage, processing and manipulation of recent information (or information retrieved from long-term memory) (D’Esposito 2007, Cowan 2008, Baddeley 2010), with an impact on daily tasks, autonomy and quality of life. By examining the interaction between LFPs and the activity of specific “concept cells” (which fire once an association is learned), we aim to uncover how auditory memories are retained at the neuronal level.
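
As one standard illustration of how spike/LFP interactions can be quantified, the sketch below computes the phase-locking of single-unit spikes to the theta-band LFP phase (Hilbert phase, resultant vector length). This is a generic Python (numpy/scipy) example with synthetic placeholder data, not the specific analysis the project will apply to hippocampal recordings.

    # Minimal sketch: phase-locking of spikes to the theta-band LFP phase.
    # `lfp` and `spike_times` are synthetic placeholders.
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def spike_field_locking(lfp, fs, spike_times, band=(4.0, 8.0)):
        # Theta-band phase from the Hilbert transform of the filtered LFP.
        b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
        phase = np.angle(hilbert(filtfilt(b, a, lfp)))
        # LFP phase at each spike time.
        idx = np.clip((np.asarray(spike_times) * fs).astype(int), 0, len(lfp) - 1)
        spike_phases = phase[idx]
        # Resultant vector length: 0 = no locking, 1 = perfect locking.
        return np.abs(np.mean(np.exp(1j * spike_phases)))

    # Example with synthetic data (20 s at 1 kHz, 300 random spike times).
    fs = 1000.0
    lfp = np.random.randn(int(20 * fs))
    spike_times = np.random.uniform(0, 20, 300)
    plv = spike_field_locking(lfp, fs, spike_times)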

This research offers a comprehensive approach by integrating multimodal and multiscale neural data to explore the perception and memory of speech and music, advancing our global understanding of auditory cognition.

Biography

I am an associate professor at the School of Psychology of Université Laval, an FRQ-S Junior 2 Scholar (equivalent to a research chair) and a regular researcher at the CERVO Brain Research Center. I obtained a PhD in neuroscience in 2014 from Université Lyon 1 (France) and then joined the Montreal Neurological Institute of McGill University. My research aims to improve our understanding of the cognitive and neural mechanisms underlying how humans perceive, learn and use complex sounds such as speech and music. We have characterized the neural networks and oscillatory dynamics supporting the different processing stages of human auditory (speech, music) working memory. We also develop innovative brain stimulation procedures that modulate brain oscillations in real time, during task performance, to causally improve human working memory. We recently developed the EEGNet platform, which aims to provide access to standardized electroencephalography data and analysis tools for the international community. Finally, since 2021, in collaboration with CHU de Québec-Université Laval, I have developed an intracranial stereo-electroencephalography (SEEG) platform where I lead SEEG research projects as principal investigator. I have initiated national and international (France, Spain, Germany, China) collaborations that I hope will make Quebec City an emerging center for human SEEG research.

Calls for applications

The research residencies offered by Iméra, the Institute for Advanced Study (IEA) of Aix Marseille Université, are intended for established researchers – academics, scientists and/or artists. These research residencies are distributed across four programs (“Arts & Sciences: Undisciplined Knowledge”, “Interdisciplinary Explorations”, “Mediterranean” and “Necessary Utopias”).