Major
Neuroscience
Anticipated Graduation Year
2022
Access Type
Open Access
Abstract
The intersensory redundancy hypothesis suggests that synchronized, multimodal information recruits infant attention, contributing to better processing and discrimination of stimulus features perceived through both modalities (Bahrick & Lickliter, 2000). This amodal information (e.g., rhythm or rate) is hypothesized to be more readily processed than information unique to a single modality (e.g., voice or appearance). In the current study, we are investigating the impact of intersensory redundancy on infant face processing. Five- and 12-month-old infants are currently being recruited for a study conducted on the online platform Lookit. These results will increase our understanding of how infants attend to multimodal faces, which are prevalent in their everyday environment.
Faculty Mentors & Instructors
Maggie Guy, Ph.D., Assistant Professor, Department of Psychology; Asli Bursalioglu, M.A., Graduate Student, Department of Psychology
Creative Commons License
This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 License.
Infant Face Processing in a Multimodal Context