about
A neural basis for interindividual differences in the McGurk effect, a multisensory speech illusion.
Perception matches selectivity in the human anterior color center
Electrocorticography Reveals Enhanced Visual Cortex Responses to Visual Speech.
Auditory cortex activation to natural speech and simulated cochlear implant speech measured with functional near-infrared spectroscopy.
See me, hear me, touch me: multisensory integration in lateral occipital-temporal cortex.
Neuroimaging with near-infrared spectroscopy demonstrates speech-evoked activity in the auditory cortex of deaf children following cochlear implantation
Surface area accounts for the relation of gray matter volume to reading-related skills and history of dyslexia
Dissociation of face-selective cortical responses by attention.
fMRI-Guided transcranial magnetic stimulation reveals that the superior temporal sulcus is a cortical locus of the McGurk effect
Receptive language organization in high-functioning autism
Statistical criteria in FMRI studies of multisensory integration.
Touch, sound and vision in human superior temporal sulcus.
FMRI group analysis combining effect estimates and their variances
Functional imaging of human crossmodal identification and object recognition.
Temporal lobe white matter asymmetry and language laterality in epilepsy patients.
The developmental trajectory of brain-scalp distance from birth through childhood: implications for functional neuroimaging
Electrocorticography links human temporoparietal junction to visual perception.
Multisensory speech perception without the left superior temporal sulcus.
A Causal Inference Model Explains Perception of the McGurk Effect and Other Incongruent Audiovisual Speech
A new method for improving functional-to-structural MRI alignment using local Pearson correlation.
Perceiving electrical stimulation of identified human visual areas
Social perception in autism spectrum disorders: impaired category selectivity for dynamic but not static images in ventral temporal cortex.
Is a single 'hub', with lots of spokes, an accurate description of the neural architecture of action semantics?: Comment on "Action semantics: A unifying conceptual framework for the selective use of multimodal and modality-specific object knowledge"
Mouth and Voice: A Relationship between Visual and Auditory Preference in the Human Superior Temporal Sulcus
Causal inference of asynchronous audiovisual speech.
Computer-controlled electrical stimulation for quantitative mapping of human cortical function.
Distributed representation of single touches in somatosensory and visual cortex.
Graded effects of spatial and featural attention on human area MT and associated motion processing areas.
Saturation in Phosphene Size with Increasing Current Levels Delivered to Human Visual Cortex.
Human MST but not MT responds to tactile stimulation.
Neural basis of visually guided head movements studied with fMRI.
Parallel visual motion processing streams for manipulable objects and human movements.
Relationships between essential cortical language sites and subcortical pathways.
Sound enhances touch perception.
Frontal cortex selects representations of the talker's mouth to aid in speech perception.
Integration of auditory and visual information about objects in superior temporal sulcus.
A comparison of visual and auditory motion processing in human cerebral cortex.
Continued access to investigational brain implants.
Published estimates of group differences in multisensory integration are inflated
A causal inference explanation for enhancement of multisensory integration by co-articulation
P50 (author)
Q24634949-AFAC28F2-C4F1-49A0-958D-D0B8DC78A7C2
Q28267838-D261A7A4-3878-4FA4-A8A1-090E026CE3D8
Q30370871-DFC26609-57C6-401B-B1AB-6B11B7661776
Q30417670-3AC788B2-976F-41B6-A85E-9086917D7CC1
Q30458106-703B3963-C55A-4E8C-817B-303C52358E79
Q30472515-4202F6C4-0BFE-473F-9207-B46CABB74EB9
Q30473245-DC918B67-8D43-44A9-B2F1-CA6AEA62B016
Q30476653-EC42F6B3-FED1-40C3-ABD7-377CF733CF21
Q30479959-97FCFFAB-7A0A-4A93-A045-F29A3F327819
Q30480666-AA0ECBD7-C991-4AFB-AA7F-7C394FE5D347
Q30482569-BD9BDB3F-CF5D-4B36-A9FA-0909338EDFEE
Q30488624-B33B4505-DAD5-4378-88D7-F30FDDE10495
Q30522147-FE2D5553-FFAF-4CC1-89EB-8555B3C0A166
Q33219847-5B9DD2AD-B5C6-4074-9724-53C00712C72D
Q33637314-61F86D4D-E478-465E-A09B-4D12A4D8DBC9
Q34034727-C1C4B49B-7783-4451-9AFE-5DD359EC2B02
Q36067230-893E77E3-1DAD-4608-89D8-BCA6BDD27B48
Q36128307-3CADE98F-E692-4C3A-93DD-AA48D2A582D1
Q36282385-C7EA75B4-8F4F-4C4C-9C47-C2332A9EF7B2
Q37113940-E6A91C26-AD91-4B8D-BA74-DD63CB278A43
Q37146860-DBCD451E-2C8F-4D93-8ED2-F2C8917F5D00
Q37390212-51F92118-3895-437B-8EBA-1AA4B48A9B1C
Q38437329-85C5C42C-52A7-4033-8B58-92CE1A9EE84A
Q42315876-6A469D34-7C0D-4121-BF03-FF7DF5540D8D
Q42908979-53FF6C5A-1D83-499F-8BAE-4863B5A0C047
Q43553614-4BD900F7-F62F-43D1-A598-BE483B4DD5B7
Q45146253-169F8060-F1F0-4BB4-9696-746EA6A92300
Q46152763-533FB36D-5580-4D1A-B41E-8431FA8131CD
Q47203846-5DD84A34-B4C9-4C23-9999-9C50C4D66D59
Q48087796-1A6506C8-DB25-4C6B-A7BA-A6FB18A41ABE
Q48377206-FFA84D38-2964-485F-8A38-64B3214B859F
Q48640278-0E351E94-4F45-4DBA-9F11-C5E4EBB85DA5
Q48656445-D6F29FD2-57EE-4609-AC0E-04030C7A22C1
Q48693472-2D6C6651-4105-4B3E-9A6D-7F5430ABB42F
Q50334458-F3E02127-7734-4713-8630-E266C61747DC
Q51942773-3A5A5B16-1208-4DEF-A996-E79982DFCBD9
Q52024585-4113C239-1C58-4A8A-B4EF-FCA52F87F3D5
Q53403691-203FC516-0D95-45B3-A5F9-09DF7772E126
Q58720372-242B1821-C7B7-4044-9753-081813487687
Q60300445-B0FA7132-0349-4DA4-AD99-D877DF0E1A46
description
hulumtues @sq (Albanian for "researcher")
researcher @en
wetenschapper @nl (Dutch for "scientist")
հետազոտող @hy (Armenian for "researcher")
name
Michael S Beauchamp @ast
Michael S Beauchamp @en
Michael S Beauchamp @es
Michael S Beauchamp @nl
Michael S Beauchamp @sl
type
label
Michael S Beauchamp @ast
Michael S Beauchamp @en
Michael S Beauchamp @es
Michael S Beauchamp @nl
Michael S Beauchamp @sl
prefLabel
Michael S Beauchamp @ast
Michael S Beauchamp @en
Michael S Beauchamp @es
Michael S Beauchamp @nl
Michael S Beauchamp @sl
P106 (occupation)
P21 (sex or gender)
P31 (instance of)
P496 (ORCID iD)
0000-0002-7599-9934
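The bare property IDs in this record follow the Wikidata vocabulary (P50 = author, P106 = occupation, P21 = sex or gender, P31 = instance of, P496 = ORCID iD). As an illustrative sketch only, not part of the record itself, the snippet below shows one way the ORCID iD above could be resolved against the public Wikidata Query Service to list the works that carry a P50 statement for this person. The endpoint URL is the standard WDQS address; the function name list_works and the User-Agent string are hypothetical choices made for this example.

import requests

# Public Wikidata Query Service endpoint (standard WDQS address).
ENDPOINT = "https://query.wikidata.org/sparql"

# Find the item whose ORCID iD (P496) matches the record above,
# then list every work naming that item as an author (P50).
QUERY = """
SELECT ?work ?workLabel WHERE {
  ?author wdt:P496 "0000-0002-7599-9934" .
  ?work   wdt:P50  ?author .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
"""

def list_works():
    # WDQS returns SPARQL JSON results when format=json is requested.
    resp = requests.get(
        ENDPOINT,
        params={"query": QUERY, "format": "json"},
        headers={"User-Agent": "orcid-works-sketch/0.1 (example)"},
    )
    resp.raise_for_status()
    for row in resp.json()["results"]["bindings"]:
        print(row["work"]["value"], "-", row["workLabel"]["value"])

if __name__ == "__main__":
    list_works()

Each returned ?work corresponds to one of the statement identifiers listed under P50 above (the Q-prefixed part of each identifier is the work's item ID).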