The processing of audio-visual speech: empirical and neural bases.
about
A neural basis for interindividual differences in the McGurk effect, a multisensory speech illusion.
Cochlear implantation (CI) for prelingual deafness: the relevance of studies of brain organization and the role of first language acquisition in considering outcome success.
Rapid, generalized adaptation to asynchronous audiovisual speech
Cue integration in categorical tasks: insights from audio-visual speech perception.
Lip-reading aids word recognition most in moderate noise: a Bayesian explanation using high-dimensional feature space
From Mimicry to Language: A Neuroanatomically Based Evolutionary Model of the Emergence of Vocal Language.
Multistability in perception: binding sensory modalities, an overview
Experiments on Auditory-Visual Perception of Sentences by Users of Unilateral, Bimodal, and Bilateral Cochlear Implants
Influences of selective adaptation on perception of audiovisual speech.
The Neural Basis of Speech Perception through Lipreading and Manual Cues: Evidence from Deaf Native Users of Cued Speech.
Timing in audiovisual speech perception: A mini review and new psychophysical data.
Visual Cortical Entrainment to Motion and Categorical Speech Features during Silent Lipreading
Integration of Partial Information Within and Across Modalities: Contributions to Spoken and Written Sentence Recognition
Can you hear me yet? An intracranial investigation of speech and non-speech audiovisual interactions in human cortex.
Auditory object perception: A neurobiological model and prospective review.
Visual speech discrimination and identification of natural and synthetic consonant stimuli.
Dissociated roles of the inferior frontal gyrus and superior temporal sulcus in audiovisual processing: top-down and bottom-up mismatch detection.
Neural pathways for visual speech perception.
Nonnative audiovisual speech perception in noise: dissociable effects of the speaker and listener.
Audiovisual speech perception at various presentation levels in Mandarin-speaking adults with cochlear implants
The elicitation of audiovisual steady-state responses: multi-sensory signal congruity and phase effects.
Audio-visual speech perception: a developmental ERP investigation
How does visual language affect crossmodal plasticity and cochlear implant success?
Temporal context in speech processing and attentional stream selection: a behavioral and neural perspective.
Brain responses and looking behavior during audiovisual speech integration in infants predict auditory speech comprehension in the second year of life.
Speech through ears and eyes: interfacing the senses with the supramodal brain.
The Ease of Language Understanding (ELU) model: theoretical, empirical, and clinical advances.
Monkey lipsmacking develops like the human speech rhythm.
Audio-visual onset differences are used to determine syllable identity for ambiguous audio-visual stimulus pairs.
Psychophysics of the McGurk and other audiovisual speech integration effects.
Dynamic changes in superior temporal sulcus connectivity during perception of noisy audiovisual speech.
Neural development of networks for audiovisual speech comprehension.
From acoustic segmentation to language processing: evidence from optical imaging.
Dynamic, rhythmic facial expressions and the superior temporal sulcus of macaque monkeys: implications for the evolution of audiovisual speech.
Audiovisual integration of speech in a bistable illusion.
Two cortical mechanisms support the integration of visual and auditory speech: a hypothesis and preliminary data.
fMR-adaptation indicates selectivity to audiovisual content congruency in distributed clusters in human superior temporal cortex
Quantified acoustic-optical speech signal incongruity identifies cortical sites of audiovisual speech processing.
The natural statistics of audiovisual speech
Multimodal integration of carbon dioxide and other sensory cues drives mosquito attraction to humans.
The processing of audio-visual speech: empirical and neural bases.
description
2008 paper
@nan
scientific article published in March 2008
@hyw
scientific article published in March 2008
@hy
2008 paper
@ja
2008 paper
@yue
2008 paper
@zh-hant
2008 paper
@zh-hk
2008 paper
@zh-mo
2008 paper
@zh-tw
2008 paper
@wuu
name
The processing of audio-visual speech: empirical and neural bases
@nl
The processing of audio-visual speech: empirical and neural bases.
@ast
The processing of audio-visual speech: empirical and neural bases.
@en
type
label
The processing of audio-visual speech: empirical and neural bases
@nl
The processing of audio-visual speech: empirical and neural bases.
@ast
The processing of audio-visual speech: empirical and neural bases.
@en
prefLabel
The processing of audio-visual speech: empirical and neural bases
@nl
The processing of audio-visual speech: empirical and neural bases.
@ast
The processing of audio-visual speech: empirical and neural bases.
@en
P1476 (title)
The processing of audio-visual speech: empirical and neural bases.
@en
P2093 (author name string)
Ruth Campbell
P2860 (cites work)
P304 (pages)
P3181 (OpenCitations bibliographic resource ID)
P356 (DOI)
10.1098/RSTB.2007.2155
P407 (language of work)
P577 (publication date)
2008-03-01T00:00:00Z