YIN, a fundamental frequency estimator for speech and music.
about
Electrocorticographic Activation within Human Auditory Cortex during Dialog-Based Language and Cognitive Testing
Vocal learning in a social mammal: Demonstrated by isolation and playback experiments in bats.
Social Memory Formation Rapidly and Differentially Affects the Motivation and Performance of Vocal Communication Signals in the Bengalese Finch (Lonchura striata var. domestica).
Automatic reconstruction of physiological gestures used in a model of birdsong production.
Predicting plasticity: acute context-dependent changes to vocal performance predict long-term age-dependent changes
Vocal motor changes beyond the sensitive period for song plasticity.
An automated procedure for evaluating song imitation.
Social context-induced song variation affects female behavior and gene expression.
Lesions of an avian basal ganglia circuit prevent context-dependent changes to song variability
The effect of instrumental timbre on interval discrimination.
Uncovering phenotypes of poor-pitch singing: the Sung Performance Battery (SPB)
Shifting Fundamental Frequency in Simulated Electric-Acoustic Listening: Effects of F0 Variation.
Fully Automated Assessment of the Severity of Parkinson's Disease from Speech.
Improving Speaker Recognition by Biometric Voice Deconstruction
Abnormal intelligibility of speech in competing speech and in noise in a frequency region where audiometric thresholds are near-normal for hearing-impaired listeners.
The perception of speech modulation cues in lexical tones is guided by early language-specific experience
Robust fundamental frequency estimation in sustained vowels: detailed algorithmic comparisons and information fusion with adaptive Kalman filtering.
Measuring ensemble interdependence in a string quartet through analysis of multidimensional performance data.
The influence of music-elicited emotions and relative pitch on absolute pitch memory for familiar melodies.
A multimodal emotion detection system during human-robot interaction
Temporal-envelope reconstruction for hearing-impaired listeners
On the selection of non-invasive methods based on speech analysis oriented to automatic Alzheimer disease diagnosis.
Reward-based learning for virtual neurorobotics through emotional speech processing
Achieving electric-acoustic benefit with a modulated tone.
Shifting fundamental frequency in simulated electric-acoustic listening.
Vocal accuracy and neural plasticity following micromelody-discrimination training
An avian basal ganglia-forebrain circuit contributes differentially to syllable versus sequence variability of adult Bengalese finch song.
Low-frequency speech cues and simulated electric-acoustic hearing.
Speech identification based on temporal fine structure cues.
Perceptual coherence in listeners having longstanding childhood hearing losses, listeners with adult-onset hearing losses, and listeners with normal hearing
Singing ability is rooted in vocal-motor control of pitch
Learning the dynamical system behind sensory data.
A corroborative study on improving pitch determination by time-frequency cepstrum decomposition using wavelets.
Rising tones and rustling noises: Metaphors in gestural depictions of sounds.
A fast and accurate frequency estimation algorithm for sinusoidal signal with harmonic components.
Role of upper airway dimensions in snore production: acoustical and perceptual findings.
Catecholaminergic contributions to vocal communication signals.
Speech perception with F0mod, a cochlear implant pitch coding strategy.
A novel cost function to estimate parameters of oscillatory biochemical systems.
Crowd vocal learning induces vocal dialects in bats: Playback of conspecifics shapes fundamental frequency usage by pups.
P2860
description
2002 paper
@nan
scientific article published in April 2002
@hyw
scientific article published in April 2002
@hy
2002 paper
@ja
2002 paper
@yue
2002 paper
@zh-hant
2002 paper
@zh-hk
2002 paper
@zh-mo
2002 paper
@zh-tw
2002 paper
@wuu
name
YIN, a fundamental frequency estimator for speech and music.
@ast
YIN, a fundamental frequency estimator for speech and music.
@en
type
label
YIN, a fundamental frequency estimator for speech and music.
@ast
YIN, a fundamental frequency estimator for speech and music.
@en
prefLabel
YIN, a fundamental frequency estimator for speech and music.
@ast
YIN, a fundamental frequency estimator for speech and music.
@en
P1476
YIN, a fundamental frequency estimator for speech and music.
@en
P304
P356
10.1121/1.1458024
P577
2002-04-01T00:00:00Z
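The record above catalogs the 2002 YIN paper (DOI 10.1121/1.1458024), which introduced a fundamental frequency estimator built from a squared-difference function, a cumulative mean normalized difference, and an absolute threshold. For context, here is a minimal sketch of those core steps in NumPy. The function name `yin_f0` and its parameter names are illustrative, not from the paper, and the paper's later refinement steps (parabolic interpolation, best local estimate) are omitted.

```python
import numpy as np

def yin_f0(x, sr, fmin=50.0, fmax=500.0, threshold=0.1):
    """Sketch of YIN's core steps (illustrative names, refinements omitted)."""
    x = np.asarray(x, dtype=float)
    tau_min = int(sr / fmax)   # smallest lag to search
    tau_max = int(sr / fmin)   # largest lag to search
    W = len(x) - tau_max       # analysis window length

    # Difference function: d(tau) = sum_t (x[t] - x[t + tau])^2
    d = np.array([np.sum((x[:W] - x[tau:W + tau]) ** 2)
                  for tau in range(tau_max + 1)])

    # Cumulative mean normalized difference d'(tau); d'(0) = 1 by definition
    dprime = np.ones_like(d)
    dprime[1:] = d[1:] * np.arange(1, tau_max + 1) / np.maximum(np.cumsum(d[1:]), 1e-12)

    # Absolute threshold: take the first dip below the threshold,
    # then descend to that dip's local minimum
    tau = tau_min
    while tau <= tau_max:
        if dprime[tau] < threshold:
            while tau + 1 <= tau_max and dprime[tau + 1] < dprime[tau]:
                tau += 1
            return sr / tau
        tau += 1

    # No dip below threshold: fall back to the global minimum lag
    return sr / (tau_min + int(np.argmin(dprime[tau_min:tau_max + 1])))
```

On a clean sine this lands within a few hertz of the true frequency; the residual error comes from the integer-lag grid, which the paper removes with parabolic interpolation around the selected dip.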