Steady-spectrum contexts and perceptual compensation for reverberation in speech identification

Full text not archived in this repository.

It is advisable to refer to the publisher's version if you intend to cite from this work.

Watkins, A. J. and Makin, S. J. (2007) Steady-spectrum contexts and perceptual compensation for reverberation in speech identification. Journal of the Acoustical Society of America, 121(1), pp. 257-266. ISSN 0001-4966. doi: 10.1121/1.2387134

Abstract/Summary

Perceptual compensation for reverberation was measured by embedding test words in contexts that were either spoken phrases or processed versions of this speech. The processing gave steady-spectrum contexts with no changes in the shape of the short-term spectral envelope over time, but with fluctuations in the temporal envelope. Test words were from a continuum between "sir" and "stir." When the amount of reverberation in test words was increased, to a level above the amount in the context, they sounded more like "sir." However, when the amount of reverberation in the context was also increased, to the level present in the test word, there was perceptual compensation in some conditions so that test words sounded more like "stir" again. Experiments here found compensation with speech contexts and with some steady-spectrum contexts, indicating that fluctuations in the context's temporal envelope can be sufficient for compensation. Other results suggest that the effectiveness of speech contexts is partly due to the narrow-band "frequency-channels" of the auditory periphery, where temporal-envelope fluctuations can be more pronounced than they are in the sound's broadband temporal envelope. Further results indicate that for compensation to influence speech, the context needs to be in a broad range of frequency channels. (c) 2007 Acoustical Society of America.
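The abstract does not spell out how the steady-spectrum contexts or the graded amounts of reverberation were produced. The Python sketch below is only one plausible illustration of that kind of processing, not the authors' method: it shapes random-phase noise with the long-term magnitude spectrum of a context signal (so the short-term spectral envelope no longer changes over time), re-imposes the signal's smoothed broadband temporal envelope, and then convolves the result with a synthetic room impulse response. The sample rate, the placeholder "speech" signal, the 50 Hz envelope cutoff, and the exponentially decaying impulse response are all assumptions made for the example.

import numpy as np
from scipy.signal import hilbert, butter, filtfilt, fftconvolve

fs = 16000                      # sample rate (Hz); placeholder value
rng = np.random.default_rng(0)

# Placeholder "speech" context: in the study this would be a recorded spoken
# phrase; here amplitude-modulated noise stands in so the script runs end to end.
dur = 2.0
n = int(dur * fs)
t = np.arange(n) / fs
speech = rng.standard_normal(n) * (0.5 + 0.5 * np.sin(2 * np.pi * 4 * t) ** 2)

# Steady-spectrum carrier: random-phase noise given the long-term magnitude
# spectrum of the signal, so the short-term spectral envelope is constant over time.
speech_mag = np.abs(np.fft.rfft(speech))
noise_spec = np.fft.rfft(rng.standard_normal(n))
carrier = np.fft.irfft(noise_spec / (np.abs(noise_spec) + 1e-12) * speech_mag, n=n)
carrier /= np.sqrt(np.mean(carrier ** 2)) + 1e-12

# Broadband temporal envelope of the signal, low-pass smoothed, then imposed on
# the carrier to restore the temporal-envelope fluctuations.
env = np.abs(hilbert(speech))
b, a = butter(4, 50, btype="low", fs=fs)   # 50 Hz cutoff is an assumption
env = np.clip(filtfilt(b, a, env), 0, None)
steady_context = carrier * env

# "Amount of reverberation": convolution with a synthetic room impulse response
# (exponentially decaying noise). The study's rooms are not described here, so
# this impulse response is purely illustrative.
t60 = 0.4                                   # reverberation time (s), assumed
rir_t = np.arange(int(0.5 * fs)) / fs
rir = rng.standard_normal(rir_t.size) * np.exp(-6.91 * rir_t / t60)
rir /= np.sqrt(np.sum(rir ** 2))
reverberant_context = fftconvolve(steady_context, rir)[:n]

The random-phase spectral shaping plus envelope modulation is one way to match the description of a context with no change in the shape of the short-term spectral envelope but with speech-like temporal-envelope fluctuations; examining the channel-specific envelopes discussed at the end of the abstract would additionally require an auditory (e.g., gammatone) filterbank analysis, which is not shown here.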


Item Type: Article
URI: https://reading-clone.eprints-hosting.org/id/eprint/13824
Identification Number/DOI: 10.1121/1.2387134
Refereed: Yes
Divisions: Life Sciences > School of Psychology and Clinical Language Sciences
Uncontrolled Keywords: TEMPORAL CUES, RECOGNITION, MODELS, SOUND, TIME
