TITLE

Evaluation of an Audiovisual-FM System: Investigating the Interaction Between Illumination Level and a Talker's Skin Color on Speech-Reading Performance

AUTHOR(S)
Gagné, Jean-Pierre; Laplante-Lévesque, Ariane; Labelle, Maude; Doucet, Katrine; Potvin, Marie-Christine
PUB. DATE
June 2006
SOURCE
Journal of Speech, Language & Hearing Research;Jun2006, Vol. 49 Issue 3, p628
SOURCE TYPE
Academic Journal
DOC. TYPE
Article
ABSTRACT
A program designed to evaluate the benefits of an audiovisual-frequency modulated (FM) system led to some questions concerning the effects of illumination level and a talker's skin color on speech-reading performance. To address those issues, the speech of a Caucasian female was videotaped under 2 conditions: a light skin color condition and a dark skin color condition. For the latter condition, makeup was applied to the talker's face. For both skin color conditions, the talker was recorded while speaking sentences under 7 different levels of illumination: 2, 3, 4, 16, 60, 256, and 600 footcandles (fc). Fifteen participants completed the speech perception task in a visual-only modality. The results revealed a significant interaction of illumination level and skin color. For the light skin color condition, speech-reading performance improved systematically as the illumination level increased from 3 to 16 fc. For the dark skin color condition, no differences in speech-reading performance were observed between the 2-fc and the 3-fc conditions. However, a large improvement in speech-reading performance was observed as the illumination level increased from 4 fc to 16 fc. It is speculated that in addition to an overall effect of illumination level, the contrast in luminance at the level of the talker's face has an effect on speech-reading performance.
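For readers more accustomed to SI units, the illumination levels reported above can be converted from footcandles to lux (1 fc ≈ 10.764 lx). The short Python sketch below performs this conversion for the seven levels used in the study; the variable names are illustrative and not drawn from the article.

    # Convert the study's illumination levels from footcandles (fc) to lux (lx).
    # 1 fc = 1 lumen per square foot, which is approximately 10.764 lx.
    FC_TO_LUX = 10.764

    illumination_fc = [2, 3, 4, 16, 60, 256, 600]  # levels reported in the abstract

    for fc in illumination_fc:
        print(f"{fc:>4} fc ~= {fc * FC_TO_LUX:7.1f} lx")
    # For example, 16 fc is about 172.2 lx and 600 fc is about 6458.4 lx.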
ACCESSION #
21659940

 

Related Articles

  • Audiovisual Investigation of the Loudness-Effort Effect for Speech and Nonspeech Events. Rosenblum, Lawrence D.; Fowler, Carol A. // Journal of Experimental Psychology: Human Perception & Performance;Nov91, Vol. 17 Issue 4, p976

    A controversial claim in the literature on speech perception is that loudness judgments of speech sounds are based on perceived vocal effort, not on acoustic intensity, which is the presumed basis of loudness judgments of nonspeech sounds. Researchers found that sustained vowels produced with...

  • Can you McGurk yourself? Self-face and self-voice in audiovisual speech. Aruffo, Christopher; Shore, David // Psychonomic Bulletin & Review;Feb2012, Vol. 19 Issue 1, p66 

    We are constantly exposed to our own face and voice, and we identify our own faces and voices as familiar. However, the influence of self-identity upon self-speech perception is still uncertain. Speech perception is a synthesis of both auditory and visual inputs; although we hear our own voice...

  • Masking of speech in young and elderly listeners with hearing loss. Souza, Pamela E.; Turner, Christopher W. // Journal of Speech & Hearing Research;Jun94, Vol. 37 Issue 3, p655 

    Examines the contributions of various properties of background noise to the speech recognition difficulties experienced by young and elderly listeners with hearing loss. Results indicating difficulties in speech recognition for monosyllables are due primarily to the presence of sensorineural...

  • A two-step segmentation method for automatic recognition of speech of persons who are deaf. Abdelhamied, Kadry A.; Waldron, Manjula B. // Journal of Rehabilitation Research & Development;Summer92, Vol. 29 Issue 3, p45 

    Describes the development and use of a two-step word segmentation method for automatic recognition of the speech produced by deaf persons. Incorporation of the segmental and temporal characteristics of deaf speech to achieve accurate recognition; Determination of word boundaries; Norms of deaf...

  • Phonological Similarity Effects in Memory for Serial Order of Cued Speech. Leybaert, Jacqueline; Lechat, Josiane // Journal of Speech, Language & Hearing Research;Oct2001, Vol. 44 Issue 5, p949 

    Experiment I investigated memory for serial order by congenitally, profoundly deaf individuals, 6-22 years old, for words presented via Cued Speech (CS) without sound. CS is a system that resolves the ambiguity inherent in speechreading through the addition of manual cues. The phonological...

  • No, There Is No 150 ms Lead of Visual Speech on Auditory Speech, but a Range of Audiovisual Asynchronies Varying from Small Audio Lead to Large Audio Lag. Schwartz, Jean-Luc; Savariaux, Christophe // PLoS Computational Biology;Jul2014, Vol. 10 Issue 7, p1 

    An increasing number of neuroscience papers capitalize on the assumption published in this journal that visual speech would be typically 150 ms ahead of auditory speech. It happens that the estimation of audiovisual asynchrony in the reference paper is valid only in very specific cases, for...

  • Translator for Sign Language. Epstein, Jeffrey H. // Futurist;Dec98, Vol. 32 Issue 9, p9 

    Focuses on a technology being developed in Japan which may make communication easier between hearing-impaired and normal-hearing individuals. Traditional methods used by hearing-impaired persons to communicate; How the technology works; Background on the technology.

  • Auditory, visual and audiovisual speech intelligibility for sentence-length stimuli: An... Gagné, Jean-Pierre; Querengesser, Carol // Volta Review;Winter95, Vol. 97 Issue 1, p33

    Investigates the effects of clear speech on speech intelligibility and the perceptual effects of clear speech on auditory, visual and audiovisual speech perception. Identification of acoustic and kinematic properties of speech patterns; Development of programs to optimize the speech...

  • Northwestern University Auditory Test No. 6 in multi-talker babble: A preliminary report. Wilson, Richard H.; Strouse, Anne // Journal of Rehabilitation Research & Development;Jan/Feb2002, Vol. 39 Issue 1, p105 

    Presents a spoken word-recognition task that could be used clinically to evaluate recognition performance of individuals with hearing loss in a background noise. Discussion of speech recognition among individuals with hearing loss; Uses of speech-in-noise data; Analysis of the speech...
