Perceptual Science Series
A Linguistic Timing Model for Animations of American Sign Language
Dr. Matt Huenerfauth
Monday, February 16, 2009, 12:00pm - 07:00pm
Department of Computer Science, The City University of New York (CUNY)
A majority of deaf 18-year-olds in the United States have an English
reading level below that of a typical 10-year-old student, so machine
translation (MT) software capable of translating English text into
American Sign Language (ASL) animations could significantly improve
these individuals' access to information, communication, and services.
Instead of presenting these individuals with English text on computer
screens, information could be presented in the form of animations of
virtual human characters performing ASL.
An important part of English-to-ASL MT software is the "generation"
component, which is responsible for planning and scripting the movements
of the virtual character's arms and body to perform a grammatically
correct and understandable ASL sentence. This talk will discuss our
recent research, in which results from the psycholinguistics literature
on the speed and timing of ASL have been used to design software that
calculates more realistic timing of the movements in ASL animations.
We have built algorithms to calculate the time-duration of signs and the
location/length of pauses during an ASL animation. To determine whether
our software can improve the quality of ASL animations, we conducted a
study in which native ASL signers evaluated the ASL animations processed
by our algorithms, and we found that (1) adding linguistically
motivated pauses and variations in sign duration improved signers'
performance on a comprehension task, and (2) these animations were
rated as more understandable by ASL signers.
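As an illustration only (this is not the lab's actual algorithm), timing of this general kind can be sketched as assigning each sign a base duration, scaling it by a signing-rate factor, and inserting a pause after signs that end a syntactic constituent, with pause length growing with the boundary's strength. All names, parameter values, and boundary strengths below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Sign:
    gloss: str
    base_duration: float   # seconds, e.g. from a sign lexicon (hypothetical values)
    boundary_strength: int # 0 = no boundary after this sign; larger = stronger boundary

def schedule(signs, rate=1.0, pause_per_strength=0.15):
    """Return a list of (gloss, duration, pause_after) tuples.

    A slower signing rate (< 1.0) lengthens every sign; pauses are
    placed only at syntactic boundaries, proportional to their strength.
    """
    timeline = []
    for sign in signs:
        duration = sign.base_duration / rate
        pause = sign.boundary_strength * pause_per_strength
        timeline.append((sign.gloss, duration, pause))
    return timeline

# Hypothetical three-sign utterance with a phrase boundary and a sentence boundary.
timeline = schedule([
    Sign("JOHN", 0.50, 0),
    Sign("BOOK", 0.55, 2),   # end of a noun phrase
    Sign("GIVE", 0.60, 3),   # end of the sentence
])
```

The point of the sketch is that duration and pausing are computed from linguistic structure rather than fixed playback speed, which is what distinguishes the animations evaluated in the study from naive concatenation of signs.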
This talk will also include an overview of other active research projects
at the Linguistic and Assistive Technologies Laboratory (LATLab) at The
City University of New York.