Learning the meaning of words: A probabilistic computational model
Dr. Suzanne Stevenson
Tuesday, December 09, 2014, 01:00pm - 02:00pm
University of Toronto, Department of Computer Science
An average five-year-old knows 10,000-15,000 words, most of which she’s heard only in ambiguous contexts: when she hears an utterance, the child must determine which of numerous possible concepts is being talked about, and must further figure out which word goes with which of those meanings. The open-ended nature of the input to children has often been used as an argument for the necessity of innate, language-specific mechanisms that enable them to focus their learning appropriately. More recently, however, a number of researchers have instead claimed that general cognitive abilities should be sufficient for the task of word learning. We’ve developed a computational model that helps shed light on this debate by demonstrating that word–meaning mappings can be acquired through a general probabilistic learning mechanism. The model incrementally builds up (probabilistic) associations between words and meanings as it is exposed to naturalistic data of words in context, without the use of special biases or constraints. In this talk, I’ll describe the model along with some of its behaviours that mimic aspects of child word learning, such as fast mapping and the spacing effect.
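The kind of mechanism described above can be sketched in a few lines of code. The following is a minimal toy illustration (not the speaker's actual model): each "utterance" pairs a list of words with a set of candidate meanings from the scene, and for each meaning, credit is shared among the co-occurring words in proportion to the learner's current beliefs, so word–meaning associations sharpen incrementally. The class name, the smoothing scheme, and the toy vocabulary are all invented for illustration.

```python
from collections import defaultdict

class WordLearner:
    """Toy incremental cross-situational word learner (illustrative only)."""

    def __init__(self, smoothing=1e-3):
        # assoc[word][meaning] accumulates soft co-occurrence credit.
        self.assoc = defaultdict(lambda: defaultdict(float))
        self.smoothing = smoothing
        self.meanings = set()

    def p_meaning(self, word, meaning):
        # Smoothed probability that `word` maps to `meaning`,
        # normalized over all meanings seen so far.
        total = sum(self.assoc[word].values())
        denom = total + self.smoothing * max(len(self.meanings), 1)
        return (self.assoc[word][meaning] + self.smoothing) / denom

    def observe(self, words, scene_meanings):
        # One utterance heard in an ambiguous context: the learner does not
        # know which word goes with which meaning, so each meaning's credit
        # is split across the words in proportion to current beliefs.
        self.meanings.update(scene_meanings)
        for m in scene_meanings:
            weights = [self.p_meaning(w, m) for w in words]
            z = sum(weights)
            for w, wt in zip(words, weights):
                self.assoc[w][m] += wt / z

learner = WordLearner()
for _ in range(20):
    learner.observe(["the", "dog", "barks"], {"DOG", "BARK"})
    learner.observe(["the", "cat", "meows"], {"CAT", "MEOW"})
    learner.observe(["the", "dog", "runs"], {"DOG", "RUN"})

best = max(learner.meanings, key=lambda m: learner.p_meaning("dog", m))
print(best)  # converges to "DOG" in this toy run
```

Even though every individual utterance is ambiguous, "dog" co-occurs with DOG across different scenes while distractor meanings vary, so the aggregated statistics disambiguate the mapping without any special built-in constraints.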
This is joint work with Afsaneh Fazly, Afra Alishahi, and Aida Nematzadeh.
Note: If you would like to receive email announcements about the colloquium series, please contact the Business Office to have your name added to our announce lists at email@example.com.