For video footage from past events, you can visit the individual event pages or go to our YouTube Channel.

To filter by event category, click on the event category link in the table below or use the menu on the right.

List of Past Events

Reverse Engineering Common Sense: Modeling Human Intelligence with Probabilistic Programs and Program Induction

Dr. Josh Tenenbaum

Tuesday, March 04, 2014, 01:00pm - 02:00pm

Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences


To understand the roots of common-sense thought, we have begun an ambitious effort to reverse engineer the core cognitive resources and learning mechanisms available to humans from the youngest ages. We build computational models of these capacities with the twin goals of explaining human thought in more principled, rigorous engineering terms, and engineering more human-like artificial intelligence and machine learning systems. This talk will focus on two areas in which the intelligence of even very young children goes beyond existing machine systems: (1) Scene understanding, where we can detect not only objects and their locations, but what is happening, what will happen next, and who is doing what to whom and why, in terms of our intuitive theories of physics (forces, masses) and psychology (beliefs, desires, ...); (2) Learning from examples, where just one or a few instances can be sufficient to grasp a new concept and generalize in richer ways than machine learning systems can typically do even with hundreds or thousands of examples, and where intuitive theories (or systems of concepts) can be constructed or revised based on only a small number of brief episodic experiences. I will show how we are beginning to capture these perception, reasoning and learning abilities in computational terms using techniques based on probabilistic programs and program induction, embedded in a broadly Bayesian framework for inference under uncertainty.
Some more details: Probabilistic programs are probabilistic generative models defined not over graphs, as in many current machine learning and vision models, but over programs whose execution traces can describe complex causal processes such as those underlying the behavior of physical objects and intentional agents.  Approximate Bayesian inference over these programs, implemented using Monte Carlo methods, is capable of inferring a program's inputs, parameters, or future outputs from partially observed previous outputs.  We show how common-sense physical and psychological scene understanding can be characterized as inference over probabilistic programs for fast approximate graphics rendering from 3D scene descriptions, fast approximate physical simulation of rigid body dynamics, and optimal control of rational agents (including state estimation and motion planning).  This approach can solve a wide range of problems including inferring scene structure from images, predicting physical dynamics and inferring latent physical attributes from static images or short movies, and reasoning about the goals and beliefs of agents from observations of short action traces.  We compare these solutions quantitatively with human judgments, and with the predictions of a range of alternative models.  We also show how learning mechanisms can be described as processes that induce and modify these probabilistic programs, to better explain a learner's observations.  Bayesian concept learning can be seen as constructing a program that generates observed examples of a concept with high probability; a prior on concepts can be captured by a "program-generating program".  The development of intuitive theories can be understood as a process of constructing or modifying more complex, hierarchically structured sets of programs, akin to building a domain-specific programming language. 
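To make the core idea concrete, here is a minimal sketch of inference over a probabilistic program. This toy example is not from the talk: it assumes a tiny hand-written simulator of a falling object (standing in for "fast approximate physical simulation") and uses rejection sampling, the simplest Monte Carlo method, to infer a latent input (gravity) from a partially observed execution trace.

```python
import random

def falling_object(gravity):
    """A tiny 'probabilistic program': simulate an object dropped
    from 10 m. The latent input is gravity; the execution trace is
    the sequence of observed heights, corrupted by sensor noise."""
    height, velocity, trace = 10.0, 0.0, []
    for _ in range(5):
        velocity += gravity * 0.1          # Euler step, dt = 0.1 s
        height -= velocity * 0.1
        trace.append(height + random.gauss(0, 0.05))  # noisy reading
    return trace

def infer_gravity(observed, n_samples=20000, tolerance=0.1):
    """Approximate Bayesian inference by rejection sampling: draw
    gravity from a prior, run the program forward, and keep samples
    whose simulated trace matches the *partially* observed one
    (only the first three time steps are observed)."""
    accepted = []
    for _ in range(n_samples):
        g = random.uniform(5.0, 15.0)      # prior over the latent input
        sim = falling_object(g)
        if all(abs(s - o) < tolerance for s, o in zip(sim[:3], observed[:3])):
            accepted.append(g)
    return sum(accepted) / len(accepted) if accepted else None

random.seed(0)
observed = falling_object(9.8)             # ground truth: Earth gravity
estimate = infer_gravity(observed)         # posterior mean, near 9.8
```

The key point the example illustrates is that the same forward program serves both as the generative model and as the object of inference: "running the program backwards" from observed outputs to latent inputs is what the Monte Carlo loop accomplishes.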
