Perceptual Science Series

Local Planning for Continuous Markov Decision Processes

Graduate Student Talk - Ari Weinstein

Monday, April 15, 2013, 12:00pm - 07:00pm

Rutgers University, Department of Computer Science


This talk discusses algorithms that create and refine plans in order to maximize a numeric reward over time.  One way this problem can be formalized is in terms of reinforcement learning (RL), which has traditionally been restricted to discrete domains containing a small number of states and actions.  Here, we consider domains that violate these traditional assumptions by being both high dimensional and continuous.  When working in continuous domains, the accepted practice is to discretize the continuous dimensions and plan in the resulting discrete Markov decision process (MDP).  Instead, a number of planners that operate natively in continuous domains are proposed.  It is shown, both theoretically and empirically, that algorithms designed to operate natively in continuous domains are simpler to use while producing higher-quality results more efficiently.
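To make the contrast in the abstract concrete, the sketch below compares the two approaches in a toy 1-D domain: discretizing the action space onto a grid and planning over that finite set, versus a simple open-loop random-shooting planner that samples continuous actions directly.  The environment, the planners, and all parameter choices here are illustrative assumptions for this announcement only; they are not the algorithms presented in the talk.

```python
import numpy as np

# Toy continuous MDP: a 1-D point mass pushed toward the origin.
# State: position in [-1, 1]; action: force in [-1, 1]; reward: -|position|.
# This environment is illustrative only, not one of the talk's domains.
def step(state, action):
    next_state = np.clip(state + 0.1 * action, -1.0, 1.0)
    return next_state, -abs(next_state)

# Conventional approach: discretize the action space onto a uniform grid,
# then plan over the resulting finite action set by exhaustive
# depth-limited search (the cost grows quickly as the grid gets finer).
def plan_discretized(state, depth, n_bins=5):
    if depth == 0:
        return 0.0, None
    actions = np.linspace(-1.0, 1.0, n_bins)
    best_ret, best_a = -np.inf, None
    for a in actions:
        s2, r = step(state, a)
        future, _ = plan_discretized(s2, depth - 1, n_bins)
        if r + future > best_ret:
            best_ret, best_a = r + future, a
    return best_ret, best_a

# Native continuous alternative (a simple stand-in, not the planners from
# the talk): open-loop random shooting -- sample continuous action
# sequences, roll them out, and keep the best first action.
def plan_continuous(state, depth, n_rollouts=200, seed=0):
    rng = np.random.default_rng(seed)
    best_ret, best_first = -np.inf, None
    for _ in range(n_rollouts):
        seq = rng.uniform(-1.0, 1.0, size=depth)
        s, ret = state, 0.0
        for a in seq:
            s, r = step(s, a)
            ret += r
        if ret > best_ret:
            best_ret, best_first = ret, seq[0]
    return best_ret, best_first

if __name__ == "__main__":
    s0 = 0.8
    print("discretized plan:", plan_discretized(s0, depth=4))
    print("continuous plan: ", plan_continuous(s0, depth=4))
```

Note that the discretized planner's cost scales as the number of grid cells raised to the planning depth, which is the kind of blow-up that motivates planners working natively in the continuous space.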
