Prediction and Control with Function Approximation

The Prediction and Control with Function Approximation course builds directly on the fundamentals of Courses 1 and 2, and learners should complete those courses before starting this one. Learners should also be comfortable with probabilities and expectations, basic linear algebra, basic calculus, Python 3 (at least one year of experience), and implementing algorithms from pseudocode.

In the Prediction and Control with Function Approximation course, you will learn how to solve problems with large, high-dimensional, and potentially infinite state spaces. You will see that estimating value functions can be cast as a supervised learning problem (function approximation), allowing you to build agents that carefully balance generalization and discrimination in order to maximize reward. We begin this journey by investigating how policy evaluation (prediction) methods such as Monte Carlo and TD can be extended to the function approximation setting. You will learn about feature construction techniques for RL and about representation learning via neural networks and backpropagation. The course concludes with a deep dive into policy gradient methods: a way to learn policies directly, without learning a value function. In this course you will solve two continuous-state control tasks and investigate the benefits of policy gradient methods in a continuous-action environment.
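
To make the prediction setting concrete, here is a minimal sketch of semi-gradient TD(0) with a linear value function, the kind of algorithm covered in the course. The `env`, `policy`, and `featurize` names are hypothetical stand-ins for illustration, not part of the course materials.

```python
import numpy as np

def semi_gradient_td0(env, policy, featurize, num_features,
                      alpha=0.01, gamma=0.99, num_episodes=500):
    """Semi-gradient TD(0) prediction with a linear value function v(s) = w . x(s).

    `env`, `policy`, and `featurize` are hypothetical stand-ins:
    `env.reset()` returns a state, `env.step(a)` returns
    (next_state, reward, done), `policy(s)` returns an action, and
    `featurize(s)` returns a length-`num_features` numpy vector.
    """
    w = np.zeros(num_features)
    for _ in range(num_episodes):
        state = env.reset()
        done = False
        while not done:
            action = policy(state)
            next_state, reward, done = env.step(action)
            x = featurize(state)
            # The value of a terminal state is defined to be zero.
            v_next = 0.0 if done else w @ featurize(next_state)
            td_error = reward + gamma * v_next - w @ x
            # "Semi-gradient": the bootstrap target is treated as a constant,
            # so only the gradient of the current state's value estimate is used.
            w += alpha * td_error * x
            state = next_state
    return w
```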

By attending the Prediction and Control with Function Approximation workshop, participants will:

  • Understand how to use supervised learning approaches to approximate value functions
  • Understand objectives for prediction (value estimation) under function approximation
  • Implement TD with function approximation (state aggregation) on an environment with an infinite (continuous) state space
  • Understand fixed basis and neural network approaches to feature construction 
  • Implement TD with neural network function approximation in a continuous state environment
  • Understand new difficulties in exploration when moving to function approximation
  • Contrast discounted problem formulations for control with an average reward problem formulation
  • Implement expected Sarsa and Q-learning with function approximation on a continuous state control task
  • Understand objectives for directly estimating policies (policy gradient objectives)
  • Implement a policy gradient method (called Actor-Critic) on a discrete-state environment (a minimal sketch follows this list)
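
As a rough illustration of the last objective, the following is a minimal sketch of a one-step actor-critic with a softmax policy on a discrete-state task. The environment interface here is assumed, and details such as the problem formulation and feature construction may differ from what the course itself uses.

```python
import numpy as np

def softmax(prefs):
    prefs = prefs - prefs.max()               # subtract max for numerical stability
    exp_prefs = np.exp(prefs)
    return exp_prefs / exp_prefs.sum()

def one_step_actor_critic(env, num_states, num_actions,
                          alpha_w=0.1, alpha_theta=0.01, gamma=0.99,
                          num_episodes=500, seed=0):
    """One-step actor-critic with a softmax policy over a discrete state space.

    `env` is a hypothetical environment with integer states and actions:
    `env.reset()` returns a state, `env.step(a)` returns (next_state, reward, done).
    """
    rng = np.random.default_rng(seed)
    w = np.zeros(num_states)                       # critic: one value per state
    theta = np.zeros((num_states, num_actions))    # actor: action preferences
    for _ in range(num_episodes):
        state = env.reset()
        done = False
        I = 1.0                                    # discounting accumulated so far
        while not done:
            probs = softmax(theta[state])
            action = rng.choice(num_actions, p=probs)
            next_state, reward, done = env.step(action)
            v_next = 0.0 if done else w[next_state]
            td_error = reward + gamma * v_next - w[state]
            # Critic update: move the state value toward the one-step target.
            w[state] += alpha_w * td_error
            # Actor update: gradient of log softmax is one_hot(action) - probs.
            grad_log = -probs
            grad_log[action] += 1.0
            theta[state] += alpha_theta * I * td_error * grad_log
            I *= gamma
            state = next_state
    return theta, w
```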

COURSE AGENDA

  • Moving to Parameterized Functions
  • Generalization and Discrimination
  • Framing Value Estimation as Supervised Learning
  • The Value Error Objective
  • Introducing Gradient Descent
  • Gradient Monte Carlo for Policy Evaluation
  • State Aggregation with Monte Carlo
  • Semi-Gradient TD for Policy Evaluation
  • Comparing TD and Monte Carlo with State Aggregation
  • Doina Precup: Building Knowledge for AI Agents with Reinforcement Learning
  • The Linear TD Update
  • The True Objective for TD
  • Coarse Coding
  • Generalization Properties of Coarse Coding
  • Tile Coding
  • Using Tile Coding in TD
  • What is a Neural Network?
  • Non-linear Approximation with Neural Networks
  • Deep Neural Networks
  • Gradient Descent for Training Neural Networks
  • Optimization Strategies for NNs
  • David Silver on Deep Learning + RL = AI?
  • Episodic Sarsa with Function Approximation
  • Episodic Sarsa in Mountain Car
  • Expected Sarsa with Function Approximation
  • Exploration under Function Approximation
  • Average Reward: A New Way of Formulating Control Problems
  • Satinder Singh on Intrinsic Rewards
  • Learning Policies Directly
  • Advantages of Policy Parameterization
  • The Objective for Learning Policies
  • The Policy Gradient Theorem
  • Estimating the Policy Gradient
  • Actor-Critic Algorithm
  • Actor-Critic with Softmax Policies
  • Demonstration with Actor-Critic
  • Gaussian Policies for Continuous Actions