Q-learning vs temporal-difference vs model-based reinforcement learning

StationaryTraveller · Dec 9, 2015 · Viewed 15.2k times

I'm taking a course called "Intelligent Machines" at university. We were introduced to three methods of reinforcement learning, along with some intuition about when to use each of them, and I quote:

  1. Q-Learning - Best when MDP can't be solved.
  2. Temporal Difference Learning - best when MDP is known or can be learned but can't be solved.
  3. Model-based - best when MDP can't be learned.

Are there any good examples explaining when to choose one method over the other?

Answer

Simon · Dec 14, 2015

Temporal Difference (TD) is an approach to learning how to predict a quantity that depends on future values of a given signal. It can be used to learn both the V-function and the Q-function, whereas Q-learning is a specific TD algorithm for learning the Q-function. As Don Reba noted, you need the Q-function to select an action (e.g., when following an epsilon-greedy policy). If you only have the V-function, you can still derive the Q-function, provided you know the transition model, by iterating over all possible next states and choosing the action that leads to the state with the highest V-value. For examples and more insight, I recommend the classic book by Sutton and Barto.
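
To make the relationship concrete, here is a minimal tabular Q-learning sketch, i.e., the TD update applied directly to the Q-function. The environment interface (reset/step) and the table sizes are illustrative assumptions, not code from the book:

```python
import numpy as np

def q_learning(env, n_states, n_actions, episodes=500,
               alpha=0.1, gamma=0.99, epsilon=0.1):
    # Q-table initialized to zero; one value per (state, action) pair
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s = env.reset()
        done = False
        while not done:
            # epsilon-greedy action selection from the current Q estimates
            if np.random.rand() < epsilon:
                a = np.random.randint(n_actions)
            else:
                a = np.argmax(Q[s])
            s_next, r, done = env.step(a)
            # TD update: bootstrap on the greedy value of the next state
            target = r + (0.0 if done else gamma * np.max(Q[s_next]))
            Q[s, a] += alpha * (target - Q[s, a])
            s = s_next
    return Q
```

Note that the algorithm never needs the transition probabilities: it only uses sampled transitions (s, a, r, s'), which is exactly what makes it model-free.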

In model-free RL you don't learn the state-transition function (the model); you rely only on samples. However, you might also be interested in learning the model, for example because you cannot collect many samples and want to generate virtual ones. In that case we talk about model-based RL. Model-based RL is quite common in robotics, where you cannot perform many real interactions or the robot will break. This is a good survey with many examples (although it only covers policy search algorithms). For another example, have a look at this paper, where the authors learn, along with a policy, a Gaussian process that approximates the forward model of the robot, in order to simulate trajectories and reduce the number of real robot interactions.
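
As a rough illustration of the model-based idea in a tabular setting (a Dyna-Q-style sketch, not the Gaussian-process approach from the paper above), the agent fits a simple deterministic model from real transitions and then replays virtual transitions from that model to squeeze extra updates out of few real samples. The environment interface and hyperparameters are assumptions for illustration:

```python
import random
from collections import defaultdict

def dyna_q(env, n_actions, episodes=200, planning_steps=20,
           alpha=0.1, gamma=0.99, epsilon=0.1):
    Q = defaultdict(float)   # Q[(state, action)]
    model = {}               # model[(state, action)] = (reward, next_state, done)

    def greedy(s):
        return max(range(n_actions), key=lambda a: Q[(s, a)])

    for _ in range(episodes):
        s = env.reset()
        done = False
        while not done:
            a = random.randrange(n_actions) if random.random() < epsilon else greedy(s)
            s_next, r, done = env.step(a)

            # direct RL update from the real transition
            target = r + (0.0 if done else gamma * Q[(s_next, greedy(s_next))])
            Q[(s, a)] += alpha * (target - Q[(s, a)])

            # learn a (deterministic) model from the same transition
            model[(s, a)] = (r, s_next, done)

            # planning: replay virtual transitions sampled from the model
            for _ in range(planning_steps):
                (ps, pa), (pr, ps_next, pdone) = random.choice(list(model.items()))
                ptarget = pr + (0.0 if pdone else gamma * Q[(ps_next, greedy(ps_next))])
                Q[(ps, pa)] += alpha * (ptarget - Q[(ps, pa)])

            s = s_next
    return Q
```

The design trade-off is the same one the robotics papers face: every real transition is expensive, so the model lets you do many cheap simulated updates per real step, at the cost of being only as good as the learned model.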