Policy Gradients in Keras

simeon · Nov 5, 2016 · Viewed 7.5k times

I've been trying to build a model using 'Deep Q-Learning' where I have a large number of actions (2908). After some limited success with standard DQN (https://www.cs.toronto.edu/~vmnih/docs/dqn.pdf), I decided to do some more research, because I figured the action space was too large for effective exploration.

I then discovered this paper: https://arxiv.org/pdf/1512.07679.pdf, where they use an actor-critic model and policy gradients, which then led me to https://arxiv.org/pdf/1602.01783.pdf, where they use policy gradients to get much better results than DQN overall.

I've found a few sites where they have implemented policy gradients in Keras, https://yanpanlau.github.io/2016/10/11/Torcs-Keras.html and https://oshearesearch.com/index.php/2016/06/14/kerlym-a-deep-reinforcement-learning-toolbox-in-keras/, however I'm confused about how they are implemented. In the former (and when I read the papers) it seems like, instead of providing an input-output pair for the actor network, you provide the gradients for all the weights and then use those to update the network, whereas in the latter they just calculate an input-output pair.

Have I just confused myself? Am I just supposed to be training the network by providing an input-output pair and using the standard 'fit', or do I have to do something special? If it's the latter, how do I do it with the Theano backend? (The examples above use TensorFlow.)

Answer

Mo K · May 18, 2017

TL;DR

  1. Learn how to implement custom loss functions and gradients using keras.backend. You will need this for more advanced algorithms, and it's actually much easier once you get the hang of it.
  2. A CartPole example using keras.backend is https://gist.github.com/kkweon/c8d1caabaf7b43317bc8825c226045d2 (it uses the TensorFlow backend, but the approach should be very similar, if not the same, with Theano).

Problem

When playing,

the agent needs a policy, which is basically a function that maps a state to a probability distribution over actions. The agent then chooses an action according to its policy.

i.e., policy = f(state)
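
To make the "playing" half concrete, here is a minimal sketch of policy = f(state) in Keras. The action space size, state size, and layer sizes are illustrative choices (roughly CartPole-sized), not taken from the question or the linked posts.

```python
# A minimal sketch of policy = f(state) for a discrete action space.
# n_actions, state_dim and the layer sizes are arbitrary illustrative values.
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

n_actions, state_dim = 2, 4

model = Sequential([
    Dense(32, activation="relu", input_shape=(state_dim,)),
    Dense(n_actions, activation="softmax"),  # one probability per action
])

def choose_action(state):
    # The "choose action" function: sample an action from the predicted probabilities.
    probs = model.predict(state[np.newaxis, :])[0]
    return np.random.choice(n_actions, p=probs)
```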

When training,

Policy gradient methods do not have a conventional loss function. Instead, they try to maximize the expected return. To do that, we need to compute the gradients of log(action_prob) * advantage, where:

  1. advantage is a function of rewards.
    • advantage = f(rewards)
  2. action_prob is a function of states and action_taken. That is, we need to know which action we took so that we can update the parameters to increase or decrease the probability of that action.
    • action_prob = sum(policy * action_onehot) = f(states, action_taken)

I'm assuming something like this (a quick numeric check follows the list):

  • policy = [0.1, 0.9]
  • action_onehot = action_taken = [0, 1]
  • then action_prob = sum(policy * action_onehot) = 0.9
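
A quick NumPy check of the example above, with an advantage of 1.0 chosen arbitrarily for illustration:

```python
import numpy as np

policy = np.array([0.1, 0.9])
action_onehot = np.array([0.0, 1.0])           # the action that was taken
advantage = 1.0                                # arbitrary value for illustration

action_prob = np.sum(policy * action_onehot)   # 0.9
objective = np.log(action_prob) * advantage    # what policy gradient tries to maximize
print(action_prob, objective)                  # 0.9, ~-0.105
```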

Summary

We need two functions:

  • update function: f(state, action_taken, reward)
  • choose action function: f(state)

You already know this is not as easy to implement as a typical classification problem, where you can just model.compile(...) -> model.fit(X, y).

However,

  • In order to fully utilize Keras, you should be comfortable with defining custom loss functions and gradients (see the sketch after this list). This is basically the same approach the author of the former post took.

  • You should read more of the documentation for the Keras functional API and keras.backend.
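
As a rough illustration of that approach, here is a sketch of the update function f(state, action_taken, reward) built with keras.backend, in the spirit of the CartPole gist linked above. It assumes `model` outputs the softmax policy and that advantages have already been computed from rewards; `build_train_fn` is a made-up name, and the exact signature of `optimizer.get_updates` differs between Keras versions.

```python
# A sketch of the update function using keras.backend, assuming `model`
# outputs softmax action probabilities. Not a drop-in implementation.
from keras import backend as K
from keras.optimizers import Adam

def build_train_fn(model, n_actions):
    action_onehot = K.placeholder(shape=(None, n_actions), name="action_onehot")
    advantage = K.placeholder(shape=(None,), name="advantage")

    policy = model.output                                # (batch, n_actions)
    action_prob = K.sum(policy * action_onehot, axis=1)  # prob of the taken action
    log_action_prob = K.log(action_prob)

    # Maximize log(action_prob) * advantage == minimize its negative.
    loss = -K.mean(log_action_prob * advantage)

    # NOTE: get_updates takes different arguments in different Keras versions
    # (older ones also require a `constraints=[]` argument).
    updates = Adam().get_updates(loss=loss, params=model.trainable_weights)

    # update function: f(state, action_taken, advantage)
    return K.function(inputs=[model.input, action_onehot, advantage],
                      outputs=[loss],
                      updates=updates)
```

You would then call something like train_fn = build_train_fn(model, n_actions) once, and train_fn([states, action_onehots, advantages]) after each episode. Because everything goes through keras.backend, the same idea should work with the Theano backend as well.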

Plus, there are many kinds of policy gradient methods.

  • The former is DDPG (Deep Deterministic Policy Gradient), which is actually quite different from regular policy gradients.
  • The latter is a traditional REINFORCE policy gradient (pg.py), which is based on Karpathy's policy gradient example. But it's very simple; for example, it assumes only one action. That's why it could be implemented using model.fit(...) instead.

References