Deep Q Learning
In the previous chapter we learned about "old school" Q learning, where we used matrices (tables) to represent our Q values. This implicitly assumes that you know at least how many states (rows) your environment has, and the problem is that sometimes this is not true.
In this Q table we can say that the expectation of winning the game by taking action 1 at state 3 is 80. Bigger Q values mean that we expect to win.
We also learned that reinforcement learning is about learning how to behave in some environment where our only feedback is sparse and time-delayed "labels" called rewards. They are time delayed because there are cases where the environment will only tell you whether your action was good or bad some time after you actually moved.
We consider our environment "Markovian", which means that the state encodes everything needed to take a decision. We define an episode (game) as a finite sequence of states, actions and rewards:

$$s_0, a_0, r_1, s_1, a_1, r_2, \dots, s_{n-1}, a_{n-1}, r_n, s_n$$

Here $r_{t+1}$ is the reward we get by taking action $a_t$ in state $s_t$. An episode always finishes on a terminal state (game over).
A policy is defined as a "map" from states to actions; the reinforcement learning objective is to find the optimal policy. An optimal policy can be derived from a Q function.
We define the return as the sum of all immediate rewards in an episode, i.e. the sum of rewards until the episode finishes.
To give some flexibility we add to our total reward a discount parameter called gamma ($\gamma$). If gamma = 0 all future rewards are discarded; if gamma = 1 all future rewards are fully taken into account.
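Written out, with rewards $r_t, r_{t+1}, \dots, r_n$ until the end of the episode, the discounted return from time step $t$ is:

$$R_t = r_t + \gamma r_{t+1} + \gamma^2 r_{t+2} + \dots + \gamma^{n-t} r_n = r_t + \gamma R_{t+1}$$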
We define the function Q(s, a) as the maximum expected "future discounted reward" that we can get if we take action "a" in state "s" and continue optimally from that point until the end of the episode. "Continue optimally" means that we keep choosing actions from the policy derived from Q.
Below we show how to derive a policy from the Q function; basically we want the action that gives the biggest Q value in state "s":

$$\pi(s) = \operatorname{argmax}_a Q(s, a)$$

The $\operatorname{argmax}$ searches for the action "a" that maximizes $Q(s, a)$. You can think of the Q function as "the best possible score at the end of the game after performing action a in state s". So if you have the Q function, you have everything needed to win all games!
If we choose the action that ALWAYS maximizes the discounted future reward, we are acting greedily. This means that we are not exploring and we could miss some better actions. This is called the exploration-exploitation problem. To handle it we use an algorithm called ε-greedy exploration, where with a small probability ε we choose a completely random action from time to time.
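A minimal sketch of ε-greedy action selection; the function name and parameters are illustrative, not from the original text:

```python
import random

def epsilon_greedy_action(q_values, epsilon=0.1):
    """Choose a random action with probability epsilon, otherwise the greedy action.

    q_values: list (or array) of Q(s, a) estimates, one entry per possible action.
    """
    if random.random() < epsilon:
        return random.randrange(len(q_values))                   # explore
    return max(range(len(q_values)), key=lambda a: q_values[a])  # exploit
```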
Basically we obtain the Q function from experience, using an iterative process based on the Bellman equation.
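Concretely, the Bellman equation says that the Q value of a state-action pair equals the immediate reward plus the discounted best Q value of the next state, and the tabular Q-learning update built on it (with learning rate $\alpha$) is:

$$Q(s, a) = r + \gamma \max_{a'} Q(s', a')$$

$$Q(s, a) \leftarrow Q(s, a) + \alpha \left[\, r + \gamma \max_{a'} Q(s', a') - Q(s, a) \,\right]$$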
Deep Q learning is about using deep learning techniques to represent the Q table. It is also a kind of recipe for applying Q learning to games.
The first thing in this recipe is to get our input; as you may imagine, we take the information directly from the screen. To add some notion of time we actually take 4 consecutive screens.
Get the images
Rescale to 84x84
Convert to 8 bit grayscale.
Now we have 84x84x4 pixels, each with 256 gray levels, as our environment state. That is 256^(84x84x4) ≈ 10^67970 possible states, far too many for a table. To find structure in this data we will need a convolutional neural network!
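A minimal preprocessing sketch for these steps, assuming OpenCV and NumPy are available; the helper names (preprocess_frame, get_state) and the deque-based frame stacking are illustrative choices:

```python
from collections import deque

import cv2
import numpy as np

def preprocess_frame(frame):
    """Convert a raw RGB game frame to 8-bit grayscale and rescale it to 84x84."""
    gray = cv2.cvtColor(frame, cv2.COLOR_RGB2GRAY)
    resized = cv2.resize(gray, (84, 84), interpolation=cv2.INTER_AREA)
    return resized.astype(np.uint8)

# Keep the 4 most recent preprocessed frames as the state (84x84x4).
frame_history = deque(maxlen=4)

def get_state(new_frame):
    frame_history.append(preprocess_frame(new_frame))
    # At the start of an episode, repeat the first frame until we have 4 of them.
    while len(frame_history) < 4:
        frame_history.append(frame_history[-1])
    return np.stack(list(frame_history), axis=-1)  # shape (84, 84, 4)
```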
Now we feed this 84x84x4 tensor to a CNN. The model has one output per action, each representing the Q value of that action. So with a single forward propagation we get all possible Q values for the state encoded in the screens. It's worth pointing out that this is not a classification problem but a regression: each Q value output is a scalar number, not a class.
In the Atari paper this CNN had the following structure:
Notice that there is no pooling layer; this is because we want to retain the spatial information (where the game objects are on the screen).
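Below is a sketch of such a network written with tf.keras. The exact layer sizes (three convolutional layers plus a 512-unit dense layer) follow a commonly used DQN configuration and are an assumption here, not a transcription of the paper's table:

```python
import tensorflow as tf

def build_q_network(num_actions):
    """CNN that maps a stack of 4 preprocessed 84x84 frames to one Q value per action."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(84, 84, 4)),
        # No pooling layers: we want to keep the spatial position of game objects.
        tf.keras.layers.Conv2D(32, kernel_size=8, strides=4, activation="relu"),
        tf.keras.layers.Conv2D(64, kernel_size=4, strides=2, activation="relu"),
        tf.keras.layers.Conv2D(64, kernel_size=3, strides=1, activation="relu"),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(512, activation="relu"),
        # Linear output layer: Q values are unbounded scalars (regression, not classification).
        tf.keras.layers.Dense(num_actions),
    ])
```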
As our CNN is a regression model, our loss function will be a squared error loss.
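For a single transition $(s, a, r, s')$ the loss compares the Bellman target with the network's current prediction:

$$L = \frac{1}{2}\left[\, r + \gamma \max_{a'} Q(s', a') - Q(s, a) \,\right]^2$$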
Now, to iterate towards the real Q values using deep learning, we follow the steps below (a code sketch of this update appears after the list):
Do a forward pass for the current state s (stack of screens) to get the Q values of all possible actions.
Do a forward pass on the next state s' and take max_a' Q(s', a') (the biggest Q value over all actions).
Set the Q-value target for the taken action to r + γ max_a' Q(s', a') (use the max calculated in step 2). For all other actions, set the Q-value target to the value originally returned in step 1, making the error 0 for those outputs.
Use back-propagation and mini-batch stochastic gradient descent to update the network.
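A hedged sketch of this update on a mini-batch, assuming the tf.keras model from the earlier snippet; all variable and function names here are illustrative, and plain SGD is just one possible optimizer choice:

```python
import tensorflow as tf

gamma = 0.99
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-4)

def train_step(model, states, actions, rewards, next_states, dones):
    """One DQN update on a mini-batch of transitions.

    states, next_states: float arrays of shape (batch, 84, 84, 4)
    actions: int array of shape (batch,); rewards, dones: float arrays of shape (batch,)
    """
    states = tf.convert_to_tensor(states, dtype=tf.float32)
    next_states = tf.convert_to_tensor(next_states, dtype=tf.float32)
    rewards = tf.convert_to_tensor(rewards, dtype=tf.float32)
    dones = tf.convert_to_tensor(dones, dtype=tf.float32)

    # Step 2: forward pass on the next states, take the max Q value per sample.
    max_next_q = tf.reduce_max(model(next_states), axis=1)
    # Step 3: Bellman targets; terminal transitions keep only the immediate reward.
    targets = rewards + gamma * max_next_q * (1.0 - dones)

    with tf.GradientTape() as tape:
        # Step 1: forward pass on the current states.
        q_values = model(states)                      # shape (batch, num_actions)
        mask = tf.one_hot(actions, q_values.shape[1])
        q_taken = tf.reduce_sum(q_values * mask, axis=1)
        # Squared error only on the taken action; all other outputs get zero error.
        loss = tf.reduce_mean(tf.square(targets - q_taken))

    # Step 4: back-propagation and a mini-batch gradient descent update.
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return float(loss)
```

In practice the mini-batches fed to this function are sampled from the replay memory described below.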
Unfortunately, we still need to solve some problems with this approach. Before talking about the possible solutions, let's look at what the problems are.
Exploration-exploitation issue: this one is easy, we just add some random actions along the way (just use ε-greedy exploration).
Local minima: during training we're going to see a lot of screens that are highly correlated, which may lead the network to simply memorize a replay of the episode. To solve this we need to somehow shuffle the input mini-batch with other playing data (from the same game, of course, but at a different time).
As mentioned, we need to break the similarity between consecutive frames during the update step. To do this we store all game experiences during the episode inside a "replay memory", and during training we take random mini-batches from this memory. You can also add some human experience by putting a few human-played episodes into the replay memory.
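A minimal replay memory sketch; the class name ReplayMemory and its methods are illustrative:

```python
import random
from collections import deque

class ReplayMemory:
    """Fixed-size buffer of (state, action, reward, next_state, done) transitions."""

    def __init__(self, capacity=100000):
        self.buffer = deque(maxlen=capacity)  # old experiences are dropped automatically

    def add(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size=32):
        # A random mini-batch breaks the correlation between consecutive frames.
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)
```

During training you would call sample(batch_size) and feed the resulting transitions to the update step sketched above.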
A complete tensorflow example can be found here.
In terms of the algorithm: at the beginning the estimated Q(s, a) will be wrong, but over time the experience gathered from the environment provides true rewards "r", and these rewards slowly shape our estimate towards the true Q(s, a).