Markov Decision Process

Introduction

A Markov Decision Process (MDP) is a framework that helps us make decisions in a stochastic environment. Our goal is to find a policy, which is a map that gives the optimal action for every state of our environment.

An MDP is more powerful than simple planning, because the policy tells you the optimal action even when something goes wrong along the way; simple planning just follows a fixed plan once the best strategy has been found.

What is a State

Think of a state as a summary of all the information needed to determine what happens next (the set of all possible states is called the state space). There are two types of state space:

  • World-State: Normally huge, and not available to the agent.

  • Agent-State: Smaller; it has all the variables needed to make decisions related to the agent's expected utility.

Markovian Property

Basically, you don't need past states to make an optimal decision; all you need is the current state $s$. This is because the current state can encode everything you need from the past in order to make a good decision. History still matters, but only through what is already captured in the current state.
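Written out, the Markov property says that the next state depends only on the current state and action, not on the whole history (the time index $t$ is notation introduced here just for this statement; it is not used elsewhere on this page):

$$P(s_{t+1} \mid s_t, a_t) = P(s_{t+1} \mid s_1, a_1, \dots, s_t, a_t)$$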

Environment

To simplify our universe, imagine a grid world where your agent's objective is to reach the green block while avoiding the red block. The available actions are $\uparrow, \downarrow, \leftarrow, \rightarrow$.

The problem is that we don't live in a perfectly deterministic world, so our actions can have different outcomes:

For instance, when we choose the up action we have an 80% probability of actually going up and a 10% probability each of going left or right. Similarly, if you choose to go left or right, you have an 80% chance of moving in that direction and a 10% chance each of going up or down.
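As a minimal sketch of this noisy behaviour, assuming the 80%/10%/10% split described above (the function name and the encoding of actions as row/column steps are illustrative choices, not from the original text):

```python
import random

# Actions encoded as (row_step, col_step); each intended action has two
# perpendicular "slip" directions (illustrative encoding, not from the book).
PERPENDICULAR = {
    (-1, 0): [(0, -1), (0, 1)],   # up    -> slip left or right
    (1, 0):  [(0, -1), (0, 1)],   # down  -> slip left or right
    (0, -1): [(-1, 0), (1, 0)],   # left  -> slip up or down
    (0, 1):  [(-1, 0), (1, 0)],   # right -> slip up or down
}

def noisy_move(intended):
    """Sample the move actually executed: 80% the intended action,
    10% each of the two perpendicular ones."""
    side_a, side_b = PERPENDICULAR[intended]
    return random.choices([intended, side_a, side_b], weights=[0.8, 0.1, 0.1])[0]
```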

Here are the most important components of an MDP (a small code sketch follows the list):

  • States: a set of possible states $S$.

  • Model: $T(s,a,s') = P(s' \mid s,a)$, the probability of going to state $s'$ when you take action $a$ in state $s$; this is also called the transition model.

  • Actions: $A(s)$, the things you can do in a particular state $s$.

  • Reward: $R(s)$, a scalar value that you get for being in a state.

  • Policy: $\Pi(s) \rightarrow a$, our goal; a map that tells you which action to take in every state.

  • Optimal policy: $\Pi^*(s) \rightarrow a$, a policy that maximizes your expected reward.
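As a rough sketch (not code from the book), those components could be written out for a small grid world like this, reusing the 80/10/10 noise from above as probabilities; the grid layout, the $-0.04$ living cost and the terminal rewards are illustrative assumptions:

```python
# A tiny grid-world MDP written out explicitly (layout and rewards are assumptions).
ROWS, COLS = 3, 4
WALL = (1, 1)
STATES = [(r, c) for r in range(ROWS) for c in range(COLS) if (r, c) != WALL]
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]           # up, down, left, right
GREEN, RED = (0, 3), (1, 3)                            # goal block and block to avoid

def R(s):
    """Reward for being in a state: +1 on the green block, -1 on the red block,
    and a small 'living cost' everywhere else."""
    return {GREEN: 1.0, RED: -1.0}.get(s, -0.04)

def T(s, a):
    """Transition model P(s' | s, a), returned as a dict {next_state: probability}."""
    if s in (GREEN, RED):
        return {s: 1.0}                                # terminal states absorb
    slips = [(a[1], a[0]), (-a[1], -a[0])]             # the two perpendicular moves
    dist = {}
    for move, p in [(a, 0.8), (slips[0], 0.1), (slips[1], 0.1)]:
        nxt = (s[0] + move[0], s[1] + move[1])
        if nxt not in STATES:                          # hitting a wall or the edge: stay put
            nxt = s
        dist[nxt] = dist.get(nxt, 0.0) + p
    return dist

# A policy is then just a map from state to action, e.g. "always go up":
policy = {s: (-1, 0) for s in STATES}
```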

In reinforcement learning we're going to learn an optimal policy by trial and error.

Solving MDPs with Dynamic Programming

As stated earlier, MDPs are tools for modelling decision problems, but how do we solve them? To solve MDPs we need dynamic programming, more specifically the Bellman equation.
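The Bellman equation itself is not written out on this page, but in the notation above one common form for the optimal value $V^*$ of a state is (the discount factor $\gamma \in [0,1)$ is an extra ingredient not introduced earlier):

$$V^*(s) = R(s) + \gamma \max_{a \in A(s)} \sum_{s'} T(s,a,s')\, V^*(s')$$

The optimal policy then simply picks, in each state, the action that achieves this maximum.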

But first, what is dynamic programming? Basically, it's a method that divides a problem into simpler sub-problems that are easier to solve; it's really just a divide-and-conquer strategy.

Dynamic programming is both a mathematical optimization method and a computer programming method, and both follow this divide-and-conquer mechanism. In mathematics it is often used as an optimization tool; in programming it is usually implemented with recursion and used for problems like finding the shortest path in a graph or generating sequences.
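Coming back to MDPs, here is a minimal value-iteration sketch that repeatedly applies the Bellman update above until the values stop changing. This is illustrative code, and the function names, discount factor and tolerance are assumptions:

```python
def value_iteration(states, actions, T, R, gamma=0.9, tol=1e-6):
    """Repeatedly apply V(s) <- R(s) + gamma * max_a sum_s' T(s,a,s') V(s')."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            v = R(s) + gamma * max(
                sum(p * V[s2] for s2, p in T(s, a).items()) for a in actions
            )
            delta = max(delta, abs(v - V[s]))
            V[s] = v
        if delta < tol:
            return V

def greedy_policy(states, actions, T, V):
    """Read a policy off the value function: in each state pick the action
    with the best expected next-state value."""
    return {
        s: max(actions, key=lambda a: sum(p * V[s2] for s2, p in T(s, a).items()))
        for s in states
    }
```

Using the grid-world sketch from the component list above, `value_iteration(STATES, ACTIONS, T, R)` produces a value for every state, and `greedy_policy` turns those values into the actions the agent should follow.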

You may also come across the term memoization, which is a technique used to improve the performance of these divide-and-conquer algorithms by remembering the sub-problems that have already been solved.
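As a small stand-in example of memoization (Fibonacci is not discussed above, it is just the classic illustration), Python's `functools.lru_cache` caches sub-problems automatically:

```python
from functools import lru_cache

@lru_cache(maxsize=None)       # remember every sub-problem already solved
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(80))  # fast, because each fib(k) is computed only once
```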