Artificial Intelligence

In computer science, an ideal "intelligent" machine is a flexible rational agent that perceives its environment and takes actions that maximize its chance of success at some goal.

What rational means

Here we will try to explain what it means to act rationally:

  • Maximally achieve pre-defined goals (expected utility)

  • Goals are expressed in terms of utility (a dimensionless scalar value that represents "happiness")

  • Being rational means maximizing your expected utility
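As a minimal sketch of "maximize expected utility" (the actions, probabilities, and utility values below are invented purely for illustration):

```python
# Expected utility: sum over outcomes of P(outcome) * U(outcome).
# Each action maps to (probability, utility) pairs -- made-up numbers.
actions = {
    "take_umbrella": [(0.3, 70), (0.7, 80)],   # slightly worse on sunny days
    "no_umbrella":   [(0.3, 0),  (0.7, 100)],  # rain ruins the day without one
}

def expected_utility(outcomes):
    """Probability-weighted average of the utilities of each outcome."""
    return sum(p * u for p, u in outcomes)

# A rational agent picks the action with the highest expected utility.
best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best, expected_utility(actions[best]))  # -> take_umbrella 77.0
```

Note that the agent never inspects *how* the utilities were produced, it only compares the resulting numbers.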

Whether a decision is rational (for us, intelligent) depends only on the quality of the decision (its utility), not on the process that led to it. For instance:

  • You gave the right answer because you brute-forced all possible outcomes

  • You gave the right answer because you used a fancy state-of-the-art method

Basically, you don't care about the method, just the decision.

As the AI field evolves, what was once considered intelligent is no longer considered so after someone discovers how to do it.

A system is rational if it does the right thing with the information available.

Why we only care about being rational

Basically, we don't understand how our brain works, even with the current advances in neuroscience and in computational power. We don't know how the brain works and makes decisions. One interesting thing we do know about the brain is that making good decisions requires memory and simulation.

Central AI problems:

  1. Reasoning

  2. Knowledge

  3. Planning: Make predictions about the outcomes of possible actions, and choose the one that best achieves the goal

  4. Learning

  5. Perception

  6. Natural Language Processing (NLP)

Agents

It's a system (e.g. a software program) that observes the world through sensors and acts upon an environment using actuators. It directs its activity towards achieving goals. Intelligent agents may also learn or use knowledge to achieve their goals.

Types of agents

There are many types of agents, but here we are going to separate them into two classes:

  • Reflex Agents: Don't care about the future effects of their actions (they just use an if-table)

  • Planning Agents: Simulate the consequences of actions before committing to them, using a model of the world.
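A hypothetical sketch of the two agent types (the percepts, rules, and toy world model are invented for illustration):

```python
# Reflex agent: a fixed condition -> action table, no look-ahead.
reflex_table = {"hungry": "eat", "tired": "sleep", "ok": "work"}

def reflex_agent(percept):
    return reflex_table[percept]

# Planning agent: simulates each action with a model of the world
# and commits to the one whose predicted outcome scores best.
def planning_agent(state, actions, model, utility):
    return max(actions, key=lambda a: utility(model(state, a)))

# Toy world: a position on a number line; utility = closeness to 10.
model = lambda s, a: s + 1 if a == "right" else s - 1
utility = lambda s: -abs(10 - s)

print(reflex_agent("hungry"))                                # -> eat
print(planning_agent(3, ["left", "right"], model, utility))  # -> right
```

The reflex agent reacts directly to the percept, while the planning agent runs each candidate action through the model before choosing.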

Both reflex and planning agents can be rational; again, we only care about the resulting action. If the action maximizes its expected utility, then it is rational.

We need to identify which type of agent is needed to achieve rational behavior.

A reflex or planning agent can be sub-optimal, but planning is normally a good idea.

In this book we are going to see many tools used to address planning. For instance, search is a kind of tool for planning.

Search problem

A search problem finds a solution, which is a sequence of actions (a plan) that transforms the start state into the goal state.

Types of search:

  • Uninformed Search: Keep searching everywhere until a solution is found

  • Informed Search: Has some "information" indicating whether or not we are close to the solution (a heuristic)

A search problem consists of the following things:

  1. A state space: Contains the states needed to do planning

  2. A successor function: For any state x, returns the set of states reachable from x with one action

  3. A start state and goal test: Gives the initial point and a way to check when planning is over
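The three components above can be bundled into a small class (a sketch; the class and method names are our own):

```python
class SearchProblem:
    """Bundles a start state, a goal test, and a successor function."""

    def __init__(self, start, goal, successors):
        self.start = start             # start state
        self.goal = goal               # state used by the goal test
        self._successors = successors  # dict: state -> [(next_state, cost), ...]

    def successor(self, state):
        """States reachable from `state` with one action, plus their costs."""
        return self._successors.get(state, [])

    def is_goal(self, state):
        """Goal test: checks whether planning is over."""
        return state == self.goal
```

Any algorithm that only talks to `successor` and `is_goal` can solve the problem without knowing anything else about the domain.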

Example: our objective is to travel from Arad to Bucharest.

Above you have the world state. We don't need that much detail: we only need the cities, how they connect, and the distances.

On this search problem we detect the following properties:

  • State space: The cities (the only variable pertinent to this search problem)

  • Start state: Arad

  • Successor function: Go to an adjacent city, with the distance as the cost.
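In code, the successor function is just an adjacency lookup. The fragment below uses distances from the standard version of this Romania example (only a few cities shown; check them against your map):

```python
# A fragment of the Romania map as an adjacency dict (distances in km,
# taken from the usual version of this example).
romania = {
    "Arad":           [("Zerind", 75), ("Sibiu", 140), ("Timisoara", 118)],
    "Sibiu":          [("Arad", 140), ("Oradea", 151),
                       ("Fagaras", 99), ("Rimnicu Vilcea", 80)],
    "Fagaras":        [("Sibiu", 99), ("Bucharest", 211)],
    "Rimnicu Vilcea": [("Sibiu", 80), ("Pitesti", 97)],
    "Pitesti":        [("Rimnicu Vilcea", 97), ("Bucharest", 101)],
}

def successors(city):
    """Successor function: adjacent cities with the travel cost."""
    return romania.get(city, [])

print(successors("Arad"))  # the three neighbours reachable in one action
```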

Consider the map above as a graph: it contains nodes that don't repeat, and it shows how they connect along with the costs of those connections.

One way to do planning is to convert the state space graph into a search tree, then use algorithms that search for a goal state in the tree. Here we observe the following:

  • The start state will be the tree's root node

  • A node's children represent the possible outcomes for each state.

The problem is that both search trees and state space graphs can be too big to fit inside the computer; for instance, the following state space graph has an infinite search tree.

What to do in those cases? Basically, you don't keep all the possible solutions of the tree or graph in memory; you navigate the tree to a finite depth.

For instance, look at the state graph of Romania. We start at Arad (the start state):

  • Arad has 3 possible child nodes: Sibiu, Timisoara and Zerind

  • We choose the leftmost child node, Sibiu

  • Then we choose the leftmost child node, Arad, which is bad, so we try Fagaras, then Oradea, etc.

The problem is that if one of the tree's branches is infinite, or is too big and contains no solution, we will keep looking down that branch until the end.

At the point where we choose Sibiu in the first step, we need to keep the other possible nodes (Timisoara and Zerind); this set is called the fringe.
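The walk just described, expanding the leftmost child while keeping the unexplored siblings on a fringe, is depth-first tree search. A depth-limited sketch (a toy graph, not the Romania map, is used for the demo):

```python
def depth_limited_search(start, successors, is_goal, limit):
    """DFS with an explicit fringe (a stack) and a depth cut-off,
    so an infinite branch cannot trap us forever."""
    fringe = [(start, [start])]           # (state, path so far)
    while fringe:
        state, path = fringe.pop()        # take the most recent node (LIFO)
        if is_goal(state):
            return path
        if len(path) <= limit:            # stop expanding past the depth limit
            # reversed() so the leftmost child is popped (explored) first
            for child, _cost in reversed(successors(state)):
                fringe.append((child, path + [child]))
    return None                           # no solution within the limit

# Tiny demo graph: A branches to B and C; only C leads to the goal G.
graph = {"A": [("B", 1), ("C", 1)], "B": [("D", 1)], "C": [("G", 1)]}
plan = depth_limited_search("A", lambda s: graph.get(s, []),
                            lambda s: s == "G", limit=3)
print(plan)  # -> ['A', 'C', 'G']
```

While `B` is being explored, `C` sits on the fringe; once the `B` branch is exhausted, the search backs up and continues from there.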

Important ideas to keep in mind in tree search:

  • First of all, tree search is a mechanism used for planning

  • Planning means that you have a model of the world; if the model is bad, your solution will also be bad

  • Fringe: a cache of the other possible nodes still to be explored

  • How to explore the current branch

Branching Factor

The branching factor is the number of children of each node in a tree.

Old problems like tic-tac-toe and other simple games can be solved with a search tree or some sort of optimized tree algorithm. But games like chess or Go have a huge branching factor, so you could not process them in real time. In the animation below we compare the chess and Go branching factors.
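To see why the branching factor matters, note that a tree with branching factor b has roughly b^d nodes at depth d. Using the commonly quoted average branching factors (about 35 for chess and about 250 for Go; approximate values):

```python
# Nodes at depth d of a tree with branching factor b grow as b**d.
# 35 and 250 are the commonly quoted *average* branching factors
# for chess and Go respectively (rough approximations).
for b, game in [(35, "chess"), (250, "go")]:
    print(game, [b ** d for d in range(1, 5)])
```

After only four moves, chess already has millions of positions to consider and Go has billions, which is why brute-force tree search breaks down for these games.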

Next Chapter

In the next chapter we will explore trees in more depth.