Week 1: Uninformed Search
Professor Devin Balkcom
devin@cs.dartmouth.edu
office: Sudikoff 206
The goal of this course is to give a perspective on one of the most misunderstood areas of computer science.
You'll write a lot of code, too.
We will look at several standard problems and algorithms.
(How A.I. is viewed.) Over-emphasis on:
There are many good ideas here, but beware of snake oil.
States and nodes are often created 'on the fly'.
Can be tricky. One (perhaps inefficient) way: internally generate some superset of the possible actions from a state, and apply each action to get a hypothetical state. An `is_legal` function can then be run on the hypothetical state; if the state is legal, `get_actions` adds the action to the list it returns.
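A sketch in Python of this generate-and-filter idea, using a hypothetical grid world (the walls, bounds, and move set here are illustrative assumptions, not part of the course's API):

```python
# Hypothetical grid-world example: a state is an (x, y) cell, and the
# candidate actions are the four compass moves. We generate the superset
# of moves, apply each one, and keep only those whose resulting state
# passes the is_legal test.

WALLS = {(1, 1), (2, 1)}     # illustrative obstacle set
WIDTH, HEIGHT = 4, 3         # illustrative grid bounds

MOVES = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0)}

def is_legal(state):
    """A state is legal if it is inside the grid and not a wall."""
    x, y = state
    return 0 <= x < WIDTH and 0 <= y < HEIGHT and state not in WALLS

def get_actions(state):
    """Return the legal actions from state by filtering a superset."""
    x, y = state
    actions = []
    for name, (dx, dy) in MOVES.items():
        hypothetical = (x + dx, y + dy)   # apply the candidate action
        if is_legal(hypothetical):
            actions.append(name)
    return actions
```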
In order to reconstruct the path to the goal after the goal has been found, some book-keeping has to be done as the search progresses.
A node data structure is an object that contains the state and a backpointer to the parent node.
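Concretely, such a node might be sketched in Python like this (the class and field names are my own):

```python
class SearchNode:
    """Wraps a state with the book-keeping needed to backchain later."""
    def __init__(self, state, parent=None):
        self.state = state      # the problem state this node packs
        self.parent = parent    # backpointer to the node we came from
```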
frontier = new queue
pack start_state into a node
add node to frontier
while frontier is not empty:
    get current_node from the frontier
    get current_state from current_node
    if current_state is the goal:
        backchain from current_node and return solution
    for each child of current_state:
        pack child state into a node, with backpointer to current_node
        add the node to the frontier
When done, backchain.
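Backchaining is just a walk up the backpointers, reversed at the end. A minimal sketch, assuming each node carries `state` and `parent` attributes:

```python
from types import SimpleNamespace  # stand-in node: just .state and .parent

def backchain(node):
    """Follow backpointers from the goal node back to the start."""
    path = []
    while node is not None:
        path.append(node.state)
        node = node.parent
    path.reverse()              # we collected goal-to-start; flip it
    return path

# Tiny demo chain: "A" -> "B" -> "C"
start = SimpleNamespace(state="A", parent=None)
middle = SimpleNamespace(state="B", parent=start)
goal = SimpleNamespace(state="C", parent=middle)
```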
frontier = new queue
pack start_state into a node
add node to frontier
explored = new set
add start_state to explored
while frontier is not empty:
    get current_node from the frontier
    get current_state from current_node
    if current_state is the goal:
        backchain from current_node and return solution
    for each child of current_state:
        if child not in explored:
            add child to explored
            pack child state into a node, with backpointer to current_node
            add the node to the frontier
return failure
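Here is the memoizing BFS pseudocode translated into runnable Python. The node class, the graph-as-dict successor function, and all names are my own illustrative choices:

```python
from collections import deque

class Node:
    def __init__(self, state, parent=None):
        self.state = state
        self.parent = parent    # backpointer for backchaining

def backchain(node):
    path = []
    while node is not None:
        path.append(node.state)
        node = node.parent
    return list(reversed(path))

def bfs(start_state, goal_test, get_successors):
    frontier = deque([Node(start_state)])
    explored = {start_state}             # memoize: states ever enqueued
    while frontier:
        current = frontier.popleft()     # queue: FIFO order
        if goal_test(current.state):
            return backchain(current)
        for child in get_successors(current.state):
            if child not in explored:
                explored.add(child)
                frontier.append(Node(child, parent=current))
    return None                          # failure

# Illustrative graph as an adjacency dict
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
```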
What is the state for each of these problems? How many dimensions (variables) would you use to represent the state of each problem?

(Figure from Wikipedia.)

(Figure from Wikipedia.)
depth_first_search(start_state, goal_test_fn, transition_fn, get_legal_actions_fn):
    frontier = new stack
    pack start_state into a node
    add node to frontier
    explored = new set
    add start_state to explored
    while frontier is not empty:
        get current_node from the frontier
        get current_state from current_node
        if current_state is the goal:
            backchain from current_node and return solution
        for each child of current_state:
            if child not in explored:
                add child to explored
                pack child state into a node, with backpointer to current_node
                add the node to the frontier
    return failure
Finite state space of size $n$: both BFS and DFS terminate, since at most $n$ states are ever explored.
Infinite state space with a reachable goal: BFS still finds the goal, but DFS can wander down an infinite branch and never return.
In finite state spaces, BFS gives shortest paths (fewest actions). In infinite state spaces, DFS fails.
We have seen memoizing BFS and DFS.
Can we avoid building the complete explored set for DFS?
Path-checking: keep track of only the states on the current path, and refuse to revisit them, so the search cannot loop.
This does not eliminate redundant paths, though. A graph can contain a huge number of loop-free paths: $O(b^m)$ of them. Ouch.
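A recursive sketch of path-checking DFS in Python. The successor-function interface and the example graph are illustrative assumptions:

```python
def dfs_path_checking(state, goal_test, get_successors, path=None):
    """DFS that refuses to revisit only states on the *current* path."""
    if path is None:
        path = [state]
    if goal_test(state):
        return list(path)
    on_path = set(path)             # states along this path only
    for child in get_successors(state):
        if child not in on_path:    # avoid loops, but allow revisits
            path.append(child)      # on other branches of the search
            result = dfs_path_checking(child, goal_test, get_successors, path)
            if result is not None:
                return result
            path.pop()              # backtrack
    return None                     # failure

# Illustrative graph with a cycle between A and B
cyclic_graph = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A"], "D": []}
```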
Memoizing and path-checking DFS fail in infinite spaces.
Simple solution: bound the depth of the search at some limit $l$ (depth-limited search); increasing $l$ gradually gives iterative-deepening search (IDS).
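Depth-limited DFS and iterative deepening can be sketched as follows (path-checking inside each pass; all names and the example graph are my own):

```python
def depth_limited_dfs(state, goal_test, get_successors, limit, path=None):
    """Path-checking DFS that gives up below depth `limit`."""
    if path is None:
        path = [state]
    if goal_test(state):
        return list(path)
    if limit == 0:
        return None                 # depth bound reached
    for child in get_successors(state):
        if child not in path:       # path-check: no loops on this path
            path.append(child)
            result = depth_limited_dfs(child, goal_test, get_successors,
                                       limit - 1, path)
            if result is not None:
                return result
            path.pop()              # backtrack
    return None

def iterative_deepening(start, goal_test, get_successors, max_depth=50):
    """Run depth-limited DFS with limits 0, 1, 2, ... until the goal is found."""
    for limit in range(max_depth + 1):
        result = depth_limited_dfs(start, goal_test, get_successors, limit)
        if result is not None:
            return result
    return None

# Illustrative chain graph
chain_graph = {"A": ["B"], "B": ["C"], "C": []}
```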
When would you use memoizing DFS?
When would you use path-checking DFS?
IDS: short paths and good memory characteristics, but shallow nodes are re-expanded on every iteration.
$n$: state space size, $b$: branching factor,
$d$: depth of goal, $m$: length of longest path
What are the time and space complexities of BFS, DFS-mem, DFS-pc, and IDS, in terms of $b$, $d$, $m$?
| Alg. | Optimal? | Time | Space |
|---|---|---|---|
| BFS | yes | $O(\min(n, b^d))$ | $O(\min(n, b^d))$ |
| DFS-mem | no | $O(\min(n, b^m))$ | $O(\min(n, b^m))$ |
| DFS-pc | no | $O(\min(n^m, m b^m))$ | $O(m)$ |
| IDS | yes | $O(\min(n^d, d b^d))$ | $O(d)$ |
Sometimes you can search backwards:
But why?
This can matter a lot in continuous-state environments. (Refrigerator-mover's problem.)
In a 2-D grid, the number of states within distance $d$ of the start grows as $d^2$. Two half-depth searches that meet in the middle explore about
$(d/2)^2 + (d/2)^2 = d^2 / 2 < d^2$
states.
"Twice" as fast on a graph in "2" dimensions.