Machine Learning for Artificial Intelligence (CIS 479)

Self-improvement of machine performance

Negative features of human learning

  1. slow
  2. inefficient
  3. expensive
  4. no copy process
  5. no visible, inspectable internal representation
  6. learning strategy is a function of knowledge

Learning Strategy

  1. rote learning
  2. learning by instruction
  3. learning by deduction {general rule}
  4. learning by analogy
  5. learning by induction {specific → rule}

Two general approaches

  1. Numerical {fiddle with coefficients}
  2. Structural approaches {explore links}

Samuel’s Checkers Player

Learning Module

  1. Rote learning - store the move/value for a specific board configuration
    1. effective search depth is doubled
    2. gives the optimal direction when several alternatives exist
    3. allows a means of forgetting
  2. Learning by Generalization
  3. Signature Table
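Rote learning in Samuel's sense can be sketched as memoizing backed-up minimax values: when a stored position is reached again during lookahead, the saved value stands in for a full search, in effect doubling search depth. The sketch below uses an invented toy game (positions are tuples of 0/1 "pieces", moves flip one piece, and the static evaluation just sums the pieces), not Samuel's actual checkers engine:

```python
# Toy sketch of Samuel-style rote learning: store backed-up minimax
# values so a remembered position substitutes for a full search.

rote_table = {}  # (position, side to move) -> (depth searched, backed-up value)

def evaluate(pos):
    return sum(pos)  # toy static evaluation

def moves(pos):
    # toy move generator: flip one "piece" at a time
    return [pos[:i] + (1 - pos[i],) + pos[i + 1:] for i in range(len(pos))]

def search(pos, depth, maximizing=True):
    hit = rote_table.get((pos, maximizing))
    if hit is not None and hit[0] >= depth:
        return hit[1]                  # rote learning: reuse the stored value
    if depth == 0:
        return evaluate(pos)
    values = [search(m, depth - 1, not maximizing) for m in moves(pos)]
    value = max(values) if maximizing else min(values)
    rote_table[(pos, maximizing)] = (depth, value)  # record backed-up value
    return value
```

Forgetting could be added by evicting table entries that are rarely reused.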

Quinlan's ID3

Induces classification rules (decision trees) from examples
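At each node ID3 picks the attribute with the greatest information gain (expected reduction in entropy). A minimal sketch of that selection step, on an invented toy dataset:

```python
import math
from collections import Counter

def entropy(labels):
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def information_gain(rows, labels, attr):
    # expected entropy reduction from splitting on `attr`
    base = entropy(labels)
    remainder = 0.0
    for value in set(row[attr] for row in rows):
        subset = [lab for row, lab in zip(rows, labels) if row[attr] == value]
        remainder += len(subset) / len(labels) * entropy(subset)
    return base - remainder

# toy data (invented): two attributes, binary class label
rows = [{"outlook": "sunny", "windy": False},
        {"outlook": "sunny", "windy": True},
        {"outlook": "rain",  "windy": False},
        {"outlook": "rain",  "windy": True}]
labels = ["yes", "no", "yes", "no"]

best = max(["outlook", "windy"], key=lambda a: information_gain(rows, labels, a))
```

Here `windy` separates the labels perfectly (gain 1 bit) while `outlook` separates them not at all (gain 0), so ID3 would test `windy` first.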

Lenat's AM/Eurisko

AM

  1. Select a concept to evaluate and generate examples of it.
  2. Check the examples for regularity; if regularity is found:
    1. update the interestingness factor for that concept
    2. create new concepts
    3. create new connectives
  3. Propagate knowledge gains to other concepts in the system.
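AM's control loop can be sketched as an agenda ordered by interestingness: pick the most interesting concept, generate examples of it, and raise or lower its worth depending on whether the examples show regularity. Everything below (the number-theory concepts, the "evenly spaced" regularity test, the worth adjustments) is an invented toy, not Lenat's actual heuristics:

```python
# Toy AM-style loop: concepts carry an interestingness ("worth") score;
# each step examines the currently most interesting concept.

concepts = {
    "even":   {"worth": 200, "test": lambda n: n % 2 == 0},
    "square": {"worth": 100, "test": lambda n: int(n ** 0.5) ** 2 == n},
}

def step():
    # 1. select the concept with the highest interestingness
    name = max(concepts, key=lambda c: concepts[c]["worth"])
    concept = concepts[name]
    # 2. generate examples of it
    examples = [n for n in range(1, 50) if concept["test"](n)]
    # 3. check examples for regularity; update interestingness accordingly
    regular = len(examples) > 1 and all(
        b - a == examples[1] - examples[0]
        for a, b in zip(examples, examples[1:]))
    concept["worth"] += 50 if regular else -50
    return name, examples

name, examples = step()
```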

Induction Heuristics

Goal: a procedure that learns the concept of a square from examples and near misses

  1. require-link (model has the link and the near miss does not)
  2. forbid-link (near miss has the link and the model does not)
  3. climb-tree (is-a inheritance: look for a common ancestor)
  4. enlarge-set (create a new class)
  5. drop-link (drop an incorrect model link)
  6. close-interval (extend a numeric range to cover the value)

Specialize [make model more specific]

  1. match the evolving model to the example to find corresponding parts
  2. determine whether there is a single important difference between the model and the near miss
  3. if the model has a link the near miss lacks, use require-link
  4. if the near miss has a link not in the model, use forbid-link
  5. otherwise, ignore the example

Generalize [make the model more permissive]

  1. Match the evolving model to the example to find corresponding parts
  2. For each difference, determine its type and apply the matching heuristic (climb-tree, enlarge-set, drop-link, or close-interval)

Learning Procedure [induce]

  1. let the description of the first example be the initial description (the first example cannot be a non-example)
  2. for each subsequent example, specialize on near misses and generalize on positive examples
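The induce loop over specialize and generalize can be sketched with descriptions represented as sets of (part, relation, part) link triples: require-link and forbid-link handle single-difference near misses, and drop-link handles positive examples. The set representation and the arch-like example data are invented for illustration:

```python
# Sketch of the induce / specialize / generalize loop over link triples.

def specialize(model, near_miss_links):
    missing = model["links"] - near_miss_links
    extra = near_miss_links - model["links"]
    if len(missing) == 1:            # require-link: model has it, near miss lacks it
        model["required"] |= missing
    elif len(extra) == 1:            # forbid-link: near miss has it, model lacks it
        model["forbidden"] |= extra
    # otherwise: no single important difference, so ignore the example

def generalize(model, example_links):
    # drop-link: drop model links the positive example lacks (unless required)
    model["links"] -= (model["links"] - example_links) - model["required"]

def induce(examples):
    links, positive = examples[0]
    assert positive, "first example cannot be a non-example"
    model = {"links": set(links), "required": set(), "forbidden": set()}
    for links, positive in examples[1:]:
        if positive:
            generalize(model, set(links))
        else:
            specialize(model, set(links))
    return model

arch = {("a", "supports", "top"), ("b", "supports", "top"),
        ("a", "left-of", "b")}
model = induce([
    (arch, True),
    (arch - {("a", "supports", "top")}, False),  # near miss -> require-link
    (arch | {("a", "touches", "b")}, False),     # near miss -> forbid-link
    (arch - {("a", "left-of", "b")}, True),      # positive   -> drop-link
])
```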

Felicity Conditions

  1. wait and see
  2. no-altering principle
  3. learning in small steps

Learning by recording cases

  1. Consistency heuristic: attribute to the unknown the properties of the most similar previously observed object
  2. k-d tree (k-dimensional tree), nearest neighbor in feature space
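Learning by recording cases plus the consistency heuristic amounts to a nearest-neighbor lookup in feature space; a k-d tree just makes the same query faster. A brute-force sketch, with invented data:

```python
import math

# Recorded cases: (feature vector, label). The data are invented.
cases = [((2.0, 3.0), "red"), ((8.0, 1.0), "green"), ((5.0, 9.0), "blue")]

def classify(features):
    # Consistency heuristic: give the unknown the label of the nearest
    # previously recorded case in feature space.
    nearest_features, label = min(cases,
                                  key=lambda c: math.dist(c[0], features))
    return label
```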

A k-d tree is a decision tree.

It is a semantic tree in which

  1. each node is connected to a set of possible answers
  2. each nonleaf node is connected to a test that splits its set of possible answers into subsets
  3. each branch carries a particular test result's subset of possible answers to another node

Building the Decision Tree

To divide the cases into sets:

  1. If only one case remains, stop.
  2. If this is the first division, use the vertical dimension for comparison; otherwise pick an axis different from the one used at the next higher level.
  3. Construct the axis-of-comparison threshold:
    1. Find the average position of the two middle objects along the axis.
    2. Use that average as the threshold.
    3. Construct a decision-tree test that compares an unknown's position along the axis against the threshold.
    4. Note the positions of the two middle objects along the axis; call these values the upper and lower boundaries.
  4. Divide the set of objects according to their positions relative to the threshold.
  5. Recursively divide the objects in each subset.
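The construction steps above can be sketched directly: alternate the comparison axis by depth, split at the average of the two middle objects, and recurse on each half. A minimal sketch on 2-D points (the data are invented, and the first "vertical" division is taken here as axis 0):

```python
def build_kd(points, depth=0):
    # one case left -> leaf
    if len(points) <= 1:
        return {"leaf": points}
    # alternate the comparison axis by level (axis 0 first)
    axis = depth % 2
    pts = sorted(points, key=lambda p: p[axis])
    # threshold = average position of the two middle objects on the axis;
    # their positions are the lower and upper boundaries
    mid = len(pts) // 2
    lower, upper = pts[mid - 1][axis], pts[mid][axis]
    threshold = (lower + upper) / 2
    # divide by position relative to the threshold, then recurse
    left = [p for p in pts if p[axis] <= threshold]
    right = [p for p in pts if p[axis] > threshold]
    return {"axis": axis, "threshold": threshold,
            "lower": lower, "upper": upper,
            "left": build_kd(left, depth + 1),
            "right": build_kd(right, depth + 1)}

tree = build_kd([(2, 3), (5, 4), (9, 6), (4, 7)])
```

The root splits on x at 4.5 (average of the middle objects' x-positions 4 and 5); the next level splits on y, as the alternation rule requires.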