The course covers basic elements from different areas of Artificial Intelligence.

All reading references are from the textbook:
S. Russell and P. Norvig. "Artificial Intelligence: A Modern Approach". Third Edition. Prentice Hall, 2010.

Bold indicates topics that must be known well.

History and Philosophical aspects of AI

- Lecture 1 (Class 1)
  - Introduction
  - What is AI, History of AI, Application Examples [1.1, 1.3, 1.4; pp. 1-33]
  - Intelligent Agents [pp. 34-63]
    - Environments and their properties [2.1, 2.3]
    - Rationality [2.2]
    - Perception-action cycle and AI programs [2.4]
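
The perception-action cycle in 2.4 treats an agent program as a mapping from percepts to actions, run in a loop against an environment. A minimal sketch in Python of a simple reflex agent in a two-cell vacuum-world-style setting; the environment code here is a toy stand-in, not the book's:

# Minimal perception-action loop for a simple reflex agent.
# The two-cell vacuum world is a toy stand-in for an environment.
def reflex_vacuum_agent(percept):
    """Map the current percept directly to an action (no internal state)."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

def run(steps=4):
    world = {"A": "Dirty", "B": "Dirty"}               # environment state
    location = "A"
    for _ in range(steps):
        percept = (location, world[location])          # perceive
        action = reflex_vacuum_agent(percept)          # decide
        if action == "Suck":                           # act
            world[location] = "Clean"
        else:
            location = "B" if action == "Right" else "A"
        print(percept, "->", action)

run()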

Problem solving by searching [pp. 64-119]

- Lecture 2 (Class 2)
  - Problem Solving Agents [3.1, 3.3]
  - Examples [3.2]
  - Uninformed search: breadth-first search, uniform-cost search, depth-first search, depth-limited search, iterative deepening search, bidirectional search [3.4]
  - Informed (heuristic) search [3.5, 3.6]
    - Greedy best-first search
    - A* search (sketch below)
    - Memory-bounded A*
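
A minimal sketch of A* graph search on a small hypothetical weighted graph; the edge costs and heuristic values below are made up (and chosen to be admissible), not an example from the book:

import heapq

def astar(start, goal, neighbors, h):
    """A* graph search: expand nodes in order of f(n) = g(n) + h(n)."""
    frontier = [(h(start), 0, start, [start])]   # entries are (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for succ, cost in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(succ, float("inf")):   # found a cheaper path to succ
                best_g[succ] = g2
                heapq.heappush(frontier, (g2 + h(succ), g2, succ, path + [succ]))
    return None, float("inf")

# Hypothetical graph: edges with step costs, plus heuristic estimates of distance to the goal G.
graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 6)], "B": [("G", 3)], "G": []}
h_values = {"S": 4, "A": 3, "B": 2, "G": 0}

path, cost = astar("S", "G", lambda n: graph[n], lambda n: h_values[n])
print(path, cost)   # ['S', 'A', 'B', 'G'] with total cost 6

Uniform-cost search is the same loop with h(n) = 0, and greedy best-first search orders the frontier by h(n) alone.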

Uncertain knowledge and probabilistic reasoning

- Lecture 3 (Class 4)
  - Probability and probability calculus [13.1-13.5, pp. 480-499] (sketch below)
  - The Wumpus world probabilistic reasoning example [7.2, 7.3, 13.6; pp. 236-243, 499-502]
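
A minimal sketch of inference by enumeration over a full joint distribution, in the spirit of 13.3-13.5; the two-variable joint and its numbers are made up for illustration:

# Answer queries by summing entries of a (toy) full joint distribution.
joint = {
    ("cavity", "toothache"): 0.12,
    ("cavity", "no_toothache"): 0.08,
    ("no_cavity", "toothache"): 0.08,
    ("no_cavity", "no_toothache"): 0.72,
}

def p(event):
    """Probability of a partial assignment, by summing the matching joint entries."""
    return sum(prob for (cav, tooth), prob in joint.items()
               if event.get("Cavity", cav) == cav and event.get("Toothache", tooth) == tooth)

# Conditional probability via P(a | b) = P(a, b) / P(b).
posterior = p({"Cavity": "cavity", "Toothache": "toothache"}) / p({"Toothache": "toothache"})
print(posterior)   # 0.12 / 0.20 = 0.6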

- Lecture 4 (Class 5)
  - Bayesian Networks, Semantics, Hybrid variables [14.1-14.3, pp. 510-522]

- Lecture 5 (Class 7)
  - Exact inference by enumeration and by variable elimination [14.4]; complexity (sketch below)
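
A minimal sketch of exact inference by enumeration in a tiny hypothetical network Rain -> WetGrass <- Sprinkler; all CPT numbers are made up. Variable elimination computes the same sums but factors and caches them to avoid repeated work.

# Exact inference by enumeration in a tiny hypothetical Bayesian network.
P_rain = {True: 0.2, False: 0.8}                 # P(Rain)
P_sprinkler = {True: 0.3, False: 0.7}            # P(Sprinkler)
P_wet = {                                        # P(WetGrass=True | Rain, Sprinkler)
    (True, True): 0.99, (True, False): 0.9,
    (False, True): 0.8, (False, False): 0.05,
}

def posterior_rain(wet_observed=True):
    """P(Rain | WetGrass=wet_observed), summing out the hidden variable Sprinkler."""
    unnormalized = {}
    for rain in (True, False):
        total = 0.0
        for sprinkler in (True, False):          # enumerate the hidden variable
            p_w = P_wet[(rain, sprinkler)]
            total += P_rain[rain] * P_sprinkler[sprinkler] * (p_w if wet_observed else 1.0 - p_w)
        unnormalized[rain] = total
    z = sum(unnormalized.values())
    return {rain: v / z for rain, v in unnormalized.items()}

print(posterior_rain(True))   # P(Rain=True | wet) is about 0.46, well above the 0.2 prior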

  - Stochastic inference [14.5]
    - Prior sampling
    - Rejection sampling
    - Likelihood weighting (sketch below)
    - Markov Chain Monte Carlo
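
A minimal sketch of likelihood weighting on the same toy Rain/Sprinkler/WetGrass network as above: non-evidence variables are sampled from their priors and each sample is weighted by the probability of the observed evidence. The network and numbers are again made up.

import random

P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {True: 0.3, False: 0.7}
P_wet = {(True, True): 0.99, (True, False): 0.9,
         (False, True): 0.8, (False, False): 0.05}

def likelihood_weighting(n_samples=100_000, wet_observed=True, seed=0):
    """Approximate P(Rain | WetGrass=wet_observed) from weighted samples."""
    rng = random.Random(seed)
    weight_sum = {True: 0.0, False: 0.0}
    for _ in range(n_samples):
        rain = rng.random() < P_rain[True]             # sample non-evidence variables
        sprinkler = rng.random() < P_sprinkler[True]   # in topological order
        p_w = P_wet[(rain, sprinkler)]
        weight = p_w if wet_observed else 1.0 - p_w    # weight by the evidence likelihood
        weight_sum[rain] += weight
    z = weight_sum[True] + weight_sum[False]
    return {rain: w / z for rain, w in weight_sum.items()}

print(likelihood_weighting())   # approaches the exact posterior (about 0.46 for Rain=True)

Rejection sampling would instead sample every variable from the prior and discard samples that disagree with the evidence; MCMC (Gibbs sampling) resamples one non-evidence variable at a time conditioned on its Markov blanket.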

- Lecture 6 (Class 8)
  - Graphical models for sequential data
    - Markov processes [15.1]
    - Inference: filtering and prediction, smoothing, most likely explanation [15.2]
    - Hidden Markov models [15.3] (sketch below)
    - Kalman filters [15.4]
    - Dynamic Bayesian Networks and Particle Filtering [15.5]
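
A minimal sketch of filtering with the forward algorithm in a two-state HMM, in the style of the chapter's umbrella world; the transition and sensor numbers below are illustrative:

# Forward-algorithm filtering for a two-state HMM (umbrella-world style, toy numbers).
prior = {True: 0.5, False: 0.5}                  # P(Rain_0)
transition = {True: {True: 0.7, False: 0.3},     # P(Rain_t | Rain_{t-1})
              False: {True: 0.3, False: 0.7}}
sensor = {True: {True: 0.9, False: 0.1},         # P(Umbrella_t | Rain_t)
          False: {True: 0.2, False: 0.8}}

def forward(evidence):
    """Return P(Rain_t | umbrella_1:t) after each observation (filtering)."""
    belief = dict(prior)
    history = []
    for e in evidence:
        # Predict by summing over the previous state, then update with the sensor model.
        predicted = {x: sum(belief[x0] * transition[x0][x] for x0 in belief) for x in (True, False)}
        updated = {x: sensor[x][e] * predicted[x] for x in (True, False)}
        z = sum(updated.values())
        belief = {x: p / z for x, p in updated.items()}
        history.append(belief)
    return history

for t, b in enumerate(forward([True, True, False]), start=1):
    print(f"t={t}: P(Rain)={b[True]:.3f}")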

Machine learning

- Supervised Learning [18.2]
- Lecture 7 (Class 10)
  - Decision Trees [18.3]
  - k-Nearest Neighbor [18.8] (sketch below)
  - Linear Models [18.6]
  - Non-parametric regression [18.8]
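
A minimal sketch of k-nearest-neighbor classification by majority vote under Euclidean distance; the tiny 2-D training set is made up:

from collections import Counter
import math

def knn_classify(query, data, k=3):
    """Predict the majority label among the k training points closest to the query."""
    neighbors = sorted(data, key=lambda xy: math.dist(query, xy[0]))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Hypothetical 2-D training set with two classes.
train = [((1.0, 1.0), "A"), ((1.2, 0.8), "A"), ((0.9, 1.1), "A"),
         ((3.0, 3.2), "B"), ((3.1, 2.9), "B"), ((2.8, 3.0), "B")]

print(knn_classify((1.1, 1.0), train))   # -> "A"
print(knn_classify((3.0, 3.0), train))   # -> "B"

Averaging the k neighbors' numeric targets instead of voting gives a simple non-parametric regression in the spirit of 18.8.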

- Lecture 8 (Class 11)
  - Artificial Neural Networks [18.7] (sketch below)
  - Other issues: evaluation [18.4], learning theory [18.5], ensemble methods [18.10]
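
A minimal sketch of the perceptron learning rule for a single threshold unit, the basic building block of the networks in 18.7; the toy data (the AND function), learning rate, and epoch count are arbitrary choices:

# Perceptron learning rule for one threshold unit on a linearly separable toy set.
def train_perceptron(data, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in data:
            output = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            error = target - output        # 0 if correct, otherwise +1 or -1
            w[0] += lr * error * x[0]      # nudge the weights toward the target
            w[1] += lr * error * x[1]
            b += lr * error
    return w, b

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]   # logical AND
w, b = train_perceptron(data)
for x, target in data:
    prediction = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
    print(x, "->", prediction, "target", target)

A single unit cannot represent XOR; multilayer networks trained with backpropagation (also in 18.7) remove that limitation.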

- Lecture 9 (Class 13)
  - Learning parameters in Graphical Models [20.2]
    - Bayesian learning (small data sets) [20.1, 20.2, 20.2.4]
    - Maximum likelihood learning (large data sets) [20.1, 20.2.1, 20.2.2, 20.2.3]
    - Naive Bayes models
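
A minimal sketch of a naive Bayes classifier whose parameters are learned by counting (maximum likelihood, with a small Laplace term so unseen words do not give zero probability); the tiny spam/ham dataset is made up:

from collections import Counter, defaultdict
import math

train = [({"cheap", "pills", "buy"}, "spam"),
         ({"buy", "now", "cheap"}, "spam"),
         ({"meeting", "tomorrow", "agenda"}, "ham"),
         ({"lunch", "tomorrow"}, "ham")]

vocab = set().union(*(words for words, _ in train))
class_counts = Counter(label for _, label in train)
word_counts = defaultdict(Counter)
for words, label in train:
    word_counts[label].update(words)

def predict(words, alpha=1.0):
    """Pick the class maximizing log P(class) + sum over vocab of log P(word present/absent | class)."""
    scores = {}
    for label, n_docs in class_counts.items():
        score = math.log(n_docs / len(train))        # class prior estimated from counts
        for w in vocab:
            p_w = (word_counts[label][w] + alpha) / (n_docs + 2 * alpha)   # smoothed per-word estimate
            score += math.log(p_w if w in words else 1.0 - p_w)
        scores[label] = score
    return max(scores, key=scores.get)

print(predict({"buy", "cheap", "pills"}))      # -> "spam"
print(predict({"agenda", "for", "tomorrow"}))  # -> "ham"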

  - Unsupervised Learning
    - k-means algorithm [see Wikipedia] (sketch below)
    - Expectation Maximization algorithm [20.3.1]
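
A minimal sketch of k-means (Lloyd's algorithm) on a made-up 2-D dataset; EM for Gaussian mixtures generalizes the same assign/update alternation with soft assignments:

import random

def kmeans(points, k=2, iters=20, seed=0):
    """Alternate assigning points to the nearest centroid and recomputing centroids."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                           # assignment step
            i = min(range(k),
                    key=lambda c: (p[0] - centroids[c][0]) ** 2 + (p[1] - centroids[c][1]) ** 2)
            clusters[i].append(p)
        for i, cluster in enumerate(clusters):     # update step: move centroid to the cluster mean
            if cluster:
                centroids[i] = (sum(p[0] for p in cluster) / len(cluster),
                                sum(p[1] for p in cluster) / len(cluster))
    return centroids, clusters

points = [(1.0, 1.1), (0.9, 0.8), (1.2, 1.0), (5.0, 5.2), (5.1, 4.9), (4.8, 5.0)]
centroids, clusters = kmeans(points)
print(centroids)   # one centroid near (1, 1), the other near (5, 5)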

- Lecture 10 (Class 14)
  - Markov Decision Processes [17.1]
  - Value iteration algorithm [17.2.1, 17.2.2] (sketch below)
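
A minimal sketch of value iteration on a tiny made-up MDP (three states plus a terminal state); the transition model, rewards, and discount are invented for illustration:

# Value iteration on a tiny made-up MDP.
# transitions[state][action] = list of (probability, next_state, reward) triples.
transitions = {
    "s0": {"a": [(0.8, "s1", 0.0), (0.2, "s0", 0.0)],
           "b": [(1.0, "s2", 0.0)]},
    "s1": {"a": [(1.0, "end", 1.0)]},
    "s2": {"a": [(1.0, "end", 0.5)]},
    "end": {},                                   # terminal state
}

def value_iteration(gamma=0.9, eps=1e-6):
    """Apply the Bellman optimality update until the values stop changing."""
    V = {s: 0.0 for s in transitions}
    while True:
        delta = 0.0
        for s, actions in transitions.items():
            if not actions:
                continue
            best = max(sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                       for outcomes in actions.values())
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            return V

print(value_iteration())   # V(s0) is about 0.88: action "a" (via s1) beats the 0.45 of "b"

Policy extraction then picks, in each state, the action whose expected value achieves that maximum.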

  - Reinforcement learning [21.1]
    - Passive RL [21.2.1], Temporal-difference learning [21.2.3; Sutton & Barto, 6.1]
    - Active RL, Q-learning [21.3] (sketch below)
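
A minimal sketch of tabular Q-learning with an epsilon-greedy policy on a made-up chain environment (move right along states 0..3 to collect a reward of 1); all hyperparameters are arbitrary:

import random

N_STATES, ACTIONS = 4, ("left", "right")

def step(state, action):
    """Toy environment: move along the chain; reaching state 3 ends the episode with reward 1."""
    next_state = min(state + 1, N_STATES - 1) if action == "right" else max(state - 1, 0)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward, next_state == N_STATES - 1

def q_learning(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    def greedy(state):                       # greedy action with random tie-breaking
        best = max(Q[(state, a)] for a in ACTIONS)
        return rng.choice([a for a in ACTIONS if Q[(state, a)] == best])
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            action = rng.choice(ACTIONS) if rng.random() < epsilon else greedy(state)
            next_state, reward, done = step(state, action)
            # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a').
            target = reward + (0.0 if done else gamma * max(Q[(next_state, a)] for a in ACTIONS))
            Q[(state, action)] += alpha * (target - Q[(state, action)])
            state = next_state
    return Q

Q = q_learning()
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)})   # "right" in every state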

Games and Adversarial Search

- Minimax algorithm [5.2]
- Alpha-beta pruning [5.3] (sketch below)
- Cutoffs and evaluation functions [5.3]
- Expectimax and Expectiminimax [5.5]
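
A minimal sketch of minimax with alpha-beta pruning on a small depth-2 game tree given as nested lists; the leaf values are made up, and in practice the recursion would stop at a depth cutoff and apply an evaluation function instead of exact leaf values:

import math

def alphabeta(node, maximizing, alpha=-math.inf, beta=math.inf):
    """Minimax with alpha-beta pruning; leaves are numbers, internal nodes are lists of children."""
    if isinstance(node, (int, float)):       # leaf: its value is the (exact) evaluation
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:                # beta cutoff: MIN would never allow this branch
                break
        return value
    value = math.inf
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:                    # alpha cutoff: MAX already has something better
            break
    return value

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]   # MAX to move; each inner list is a MIN node
print(alphabeta(tree, maximizing=True))      # 3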

Knowledge representation and logic reasoning

- Propositional logic [7.1-7.4]
- First Order Logic [ch. 8]
- Inference in propositional logic [7.5, 7.6] (sketch below)
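
A minimal sketch of entailment checking by truth-table enumeration (model checking) for propositional logic; the knowledge base and queries are toy sentences written directly as Python predicates over a model:

import itertools

def tt_entails(kb, query, symbols):
    """KB entails query iff the query holds in every truth assignment that satisfies the KB."""
    for values in itertools.product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb(model) and not query(model):   # a model of the KB where the query fails
            return False
    return True

# Toy KB over symbols P, Q, R:  (P or Q) and (P -> R) and (Q -> R)
kb = lambda m: (m["P"] or m["Q"]) and ((not m["P"]) or m["R"]) and ((not m["Q"]) or m["R"])

print(tt_entails(kb, lambda m: m["R"], ["P", "Q", "R"]))   # True: R is entailed
print(tt_entails(kb, lambda m: m["P"], ["P", "Q", "R"]))   # False: P is not entailed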

Applications

- Natural Language Processing [22.1] (bigram sketch below)
- Spam classifier [22.2]
- Speech recognition [23.5]
- Machine Translation
  - Statistical machine translation [23.4]
  - Rule-based machine translation (grammars) [23.1-23.3.1]
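
A minimal sketch of an n-gram (here bigram) language model of the kind introduced in 22.1, with maximum-likelihood estimates over a tiny made-up corpus:

from collections import Counter, defaultdict

corpus = ["<s> the cat sat on the mat </s>",
          "<s> the dog sat on the rug </s>",
          "<s> the cat saw the dog </s>"]

bigram_counts = defaultdict(Counter)
context_counts = Counter()
for sentence in corpus:
    tokens = sentence.split()
    context_counts.update(tokens[:-1])               # every token except the last starts a bigram
    for w1, w2 in zip(tokens, tokens[1:]):
        bigram_counts[w1][w2] += 1

def p_bigram(w2, w1):
    """Maximum-likelihood estimate P(w2 | w1) = count(w1 w2) / count(w1)."""
    return bigram_counts[w1][w2] / context_counts[w1]

def sentence_prob(sentence):
    """Probability of a sentence as the product of its bigram probabilities."""
    tokens = sentence.split()
    prob = 1.0
    for w1, w2 in zip(tokens, tokens[1:]):
        prob *= p_bigram(w2, w1)
    return prob

print(p_bigram("cat", "the"))                        # 2 of the 6 "the" contexts are followed by "cat"
print(sentence_prob("<s> the dog sat on the mat </s>"))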