Syllabus
Lecture 1
- introduction [B2 sc2.1]
- linear regression and linear models [B1 sc3.1; B1 sc1.1-1.4] (In R: ?lm; sketch below)
- gradient descent, Newton-Raphson (batch and sequential) (In R: ?optim)
- least squares method [B6 sc5.1-5.10]
- k-nearest neighbor [B2 sc1-2.4; B3 sc3.1.3; B6 sc5.1-5.10]
- curse of dimensionality [B1 sc1.4]
- regularized least squares (aka shrinkage or ridge regression) [B1 sc3.1.4]
- locally weighted linear regression [B2 sc6.1.1]
- model selection [B1 sc1.3, sc3.1; B2 sc7.1-7.3, sc7.10-7.11]
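
For the ?lm and ?optim pointers above, a minimal sketch in R (the mtcars data and the chosen predictors are illustrative assumptions, not course material):

  ## Ordinary least-squares fit of a linear model
  fit <- lm(mpg ~ wt + hp, data = mtcars)
  summary(fit)   # coefficients, standard errors, R^2

  ## The same fit by numerically minimizing the residual sum of squares
  rss <- function(b) sum((mtcars$mpg - b[1] - b[2] * mtcars$wt - b[3] * mtcars$hp)^2)
  optim(c(0, 0, 0), rss)$par   # should land close to coef(fit)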
Lecture 2
- probability theory [B2 sc1.2]
- probability interpretation [B1 sc1.1-1.4, sc3.1; B2 sc7.1-7.3, sc7.10-7.11]
- maximum likelihood approach [B1 sc1.2.5] (sketch below)
- Bayesian approach and application in linear regression [B1 sc1.2.6, sc2.3, sc3.3, ex. 3.8]
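
A hedged illustration of the maximum likelihood approach (the simulated Gaussian sample is an assumption made only for this sketch):

  set.seed(1)
  x <- rnorm(100, mean = 5, sd = 2)   # synthetic data
  ## negative log-likelihood of a Gaussian; sd kept positive via exp()
  nll <- function(p) -sum(dnorm(x, mean = p[1], sd = exp(p[2]), log = TRUE))
  p <- optim(c(0, 0), nll)$par
  c(mean = p[1], sd = exp(p[2]))      # close to the true values (5, 2)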
Lecture 3
- linear models for classification
- logistic regression [B1 sc2.1]
- multinomial (logistic) regression [B1 sc2.2]
- generalized linear models [B1 sc2.4] (In R: ?glm; sketch below)
- decision theory [B1 sc1.5]
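
For the ?glm pointer, a minimal logistic-regression sketch (using the binary response am in mtcars, an illustrative choice):

  fit <- glm(am ~ wt + hp, data = mtcars, family = binomial)
  summary(fit)                            # coefficients on the log-odds scale
  head(predict(fit, type = "response"))   # fitted class probabilities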
Lecture 4
- neural networks
- perceptron algorithm [B1 sc5.1]
- multi-layer perceptrons [B1 sc5.2-5.3, sc5.5; B2 ch11] (in R: library(nnet); ?nnet; sketch below)
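
For the library(nnet) pointer, a minimal sketch (iris and the hidden-layer size are illustrative assumptions):

  library(nnet)
  set.seed(1)
  fit <- nnet(Species ~ ., data = iris, size = 4, maxit = 200, trace = FALSE)
  table(predicted = predict(fit, type = "class"), true = iris$Species)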
Lecture 5
- generative algorithms
- Gaussian discriminant analysis [B1 sc4.2]
- naive Bayes (in R: library(e1071); ?naiveBayes; sketch below)
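
For the library(e1071) pointer, a minimal naive Bayes sketch (iris is an illustrative choice):

  library(e1071)
  fit <- naiveBayes(Species ~ ., data = iris)
  predict(fit, head(iris))   # predicted classes for the first rows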
Lecture 6
- linear methods for classification [B2 ch4]
- linear regression of indicator functions via least squares
- logistic regression (maximum conditional likelihood)
- Gaussian discriminant and linear discriminant analysis (in R: library(MASS); ?lda, ?plot.lda; sketch below)
- perceptron
- optimal separating hyperplanes (in R: library(e1071); ?svm)
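
For the library(MASS) pointer, a minimal LDA sketch (again on iris, an illustrative choice):

  library(MASS)
  fit <- lda(Species ~ ., data = iris)
  plot(fit)                                 # data in discriminant coordinates
  table(predict(fit)$class, iris$Species)   # training confusion matrix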
Lecture 7
- kernels and support vector machines [B2 sc2.8.2, ch6, sc12.1-12.3.4; B1 sc2.5, sc7-7.1.5] (sketch below)
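
A minimal kernelized SVM sketch via the ?svm pointer of Lecture 6 (the radial kernel and default cost are illustrative settings):

  library(e1071)
  fit <- svm(Species ~ ., data = iris, kernel = "radial")
  table(predict(fit), iris$Species)   # training confusion matrix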
Lecture 8
- learning theory [B1 sc1.6, sc7.1.5]
Lecture 9
- probabilistic graphical models
  - Discrete [B1 sc8.1]
  - Linear Gaussian [B1 sc8.1]
  - Mixed Variables
  - Conditional Independence [B1 sc8.2; Wikipedia]
Lecture 10
- probabilistic graphical models, Markov Random Fields [B1 sc8.3]
Lecture 11
- probabilistic graphical models, Inference
  - Exact
    - Chains [B1 sc8.4]
    - Polytree [B1 sc8.4]
  - Approximate [B4 sc14.5]
Lecture 12
- k-means, mixture models, EM algorithm [B1 ch9; B2 sc14.1-14.5] (in R: ?kmeans; sketch below)
- hidden Markov models [B1 sc13.1]
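
A minimal k-means sketch in base R (the iris measurements and k = 3 are illustrative choices):

  set.seed(1)
  km <- kmeans(iris[, 1:4], centers = 3, nstart = 20)
  table(cluster = km$cluster, species = iris$Species)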
Lecture 13
- bagging, boosting [B1 sc14.1-14.3]
- tree-based methods [B1 sc1.6, sc14.4; B2 sc9.2] (sketch below)
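
A minimal tree sketch with the rpart package (an assumption; the syllabus does not name an R package for tree-based methods):

  library(rpart)
  fit <- rpart(Species ~ ., data = iris)   # classification tree
  print(fit)                               # the fitted splits
  plot(fit); text(fit)                     # draw the tree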
Author: Marco Chiarandini
<marco@imada.sdu.dk>
Date: 2011-07-29 11:19:03 CEST