Prepare the following exercises for discussion in class on Thursday, December 1.
Decision Tree
  x  y   −(x/y)·log2(x/y)      x  y   −(x/y)·log2(x/y)
  1  2        0.50             1  5        0.46
  1  3        0.53             2  5        0.53
  2  3        0.39             3  5        0.44
  1  4        0.50             4  5        0.26
  3  4        0.31
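The tabulated values can be reproduced in R with a one-line helper (a sketch; the base-2 logarithm is assumed, since it matches the values above):

# Entropy term -(x/y) * log2(x/y) for the fractions tabulated above
ent <- function(x, y) -(x / y) * log2(x / y)

round(ent(1, 2), 2)   # 0.50
round(ent(2, 3), 2)   # 0.39
round(ent(4, 5), 2)   # 0.26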
Nearest Neighbor
Perceptron
In R you may use nnet from package nnet, rpart from package rpart, knn from package class, and the glm function (check the example in ?predict.glm). Look at the examples for these methods with ?function. nnet uses one hidden layer. To implement the single-layer perceptron you may try to use the following lines for stochastic gradient descent, with the needed changes:

sigma <- function(w, point) {
  x <- c(point, 1)
  sign(w %*% x)
}
w.0 <- c(runif(1), runif(1), runif(1))
w.t <- w.0
for (j in 1:1000) {
  i <- sample(1:50, 1)          # or (j-1) %% 50 + 1 to cycle through the data
  diff <- y[i,3] - sigma(w.t, c(x[i,1], x[i,2]))
  w.t <- w.t + 0.2 * diff * c(x[i,1], x[i,2], 1)
}
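As a quick warm-up for the listed functions, the following lines run rpart and knn on the built-in iris data (a sketch with package defaults; k = 3 is an arbitrary choice for illustration):

library(rpart)
library(class)

# Decision tree on iris, then its confusion table on the training data
tree <- rpart(Species ~ ., data = iris)
table(predict(tree, iris, type = "class"), iris$Species)

# k-nearest neighbours: classify the training points themselves with k = 3
pred <- knn(train = iris[, 1:4], test = iris[, 1:4], cl = iris$Species, k = 3)
mean(pred == iris$Species)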
Test also the batch version of gradient descent.
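A minimal self-contained sketch of the batch version: instead of updating on one random point, the corrections over all 50 points are accumulated before each weight update. The toy data generated here (50 separable points in 2D) is an assumption for illustration only:

set.seed(1)
# Toy data (hypothetical): 50 points in 2D, labelled +1/-1 by the line x1 + x2 = 1
x <- matrix(runif(100), ncol = 2)
y <- ifelse(x[, 1] + x[, 2] > 1, 1, -1)

sigma <- function(w, point) sign(w %*% c(point, 1))

w.t <- runif(3)
for (j in 1:1000) {
  grad <- c(0, 0, 0)
  for (i in 1:50) {
    diff <- y[i] - sigma(w.t, x[i, ])
    grad <- grad + diff * c(x[i, ], 1)
  }
  w.t <- w.t + 0.2 * grad / 50   # one update per pass, averaged over the batch
}
# Training accuracy after batch gradient descent
mean(sapply(1:50, function(i) sigma(w.t, x[i, ])) == y)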
More data to analyse are available at the UCI Machine Learning Repository.