
I have attached section 4.6.6 from the textbook that the professor mentioned in the details.


4.6.6 An Application to Caravan Insurance Data

Finally, we will apply the KNN approach to the Caravan data set, which is part of the ISLR library. This data set includes 85 predictors that measure demographic characteristics for 5,822 individuals. The response variable is Purchase, which indicates whether or not a given individual purchases a caravan insurance policy. In this data set, only 6% of people purchased caravan insurance.

> dim(Caravan)
[1] 5822   86
> attach(Caravan)
> summary(Purchase)
  No  Yes
5474  348
> 348/5822
[1] 0.0598

Because the KNN classifier predicts the class of a given test observation by identifying the observations that are nearest to it, the scale of the variables matters. Any variables that are on a large scale will have a much larger effect on the distance between the observations, and hence on the KNN classifier, than variables that are on a small scale. For instance, imagine a data set that contains two variables, salary and age (measured in dollars and years, respectively). As far as KNN is concerned, a difference of $1,000 in salary is enormous compared to a difference of 50 years in age. Consequently, salary will drive the KNN classification results, and age will have almost no effect. This is contrary to our intuition that a salary difference of $1,000 is quite small compared to an age difference of 50 years. Furthermore, the importance of scale to the KNN classifier leads to another issue: if we measured salary in Japanese yen, or if we measured age in minutes, then we'd get quite different classification results from what we get if these two variables are measured in dollars and years.

A good way to handle this problem is to standardize the data so that all variables are given a mean of zero and a standard deviation of one. Then all variables will be on a comparable scale. The scale() function does just this. In standardizing the data, we exclude column 86, because that is the qualitative Purchase variable.

> standardized.X=scale(Caravan[,-86])
> var(Caravan[,1])
[1] 165
> var(Caravan[,2])
[1] 0.165
> var(standardized.X[,1])
[1] 1
> var(standardized.X[,2])
[1] 1

Now every column of standardized.X has a standard deviation of one and a mean of zero.
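As a quick supplementary illustration (not from the text), the salary and age example can be made concrete in R. Before standardizing, the $1,000 salary gap dominates the Euclidean distance between two hypothetical customers; after each variable is scaled, the two gaps contribute on comparable terms.

x <- c(salary = 50000, age = 25)  # hypothetical customer
y <- c(salary = 51000, age = 75)  # differs by $1,000 and by 50 years
sqrt(sum((x - y)^2))  # about 1001.2: the salary gap alone drives the distance
# After standardizing each variable (as scale() does), a $1,000 gap and a
# 50-year gap are each expressed in standard deviations, so neither one
# dominates by accident of units.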
We now split the observations into a test set, containing the first 1,000 observations, and a training set, containing the remaining observations. We fit a KNN model on the training data using K = 1, and evaluate its performance on the test data.

> test=1:1000
> train.X=standardized.X[-test,]
> test.X=standardized.X[test,]
> train.Y=Purchase[-test]
> test.Y=Purchase[test]
> set.seed(1)
> knn.pred=knn(train.X,test.X,train.Y,k=1)
> mean(test.Y!=knn.pred)
[1] 0.118
> mean(test.Y!="No")
[1] 0.059

The vector test is numeric, with values from 1 through 1,000. Typing standardized.X[test,] yields the submatrix of the data containing the observations whose indices range from 1 to 1,000, whereas typing standardized.X[-test,] yields the submatrix containing the observations whose indices do not range from 1 to 1,000. The KNN error rate on the 1,000 test observations is just under 12%. At first glance, this may appear to be fairly good. However, since only 6% of customers purchased insurance, we could get the error rate down to 6% by always predicting No regardless of the values of the predictors!

Suppose that there is some non-trivial cost to trying to sell insurance to a given individual. For instance, perhaps a salesperson must visit each potential customer. If the company tries to sell insurance to a random selection of customers, then the success rate will be only 6%, which may be far too low given the costs involved. Instead, the company would like to try to sell insurance only to customers who are likely to buy it. So the overall error rate is not of interest. Instead, the fraction of individuals that are correctly predicted to buy insurance is of interest. It turns out that KNN with K = 1 does far better than random guessing among the customers that are predicted to buy insurance. Among 77 such customers, 9, or 11.7%, actually do purchase insurance. This is double the rate that one would obtain from random guessing.

> table(knn.pred,test.Y)
        test.Y
knn.pred  No Yes
     No  873  50
     Yes  68   9
> 9/(68+9)
[1] 0.117

Using K = 3, the success rate increases to 19%, and with K = 5 the rate is 26.7%. This is over four times the rate that results from random guessing. It appears that KNN is finding some real patterns in a difficult data set!

> knn.pred=knn(train.X,test.X,train.Y,k=3)
> table(knn.pred,test.Y)
        test.Y
knn.pred  No Yes
     No  920  54
     Yes  21   5
> 5/26
[1] 0.192
> knn.pred=knn(train.X,test.X,train.Y,k=5)
> table(knn.pred,test.Y)
        test.Y
knn.pred  No Yes
     No  930  55
     Yes  11   4
> 4/15
[1] 0.267

As a comparison, we can also fit a logistic regression model to the data. If we use 0.5 as the predicted probability cut-off for the classifier, then we have a problem: only seven of the test observations are predicted to purchase insurance. Even worse, we are wrong about all of these! However, we are not required to use a cut-off of 0.5. If we instead predict a purchase any time the predicted probability of purchase exceeds 0.25, we get much better results: we predict that 33 people will purchase insurance, and we are correct for about 33% of these people. This is over five times better than random guessing!

> glm.fits=glm(Purchase~.,data=Caravan,family=binomial,subset=-test)
Warning message:
glm.fit: fitted probabilities numerically 0 or 1 occurred
> glm.probs=predict(glm.fits,Caravan[test,],type="response")
> glm.pred=rep("No",1000)
> glm.pred[glm.probs>.5]="Yes"
> table(glm.pred,test.Y)
        test.Y
glm.pred  No Yes
     No  934  59
     Yes   7   0
> glm.pred=rep("No",1000)
> glm.pred[glm.probs>.25]="Yes"
> table(glm.pred,test.Y)
        test.Y
glm.pred  No Yes
     No  919  48
     Yes  22  11
> 11/(22+11)
[1] 0.333

Part 1: Utility functions

You'll be asked to do the same stuff for different datasets in this assignment, so write a few utility functions you can use to make your life easier.

Write one called split which will take a dataframe and a fraction as arguments and return a training/testing split with fraction of the dataset in the training part and 1-fraction in the testing part. It's convenient to return a list that has train and validation elements containing each dataframe. For example:

iono.split = split(Ionosphere, 0.8)
summary(iono.split$train)      # training set
summary(iono.split$validation) # validation set

Finally, write a function knn.accuracies() which will take a training/validation split returned from split() and a value for kmax. The function will return a vector containing the accuracy of knn for values of k from 1 to kmax, inclusive. Since you'll be doing the same thing 10 times for kNN, it makes sense to use a for loop and automate the process as much as possible. If you haven't already, look at the example in Sec. 4.6.6 to see that process in action. Do some research (i.e. Google) about what a typical value of k should be for a dataset, and use that value or one slightly above it for kmax whenever you use this function in the rest of the assignment.
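One possible shape for these helpers is sketched below. It is a minimal sketch, not a required solution: it assumes the class label sits in the last column of the data frame and that the remaining columns are numeric predictors (scaled beforehand if necessary); neither assumption comes from the assignment itself. It uses knn() from the class package, and it keeps the requested name split() even though that masks base R's split().

library(class)  # provides knn()

# split(): random train/validation split of a data frame.
# 'fraction' of the rows go into $train, the remainder into $validation.
split <- function(df, fraction) {
  idx <- sample(nrow(df), size = floor(fraction * nrow(df)))
  list(train = df[idx, ], validation = df[-idx, ])
}

# knn.accuracies(): accuracy of kNN for k = 1..kmax on a split from split().
# Assumes the class label is the LAST column (an assumption, not a given).
knn.accuracies <- function(s, kmax) {
  p <- ncol(s$train) - 1  # number of predictor columns
  acc <- numeric(kmax)
  for (k in 1:kmax) {
    pred <- knn(s$train[, 1:p], s$validation[, 1:p], s$train[[p + 1]], k = k)
    acc[k] <- mean(pred == s$validation[[p + 1]])
  }
  acc
}

With these in place, accs <- knn.accuracies(split(mydata, 0.8), 10) would return ten accuracies, one per value of k (mydata being any data frame laid out as assumed above).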
Answered 1 day after Mar 07, 2021


Kshitij answered on Mar 09 2021
rcode.pdf
R Notebook
require("class")
## Loading required package: class
require("datasets")
data("airquality")
str(airquality)
## 'data.frame': 153 obs. of 6 variables:
## $ Ozone : int 41 36 12 18 NA 28 23 19 8 NA ...
## $ Solar.R: int 190 118 149 313 NA NA 299 99 19 194 ...
## $ Wind : num 7.4 8 12.6 11.5 14.3 14.9 8.6 13.8 20.1 8.6 ...
## $ Temp : int 67 72 74 62 56 66 65 59 61 69 ...
## $ Month : int 5 5 5 5 5 5 5 5 5 5 ...
## $ Day : int 1 2 3 4 5 6 7 8 9 10 ...
head(airquality)
## Ozone Solar.R Wind Temp Month Day
## 1 41 190 7.4 67 5 1
## 2 36 118 8.0 72 5 2
## 3 12 149 12.6 74 5 3
## 4 18 313 11.5 62 5 4
## 5 NA NA 14.3 56 5 5
## 6 28 NA 14.9 66 5 6
airquality$Day<- NULL
head(airquality)
## Ozone Solar.R Wind Temp Month
## 1 41 190 7.4 67 5
## 2 36 118 8.0 72 5
## 3 12 149 12.6 74 5
## 4 18 313 11.5 62 5
## 5 NA NA 14.3 56 5
## 6 28 NA 14.9 66 5
col1<- mapply(anyNA,airquality) # apply function anyNA() on all columns of airquality dataset
col1
## Ozone Solar.R Wind Temp Month
## TRUE TRUE FALSE FALSE FALSE
# Impute monthly mean in Ozone
for (i in 1:nrow(airquality)) {
  if (is.na(airquality[i, "Ozone"])) {
    airquality[i, "Ozone"] <- mean(airquality[which(airquality[, "Month"] == airquality[i, "Month"]), "Ozone"], na.rm = TRUE)
  }
  # Impute monthly mean in Solar.R
  if (is.na(airquality[i, "Solar.R"])) {
    airquality[i, "Solar.R"] <- mean(airquality[which(airquality[, "Month"] == airquality[i, "Month"]), "Solar.R"], na.rm = TRUE)
  }
}
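# Aside (not part of the original answer): the same monthly-mean imputation
# can be written without an explicit loop, using base R's ave() to apply an
# impute function within each Month group. A sketch:
#   impute.monthly <- function(x) replace(x, is.na(x), mean(x, na.rm = TRUE))
#   airquality$Ozone   <- ave(airquality$Ozone,   airquality$Month, FUN = impute.monthly)
#   airquality$Solar.R <- ave(airquality$Solar.R, airquality$Month, FUN = impute.monthly)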
# Normalize the predictor attributes so that no particular attribute has more
# impact on the distance-based algorithm than the others.
normalize <- function(x) {
  return((x - min(x)) / (max(x) - min(x)))
}
airquality[, 1:4] <- normalize(airquality[, 1:4]) # note: min/max here are computed over the whole data frame, not per column
class <- data.frame("month" = airquality$Month) # note: 'class' masks the base R function of the same name
names(class) <- "Month"
airquality[, 5] <- NULL # remove "Month" from airquality
head(airquality)
## Ozone Solar.R Wind Temp
## 1 0.12012012 0.5675676 0.01921922 0.1981982
## 2 0.10510511 0.3513514 0.02102102 0.2132132
## 3 0.03303303 0.4444444 0.03483483 0.2192192
## 4...
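A caveat on the normalization step above (not part of the original answer): as written, normalize() receives the whole four-column data frame at once, so min(x) and max(x) are the single global minimum and maximum of the frame rather than per-column values, and every column is rescaled against that one global range. If the intent is for each predictor to land in [0, 1] on its own scale, a per-column variant would apply normalize() to each column separately, for example:

airquality[, 1:4] <- as.data.frame(lapply(airquality[, 1:4], normalize)) # each column scaled by its own min/max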