Q1
Using a little bit of algebra, prove that (4.2) is equivalent to (4.3). In other words, the logistic function representation and logit representation for the logistic regression model are equivalent.
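A brief sketch of the algebra, using the chapter's notation where $p(X) = \Pr(Y = 1 \mid X)$: starting from the logistic function

$$p(X) = \frac{e^{\beta_0 + \beta_1 X}}{1 + e^{\beta_0 + \beta_1 X}},$$

multiply both sides by $1 + e^{\beta_0 + \beta_1 X}$ and collect the $p(X)$ terms to obtain

$$\frac{p(X)}{1 - p(X)} = e^{\beta_0 + \beta_1 X};$$

taking logarithms of both sides then gives the logit representation $\log\!\bigl(\tfrac{p(X)}{1 - p(X)}\bigr) = \beta_0 + \beta_1 X$.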
Q2
It was stated in the text that classifying an observation to the class for which (4.12) is largest is equivalent to classifying an observation to the class for which (4.13) is largest. Prove that this is the case.
In other words, under the assumption that the observations in the $k$th class are drawn from a $N(\mu_k, \sigma^2)$ distribution, the Bayes' classifier assigns an observation to the class for which the discriminant function is maximized. Maximizing (4.12) means finding $k$ for which

$$p_k(x) = \frac{\pi_k \frac{1}{\sqrt{2\pi}\sigma} \exp\!\left(-\frac{1}{2\sigma^2}(x - \mu_k)^2\right)}{\sum_{l=1}^{K} \pi_l \frac{1}{\sqrt{2\pi}\sigma} \exp\!\left(-\frac{1}{2\sigma^2}(x - \mu_l)^2\right)}$$

is largest. As the log function is monotonically increasing, this is equivalent to finding $k$ for which

$$\log p_k(x) = \log \pi_k - \frac{(x - \mu_k)^2}{2\sigma^2} - \log\!\left(\sum_{l=1}^{K} \pi_l \exp\!\left(-\frac{(x - \mu_l)^2}{2\sigma^2}\right)\right)$$

is largest. As the last term is independent of $k$, we may restrict ourselves to finding $k$ for which

$$\log \pi_k - \frac{(x - \mu_k)^2}{2\sigma^2} = \log \pi_k - \frac{x^2}{2\sigma^2} + \frac{x\mu_k}{\sigma^2} - \frac{\mu_k^2}{2\sigma^2}$$

is largest. The term in $x^2$ is independent of $k$, so it remains to find $k$ for which

$$\delta_k(x) = \frac{x\mu_k}{\sigma^2} - \frac{\mu_k^2}{2\sigma^2} + \log \pi_k$$

is largest, which is exactly (4.13).
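As a quick numerical sanity check (not part of the exercise), a short Python sketch comparing the class chosen by maximizing the posterior $p_k(x)$ of (4.12) with the class chosen by maximizing the discriminant $\delta_k(x)$ of (4.13); the parameter values are arbitrary illustrations:

```python
# Numerical sanity check: maximizing p_k(x) (4.12) and delta_k(x) (4.13)
# pick the same class. Parameter values below are arbitrary illustrations.
import numpy as np

mu = np.array([-1.0, 0.5, 2.0])      # class means mu_k
sigma = 1.3                          # shared standard deviation
pi = np.array([0.2, 0.5, 0.3])       # prior probabilities pi_k

def posterior(x):
    """p_k(x) from (4.12); the shared 1/(sqrt(2*pi)*sigma) factor cancels."""
    num = pi * np.exp(-(x - mu) ** 2 / (2 * sigma ** 2))
    return num / num.sum()

def discriminant(x):
    """delta_k(x) from (4.13)."""
    return x * mu / sigma ** 2 - mu ** 2 / (2 * sigma ** 2) + np.log(pi)

for x in np.linspace(-4, 5, 50):
    assert posterior(x).argmax() == discriminant(x).argmax()
print("argmax p_k(x) == argmax delta_k(x) for all tested x")
```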
Q3
Suppose that we have K classes, and that if an observation belongs to the $k$th class then X comes from a one-dimensional normal distribution, $X \sim N(\mu_k, \sigma_k^2)$. Recall that the density function for the one-dimensional normal distribution is given in (4.11). Prove that in this case, the Bayes' classifier is not linear. Argue that it is in fact quadratic.
By the same argument as in Q2, but now with class-specific variances $\sigma_k^2$, finding $k$ for which $p_k(x)$ is largest is equivalent to finding $k$ for which

$$\delta_k(x) = -\frac{x^2}{2\sigma_k^2} + \frac{x\mu_k}{\sigma_k^2} - \frac{\mu_k^2}{2\sigma_k^2} - \log \sigma_k + \log \pi_k$$

is largest. This last expression is obviously not linear in $x$: because each class has its own variance $\sigma_k^2$, the coefficient of $x^2$ differs across classes and no longer cancels, so the discriminant (and hence the Bayes decision boundary) is quadratic in $x$.
Q4
When the number of features $p$ is large, there tends to be a deterioration in the performance of KNN. This phenomenon is known as the curse of dimensionality, and it ties into the fact that non-parametric approaches often perform poorly when $p$ is large.
4.a
Suppose that we have a set of observations, each with measurements on $p = 1$ feature, $X$. We assume that $X$ is uniformly distributed on $[0, 1]$, and we wish to predict a test observation's response using only observations that are within 10% of the range of $X$ closest to that test observation. On average, what fraction of the available observations will we use to make the prediction?
It is clear that if $x \in [0.05, 0.95]$, then the observations we will use lie in the interval $[x - 0.05, x + 0.05]$, which has length 0.10 and therefore represents a fraction of 10% of the available observations.
If $x < 0.05$, then we will use observations in the interval $[0, x + 0.05]$, which represents a fraction of $(100x + 5)\%$.
By a similar argument, we conclude that if $x > 0.95$, then the fraction of observations we will use is $(105 - 100x)\%$. To compute the average fraction used to make the prediction, we evaluate

$$\int_0^{0.05} (100x + 5)\,dx + \int_{0.05}^{0.95} 10\,dx + \int_{0.95}^{1} (105 - 100x)\,dx = 0.375 + 9 + 0.375 = 9.75.$$

So, on average, the fraction of available observations we will use to make the prediction is 9.75%.
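A small Python check of the 9.75% figure, both by averaging over a fine grid of test points and by Monte Carlo (the helper name `fraction` is mine, not part of the original solution):

```python
# Verify the 9.75% answer for 4.a: average fraction of observations whose
# X value lies within 0.05 of a uniformly distributed test point on [0, 1].
import numpy as np

def fraction(x):
    """Length of [x - 0.05, x + 0.05] intersected with [0, 1]."""
    return np.minimum(x + 0.05, 1.0) - np.maximum(x - 0.05, 0.0)

# Average over a fine grid of test points (approximates the integral)
grid = np.linspace(0.0, 1.0, 100_001)
print(fraction(grid).mean())                          # ~0.0975

# Monte Carlo version with uniformly drawn test points
rng = np.random.default_rng(1)
print(fraction(rng.uniform(size=1_000_000)).mean())   # ~0.0975
```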
4.b
Now suppose that we have a set of observations, each with measurements on $p = 2$ features, $X_1$ and $X_2$. We assume that $(X_1, X_2)$ are uniformly distributed on $[0, 1] \times [0, 1]$. We wish to predict a test observation’s response using only observations that are within 10% of the range of $X_1$ and within 10% of the range of $X_2$ closest to that test observation. On average, what fraction of the available observations will we use to make the prediction?
If we assume $X_1$ and $X_2$ to be independent, the fraction of available observations we will use to make the prediction is $9.75\% \times 9.75\% \approx 0.95\%$.
4.c
Now suppose that we have a set of observations on $p = 100$ features. Again the observations are uniformly distributed on each feature, and again each feature ranges in value from 0 to 1. We wish to predict a test observation’s response using observations within the 10% of each feature’s range that is closest to that test observation. What fraction of the available observations will we use to make the prediction?
With the same argument as in (a) and (b), we may conclude that the fraction of available observations we will use to make the prediction is $(9.75\%)^{100} \approx 0\%$.
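To see just how small this fraction is, a one-line check in Python (assuming, as above, a 9.75% per-feature fraction and independent features):

```python
# Fraction of "local" observations with p = 100 features, assuming a 9.75%
# per-feature fraction and independent features.
print(0.0975 ** 100)   # ~8e-102, i.e. essentially zero
```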
4.d
Using your answers to parts (a)–(c), argue that a drawback of KNN when p is large is that there are very few training observations “near” any given test observation.
As we saw in (c), the fraction of available observations we will use to make the prediction is $(9.75\%)^p$, where $p$ is the number of features. So when $p$ becomes large, we have $\lim_{p \to \infty} (9.75\%)^p = 0$; in other words, for large $p$ there are essentially no training observations “near” a given test observation.
4.e
Now suppose that we wish to make a prediction for a test observation by creating a p-dimensional hypercube centered around the test observation that contains, on average, 10% of the training observations. For $p = 1$, $2$, and $100$, what is the length of each side of the hypercube? Comment on your answer.
For $p = 1$, we have $\ell = 0.1$; for $p = 2$, we have $\ell = 0.1^{1/2} \approx 0.316$; and for $p = 100$, we have $\ell = 0.1^{1/100} \approx 0.977$. In other words, for $p = 100$ the hypercube must span almost the entire range of each feature in order to capture 10% of the observations, so the observations it contains are no longer truly “near” the test observation.
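A quick check of these values in Python (the side length is $\ell = 0.1^{1/p}$, since the hypercube’s volume must equal 10% of the unit cube):

```python
# Side length of a p-dimensional hypercube covering, on average, 10% of
# observations uniformly distributed on [0, 1]^p: l = 0.1 ** (1 / p).
for p in (1, 2, 100):
    print(p, round(0.1 ** (1 / p), 3))
# prints: 1 0.1, 2 0.316, 100 0.977
```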
Q5
We now examine the differences between LDA and QDA.
5.a
If the Bayes decision boundary is linear, do we expect LDA or QDA to perform better on the training set? On the test set?
If the Bayes decision boundary is linear, we expect QDA to perform better on the training set because its higher flexibility may yield a closer fit. On the test set, we expect LDA to perform better than QDA, because QDA’s extra flexibility is not needed when the true boundary is linear and only adds variance, so QDA is likely to overfit.
5.b
If the Bayes decision boundary is non-linear, do we expect LDA or QDA to perform better on the training set? On the test set?
If the Bayes decision boundary is non-linear, we expect QDA to perform better on both the training and the test sets.
5.c
In general, as the sample size n increases, do we expect the test prediction accuracy of QDA relative to LDA to improve, decline, or be unchanged? Why?
In general, we expect the test prediction accuracy of QDA relative to LDA to improve as the sample size n increases. QDA is more flexible than LDA and so has higher variance, but with a very large training set the variance of the classifier is no longer a major concern, while QDA’s lower bias can still pay off.
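A minimal simulation sketch of this point (assuming NumPy and scikit-learn are available; the class means, sample sizes, and helper names are arbitrary illustrative choices): with a linear Bayes boundary, QDA’s test error approaches LDA’s as the training-set size n grows.

```python
# Compare LDA and QDA test error under a *linear* Bayes boundary as n grows.
import numpy as np
from sklearn.discriminant_analysis import (
    LinearDiscriminantAnalysis,
    QuadraticDiscriminantAnalysis,
)

rng = np.random.default_rng(0)

def sample(n):
    """Two Gaussian classes with a shared identity covariance."""
    y = rng.integers(0, 2, size=n)
    X = rng.normal(size=(n, 2)) + np.where(y[:, None] == 1, 1.0, -1.0)
    return X, y

def avg_test_error(n_train, n_test=5_000, n_rep=50):
    errs = {"LDA": [], "QDA": []}
    for _ in range(n_rep):
        X_tr, y_tr = sample(n_train)
        X_te, y_te = sample(n_test)
        for name, model in (("LDA", LinearDiscriminantAnalysis()),
                            ("QDA", QuadraticDiscriminantAnalysis())):
            model.fit(X_tr, y_tr)
            errs[name].append(np.mean(model.predict(X_te) != y_te))
    return {k: round(float(np.mean(v)), 4) for k, v in errs.items()}

for n in (50, 200, 2000):
    print(n, avg_test_error(n))   # QDA's gap over LDA shrinks as n grows
```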
5.d
True or False: Even if the Bayes decision boundary for a given problem is linear, we will probably achieve a superior test error rate using QDA rather than LDA because QDA is flexible enough to model a linear decision boundary. Justify your answer.
False. With a limited number of sample points, the variance introduced by a more flexible method such as QDA may lead to overfitting, which in turn may yield a higher test error rate than LDA.
Q6
Suppose we collect data for a group of students in a statistics class with variables $X_1$ = hours studied, $X_2$ = undergrad GPA, and $Y$ = receive an A. We fit a logistic regression and produce estimated coefficients $\hat\beta_0 = -6$, $\hat\beta_1 = 0.05$, $\hat\beta_2 = 1$.
6.a
Estimate the probability that a student who studies for 40h and has an undergrad GPA of 3.5 gets an A in the class.
6.b
How many hours would the student in part (a) need to study to have a 50% chance of getting an A in the class?
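A short Python sketch working through both parts with the coefficient values stated above (the helper name `prob_a` is mine):

```python
# Q6: fitted logistic model with beta0 = -6, beta1 = 0.05 (hours), beta2 = 1 (GPA)
import numpy as np

b0, b1, b2 = -6.0, 0.05, 1.0

def prob_a(hours, gpa):
    """P(Y = A | X1 = hours, X2 = gpa) under the fitted logistic model."""
    z = b0 + b1 * hours + b2 * gpa
    return np.exp(z) / (1 + np.exp(z))

# (a) 40 hours of study, GPA 3.5
print(prob_a(40, 3.5))            # ~0.3775

# (b) hours needed for a 50% chance with GPA 3.5:
# solve b0 + b1 * hours + b2 * 3.5 = 0  =>  hours = -(b0 + b2 * 3.5) / b1
print(-(b0 + b2 * 3.5) / b1)      # 50.0 hours
```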
Q7
Suppose that we wish to predict whether a given stock will issue a dividend this year (“Yes” or “No”) based on X, last year’s percent profit. We examine a large number of companies and discover that the mean value of X for companies that issued a dividend was $\bar{X} = 10$, while the mean for those that didn’t was $\bar{X} = 0$. In addition, the variance of X for these two sets of companies was $\hat\sigma^2 = 36$. Finally, 80% of companies issued dividends. Assuming that X follows a normal distribution, predict the probability that a company will issue a dividend this year given that its percentage profit was $X = 4$ last year.
It suffices to plug the parameter values into Bayes’ theorem with normal class densities:

$$p_{\text{yes}}(4) = \frac{0.8 \exp\!\left(-\frac{(4 - 10)^2}{2 \cdot 36}\right)}{0.8 \exp\!\left(-\frac{(4 - 10)^2}{2 \cdot 36}\right) + 0.2 \exp\!\left(-\frac{(4 - 0)^2}{2 \cdot 36}\right)} \approx 0.752,$$

so the probability that a company will issue a dividend this year given that its percentage profit was 4 last year is approximately 0.752.
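The same calculation in Python, a small sketch using the parameter values stated in the exercise:

```python
# Q7: posterior probability of issuing a dividend given X = 4, via Bayes'
# theorem with normal class densities: mu_yes = 10, mu_no = 0, sigma^2 = 36,
# prior P(yes) = 0.8.
import numpy as np

mu_yes, mu_no, var, pi_yes, x = 10.0, 0.0, 36.0, 0.8, 4.0

def density(x, mu):
    return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

num = pi_yes * density(x, mu_yes)
den = num + (1 - pi_yes) * density(x, mu_no)
print(num / den)   # ~0.752
```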
Q8
Suppose that we take a data set, divide it into equally-sized training and test sets, and then try out two different classification procedures. First we use logistic regression and get an error rate of 20% on the training data and 30% on the test data. Next we use 1-nearest neighbors (i.e. K = 1) and get an average error rate (averaged over both test and training data sets) of 18%. Based on these results, which method should we prefer to use for classification of new observations? Why?
With 1-nearest neighbors the training error rate is 0%, because each training observation is its own nearest neighbor. An average error rate of 18% therefore implies a test error rate of $2 \times 18\% - 0\% = 36\%$ for KNN, which is greater than the 30% test error rate of logistic regression. So it is better to choose logistic regression because of its lower test error rate.