diff --git a/source/classification2.md b/source/classification2.md
index a370fa15..6eb8c8b3 100755
--- a/source/classification2.md
+++ b/source/classification2.md
@@ -1527,7 +1527,7 @@ glue("n_neighbors_min", "{:0.0f}".format(accuracies_grid["n_neighbors"].min()))
 ```
 
 At first glance, this is a bit surprising: the performance of the classifier
-has not changed much at all despite tuning the number of neighbors! For example, our first model
+has not changed much despite tuning the number of neighbors! For example, our first model
 with $K =$ 3 (before we knew how to tune) had an estimated accuracy of
 {glue:text}`cancer_acc_1`%, while the tuned model with $K =$ {glue:text}`best_k_unique` had an
 estimated accuracy of {glue:text}`cancer_acc_tuned`%.
diff --git a/source/regression1.md b/source/regression1.md
index 973d14bd..ad55e17d 100755
--- a/source/regression1.md
+++ b/source/regression1.md
@@ -818,7 +818,7 @@ chapter.
 To assess how well our model might do at predicting on unseen data, we will
 assess its RMSPE on the test data. To do this, we first need to retrain the
 KNN regression model on the entire training data set using $K =$ {glue:text}`best_k_sacr`
-neighbors. Fortunately we do not have to do this ourselves manually; `scikit-learn`
+neighbors. As we saw in {numref}`Chapter %s <classification2>`, we do not have to do this ourselves manually; `scikit-learn`
 does it for us automatically. To make predictions with the best model on the test data,
 we can use the `predict` method of the fit `GridSearchCV` object. We then use the `mean_squared_error`
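
For context on the `scikit-learn` behavior the `regression1.md` hunk relies on: `GridSearchCV` keeps its default `refit=True`, so after the search it automatically retrains the best model on the entire training set, and `predict` can then be called on the search object directly. The sketch below illustrates that workflow; the synthetic dataset, parameter grid, and variable names are illustrative assumptions, not code from the book.

```python
# Minimal sketch of the tune / refit / predict workflow described in the hunk.
# The data and grid here are made up for illustration.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsRegressor

X, y = make_regression(n_samples=500, n_features=1, noise=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# With the default refit=True, GridSearchCV retrains the best model on the
# whole training set after cross-validation; this is the automatic
# retraining the edited sentence refers to.
search = GridSearchCV(
    KNeighborsRegressor(),
    param_grid={"n_neighbors": list(range(1, 51))},
    cv=5,
    scoring="neg_root_mean_squared_error",
)
search.fit(X_train, y_train)

# Calling predict on the fit GridSearchCV object delegates to the refit
# best estimator.
predictions = search.predict(X_test)

# RMSPE on the test data; taking the square root of mean_squared_error
# ourselves avoids the squared=False keyword, which newer scikit-learn
# versions have removed.
rmspe = np.sqrt(mean_squared_error(y_test, predictions))
print(f"best K = {search.best_params_['n_neighbors']}, RMSPE = {rmspe:.2f}")
```

Equivalently, `search.best_estimator_` exposes the retrained model itself; when `refit=True`, `search.predict(X_test)` and `search.best_estimator_.predict(X_test)` give the same result.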