minor polish to eval on test set
trevorcampbell committed Nov 14, 2023
1 parent d472553 commit beed8a1
Showing 2 changed files with 2 additions and 2 deletions.
2 changes: 1 addition & 1 deletion source/classification2.md
@@ -1527,7 +1527,7 @@ glue("n_neighbors_min", "{:0.0f}".format(accuracies_grid["n_neighbors"].min()))
 ```

 At first glance, this is a bit surprising: the performance of the classifier
-has not changed much at all despite tuning the number of neighbors! For example, our first model
+has not changed much despite tuning the number of neighbors! For example, our first model
 with $K =$ 3 (before we knew how to tune) had an estimated accuracy of {glue:text}`cancer_acc_1`%,
 while the tuned model with $K =$ {glue:text}`best_k_unique` had an estimated accuracy
 of {glue:text}`cancer_acc_tuned`%.
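For context, the passage above refers to tuning the number of neighbors $K$ by cross-validation. A minimal sketch of that kind of workflow with scikit-learn's `GridSearchCV`, using a toy dataset and hypothetical names rather than the book's actual code, might look like this:

```python
# Sketch only: toy data and hypothetical names, not the book's actual code.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsClassifier

# toy stand-in for the book's breast cancer data
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

# tune the number of neighbors K via 5-fold cross-validation
grid = GridSearchCV(KNeighborsClassifier(), {"n_neighbors": range(1, 31)}, cv=5)
grid.fit(X_train, y_train)

print(grid.best_params_)           # the tuned K
print(grid.score(X_test, y_test))  # estimated accuracy on the test set
```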
2 changes: 1 addition & 1 deletion source/regression1.md
@@ -818,7 +818,7 @@ chapter.
 To assess how well our model might do at predicting on unseen data, we will
 assess its RMSPE on the test data. To do this, we first need to retrain the
 KNN regression model on the entire training data set using $K =$ {glue:text}`best_k_sacr`
-neighbors. Fortunately we do not have to do this ourselves manually; `scikit-learn`
+neighbors. As we saw in {numref}`Chapter %s <classification2>`, we do not have to do this manually; `scikit-learn`
 does it for us automatically. To make predictions with the best model on the test data,
 we can use the `predict` method of the fit `GridSearchCV` object.
 We then use the `mean_squared_error`
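The retrain-then-predict step this hunk describes could be sketched roughly as follows; this uses synthetic data and hypothetical names, not the book's Sacramento housing code:

```python
# Sketch only: synthetic data and hypothetical names, not the book's actual code.
import numpy as np
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsRegressor

# synthetic stand-in for the book's housing data
rng = np.random.default_rng(1)
X = rng.uniform(0, 10, size=(500, 1))
y = 2.0 * X.ravel() + rng.normal(0, 1, size=500)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

# tune K; with the default refit=True, GridSearchCV automatically retrains
# the best model on the entire training set, as the passage notes
grid = GridSearchCV(KNeighborsRegressor(), {"n_neighbors": range(1, 51)}, cv=5)
grid.fit(X_train, y_train)

# predict on the test set with the refit best model, then compute RMSPE
y_pred = grid.predict(X_test)
rmspe = np.sqrt(mean_squared_error(y_test, y_pred))
print(grid.best_params_, rmspe)
```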
