This repository contains an implementation of the KNN algorithm, using bagging. 🚀
The k-nearest neighbors algorithm, also known as KNN or k-NN, is a non-parametric, supervised learning classifier, which uses proximity to make classifications or predictions about the grouping of an individual data point. While it can be used for either regression or classification problems, it is typically used as a classification algorithm, working off the assumption that similar points can be found near one another.
- IBM, (unknown). What is the k-nearest neighbors algorithm? | IBM. https://www.ibm.com/topics/knn.
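The idea in the quote above can be sketched in a few lines of plain Python: measure the distance from the query point to every training point, keep the `k` closest, and take a majority vote. This is an illustrative sketch, not the repository's code; the function and variable names are made up for the example.

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, query, k=3):
    """Classify `query` by majority vote among its k nearest training points."""
    # Euclidean distance from the query to every labeled training point
    dists = [(math.dist(query, x), label) for x, label in zip(train_X, train_y)]
    dists.sort(key=lambda pair: pair[0])
    # Majority vote among the k closest labels
    top_k = [label for _, label in dists[:k]]
    return Counter(top_k).most_common(1)[0][0]
```

For example, with two small clusters labeled `"a"` and `"b"`, a query near the first cluster gets label `"a"`:

```python
train_X = [(0, 0), (1, 0), (0, 1), (5, 5), (6, 5), (5, 6)]
train_y = ["a", "a", "a", "b", "b", "b"]
knn_predict(train_X, train_y, (0.5, 0.5), k=3)  # → "a"
```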
Bagging, also known as bootstrap aggregation, is the ensemble learning method that is commonly used to reduce variance within a noisy dataset. In bagging, a random sample of data in a training set is selected with replacement—meaning that the individual data points can be chosen more than once. After several data samples are generated, these weak models are then trained independently, and depending on the type of task—regression or classification, for example—the average or majority of those predictions yield a more accurate estimate.
- IBM Cloud Education, (2021). What is Bagging | IBM. https://www.ibm.com/cloud/learn/bagging.
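Putting the two quotes together, bagging KNN means: draw several bootstrap samples of the training set (with replacement), run KNN on each sample, and take the majority vote across those predictions. The sketch below shows this under the same made-up names as before; it is a minimal illustration, not the repository's implementation.

```python
import math
import random
from collections import Counter

def knn_predict(train_X, train_y, query, k=3):
    # Base learner: plain KNN majority vote over the k nearest points
    dists = sorted((math.dist(query, x), y) for x, y in zip(train_X, train_y))
    return Counter(y for _, y in dists[:k]).most_common(1)[0][0]

def bagged_knn_predict(train_X, train_y, query, n_estimators=10, k=3, seed=0):
    """Aggregate KNN votes over bootstrap resamples of the training set."""
    rng = random.Random(seed)
    n = len(train_X)
    votes = []
    for _ in range(n_estimators):
        # Bootstrap sample: draw n indices WITH replacement,
        # so individual points can appear more than once
        idx = [rng.randrange(n) for _ in range(n)]
        sample_X = [train_X[i] for i in idx]
        sample_y = [train_y[i] for i in idx]
        votes.append(knn_predict(sample_X, sample_y, query, k=k))
    # Majority vote across the ensemble's predictions
    return Counter(votes).most_common(1)[0][0]
```

Because this is a classification task, the ensemble aggregates by majority vote; for regression you would average the estimators' predictions instead, as the quote notes.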
If you want to know more about this, contact me on Twitter (@luischavez_713) and I'll be happy to help. I also speak Spanish.
Made with ❤️ by Fer Chávez 😊