This Fields Institute PhD-level graduate course consists of modules on introductory topics in the mathematics of machine learning, such as deep learning, automatic differentiation, non-convex optimization, probabilistic modelling, stochastic variations, compressibility, probabilistic inference, generative models, adversarial robustness, reinforcement learning, and statistical learning theory, taught by visitors and staff at the Fields and Vector Institutes. Case study applications to problems in medicine, finance, and manufacturing processes are explored in mathematical depth.
Prerequisites: undergraduate-level probability, statistics, multivariable calculus, and linear algebra.
Introductory machine learning references:
- Neural Networks and Deep Learning; Michael Nielsen.
- Deep Learning; Ian Goodfellow, Yoshua Bengio, and Aaron Courville.
- Understanding Machine Learning; Shai Shalev-Shwartz and Shai Ben-David.
  - Chapters 2-6, 26, 28 give the fundamentals of empirical risk minimization.
  - Chapters 12-14, 26 give the fundamentals of regularized loss minimization for convex problems.
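To make the distinction between these two frameworks concrete, here is a minimal sketch contrasting empirical risk minimization (ERM) with regularized loss minimization (RLM) for least-squares regression; the function names and the synthetic data are illustrative, not from the course materials.

```python
# Sketch: ERM vs. regularized loss minimization for least squares.
# All names (erm_least_squares, rlm_ridge) are illustrative only.
import numpy as np

def erm_least_squares(X, y):
    """ERM: minimize the average squared loss (1/n)||Xw - y||^2."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def rlm_ridge(X, y, lam):
    """RLM: add a convex regularizer lam*||w||^2 (ridge regression).
    Closed form: w = (X^T X + lam I)^{-1} X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=50)

w_erm = erm_least_squares(X, y)
w_rlm = rlm_ridge(X, y, lam=1.0)
# The regularizer shrinks the learned weights toward zero.
print(np.linalg.norm(w_rlm) < np.linalg.norm(w_erm))
```

With `lam=0` the RLM solution coincides with plain ERM; increasing `lam` trades training fit for a smaller-norm (more stable) predictor, which is the bias-complexity trade-off developed in the chapters cited above.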
- I. Introduction to Deep Learning
- II. Automatic Differentiation
- III. Optimization
- IV. Probabilistic Modeling
- V. Stochastic Variations
- VI. Probabilistic Inference and Generative Models
- VII. Reinforcement Learning
- VIII. Regularization and Adversarial Robustness
- IX. Latent Variable Analysis with application in advanced manufacturing processes
- ~~X. Machine Learning in Finance~~
- XI. Deep Learning
- XII. Unsupervised Learning with Application in Medicine
- XIII. Statistical Learning Theory and Compressibility
- XIV. Non-convex Optimization in ML
- XV. Machine Learning in Health
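As a small taste of the automatic differentiation module above, the following toy sketch implements forward-mode autodiff with dual numbers; the `Dual` class and helper names are illustrative only, not code from the course.

```python
# Toy forward-mode automatic differentiation via dual numbers:
# a value a + b*eps with eps^2 = 0, where b carries the derivative.
# All names here (Dual, derivative) are illustrative only.
import math

class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Sum rule: derivatives add.
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (a + b eps)(c + d eps) = ac + (ad + bc) eps.
        return Dual(self.val * other.val,
                    self.val * other.dot + self.dot * other.val)
    __rmul__ = __mul__

def sin(x):
    # Chain rule for an elementary primitive.
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

def derivative(f, x):
    """Evaluate f'(x) by seeding the dual part of the input with 1."""
    return f(Dual(x, 1.0)).dot

# d/dx [x * sin(x)] = sin(x) + x*cos(x)
print(derivative(lambda x: x * sin(x), 2.0))
```

Reverse-mode autodiff, the variant behind backpropagation in deep learning frameworks, builds on the same chain-rule bookkeeping but propagates derivatives from outputs back to inputs.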