About slides on ML, DL, AI, Optimization
Recently, I have been adding slides on Machine Learning, Deep Learning, Linear Algebra, Probability, and related subjects. However, several of them still need to be completed; for example, Deep Learning needs slides on:
- Autoencoders
- Reinforcement Learning
- Generative Deep Networks
- Second Order Methods
Nevertheless, there are applications that I consider far more interesting because of the architectures used in them:
- Attention in Natural Language Processing,
- Customer Churn Prediction,
- 3D reconstruction,
- Metric Learning,
- etc.
Further, I am planning to add slides on AI. However, I need to select topics that remain relevant today, given that much of the “old” AI is no longer interesting because it has been surpassed by other solutions. For example, subjects such as
- Bayesian Networks,
- Reinforcement Learning
- Approximated Dynamic Programming
are still relevant today.
Finally, and not least important, something I have been trying to produce for a long time is a series of slides on optimization and its applications in AI. I have several ideas, based on work by Bottou et al. and Jin et al. [1,2], on how to select the subjects for the optimization section.
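As a tiny illustration of the kind of material the optimization slides would start from (this is my own toy example, not taken from [1,2]), here is a minimal sketch of plain gradient descent on a one-dimensional quadratic:

```python
def grad_descent(grad, x0, lr=0.1, steps=100):
    """Minimize a function by following the negative gradient.

    grad:  callable returning the gradient at a point
    x0:    starting point
    lr:    step size (learning rate)
    steps: number of iterations
    """
    x = x0
    for _ in range(steps):
        # Standard update rule: x_{k+1} = x_k - lr * grad(x_k)
        x = x - lr * grad(x)
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
# The minimizer is x = 3, and the iterates contract toward it geometrically.
x_star = grad_descent(lambda x: 2 * (x - 3), x0=0.0)
```

From this baseline, the section could then move to the stochastic and large-scale variants analyzed in [1] and the saddle-point behavior studied in [2].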
1. Bottou, Léon, Frank E. Curtis, and Jorge Nocedal. “Optimization methods for large-scale machine learning.” SIAM Review 60.2 (2018): 223–311.
2. Jin, Chi, Praneeth Netrapalli, and Michael I. Jordan. “Accelerated gradient descent escapes saddle points faster than gradient descent.” arXiv preprint arXiv:1711.10456 (2017).