IL4008 Machine Learning Syllabus – Anna University PG Syllabus Regulation 2021

COURSE OBJECTIVES:

To understand the concepts and mathematical foundations of machine learning and the types of problems it tackles.
To explore the different supervised learning techniques, including ensemble methods.
To outline the different aspects of unsupervised learning and reinforcement learning.
To outline the role of probabilistic methods in machine learning.
To understand the basic concepts of neural networks and deep learning.

UNIT I INTRODUCTION AND MATHEMATICAL FOUNDATIONS

What is Machine Learning? – Need – History – Definitions – Applications – Advantages, Disadvantages and Challenges – Types of Machine Learning Problems – Mathematical Foundations – Linear Algebra and Analytical Geometry – Probability and Statistics – Vector Calculus and Optimization – Information Theory.
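The vector calculus and optimization topics above lead to gradient-based minimization, the workhorse behind most model training. A minimal sketch for intuition (illustrative only, not part of the prescribed syllabus text) that minimizes f(x) = (x − 3)² by stepping against its derivative:

```python
# Gradient descent on f(x) = (x - 3)^2, whose minimum is at x = 3.
# The derivative f'(x) = 2 * (x - 3) points uphill, so we step against it.

def gradient_descent(lr=0.1, steps=100, x0=0.0):
    x = x0
    for _ in range(steps):
        grad = 2.0 * (x - 3.0)   # analytic derivative of (x - 3)^2
        x -= lr * grad           # move opposite the gradient
    return x

x_min = gradient_descent()
```

With a learning rate of 0.1 the update contracts the error by a factor of 0.8 per step, so x converges rapidly to 3.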

UNIT II SUPERVISED LEARNING

Introduction – Discriminative and Generative Models – Linear Regression – Least Squares – Underfitting / Overfitting – Cross-Validation – Lasso Regression – Classification – Logistic Regression – Gradient Linear Models – Support Vector Machines – Kernel Methods – Instance-based Methods – K-Nearest Neighbours – Tree-based Methods – Decision Trees – ID3 – CART – Ensemble Methods – Random Forest – Evaluation of Classification Algorithms.
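For the least-squares topic in this unit, the single-feature case has a closed-form solution: the slope is the covariance of x and y divided by the variance of x. A minimal sketch (illustrative only, not prescribed by the syllabus):

```python
# Ordinary least squares for one feature: fit y ≈ w*x + b using the
# closed-form solution w = cov(x, y) / var(x), b = mean(y) - w * mean(x).

def fit_least_squares(xs, ys):
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    w = cov / var
    b = my - w * mx
    return w, b

# Noise-free points on the line y = 2x + 1 recover the coefficients exactly.
w, b = fit_least_squares([0, 1, 2, 3], [1, 3, 5, 7])
```

The same idea generalizes to many features via the normal equations, where overfitting and regularizers such as the Lasso become relevant.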

UNIT III UNSUPERVISED LEARNING AND REINFORCEMENT LEARNING

Introduction – Clustering Algorithms – K-Means – Hierarchical Clustering – Cluster Validity – Dimensionality Reduction – Introduction – Principal Component Analysis – Recommendation Systems – EM Algorithm. Reinforcement Learning – Elements – Model-based Learning – Temporal Difference Learning.
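The K-Means algorithm listed above alternates between two steps: assign each point to its nearest centroid, then move each centroid to the mean of its assigned points. A one-dimensional sketch (illustrative only, not part of the official syllabus text):

```python
import random

# Lloyd's algorithm for K-Means in one dimension: alternate between
# assigning points to the nearest centroid and recomputing each centroid
# as the mean of its assigned points.

def kmeans_1d(points, k, iters=20, seed=0):
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

# Two well-separated groups around 1 and 10.
cents = kmeans_1d([0.9, 1.0, 1.1, 9.9, 10.0, 10.1], k=2)
```

In practice the result depends on initialization, which motivates the cluster-validity topic in this unit.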

UNIT IV PROBABILISTIC METHODS FOR LEARNING

Introduction – Naïve Bayes Algorithm – Maximum Likelihood – Maximum A Posteriori – Bayesian Belief Networks – Probabilistic Modelling of Problems – Inference in Bayesian Belief Networks – Probability Density Estimation – Sequence Models – Markov Models – Hidden Markov Models.
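The Naïve Bayes algorithm in this unit scores each class by log P(y) plus the sum of log P(xᵢ | y), assuming the features are conditionally independent given the class. A toy sketch on made-up categorical data (illustrative only, the dataset and helper names are invented here, not taken from the syllabus):

```python
import math
from collections import Counter, defaultdict

# Naïve Bayes with Laplace (add-one) smoothing: pick the class y that
# maximizes log P(y) + sum_i log P(x_i | y). Each feature here takes one
# of two values, hence the +2 in the smoothed denominator.

def train(rows, labels):
    class_counts = Counter(labels)
    feat_counts = defaultdict(Counter)   # (feature index, class) -> value counts
    for row, y in zip(rows, labels):
        for i, v in enumerate(row):
            feat_counts[(i, y)][v] += 1
    return class_counts, feat_counts, len(labels)

def predict(row, class_counts, feat_counts, n):
    best, best_score = None, float("-inf")
    for y, cy in class_counts.items():
        score = math.log(cy / n)
        for i, v in enumerate(row):
            score += math.log((feat_counts[(i, y)][v] + 1) / (cy + 2))
        if score > best_score:
            best, best_score = y, score
    return best

# Toy weather data: the second feature ("wind") determines the label.
rows = [("sunny", "weak"), ("sunny", "strong"), ("rain", "weak"), ("rain", "strong")]
labels = ["yes", "no", "yes", "no"]
model = train(rows, labels)
pred = predict(("sunny", "weak"), *model)
```

The smoothed counts are exactly the maximum a posteriori estimates of the conditional probabilities under a uniform prior, tying this sketch back to the MLE and MAP topics above.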

UNIT V NEURAL NETWORKS AND DEEP LEARNING

Neural Networks – Biological Motivation – Perceptron – Multi-layer Perceptron – Feed-Forward Network – Back Propagation – Activation and Loss Functions – Limitations of Machine Learning – Deep Learning – Introduction – Convolutional Neural Networks – Recurrent Neural Networks – LSTM – Use Cases.
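The perceptron that opens this unit learns a linear decision boundary by nudging its weights on every misclassified example: w ← w + lr · (target − output) · x. A minimal sketch on the linearly separable AND function (illustrative only, not part of the prescribed syllabus text):

```python
# Perceptron learning rule on the AND function. Because AND is linearly
# separable, the perceptron convergence theorem guarantees the loop below
# stops making updates after finitely many mistakes.

def step(z):
    return 1 if z > 0 else 0

def train_perceptron(data, lr=0.1, epochs=50):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in data:
            out = step(w[0] * x[0] + w[1] * x[1] + b)
            err = target - out          # 0 when correctly classified
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
```

A single perceptron cannot represent XOR, which is one classical motivation for the multi-layer perceptrons and back propagation covered next.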

TOTAL: 45 PERIODS

COURSE OUTCOMES:

CO1: Understand and outline problems for each type of machine learning.
CO2: Design a decision tree and random forest for an application.
CO3: Implement probabilistic discriminative and generative algorithms for an application and analyze the results.
CO4: Use a tool to implement typical clustering algorithms for different types of applications.
CO5: Design and implement an HMM for a sequence-model type of application.

REFERENCES:

1. Kevin P. Murphy, “Probabilistic Machine Learning: An Introduction”, MIT Press, 2022.
2. Kevin P. Murphy, “Machine Learning: A Probabilistic Perspective”, MIT Press, 2012.
3. Peter Flach, “Machine Learning: The Art and Science of Algorithms that Make Sense of Data”, First Edition, Cambridge University Press, 2012.
4. Stephen Marsland, “Machine Learning – An Algorithmic Perspective”, Second Edition, Chapman and Hall/CRC Press, 2014.
5. Ethem Alpaydin, “Introduction to Machine Learning”, Third Edition, Adaptive Computation and Machine Learning Series, MIT Press, 2014.
6. Tom M. Mitchell, “Machine Learning”, McGraw Hill Education, 2013.
7. Shai Shalev-Shwartz and Shai Ben-David, “Understanding Machine Learning: From Theory to Algorithms”, Cambridge University Press, 2015.