DS4017 Machine Learning and Deep Learning Syllabus – Anna University PG Syllabus Regulation 2021
COURSE OBJECTIVES:
• To study various learning techniques.
• To develop appropriate machine learning techniques.
• To understand the basic concepts of deep learning.
• To understand CNN and RNN models for real-world applications.
• To understand the various challenges involved in designing deep learning algorithms for varied applications.
UNIT I CONCEPT LEARNING AND DECISION-TREE LEARNING
Machine Learning – Basics of Machine Learning – Applications – Learning Associations – Classification – Regression – Unsupervised Learning – Reinforcement Learning – Supervised Learning: Regression – Model Selection and Generalization. Concept Learning – Finding a Maximally Specific Hypothesis – Version Spaces and the Candidate Elimination Algorithm – Inductive Bias. Decision Tree Learning – Decision Tree Representation – Problems for Decision Tree Learning – Hypothesis Search Space – Inductive Bias in Decision Tree Learning – Issues in Decision Tree Learning.
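The "finding a maximally specific hypothesis" topic above can be sketched briefly. A minimal Find-S implementation (Mitchell's algorithm), assuming attribute-value training examples; the EnjoySport-style attributes and data below are illustrative, not part of the syllabus:

```python
# Find-S: compute the maximally specific hypothesis consistent with the
# positive training examples. '?' means "any value"; a concrete value
# must match exactly.

def find_s(examples):
    """examples: list of (attribute_tuple, label) pairs."""
    positives = [x for x, label in examples if label]
    if not positives:
        return None
    # Start from the first positive example (most specific hypothesis).
    hypothesis = list(positives[0])
    for x in positives[1:]:
        for i, value in enumerate(x):
            if hypothesis[i] != value:
                hypothesis[i] = '?'   # generalize minimally
    return tuple(hypothesis)

# Illustrative EnjoySport-style data: (Sky, AirTemp, Humidity, Wind)
data = [
    (('Sunny', 'Warm', 'Normal', 'Strong'), True),
    (('Sunny', 'Warm', 'High', 'Strong'), True),
    (('Rainy', 'Cold', 'High', 'Strong'), False),
    (('Sunny', 'Warm', 'High', 'Strong'), True),
]
print(find_s(data))  # ('Sunny', 'Warm', '?', 'Strong')
```

Note that Find-S ignores negative examples entirely; the candidate elimination algorithm extends it by also maintaining the general boundary of the version space.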
UNIT II CLUSTERING AND REINFORCEMENT LEARNING
Similarity-Based Clustering – Unsupervised Learning Problems – Hierarchical Agglomerative Clustering (HAC) – Single-link, Complete-link, and Group-average Similarity – k-Means and Mixtures of Gaussians – Flat Clustering: the k-Means Algorithm – Mixture of Gaussians Model – EM Algorithm for the Mixture of Gaussians Model. Reinforcement Learning: The Learning Task – Q Learning – The Q Function – Algorithm for Q Learning – Convergence – Experimentation Strategies – Updating Sequence – Nondeterministic Rewards and Actions – Temporal Difference Learning.
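The flat-clustering portion of this unit can be illustrated with a short sketch of Lloyd's k-means algorithm on 2-D points (pure Python; the sample data is illustrative):

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Flat clustering via Lloyd's k-means: alternate assignment and update."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)   # initialize centers at random data points
    for _ in range(iters):
        # Assignment step: each point goes to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2
                                + (p[1] - centers[c][1]) ** 2)
            clusters[j].append(p)
        # Update step: move each center to the mean of its cluster.
        for j, cl in enumerate(clusters):
            if cl:
                centers[j] = (sum(p[0] for p in cl) / len(cl),
                              sum(p[1] for p in cl) / len(cl))
    return centers

# Two well-separated blobs; k-means should place one center per blob.
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
print(sorted(kmeans(pts, 2)))
```

The EM algorithm for a mixture of Gaussians covered in this unit has the same alternating structure, but with soft (probabilistic) assignments in place of the hard assignment step.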
UNIT III INTRODUCTION TO DEEP LEARNING
Biological Neuron, Idea of Computational Units, McCulloch–Pitts Unit and Thresholding Logic, Linear Perceptron, Perceptron Learning Algorithm, Linear Separability, Convergence Theorem for the Perceptron Learning Algorithm. Feedforward Networks: Multilayer Perceptron, Backpropagation, Radial Basis Function Networks.
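The perceptron learning algorithm in this unit can be sketched in a few lines. A minimal single-layer perceptron trained on the AND gate, which is linearly separable, so the convergence theorem guarantees the loop terminates with a correct classifier:

```python
def train_perceptron(data, epochs=10, lr=1.0):
    """Perceptron learning rule with a hard threshold at 0."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in data:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred          # +1, 0, or -1
            # Update only on misclassification (err == 0 leaves w unchanged).
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# AND gate: linearly separable, learnable by a single-layer perceptron.
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_data)
print(w, b)
```

XOR, by contrast, is not linearly separable and cannot be learned by this single unit; that limitation motivates the multilayer perceptron and backpropagation listed above.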
UNIT IV CONVOLUTIONAL AND RECURRENT NEURAL NETWORKS
Convolutional Networks: The Convolution Operation – Variants of the Basic Convolution Function – Structured Outputs – Data Types – Efficient Convolution Algorithms – Random or Unsupervised Features – LeNet, AlexNet. Recurrent Neural Networks: Bidirectional RNNs – Deep Recurrent Networks – Recursive Neural Networks – The Long Short-Term Memory and Gated RNNs – Autoencoders.
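The convolution operation at the start of this unit can be sketched directly. A pure-Python "valid" 2-D filter pass (deep learning frameworks actually compute cross-correlation, i.e. the kernel is not flipped, and call it convolution); the input and kernel are illustrative:

```python
def conv2d_valid(image, kernel):
    """'Valid' 2-D cross-correlation: slide the kernel over the image
    with no padding, producing a smaller output (feature map)."""
    H, W = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(H - kh + 1):
        row = []
        for j in range(W - kw + 1):
            # Sum of elementwise products over the kernel window.
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
edge = [[1, -1]]                 # horizontal difference filter
print(conv2d_valid(img, edge))   # [[-1, -1], [-1, -1], [-1, -1]]
```

Each output cell depends only on a local window of the input, which is the weight-sharing/local-connectivity idea that distinguishes convolutional layers from fully connected ones.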
UNIT V DEEP GENERATIVE MODELS
Deep Generative Models: Boltzmann Machines – Restricted Boltzmann Machines – Introduction to MCMC and Gibbs Sampling – Gradient Computations in RBMs – Deep Belief Networks – Deep Boltzmann Machines. Applications: Large-Scale Deep Learning – Computer Vision – Speech Recognition – Natural Language Processing.
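The Gibbs sampling topic can be made concrete with one block-Gibbs step in a binary RBM. Because the RBM graph is bipartite, p(h|v) and p(v|h) each factorize over units, so a whole layer is sampled at once. The weights and sizes below are illustrative, not from a trained model:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gibbs_step(v, W, b_h, b_v, rng):
    """One block-Gibbs step in a binary RBM:
    sample h ~ p(h|v), then v' ~ p(v|h)."""
    n_v, n_h = len(b_v), len(b_h)
    # p(h_j = 1 | v) = sigmoid(b_h[j] + sum_i W[i][j] * v[i])
    h = [1 if rng.random() < sigmoid(b_h[j] + sum(W[i][j] * v[i]
                                                  for i in range(n_v)))
         else 0
         for j in range(n_h)]
    # p(v_i = 1 | h) = sigmoid(b_v[i] + sum_j W[i][j] * h[j])
    v_new = [1 if rng.random() < sigmoid(b_v[i] + sum(W[i][j] * h[j]
                                                      for j in range(n_h)))
             else 0
             for i in range(n_v)]
    return v_new, h

rng = random.Random(0)
W = [[0.5, -0.2], [0.1, 0.3], [-0.4, 0.2]]   # 3 visible x 2 hidden (illustrative)
v, h = gibbs_step([1, 0, 1], W, b_h=[0.0, 0.0], b_v=[0.0, 0.0, 0.0], rng=rng)
```

Contrastive-divergence training of an RBM runs exactly this alternation for a few steps to approximate the model's expectation in the gradient.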
PRACTICAL EXERCISES: 30 PERIODS
1. Development of the k-nearest neighbors algorithm for classification of image data.
2. Implementation of the k-means clustering algorithm for binary and multi-class classification of image data.
3. Development of the expectation-maximization (EM) algorithm for binary classification of data; find the probabilities, means, and variances of the respective classes.
4. Implement the principal component analysis (PCA) technique on 2-D data and determine the eigenvectors. Plot the PCA space of the first two PCs.
5. Implement the linear discriminant analysis (LDA) technique for data classification.
6. Design a feature map of given data using the convolution and pooling operations of a convolutional neural network (CNN).
7. Implementation of AND/OR/NOT gates using a single-layer perceptron.
8. Implement a finite-words classification system using the back-propagation algorithm.
9. Construct a Bayesian network considering medical data.
10. Use machine learning and deep learning techniques to solve image-related problems.
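As a starting point for the PCA exercise, a hedged sketch: for 2-D data the covariance matrix is 2×2, so its eigenvalues come directly from the characteristic polynomial (pure Python here for self-containment; in practice one would use NumPy). The sample data is illustrative:

```python
import math

def pca_2d(points):
    """PCA on 2-D data: eigenvalues and unit eigenvectors of the
    2x2 covariance matrix via the characteristic polynomial."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    # Population covariance matrix [[sxx, sxy], [sxy, syy]].
    sxx = sum((x - mx) ** 2 for x, _ in points) / n
    syy = sum((y - my) ** 2 for _, y in points) / n
    sxy = sum((x - mx) * (y - my) for x, y in points) / n
    # Eigenvalues: roots of l^2 - tr*l + det = 0, largest first.
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    disc = math.sqrt(tr * tr / 4 - det)
    eigvals = (tr / 2 + disc, tr / 2 - disc)

    def eigvec(l):
        # (l - syy, sxy) solves (A - l*I)v = 0 when sxy != 0.
        if abs(sxy) > 1e-12:
            v = (l - syy, sxy)
        else:  # already diagonal: eigenvectors are the axes
            v = (1.0, 0.0) if abs(l - sxx) < abs(l - syy) else (0.0, 1.0)
        norm = math.hypot(*v)
        return (v[0] / norm, v[1] / norm)

    return eigvals, [eigvec(l) for l in eigvals]

data = [(2.5, 2.4), (0.5, 0.7), (2.2, 2.9), (1.9, 2.2), (3.1, 3.0),
        (2.3, 2.7), (2.0, 1.6), (1.0, 1.1), (1.5, 1.6), (1.1, 0.9)]
vals, vecs = pca_2d(data)
```

Projecting the mean-centered data onto `vecs[0]` and `vecs[1]` gives the PC1/PC2 coordinates to plot; since the covariance matrix is symmetric, the two eigenvectors are orthogonal.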
COURSE OUTCOMES:
CO1: Acquire knowledge of various learning techniques such as decision tree, analytical, inductive, and reinforcement learning.
CO2: Develop appropriate machine learning techniques for information science applications.
CO3: Understand the basic concepts of deep learning.
CO4: Understand CNN and RNN models for real-world applications.
CO5: Understand the various challenges involved in designing deep learning algorithms for varied applications.
TOTAL: 75 PERIODS
REFERENCES:
1. Ethem Alpaydin, "Introduction to Machine Learning", The MIT Press, September 2014, ISBN 978-0-262-02818-9.
2. Tom Mitchell, "Machine Learning", McGraw-Hill, New York, First Edition, 2017.
3. Ian Goodfellow, Yoshua Bengio, Aaron Courville, "Deep Learning (Adaptive Computation and Machine Learning series)", MIT Press, 2016.
4. Stephen Marsland, "Machine Learning: An Algorithmic Perspective", Chapman & Hall/CRC, 2009.
5. Mehryar Mohri, Afshin Rostamizadeh, Ameet Talwalkar, "Foundations of Machine Learning", MIT Press (MA), 2012.
6. Yoshua Bengio, "Learning Deep Architectures for AI", Foundations and Trends in Machine Learning, Now Publishers Inc., 2009.
7. N. D. Lewis, "Deep Learning Made Easy with R: A Gentle Introduction for Data Science", January 2016.