DS4017 Machine Learning and Deep Learning Syllabus – Anna University PG Syllabus Regulation 2021

COURSE OBJECTIVES:

 To study various learning techniques.
 To develop appropriate machine learning techniques for a given problem.
 To understand the basic concepts of deep learning.
 To understand CNN and RNN models for real-world applications.
 To understand the various challenges involved in designing deep learning algorithms for varied applications.

UNIT I CONCEPT LEARNING AND DECISION-TREE LEARNING

Machine Learning – Basics of Machine Learning Applications – Learning Associations – Classification – Regression – Unsupervised Learning – Reinforcement Learning – Supervised Learning: Regression – Model Selection and Generalization. Concept Learning – Finding a Maximally Specific Hypothesis – Version Spaces and the Candidate Elimination Algorithm – Inductive Bias. Decision Tree Learning – Decision Tree Representation – Problems for Decision Tree Learning – Hypothesis Search Space – Inductive Bias in Decision Tree Learning – Issues in Decision Tree Learning.
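
A minimal illustrative sketch (not prescribed by the syllabus) of the Find-S step for "finding a maximally specific hypothesis" listed above; the toy weather-style attributes and labels are assumed purely for demonstration:

# Illustrative sketch of the Find-S algorithm: builds the most specific
# hypothesis consistent with the positive training examples. The toy
# weather data below is assumed purely for demonstration.

def find_s(examples):
    """examples: list of (attribute_tuple, label) with label 'yes'/'no'."""
    hypothesis = None
    for attributes, label in examples:
        if label != "yes":            # Find-S ignores negative examples
            continue
        if hypothesis is None:        # initialise with the first positive example
            hypothesis = list(attributes)
        else:                         # generalise attributes that disagree
            hypothesis = [h if h == a else "?" for h, a in zip(hypothesis, attributes)]
    return hypothesis

training_data = [
    (("sunny", "warm", "normal", "strong"), "yes"),
    (("sunny", "warm", "high",   "strong"), "yes"),
    (("rainy", "cold", "high",   "strong"), "no"),
    (("sunny", "warm", "high",   "strong"), "yes"),
]

print(find_s(training_data))   # ['sunny', 'warm', '?', 'strong']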

UNIT II CLUSTERING AND REINFORCEMENT LEARNING

Similarity-Based Clustering – Unsupervised Learning Problems – Hierarchical Agglomerative Clustering (HAC) – Single-Link, Complete-Link and Group-Average Similarity – k-Means and Mixtures of Gaussians – Flat Clustering: k-Means Algorithm – Mixture of Gaussians Model – EM Algorithm for the Mixture of Gaussians Model. Learning Task – Q-Learning – The Q Function – Algorithm for Q-Learning – Convergence – Experimentation Strategies – Updating Sequence – Nondeterministic Rewards and Actions – Temporal Difference Learning.
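
A minimal illustrative sketch of the tabular Q-learning update covered in this unit; the tiny chain environment, reward scheme and hyperparameters are assumptions made only for demonstration:

import random

# Illustrative tabular Q-learning sketch on a tiny deterministic chain
# environment: states 0..4, actions 0 = left and 1 = right, reward 1 for
# reaching state 4. Environment and hyperparameters are assumed for
# demonstration only.

N_STATES = 5
ACTIONS = (0, 1)
ALPHA, GAMMA, EPSILON, EPISODES = 0.5, 0.9, 0.2, 500

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Deterministic transition; reaching the last state ends the episode."""
    next_state = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward, next_state == N_STATES - 1

for _ in range(EPISODES):
    state, done = 0, False
    while not done:
        q_left, q_right = Q[(state, 0)], Q[(state, 1)]
        # epsilon-greedy selection, breaking ties randomly
        if random.random() < EPSILON or q_left == q_right:
            action = random.choice(ACTIONS)
        else:
            action = 0 if q_left > q_right else 1
        next_state, reward, done = step(state, action)
        # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

# Greedy policy per state (action 1, "move right", is expected for states 0..3).
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)])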

UNIT III INTRODUCTION TO DEEP LEARNING

Biological Neuron, Idea of computational units, McCulloch–Pitts unit and Thresholding logic, Linear Perceptron, Perceptron Learning Algorithm, Linear separability. Convergence theorem for the Perceptron Learning Algorithm. Feedforward Networks: Multilayer Perceptron, Backpropagation, Radial basis function networks.
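
A minimal illustrative sketch of the Perceptron Learning Algorithm listed above, trained on the linearly separable AND problem; the data, learning rate and epoch budget are assumed for demonstration:

import numpy as np

# Illustrative sketch of the perceptron learning rule on a linearly
# separable toy problem (logical AND). Data and learning rate are
# assumed here purely for demonstration.

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1])          # AND of the two inputs

w = np.zeros(X.shape[1])
b = 0.0
lr = 0.1

for epoch in range(20):
    errors = 0
    for xi, target in zip(X, y):
        prediction = 1 if np.dot(w, xi) + b > 0 else 0   # thresholding unit
        update = lr * (target - prediction)              # perceptron rule
        w += update * xi
        b += update
        errors += int(update != 0)
    if errors == 0:                  # converged: every point classified correctly
        break

print(w, b)
print([1 if np.dot(w, xi) + b > 0 else 0 for xi in X])   # expect [0, 0, 0, 1]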

UNIT IV CONVOLUTIONAL AND RECURRENT NEURAL NETWORKS

Convolutional Networks: The Convolution Operation – Variants of the Basic Convolution Function – Structured Outputs – Data Types – Efficient Convolution Algorithms – Random or Unsupervised Features – LeNet, AlexNet. Recurrent Neural Networks: Bidirectional RNNs – Deep Recurrent Networks – Recursive Neural Networks – The Long Short-Term Memory and Gated RNNs – Autoencoders.
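
A minimal illustrative sketch of the convolution operation and pooling used in convolutional networks; the input values, kernel and pooling window are assumed for demonstration:

import numpy as np

# Illustrative sketch of the discrete 2-D convolution (cross-correlation as
# used in CNN layers) followed by 2x2 max pooling. Input and kernel values
# are assumed for demonstration only.

def conv2d(image, kernel):
    """'Valid' cross-correlation of a single-channel image with a kernel."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(feature_map, size=2):
    """Non-overlapping max pooling with the given window size."""
    h, w = feature_map.shape[0] // size, feature_map.shape[1] // size
    return feature_map[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

image = np.arange(36, dtype=float).reshape(6, 6)
edge_kernel = np.array([[1.0, 0.0, -1.0]] * 3)   # simple vertical-edge detector

feature_map = conv2d(image, edge_kernel)
print(feature_map.shape)            # (4, 4)
print(max_pool(feature_map).shape)  # (2, 2)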

UNIT V DEEP GENERATIVE MODELS

Deep Generative Models: Boltzmann Machines – Restricted Boltzmann Machines – Introduction to MCMC and Gibbs Sampling – Gradient Computations in RBMs – Deep Belief Networks – Deep Boltzmann Machines. Applications: Large-Scale Deep Learning – Computer Vision – Speech Recognition – Natural Language Processing.
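
A minimal illustrative sketch of one block-Gibbs sampling step in a binary Restricted Boltzmann Machine, as introduced in this unit; the layer sizes and randomly initialized weights are assumptions for demonstration only:

import numpy as np

# Illustrative sketch of one block-Gibbs sampling step in a binary RBM:
# sample hidden units given the visible layer, then resample the visible
# layer given the hidden sample. Weights and the toy visible vector are
# random assumptions for demonstration.

rng = np.random.default_rng(0)
n_visible, n_hidden = 6, 3

W = rng.normal(scale=0.1, size=(n_visible, n_hidden))   # weight matrix
b = np.zeros(n_visible)                                  # visible biases
c = np.zeros(n_hidden)                                   # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_step(v):
    """One v -> h -> v' block-Gibbs step."""
    p_h = sigmoid(v @ W + c)                 # P(h_j = 1 | v)
    h = (rng.random(n_hidden) < p_h).astype(float)
    p_v = sigmoid(h @ W.T + b)               # P(v_i = 1 | h)
    v_new = (rng.random(n_visible) < p_v).astype(float)
    return v_new, h

v0 = rng.integers(0, 2, size=n_visible).astype(float)
v1, h0 = gibbs_step(v0)
print(v0, "->", v1)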

PRACTICAL EXERCISES: 30 PERIODS

1. Development of the k-nearest neighbors (k-NN) algorithm for classification of image data (a minimal illustrative sketch follows this list).
2. Implementation of the k-means clustering algorithm for binary and multi-class classification of image data.
3. Development of the expectation-maximization (EM) algorithm for binary classification of data, finding the probabilities, means and variances of the respective classes.
4. Implement the principal component analysis (PCA) technique on 2-D data and determine the eigenvectors. Plot the PCA space of the first two PCs.
5. Implement the linear discriminant analysis (LDA) technique for data classification.
6. Design a feature map of given data using the convolution and pooling operations of a convolutional neural network (CNN).
7. Implementation of AND/OR/NOT gates using a single-layer perceptron.
8. Implement a finite-words classification system using the back-propagation algorithm.
9. Construct a Bayesian network considering medical data.
10. Use machine learning and deep learning techniques for solving image-related problems.
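
A minimal illustrative sketch for exercise 1 above (k-nearest neighbors classification); flattened toy feature vectors stand in for image data, and the samples and value of k are assumed for demonstration:

import numpy as np

# Minimal sketch of a k-nearest-neighbours classifier. Flattened feature
# vectors stand in for image data; the toy samples and k=3 are assumed
# for demonstration only.

def knn_predict(train_X, train_y, query, k=3):
    """Classify `query` by majority vote among its k nearest training points."""
    distances = np.linalg.norm(train_X - query, axis=1)   # Euclidean distances
    nearest = np.argsort(distances)[:k]                   # indices of k closest
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]

# Two toy "classes" of 4-dimensional feature vectors.
train_X = np.array([[0, 0, 0, 0], [0, 1, 0, 1], [1, 0, 1, 0],
                    [9, 9, 9, 9], [8, 9, 9, 8], [9, 8, 8, 9]], dtype=float)
train_y = np.array([0, 0, 0, 1, 1, 1])

print(knn_predict(train_X, train_y, np.array([1.0, 1.0, 0.0, 0.0])))  # expect 0
print(knn_predict(train_X, train_y, np.array([8.0, 8.0, 9.0, 9.0])))  # expect 1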

COURSE OUTCOMES:

CO1: Acquire knowledge of various learning techniques such as decision tree, analytical, inductive and reinforcement learning.
CO2: Develop appropriate machine learning techniques for information science applications.
CO3: Understand the basic concepts of deep learning.
CO4: Understand CNN and RNN models for real-world applications.
CO5: Understand the various challenges involved in designing deep learning algorithms for varied applications.

TOTAL: 75 PERIODS

REFERENCES:

1. Ethem Alpaydin, “Introduction to Machine Learning”, The MIT Press, September 2014, ISBN 978-0-262-02818-9.
2. Tom Mitchell, “Machine Learning”, McGraw-Hill, New York, First Edition, 2017.
3. Ian Goodfellow, Yoshua Bengio, Aaron Courville, “Deep Learning (Adaptive Computation and Machine Learning series)”, MIT Press, 2016.
4. Stephen Marsland, “Machine Learning: An Algorithmic Perspective”, Chapman & Hall/CRC, 2009.
5. Mehryar Mohri, Afshin Rostamizadeh, Ameet Talwalkar, “Foundations of Machine Learning”, MIT Press (MA), 2012.
6. Yoshua Bengio, “Learning Deep Architectures for AI”, Foundations and Trends in Machine Learning, Now Publishers Inc., 2009.
7. N. D. Lewis, “Deep Learning Made Easy with R: A Gentle Introduction for Data Science”, January 2016.