This information is indicative and subject to change.
Topics in machine learning B
Teacher: Sonia Vanier
E-mail: [email protected]
ECTS: 2.5
Evaluation: Final exam
Provisional place and time: 9 sessions (2 hours per session)
Prerequisites: Attendance of the first- and second-semester machine learning courses and familiarity with programming.
Aim of the course:
Syllabus:
Optimization approaches for machine learning
Nearest neighbor search:
• linear scan, kd-trees, k-nearest neighbors, application to retrieval in databases
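For illustration only (not part of the official syllabus), a minimal C++ sketch of the linear-scan approach; the names squared_distance and knn_linear_scan are placeholders of my own.

// Illustrative sketch: brute-force k-nearest-neighbor query by linear scan.
#include <algorithm>
#include <cstddef>
#include <vector>

double squared_distance(const std::vector<double>& a, const std::vector<double>& b) {
    double d = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) d += (a[i] - b[i]) * (a[i] - b[i]);
    return d;
}

// Linear scan: one distance per point, then keep the k smallest (assumes k <= data.size()).
std::vector<std::size_t> knn_linear_scan(const std::vector<std::vector<double>>& data,
                                         const std::vector<double>& query,
                                         std::size_t k) {
    std::vector<double> dist(data.size());
    for (std::size_t i = 0; i < data.size(); ++i) dist[i] = squared_distance(data[i], query);

    std::vector<std::size_t> idx(data.size());
    for (std::size_t i = 0; i < idx.size(); ++i) idx[i] = i;
    std::partial_sort(idx.begin(), idx.begin() + k, idx.end(),
                      [&dist](std::size_t a, std::size_t b) { return dist[a] < dist[b]; });
    idx.resize(k);
    return idx;
}

A kd-tree, covered alongside this topic, avoids the full O(n) scan by pruning whole regions of space during the query.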
Unsupervised learning
• k-means: objective function, Lloyd’s algorithm, initialization (random, k-means++), choosing k (silhouette) (see the sketch after this list)
• hierarchical clustering
• density estimation (1h): parametric (Gaussians, mixtures), nonparametric (histograms, kernels), noise filtering, clustering (DBSCAN)
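For illustration only, a minimal C++ sketch of Lloyd’s algorithm with random initialization (k-means++ and the choice of k are omitted); the function name kmeans is a placeholder.

// Illustrative sketch: Lloyd's algorithm for k-means (assumes non-empty data and k >= 1).
#include <cstddef>
#include <limits>
#include <random>
#include <vector>

using Point = std::vector<double>;

double squared_distance(const Point& a, const Point& b) {
    double d = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) d += (a[i] - b[i]) * (a[i] - b[i]);
    return d;
}

// Returns the cluster index of each point after a fixed number of Lloyd iterations.
std::vector<std::size_t> kmeans(const std::vector<Point>& data, std::size_t k, int iters = 100) {
    std::mt19937 rng(0);
    std::uniform_int_distribution<std::size_t> pick(0, data.size() - 1);
    std::vector<Point> centers(k);
    for (auto& c : centers) c = data[pick(rng)];   // random initialization

    std::vector<std::size_t> assign(data.size(), 0);
    for (int it = 0; it < iters; ++it) {
        // Assignment step: attach every point to its nearest center.
        for (std::size_t i = 0; i < data.size(); ++i) {
            double best = std::numeric_limits<double>::max();
            for (std::size_t c = 0; c < k; ++c) {
                double d = squared_distance(data[i], centers[c]);
                if (d < best) { best = d; assign[i] = c; }
            }
        }
        // Update step: move each center to the mean of its assigned points.
        std::vector<Point> sums(k, Point(data[0].size(), 0.0));
        std::vector<std::size_t> counts(k, 0);
        for (std::size_t i = 0; i < data.size(); ++i) {
            for (std::size_t d = 0; d < data[i].size(); ++d) sums[assign[i]][d] += data[i][d];
            ++counts[assign[i]];
        }
        for (std::size_t c = 0; c < k; ++c)
            if (counts[c] > 0)
                for (std::size_t d = 0; d < sums[c].size(); ++d) centers[c][d] = sums[c][d] / counts[c];
    }
    return assign;
}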
Supervised learning
• Mathematical framework: loss function, risk minimization, regularization, Bayes’ classifier and consistency
• k-NN classifier and regressor: universal consistency (Stone’s theorem), limitations
• evaluation: confusion matrix, accuracy/precision/recall/F1, ROC/AUC, cross-validation
• linear models for regression: quadratic loss and ordinary linear regression, basis functions, kernel trick
• linear models for classification: logistic regression (logistic loss), Support Vector Machines (hinge loss), kernel trick again
→ show implementation using libSVM
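For illustration only, a minimal C++ sketch of the kind of libSVM usage this refers to; it assumes the standard libSVM C API (svm_train, svm_predict), a header installed as svm.h, and hard-codes a tiny two-class problem.

// Illustrative sketch: training and querying a C-SVC with an RBF kernel through libSVM.
// Build by linking against libsvm (e.g. -lsvm); the header path may differ by installation.
#include <cstdio>
#include <svm.h>

int main() {
    // Four 2-D training points, labels +1 / -1; each sparse vector ends with index -1.
    svm_node x[4][3] = {
        {{1, 0.0}, {2, 0.0}, {-1, 0.0}},
        {{1, 0.2}, {2, 0.1}, {-1, 0.0}},
        {{1, 1.0}, {2, 1.0}, {-1, 0.0}},
        {{1, 0.9}, {2, 1.1}, {-1, 0.0}},
    };
    svm_node* rows[4] = {x[0], x[1], x[2], x[3]};
    double labels[4] = {-1, -1, +1, +1};

    svm_problem prob;
    prob.l = 4;
    prob.x = rows;
    prob.y = labels;

    svm_parameter param = {};          // zero-initialize, then set what matters
    param.svm_type = C_SVC;            // soft-margin classification (hinge loss)
    param.kernel_type = RBF;           // Gaussian kernel
    param.gamma = 1.0;
    param.C = 10.0;
    param.cache_size = 100;            // MB
    param.eps = 1e-3;
    param.shrinking = 1;

    if (const char* err = svm_check_parameter(&prob, &param)) {
        std::printf("parameter error: %s\n", err);
        return 1;
    }
    svm_model* model = svm_train(&prob, &param);

    svm_node query[3] = {{1, 0.8}, {2, 0.9}, {-1, 0.0}};
    std::printf("predicted label: %f\n", svm_predict(model, query));

    svm_free_and_destroy_model(&model);
    return 0;
}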
Feature extraction
• A glimpse at feature design: descriptors for images/3d shapes/text/graphs
• dimensionality reduction: curse of dimensionality, linear discriminant analysis (again), PCA
→ show implementation using eigen::SVD
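For illustration only, a minimal C++ sketch of PCA via the SVD of the centered data matrix, assuming the Eigen library (Eigen::JacobiSVD); the data values are made up.

// Illustrative sketch: principal component analysis from a thin SVD of the centered data.
#include <Eigen/Dense>
#include <iostream>

int main() {
    // Rows are samples, columns are features (here: 5 samples in 3 dimensions).
    Eigen::MatrixXd X(5, 3);
    X << 2.5, 2.4, 0.5,
         0.5, 0.7, 1.1,
         2.2, 2.9, 0.4,
         1.9, 2.2, 0.6,
         3.1, 3.0, 0.2;

    // Center the data, then take its thin SVD: X_c = U * S * V^T.
    Eigen::RowVectorXd mean = X.colwise().mean();
    Eigen::MatrixXd Xc = X.rowwise() - mean;
    Eigen::JacobiSVD<Eigen::MatrixXd> svd(Xc, Eigen::ComputeThinU | Eigen::ComputeThinV);

    // The columns of V are the principal directions; keep the first two.
    const int k = 2;
    Eigen::MatrixXd components = svd.matrixV().leftCols(k);
    Eigen::MatrixXd projected = Xc * components;   // data expressed in the top-k subspace

    std::cout << "singular values:\n" << svd.singularValues().transpose() << "\n";
    std::cout << "projected data:\n" << projected << "\n";
    return 0;
}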
Neural networks: Perceptron, MLP, back-propagation, a glimpse at various classes of networks (CNNs, RNNs, LSTMs, etc.)
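For illustration only, a minimal C++ sketch of the perceptron learning rule on a tiny linearly separable data set; the data and constants are made up.

// Illustrative sketch: perceptron learning rule with labels +1 / -1.
#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    // Tiny 2-D training set (an AND-like, linearly separable problem).
    std::vector<std::vector<double>> X = {{0, 0}, {0, 1}, {1, 0}, {1, 1}};
    std::vector<int> y = {-1, -1, -1, +1};

    std::vector<double> w(2, 0.0);   // weights
    double b = 0.0;                  // bias
    const double lr = 1.0;           // learning rate

    // Repeat over the data; on every misclassified point, move the hyperplane toward it.
    for (int epoch = 0; epoch < 20; ++epoch) {
        for (std::size_t i = 0; i < X.size(); ++i) {
            double score = w[0] * X[i][0] + w[1] * X[i][1] + b;
            if (y[i] * score <= 0) {               // misclassified (or on the boundary)
                w[0] += lr * y[i] * X[i][0];
                w[1] += lr * y[i] * X[i][1];
                b    += lr * y[i];
            }
        }
    }
    std::cout << "w = (" << w[0] << ", " << w[1] << "), b = " << b << "\n";
    return 0;
}

Back-propagation generalizes this weight-update idea to multi-layer networks by propagating the gradient of the loss through each layer.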