Introduction to Machine Learning & SVM
UNC Greensboro
Dr. Qianqian Tong
Summary
These lecture notes introduce machine learning, covering supervised learning, unsupervised learning, reinforcement learning, and support vector machines (SVMs). They also discuss the basic principles of machine learning with practical examples.
Full Transcript
CS 405/605 Data Science, Dr. Qianqian Tong

Introduction to ML: Agenda
- ML basics
- Linear regression
- Classification: SVM (kernels)
- Decision tree & random forest
- Validation
- Dimensionality reduction: PCA
- Clustering: K-means
- Visualization

Introduction to ML
Traditional programming vs. machine learning: machine learning is about predicting the future based on the past.

Types of Machine Learning Algorithms
- Supervised (inductive) learning: training data includes desired outputs. Tasks: classification, regression/prediction.
- Unsupervised learning: training data does not include desired outputs.
- Semi-supervised learning: training data includes a few desired outputs.
- Reinforcement learning: rewards from a sequence of actions.

Unsupervised learning: given data (examples) but no labels. Clustering: grouping similar instances.

Supervised versus Unsupervised Learning

Reinforcement Learning
Given a *sequence* of examples/states and a *reward* after completing that sequence, learn to predict the action to take for an individual example/state.

There are tens of thousands of machine learning algorithms, with hundreds of new ones every year. Every machine learning algorithm has three components:
- Representation: what is the model design landscape?
- Evaluation: how is the model doing?
- Optimization: how can we get better models?
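The supervised/unsupervised distinction above can be sketched with scikit-learn. This is a minimal illustration on invented toy data; the choice of a 1-nearest-neighbor classifier and two clusters is arbitrary, not from the lecture:

```python
# Supervised vs. unsupervised learning on the same toy data (illustrative).
from sklearn.neighbors import KNeighborsClassifier
from sklearn.cluster import KMeans

X = [[0, 0], [0, 1], [5, 5], [5, 6]]

# Supervised: training data includes desired outputs (labels).
y = [0, 0, 1, 1]                                  # labels are provided
clf = KNeighborsClassifier(n_neighbors=1).fit(X, y)
print(clf.predict([[4, 5]]))                      # predicts a label for a new point

# Unsupervised: same data, no labels; the algorithm groups similar instances.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)                                 # cluster assignments discovered from data
```

Note that the classifier needs `y` at training time, while K-means receives only `X` and discovers the grouping itself.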
| Representation | Evaluation | Optimization |
| --- | --- | --- |
| Decision trees | Accuracy | Combinatorial optimization |
| Sets of rules / logic programs | Precision and recall | Greedy search |
| Instances | Squared error | Convex optimization |
| Graphical models (Bayes/Markov nets) | Likelihood | Gradient descent |
| Neural networks | Posterior probability | Nonconvex optimization |
| Support vector machines | Cost / utility | Stochastic gradient methods |
| Model ensembles | Margin | Constrained optimization... |
| | Entropy | Linear programming |
| | K-L divergence... | |

Growth of Machine Learning
Machine learning is the preferred approach to:
- Speech recognition, natural language processing
- Computer vision
- Medical outcomes analysis
- Robot control
- Web search
- Finance
- Social networks
- Big data

This trend is accelerating because of:
- Improved machine learning algorithms
- Improved data capture, networking, faster computers
- Software too complex to write by hand
- New sensors / IO devices

Scikit-learn
- Supervised learning: https://scikit-learn.org/stable/supervised_learning.html
- Unsupervised learning: https://scikit-learn.org/stable/unsupervised_learning.html

Typical workflow:
- Implement a model: from sklearn.linear_model import LinearRegression
- Train the model: model.fit( )
- Predict: model.predict( )
- Visualize the outcome: plt.plot( )

Notebooks:
- Introduction to Machine Learning: https://github.com/q-tong/CS405-605-Data-Science/blob/main/Fall2023/Lecture/4.Machine%20Learning/Machine_learning/0-Machine_Learning_Overview.ipynb
- Introduction to Scikit-Learn: Machine Learning with Python: https://github.com/q-tong/CS405-605-Data-Science/blob/main/Fall2023/Lecture/4.Machine%20Learning/Machine_learning/1-Machine-Learning-Intro.ipynb
- Basic Principles of Machine Learning: https://github.com/q-tong/CS405-605-Data-Science/blob/main/Fall2023/Lecture/4.Machine%20Learning/Machine_learning/2-Basic-Principles.ipynb

SVM: Support Vector Machines
Supervised learning:
- Classification (categorical)
- Regression (continuous)

SVM Key Concepts
- Hyperplane: a decision boundary that separates different classes.
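The four workflow steps named in the slides (implement, fit, predict, plot) can be sketched end to end. The synthetic data (y ≈ 2x + 1 plus noise) and the output filename are illustrative assumptions, not from the lecture:

```python
# Minimal scikit-learn workflow: implement -> fit -> predict -> plot.
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so the script also runs headless
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression

rng = np.random.RandomState(0)
X = rng.rand(50, 1) * 10                 # 50 samples, one feature
y = 2 * X.ravel() + 1 + rng.randn(50)    # linear trend plus noise

model = LinearRegression()               # implement a model
model.fit(X, y)                          # train the model
y_pred = model.predict(X)                # predict

plt.scatter(X, y)                        # visualize the outcome
plt.plot(X, y_pred, color="red")
plt.savefig("linear_fit.png")

print(model.coef_[0], model.intercept_)  # should recover roughly 2 and 1
```

The same fit/predict pattern carries over unchanged to the SVM classifiers discussed next; only the imported estimator changes.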
- Support vectors: the data points closest to the hyperplane.
- Margin: the distance between the hyperplane and the support vectors.

Maximizing the Margin
What a support vector machine does is not only draw a line, but consider a *region* about the line of some given width.

SVM – Scikit-learn
To better visualize what's happening here, let's create a quick convenience function that will plot SVM decision boundaries for us.

Going Further: Kernel Methods
Where SVM gets incredibly exciting is when it is used in conjunction with kernels. To motivate the need for kernels, let's look at some data which is not linearly separable. The *kernel trick* maps the 2-D data into a 3-D space using a *kernel*; in that 3-D space, a linear hyperplane can be used to separate the classes.

Radial Basis Function (RBF)
If we plot the transformed data along with the original, we can see that with this additional dimension the data becomes trivially linearly separable. This is a relatively simple kernel; SVM has a more sophisticated version built into the process, enabled with kernel='rbf', short for radial basis function.

In-Class Exercise: SVM Implementation with the Iris Dataset
Objective: implement an SVM classifier using the Iris dataset and visualize the decision boundaries. (15 minutes)
Steps:
1. Import the necessary libraries.
2. Load the Iris dataset.
3. Split the dataset into training and testing subsets.
4. Implement an SVM classifier and train it on the training subset.
5. Evaluate the classifier on the testing subset and print the accuracy.
6. Visualize the decision boundaries using appropriate plotting tools.
Example code:
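The exercise's example code does not appear in the transcript. Below is a minimal sketch of steps 1–5, assuming an RBF kernel and an illustrative 70/30 split with C=1.0 (these parameter choices are mine, not the instructor's); step 6, boundary plotting, is typically done on two of the four Iris features and is omitted here:

```python
# Sketch of the in-class exercise: SVM on the Iris dataset with an RBF kernel.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Load the Iris dataset (150 samples, 4 features, 3 classes).
X, y = load_iris(return_X_y=True)

# Split into training and testing subsets (70/30, fixed seed for repeatability).
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

# Implement and train an SVM classifier with the RBF kernel.
clf = SVC(kernel="rbf", C=1.0)
clf.fit(X_train, y_train)

# Evaluate on the held-out test subset and print the accuracy.
acc = accuracy_score(y_test, clf.predict(X_test))
print(f"Test accuracy: {acc:.3f}")
```

On this well-separated dataset, the RBF SVM typically achieves well over 90% test accuracy with default settings.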