Summary

This document provides an introduction to machine learning, covering linear regression, polynomial regression, and the three types of machine learning (supervised, unsupervised, and reinforcement). It explains key concepts such as cost functions and gradient descent, and discusses common evaluation metrics.

Full Transcript

Machine Learning Session 1

Agenda:
1. Introduction to Machine Learning
2. Linear Regression
3. Polynomial Regression

Introduction to Machine Learning

What is Machine Learning?
Machine Learning is a branch of artificial intelligence (AI) that focuses on building systems that can learn from and make decisions based on data. Instead of being explicitly programmed to perform a task, ML models identify patterns and learn from the data they are provided.

Types of Machine Learning
Supervised Learning: Models are trained on labeled data (i.e., input-output pairs) and make predictions based on this training. Examples include classification (e.g., spam detection) and regression (e.g., predicting house prices).
Unsupervised Learning: Models learn from unlabeled data by finding hidden patterns or intrinsic structures in the input data. Examples include clustering (e.g., customer segmentation) and association (e.g., market basket analysis).
Reinforcement Learning: Models learn by interacting with an environment and receiving rewards or penalties, like teaching a robot to navigate a maze.

Supervised vs. Unsupervised: Key Differences
Classification: The goal is to divide data into different categories using a decision boundary (e.g., separating spam emails from non-spam emails).
Regression: The goal is to predict continuous numerical values by fitting a line or curve to the data (e.g., predicting house prices).

Linear Regression
Linear regression is one of the simplest and most commonly used predictive modeling techniques. It models the relationship between a dependent (target) variable and one or more independent (predictor) variables by fitting a linear equation to the observed data. The objective of linear regression is to identify the line of best fit, known as the regression line, that minimizes the difference between the predicted values and the actual data points.

Simple Linear Regression
Simple linear regression uses one independent variable to predict a dependent variable. The relationship is modeled by a straight line:

y = b_0 + b_1 x

Multiple Linear Regression
Multiple linear regression extends simple linear regression by using two or more independent variables to predict a dependent variable. The equation is:

y = b_0 + b_1 x_1 + b_2 x_2 + ... + b_n x_n

(A short code sketch fitting both models appears at the end of this section.)

Polynomial Regression
Polynomial regression is a type of regression analysis in which the relationship between the independent variable x and the dependent variable y is modeled as an n-degree polynomial:

y = b_0 + b_1 x + b_2 x^2 + ... + b_n x^n

Unlike linear regression, which fits a straight line, polynomial regression fits a curve to the data.

Key Characteristics
1. Linear Model with Modifications
Although polynomial regression includes non-linear terms (x^2, x^3, and so on), the overall model is still considered linear in the parameters, because the coefficients (b_0, b_1, etc.) enter the equation linearly. These additional terms allow the model to fit curves to the data, improving accuracy when the relationship between the variables is non-linear.
2. Non-Linear Dataset
Polynomial regression is often used for datasets where the relationship between the independent variable (x) and the dependent variable (y) is non-linear. In the simple linear regression example, the model fits a straight line, but this does not always capture the true relationship. The polynomial model fits the data better by allowing for curves.
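To make the linear regression discussion concrete, here is a minimal sketch (not part of the original slides) that fits a simple and a multiple linear regression model with scikit-learn; the data values are made up for illustration.

import numpy as np
from sklearn.linear_model import LinearRegression

# Toy data: house area (m^2) and number of rooms vs. price (illustrative values).
X = np.array([[50, 2], [80, 3], [120, 4], [200, 5]])
y = np.array([150_000, 230_000, 340_000, 520_000])

# Simple linear regression: one predictor (area only).
simple = LinearRegression().fit(X[:, [0]], y)
print("simple:   y = %.1f + %.1f * x" % (simple.intercept_, simple.coef_[0]))

# Multiple linear regression: both predictors.
multiple = LinearRegression().fit(X, y)
print("multiple: intercept=%.1f, coefficients=%s" % (multiple.intercept_, multiple.coef_))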
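Similarly, the following sketch (again illustrative, using NumPy's polyfit) shows how a degree-2 polynomial can fit curved data more closely than a straight line; the data and the chosen degree are assumptions for the example.

import numpy as np

# Toy non-linear data: y roughly follows a quadratic function of x (illustrative).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.1, 1.9, 4.8, 9.7, 17.2, 26.0])

# Degree-1 fit (a straight line) vs. degree-2 fit (a curve).
line = np.polyfit(x, y, deg=1)
curve = np.polyfit(x, y, deg=2)

# Compare residuals: the quadratic should track the data more closely.
for name, coeffs in [("line", line), ("curve", curve)]:
    pred = np.polyval(coeffs, x)
    mse = np.mean((y - pred) ** 2)
    print(f"{name}: coefficients={np.round(coeffs, 2)}, MSE={mse:.3f}")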
Cost Function
The cost function assesses the performance of the linear regression model by measuring the difference between the predicted values and the actual observed values y.

Different Evaluation Metrics
Mean Squared Error (MSE): Measures the average of the squared differences between actual and predicted values. Commonly used because it effectively penalizes larger errors and is simple to calculate:

MSE = (1/n) * Σ (y_i - ŷ_i)^2

Root Mean Squared Error (RMSE): The square root of the MSE. It expresses error in the same units as the original data, making it easier to interpret:

RMSE = √MSE

Mean Absolute Error (MAE): Measures the average absolute difference between the actual and predicted values. It is less sensitive to outliers than MSE, providing a more robust evaluation metric:

MAE = (1/n) * Σ |y_i - ŷ_i|

(A short sketch computing all three metrics appears at the end of this section.)

Gradient Descent

Purpose of Gradient Descent
Gradient Descent is an optimization algorithm that iteratively adjusts the model parameters to minimize the cost function.

Cost Function and Partial Derivatives
The cost function used in Gradient Descent is the Mean Squared Error (MSE):

J(b_0, b_1) = (1/n) * Σ (y_i - (b_0 + b_1 x_i))^2

Parameter Update Rules
Gradient Descent iteratively adjusts the parameters using the following update rules (implemented in the sketch at the end of this section):

b_0 := b_0 - α * ∂J/∂b_0
b_1 := b_1 - α * ∂J/∂b_1

Here, α represents the learning rate, a small positive value that controls the size of each step taken in the opposite direction of the gradient to reach the minimum cost.

Iterative Process
1. Finding the Global Minimum
The objective of Gradient Descent is to reach the global minimum of the cost function. During each iteration, the parameters are updated based on the computed gradient, guiding the process toward the lowest point on the cost-function curve.
2. Effect of the Learning Rate (α)
The learning rate controls the size of the steps taken toward the minimum. A small learning rate leads to slow convergence, while a large learning rate can cause overshooting or oscillation around the minimum. Experimentation is usually necessary to determine a learning rate that achieves stable and efficient convergence.
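As a quick illustration of the three evaluation metrics above, here is a sketch with made-up predictions (not from the slides):

import numpy as np

y_true = np.array([3.0, 5.0, 7.5, 10.0])
y_pred = np.array([2.5, 5.5, 7.0, 11.0])

mse = np.mean((y_true - y_pred) ** 2)    # average squared error; penalizes large misses
rmse = np.sqrt(mse)                      # same units as y, easier to interpret
mae = np.mean(np.abs(y_true - y_pred))   # average absolute error; more robust to outliers

print(f"MSE={mse:.3f}  RMSE={rmse:.3f}  MAE={mae:.3f}")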
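And a minimal Gradient Descent sketch for simple linear regression, implementing the update rules above; the data, learning rate, and iteration count are assumptions chosen for illustration.

import numpy as np

# Toy data roughly following y = 1 + 2x (illustrative).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.2, 2.9, 5.1, 7.0, 9.1])

b0, b1 = 0.0, 0.0   # initial parameters
alpha = 0.05        # learning rate α (assumed; tune for stable convergence)
n = len(x)

for _ in range(2000):
    error = (b0 + b1 * x) - y
    # Partial derivatives of the MSE cost with respect to b0 and b1.
    grad_b0 = (2 / n) * np.sum(error)
    grad_b1 = (2 / n) * np.sum(error * x)
    # Step in the opposite direction of the gradient.
    b0 -= alpha * grad_b0
    b1 -= alpha * grad_b1

print(f"b0={b0:.3f}, b1={b1:.3f}")  # should approach roughly 1 and 2

Note how a larger alpha would make the updates overshoot and oscillate, while a much smaller one would need far more iterations — the learning-rate trade-off described above.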
