Stanford ML Lecture Notes PDF

Document Details

Uploaded by Deleted User

Stanford University

2023

Andrew Ng and Tengyu Ma

Tags

machine learning, artificial intelligence, supervised learning

Summary

Stanford CS229 lecture notes from June 11, 2023, covering supervised learning, classification, logistic regression, generalized linear models, and generative learning algorithms. Topics include neural networks, backpropagation, and matrix analysis.

Full Transcript


CS229 Lecture Notes
Andrew Ng and Tengyu Ma
June 11, 2023

Contents

Part I: Supervised learning
1 Linear regression
  1.1 LMS algorithm
  1.2 The normal equations
    1.2.1 Matrix derivatives
    1.2.2 Least squares revisited
  1.3 Probabilistic interpretation
  1.4 Locally weighted linear regression (optional reading)
2 Classification and logistic regression
  2.1 Logistic regression
  2.2 Digression: the perceptron learning algorithm
  2.3 Multi-class classification
  2.4 Another algorithm for maximizing ℓ(θ)
3 Generalized linear models
  3.1 The exponential family
  3.2 Constructing GLMs
    3.2.1 Ordinary least squares
    3.2.2 Logistic regression
4 Generative learning algorithms
  4.1 Gaussian discriminant analysis
    4.1.1 The multivariate normal distribution
    4.1.2 The Gaussian discriminant analysis model
    4.1.3 Discussion: GDA and logistic regression
  4.2 Naive Bayes (optional reading)
    4.2.1 Laplace smoothing
    4.2.2 Event models for text classification
5 Kernel methods
  5.1 Feature maps
  5.2 LMS (least mean squares) with features
  5.3 LMS with the kernel trick
  5.4 Properties of kernels
6 Support vector machines
  6.1 Margins: intuition
  6.2 Notation (optional reading)
  6.3 Functional and geometric margins (optional reading)
  6.4 The optimal margin classifier (optional reading)
  6.5 Lagrange duality (optional reading)
  6.6 Optimal margin classifiers: the dual form (optional reading)
  6.7 Regularization and the non-separable case (optional reading)
  6.8 The SMO algorithm (optional reading)
    6.8.1 Coordinate ascent
    6.8.2 SMO

Part II: Deep learning
7 Deep learning
  7.1 Supervised learning with non-linear models
  7.2 Neural networks
  7.3 Modules in modern neural networks
  7.4 Backpropagation
    7.4.1 Preliminaries on partial derivatives
    7.4.2 General strategy of backpropagation
    7.4.3 Backward functions for basic modules
    7.4.4 Back-propagation for MLPs
  7.5 Vectorization over training examples

Part III: Generalization and regularization
8 Generalization
  8.1 Bias-variance tradeoff
    8.1.1 A mathematical decomposition (for regression)
  8.2 The double descent phenomenon
  8.3 Sample complexity bounds (optional readings)
    8.3.1 Preliminaries
    8.3.2 The case of finite H
    8.3.3 The case of infinite H
9 Regularization and model selection
  9.1 Regularization
  9.2 Implicit regularization effect (optional reading)
  9.3 Model selection via cross validation
  9.4 Bayesian statistics and regularization
Part IV: Unsupervised learning
10 Clustering and the k-means algorithm
11 EM algorithms
  11.1 EM for mixture of Gaussians
  11.2 Jensen's inequality
  11.3 General EM algorithms
    11.3.1 Other interpretation of ELBO
  11.4 Mixture of Gaussians revisited
  11.5 Variational inference and variational auto-encoder (optional reading)
12 Principal components analysis
13 Independent components analysis
  13.1 ICA ambiguities
  13.2 Densities and linear transformations
  13.3 ICA algorithm
14 Self-supervised learning and foundation models
  14.1 Pretraining and adaptation
  14.2 Pretraining methods in computer vision
  14.3 Pretrained large language models
    14.3.1 Open up the blackbox of Transformers
    14.3.2 Zero-shot learning and in-context learning

Part V: Reinforcement Learning and Control
15 Reinforcement learning
  15.1 Markov decision processes
  15.2 Value iteration and policy iteration
  15.3 Learning a model for an MDP
  15.4 Continuous state MDPs
    15.4.1 Discretization
    15.4.2 Value function approximation
  15.5 Connections between Policy and Value Iteration (optional)
16 LQR, DDP and LQG
  16.1 Finite-horizon MDPs
  16.2 Linear Quadratic Regulation (LQR)
  16.3 From non-linear dynamics to LQR
    16.3.1 Linearization of dynamics
    16.3.2 Differential Dynamic Programming (DDP)
  16.4 Linear Quadratic Gaussian (LQG)
17 Policy Gradient (REINFORCE)

Part I: Supervised learning

Let's start by talking about a few examples of supervised learning problems. Suppose we have a dataset giving the living areas and prices of 47 houses from Portland, Oregon:

Living area (feet²)    Price ($1000s)
2104                   400
1600                   330
2400                   369
1416                   232
3000                   540
...                    ...

We can plot this data:

[Figure: scatter plot of housing prices (in $1000s) against living area (square feet).]

Given data like this, how can we learn to predict the prices of other houses in Portland, as a function of the size of their living areas?

To establish notation for future use, we'll use x(i) to denote the "input" variables (living area in this example), also called input features, and y(i) to denote the "output" or target variable that we are trying to predict (price). A pair (x(i), y(i)) is called a training example, and the dataset that we'll be using to learn—a list of n training examples {(x(i), y(i)); i = 1,..., n}—is called a training set. Note that the superscript "(i)" in the notation is simply an index into the training set, and has nothing to do with exponentiation. We will also use X to denote the space of input values, and Y the space of output values. In this example, X = Y = ℝ.

To describe the supervised learning problem slightly more formally, our goal is, given a training set, to learn a function h : X → Y so that h(x) is a "good" predictor for the corresponding value of y. For historical reasons, this function h is called a hypothesis.
Seen pictorially, the process is therefore like this:

[Figure: diagram showing a training set fed into a learning algorithm, which outputs a hypothesis h; h maps x (living area of a house) to a predicted y (predicted price of the house).]

When the target variable that we're trying to predict is continuous, such as in our housing example, we call the learning problem a regression problem. When y can take on only a small number of discrete values (such as if, given the living area, we wanted to predict if a dwelling is a house or an apartment, say), we call it a classification problem.

Chapter 1 Linear regression

To make our housing example more interesting, let's consider a slightly richer dataset in which we also know the number of bedrooms in each house:

Living area (feet²)    #bedrooms    Price ($1000s)
2104                   3            400
1600                   3            330
2400                   3            369
1416                   2            232
3000                   4            540
...                    ...          ...

Here, the x(i)'s are two-dimensional vectors in ℝ². For instance, x1(i) is the living area of the i-th house in the training set, and x2(i) is its number of bedrooms. (In general, when designing a learning problem, it will be up to you to decide what features to choose, so if you are out in Portland gathering housing data, you might also decide to include other features such as whether each house has a fireplace, the number of bathrooms, and so on. We'll say more about feature selection later, but for now let's take the features as given.)

To perform supervised learning, we must decide how we're going to represent functions/hypotheses h in a computer. As an initial choice, let's say we decide to approximate y as a linear function of x:

$$h_\theta(x) = \theta_0 + \theta_1 x_1 + \theta_2 x_2$$

Here, the θi's are the parameters (also called weights) parameterizing the space of linear functions mapping from X to Y. When there is no risk of confusion, we will drop the θ subscript in hθ(x), and write it more simply as h(x). To simplify our notation, we also introduce the convention of letting x0 = 1 (this is the intercept term), so that

$$h(x) = \sum_{i=0}^{d} \theta_i x_i = \theta^T x,$$

where on the right-hand side above we are viewing θ and x both as vectors, and here d is the number of input variables (not counting x0).

Now, given a training set, how do we pick, or learn, the parameters θ? One reasonable method seems to be to make h(x) close to y, at least for the training examples we have. To formalize this, we will define a function that measures, for each value of the θ's, how close the h(x(i))'s are to the corresponding y(i)'s. We define the cost function:

$$J(\theta) = \frac{1}{2}\sum_{i=1}^{n}\left(h_\theta(x^{(i)}) - y^{(i)}\right)^2.$$

If you've seen linear regression before, you may recognize this as the familiar least-squares cost function that gives rise to the ordinary least squares regression model. Whether or not you have seen it previously, let's keep going, and we'll eventually show this to be a special case of a much broader family of algorithms.

1.1 LMS algorithm

We want to choose θ so as to minimize J(θ). To do so, let's use a search algorithm that starts with some "initial guess" for θ, and that repeatedly changes θ to make J(θ) smaller, until hopefully we converge to a value of θ that minimizes J(θ). Specifically, let's consider the gradient descent algorithm, which starts with some initial θ, and repeatedly performs the update:

$$\theta_j := \theta_j - \alpha \frac{\partial}{\partial \theta_j} J(\theta).$$

(This update is simultaneously performed for all values of j = 0,..., d.) Here, α is called the learning rate. This is a very natural algorithm that repeatedly takes a step in the direction of steepest decrease of J.
In order to implement this algorithm, we have to work out what is the partial derivative term on the right hand side. Let's first work it out for the case where we have only one training example (x, y), so that we can neglect the sum in the definition of J. We have:

$$\begin{aligned} \frac{\partial}{\partial \theta_j} J(\theta) &= \frac{\partial}{\partial \theta_j}\,\frac{1}{2}\left(h_\theta(x) - y\right)^2 \\ &= 2\cdot\frac{1}{2}\left(h_\theta(x) - y\right)\cdot\frac{\partial}{\partial \theta_j}\left(h_\theta(x) - y\right) \\ &= \left(h_\theta(x) - y\right)\cdot\frac{\partial}{\partial \theta_j}\left(\sum_{i=0}^{d}\theta_i x_i - y\right) \\ &= \left(h_\theta(x) - y\right)x_j \end{aligned}$$

For a single training example, this gives the update rule:¹

$$\theta_j := \theta_j + \alpha\left(y^{(i)} - h_\theta(x^{(i)})\right)x_j^{(i)}.$$

The rule is called the LMS update rule (LMS stands for "least mean squares"), and is also known as the Widrow-Hoff learning rule. This rule has several properties that seem natural and intuitive. For instance, the magnitude of the update is proportional to the error term (y(i) − hθ(x(i))); thus, for instance, if we are encountering a training example on which our prediction nearly matches the actual value of y(i), then we find that there is little need to change the parameters; in contrast, a larger change to the parameters will be made if our prediction hθ(x(i)) has a large error (i.e., if it is very far from y(i)).

We'd derived the LMS rule for when there was only a single training example. There are two ways to modify this method for a training set of more than one example. The first is to replace it with the following algorithm:

Repeat until convergence {
    $$\theta_j := \theta_j + \alpha\sum_{i=1}^{n}\left(y^{(i)} - h_\theta(x^{(i)})\right)x_j^{(i)}, \quad \text{(for every } j\text{)} \qquad (1.1)$$
}

¹ We use the notation "a := b" to denote an operation (in a computer program) in which we set the value of a variable a to be equal to the value of b. In other words, this operation overwrites a with the value of b. In contrast, we will write "a = b" when we are asserting a statement of fact, that the value of a is equal to the value of b.

By grouping the updates of the coordinates into an update of the vector θ, we can rewrite update (1.1) in a slightly more succinct way:

$$\theta := \theta + \alpha\sum_{i=1}^{n}\left(y^{(i)} - h_\theta(x^{(i)})\right)x^{(i)}$$

The reader can easily verify that the quantity in the summation in the update rule above is just ∂J(θ)/∂θj (for the original definition of J). So, this is simply gradient descent on the original cost function J. This method looks at every example in the entire training set on every step, and is called batch gradient descent. Note that, while gradient descent can be susceptible to local minima in general, the optimization problem we have posed here for linear regression has only one global, and no other local, optima; thus gradient descent always converges (assuming the learning rate α is not too large) to the global minimum. Indeed, J is a convex quadratic function. Here is an example of gradient descent as it is run to minimize a quadratic function.

[Figure: contours of a quadratic function with the trajectory of gradient descent overlaid.]

The ellipses shown above are the contours of a quadratic function. Also shown is the trajectory taken by gradient descent, which was initialized at (48, 30). The x's in the figure (joined by straight lines) mark the successive values of θ that gradient descent went through. When we run batch gradient descent to fit θ on our previous dataset, to learn to predict housing price as a function of living area, we obtain θ0 = 71.27, θ1 = 0.1345.
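For concreteness, here is a minimal NumPy sketch of this batch gradient descent loop. It is not part of the original notes: the data is just the five-row excerpt from the table above, the living area is rescaled to thousands of square feet so that a plain learning rate converges quickly, and the learning-rate and iteration choices are illustrative, so the fitted numbers will not match the full-dataset values quoted above.

```python
import numpy as np

# Five example rows (living area rescaled to thousands of ft^2, price in $1000s).
living_area = np.array([2104, 1600, 2400, 1416, 3000]) / 1000.0
prices      = np.array([400, 330, 369, 232, 540], dtype=float)

X = np.column_stack([np.ones_like(living_area), living_area])  # x0 = 1 intercept column
y = prices

theta = np.zeros(X.shape[1])   # initial guess
alpha = 0.05                   # learning rate

for _ in range(20000):
    gradient = X.T @ (X @ theta - y)   # sum over examples of (h_theta(x) - y) * x
    theta = theta - alpha * gradient   # simultaneous update of every theta_j

print(theta)   # [theta_0, theta_1] for this five-house excerpt
```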
If we plot hθ(x) as a function of x (area), along with the training data, we obtain the following figure:

[Figure: the housing data scatter plot (price in $1000s versus square feet) with the fitted line hθ(x) overlaid.]

If the number of bedrooms were included as one of the input features as well, we get θ0 = 89.60, θ1 = 0.1392, θ2 = −8.738.

The above results were obtained with batch gradient descent. There is an alternative to batch gradient descent that also works very well. Consider the following algorithm:

Loop {
    for i = 1 to n, {
        $$\theta_j := \theta_j + \alpha\left(y^{(i)} - h_\theta(x^{(i)})\right)x_j^{(i)}, \quad \text{(for every } j\text{)} \qquad (1.2)$$
    }
}

By grouping the updates of the coordinates into an update of the vector θ, we can rewrite update (1.2) in a slightly more succinct way:

$$\theta := \theta + \alpha\left(y^{(i)} - h_\theta(x^{(i)})\right)x^{(i)}$$

In this algorithm, we repeatedly run through the training set, and each time we encounter a training example, we update the parameters according to the gradient of the error with respect to that single training example only. This algorithm is called stochastic gradient descent (also incremental gradient descent). Whereas batch gradient descent has to scan through the entire training set before taking a single step—a costly operation if n is large—stochastic gradient descent can start making progress right away, and continues to make progress with each example it looks at. Often, stochastic gradient descent gets θ "close" to the minimum much faster than batch gradient descent. (Note however that it may never "converge" to the minimum, and the parameters θ will keep oscillating around the minimum of J(θ); but in practice most of the values near the minimum will be reasonably good approximations to the true minimum.²) For these reasons, particularly when the training set is large, stochastic gradient descent is often preferred over batch gradient descent.

² By slowly letting the learning rate α decrease to zero as the algorithm runs, it is also possible to ensure that the parameters will converge to the global minimum rather than merely oscillate around the minimum.

1.2 The normal equations

Gradient descent gives one way of minimizing J. Let's discuss a second way of doing so, this time performing the minimization explicitly and without resorting to an iterative algorithm. In this method, we will minimize J by explicitly taking its derivatives with respect to the θj's, and setting them to zero. To enable us to do this without having to write reams of algebra and pages full of matrices of derivatives, let's introduce some notation for doing calculus with matrices.

1.2.1 Matrix derivatives

For a function f : ℝ^{n×d} → ℝ mapping from n-by-d matrices to the real numbers, we define the derivative of f with respect to A to be:

$$\nabla_A f(A) = \begin{bmatrix} \frac{\partial f}{\partial A_{11}} & \cdots & \frac{\partial f}{\partial A_{1d}} \\ \vdots & \ddots & \vdots \\ \frac{\partial f}{\partial A_{n1}} & \cdots & \frac{\partial f}{\partial A_{nd}} \end{bmatrix}$$

Thus, the gradient ∇A f(A) is itself an n-by-d matrix, whose (i, j)-element is ∂f/∂Aij. For example, suppose

$$A = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}$$

is a 2-by-2 matrix, and the function f : ℝ^{2×2} → ℝ is given by

$$f(A) = \frac{3}{2}A_{11} + 5A_{12}^2 + A_{21}A_{22}.$$

Here, Aij denotes the (i, j) entry of the matrix A. We then have

$$\nabla_A f(A) = \begin{bmatrix} \frac{3}{2} & 10A_{12} \\ A_{22} & A_{21} \end{bmatrix}.$$

1.2.2 Least squares revisited

Armed with the tools of matrix derivatives, let us now proceed to find in closed-form the value of θ that minimizes J(θ). We begin by re-writing J in matrix-vectorial notation.
Given a training set, define the design matrix X to be the n-by-d matrix (actually n-by-(d+1), if we include the intercept term) that contains the training examples' input values in its rows:

$$X = \begin{bmatrix} \text{---}\;(x^{(1)})^T\;\text{---} \\ \text{---}\;(x^{(2)})^T\;\text{---} \\ \vdots \\ \text{---}\;(x^{(n)})^T\;\text{---} \end{bmatrix}$$

Also, let ⃗y be the n-dimensional vector containing all the target values from the training set:

$$\vec{y} = \begin{bmatrix} y^{(1)} \\ y^{(2)} \\ \vdots \\ y^{(n)} \end{bmatrix}$$

Now, since hθ(x(i)) = (x(i))^T θ, we can easily verify that

$$X\theta - \vec{y} = \begin{bmatrix} (x^{(1)})^T\theta \\ \vdots \\ (x^{(n)})^T\theta \end{bmatrix} - \begin{bmatrix} y^{(1)} \\ \vdots \\ y^{(n)} \end{bmatrix} = \begin{bmatrix} h_\theta(x^{(1)}) - y^{(1)} \\ \vdots \\ h_\theta(x^{(n)}) - y^{(n)} \end{bmatrix}$$

Thus, using the fact that for a vector z, we have that z^T z = Σ_i z_i²:

$$\frac{1}{2}(X\theta - \vec{y})^T(X\theta - \vec{y}) = \frac{1}{2}\sum_{i=1}^{n}\left(h_\theta(x^{(i)}) - y^{(i)}\right)^2 = J(\theta)$$

Finally, to minimize J, let's find its derivatives with respect to θ. Hence,

$$\begin{aligned} \nabla_\theta J(\theta) &= \nabla_\theta\,\frac{1}{2}(X\theta - \vec{y})^T(X\theta - \vec{y}) \\ &= \frac{1}{2}\nabla_\theta\left((X\theta)^T X\theta - (X\theta)^T\vec{y} - \vec{y}^T(X\theta) + \vec{y}^T\vec{y}\right) \\ &= \frac{1}{2}\nabla_\theta\left(\theta^T(X^T X)\theta - \vec{y}^T(X\theta) - \vec{y}^T(X\theta)\right) \\ &= \frac{1}{2}\nabla_\theta\left(\theta^T(X^T X)\theta - 2(X^T\vec{y})^T\theta\right) \\ &= \frac{1}{2}\left(2X^T X\theta - 2X^T\vec{y}\right) \\ &= X^T X\theta - X^T\vec{y} \end{aligned}$$

In the third step, we used the fact that a^T b = b^T a, and in the fifth step used the facts ∇x b^T x = b and ∇x x^T Ax = 2Ax for symmetric matrix A (for more details, see Section 4.3 of "Linear Algebra Review and Reference"). To minimize J, we set its derivatives to zero, and obtain the normal equations:

$$X^T X\theta = X^T\vec{y}$$

Thus, the value of θ that minimizes J(θ) is given in closed form by the equation θ = (X^T X)^{−1} X^T ⃗y.³

³ Note that in the above step, we are implicitly assuming that X^T X is an invertible matrix. This can be checked before calculating the inverse. If either the number of linearly independent examples is fewer than the number of features, or if the features are not linearly independent, then X^T X will not be invertible. Even in such cases, it is possible to "fix" the situation with additional techniques, which we skip here for the sake of simplicity.
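In code, the closed-form solution above amounts to solving a small linear system. A minimal sketch, again on the illustrative five-row excerpt rather than the full dataset from the notes (in practice one solves the system rather than forming the inverse explicitly):

```python
import numpy as np

# Five example rows (living area in ft^2, price in $1000s); illustrative only.
living_area = np.array([2104, 1600, 2400, 1416, 3000], dtype=float)
prices      = np.array([400, 330, 369, 232, 540], dtype=float)

X = np.column_stack([np.ones_like(living_area), living_area])  # design matrix with x0 = 1
y = prices

# Normal equations: solve X^T X theta = X^T y.
theta = np.linalg.solve(X.T @ X, X.T @ y)
print(theta)            # [theta_0, theta_1] minimizing J(theta)

# Equivalent and more numerically robust: least squares via lstsq.
theta_ls, *_ = np.linalg.lstsq(X, y, rcond=None)
```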
1.3 Probabilistic interpretation

When faced with a regression problem, why might linear regression, and specifically why might the least-squares cost function J, be a reasonable choice? In this section, we will give a set of probabilistic assumptions, under which least-squares regression is derived as a very natural algorithm.

Let us assume that the target variables and the inputs are related via the equation

$$y^{(i)} = \theta^T x^{(i)} + \epsilon^{(i)},$$

where ε(i) is an error term that captures either unmodeled effects (such as if there are some features very pertinent to predicting housing price, but that we'd left out of the regression), or random noise. Let us further assume that the ε(i) are distributed IID (independently and identically distributed) according to a Gaussian distribution (also called a Normal distribution) with mean zero and some variance σ². We can write this assumption as "ε(i) ∼ N(0, σ²)." I.e., the density of ε(i) is given by

$$p(\epsilon^{(i)}) = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\left(-\frac{(\epsilon^{(i)})^2}{2\sigma^2}\right).$$

This implies that

$$p(y^{(i)}\,|\,x^{(i)};\theta) = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\left(-\frac{(y^{(i)} - \theta^T x^{(i)})^2}{2\sigma^2}\right).$$

The notation "p(y(i)|x(i); θ)" indicates that this is the distribution of y(i) given x(i) and parameterized by θ. Note that we should not condition on θ ("p(y(i)|x(i), θ)"), since θ is not a random variable. We can also write the distribution of y(i) as y(i) | x(i); θ ∼ N(θ^T x(i), σ²).

Given X (the design matrix, which contains all the x(i)'s) and θ, what is the distribution of the y(i)'s? The probability of the data is given by p(⃗y|X; θ). This quantity is typically viewed as a function of ⃗y (and perhaps X), for a fixed value of θ. When we wish to explicitly view this as a function of θ, we will instead call it the likelihood function:

$$L(\theta) = L(\theta; X, \vec{y}) = p(\vec{y}\,|\,X;\theta).$$

Note that by the independence assumption on the ε(i)'s (and hence also the y(i)'s given the x(i)'s), this can also be written

$$L(\theta) = \prod_{i=1}^{n} p(y^{(i)}\,|\,x^{(i)};\theta) = \prod_{i=1}^{n}\frac{1}{\sqrt{2\pi}\,\sigma}\exp\left(-\frac{(y^{(i)} - \theta^T x^{(i)})^2}{2\sigma^2}\right).$$

Now, given this probabilistic model relating the y(i)'s and the x(i)'s, what is a reasonable way of choosing our best guess of the parameters θ? The principle of maximum likelihood says that we should choose θ so as to make the data as high probability as possible. I.e., we should choose θ to maximize L(θ).

Instead of maximizing L(θ), we can also maximize any strictly increasing function of L(θ). In particular, the derivations will be a bit simpler if we instead maximize the log likelihood ℓ(θ):

$$\begin{aligned} \ell(\theta) &= \log L(\theta) \\ &= \log\prod_{i=1}^{n}\frac{1}{\sqrt{2\pi}\,\sigma}\exp\left(-\frac{(y^{(i)} - \theta^T x^{(i)})^2}{2\sigma^2}\right) \\ &= \sum_{i=1}^{n}\log\frac{1}{\sqrt{2\pi}\,\sigma}\exp\left(-\frac{(y^{(i)} - \theta^T x^{(i)})^2}{2\sigma^2}\right) \\ &= n\log\frac{1}{\sqrt{2\pi}\,\sigma} - \frac{1}{\sigma^2}\cdot\frac{1}{2}\sum_{i=1}^{n}\left(y^{(i)} - \theta^T x^{(i)}\right)^2. \end{aligned}$$

Hence, maximizing ℓ(θ) gives the same answer as minimizing

$$\frac{1}{2}\sum_{i=1}^{n}\left(y^{(i)} - \theta^T x^{(i)}\right)^2,$$

which we recognize to be J(θ), our original least-squares cost function.

To summarize: Under the previous probabilistic assumptions on the data, least-squares regression corresponds to finding the maximum likelihood estimate of θ. This is thus one set of assumptions under which least-squares regression can be justified as a very natural method that's just doing maximum likelihood estimation. (Note however that the probabilistic assumptions are by no means necessary for least-squares to be a perfectly good and rational procedure, and there may—and indeed there are—other natural assumptions that can also be used to justify it.)

Note also that, in our previous discussion, our final choice of θ did not depend on what was σ², and indeed we'd have arrived at the same result even if σ² were unknown. We will use this fact again later, when we talk about the exponential family and generalized linear models.

1.4 Locally weighted linear regression (optional reading)

Consider the problem of predicting y from x ∈ ℝ. The leftmost figure below shows the result of fitting a y = θ0 + θ1x to a dataset. We see that the data doesn't really lie on a straight line, and so the fit is not very good.

[Figure: three panels plotting y against x for the same dataset: a linear fit (left), a quadratic fit (middle), and a 5th-order polynomial fit (right).]

Instead, if we had added an extra feature x², and fit y = θ0 + θ1x + θ2x², then we obtain a slightly better fit to the data. (See middle figure.) Naively, it might seem that the more features we add, the better. However, there is also a danger in adding too many features: The rightmost figure is the result of fitting a 5-th order polynomial y = Σ_{j=0}^{5} θj x^j. We see that even though the fitted curve passes through the data perfectly, we would not expect this to be a very good predictor of, say, housing prices (y) for different living areas (x). Without formally defining what these terms mean, we'll say the figure on the left shows an instance of underfitting—in which the data clearly shows structure not captured by the model—and the figure on the right is an example of overfitting. (Later in this class, when we talk about learning theory we'll formalize some of these notions, and also define more carefully just what it means for a hypothesis to be good or bad.)
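A small sketch along these lines, on made-up toy data (the data, degrees, and random seed are purely illustrative), shows how the training error shrinks as the polynomial degree grows even when the higher-degree fit generalizes poorly:

```python
import numpy as np

# Toy data: six noisy points from a slightly curved relationship.
rng = np.random.default_rng(0)
x = np.linspace(1, 7, 6)
y = 0.5 * x + 0.8 * np.sqrt(x) + rng.normal(scale=0.15, size=x.shape)

# With six points, the degree-5 fit can pass through every point,
# mirroring the rightmost (overfitting) panel in the figure above.
for degree in (1, 2, 5):
    coeffs = np.polyfit(x, y, degree)            # least-squares polynomial fit
    residual = y - np.polyval(coeffs, x)
    print(degree, np.sum(residual ** 2))         # training error only
```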
As discussed previously, and as shown in the example above, the choice of features is important to ensuring good performance of a learning algorithm. (When we talk about model selection, we'll also see algorithms for automatically choosing a good set of features.) In this section, let us briefly talk about the locally weighted linear regression (LWR) algorithm which, assuming there is sufficient training data, makes the choice of features less critical. This treatment will be brief, since you'll get a chance to explore some of the properties of the LWR algorithm yourself in the homework.

In the original linear regression algorithm, to make a prediction at a query point x (i.e., to evaluate h(x)), we would:

1. Fit θ to minimize Σ_i (y(i) − θ^T x(i))².
2. Output θ^T x.

In contrast, the locally weighted linear regression algorithm does the following:

1. Fit θ to minimize Σ_i w(i)(y(i) − θ^T x(i))².
2. Output θ^T x.

Here, the w(i)'s are non-negative valued weights. Intuitively, if w(i) is large for a particular value of i, then in picking θ, we'll try hard to make (y(i) − θ^T x(i))² small. If w(i) is small, then the (y(i) − θ^T x(i))² error term will be pretty much ignored in the fit.

A fairly standard choice for the weights is⁴

$$w^{(i)} = \exp\left(-\frac{(x^{(i)} - x)^2}{2\tau^2}\right)$$

Note that the weights depend on the particular point x at which we're trying to evaluate h(x). Moreover, if |x(i) − x| is small, then w(i) is close to 1; and if |x(i) − x| is large, then w(i) is small. Hence, θ is chosen giving a much higher "weight" to the (errors on) training examples close to the query point x. (Note also that while the formula for the weights takes a form that is cosmetically similar to the density of a Gaussian distribution, the w(i)'s do not directly have anything to do with Gaussians, and in particular the w(i) are not random variables, normally distributed or otherwise.) The parameter τ controls how quickly the weight of a training example falls off with distance of its x(i) from the query point x; τ is called the bandwidth parameter, and is also something that you'll get to experiment with in your homework.

Locally weighted linear regression is the first example we're seeing of a non-parametric algorithm. The (unweighted) linear regression algorithm that we saw earlier is known as a parametric learning algorithm, because it has a fixed, finite number of parameters (the θi's), which are fit to the data. Once we've fit the θi's and stored them away, we no longer need to keep the training data around to make future predictions. In contrast, to make predictions using locally weighted linear regression, we need to keep the entire training set around. The term "non-parametric" (roughly) refers to the fact that the amount of stuff we need to keep in order to represent the hypothesis h grows linearly with the size of the training set.

⁴ If x is vector-valued, this is generalized to be w(i) = exp(−(x(i) − x)^T(x(i) − x)/(2τ²)), or w(i) = exp(−(x(i) − x)^T Σ^{−1}(x(i) − x)/(2τ²)), for an appropriate choice of τ or Σ.
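A minimal sketch of how an LWR prediction at a single query point might be computed is shown below; the variable names, the tiny dataset, and the bandwidth value are illustrative, and the weighted fit is solved via the weighted normal equations:

```python
import numpy as np

def lwr_predict(x_query, X, y, tau=1.0):
    """Locally weighted linear regression prediction at one query point.

    X is an (n, d) design matrix that already includes the intercept column,
    y is the (n,) target vector, and tau is the bandwidth parameter.
    """
    # Weight each training example by its distance to the query point.
    diffs = X - x_query                                        # shape (n, d)
    w = np.exp(-np.sum(diffs ** 2, axis=1) / (2 * tau ** 2))

    # Solve the weighted least-squares problem:
    # minimize sum_i w_i (y_i - theta^T x_i)^2.
    W = np.diag(w)
    theta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return x_query @ theta

# Tiny illustrative dataset: intercept column plus one feature.
X = np.column_stack([np.ones(5), np.array([1.0, 2.0, 3.0, 4.0, 5.0])])
y = np.array([1.2, 1.9, 3.2, 3.8, 5.1])
print(lwr_predict(np.array([1.0, 2.5]), X, y, tau=0.8))
```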
Chapter 2 Classification and logistic regression

Let's now talk about the classification problem. This is just like the regression problem, except that the values y we now want to predict take on only a small number of discrete values. For now, we will focus on the binary classification problem in which y can take on only two values, 0 and 1. (Most of what we say here will also generalize to the multiple-class case.) For instance, if we are trying to build a spam classifier for email, then x(i) may be some features of a piece of email, and y may be 1 if it is a piece of spam mail, and 0 otherwise. 0 is also called the negative class, and 1 the positive class, and they are sometimes also denoted by the symbols "-" and "+." Given x(i), the corresponding y(i) is also called the label for the training example.

2.1 Logistic regression

We could approach the classification problem ignoring the fact that y is discrete-valued, and use our old linear regression algorithm to try to predict y given x. However, it is easy to construct examples where this method performs very poorly. Intuitively, it also doesn't make sense for hθ(x) to take values larger than 1 or smaller than 0 when we know that y ∈ {0, 1}.

To fix this, let's change the form for our hypotheses hθ(x). We will choose

$$h_\theta(x) = g(\theta^T x) = \frac{1}{1 + e^{-\theta^T x}},$$

where

$$g(z) = \frac{1}{1 + e^{-z}}$$

is called the logistic function or the sigmoid function. Here is a plot showing g(z):

[Figure: plot of the sigmoid function g(z) for z from −5 to 5; g rises smoothly from 0 to 1, passing through 0.5 at z = 0.]

Notice that g(z) tends towards 1 as z → ∞, and g(z) tends towards 0 as z → −∞. Moreover, g(z), and hence also h(x), is always bounded between 0 and 1. As before, we are keeping the convention of letting x0 = 1, so that θ^T x = θ0 + Σ_{j=1}^{d} θj xj. For now, let's take the choice of g as given. Other functions that smoothly increase from 0 to 1 can also be used, but for a couple of reasons that we'll see later (when we talk about GLMs, and when we talk about generative learning algorithms), the choice of the logistic function is a fairly natural one. Before moving on, here's a useful property of the derivative of the sigmoid function, which we write as g′:

$$\begin{aligned} g'(z) &= \frac{d}{dz}\,\frac{1}{1 + e^{-z}} \\ &= \frac{e^{-z}}{(1 + e^{-z})^2} \\ &= \frac{1}{1 + e^{-z}}\cdot\left(1 - \frac{1}{1 + e^{-z}}\right) \\ &= g(z)(1 - g(z)). \end{aligned}$$

So, given the logistic regression model, how do we fit θ for it? Following how we saw least squares regression could be derived as the maximum likelihood estimator under a set of assumptions, let's endow our classification model with a set of probabilistic assumptions, and then fit the parameters via maximum likelihood.

Let us assume that

$$P(y = 1\,|\,x;\theta) = h_\theta(x) \qquad\qquad P(y = 0\,|\,x;\theta) = 1 - h_\theta(x)$$

Note that this can be written more compactly as

$$p(y\,|\,x;\theta) = (h_\theta(x))^y(1 - h_\theta(x))^{1-y}$$

Assuming that the n training examples were generated independently, we can then write down the likelihood of the parameters as

$$L(\theta) = p(\vec{y}\,|\,X;\theta) = \prod_{i=1}^{n} p(y^{(i)}\,|\,x^{(i)};\theta) = \prod_{i=1}^{n}\left(h_\theta(x^{(i)})\right)^{y^{(i)}}\left(1 - h_\theta(x^{(i)})\right)^{1-y^{(i)}}$$

As before, it will be easier to maximize the log likelihood:

$$\ell(\theta) = \log L(\theta) = \sum_{i=1}^{n} y^{(i)}\log h(x^{(i)}) + (1 - y^{(i)})\log(1 - h(x^{(i)})) \qquad (2.1)$$

How do we maximize the likelihood? Similar to our derivation in the case of linear regression, we can use gradient ascent. Written in vectorial notation, our updates will therefore be given by θ := θ + α∇θ ℓ(θ). (Note the positive rather than negative sign in the update formula, since we're maximizing, rather than minimizing, a function now.) Let's start by working with just one training example (x, y), and take derivatives to derive the stochastic gradient ascent rule:

$$\begin{aligned} \frac{\partial}{\partial\theta_j}\ell(\theta) &= \left(y\,\frac{1}{g(\theta^T x)} - (1 - y)\,\frac{1}{1 - g(\theta^T x)}\right)\frac{\partial}{\partial\theta_j}g(\theta^T x) \\ &= \left(y\,\frac{1}{g(\theta^T x)} - (1 - y)\,\frac{1}{1 - g(\theta^T x)}\right)g(\theta^T x)(1 - g(\theta^T x))\,\frac{\partial}{\partial\theta_j}\theta^T x \\ &= \left(y(1 - g(\theta^T x)) - (1 - y)g(\theta^T x)\right)x_j \\ &= (y - h_\theta(x))\,x_j \qquad (2.2) \end{aligned}$$

Above, we used the fact that g′(z) = g(z)(1 − g(z)). This therefore gives us the stochastic gradient ascent rule

$$\theta_j := \theta_j + \alpha\left(y^{(i)} - h_\theta(x^{(i)})\right)x_j^{(i)}$$

If we compare this to the LMS update rule, we see that it looks identical; but this is not the same algorithm, because hθ(x(i)) is now defined as a non-linear function of θ^T x(i). Nonetheless, it's a little surprising that we end up with the same update rule for a rather different algorithm and learning problem. Is this coincidence, or is there a deeper reason behind this? We'll answer this when we get to GLM models.
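Here is a minimal sketch of that stochastic gradient ascent rule in NumPy; the toy data, the learning rate, and the number of passes are made up for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy binary classification data: intercept column plus one feature.
X = np.column_stack([np.ones(6), np.array([-2.0, -1.0, -0.5, 0.5, 1.0, 2.0])])
y = np.array([0, 0, 1, 0, 1, 1])

theta = np.zeros(X.shape[1])
alpha = 0.1

# Stochastic gradient ascent on the log likelihood: one example per update.
for _ in range(200):                       # passes over the training set
    for i in range(X.shape[0]):
        h = sigmoid(X[i] @ theta)          # h_theta(x^(i))
        theta = theta + alpha * (y[i] - h) * X[i]

print(theta)                               # learned [theta_0, theta_1]
print(sigmoid(X @ theta))                  # predicted P(y = 1 | x) per example
```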
Remark 2.1.1: An alternative notational viewpoint of the same loss function is also useful, especially for Section 7.1 where we study nonlinear models. Let ℓ_logistic : ℝ × {0, 1} → ℝ≥0 be the logistic loss defined as

$$\ell_{\text{logistic}}(t, y) \triangleq y\log(1 + \exp(-t)) + (1 - y)\log(1 + \exp(t)). \qquad (2.3)$$

One can verify by plugging in hθ(x) = 1/(1 + e^{−θ^⊤x}) that the negative log-likelihood (the negation of ℓ(θ) in equation (2.1)) can be re-written as

$$-\ell(\theta) = \ell_{\text{logistic}}(\theta^\top x, y). \qquad (2.4)$$

Oftentimes θ^⊤x or t is called the logit. Basic calculus gives us that

$$\frac{\partial\ell_{\text{logistic}}(t, y)}{\partial t} = y\,\frac{-\exp(-t)}{1 + \exp(-t)} + (1 - y)\,\frac{1}{1 + \exp(-t)} \qquad (2.5)$$
$$= 1/(1 + \exp(-t)) - y. \qquad (2.6)$$

Then, using the chain rule, we have that

$$\frac{\partial}{\partial\theta_j}\ell(\theta) = -\frac{\partial\ell_{\text{logistic}}(t, y)}{\partial t}\cdot\frac{\partial t}{\partial\theta_j} \qquad (2.7)$$
$$= \left(y - 1/(1 + \exp(-t))\right)\cdot x_j = (y - h_\theta(x))\,x_j, \qquad (2.8)$$

which is consistent with the derivation in equation (2.2). We will see this viewpoint can be extended to nonlinear models in Section 7.1.

2.2 Digression: the perceptron learning algorithm

We now digress to talk briefly about an algorithm that's of some historical interest, and that we will also return to later when we talk about learning theory. Consider modifying the logistic regression method to "force" it to output values that are either 0 or 1 exactly. To do so, it seems natural to change the definition of g to be the threshold function:

$$g(z) = \begin{cases} 1 & \text{if } z \ge 0 \\ 0 & \text{if } z < 0 \end{cases}$$

If we then let hθ(x) = g(θ^T x) as before but using this modified definition of g, and if we use the update rule

$$\theta_j := \theta_j + \alpha\left(y^{(i)} - h_\theta(x^{(i)})\right)x_j^{(i)},$$

then we have the perceptron learning algorithm. In the 1960s, this "perceptron" was argued to be a rough model for how individual neurons in the brain work. Given how simple the algorithm is, it will also provide a starting point for our analysis when we talk about learning theory later in this class. Note however that even though the perceptron may be cosmetically similar to the other algorithms we talked about, it is actually a very different type of algorithm than logistic regression and least squares linear regression; in particular, it is difficult to endow the perceptron's predictions with meaningful probabilistic interpretations, or derive the perceptron as a maximum likelihood estimation algorithm.

2.3 Multi-class classification

Consider a classification problem in which the response variable y can take on any one of k values, so y ∈ {1, 2,..., k}. For example, rather than classifying emails into the two classes spam or not-spam—which would have been a binary classification problem—we might want to classify them into three classes, such as spam, personal mails, and work-related mails. The label / response variable is still discrete, but can now take on more than two values. We will thus model it as distributed according to a multinomial distribution. In this case, p(y | x; θ) is a distribution over k possible discrete outcomes and is thus a multinomial distribution.
Recall that a multinomial distribution involves k numbers φ1,..., φk specifying the probability of each of the outcomes. Note that these numbers must satisfy Σ_{i=1}^{k} φi = 1. We will design a parameterized model that outputs φ1,..., φk satisfying this constraint given the input x.

We introduce k groups of parameters θ1,..., θk, each of them being a vector in ℝ^d. Intuitively, we would like to use θ1^⊤x,..., θk^⊤x to represent φ1,..., φk, the probabilities P(y = 1 | x; θ),..., P(y = k | x; θ). However, there are two issues with such a direct approach. First, θj^⊤x is not necessarily within [0, 1]. Second, the summation of the θj^⊤x's is not necessarily 1. Thus, instead, we will use the softmax function to turn (θ1^⊤x, ···, θk^⊤x) into a probability vector with nonnegative entries that sum up to 1. Define the softmax function softmax : ℝ^k → ℝ^k as

$$\text{softmax}(t_1,\ldots,t_k) = \begin{bmatrix} \frac{\exp(t_1)}{\sum_{j=1}^{k}\exp(t_j)} \\ \vdots \\ \frac{\exp(t_k)}{\sum_{j=1}^{k}\exp(t_j)} \end{bmatrix}. \qquad (2.9)$$

The inputs to the softmax function, the vector t here, are often called logits. Note that by definition, the output of the softmax function is always a probability vector whose entries are nonnegative and sum up to 1.

Let (t1,..., tk) = (θ1^⊤x, ···, θk^⊤x). We apply the softmax function to (t1,..., tk), and use the output as the probabilities P(y = 1 | x; θ),..., P(y = k | x; θ). We obtain the following probabilistic model:

$$\begin{bmatrix} P(y = 1\,|\,x;\theta) \\ \vdots \\ P(y = k\,|\,x;\theta) \end{bmatrix} = \text{softmax}(t_1,\cdots,t_k) = \begin{bmatrix} \frac{\exp(\theta_1^\top x)}{\sum_{j=1}^{k}\exp(\theta_j^\top x)} \\ \vdots \\ \frac{\exp(\theta_k^\top x)}{\sum_{j=1}^{k}\exp(\theta_j^\top x)} \end{bmatrix} \qquad (2.10)$$

For notational convenience, we will let φi = exp(ti)/Σ_{j=1}^{k} exp(tj). More succinctly, the equation above can be written as:

$$P(y = i\,|\,x;\theta) = \phi_i = \frac{\exp(t_i)}{\sum_{j=1}^{k}\exp(t_j)} = \frac{\exp(\theta_i^\top x)}{\sum_{j=1}^{k}\exp(\theta_j^\top x)}. \qquad (2.11)$$

Next, we compute the negative log-likelihood of a single example (x, y).

$$-\log p(y\,|\,x,\theta) = -\log\left(\frac{\exp(t_y)}{\sum_{j=1}^{k}\exp(t_j)}\right) = -\log\left(\frac{\exp(\theta_y^\top x)}{\sum_{j=1}^{k}\exp(\theta_j^\top x)}\right) \qquad (2.12)$$

Thus, the loss function, the negative log-likelihood of the training data, is given as

$$\ell(\theta) = -\sum_{i=1}^{n}\log\left(\frac{\exp(\theta_{y^{(i)}}^\top x^{(i)})}{\sum_{j=1}^{k}\exp(\theta_j^\top x^{(i)})}\right). \qquad (2.13)$$

It's convenient to define the cross-entropy loss ℓ_ce : ℝ^k × {1,..., k} → ℝ≥0, which modularizes the complex equation above:¹

$$\ell_{\text{ce}}((t_1,\ldots,t_k), y) = -\log\left(\frac{\exp(t_y)}{\sum_{j=1}^{k}\exp(t_j)}\right). \qquad (2.14)$$

With this notation, we can simply rewrite equation (2.13) as

$$\ell(\theta) = \sum_{i=1}^{n}\ell_{\text{ce}}((\theta_1^\top x^{(i)},\ldots,\theta_k^\top x^{(i)}), y^{(i)}). \qquad (2.15)$$

Moreover, conveniently, the cross-entropy loss also has a simple gradient. Let t = (t1,..., tk), and recall φi = exp(ti)/Σ_{j=1}^{k} exp(tj). By basic calculus, we can derive

$$\frac{\partial\ell_{\text{ce}}(t, y)}{\partial t_i} = \phi_i - 1\{y = i\}, \qquad (2.16)$$

where 1{·} is the indicator function, that is, 1{y = i} = 1 if y = i, and 1{y = i} = 0 if y ≠ i. Alternatively, in vectorized notation, we have the following form which will be useful for Chapter 7:

$$\frac{\partial\ell_{\text{ce}}(t, y)}{\partial t} = \phi - e_y, \qquad (2.17)$$

where es ∈ ℝ^k is the s-th natural basis vector (where the s-th entry is 1 and all other entries are zeros). Using the chain rule, we have that

$$\frac{\partial\ell_{\text{ce}}((\theta_1^\top x,\ldots,\theta_k^\top x), y)}{\partial\theta_i} = \frac{\partial\ell_{\text{ce}}(t, y)}{\partial t_i}\cdot\frac{\partial t_i}{\partial\theta_i} = (\phi_i - 1\{y = i\})\cdot x. \qquad (2.18)$$

Therefore, the gradient of the loss with respect to the part of the parameters θi is

$$\frac{\partial\ell(\theta)}{\partial\theta_i} = \sum_{j=1}^{n}\left(\phi_i^{(j)} - 1\{y^{(j)} = i\}\right)\cdot x^{(j)}, \qquad (2.19)$$

where φi(j) = exp(θi^⊤x(j))/Σ_{s=1}^{k} exp(θs^⊤x(j)) is the probability that the model predicts item i for example x(j). With the gradients above, one can implement (stochastic) gradient descent to minimize the loss function ℓ(θ).

¹ There is some ambiguity in the naming here. Some people call the cross-entropy loss the function that maps the probability vector (the φ in our language) and the label y to the final real number, and call our version of the cross-entropy loss the softmax-cross-entropy loss. We choose our current naming convention because it's consistent with the naming of most modern deep learning libraries such as PyTorch and JAX.
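A minimal sketch of softmax regression trained with the gradient in equation (2.19) might look like the following; the toy data, learning rate, and iteration count are illustrative, and labels are indexed from 0 rather than 1 to match NumPy conventions:

```python
import numpy as np

def softmax(T):
    """Row-wise softmax of a matrix of logits, shifted for numerical stability."""
    Z = np.exp(T - T.max(axis=1, keepdims=True))
    return Z / Z.sum(axis=1, keepdims=True)

# Toy data: n examples, d features (intercept included), k classes.
X = np.array([[1.0, -2.0], [1.0, -1.0], [1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
y = np.array([0, 0, 1, 2, 2])          # labels in {0, ..., k-1}
n, d = X.shape
k = 3

Theta = np.zeros((k, d))               # one parameter vector theta_i per class
alpha = 0.5

for _ in range(500):
    logits = X @ Theta.T               # (theta_1^T x, ..., theta_k^T x), shape (n, k)
    phi = softmax(logits)              # model probabilities, shape (n, k)
    Y = np.eye(k)[y]                   # one-hot labels, i.e. the e_y vectors
    grad = (phi - Y).T @ X             # equation (2.19), stacked over classes
    Theta = Theta - alpha * grad / n   # (batch) gradient descent step

print(softmax(X @ Theta.T))            # predicted class probabilities per example
```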
2.4 Another algorithm for maximizing ℓ(θ)

Returning to logistic regression with g(z) being the sigmoid function, let's now talk about a different algorithm for maximizing ℓ(θ). To get us started, let's consider Newton's method for finding a zero of a function. Specifically, suppose we have some function f : ℝ → ℝ, and we wish to find a value of θ so that f(θ) = 0. Here, θ ∈ ℝ is a real number. Newton's method performs the following update:

$$\theta := \theta - \frac{f(\theta)}{f'(\theta)}.$$

This method has a natural interpretation in which we can think of it as approximating the function f via a linear function that is tangent to f at the current guess θ, solving for where that linear function equals zero, and letting the next guess for θ be where that linear function is zero. Here's a picture of Newton's method in action:

[Figure: three panels plotting f(x) against x, showing successive tangent-line approximations used by Newton's method.]

In the leftmost figure, we see the function f plotted along with the line y = 0. We're trying to find θ so that f(θ) = 0; the value of θ that achieves this is about 1.3. Suppose we initialized the algorithm with θ = 4.5. Newton's method then fits a straight line tangent to f at θ = 4.5, and solves for where that line evaluates to 0. (Middle figure.) This gives us the next guess for θ, which is about 2.8. The rightmost figure shows the result of running one more iteration, which updates θ to about 1.8. After a few more iterations, we rapidly approach θ = 1.3.

Newton's method gives a way of getting to f(θ) = 0. What if we want to use it to maximize some function ℓ? The maxima of ℓ correspond to points where its first derivative ℓ′(θ) is zero. So, by letting f(θ) = ℓ′(θ), we can use the same algorithm to maximize ℓ, and we obtain the update rule:

$$\theta := \theta - \frac{\ell'(\theta)}{\ell''(\theta)}.$$

(Something to think about: How would this change if we wanted to use Newton's method to minimize rather than maximize a function?)

Lastly, in our logistic regression setting, θ is vector-valued, so we need to generalize Newton's method to this setting. The generalization of Newton's method to this multidimensional setting (also called the Newton-Raphson method) is given by

$$\theta := \theta - H^{-1}\nabla_\theta\ell(\theta).$$

Here, ∇θ ℓ(θ) is, as usual, the vector of partial derivatives of ℓ(θ) with respect to the θi's; and H is a d-by-d matrix (actually, (d+1)-by-(d+1), assuming that we include the intercept term) called the Hessian, whose entries are given by

$$H_{ij} = \frac{\partial^2\ell(\theta)}{\partial\theta_i\,\partial\theta_j}.$$

Newton's method typically enjoys faster convergence than (batch) gradient descent, and requires many fewer iterations to get very close to the minimum. One iteration of Newton's method can, however, be more expensive than one iteration of gradient descent, since it requires finding and inverting a d-by-d Hessian; but so long as d is not too large, it is usually much faster overall. When Newton's method is applied to maximize the logistic regression log likelihood function ℓ(θ), the resulting method is also called Fisher scoring.
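A minimal sketch of Newton's method applied to the logistic regression log likelihood follows; the gradient and Hessian expressions used in the comments are the standard ones for this model, and the toy data and iteration count are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy binary data with an intercept column.
X = np.column_stack([np.ones(6), np.array([-2.0, -1.0, -0.5, 0.5, 1.0, 2.0])])
y = np.array([0, 0, 1, 0, 1, 1])

theta = np.zeros(X.shape[1])

# Newton-Raphson: theta := theta - H^{-1} grad(l).
# For logistic regression, grad(l) = X^T (y - h) and H = -X^T diag(h(1-h)) X.
for _ in range(10):
    h = sigmoid(X @ theta)
    grad = X.T @ (y - h)
    H = -(X.T * (h * (1 - h))) @ X
    theta = theta - np.linalg.solve(H, grad)

print(theta)
```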
Chapter 3 Generalized linear models

So far, we've seen a regression example, and a classification example. In the regression example, we had y|x; θ ∼ N(µ, σ²), and in the classification one, y|x; θ ∼ Bernoulli(φ), for some appropriate definitions of µ and φ as functions of x and θ. In this section, we will show that both of these methods are special cases of a broader family of models, called Generalized Linear Models (GLMs).¹ We will also show how other models in the GLM family can be derived and applied to other classification and regression problems.

¹ The presentation of the material in this section takes inspiration from Michael I. Jordan, Learning in graphical models (unpublished book draft), and also McCullagh and Nelder, Generalized Linear Models (2nd ed.).

3.1 The exponential family

To work our way up to GLMs, we will begin by defining exponential family distributions. We say that a class of distributions is in the exponential family if it can be written in the form

$$p(y;\eta) = b(y)\exp\left(\eta^T T(y) - a(\eta)\right) \qquad (3.1)$$

Here, η is called the natural parameter (also called the canonical parameter) of the distribution; T(y) is the sufficient statistic (for the distributions we consider, it will often be the case that T(y) = y); and a(η) is the log partition function. The quantity e^{−a(η)} essentially plays the role of a normalization constant, that makes sure the distribution p(y; η) sums/integrates over y to 1.

A fixed choice of T, a and b defines a family (or set) of distributions that is parameterized by η; as we vary η, we then get different distributions within this family.

We now show that the Bernoulli and the Gaussian distributions are examples of exponential family distributions. The Bernoulli distribution with mean φ, written Bernoulli(φ), specifies a distribution over y ∈ {0, 1}, so that p(y = 1; φ) = φ; p(y = 0; φ) = 1 − φ. As we vary φ, we obtain Bernoulli distributions with different means. We now show that this class of Bernoulli distributions, ones obtained by varying φ, is in the exponential family; i.e., that there is a choice of T, a and b so that Equation (3.1) becomes exactly the class of Bernoulli distributions. We write the Bernoulli distribution as:

$$\begin{aligned} p(y;\phi) &= \phi^y(1-\phi)^{1-y} \\ &= \exp\left(y\log\phi + (1-y)\log(1-\phi)\right) \\ &= \exp\left(\left(\log\frac{\phi}{1-\phi}\right)y + \log(1-\phi)\right). \end{aligned}$$

Thus, the natural parameter is given by η = log(φ/(1 − φ)). Interestingly, if we invert this definition for η by solving for φ in terms of η, we obtain φ = 1/(1 + e^{−η}). This is the familiar sigmoid function! This will come up again when we derive logistic regression as a GLM. To complete the formulation of the Bernoulli distribution as an exponential family distribution, we also have

$$\begin{aligned} T(y) &= y \\ a(\eta) &= -\log(1-\phi) = \log(1+e^\eta) \\ b(y) &= 1 \end{aligned}$$

This shows that the Bernoulli distribution can be written in the form of Equation (3.1), using an appropriate choice of T, a and b.

Let's now move on to consider the Gaussian distribution. Recall that, when deriving linear regression, the value of σ² had no effect on our final choice of θ and hθ(x). Thus, we can choose an arbitrary value for σ² without changing anything. To simplify the derivation below, let's set σ² = 1.²
² If we leave σ² as a variable, the Gaussian distribution can also be shown to be in the exponential family, where η ∈ ℝ² is now a 2-dimensional vector that depends on both µ and σ. For the purposes of GLMs, however, the σ² parameter can also be treated by considering a more general definition of the exponential family: p(y; η, τ) = b(a, τ) exp((η^T T(y) − a(η))/c(τ)). Here, τ is called the dispersion parameter, and for the Gaussian, c(τ) = σ²; but given our simplification above, we won't need the more general definition for the examples we will consider here.

We then have:

$$\begin{aligned} p(y;\mu) &= \frac{1}{\sqrt{2\pi}}\exp\left(-\frac{1}{2}(y-\mu)^2\right) \\ &= \frac{1}{\sqrt{2\pi}}\exp\left(-\frac{1}{2}y^2\right)\cdot\exp\left(\mu y - \frac{1}{2}\mu^2\right) \end{aligned}$$

Thus, we see that the Gaussian is in the exponential family, with

$$\begin{aligned} \eta &= \mu \\ T(y) &= y \\ a(\eta) &= \mu^2/2 = \eta^2/2 \\ b(y) &= (1/\sqrt{2\pi})\exp(-y^2/2). \end{aligned}$$

There are many other distributions that are members of the exponential family: the multinomial (which we'll see later), the Poisson (for modelling count-data; also see the problem set); the gamma and the exponential (for modelling continuous, non-negative random variables, such as time-intervals); the beta and the Dirichlet (for distributions over probabilities); and many more. In the next section, we will describe a general "recipe" for constructing models in which y (given x and θ) comes from any of these distributions.

3.2 Constructing GLMs

Suppose you would like to build a model to estimate the number y of customers arriving in your store (or number of page-views on your website) in any given hour, based on certain features x such as store promotions, recent advertising, weather, day-of-week, etc. We know that the Poisson distribution usually gives a good model for numbers of visitors. Knowing this, how can we come up with a model for our problem? Fortunately, the Poisson is an exponential family distribution, so we can apply a Generalized Linear Model (GLM). In this section, we will describe a method for constructing GLM models for problems such as these.

More generally, consider a classification or regression problem where we would like to predict the value of some random variable y as a function of x. To derive a GLM for this problem, we will make the following three assumptions about the conditional distribution of y given x and about our model:

1. y | x; θ ∼ ExponentialFamily(η). I.e., given x and θ, the distribution of y follows some exponential family distribution, with parameter η.

2. Given x, our goal is to predict the expected value of T(y) given x. In most of our examples, we will have T(y) = y, so this means we would like the prediction h(x) output by our learned hypothesis h to satisfy h(x) = E[y|x]. (Note that this assumption is satisfied in the choices for hθ(x) for both logistic regression and linear regression. For instance, in logistic regression, we had hθ(x) = p(y = 1|x; θ) = 0 · p(y = 0|x; θ) + 1 · p(y = 1|x; θ) = E[y|x; θ].)

3. The natural parameter η and the inputs x are related linearly: η = θ^T x. (Or, if η is vector-valued, then ηi = θi^T x.)

The third of these assumptions might seem the least well justified of the above, and it might be better thought of as a "design choice" in our recipe for designing GLMs, rather than as an assumption per se. These three assumptions/design choices will allow us to derive a very elegant class of learning algorithms, namely GLMs, that have many desirable properties such as ease of learning. Furthermore, the resulting models are often very effective for modelling different types of distributions over y; for example, we will shortly show that both logistic regression and ordinary least squares can both be derived as GLMs.
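As a glimpse of where this recipe leads, here is a minimal sketch of a Poisson GLM for the store-visitors example above. The notes leave the Poisson case to the problem set; this sketch assumes the standard fact that the Poisson's canonical response function is exp(η), so h_θ(x) = exp(θ^T x), and the synthetic data and training choices are purely illustrative:

```python
import numpy as np

# Sketch of a Poisson GLM ("how many visitors this hour?") fit by gradient ascent.
# GLM recipe: eta = theta^T x, and E[y|x] = exp(eta) for the Poisson.
rng = np.random.default_rng(1)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # intercept + one feature
true_theta = np.array([1.0, 0.5])
y = rng.poisson(np.exp(X @ true_theta))                  # synthetic hourly counts

theta = np.zeros(2)
alpha = 0.01
for _ in range(2000):
    h = np.exp(X @ theta)                  # predicted mean counts
    grad = X.T @ (y - h)                   # gradient of the Poisson log likelihood
    theta = theta + alpha * grad / n       # gradient ascent step

print(theta)                               # should land near true_theta
```

Note that the gradient has the same (y − h(x)) x form we saw for linear and logistic regression, which is part of what the GLM viewpoint explains.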
3.2.1 Ordinary least squares

To show that ordinary least squares is a special case of the GLM family of models, consider the setting where the target variable y (also called the response variable in GLM terminology) is continuous, and we model the conditional distribution of y given x as a Gaussian N(µ, σ²). (Here, µ may depend on x.) So, we let the ExponentialFamily(η) distribution above be the Gaussian distribution. As we saw previously, in the formulation of the Gaussian as an exponential family distribution, we had µ = η. So, we have

$$h_\theta(x) = E[y|x;\theta] = \mu = \eta = \theta^T x.$$

The first equality follows from Assumption 2, above; the second equality follows from the fact that y|x; θ ∼ N(µ, σ²), and so its expected value is given by µ; the third equality follows from Assumption 1 (and our earlier derivation showing that µ = η in the formulation of the Gaussian as an exponential family distribution); and the last equality follows from Assumption 3.

3.2.2 Logistic regression

We now consider logistic regression. Here we are interested in binary classification, so y ∈ {0, 1}. Given that y is binary-valued, it therefore seems natural to choose the Bernoulli family of distributions to model the conditional distribution of y given x. In our formulation of the Bernoulli distribution as an exponential family distribution, we had φ = 1/(1 + e^{−η}). Furthermore, note that if y|x; θ ∼ Bernoulli(φ), then E[y|x; θ] = φ. So, following a similar derivation as the one for ordinary least squares, we get:

$$h_\theta(x) = E[y|x;\theta] = \phi = 1/(1 + e^{-\eta}) = 1/(1 + e^{-\theta^T x})$$

So, this gives us hypothesis functions of the form hθ(x) = 1/(1 + e^{−θ^T x}). If you were previously wondering how we came up with the form of the logistic function 1/(1 + e^{−z}), this gives one answer: once we assume that y conditioned on x is Bernoulli, it arises as a consequence of the definition of GLMs and exponential family distributions.

To introduce a little more terminology, the function g giving the distribution's mean as a function of the natural parameter (g(η) = E[T(y); η]) is called the canonical response function. Its inverse, g^{−1}, is called the canonical link function. Thus, the canonical response function for the Gaussian family is just the identity function; and the canonical response function for the Bernoulli is the logistic function.³

³ Many texts use g to denote the link function, and g^{−1} to denote the response function; but the notation we're using here, inherited from the early machine learning literature, will be more consistent with the notation used in the rest of the class.

Chapter 4 Generative learning algorithms

So far, we've mainly been talking about learning algorithms that model p(y|x; θ), the conditional distribution of y given x. For instance, logistic regression modeled p(y|x; θ) as hθ(x) = g(θ^T x) where g is the sigmoid function. In these notes, we'll talk about a different type of learning algorithm.

Consider a classification problem in which we want to learn to distinguish between elephants (y = 1) and dogs (y = 0), based on some features of an animal. Given a training set, an algorithm like logistic regression or the perceptron algorithm (basically) tries to find a straight line—that is, a decision boundary—that separates the elephants and dogs. Then, to classify a new animal as either an elephant or a dog, it checks on which side of the decision boundary it falls, and makes its prediction accordingly.

Here's a different approach.
First, looking at elephants, we can build a model of what elephants look like. Then, looking at dogs, we can build a separate model of what dogs look like. Finally, to classify a new animal, we can match the new animal against the elephant model, and match it against the dog model, to see whether the new animal looks more like the elephants or more like the dogs we had seen in the training set.

Algorithms that try to learn p(y|x) directly (such as logistic regression), or algorithms that try to learn mappings directly from the space of inputs X to the labels {0, 1} (such as the perceptron algorithm), are called discriminative learning algorithms. Here, we'll talk about algorithms that instead try to model p(x|y) (and p(y)). These algorithms are called generative learning algorithms. For instance, if y indicates whether an example is a dog (0) or an elephant (1), then p(x|y = 0) models the distribution of dogs' features, and p(x|y = 1) models the distribution of elephants' features.

After modeling p(y) (called the class priors) and p(x|y), our algorithm can then use Bayes rule to derive the posterior distribution on y given x:

$$p(y|x) = \frac{p(x|y)p(y)}{p(x)}.$$

Here, the denominator is given by p(x) = p(x|y = 1)p(y = 1) + p(x|y = 0)p(y = 0) (you should be able to verify that this is true from the standard properties of probabilities), and thus can also be expressed in terms of the quantities p(x|y) and p(y) that we've learned. Actually, if we were calculating p(y|x) in order to make a prediction, then we don't actually need to calculate the denominator, since

$$\arg\max_y p(y|x) = \arg\max_y \frac{p(x|y)p(y)}{p(x)} = \arg\max_y p(x|y)p(y).$$

4.1 Gaussian discriminant analysis

The first generative learning algorithm that we'll look at is Gaussian discriminant analysis (GDA). In this model, we'll assume that p(x|y) is distributed according to a multivariate normal distribution. Let's talk briefly about the properties of multivariate normal distributions before moving on to the GDA model itself.

4.1.1 The multivariate normal distribution

The multivariate normal distribution in d-dimensions, also called the multivariate Gaussian distribution, is parameterized by a mean vector µ ∈ ℝ^d and a covariance matrix Σ ∈ ℝ^{d×d}, where Σ ≥ 0 is symmetric and positive semi-definite. Also written "N(µ, Σ)", its density is given by:

$$p(x;\mu,\Sigma) = \frac{1}{(2\pi)^{d/2}|\Sigma|^{1/2}}\exp\left(-\frac{1}{2}(x-\mu)^T\Sigma^{-1}(x-\mu)\right).$$

In the equation above, "|Σ|" denotes the determinant of the matrix Σ. For a random variable X distributed N(µ, Σ), the mean is (unsurprisingly) given by µ:

$$E[X] = \int_x x\,p(x;\mu,\Sigma)\,dx = \mu$$

The covariance of a vector-valued random variable Z is defined as Cov(Z) = E[(Z − E[Z])(Z − E[Z])^T]. This generalizes the notion of the variance of a real-valued random variable. The covariance can also be defined as Cov(Z) = E[ZZ^T] − (E[Z])(E[Z])^T. (You should be able to prove to yourself that these two definitions are equivalent.) If X ∼ N(µ, Σ), then Cov(X) = Σ.

Here are some examples of what the density of a Gaussian distribution looks like:

[Figure: three surface plots of two-dimensional Gaussian densities with mean zero and covariance matrices I, 0.6I, and 2I.]

The left-most figure shows a Gaussian with mean zero (that is, the 2x1 zero-vector) and covariance matrix Σ = I (the 2x2 identity matrix). A Gaussian with zero mean and identity covariance is also called the standard normal distribution.
The middle figure shows the density of a Gaussian with zero mean and Σ = 0.6I; and the rightmost figure shows one with Σ = 2I. We see that as Σ becomes larger, the Gaussian becomes more "spread-out," and as it becomes smaller, the distribution becomes more "compressed."

Let's look at some more examples.

[Figure: three surface plots of two-dimensional Gaussian densities with mean zero and increasing off-diagonal covariance.]

The figures above show Gaussians with mean 0, and with covariance matrices respectively

$$\Sigma = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix};\quad \Sigma = \begin{bmatrix} 1 & 0.5 \\ 0.5 & 1 \end{bmatrix};\quad \Sigma = \begin{bmatrix} 1 & 0.8 \\ 0.8 & 1 \end{bmatrix}.$$

The leftmost figure shows the familiar standard normal distribution, and we see that as we increase the off-diagonal entry in Σ, the density becomes more "compressed" towards the 45° line (given by x1 = x2). We can see this more clearly when we look at the contours of the same three densities:

[Figure: contour plots of the three densities above.]

Here's one last set of examples generated by varying Σ:

[Figure: contour plots of three more Gaussian densities.]

The plots above used, respectively,

$$\Sigma = \begin{bmatrix} 1 & -0.5 \\ -0.5 & 1 \end{bmatrix};\quad \Sigma = \begin{bmatrix} 1 & -0.8 \\ -0.8 & 1 \end{bmatrix};\quad \Sigma = \begin{bmatrix} 3 & 0.8 \\ 0.8 & 1 \end{bmatrix}.$$

From the leftmost and middle figures, we see that by decreasing the off-diagonal elements of the covariance matrix, the density now becomes "compressed" again, but in the opposite direction. Lastly, as we vary the parameters, more generally the contours will form ellipses (the rightmost figure showing an example).

As our last set of examples, fixing Σ = I, by varying µ, we can also move the mean of the density around.

[Figure: three surface plots of Gaussian densities with Σ = I and different means.]

The figures above were generated using Σ = I, and respectively

$$\mu = \begin{bmatrix} 1 \\ 0 \end{bmatrix};\quad \mu = \begin{bmatrix} -0.5 \\ 0 \end{bmatrix};\quad \mu = \begin{bmatrix} -1 \\ -1.5 \end{bmatrix}.$$

4.1.2 The Gaussian discriminant analysis model

When we have a classification problem in which the input features x are continuous-valued random variables, we can then use the Gaussian Discriminant Analysis (GDA) model, which models p(x|y) using a multivariate normal distribution. The model is:

$$\begin{aligned} y &\sim \text{Bernoulli}(\phi) \\ x|y=0 &\sim N(\mu_0, \Sigma) \\ x|y=1 &\sim N(\mu_1, \Sigma) \end{aligned}$$

Writing out the distributions, this is:

$$\begin{aligned} p(y) &= \phi^y(1-\phi)^{1-y} \\ p(x|y=0) &= \frac{1}{(2\pi)^{d/2}|\Sigma|^{1/2}}\exp\left(-\frac{1}{2}(x-\mu_0)^T\Sigma^{-1}(x-\mu_0)\right) \\ p(x|y=1) &= \frac{1}{(2\pi)^{d/2}|\Sigma|^{1/2}}\exp\left(-\frac{1}{2}(x-\mu_1)^T\Sigma^{-1}(x-\mu_1)\right) \end{aligned}$$
