The Hundred-Page Machine Learning Book by Andriy Burkov PDF

Summary

This book, "The Hundred-Page Machine Learning Book," introduces fundamental concepts in machine learning, including supervised and unsupervised learning methods. It explains various types of algorithms, and provides a practical understanding of the field for beginners and experienced practitioners. The book can assist with brainstorming at the beginning of a project or to help determine if a technical or business problem is "machine-learnable."

Full Transcript

Andriy Burkov's THE HUNDRED-PAGE MACHINE LEARNING BOOK

Preface

Let's start by telling the truth: machines don't learn. What a typical "learning machine" does is find a mathematical formula which, when applied to a collection of inputs (called "training data"), produces the desired outputs. This mathematical formula also generates the correct outputs for most other inputs (distinct from the training data), on the condition that those inputs come from the same or a similar statistical distribution as the one the training data was drawn from.

Why isn't that learning? Because if you slightly distort the inputs, the output is very likely to become completely wrong. That's not how learning in animals works. If you learned to play a video game by looking straight at the screen, you would still be a good player if someone rotated the screen slightly. A machine learning algorithm trained by "looking" straight at the screen will fail to play the game on a rotated screen, unless it was also trained to recognize rotation.

So why the name "machine learning" then? The reason, as is often the case, is marketing: Arthur Samuel, an American pioneer in the fields of computer gaming and artificial intelligence, coined the term in 1959 while at IBM. Similarly to how IBM tried to market the term "cognitive computing" in the 2010s to stand out from the competition, in the 1960s IBM used the new, cool term "machine learning" to attract both clients and talented employees.

As you can see, just as artificial intelligence is not intelligence, machine learning is not learning. However, machine learning is a universally recognized term that usually refers to the science and engineering of building machines capable of doing various useful things without being explicitly programmed to do so. So the word "learning" in the term is used by analogy with learning in animals rather than literally.

Who This Book is For

This book contains only those parts of the vast body of material on machine learning developed since the 1960s that have proven to have significant practical value. A beginner in machine learning will find in this book just enough details to get a comfortable level of understanding of the field and start asking the right questions. Practitioners with experience can use this book as a collection of directions for further self-improvement. The book also comes in handy when brainstorming at the beginning of a project, when you try to answer the question of whether a given technical or business problem is "machine-learnable" and, if yes, which techniques you should try to solve it.

How to Use This Book

If you are about to start learning machine learning, you should read this book from the beginning to the end. (It's just a hundred pages, not a big deal.) If you are interested in a […]

1 Introduction

1.1 What is Machine Learning

Machine learning is a subfield of computer science that is concerned with building algorithms which, to be useful, rely on a collection of examples of some phenomenon. These examples can come from nature, be handcrafted by humans, or be generated by another algorithm. Machine learning can also be defined as the process of solving a practical problem by 1) gathering a dataset, and 2) algorithmically building a statistical model based on that dataset. That statistical model is assumed to be used somehow to solve the practical problem. To save keystrokes, I use the terms "learning" and "machine learning" interchangeably.
1.2 Types of Learning

Learning can be supervised, semi-supervised, unsupervised and reinforcement.

1.2.1 Supervised Learning

In supervised learning, the dataset is the collection of labeled examples $\{(\mathbf{x}_i, y_i)\}_{i=1}^{N}$. (If a term is in bold, that means the term can be found in the index at the end of the book.) Each element $\mathbf{x}_i$ among $N$ is called a feature vector. A feature vector is a vector in which each dimension $j = 1, \dots, D$ contains a value that describes the example somehow. That value is called a feature and is denoted as $x^{(j)}$. For instance, if each example $\mathbf{x}$ in our collection represents a person, then the first feature, $x^{(1)}$, could contain height in cm, the second feature, $x^{(2)}$, could contain weight in kg, $x^{(3)}$ could contain gender, and so on. For all examples in the dataset, the feature at position $j$ in the feature vector always contains the same kind of information. It means that if $x_i^{(2)}$ contains weight in kg in some example $\mathbf{x}_i$, then $x_k^{(2)}$ will also contain weight in kg in every example $\mathbf{x}_k$, $k = 1, \dots, N$.

The label $y_i$ can be either an element belonging to a finite set of classes $\{1, 2, \dots, C\}$, or a real number, or a more complex structure, like a vector, a matrix, a tree, or a graph. Unless otherwise stated, in this book $y_i$ is either one of a finite set of classes or a real number. (A real number is a quantity that can represent a distance along a line. Examples: 0, −256.34, 1000, 1000.2.) You can see a class as a category to which an example belongs. For instance, if your examples are email messages and your problem is spam detection, then you have two classes: {spam, not_spam}.

The goal of a supervised learning algorithm is to use the dataset to produce a model that takes a feature vector $\mathbf{x}$ as input and outputs information that allows deducing the label for this feature vector. For instance, the model created using the dataset of people could take as input a feature vector describing a person and output a probability that the person has cancer.
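To make the notation concrete, here is a minimal sketch of such a labeled dataset in Python; the people, values, and gender encoding are made up purely for illustration:

```python
# A sketch of the labeled dataset {(x_i, y_i)} described above. The feature at
# each position means the same thing in every example: x(1) is height in cm,
# x(2) is weight in kg, x(3) is gender encoded as a number. Values are made up.
dataset = [
    ([178.0, 75.0, 0], 1),   # (feature vector x_i, label y_i)
    ([165.0, 58.0, 1], 0),
    ([182.0, 90.0, 0], 1),
]
N = len(dataset)             # number of labeled examples
D = len(dataset[0][0])       # dimensionality of each feature vector
print(N, D)                  # 3 examples, 3 features each
```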
1.2.2 Unsupervised Learning

In unsupervised learning, the dataset is a collection of unlabeled examples $\{\mathbf{x}_i\}_{i=1}^{N}$. Again, $\mathbf{x}$ is a feature vector, and the goal of an unsupervised learning algorithm is to create a model that takes a feature vector $\mathbf{x}$ as input and either transforms it into another vector or into a value that can be used to solve a practical problem. For example, in clustering, the model returns the id of the cluster for each feature vector in the dataset. In dimensionality reduction, the output of the model is a feature vector that has fewer features than the input $\mathbf{x}$; in outlier detection, the output is a real number that indicates how $\mathbf{x}$ differs from a "typical" example in the dataset.

1.2.3 Semi-Supervised Learning

In semi-supervised learning, the dataset contains both labeled and unlabeled examples. Usually, the quantity of unlabeled examples is much higher than the number of labeled examples. The goal of a semi-supervised learning algorithm is the same as the goal of the supervised learning algorithm. The hope here is that using many unlabeled examples can help the learning algorithm to find (we might say "produce" or "compute") a better model. It could look counter-intuitive that learning could benefit from adding more unlabeled examples; it seems like we add more uncertainty to the problem. However, when you add unlabeled examples, you add more information about your problem: a larger sample reflects better the probability distribution the labeled data came from. Theoretically, a learning algorithm should be able to leverage this additional information.

1.2.4 Reinforcement Learning

Reinforcement learning is a subfield of machine learning where the machine "lives" in an environment and is capable of perceiving the state of that environment as a vector of features. The machine can execute actions in every state. Different actions bring different rewards and could also move the machine to another state of the environment. The goal of a reinforcement learning algorithm is to learn a policy. A policy is a function (similar to the model in supervised learning) that takes the feature vector of a state as input and outputs an optimal action to execute in that state. The action is optimal if it maximizes the expected average reward.

Reinforcement learning solves a particular kind of problem where decision making is sequential and the goal is long-term, such as game playing, robotics, resource management, or logistics. In this book, I put emphasis on one-shot decision making, where input examples are independent of one another and of the predictions made in the past. I leave reinforcement learning out of the scope of this book.

1.3 How Supervised Learning Works

In this section, I briefly explain how supervised learning works so that you have the picture of the whole process before we go into detail. I decided to use supervised learning as an example because it's the type of machine learning most frequently used in practice.

The supervised learning process starts with gathering the data. The data for supervised learning is a collection of pairs (input, output). Input could be anything, for example, email messages, pictures, or sensor measurements. Outputs are usually real numbers or labels (e.g. "spam", "not_spam", "cat", "dog", "mouse", etc.). In some cases, outputs are vectors (e.g., the four coordinates of the rectangle around a person on a picture), sequences (e.g. ["adjective", "adjective", "noun"] for the input "big beautiful car"), or have some other structure.

Let's say the problem that you want to solve using supervised learning is spam detection. You gather the data, for example, 10,000 email messages, each with a label either "spam" or "not_spam" (you could add those labels manually or pay someone to do that for you). Now you have to convert each email message into a feature vector.

The data analyst decides, based on their experience, how to convert a real-world entity, such as an email message, into a feature vector. One common way to convert a text into a feature vector, called bag of words, is to take a dictionary of English words (let's say it contains 20,000 alphabetically sorted words) and stipulate that in our feature vector: the first feature is equal to 1 if the email message contains the word "a", otherwise this feature is 0; the second feature is equal to 1 if the email message contains the word "aaron", otherwise this feature equals 0; …; the feature at position 20,000 is equal to 1 if the email message contains the word "zulu", otherwise this feature is equal to 0. You repeat the above procedure for every email message in our collection, which gives us 10,000 feature vectors (each vector having the dimensionality of 20,000) and a label ("spam"/"not_spam").
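To make the bag-of-words procedure concrete, here is a minimal Python sketch; the tiny vocabulary and the two messages are made-up stand-ins for the 20,000-word dictionary and the 10,000 emails described above:

```python
# A minimal bag-of-words sketch: each feature is 1 if the corresponding
# dictionary word occurs in the message, else 0.
vocabulary = sorted(["a", "buy", "cheap", "hello", "meeting", "pills", "tomorrow"])

def to_feature_vector(message, vocabulary):
    """Return a binary vector over the alphabetically sorted vocabulary."""
    words = set(message.lower().split())
    return [1 if word in words else 0 for word in vocabulary]

emails = [
    ("buy cheap pills", "spam"),
    ("hello about a meeting tomorrow", "not_spam"),
]
dataset = [(to_feature_vector(text, vocabulary), label) for text, label in emails]
for vector, label in dataset:
    print(vector, label)
```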
Now you have machine-readable input data, but the output labels are still in the form of human-readable text. Some learning algorithms require transforming labels into numbers. For example, some algorithms require numbers like 0 (to represent the label "not_spam") and 1 (to represent the label "spam"). The algorithm I use to illustrate supervised learning is called Support Vector Machine (SVM). This algorithm requires that the positive label (in our case it's "spam") has the numeric value of +1 (one) and the negative label ("not_spam") has the value of −1 (minus one).

At this point, you have a dataset and a learning algorithm, so you are ready to apply the learning algorithm to the dataset to get the model.

SVM sees every feature vector as a point in a high-dimensional space (in our case, the space is 20,000-dimensional). The algorithm puts all feature vectors on an imaginary 20,000-dimensional plot and draws an imaginary 19,999-dimensional line (a hyperplane) that separates examples with positive labels from examples with negative labels. In machine learning, the boundary separating the examples of different classes is called the decision boundary.

The equation of the hyperplane is given by two parameters, a real-valued vector $\mathbf{w}$ of the same dimensionality as our input feature vector $\mathbf{x}$, and a real number $b$, like this:

$$\mathbf{w}\mathbf{x} - b = 0,$$

where the expression $\mathbf{w}\mathbf{x}$ means $w^{(1)}x^{(1)} + w^{(2)}x^{(2)} + \dots + w^{(D)}x^{(D)}$, and $D$ is the number of dimensions of the feature vector $\mathbf{x}$. (If some equations aren't clear to you right now, in Chapter 2 we revisit the math and statistical concepts necessary to understand them. For the moment, try to get an intuition of what's happening here. It all becomes clearer after you read the next chapter.)

Now, the predicted label for some input feature vector $\mathbf{x}$ is given like this:

$$y = \mathrm{sign}(\mathbf{w}\mathbf{x} - b),$$

where sign is a mathematical operator that takes any value as input and returns +1 if the input is a positive number or −1 if the input is a negative number.

The goal of the learning algorithm — SVM in this case — is to leverage the dataset and find the optimal values $\mathbf{w}^*$ and $b^*$ for parameters $\mathbf{w}$ and $b$. Once the learning algorithm identifies these optimal values, the model $f(\mathbf{x})$ is then defined as:

$$f(\mathbf{x}) = \mathrm{sign}(\mathbf{w}^*\mathbf{x} - b^*).$$

Therefore, to predict whether an email message is spam or not spam using an SVM model, you have to take the text of the message, convert it into a feature vector, then multiply this vector by $\mathbf{w}^*$, subtract $b^*$ and take the sign of the result. This will give us the prediction (+1 means "spam", −1 means "not_spam").

Now, how does the machine find $\mathbf{w}^*$ and $b^*$? It solves an optimization problem. Machines are good at optimizing functions under constraints. So what are the constraints we want to satisfy here? First of all, we want the model to predict the labels of our 10,000 examples correctly. Remember that each example $i = 1, \dots, 10000$ is given by a pair $(\mathbf{x}_i, y_i)$, where $\mathbf{x}_i$ is the feature vector of example $i$ and $y_i$ is its label that takes values either −1 or +1. So the constraints are naturally:

$$\mathbf{w}\mathbf{x}_i - b \ge +1 \ \text{if} \ y_i = +1, \qquad \mathbf{w}\mathbf{x}_i - b \le -1 \ \text{if} \ y_i = -1.$$
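Here is a small sketch of prediction with a linear SVM model as just described; the parameter values are invented for illustration, not learned from data:

```python
# Prediction with a (hypothetical, already-trained) linear SVM:
# f(x) = sign(w·x - b). We treat a zero argument as positive here.
def sign(z):
    return 1 if z >= 0 else -1      # +1 -> "spam", -1 -> "not_spam"

def svm_predict(x, w, b):
    """Return the predicted label for feature vector x."""
    wx = sum(w_j * x_j for w_j, x_j in zip(w, x))
    return sign(wx - b)

w_star = [0.9, -0.4, 1.3]           # made-up "optimal" parameters
b_star = 0.5
print(svm_predict([1, 0, 1], w_star, b_star))   # 1, i.e. "spam"
```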
Figure 1: An example of an SVM model for two-dimensional feature vectors.

We would also prefer that the hyperplane separates positive examples from negative ones with the largest margin. The margin is the distance between the closest examples of the two classes, as defined by the decision boundary. A large margin contributes to a better generalization, that is, how well the model will classify new examples in the future. To achieve that, we need to minimize the Euclidean norm of $\mathbf{w}$, denoted by $\|\mathbf{w}\|$ and given by $\sqrt{\sum_{j=1}^{D} (w^{(j)})^2}$.

So, the optimization problem that we want the machine to solve looks like this:

$$\text{Minimize } \|\mathbf{w}\| \text{ subject to } y_i(\mathbf{w}\mathbf{x}_i - b) \ge 1 \text{ for } i = 1, \dots, N.$$

The expression $y_i(\mathbf{w}\mathbf{x}_i - b) \ge 1$ is just a compact way to write the above two constraints.

The solution of this optimization problem, given by $\mathbf{w}^*$ and $b^*$, is called the statistical model, or, simply, the model. The process of building the model is called training.

For two-dimensional feature vectors, the problem and the solution can be visualized as shown in Figure 1. The blue and orange circles represent, respectively, positive and negative examples, and the line given by $\mathbf{w}\mathbf{x} - b = 0$ is the decision boundary.

Why, by minimizing the norm of $\mathbf{w}$, do we find the highest margin between the two classes? Geometrically, the equations $\mathbf{w}\mathbf{x} - b = 1$ and $\mathbf{w}\mathbf{x} - b = -1$ define two parallel hyperplanes, as you see in Figure 1. The distance between these hyperplanes is given by $\frac{2}{\|\mathbf{w}\|}$, so the smaller the norm $\|\mathbf{w}\|$, the larger the distance between these two hyperplanes.

That's how Support Vector Machines work. This particular version of the algorithm builds the so-called linear model. It's called linear because the decision boundary is a straight line (or a plane, or a hyperplane). SVM can also incorporate kernels that can make the decision boundary arbitrarily non-linear. In some cases, it could be impossible to perfectly separate the two groups of points because of noise in the data, errors of labeling, or outliers (examples very different from a "typical" example in the dataset). Another version of SVM can also incorporate a penalty hyperparameter for misclassification of training examples of specific classes. (A hyperparameter is a property of a learning algorithm, usually (but not always) having a numerical value. That value influences the way the algorithm works. Those values aren't learned by the algorithm itself from data. They have to be set by the data analyst before running the algorithm.) We study the SVM algorithm in more detail in Chapter 3.

At this point, you should retain the following: any classification learning algorithm that builds a model implicitly or explicitly creates a decision boundary. The decision boundary can be straight, or curved, or it can have a complex form, or it can be a superposition of some geometrical figures. The form of the decision boundary determines the accuracy of the model (that is, the ratio of examples whose labels are predicted correctly). The form of the decision boundary, and the way it is algorithmically or mathematically computed based on the training data, differentiates one learning algorithm from another. In practice, there are two other essential differentiators of learning algorithms to consider: speed of model building and prediction processing time. In many practical cases, you would prefer a learning algorithm that builds a less accurate model fast. Additionally, you might prefer a less accurate model that is much quicker at making predictions.
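As a small illustration of the ideas above, the following sketch checks the constraint $y_i(\mathbf{w}\mathbf{x}_i - b) \ge 1$ for a couple of made-up examples and computes the margin width $2/\|\mathbf{w}\|$; the parameters and data are invented, not a trained model:

```python
import math

# Check the hard-margin constraints and compute the margin width for
# hypothetical parameters w, b and two made-up labeled examples.
w, b = [1.0, -1.0], 0.0
data = [([2.0, 0.5], 1), ([0.5, 2.0], -1)]   # (feature vector, label)

def dot(u, v):
    return sum(a * c for a, c in zip(u, v))

satisfied = all(y * (dot(w, x) - b) >= 1 for x, y in data)
margin = 2 / math.sqrt(sum(w_j ** 2 for w_j in w))
print(satisfied, margin)   # True 1.414... (= 2/||w||)
```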
1.4 Why the Model Works on New Data

Why is a machine-learned model capable of predicting correctly the labels of new, previously unseen examples? To understand that, look at the plot in Figure 1. If two classes are separable from one another by a decision boundary, then, obviously, examples that belong to each class are located in two different subspaces which the decision boundary creates. If the examples used for training were selected randomly, independently of one another, and following the same procedure, then, statistically, it is more likely that a new negative example will be located on the plot somewhere not too far from other negative examples. The same concerns a new positive example: it will likely come from the surroundings of other positive examples. In such a case, our decision boundary will still, with high probability, separate new positive and negative examples well from one another. For other, less likely situations, our model will make errors, but because such situations are less likely, the number of errors will likely be smaller than the number of correct predictions. Intuitively, the larger the set of training examples, the more unlikely it is that the new examples will be dissimilar to (and lie on the plot far from) the examples used for training. To minimize the probability of making errors on new examples, the SVM algorithm, by looking for the largest margin, explicitly tries to draw the decision boundary in such a way that it lies as far as possible from examples of both classes.

The reader interested in knowing more about learnability and in understanding the close relationship between the model error, the size of the training set, the form of the mathematical equation that defines the model, and the time it takes to build the model is encouraged to read about PAC learning. The PAC (for "probably approximately correct") learning theory helps to analyze whether and under what conditions a learning algorithm will probably output an approximately correct classifier.

2 Notation and Definitions

2.1 Notation

Let's start by revisiting the mathematical notation we all learned at school, but which some of us likely forgot right after the prom.

2.1.1 Data Structures

A scalar is a simple numerical value, like 15 or −3.25. Variables or constants that take scalar values are denoted by an italic letter, like $x$ or $a$.

A vector is an ordered list of scalar values, called attributes. We denote a vector as a bold character, for example, $\mathbf{x}$ or $\mathbf{w}$. Vectors can be visualized as arrows that point in some directions, as well as points in a multi-dimensional space. Illustrations of three two-dimensional vectors, $\mathbf{a} = [2, 3]$, $\mathbf{b} = [-2, 5]$, and $\mathbf{c} = [1, 0]$, are given in Figure 1.

Figure 1: Three vectors visualized as directions and as points.

We denote an attribute of a vector as an italic value with an index, like this: $w^{(j)}$ or $x^{(j)}$. The index $j$ denotes a specific dimension of the vector, the position of an attribute in the list. For instance, in the vector $\mathbf{a}$ shown in red in Figure 1, $a^{(1)} = 2$ and $a^{(2)} = 3$.

The notation $x^{(j)}$ should not be confused with the power operator, such as $x^2$ (squared) or $x^3$ (cubed). If we want to apply a power operator, say square, to an indexed attribute of a vector, we write like this: $(x^{(j)})^2$.

A variable can have two or more indices, like this: $x_i^{(j)}$ or like this: $x_{i,j}^{(k)}$. For example, in neural networks, we denote as $x_{l,u}^{(j)}$ the input feature $j$ of unit $u$ in layer $l$.

A matrix is a rectangular array of numbers arranged in rows and columns.
Below is an example of a matrix with two rows and three columns:

$$\begin{bmatrix} 2 & 4 & -3 \\ 21 & -6 & -1 \end{bmatrix}.$$

Matrices are denoted with bold capital letters, such as $\mathbf{A}$ or $\mathbf{W}$.

A set is an unordered collection of unique elements. We denote a set as a calligraphic capital character, for example, $\mathcal{S}$. A set of numbers can be finite (include a fixed amount of values). In this case, it is denoted using accolades, for example, $\{1, 3, 18, 23, 235\}$ or $\{x_1, x_2, x_3, x_4, \dots, x_n\}$. A set can be infinite and include all values in some interval. If a set includes all values between $a$ and $b$, including $a$ and $b$, it is denoted using brackets as $[a, b]$. If the set doesn't include the values $a$ and $b$, such a set is denoted using parentheses like this: $(a, b)$. For example, the set $[0, 1]$ includes such values as 0, 0.0001, 0.25, 0.784, 0.9995, and 1.0. A special set denoted $\mathbb{R}$ includes all numbers from minus infinity to plus infinity.

When an element $x$ belongs to a set $\mathcal{S}$, we write $x \in \mathcal{S}$. We can obtain a new set $\mathcal{S}_3$ as an intersection of two sets $\mathcal{S}_1$ and $\mathcal{S}_2$. In this case, we write $\mathcal{S}_3 \leftarrow \mathcal{S}_1 \cap \mathcal{S}_2$. For example, $\{1, 3, 5, 8\} \cap \{1, 8, 4\}$ gives the new set $\{1, 8\}$. We can obtain a new set $\mathcal{S}_3$ as a union of two sets $\mathcal{S}_1$ and $\mathcal{S}_2$. In this case, we write $\mathcal{S}_3 \leftarrow \mathcal{S}_1 \cup \mathcal{S}_2$. For example, $\{1, 3, 5, 8\} \cup \{1, 8, 4\}$ gives the new set $\{1, 3, 4, 5, 8\}$.

2.1.2 Capital Sigma Notation

The summation over a collection $X = \{x_1, x_2, \dots, x_{n-1}, x_n\}$ or over the attributes of a vector $\mathbf{x} = [x^{(1)}, x^{(2)}, \dots, x^{(m-1)}, x^{(m)}]$ is denoted like this:

$$\sum_{i=1}^{n} x_i \overset{\text{def}}{=} x_1 + x_2 + \dots + x_{n-1} + x_n, \quad \text{or else:} \quad \sum_{j=1}^{m} x^{(j)} \overset{\text{def}}{=} x^{(1)} + x^{(2)} + \dots + x^{(m-1)} + x^{(m)}.$$

The notation $\overset{\text{def}}{=}$ means "is defined as".

2.1.3 Capital Pi Notation

A notation analogous to capital sigma is the capital pi notation. It denotes a product of elements in a collection or attributes of a vector:

$$\prod_{i=1}^{n} x_i \overset{\text{def}}{=} x_1 \cdot x_2 \cdot \ldots \cdot x_{n-1} \cdot x_n,$$

where $a \cdot b$ means $a$ multiplied by $b$. Where possible, we omit $\cdot$ to simplify the notation, so $ab$ also means $a$ multiplied by $b$.

2.1.4 Operations on Sets

A derived set creation operator looks like this: $\mathcal{S}' \leftarrow \{x^2 \mid x \in \mathcal{S}, x > 3\}$. This notation means that we create a new set $\mathcal{S}'$ by putting into it $x$ squared such that $x$ is in $\mathcal{S}$ and $x$ is greater than 3. The cardinality operator $|\mathcal{S}|$ returns the number of elements in set $\mathcal{S}$.

2.1.5 Operations on Vectors

The sum of two vectors $\mathbf{x} + \mathbf{z}$ is defined as the vector $[x^{(1)} + z^{(1)}, x^{(2)} + z^{(2)}, \dots, x^{(m)} + z^{(m)}]$. The difference of two vectors $\mathbf{x} - \mathbf{z}$ is defined as $[x^{(1)} - z^{(1)}, x^{(2)} - z^{(2)}, \dots, x^{(m)} - z^{(m)}]$.

A vector multiplied by a scalar is a vector. For example, $\mathbf{x}c \overset{\text{def}}{=} [cx^{(1)}, cx^{(2)}, \dots, cx^{(m)}]$.

A dot-product of two vectors is a scalar. For example, $\mathbf{w}\mathbf{x} \overset{\text{def}}{=} \sum_{i=1}^{m} w^{(i)} x^{(i)}$. In some books, the dot-product is denoted as $\mathbf{w} \cdot \mathbf{x}$. The two vectors must be of the same dimensionality. Otherwise, the dot-product is undefined.
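Much of this notation maps one-to-one onto code. A short Python sketch, with made-up numbers:

```python
import math

# Capital sigma is a sum, capital pi is a product, and the dot-product pairs
# up the attributes of two vectors of the same dimensionality.
x = [2.0, 3.0, 1.0]
w = [0.5, -1.0, 4.0]

sigma = sum(x)                                    # Σ_i x_i = 6.0
pi = math.prod(x)                                 # Π_i x_i = 6.0
dot = sum(w_i * x_i for w_i, x_i in zip(w, x))    # wx = 1.0 - 3.0 + 4.0 = 2.0
print(sigma, pi, dot)

# Set operations and the derived set S' = {x² | x ∈ S, x > 3}:
S1, S2 = {1, 3, 5, 8}, {1, 8, 4}
print(S1 & S2)                        # intersection: {1, 8}
print(S1 | S2)                        # union: {1, 3, 4, 5, 8}
print({v ** 2 for v in S1 if v > 3})  # derived set: {25, 64}
```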
The multiplication of a matrix $\mathbf{W}$ by a vector $\mathbf{x}$ results in another vector. Let our matrix be

$$\mathbf{W} = \begin{bmatrix} w^{(1,1)} & w^{(1,2)} & w^{(1,3)} \\ w^{(2,1)} & w^{(2,2)} & w^{(2,3)} \end{bmatrix}.$$

When vectors participate in operations on matrices, a vector is by default represented as a matrix with one column. When the vector is on the right of the matrix, it remains a column vector. We can only multiply a matrix by a vector if the vector has the same number of rows as the number of columns in the matrix. Let our vector be $\mathbf{x} \overset{\text{def}}{=} [x^{(1)}, x^{(2)}, x^{(3)}]$. Then $\mathbf{W}\mathbf{x}$ is a two-dimensional vector defined as

$$\mathbf{W}\mathbf{x} = \begin{bmatrix} w^{(1,1)} & w^{(1,2)} & w^{(1,3)} \\ w^{(2,1)} & w^{(2,2)} & w^{(2,3)} \end{bmatrix} \begin{bmatrix} x^{(1)} \\ x^{(2)} \\ x^{(3)} \end{bmatrix} \overset{\text{def}}{=} \begin{bmatrix} w^{(1,1)}x^{(1)} + w^{(1,2)}x^{(2)} + w^{(1,3)}x^{(3)} \\ w^{(2,1)}x^{(1)} + w^{(2,2)}x^{(2)} + w^{(2,3)}x^{(3)} \end{bmatrix} = \begin{bmatrix} \mathbf{w}^{(1)}\mathbf{x} \\ \mathbf{w}^{(2)}\mathbf{x} \end{bmatrix}.$$

If our matrix had, say, five rows, the result of the product would be a five-dimensional vector.

When the vector is on the left side of the matrix in the multiplication, then it has to be transposed before we multiply it by the matrix. The transpose of the vector $\mathbf{x}$, denoted as $\mathbf{x}^\top$, makes a row vector out of a column vector. Let's say

$$\mathbf{x} = \begin{bmatrix} x^{(1)} \\ x^{(2)} \end{bmatrix}, \quad \text{then} \quad \mathbf{x}^\top = \begin{bmatrix} x^{(1)} & x^{(2)} \end{bmatrix}.$$

The multiplication of the vector $\mathbf{x}$ by the matrix $\mathbf{W}$ is given by $\mathbf{x}^\top\mathbf{W}$,

$$\mathbf{x}^\top\mathbf{W} = \begin{bmatrix} x^{(1)} & x^{(2)} \end{bmatrix} \begin{bmatrix} w^{(1,1)} & w^{(1,2)} & w^{(1,3)} \\ w^{(2,1)} & w^{(2,2)} & w^{(2,3)} \end{bmatrix} \overset{\text{def}}{=} \begin{bmatrix} w^{(1,1)}x^{(1)} + w^{(2,1)}x^{(2)}, \; w^{(1,2)}x^{(1)} + w^{(2,2)}x^{(2)}, \; w^{(1,3)}x^{(1)} + w^{(2,3)}x^{(2)} \end{bmatrix}.$$

As you can see, we can only multiply a vector by a matrix if the vector has the same number of dimensions as the number of rows in the matrix.

2.1.6 Functions

A function is a relation that associates each element $x$ of a set $\mathcal{X}$, the domain of the function, to a single element $y$ of another set $\mathcal{Y}$, the codomain of the function. A function usually has a name. If the function is called $f$, this relation is denoted $y = f(x)$ (read "f of x"); the element $x$ is the argument or input of the function, and $y$ is the value of the function or the output. The symbol that is used for representing the input is the variable of the function (we often say that $f$ is a function of the variable $x$).

We say that $f(x)$ has a local minimum at $x = c$ if $f(x) \ge f(c)$ for every $x$ in some open interval around $x = c$. An interval is a set of real numbers with the property that any number that lies between two numbers in the set is also included in the set. An open interval does not include its endpoints and is denoted using parentheses. For example, $(0, 1)$ means "all numbers greater than 0 and less than 1". The minimal value among all the local minima is called the global minimum. See the illustration in Figure 2.

Figure 2: A local and a global minimum of a function.

A vector function, denoted as $\mathbf{y} = \mathbf{f}(x)$, is a function that returns a vector $\mathbf{y}$. It can have a vector or a scalar argument.

2.1.7 Max and Arg Max

Given a set of values $\mathcal{A} = \{a_1, a_2, \dots, a_n\}$, the operator $\max_{a \in \mathcal{A}} f(a)$ returns the highest value $f(a)$ for all elements in the set $\mathcal{A}$. On the other hand, the operator $\arg\max_{a \in \mathcal{A}} f(a)$ returns the element of the set $\mathcal{A}$ that maximizes $f(a)$. Sometimes, when the set is implicit or infinite, we can write $\max_a f(a)$ or $\arg\max_a f(a)$. Operators min and arg min operate in a similar manner.

2.1.8 Assignment Operator

The expression $a \leftarrow f(x)$ means that the variable $a$ gets the new value: the result of $f(x)$. We say that the variable $a$ gets assigned a new value. Similarly, $\mathbf{a} \leftarrow [a_1, a_2]$ means that the vector variable $\mathbf{a}$ gets the two-dimensional vector value $[a_1, a_2]$.
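Before moving on to derivatives, here is a small Python sketch of the operations from Sections 2.1.5 and 2.1.7, with made-up numbers and plain lists so every index stays visible:

```python
# Matrix-vector products Wx and xᵀW, plus an arg max over a finite set.
W = [[1.0, 2.0, 3.0],     # w(1,1) w(1,2) w(1,3)
     [4.0, 5.0, 6.0]]     # w(2,1) w(2,2) w(2,3)
x3 = [1.0, 0.0, 2.0]      # 3 rows: matches the 3 columns of W
x2 = [1.0, 2.0]           # 2 dimensions: matches the 2 rows of W

# Wx: one output component per row of W.
Wx = [sum(w_jk * x_k for w_jk, x_k in zip(row, x3)) for row in W]
print(Wx)                 # [7.0, 16.0]

# xᵀW: one output component per column of W.
xTW = [sum(x2[i] * W[i][k] for i in range(2)) for k in range(3)]
print(xTW)                # [9.0, 12.0, 15.0]

# arg max over a finite set A of the function f(a) = -(a - 2)²:
A = [0, 1, 2, 3]
print(max(A, key=lambda a: -(a - 2) ** 2))   # 2
```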
2.1.9 Derivative and Gradient

A derivative $f'$ of a function $f$ is a function or a value that describes how fast $f$ grows (or decreases). If the derivative is a constant value, like 5 or −3, then the function grows (or decreases) constantly at any point $x$ of its domain. If the derivative $f'$ is a function, then the function $f$ can grow at a different pace in different regions of its domain. If the derivative $f'$ is positive at some point $x$, then the function $f$ grows at this point. If the derivative of $f$ is negative at some $x$, then the function decreases at this point. A derivative of zero at $x$ means that the function's slope at $x$ is horizontal.

The process of finding a derivative is called differentiation. Derivatives for basic functions are known. For example, if $f(x) = x^2$, then $f'(x) = 2x$; if $f(x) = 2x$, then $f'(x) = 2$; if $f(x) = 2$, then $f'(x) = 0$ (the derivative of any function $f(x) = c$, where $c$ is a constant value, is zero).

If the function we want to differentiate is not basic, we can find its derivative using the chain rule. For instance, if $F(x) = f(g(x))$, where $f$ and $g$ are some functions, then $F'(x) = f'(g(x))g'(x)$. For example, if $F(x) = (5x + 1)^2$, then $g(x) = 5x + 1$ and $f(g(x)) = (g(x))^2$. By applying the chain rule, we find $F'(x) = 2(5x + 1)g'(x) = 2(5x + 1) \cdot 5 = 50x + 10$.

Gradient is the generalization of derivative for functions that take several inputs (or one input in the form of a vector or some other complex structure). A gradient of a function is a vector of partial derivatives. You can look at finding a partial derivative of a function as the process of finding the derivative by focusing on one of the function's inputs and by considering all other inputs as constant values.

For example, if our function is defined as $f([x^{(1)}, x^{(2)}]) = ax^{(1)} + bx^{(2)} + c$, then the partial derivative of function $f$ with respect to $x^{(1)}$, denoted as $\frac{\partial f}{\partial x^{(1)}}$, is given by

$$\frac{\partial f}{\partial x^{(1)}} = a + 0 + 0 = a,$$

where $a$ is the derivative of the function $ax^{(1)}$; the two zeroes are respectively the derivatives of $bx^{(2)}$ and $c$, because $x^{(2)}$ is considered constant when we compute the derivative with respect to $x^{(1)}$, and the derivative of any constant is zero.

Similarly, the partial derivative of function $f$ with respect to $x^{(2)}$, $\frac{\partial f}{\partial x^{(2)}}$, is given by

$$\frac{\partial f}{\partial x^{(2)}} = 0 + b + 0 = b.$$

The gradient of function $f$, denoted as $\nabla f$, is given by the vector $\left[\frac{\partial f}{\partial x^{(1)}}, \frac{\partial f}{\partial x^{(2)}}\right]$. The chain rule works with partial derivatives too, as I illustrate in Chapter 4.
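A quick numerical check of this example; the finite-difference helper below is a generic sketch, and the constants a, b, c are arbitrary:

```python
# Numerically estimate the gradient of f([x1, x2]) = a*x1 + b*x2 + c.
# The result should be approximately [a, b], as derived above.
a, b, c = 3.0, -2.0, 7.0

def f(x):
    return a * x[0] + b * x[1] + c

def numerical_gradient(f, x, h=1e-6):
    """Finite-difference estimate of the vector of partial derivatives."""
    grad = []
    for j in range(len(x)):
        bumped = list(x)
        bumped[j] += h       # perturb one input, hold the others constant
        grad.append((f(bumped) - f(x)) / h)
    return grad

print(numerical_gradient(f, [1.0, 1.0]))   # ≈ [3.0, -2.0]
```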
Figure 3: A probability mass function (a) and a probability density function (b).

2.2 Random Variable

A random variable, usually written as an italic capital letter, like $X$, is a variable whose possible values are numerical outcomes of a random phenomenon. Examples of random phenomena with a numerical outcome include a toss of a coin (0 for heads and 1 for tails), a roll of a die, or the height of the first stranger you meet outside. There are two types of random variables: discrete and continuous.

A discrete random variable takes on only a countable number of distinct values such as red, yellow, blue or 1, 2, 3, …. The probability distribution of a discrete random variable is described by a list of probabilities associated with each of its possible values. This list of probabilities is called a probability mass function (pmf). For example: Pr(X = red) = 0.3, Pr(X = yellow) = 0.45, Pr(X = blue) = 0.25. Each probability in a probability mass function is a value greater than or equal to 0. The sum of probabilities equals 1 (Figure 3a).

A continuous random variable (CRV) takes an infinite number of possible values in some interval. Examples include height, weight, and time. Because the number of values of a continuous random variable $X$ is infinite, the probability Pr(X = c) for any $c$ is 0. Therefore, instead of the list of probabilities, the probability distribution of a CRV (a continuous probability distribution) is described by a probability density function (pdf). The pdf is a function whose codomain is nonnegative and whose area under the curve is equal to 1 (Figure 3b).

Let a discrete random variable $X$ have $k$ possible values $\{x_i\}_{i=1}^{k}$. The expectation of $X$, denoted as $E[X]$, is given by

$$E[X] \overset{\text{def}}{=} \sum_{i=1}^{k} x_i \cdot \Pr(X = x_i) = x_1 \cdot \Pr(X = x_1) + x_2 \cdot \Pr(X = x_2) + \dots + x_k \cdot \Pr(X = x_k), \quad (1)$$

where $\Pr(X = x_i)$ is the probability that $X$ has the value $x_i$ according to the pmf. The expectation of a random variable is also called the mean, average or expected value and is frequently denoted with the letter $\mu$. The expectation is one of the most important statistics of a random variable.

Another important statistic is the standard deviation, defined as

$$\sigma \overset{\text{def}}{=} \sqrt{E[(X - \mu)^2]}.$$

Variance, denoted as $\sigma^2$ or $\mathrm{var}(X)$, is defined as

$$\sigma^2 = E[(X - \mu)^2].$$

For a discrete random variable, the standard deviation is given by

$$\sigma = \sqrt{\Pr(X = x_1)(x_1 - \mu)^2 + \Pr(X = x_2)(x_2 - \mu)^2 + \dots + \Pr(X = x_k)(x_k - \mu)^2},$$

where $\mu = E[X]$.

The expectation of a continuous random variable $X$ is given by

$$E[X] \overset{\text{def}}{=} \int_{\mathbb{R}} x f_X(x)\,dx, \quad (2)$$

where $f_X$ is the pdf of the variable $X$ and $\int_{\mathbb{R}}$ is the integral of the function $x f_X$. The integral is an equivalent of summation over all values of the function when the function has a continuous domain. It equals the area under the curve of the function. The property of the pdf that the area under its curve is 1 mathematically means that $\int_{\mathbb{R}} f_X(x)\,dx = 1$.

Most of the time we don't know $f_X$, but we can observe some values of $X$. In machine learning, we call these values examples, and the collection of these examples is called a sample or a dataset.

2.3 Unbiased Estimators

Because $f_X$ is usually unknown, but we have a sample $S_X = \{x_i\}_{i=1}^{N}$, we often content ourselves not with the true values of statistics of the probability distribution, such as the expectation, but with their unbiased estimators. We say that $\hat{\theta}(S_X)$ is an unbiased estimator of some statistic $\theta$ calculated using a sample $S_X$ drawn from an unknown probability distribution if $\hat{\theta}(S_X)$ has the following property:

$$E\left[\hat{\theta}(S_X)\right] = \theta,$$

where $\hat{\theta}$ is a sample statistic obtained using a sample $S_X$ and not the real statistic $\theta$ that can be obtained only by knowing $X$; the expectation is taken over all possible samples drawn from $X$. Intuitively, this means that if you can have an unlimited number of such samples as $S_X$, and you compute some unbiased estimator, such as $\hat{\mu}$, using each sample, then the average of all these $\hat{\mu}$ equals the real statistic $\mu$ that you would get computed on $X$. It can be shown that an unbiased estimator of an unknown $E[X]$ (given by either eq. 1 or eq. 2) is given by $\frac{1}{N}\sum_{i=1}^{N} x_i$ (called in statistics the sample mean).
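The definitions above are easy to check in code. The sketch below computes the expectation and standard deviation of the discrete pmf from Section 2.2 (mapping red/yellow/blue onto the made-up numeric values 1/2/3) and the sample mean of an invented sample:

```python
import math

# Expectation and standard deviation of a discrete random variable,
# then the sample mean as an unbiased estimator of E[X].
values = [1, 2, 3]                 # stand-ins for red, yellow, blue
probs = [0.30, 0.45, 0.25]

mu = sum(x * p for x, p in zip(values, probs))                          # E[X]
sigma = math.sqrt(sum(p * (x - mu) ** 2 for x, p in zip(values, probs)))
print(mu, sigma)                   # 1.95 and ≈ 0.74

# The sample mean (1/N) Σ x_i computed on a made-up sample of X:
sample = [1, 3, 2, 2, 1, 3, 2]
print(sum(sample) / len(sample))
```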
2.4 Bayes' Rule

The conditional probability $\Pr(X = x \mid Y = y)$ is the probability of the random variable $X$ having a specific value $x$ given that another random variable $Y$ has a specific value of $y$. Bayes' Rule (also known as Bayes' Theorem) stipulates that:

$$\Pr(X = x \mid Y = y) = \frac{\Pr(Y = y \mid X = x)\,\Pr(X = x)}{\Pr(Y = y)}.$$

2.5 Parameter Estimation

Bayes' Rule comes in handy when we have a model of $X$'s distribution, and this model $f_\theta$ is a function that has some parameters in the form of a vector $\theta$. An example of such a function could be the Gaussian function that has two parameters, $\mu$ and $\sigma$, and is defined as

$$f_\theta(x) = \frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{(x-\mu)^2}{2\sigma^2}}, \quad (3)$$

where $\theta \overset{\text{def}}{=} [\mu, \sigma]$. This function has all the properties of a pdf. (In fact, eq. 3 defines the pdf of one of the probability distributions most frequently used in practice, called the Gaussian distribution or normal distribution and denoted as $\mathcal{N}(\mu, \sigma^2)$.) Therefore, we can use it as a model of an unknown distribution of $X$. We can update the values of parameters in the vector $\theta$ from the data using Bayes' Rule:

$$\Pr(\theta = \hat{\theta} \mid X = x) \leftarrow \frac{\Pr(X = x \mid \theta = \hat{\theta})\,\Pr(\theta = \hat{\theta})}{\Pr(X = x)} = \frac{\Pr(X = x \mid \theta = \hat{\theta})\,\Pr(\theta = \hat{\theta})}{\sum_{\tilde{\theta}} \Pr(X = x \mid \theta = \tilde{\theta})\,\Pr(\theta = \tilde{\theta})}, \quad (4)$$

where $\Pr(X = x \mid \theta = \hat{\theta}) \overset{\text{def}}{=} f_{\hat{\theta}}$.

If we have a sample $S$ of $X$ and the set of possible values for $\theta$ is finite, we can easily estimate $\Pr(\theta = \hat{\theta})$ by applying Bayes' Rule iteratively, one example $x \in S$ at a time. The initial value of $\Pr(\theta = \hat{\theta})$ can be guessed such that $\sum_{\hat{\theta}} \Pr(\theta = \hat{\theta}) = 1$. This guess of the probabilities for different $\hat{\theta}$ is called the prior. First, we compute $\Pr(\theta = \hat{\theta} \mid X = x_1)$ for all possible values $\hat{\theta}$. Then, before updating $\Pr(\theta = \hat{\theta} \mid X = x)$ once again, this time for $x = x_2 \in S$ using eq. 4, we replace the prior $\Pr(\theta = \hat{\theta})$ in eq. 4 by the new estimate $\Pr(\theta = \hat{\theta}) \leftarrow \frac{1}{N}\sum_{x \in S} \Pr(\theta = \hat{\theta} \mid X = x)$.

The best value of the parameters $\theta^*$ given the sample is obtained using the principle of maximum a posteriori (or MAP):

$$\theta^* = \arg\max_{\hat{\theta}} \prod_{i=1}^{N} \Pr(\theta = \hat{\theta} \mid X = x_i). \quad (5)$$

If the set of possible values for $\theta$ isn't finite, then we need to optimize eq. 5 directly using a numerical optimization routine, such as gradient descent, which we consider in Chapter 4. Usually, we optimize the natural logarithm of the right-hand side expression in eq. 5, because the logarithm of a product becomes the sum of logarithms, and it's easier for the machine to work with a sum than with a product. (Multiplication of many numbers can give either a very small result or a very large one. It often results in the problem of numerical overflow, when the machine cannot store such extreme numbers in memory.)
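Here is a minimal sketch of this kind of iterative update for a finite set of candidate parameter values. The grid of candidate means, the fixed sigma, the uniform prior, and the data are all illustrative choices, and the update is the simplified variant in which the posterior after each example becomes the prior for the next:

```python
import math

# Bayesian update of Pr(theta) over a finite grid of candidate Gaussian means,
# one observation at a time. Everything here is made up for illustration.
thetas = [0.0, 1.0, 2.0, 3.0]
prior = [0.25, 0.25, 0.25, 0.25]          # uniform initial guess, sums to 1
sigma = 1.0
data = [1.8, 2.2, 1.9]

def gaussian_pdf(x, mu, sigma):
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / math.sqrt(2 * math.pi * sigma ** 2)

for x in data:
    unnormalized = [gaussian_pdf(x, mu, sigma) * p for mu, p in zip(thetas, prior)]
    evidence = sum(unnormalized)           # Σ_θ Pr(X=x|θ) Pr(θ), as in eq. 4
    prior = [u / evidence for u in unnormalized]   # posterior becomes next prior

best = thetas[prior.index(max(prior))]     # MAP estimate on the grid
print(prior, best)                          # mass concentrates near 2.0
```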
2.6 Parameters vs. Hyperparameters

A hyperparameter is a property of a learning algorithm, usually (but not always) having a numerical value. That value influences the way the algorithm works. Hyperparameters aren't learned by the algorithm itself from data. They have to be set by the data analyst before running the algorithm. I show how to do that in Chapter 5.

Parameters are variables that define the model learned by the learning algorithm. Parameters are directly modified by the learning algorithm based on the training data. The goal of learning is to find such values of parameters that make the model optimal in a certain sense.

2.7 Classification vs. Regression

Classification is a problem of automatically assigning a label to an unlabeled example. Spam detection is a famous example of classification. In machine learning, the classification problem is solved by a classification learning algorithm that takes a collection of labeled examples as inputs and produces a model that can take an unlabeled example as input and either directly output a label or output a number that can be used by the analyst to deduce the label. An example of such a number is a probability.

In a classification problem, a label is a member of a finite set of classes. If the size of the set of classes is two ("sick"/"healthy", "spam"/"not_spam"), we talk about binary classification (also called binomial in some sources). Multiclass classification (also called multinomial) is a classification problem with three or more classes. (There's still one label per example, though.) While some learning algorithms naturally allow for more than two classes, others are by nature binary classification algorithms. There are strategies that allow turning a binary classification learning algorithm into a multiclass one. I talk about one of them in Chapter 7.

Regression is a problem of predicting a real-valued label (often called a target) given an unlabeled example. Estimating house price based on house features, such as area, the number of bedrooms, location and so on, is a famous example of regression. The regression problem is solved by a regression learning algorithm that takes a collection of labeled examples as inputs and produces a model that can take an unlabeled example as input and output a target.

2.8 Model-Based vs. Instance-Based Learning

Most supervised learning algorithms are model-based. We have already seen one such algorithm: SVM. Model-based learning algorithms use the training data to create a model that has parameters learned from the training data. In SVM, the two parameters we saw were $\mathbf{w}^*$ and $b^*$. After the model is built, the training data can be discarded.

Instance-based learning algorithms use the whole dataset as the model. One instance-based algorithm frequently used in practice is k-Nearest Neighbors (kNN). In classification, to predict a label for an input example, the kNN algorithm looks at the close neighborhood of the input example in the space of feature vectors and outputs the label that it saw most often in this close neighborhood.
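A minimal sketch of kNN classification as described; the four 2D examples and the choice k = 3 are made up:

```python
import math
from collections import Counter

# Instance-based learning: the "model" is just the stored dataset.
dataset = [([1.0, 1.0], 0), ([1.2, 0.8], 0), ([4.0, 4.2], 1), ([3.8, 4.0], 1)]

def knn_predict(x, dataset, k=3):
    """Return the majority label among the k nearest training examples."""
    distances = sorted((math.dist(x, xi), yi) for xi, yi in dataset)
    labels = [yi for _, yi in distances[:k]]
    return Counter(labels).most_common(1)[0][0]

print(knn_predict([1.1, 0.9], dataset))   # 0: nearest neighbors are class 0
```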
2.9 Shallow vs. Deep Learning

A shallow learning algorithm learns the parameters of the model directly from the features of the training examples. Most supervised learning algorithms are shallow. The notable exceptions are neural network learning algorithms, specifically those that build neural networks with more than one layer between input and output. Such neural networks are called deep neural networks. In deep neural network learning (or, simply, deep learning), contrary to shallow learning, most model parameters are learned not directly from the features of the training examples, but from the outputs of the preceding layers. Don't worry if you don't understand what that means right now. We look at neural networks more closely in Chapter 6.

3 Fundamental Algorithms

In this chapter, I describe five algorithms which are not just the best known but also either very effective on their own or used as building blocks for the most effective learning algorithms out there.

3.1 Linear Regression

Linear regression is a popular regression learning algorithm that learns a model which is a linear combination of features of the input example.

3.1.1 Problem Statement

We have a collection of labeled examples $\{(\mathbf{x}_i, y_i)\}_{i=1}^{N}$, where $N$ is the size of the collection, $\mathbf{x}_i$ is the $D$-dimensional feature vector of example $i = 1, \dots, N$, $y_i$ is a real-valued target, and every feature $x_i^{(j)}$, $j = 1, \dots, D$, is also a real number. (To say that $y_i$ is real-valued, we write $y_i \in \mathbb{R}$, where $\mathbb{R}$ denotes the set of all real numbers, an infinite set of numbers from minus infinity to plus infinity.)

We want to build a model $f_{\mathbf{w},b}(\mathbf{x})$ as a linear combination of features of example $\mathbf{x}$:

$$f_{\mathbf{w},b}(\mathbf{x}) = \mathbf{w}\mathbf{x} + b, \quad (1)$$

where $\mathbf{w}$ is a $D$-dimensional vector of parameters and $b$ is a real number. The notation $f_{\mathbf{w},b}$ means that the model $f$ is parametrized by two values: $\mathbf{w}$ and $b$.

We will use the model to predict the unknown $y$ for a given $\mathbf{x}$ like this: $y \leftarrow f_{\mathbf{w},b}(\mathbf{x})$. Two models parametrized by two different pairs $(\mathbf{w}, b)$ will likely produce two different predictions when applied to the same example. We want to find the optimal values $(\mathbf{w}^*, b^*)$. Obviously, the optimal values of parameters define the model that makes the most accurate predictions.

You could have noticed that the form of our linear model in eq. 1 is very similar to the form of the SVM model. The only difference is the missing sign operator. The two models are indeed similar. However, the hyperplane in the SVM plays the role of the decision boundary: it's used to separate two groups of examples from one another. As such, it has to be as far from each group as possible. On the other hand, the hyperplane in linear regression is chosen to be as close to all training examples as possible.

Figure 1: Linear Regression for one-dimensional examples.

You can see why this latter requirement is essential by looking at the illustration in Figure 1. It displays the regression line (in red) for one-dimensional examples (blue dots). We can use this line to predict the value of the target $y_{new}$ for a new unlabeled input example $x_{new}$. If our examples are $D$-dimensional feature vectors (for $D > 1$), the only difference from the one-dimensional case is that the regression model is not a line but a plane (for two dimensions) or a hyperplane (for $D > 2$). Now you see why it's essential to have the requirement that the regression hyperplane lie as close to the training examples as possible: if the red line in Figure 1 were far from the blue dots, the prediction $y_{new}$ would have fewer chances to be correct.

3.1.2 Solution

To satisfy this latter requirement, the optimization procedure which we use to find the optimal values for $\mathbf{w}^*$ and $b^*$ tries to minimize the following expression:

$$\frac{1}{N}\sum_{i=1,\dots,N} (f_{\mathbf{w},b}(\mathbf{x}_i) - y_i)^2. \quad (2)$$

In mathematics, the expression we minimize or maximize is called an objective function, or, simply, an objective. The expression $(f_{\mathbf{w},b}(\mathbf{x}_i) - y_i)^2$ in the above objective is called the loss function. It's a measure of penalty for mispredicting example $i$. This particular choice of loss function is called squared error loss. All model-based learning algorithms have a loss function, and what we do to find the best model is try to minimize the objective known as the cost function. In linear regression, the cost function is given by the average loss, also called the empirical risk. The average loss, or empirical risk, for a model is the average of all penalties obtained by applying the model to the training data.
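To make eq. 2 concrete, here is a small sketch that computes the empirical risk of a one-dimensional linear model on three made-up examples; the parameters are fixed by hand, not optimized:

```python
# Empirical risk (average squared error loss) of f(x) = w*x + b
# on a tiny, made-up training set of (x_i, y_i) pairs.
w, b = 2.0, 1.0
data = [(1.0, 3.1), (2.0, 4.9), (3.0, 7.2)]

def f(x):
    return w * x + b

mse = sum((f(x) - y) ** 2 for x, y in data) / len(data)
print(mse)   # 0.02: the average penalty over the training data
```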
Why is the loss in linear regression a quadratic function? Why couldn't we take the absolute value of the difference between the true target $y_i$ and the predicted value $f(\mathbf{x}_i)$ and use that as a penalty? We could. Moreover, we could also use a cube instead of a square.

Now you probably start realizing how many seemingly arbitrary decisions are made when we design a machine learning algorithm: we decided to use the linear combination of features to predict the target. However, we could use a square or some other polynomial to combine the values of features. We could also use some other loss function that makes sense: the absolute difference between $f(\mathbf{x}_i)$ and $y_i$ makes sense, the cube of the difference too; the binary loss (1 when $f(\mathbf{x}_i)$ and $y_i$ are different and 0 when they are the same) also makes sense, right? If we made different decisions about the form of the model, the form of the loss function, and the choice of the algorithm that minimizes the average loss to find the best values of parameters, we would end up inventing a different machine learning algorithm. Sounds easy, doesn't it? However, do not rush to invent a new learning algorithm. The fact that it's different doesn't mean that it will work better in practice.

People invent new learning algorithms for one of two main reasons:

1. The new algorithm solves a specific practical problem better than the existing algorithms.
2. The new algorithm has better theoretical guarantees on the quality of the model it produces.

One practical justification of the choice of the linear form for the model is that it's simple. Why use a complex model when you can use a simple one? Another consideration is that linear models rarely overfit. Overfitting is the property of a model such that the model predicts very well the labels of the examples used during training but frequently makes errors when applied to examples that weren't seen by the learning algorithm during training.

Figure 2: Overfitting.

An example of overfitting in regression is shown in Figure 2. The data used to build the red regression line is the same as in Figure 1. The difference is that this time, it is polynomial regression with a polynomial of degree 10. The regression line predicts the targets of almost all training examples almost perfectly, but will likely make significant errors on new data, as you can see in Figure 1 for $x_{new}$. We talk more about overfitting and how to avoid it in Chapter 5.

Now you know why linear regression can be useful: it doesn't overfit much. But what about the squared loss? Why did we decide that it should be squared? In 1805, the French mathematician Adrien-Marie Legendre, who first published the sum of squares method for gauging the quality of a model, stated that squaring the error before summing is convenient. Why did he say that? The absolute value is not convenient, because it doesn't have a continuous derivative, which makes the function not smooth. Functions that are not smooth create unnecessary difficulties when employing linear algebra to find closed form solutions to optimization problems. Closed form solutions to finding an optimum of a function are simple algebraic expressions and are often preferable to using complex numerical optimization methods, such as gradient descent (used, among others, to train neural networks).

Intuitively, squared penalties are also advantageous because they exaggerate the difference between the true target and the predicted one according to the value of this difference. We might also use the powers 3 or 4, but their derivatives are more complicated to work with.
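To see the effect from Figure 2 numerically, here is a sketch (assuming NumPy is available) that fits polynomials of degree 1 and 10 to the same noisy, roughly linear synthetic data and compares training errors:

```python
import numpy as np

# On made-up noisy linear data, a degree-10 polynomial drives the training
# error far below that of a straight line: the warning sign of overfitting.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 12)
y = 1.5 * x + 2.0 + rng.normal(0.0, 0.3, size=x.shape)

for degree in (1, 10):
    coeffs = np.polyfit(x, y, degree)
    train_mse = np.mean((np.polyval(coeffs, x) - y) ** 2)
    print(degree, train_mse)   # the degree-10 fit is nearly perfect on training data
```

A near-zero training error from the flexible model says nothing about how it behaves between and beyond the training points, which is exactly the problem the text describes.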
Finally, why do we care about the derivative of the average loss? If we can calculate the gradient of the function in eq. 2, we can then set this gradient to zero and find the solution to a system of equations that gives us the optimal values $\mathbf{w}^*$ and $b^*$. (To find the minimum or the maximum of a function, we set the gradient to zero because the value of the gradient at the extrema of a function is always zero. In 2D, the gradient at an extremum is a horizontal line.)

3.2 Logistic Regression

The first thing to say is that logistic regression is not a regression, but a classification learning algorithm. The name comes from statistics and is due to the fact that the mathematical formulation of logistic regression is similar to that of linear regression. I explain logistic regression for the case of binary classification. However, it can naturally be extended to multiclass classification.

3.2.1 Problem Statement

In logistic regression, we still want to model $y_i$ as a linear function of $\mathbf{x}_i$; however, with a binary $y_i$ this is not straightforward. The linear combination of features such as $\mathbf{w}\mathbf{x}_i + b$ is a function that spans from minus infinity to plus infinity, while $y_i$ has only two possible values.

At the time when the absence of computers required scientists to perform manual calculations, they were eager to find a linear classification model. They figured out that if we define a negative label as 0 and the positive label as 1, we would just need to find a simple continuous function whose codomain is (0, 1). In such a case, if the value returned by the model for input $\mathbf{x}$ is closer to 0, then we assign a negative label to $\mathbf{x}$; otherwise, the example is labeled as positive. One function that has such a property is the standard logistic function (also known as the sigmoid function):

$$f(x) = \frac{1}{1 + e^{-x}},$$

where $e$ is the base of the natural logarithm (also called Euler's number; $e^x$ is also known as the exp(x) function in programming languages). Its graph is depicted in Figure 3.

Figure 3: Standard logistic function.

The logistic regression model looks like this:

$$f_{\mathbf{w},b}(\mathbf{x}) \overset{\text{def}}{=} \frac{1}{1 + e^{-(\mathbf{w}\mathbf{x} + b)}}. \quad (3)$$

You can see the familiar term $\mathbf{w}\mathbf{x} + b$ from linear regression.

By looking at the graph of the standard logistic function, we can see how well it fits our classification purpose: if we optimize the values of $\mathbf{w}$ and $b$ appropriately, we could interpret the output of $f(\mathbf{x})$ as the probability of $y_i$ being positive. For example, if it's higher than or equal to the threshold 0.5, we would say that the class of $\mathbf{x}$ is positive; otherwise, it's negative. In practice, the choice of the threshold could be different depending on the problem. We return to this discussion in Chapter 5 when we talk about model performance assessment.

Now, how do we find optimal $\mathbf{w}^*$ and $b^*$? In linear regression, we minimized the empirical risk, which was defined as the average squared error loss, also known as the mean squared error or MSE.
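A small sketch of the model in eq. 3; the parameter values are made up rather than trained:

```python
import math

# Logistic regression prediction: squash wx + b through the sigmoid to get
# a value in (0, 1), interpreted as Pr(y = 1 | x).
def logistic_model(x, w, b):
    wx = sum(w_j * x_j for w_j, x_j in zip(w, x))
    return 1.0 / (1.0 + math.exp(-(wx + b)))

w, b = [0.8, -0.3], -0.2          # hypothetical parameters
p = logistic_model([1.0, 2.0], w, b)
label = 1 if p >= 0.5 else 0      # 0.5 is the default threshold from the text
print(round(p, 3), label)
```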
3.2.2 Solution

In logistic regression, on the other hand, we maximize the likelihood of our training set according to the model. In statistics, the likelihood function defines how likely the observation (an example) is according to our model. For instance, let's have a labeled example $(\mathbf{x}_i, y_i)$ in our training data. Assume also that we found (guessed) some specific values $\hat{\mathbf{w}}$ and $\hat{b}$ of our parameters. If we now apply our model $f_{\hat{\mathbf{w}},\hat{b}}$ to $\mathbf{x}_i$ using eq. 3, we will get some value $0 < p < 1$ as output. If $y_i$ is the positive class, the likelihood of $y_i$ being the positive class, according to our model, is given by $p$. Similarly, if $y_i$ is the negative class, the likelihood of it being the negative class is given by $1 - p$.

The optimization criterion in logistic regression is called maximum likelihood. Instead of minimizing the average loss, like in linear regression, we now maximize the likelihood of the training data according to our model:

$$L_{\mathbf{w},b} \overset{\text{def}}{=} \prod_{i=1,\dots,N} f_{\mathbf{w},b}(\mathbf{x}_i)^{y_i} \left(1 - f_{\mathbf{w},b}(\mathbf{x}_i)\right)^{(1-y_i)}. \quad (4)$$

The expression $f_{\mathbf{w},b}(\mathbf{x})^{y_i} (1 - f_{\mathbf{w},b}(\mathbf{x}))^{(1-y_i)}$ may look scary, but it's just a fancy mathematical way of saying: "$f_{\mathbf{w},b}(\mathbf{x})$ when $y_i = 1$ and $(1 - f_{\mathbf{w},b}(\mathbf{x}))$ otherwise". Indeed, if $y_i = 1$, then $(1 - f_{\mathbf{w},b}(\mathbf{x}))^{(1-y_i)}$ equals 1 because $(1 - y_i) = 0$, and we know that anything to the power 0 equals 1. On the other hand, if $y_i = 0$, then $f_{\mathbf{w},b}(\mathbf{x})^{y_i}$ equals 1 for the same reason.

You may have noticed that we used the product operator $\prod$ in the objective function instead of the sum operator $\sum$ which was used in linear regression. It's because the likelihood of observing $N$ labels for $N$ examples is the product of the likelihoods of each observation (assuming that all observations are independent of one another, which is the case). You can draw a parallel with the multiplication of probabilities of outcomes in a series of independent experiments in probability theory.

Because of the exp function used in the model, in practice it's more convenient to maximize the log-likelihood instead of the likelihood. The log-likelihood is defined as follows:

$$\mathrm{LogL}_{\mathbf{w},b} \overset{\text{def}}{=} \ln(L_{\mathbf{w},b}) = \sum_{i=1}^{N} \left[ y_i \ln f_{\mathbf{w},b}(\mathbf{x}_i) + (1 - y_i) \ln\left(1 - f_{\mathbf{w},b}(\mathbf{x}_i)\right) \right].$$

Because $\ln$ is a strictly increasing function, maximizing this function is the same as maximizing its argument, and the solution to this new optimization problem is the same as the solution to the original problem.

Contrary to linear regression, there's no closed form solution to the above optimization problem. A typical numerical optimization procedure used in such cases is gradient descent. We talk about it in the next chapter.
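To make the log-likelihood concrete, here is a sketch that evaluates it for four made-up predicted probabilities and labels:

```python
import math

# Log-likelihood of binary labels under fixed, made-up model outputs p_i.
predictions = [0.9, 0.2, 0.7, 0.1]   # f_{w,b}(x_i) for four examples
labels = [1, 0, 1, 0]

log_likelihood = sum(
    y * math.log(p) + (1 - y) * math.log(1 - p)
    for p, y in zip(predictions, labels)
)
print(log_likelihood)   # closer to 0 means the model fits the labels better
```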
3.3 Decision Tree Learning

A decision tree is an acyclic graph that can be used to make decisions. In each branching node of the graph, a specific feature $j$ of the feature vector is examined. If the value of the feature is below a specific threshold, then the left branch is followed; otherwise, the right branch is followed. As a leaf node is reached, the decision is made about the class to which the example belongs. As the title of the section suggests, a decision tree can be learned from data.

3.3.1 Problem Statement

Like previously, we have a collection of labeled examples; labels belong to the set {0, 1}. We want to build a decision tree that would allow us to predict the class given a feature vector.

3.3.2 Solution

There are various formulations of the decision tree learning algorithm. In this book, we consider just one, called ID3. The optimization criterion, in this case, is the average log-likelihood:

$$\frac{1}{N}\sum_{i=1}^{N} \left[ y_i \ln f_{ID3}(\mathbf{x}_i) + (1 - y_i) \ln\left(1 - f_{ID3}(\mathbf{x}_i)\right) \right], \quad (5)$$

where $f_{ID3}$ is a decision tree.

By now, it looks very similar to logistic regression. However, contrary to the logistic regression learning algorithm, which builds a parametric model $f_{\mathbf{w}^*,b^*}$ by finding an optimal solution to the optimization criterion, the ID3 algorithm optimizes it approximately by constructing a nonparametric model $f_{ID3}(\mathbf{x}) \overset{\text{def}}{=} \Pr(y = 1 \mid \mathbf{x})$.

Figure 4: An illustration of a decision tree building algorithm. The set S contains 12 labeled examples. (a) In the beginning, the decision tree only contains the start node; it makes the same prediction for any input. (b) The decision tree after the first split; it tests whether feature 3 is less than 18.3 and, depending on the result, the prediction is made in one of the two leaf nodes.

The ID3 learning algorithm works as follows. Let $S$ denote a set of labeled examples. In the beginning, the decision tree only has a start node that contains all examples: $S \overset{\text{def}}{=} \{(\mathbf{x}_i, y_i)\}_{i=1}^{N}$. Start with a constant model $f_{ID3}^{S}$ defined as

$$f_{ID3}^{S} \overset{\text{def}}{=} \frac{1}{|S|}\sum_{(\mathbf{x},y) \in S} y. \quad (6)$$

The prediction given by the above model, $f_{ID3}^{S}(\mathbf{x})$, would be the same for any input $\mathbf{x}$. The corresponding decision tree built using a toy dataset of twelve labeled examples is shown in Figure 4a.

Then we search through all features $j = 1, \dots, D$ and all thresholds $t$, and split the set $S$ into two subsets: $S_- \overset{\text{def}}{=} \{(\mathbf{x}, y) \mid (\mathbf{x}, y) \in S, x^{(j)} < t\}$ and $S_+ \overset{\text{def}}{=} \{(\mathbf{x}, y) \mid (\mathbf{x}, y) \in S, x^{(j)} \ge t\}$. The two new subsets would go to two new leaf nodes, and we evaluate, for all possible pairs $(j, t)$, how good the split with pieces $S_-$ and $S_+$ is. Finally, we pick the best such values $(j, t)$, split $S$ into $S_+$ and $S_-$, form two new leaf nodes, and continue recursively on $S_+$ and $S_-$ (or quit if no split produces a model that's sufficiently better than the current one). A decision tree after one split is illustrated in Figure 4b.

Now you should wonder what the words "evaluate how good the split is" mean. In ID3, the goodness of a split is estimated by using the criterion called entropy. Entropy is a measure of uncertainty about a random variable. It reaches its maximum when all values of the random variable are equiprobable. Entropy reaches its minimum when the random variable can have only one value. The entropy of a set of examples $S$ is given by

$$H(S) \overset{\text{def}}{=} -f_{ID3}^{S} \ln f_{ID3}^{S} - \left(1 - f_{ID3}^{S}\right) \ln\left(1 - f_{ID3}^{S}\right).$$

When we split a set of examples by a certain feature $j$ and a threshold $t$, the entropy of a split, $H(S_-, S_+)$, is simply a weighted sum of two entropies:

$$H(S_-, S_+) \overset{\text{def}}{=} \frac{|S_-|}{|S|} H(S_-) + \frac{|S_+|}{|S|} H(S_+). \quad (7)$$

So, in ID3, at each step, at each leaf node, we find a split that minimizes the entropy given by eq. 7, or we stop at this leaf node. The algorithm stops at a leaf node in any of the below situations:

- All examples in the leaf node are classified correctly by the one-piece model (eq. 6).
- We cannot find an attribute to split upon.
- The split reduces the entropy by less than some $\epsilon$ (the value for which has to be found experimentally; in Chapter 5, I show how to do that in the section on hyperparameter tuning).
- The tree reaches some maximum depth $d$ (which also has to be found experimentally).

Because in ID3 the decision to split the dataset on each iteration is local (it doesn't depend on future splits), the algorithm doesn't guarantee an optimal solution. The model can be improved by using techniques like backtracking during the search for the optimal decision tree, at the cost of possibly taking longer to build a model.
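Here is a short sketch of the entropy computations behind eq. 7; the toy labels are made up:

```python
import math

# Entropy of a set of binary labels (f_ID3 is the fraction of 1-labels) and
# the weighted entropy of a split, as in eq. 7.
def entropy(labels):
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    if p in (0.0, 1.0):            # a pure node has zero entropy
        return 0.0
    return -p * math.log(p) - (1 - p) * math.log(1 - p)

def split_entropy(left, right):
    n = len(left) + len(right)
    return len(left) / n * entropy(left) + len(right) / n * entropy(right)

labels_left = [0, 0, 0, 1]
labels_right = [1, 1, 1]
print(entropy(labels_left + labels_right))          # before the split
print(split_entropy(labels_left, labels_right))     # after: lower is better
```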
The model can be improved by using techniques like backtracking during the search for the optimal decision tree, at the cost of possibly taking longer to build a model.

The most widely used formulation of a decision tree learning algorithm is called C4.5. It has several additional features as compared to ID3: it accepts both continuous and discrete features; it handles incomplete examples; and it solves the overfitting problem by using a bottom-up technique known as "pruning". Pruning consists of going back through the tree once it's been created and removing branches that don't contribute significantly enough to the error reduction, replacing them with leaf nodes.

The entropy-based split criterion intuitively makes sense: entropy reaches its minimum of 0 when all examples in S have the same label; on the other hand, the entropy is at its maximum of 1 when exactly one-half of the examples in S are labeled with 1, making such a leaf useless for classification. The only remaining question is how this algorithm approximately maximizes the average log-likelihood criterion. I leave it for further reading.

3.4 Support Vector Machine

I already presented SVM in the introduction, so this section only fills a couple of blanks. Two critical questions need to be answered:

1. What if there's noise in the data and no hyperplane can perfectly separate positive examples from negative ones?
2. What if the data cannot be separated using a plane, but could be separated by a higher-order polynomial?

You can see both situations depicted in Figure 5. In the left case, the data could be separated by a straight line if not for the noise (outliers or examples with wrong labels). In the right case, the decision boundary is a circle and not a straight line.

Figure 5: Linearly non-separable cases. Left: the presence of noise. Right: inherent nonlinearity.

Remember that in SVM, we want to satisfy the following constraints:

$$\mathbf{w}\mathbf{x}_i - b \ge +1 \;\text{ if } y_i = +1, \qquad \mathbf{w}\mathbf{x}_i - b \le -1 \;\text{ if } y_i = -1. \quad (8)$$

We also want to minimize $\|\mathbf{w}\|$ so that the hyperplane is equally distant from the closest examples of each class. Minimizing $\|\mathbf{w}\|$ is equivalent to minimizing $\frac{1}{2}\|\mathbf{w}\|^2$, and the use of this term makes it possible to perform quadratic programming optimization later on. The optimization problem for SVM, therefore, looks like this:

$$\min \frac{1}{2}\|\mathbf{w}\|^2, \text{ such that } y_i(\mathbf{x}_i\mathbf{w} - b) - 1 \ge 0, \; i = 1,\ldots,N. \quad (9)$$

3.4.1 Dealing with Noise

To extend SVM to cases in which the data is not linearly separable, we introduce the hinge loss function: $\max\left(0, 1 - y_i(\mathbf{w}\mathbf{x}_i - b)\right)$.

The hinge loss function is zero if the constraints in (8) are satisfied; in other words, if $\mathbf{w}\mathbf{x}_i$ lies on the correct side of the decision boundary. For data on the wrong side of the decision boundary, the function's value is proportional to the distance from the decision boundary. We then wish to minimize the following cost function,

$$C\|\mathbf{w}\|^2 + \frac{1}{N}\sum_{i=1}^{N}\max\left(0, 1 - y_i(\mathbf{w}\mathbf{x}_i - b)\right),$$

where the hyperparameter C determines the tradeoff between increasing the size of the decision boundary and ensuring that each $\mathbf{x}_i$ lies on the correct side of the decision boundary. The value of C is usually chosen experimentally, just like ID3's hyperparameters ε and d. SVMs that optimize hinge loss are called soft-margin SVMs, while the original formulation is referred to as a hard-margin SVM.
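For illustration, the soft-margin objective can be evaluated for given parameters as in the following sketch (my own code, not the book's); labels are assumed to be in {−1, +1}, and vectors are plain Python lists:

def svm_cost(w, b, X, y, C):
    # C * ||w||^2 plus the average hinge loss over the training set
    margin_term = C * sum(wj * wj for wj in w)
    hinge_total = 0.0
    for xi, yi in zip(X, y):
        score = sum(wj * xj for wj, xj in zip(w, xi)) - b
        hinge_total += max(0.0, 1.0 - yi * score)
    return margin_term + hinge_total / len(y)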
As you can see, for sufficiently high values of C, the second term in the cost function will become negligible, so the SVM algorithm will try to find the widest margin by completely ignoring misclassification. As we decrease the value of C, making classification errors becomes more costly, so the SVM algorithm tries to make fewer mistakes by sacrificing the margin size. As we have already discussed, a larger margin is better for generalization. Therefore, C regulates the tradeoff between classifying the training data well (minimizing empirical risk) and classifying future examples well (generalization).

3.4.2 Dealing with Inherent Non-Linearity

SVM can be adapted to work with datasets that cannot be separated by a hyperplane in their original space. Indeed, if we manage to transform the original space into a space of higher dimensionality, we could hope that the examples will become linearly separable in this transformed space. In SVMs, using a function to implicitly transform the original space into a higher-dimensional space during the cost function optimization is called the kernel trick.

Figure 6: The data from Figure 5 (right) becomes linearly separable after a transformation into a three-dimensional space.

The effect of applying the kernel trick is illustrated in Figure 6. As you can see, it's possible to transform two-dimensional non-linearly-separable data into linearly-separable three-dimensional data using a specific mapping $\phi: \mathbf{x} \mapsto \phi(\mathbf{x})$, where $\phi(\mathbf{x})$ is a vector of higher dimensionality than $\mathbf{x}$. For the example of the 2D data in Figure 5 (right), the mapping $\phi$ that projects a 2D example $\mathbf{x} = [q, p]$ into a 3D space (Figure 6) would look like this:

$$\phi([q, p]) \stackrel{\text{def}}{=} \left(q^2, \sqrt{2}qp, p^2\right).$$

You see now that the data becomes linearly separable in the transformed space.

However, we don't know a priori which mapping $\phi$ would work for our data. If we first transformed all our input examples using some mapping into very high-dimensional vectors, then applied SVM to this data, and tried all possible mapping functions, the computation could become very inefficient, and we would never solve our classification problem.

Fortunately, scientists figured out how to use kernel functions (or, simply, kernels) to efficiently work in higher-dimensional spaces without doing this transformation explicitly. To understand how kernels work, we have to see first how the optimization algorithm for SVM finds the optimal values for $\mathbf{w}$ and $b$.

The method traditionally used to solve the optimization problem in eq. 9 is the method of Lagrange multipliers. Instead of solving the original problem from eq. 9, it is convenient to solve an equivalent problem formulated like this:

$$\max_{\alpha_1 \ldots \alpha_N} \sum_{i=1}^{N}\alpha_i - \frac{1}{2}\sum_{i=1}^{N}\sum_{k=1}^{N} y_i\alpha_i(\mathbf{x}_i\mathbf{x}_k)y_k\alpha_k \quad \text{subject to} \quad \sum_{i=1}^{N}\alpha_i y_i = 0 \;\text{ and }\; \alpha_i \ge 0,\; i = 1,\ldots,N,$$

where $\alpha_i$ are called Lagrange multipliers. When formulated like this, the optimization problem becomes a convex quadratic optimization problem, efficiently solvable by quadratic programming algorithms.

Now, you could have noticed that in the above formulation there is a term $\mathbf{x}_i\mathbf{x}_k$, and this is the only place where the feature vectors are used. If we want to transform our vector space into a higher-dimensional space, we need to transform $\mathbf{x}_i$ into $\phi(\mathbf{x}_i)$ and $\mathbf{x}_k$ into $\phi(\mathbf{x}_k)$ and then multiply $\phi(\mathbf{x}_i)$ and $\phi(\mathbf{x}_k)$.
Doing so would be very costly. On the other hand, we are only interested in the result of the dot-product $\mathbf{x}_i\mathbf{x}_k$, which, as we know, is a real number. We don't care how this number was obtained as long as it's correct. By using the kernel trick, we can get rid of a costly transformation of the original feature vectors into higher-dimensional vectors and avoid computing their dot-product. We replace that by a simple operation on the original feature vectors that gives the same result. For example, instead of transforming $(q_1, p_1)$ into $\left(q_1^2, \sqrt{2}q_1p_1, p_1^2\right)$ and $(q_2, p_2)$ into $\left(q_2^2, \sqrt{2}q_2p_2, p_2^2\right)$ and then computing the dot-product of $\left(q_1^2, \sqrt{2}q_1p_1, p_1^2\right)$ and $\left(q_2^2, \sqrt{2}q_2p_2, p_2^2\right)$ to obtain $\left(q_1^2q_2^2 + 2q_1q_2p_1p_2 + p_1^2p_2^2\right)$, we could find the dot-product between $(q_1, p_1)$ and $(q_2, p_2)$ to get $(q_1q_2 + p_1p_2)$ and then square it to get exactly the same result $\left(q_1^2q_2^2 + 2q_1q_2p_1p_2 + p_1^2p_2^2\right)$.

That was an example of the kernel trick, and we used the quadratic kernel $k(\mathbf{x}_i, \mathbf{x}_k) \stackrel{\text{def}}{=} (\mathbf{x}_i\mathbf{x}_k)^2$. Multiple kernel functions exist, the most widely used of which is the RBF kernel:

$$k(\mathbf{x}, \mathbf{x}') = \exp\left(-\frac{\|\mathbf{x} - \mathbf{x}'\|^2}{2\sigma^2}\right),$$

where $\|\mathbf{x} - \mathbf{x}'\|^2$ is the squared Euclidean distance between two feature vectors. The Euclidean distance is given by the following equation:

$$d(\mathbf{x}_i, \mathbf{x}_k) \stackrel{\text{def}}{=} \sqrt{\left(x_i^{(1)} - x_k^{(1)}\right)^2 + \left(x_i^{(2)} - x_k^{(2)}\right)^2 + \cdots + \left(x_i^{(D)} - x_k^{(D)}\right)^2} = \sqrt{\sum_{j=1}^{D}\left(x_i^{(j)} - x_k^{(j)}\right)^2}.$$

It can be shown that the feature space of the RBF (for "radial basis function") kernel has an infinite number of dimensions. By varying the hyperparameter σ, the data analyst can choose between getting a smooth or a curvy decision boundary in the original space.

3.5 k-Nearest Neighbors

k-Nearest Neighbors (kNN) is a non-parametric learning algorithm. Contrary to other learning algorithms that allow discarding the training data after the model is built, kNN keeps all training examples in memory. Once a new, previously unseen example $\mathbf{x}$ comes in, the kNN algorithm finds $k$ training examples closest to $\mathbf{x}$ and returns the majority label, in the case of classification, or the average label, in the case of regression.

The closeness of two examples is given by a distance function. For example, the Euclidean distance seen above is frequently used in practice. Another popular choice of distance function is the negative cosine similarity. Cosine similarity, defined as,

$$s(\mathbf{x}_i, \mathbf{x}_k) \stackrel{\text{def}}{=} \cos\left(\angle(\mathbf{x}_i, \mathbf{x}_k)\right) = \frac{\sum_{j=1}^{D} x_i^{(j)} x_k^{(j)}}{\sqrt{\sum_{j=1}^{D}\left(x_i^{(j)}\right)^2}\sqrt{\sum_{j=1}^{D}\left(x_k^{(j)}\right)^2}},$$

is a measure of the similarity of the directions of two vectors. If the angle between two vectors is 0 degrees, then the two vectors point in the same direction and the cosine similarity is equal to 1. If the vectors are orthogonal, the cosine similarity is 0. For vectors pointing in opposite directions, the cosine similarity is −1. If we want to use cosine similarity as a distance metric, we need to multiply it by −1. Other popular distance metrics include Chebychev distance, Mahalanobis distance, and Hamming distance. The choice of the distance metric, as well as the value for k, are choices the analyst makes before running the algorithm. So these are hyperparameters. The distance metric could also be learned from data (as opposed to guessing it). We talk about that in Chapter 10.
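As an illustration, here is a minimal sketch of kNN classification (my own code, not the book's), using the Euclidean distance defined above:

import math
from collections import Counter

def euclidean(a, b):
    return math.sqrt(sum((aj - bj) ** 2 for aj, bj in zip(a, b)))

def knn_predict(train, x, k):
    # train is a list of (feature_vector, label) pairs
    nearest = sorted(train, key=lambda ex: euclidean(ex[0], x))[:k]
    labels = [y for (_, y) in nearest]
    return Counter(labels).most_common(1)[0][0]  # majority label

For regression, the last line would return sum(labels) / k instead of the majority label.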
4 Anatomy of a Learning Algorithm

4.1 Building Blocks of a Learning Algorithm

You may have noticed by reading the previous chapter that each learning algorithm we saw consisted of three parts: 1) a loss function; 2) an optimization criterion based on the loss function (a cost function, for example); and 3) an optimization routine leveraging training data to find a solution to the optimization criterion. These are the building blocks of any learning algorithm.

You saw in the previous chapter that some algorithms were designed to explicitly optimize a specific criterion (both linear and logistic regressions, SVM). Some others, including decision tree learning and kNN, optimize the criterion implicitly. Decision tree learning and kNN are among the oldest machine learning algorithms and were invented experimentally based on intuition, without a specific global optimization criterion in mind; as has often happened in the history of science, the optimization criteria were developed later to explain why those algorithms work.

By reading the modern literature on machine learning, you often encounter references to gradient descent or stochastic gradient descent. These are the two most frequently used optimization algorithms in cases where the optimization criterion is differentiable.

Gradient descent is an iterative optimization algorithm for finding the minimum of a function. To find a local minimum of a function using gradient descent, one starts at some random point and takes steps proportional to the negative of the gradient (or approximate gradient) of the function at the current point.
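Here is that idea in code on a one-parameter toy function (a sketch of my own; the function, starting point, learning rate, and step count are arbitrary illustrative choices):

def gradient_descent_1d(df, w0, alpha, steps):
    # df: the derivative of the function we minimize; w0: the starting point
    w = w0
    for _ in range(steps):
        w = w - alpha * df(w)  # step against the gradient
    return w

# minimize f(w) = (w - 3)^2, whose derivative is 2 * (w - 3)
w_min = gradient_descent_1d(lambda w: 2 * (w - 3), w0=0.0, alpha=0.1, steps=100)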
Gradient descent can be used to find optimal parameters for linear and logistic regression, SVM, and also the neural networks we consider later. For many models, such as logistic regression or SVM, the optimization criterion is convex. Convex functions have only one minimum, which is global. Optimization criteria for neural networks are not convex, but in practice even finding a local minimum suffices. Let's see how gradient descent works.

4.2 Gradient Descent

In this section, I demonstrate how gradient descent finds the solution to a linear regression problem. (As you know, linear regression has a closed-form solution, which means gradient descent is not needed to solve this specific type of problem. However, for illustration purposes, linear regression is a perfect problem for explaining gradient descent.) I illustrate my description with Python code, as well as with plots that show how the solution improves after some iterations of gradient descent. I use a dataset with only one feature. However, the optimization criterion will have two parameters: w and b. The extension to multi-dimensional training data is straightforward: you have variables $w^{(1)}$, $w^{(2)}$, and $b$ for two-dimensional data; $w^{(1)}$, $w^{(2)}$, $w^{(3)}$, and $b$ for three-dimensional data; and so on.

Figure 1: The original data. The Y-axis corresponds to the sales in units (the quantity we want to predict); the X-axis corresponds to our feature: the spendings on radio ads in M$.

To give a practical example, I use a real dataset (it can be found on the book's wiki) with the following columns: the Spendings of various companies on radio advertising each year, and their annual Sales in terms of units sold. We want to build a regression model that we can use to predict units sold based on how much a company spends on radio advertising. Each row in the dataset represents one specific company:

Company    Spendings, M$    Sales, Units
1          37.8             22.1
2          39.3             10.4
3          45.9             9.3
4          41.3             18.5
...        ...              ...

We have data for 200 companies, so we have 200 training examples in the form $(x_i, y_i) = (\text{Spendings}_i, \text{Sales}_i)$. Figure 1 shows all examples on a 2D plot.

Remember that the linear regression model looks like this: $f(x) = wx + b$. We don't know what the optimal values for w and b are, and we want to learn them from data. To do that, we look for such values of w and b that minimize the mean squared error:

$$l \stackrel{\text{def}}{=} \frac{1}{N}\sum_{i=1}^{N}\left(y_i - (wx_i + b)\right)^2.$$

Gradient descent starts with calculating the partial derivative for every parameter:

$$\frac{\partial l}{\partial w} = \frac{1}{N}\sum_{i=1}^{N} -2x_i\left(y_i - (wx_i + b)\right); \qquad \frac{\partial l}{\partial b} = \frac{1}{N}\sum_{i=1}^{N} -2\left(y_i - (wx_i + b)\right). \quad (1)$$

To find the partial derivative of the term $\left(y_i - (wx_i + b)\right)^2$ with respect to w, I applied the chain rule. Here, we have the chain $f = f_2(f_1)$ where $f_1 = y_i - (wx_i + b)$ and $f_2 = f_1^2$. To find the partial derivative of $f$ with respect to $w$, we first find the partial derivative of $f$ with respect to $f_2$, which equals $2\left(y_i - (wx_i + b)\right)$ (from calculus, we know that the derivative $\frac{\partial}{\partial x}x^2 = 2x$), and then we multiply it by the partial derivative of $y_i - (wx_i + b)$ with respect to $w$, which equals $-x_i$. So overall, $\frac{\partial l}{\partial w} = \frac{1}{N}\sum_{i=1}^{N} -2x_i\left(y_i - (wx_i + b)\right)$. In a similar way, the partial derivative of $l$ with respect to $b$, $\frac{\partial l}{\partial b}$, was calculated.

Gradient descent proceeds in epochs. An epoch consists of using the training set entirely to update each parameter. In the beginning, the first epoch, we initialize $w \leftarrow 0$ and $b \leftarrow 0$. (In complex models, such as neural networks, which have thousands of parameters, the initialization of parameters may significantly affect the solution found using gradient descent. There are different initialization methods: at random, with all zeroes, with small values around zero, and others. It is an important choice the data analyst has to make.) With this initialization, the partial derivatives $\frac{\partial l}{\partial w}$ and $\frac{\partial l}{\partial b}$ given by eq. 1 equal, respectively, $-\frac{2}{N}\sum_{i=1}^{N} x_iy_i$ and $-\frac{2}{N}\sum_{i=1}^{N} y_i$. At each epoch, we update w and b using the partial derivatives. The learning rate α controls the size of an update:

$$w \leftarrow w - \alpha\frac{\partial l}{\partial w}; \qquad b \leftarrow b - \alpha\frac{\partial l}{\partial b}. \quad (2)$$

We subtract (as opposed to add) the partial derivatives from the values of the parameters because derivatives are indicators of the growth of a function. If a derivative is positive at some point (a point is given by the current values of the parameters), then the function grows at this point. Because we want to minimize the objective function, when the derivative is positive we know that we need to move our parameter in the opposite direction (to the left on the axis of coordinates). When the derivative is negative (the function is decreasing), we need to move our parameter to the right to decrease the value of the function even more. Subtracting a negative value from a parameter moves it to the right.

At the next epoch, we recalculate the partial derivatives using eq. 1 with the updated values of w and b; we continue the process until convergence. Typically, we need many epochs until we start seeing that the values of w and b don't change much after each epoch; then we stop.
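To see one update concretely, here is a worked step on a made-up two-example dataset (the numbers are mine, chosen purely for illustration): let $N = 2$ with $(x_1, y_1) = (1, 2)$ and $(x_2, y_2) = (2, 3)$, initialize $w = b = 0$, and take $\alpha = 0.1$. Eq. 1 gives

$$\frac{\partial l}{\partial w} = \frac{1}{2}\left[-2 \cdot 1 \cdot (2 - 0) - 2 \cdot 2 \cdot (3 - 0)\right] = -8, \qquad \frac{\partial l}{\partial b} = \frac{1}{2}\left[-2 \cdot (2 - 0) - 2 \cdot (3 - 0)\right] = -5,$$

so eq. 2 updates the parameters to $w \leftarrow 0 - 0.1 \cdot (-8) = 0.8$ and $b \leftarrow 0 - 0.1 \cdot (-5) = 0.5$. The next epoch repeats the same computation starting from $w = 0.8$ and $b = 0.5$.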
It's hard to imagine a machine learning engineer who doesn't use Python. So, if you waited for the right moment to start learning Python, this is that moment. Below, I show how to program gradient descent in Python.

The function that updates the parameters w and b during one epoch is shown below:

def update_w_and_b(spendings, sales, w, b, alpha):
    dl_dw = 0.0
    dl_db = 0.0
    N = len(spendings)

    # accumulate the partial derivatives over the training set (eq. 1)
    for i in range(N):
        dl_dw += -2 * spendings[i] * (sales[i] - (w * spendings[i] + b))
        dl_db += -2 * (sales[i] - (w * spendings[i] + b))

    # update w and b (eq. 2)
    w = w - (1 / float(N)) * dl_dw * alpha
    b = b - (1 / float(N)) * dl_db * alpha

    return w, b

The function that loops over multiple epochs is shown below:

def train(spendings, sales, w, b, alpha, epochs):
    for e in range(epochs):
        w, b = update_w_and_b(spendings, sales, w, b, alpha)

        # log the progress
        if e % 400 == 0:
            print("epoch:", e, "loss: ", avg_loss(spendings, sales, w, b))

    return w, b

The avg_loss function used for logging computes the mean squared error defined above:

def avg_loss(spendings, sales, w, b):
    # the average squared error over the training set
    N = len(spendings)
    total_error = 0.0
    for i in range(N):
        total_error += (sales[i] - (w * spendings[i] + b)) ** 2
    return total_error / float(N)

Figure 2: The evolution of the regression line through gradient descent epochs (snapshots at epochs 0, 400, 800, 1200, 1600, and 3000).
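A hypothetical training run could then look like this (assuming spendings and sales are lists holding the 200 values from the dataset; the learning rate and epoch count are illustrative guesses, not values prescribed by the text):

w, b = train(spendings, sales, w=0.0, b=0.0, alpha=0.001, epochs=15000)

x_new = 23.0            # hypothetical spendings value, in M$
y_pred = w * x_new + b  # predicted sales, in units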
