Analytics for a Better World Exam - 2021 PDF
Summary
This is an example exam for an Analytics for a Better World course, likely at university level. It contains five questions worth 100 points in total, covering linear and integer optimization modelling, the graphical solution of a linear optimization problem, shortest paths with Dijkstra's method, and true/false questions on basic machine learning concepts.
Example Exam Analytics for a Better World
Duration: 2 hours
October 21, 2021
This exam is worth 100 points.

Question 1. 20 points
During the next three months Airco must meet (on time) the following demands for air conditioners: month 1: 300, month 2: 400, month 3: 500. Air conditioners can be produced in either New York or Los Angeles. It takes 1.5 hours of skilled labor to produce an air conditioner in Los Angeles, and 2 hours in New York. It costs $400 to produce an air conditioner in Los Angeles, and $350 in New York. During each month, each city has 420 hours of skilled labor available. It costs $100 to hold an air conditioner in inventory for a month. At the beginning of month 1, Airco has 200 air conditioners in stock. Formulate a linear optimization problem whose solution will tell Airco how to minimize the cost of meeting air conditioner demands for the next three months.

Solution 1.
Let Lt be the number of air conditioners made in LA during month t.
Let Nt be the number of air conditioners made in NY during month t.
Let It be the number of air conditioners held in inventory at the end of month t.
Then the linear optimization model is:

min 400L1 + 400L2 + 400L3 + 100I1 + 100I2 + 100I3 + 350N1 + 350N2 + 350N3
s.t. L1 + N1 − I1 = 100   (month-1 demand of 300 minus the 200 units already in stock)
     L2 + N2 + I1 − I2 = 400
     L3 + N3 + I2 − I3 = 500
     1.5L1 ≤ 420
     1.5L2 ≤ 420
     1.5L3 ≤ 420
     2N1 ≤ 420
     2N2 ≤ 420
     2N3 ≤ 420
     Lt, Nt, It ≥ 0,  t = 1, 2, 3.

(A solver sketch of this model appears at the end of the document.)

Question 2. 20 points
Graphically solve the following linear optimization problem:

min 6x1 + 2x2
s.t. 3x1 + 2x2 ≥ 12
     2x1 + 4x2 ≥ 12
     x2 ≥ 1
     x1, x2 ≥ 0.

Solution 2.
[Figure: plot of the constraint boundaries x1 ≥ 0, x2 ≥ 0, 3x1 + 2x2 ≥ 12, 2x1 + 4x2 ≥ 12 and x2 ≥ 1, together with the objective lines 6x1 + 2x2 = 12, 21 and 26; the point (0, 6) is marked.]

Optimal solution: (x1*, x2*) = (0, 6), with objective value 6·0 + 2·6 = 12.

Question 3. 20 points
Find the shortest path from ‘start’ to ‘finish’ in the following graph by applying Dijkstra’s method:

[Figure: directed graph on the nodes start, A, B, C, D, E and finish, with arc lengths between 1 and 3.]

Solution 3.
[Figure: the same graph annotated with the labels found below — start: 0, A: 3 (from start), C: 5 (from A), B: 6 (from C), E: 6 (from C), finish: 7 (from E) — together with the iteration (0 through 5) at which each label became permanent.]

The start node start receives the temporary label 0.
Lowest temporary label at iteration 0 is 0 at node start, permanent.
We scan the successors ['A'] of node start.
− Node A can be reached from start in 3 = 0 + 3.
− Node A receives temporary label 3 = 0 + 3 from start.
Lowest temporary label at iteration 1 is 3 at node A, permanent.
We scan the successors ['C'] of node A.
− Node C can be reached from A in 5 = 3 + 2.
− Node C receives temporary label 5 = 3 + 2 from A.
Lowest temporary label at iteration 2 is 5 at node C, permanent.
We scan the successors ['B', 'E'] of node C.
− Node B can be reached from C in 6 = 5 + 1.
− Node B receives temporary label 6 = 5 + 1 from C.
− Node E can be reached from C in 6 = 5 + 1.
− Node E receives temporary label 6 = 5 + 1 from C.
Lowest temporary label at iteration 3 is 6 at node B, permanent.
We scan the successors ['start', 'A', 'E'] of node B.
− Node start can be reached from B in 8 = 6 + 2.
− Node A can be reached from B in 7 = 6 + 1.
− Node E can be reached from B in 8 = 6 + 2.
Lowest temporary label at iteration 4 is 6 at node E, permanent.
We scan the successors ['finish'] of node E.
− Node finish can be reached from E in 7 = 6 + 1.
− Node finish receives temporary label 7 = 6 + 1 from E.
Lowest temporary label at iteration 5 is 7 at node finish, permanent.
Since finish is the terminus node, we stop.
We create the path from the end, adding node by node the predecessor that led to the label of the node just added.
The path is therefore ['start', 'A', 'C', 'E', 'finish'].
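For reference, the same labelling procedure can be reproduced with a short Python sketch. The arc list below is reconstructed from the printed trace only (node D and any arcs not mentioned in the trace are omitted, which is an assumption), so it illustrates the method rather than copying the figure exactly.

```python
import heapq

def dijkstra(graph, source, terminus):
    """Dijkstra's method with a heap; graph maps node -> {successor: arc length}."""
    dist = {source: 0}            # best-known (temporary or permanent) labels
    pred = {source: None}         # predecessor that produced each label
    permanent = set()
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if node in permanent:
            continue
        permanent.add(node)       # lowest temporary label becomes permanent
        if node == terminus:      # since the terminus node is permanent, we stop
            break
        for succ, length in graph.get(node, {}).items():
            label = d + length
            if succ not in dist or label < dist[succ]:
                dist[succ] = label            # succ receives temporary label d + length
                pred[succ] = node
                heapq.heappush(heap, (label, succ))
    # create the path from the end, predecessor by predecessor
    path, node = [], terminus
    while node is not None:
        path.append(node)
        node = pred[node]
    return dist[terminus], path[::-1]

# Arcs reconstructed from the printed trace only (node D omitted -- an assumption).
graph = {
    "start": {"A": 3},
    "A": {"C": 2},
    "C": {"B": 1, "E": 1},
    "B": {"start": 2, "A": 1, "E": 2},
    "E": {"finish": 1},
}
print(dijkstra(graph, "start", "finish"))   # (7, ['start', 'A', 'C', 'E', 'finish'])
```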
Question 4. 20 points
State University must purchase 1,100 computers from three vendors. Vendor 1 charges $500 per computer plus a delivery charge of $5,000. Vendor 2 charges $350 per computer plus a delivery charge of $4,000. Vendor 3 charges $250 per computer plus a delivery charge of $6,000. Vendor 1 will sell the university at most 500 computers; vendor 2, at most 900; and vendor 3, at most 400. Formulate an integer optimization problem to minimize the cost of purchasing the needed computers.

Solution 4.
Let yi = 1 if we buy any computers from vendor i, and yi = 0 otherwise.
Let xi be the number of computers purchased from vendor i.
Then the integer optimization model is:

min 500x1 + 350x2 + 250x3 + 5000y1 + 4000y2 + 6000y3
s.t. x1 + x2 + x3 = 1100
     x1 ≤ 500y1
     x2 ≤ 900y2
     x3 ≤ 400y3
     x1, x2, x3 ≥ 0
     y1, y2, y3 ∈ {0, 1}.

(A solver sketch of this model also appears at the end of the document.)

Question 5. 20 points
Are the following statements True or False? Motivate your answers.
a) (5 points) Predictions are always about future events.
b) (5 points) Clustering is the same as classification.
c) (5 points) Binary perceptrons require that the inputs are binary.
d) (5 points) Selecting the decision tree split (at each node as you move down the tree) that maximizes information gain will guarantee an optimal decision tree.

Solution 5.
a) False; consider for example predicting the height of a person from gender, age and nationality, which is not a future event.
b) False; clustering is an unsupervised way to determine groups of similar or related data points, distinct from those in other groups. Classification is a supervised way of learning how to predict categories, for instance the gender of a person given height and age.
c) False; the "binary" refers to the output of the perceptron, not to its inputs, which may take arbitrary real values.
d) False; decisions taken early in the process constrain later decisions and can compromise optimality. We have seen how such a tree yields misclassifications on the training set that can be avoided by another tree with the same depth and number of leaves.
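For reference, the models of Solutions 1 and 4 can be checked with a small amount of code. The sketches below assume PuLP as the modelling library; that choice (and the variable names) is illustrative, not part of the exam, and any LP/MILP solver would do. First, the Airco production model of Solution 1:

```python
# A minimal PuLP sketch of the Solution 1 model (PuLP is an assumed choice of library;
# the variable names mirror Lt, Nt, It in the formulation above).
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, value

months = [1, 2, 3]
demand = {1: 300, 2: 400, 3: 500}
initial_stock = 200

L = {t: LpVariable(f"L{t}", lowBound=0) for t in months}  # made in LA in month t
N = {t: LpVariable(f"N{t}", lowBound=0) for t in months}  # made in NY in month t
I = {t: LpVariable(f"I{t}", lowBound=0) for t in months}  # inventory at end of month t

prob = LpProblem("airco_production", LpMinimize)
prob += lpSum(400 * L[t] + 350 * N[t] + 100 * I[t] for t in months)   # total cost
for t in months:
    prev = initial_stock if t == 1 else I[t - 1]
    prob += L[t] + N[t] + prev - I[t] == demand[t]   # meet demand, carry inventory forward
    prob += 1.5 * L[t] <= 420                        # LA skilled-labor hours
    prob += 2 * N[t] <= 420                          # NY skilled-labor hours

prob.solve()
print({v.name: v.value() for v in prob.variables()}, value(prob.objective))
```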
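Second, the fixed-charge purchasing model of Solution 4, where the binary variables yi switch the delivery charges on and off through the linking constraints xi ≤ (capacity of vendor i) · yi:

```python
# A minimal PuLP sketch of the Solution 4 model (again an assumed choice of library).
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, value

cost = {1: 500, 2: 350, 3: 250}          # price per computer for each vendor
delivery = {1: 5000, 2: 4000, 3: 6000}   # fixed delivery charge per vendor
capacity = {1: 500, 2: 900, 3: 400}      # maximum computers each vendor will sell
demand = 1100

prob = LpProblem("state_university_purchasing", LpMinimize)
x = {i: LpVariable(f"x{i}", lowBound=0) for i in cost}    # computers bought from vendor i
y = {i: LpVariable(f"y{i}", cat="Binary") for i in cost}  # 1 if vendor i is used at all

prob += lpSum(cost[i] * x[i] + delivery[i] * y[i] for i in cost)  # total cost
prob += lpSum(x[i] for i in cost) == demand                       # buy exactly 1,100
for i in cost:
    prob += x[i] <= capacity[i] * y[i]   # linking: no purchase unless delivery is paid

prob.solve()
print({i: x[i].value() for i in cost}, value(prob.objective))
```

The linking constraints use each vendor's capacity as the big-M value, which is the tightest valid choice for this fixed-charge formulation.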