Devore Solutions Ed9 PDF
Document Details
California Polytechnic State University, San Luis Obispo
2016
Jay Devore and Matthew A. Carlton
Summary
This document is the complete solutions manual for the ninth edition of Probability and Statistics for Engineering and the Sciences by Jay Devore. Prepared by Matthew A. Carlton, it provides worked answers to the chapter exercises and was published by Cengage Learning in 2016.
Full Transcript
Complete Solutions Manual to Accompany
Probability and Statistics for Engineering and the Sciences, NINTH EDITION
Jay Devore, California Polytechnic State University, San Luis Obispo, CA
Prepared by Matthew A. Carlton, California Polytechnic State University, San Luis Obispo, CA
Australia  Brazil  Mexico  Singapore  United Kingdom  United States

© 2016 Cengage Learning
ISBN-13: 978-1-305-26061-0
ISBN-10: 1-305-26061-9
© Cengage Learning. All rights reserved. No distribution allowed without express authorization.

ALL RIGHTS RESERVED. No part of this work covered by the copyright herein may be reproduced, transmitted, stored, or used in any form or by any means (graphic, electronic, or mechanical, including but not limited to photocopying, recording, scanning, digitizing, taping, Web distribution, information networks, or information storage and retrieval systems), except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without the prior written permission of the publisher except as may be permitted by the license terms below.

Cengage Learning
20 Channel Center Street, Fourth Floor
Boston, MA 02210, USA

For product information and technology assistance, contact us at Cengage Learning Customer & Sales Support, 1-800-354-9706. For permission to use material from this text or product, submit all requests online at www.cengage.com/permissions. Further permissions questions can be emailed to [email protected].

Cengage Learning is a leading provider of customized learning solutions with office locations around the globe, including Singapore, the United Kingdom, Australia, Mexico, Brazil, and Japan. Locate your local office at www.cengage.com/global. Cengage Learning products are represented in Canada by Nelson Education, Ltd. To learn more about Cengage Learning Solutions, visit www.cengage.com. Purchase any of our products at your local college store or at our preferred online store, www.cengagebrain.com.

NOTE: UNDER NO CIRCUMSTANCES MAY THIS MATERIAL OR ANY PORTION THEREOF BE SOLD, LICENSED, AUCTIONED, OR OTHERWISE REDISTRIBUTED EXCEPT AS MAY BE PERMITTED BY THE LICENSE TERMS HEREIN.

READ IMPORTANT LICENSE INFORMATION

Dear Professor or Other Supplement Recipient:

Cengage Learning has provided you with this product (the "Supplement") for your review and, to the extent that you adopt the associated textbook for use in connection with your course (the "Course"), you and your students who purchase the textbook may use the Supplement as described below. Cengage Learning has established these use limitations in response to concerns raised by authors, professors, and other users regarding the pedagogical problems stemming from unlimited distribution of Supplements.

Cengage Learning hereby grants you a nontransferable license to use the Supplement in connection with the Course, subject to the following conditions. The Supplement is for your personal, noncommercial use only and may not be reproduced, posted electronically or distributed, except that portions of the Supplement may be provided to your students IN PRINT FORM ONLY in connection with your instruction of the Course, so long as such students are advised that they may not copy or distribute any portion of the Supplement to any third party. You may not sell, license, auction, or otherwise redistribute the Supplement in any form. We ask that you take reasonable steps to protect the Supplement from unauthorized use, reproduction, or distribution. Your use of the Supplement indicates your acceptance of the conditions set forth in this Agreement. If you do not accept these conditions, you must return the Supplement unused within 30 days of receipt.

All rights (including without limitation, copyrights, patents, and trade secrets) in the Supplement are and will remain the sole and exclusive property of Cengage Learning and/or its licensors. The Supplement is furnished by Cengage Learning on an "as is" basis without any warranties, express or implied. This Agreement will be governed by and construed pursuant to the laws of the State of New York, without regard to such State's conflict of law rules.

Thank you for your assistance in helping to safeguard the integrity of the content contained in this Supplement. We trust you find the Supplement a useful teaching tool.

Printed in the United States of America

CONTENTS
Chapter 1   Overview and Descriptive Statistics
Chapter 2   Probability
Chapter 3   Discrete Random Variables and Probability Distributions
Chapter 4   Continuous Random Variables and Probability Distributions
Chapter 5   Joint Probability Distributions and Random Samples
Chapter 6   Point Estimation
Chapter 7   Statistical Intervals Based on a Single Sample
Chapter 8   Tests of Hypotheses Based on a Single Sample
Chapter 9   Inferences Based on Two Samples
Chapter 10  The Analysis of Variance
Chapter 11  Multifactor Analysis of Variance
Chapter 12  Simple Linear Regression and Correlation
Chapter 13  Nonlinear and Multiple Regression
Chapter 14  Goodness-of-Fit Tests and Categorical Data Analysis
Chapter 15  Distribution-Free Procedures
Chapter 16  Quality Control Methods

CHAPTER 1

Section 1.1

1.
a. Los Angeles Times, Oberlin Tribune, Gainesville Sun, Washington Post
b. Duke Energy, Clorox, Seagate, Neiman Marcus
c. Vince Correa, Catherine Miller, Michael Cutler, Ken Lee
d. 2.97, 3.56, 2.20, 2.97

2.
a. 29.1 yd, 28.3 yd, 24.7 yd, 31.0 yd
b. 432 pp, 196 pp, 184 pp, 321 pp
c. 2.1, 4.0, 3.2, 6.3
d. 0.07 g, 1.58 g, 7.1 g, 27.2 g

3.
a. How likely is it that more than half of the sampled computers will need or have needed warranty service? What is the expected number among the 100 that need warranty service? How likely is it that the number needing warranty service will exceed the expected number by more than 10?
b. Suppose that 15 of the 100 sampled needed warranty service. How confident can we be that the proportion of all such computers needing warranty service is between .08 and .22? Does the sample provide compelling evidence for concluding that more than 10% of all such computers need warranty service?

4.
a. Concrete populations: all living U.S. citizens, all mutual funds marketed in the U.S., all books published in 1980.
Hypothetical populations: all grade point averages for University of California undergraduates during the next academic year, page lengths for all books published during the next calendar year, batting averages for all major league players during the next baseball season.
b. (Concrete) Probability: In a sample of 5 mutual funds, what is the chance that all 5 have rates of return which exceeded 10% last year? Statistics: If previous-year rates of return for 5 mutual funds were 9.6, 14.5, 8.3, 9.9, and 10.2, can we conclude that the average rate for all funds was below 10%?
(Hypothetical) Probability: In a sample of 10 books to be published next year, how likely is it that the average number of pages for the 10 is between 200 and 250? Statistics: If the sample average number of pages for 10 books is 227, can we be highly confident that the average for all books is between 200 and 245?

5.
a. No. All students taking a large statistics course who participate in an SI program of this sort.
b. The advantage to randomly allocating students to the two groups is that the two groups should then be fairly comparable before the study. If the two groups perform differently in the class, we might attribute this to the treatments (SI and control). If it were left to students to choose, stronger or more dedicated students might gravitate toward SI, confounding the results.
c. If all students were put in the treatment group, there would be no firm basis for assessing the effectiveness of SI (nothing to which the SI scores could reasonably be compared).

6. One could take a simple random sample of students from all students in the California State University system and ask each student in the sample to report the distance from their hometown to campus. Alternatively, the sample could be generated as a stratified random sample by taking a simple random sample from each of the 23 campuses and again asking each student in the sample to report the distance from their hometown to campus. Certain problems might arise with self-reporting of distances, such as recording error or poor recall. This study is enumerative because there exists a finite, identifiable population of objects from which to sample.

7. One could generate a simple random sample of all single-family homes in the city, or a stratified random sample by taking a simple random sample from each of the 10 district neighborhoods. From each of the selected homes, values of all desired variables would be determined. This would be an enumerative study because there exists a finite, identifiable population of objects from which to sample.

8.
a. The number of observations equals 2 × 2 × 2 = 8.
b. This could be called an analytic study because the data would be collected on an existing process. There is no sampling frame.

9.
a. There could be several explanations for the variability of the measurements. Among them could be measurement error (due to mechanical or technical changes across measurements), recording error, differences in weather conditions at time of measurements, etc.
b. No, because there is no sampling frame.

Section 1.2

10.
a.
 5 | 9
 6 | 33588
 7 | 00234677889
 8 | 127
 9 | 077            stem: ones
10 | 7              leaf: tenths
11 | 368
A representative strength for these beams is around 7.8 MPa, but there is a reasonably large amount of variation around that representative value. (What constitutes large or small variation usually depends on context, but variation is usually considered large when the range of the data, the difference between the largest and smallest value, is comparable to a representative value. Here, the range is 11.8 – 5.9 = 5.9 MPa, which is similar in size to the representative value of 7.8 MPa. So, most researchers would call this a large amount of variation.)
b. The data display is not perfectly symmetric around some middle/representative value. There is some positive skewness in this data.
c. Outliers are data points that appear to be very different from the pack. Looking at the stem-and-leaf display in part (a), there appear to be no outliers in this data.
(A later section gives a more precise definition of what constitutes an outlier.)
d. From the stem-and-leaf display in part (a), there are 4 values greater than 10. Therefore, the proportion of data values that exceed 10 is 4/27 = .148, or about 15%.

11.
3L | 1
3H | 56678
4L | 000112222234
4H | 5667888            stem: tenths
5L | 144                leaf: hundredths
5H | 58
6L | 2
6H | 6678
7L |
7H | 5
The stem-and-leaf display shows that .45 is a good representative value for the data. In addition, the display is not symmetric and appears to be positively skewed. The range of the data is .75 – .31 = .44, which is comparable to the typical value of .45. This constitutes a reasonably large amount of variation in the data. The data value .75 is a possible outlier.

12. The sample size for this data set is n = 5 + 15 + 27 + 34 + 22 + 14 + 7 + 2 + 4 + 1 = 131.
a. The first four intervals correspond to observations less than 5, so the proportion of values less than 5 is (5 + 15 + 27 + 34)/131 = 81/131 = .618.
b. The last four intervals correspond to observations at least 6, so the proportion of values at least 6 is (7 + 2 + 4 + 1)/131 = 14/131 = .107.
c. & d. The relative (percent) frequency and density histograms appear below. The distribution of CeO2 sizes is not symmetric, but rather positively skewed. Notice that the relative frequency and density histograms are essentially identical, other than the vertical axis labeling, because the bin widths are all the same.
[Relative frequency and density histograms of CeO2 particle size (nm)]

13.
a.
12 | 2                                            stem: tens
12 | 445                                          leaf: ones
12 | 6667777
12 | 889999
13 | 00011111111
13 | 2222222222333333333333333
13 | 44444444444444444455555555555555555555
13 | 6666666666667777777777
13 | 888888888888999999
14 | 0000001111
14 | 2333333
14 | 444
14 | 77
The observations are highly concentrated at around 134 or 135, where the display suggests the typical value falls.
b.
[Frequency histogram of ultimate strength (ksi), classes from 124 to 148]
The histogram of ultimate strengths is symmetric and unimodal, with the point of symmetry at approximately 135 ksi. There is a moderate amount of variation, and there are no gaps or outliers in the distribution.

14.
a.
 2 | 23                            stem: 1.0
 3 | 2344567789                    leaf: .10
 4 | 01356889
 5 | 00001114455666789
 6 | 0000122223344456667789999
 7 | 00012233455555668
 8 | 02233448
 9 | 012233335666788
10 | 2344455688
11 | 2335999
12 | 37
13 | 8
14 | 36
15 | 0035
16 |
17 |
18 | 9
b. A representative value is around 7.0.
c. The data exhibit a moderate amount of variation (this is subjective).
d. No, the data is skewed to the right, or positively skewed.
e. The value 18.9 appears to be an outlier, being more than two stem units from the previous value.

15.
     American | stem | French
              |   8  | 1
 755543211000 |   9  | 00234566
         9432 |  10  | 2356
         6630 |  11  | 1369
          850 |  12  | 223558
            8 |  13  | 7
              |  14  |
              |  15  | 8
            2 |  16  |
American movie times are unimodal and strongly positively skewed, while French movie times appear to be bimodal. A typical American movie runs about 95 minutes, while French movies are typically either around 95 minutes or around 125 minutes. American movies are generally shorter than French movies and are less variable in length. Finally, both American and French movies occasionally run very long (outliers at 162 minutes and 158 minutes, respectively, in the samples).
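The displays above were produced with statistical software; as an illustration only (not part of the original manual), here is a minimal Python sketch that builds a comparable stem-and-leaf display, using the flexural-strength values read off the Exercise 10 display. The function name and layout are my own choices.

from collections import defaultdict

def stem_and_leaf(data, leaf_unit=0.1):
    # Split each value into an integer stem and a single leaf digit.
    rows = defaultdict(list)
    for x in sorted(data):
        stem, leaf = divmod(round(x / leaf_unit), 10)
        rows[stem].append(str(leaf))
    for stem in range(min(rows), max(rows) + 1):
        print(f"{stem:3d} | {''.join(rows.get(stem, []))}")

# Flexural strengths (MPa) read off the stem-and-leaf display in Exercise 10:
beams = [5.9, 6.3, 6.3, 6.5, 6.8, 6.8, 7.0, 7.0, 7.2, 7.3, 7.4, 7.6, 7.7,
         7.7, 7.8, 7.8, 7.9, 8.1, 8.2, 8.7, 9.0, 9.7, 9.7, 10.7, 11.3, 11.6, 11.8]
stem_and_leaf(beams, leaf_unit=0.1)   # stems are ones, leaves are tenths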
16.
a.
       Beams | stem | Cylinders
           9 |   5  | 8
       88533 |   6  | 16
 98877643200 |   7  | 012488
         721 |   8  | 13359            stem: ones
         770 |   9  | 278              leaf: tenths
           7 |  10  | 863
             |  11  | 2
             |  12  | 6
             |  13  |
             |  14  | 1
The data appears to be slightly skewed to the right, or positively skewed. The value of 14.1 MPa appears to be an outlier. Three out of the twenty, or 15%, of the observations exceed 10 MPa.
b. The majority of observations are between 5 and 9 MPa for both beams and cylinders, with the modal class being 7.0–7.9 MPa. The observations for cylinders are more variable, or spread out, and the maximum value of the cylinder observations is higher.
c.
[Dotplot of cylinder strength (MPa), scale from 6.0 to 13.5]

17. The sample size for this data set is n = 7 + 20 + 26 + … + 3 + 2 = 108.
a. "At most five bidders" means 2, 3, 4, or 5 bidders. The proportion of contracts that involved at most 5 bidders is (7 + 20 + 26 + 16)/108 = 69/108 = .639. Similarly, the proportion of contracts that involved at least 5 bidders (5 through 11) is equal to (16 + 11 + 9 + 6 + 8 + 3 + 2)/108 = 55/108 = .509.
b. The number of contracts with between 5 and 10 bidders, inclusive, is 16 + 11 + 9 + 6 + 8 + 3 = 53, so the proportion is 53/108 = .491. "Strictly" between 5 and 10 means 6, 7, 8, or 9 bidders, for a proportion equal to (11 + 9 + 6 + 8)/108 = 34/108 = .315.
c. The distribution of number of bidders is positively skewed, ranging from 2 to 11 bidders, with a typical value of around 4–5 bidders.
[Frequency histogram of number of bidders, 2 through 11]

18.
a. The most interesting feature of the histogram is the heavy presence of three very large outliers (21, 24, and 32 directors). Absent these three corporations, the distribution of number of directors would be roughly symmetric with a typical value of around 9.
[Percent histogram of number of directors, 4 through 32]
Note: One way to have Minitab automatically construct a histogram from grouped data such as this is to use Minitab's ability to enter multiple copies of the same number by typing, for example, 42(9) to enter 42 copies of the number 9. The frequency data in this exercise was entered using the following Minitab commands (see also the sketch after this exercise):
MTB > set c1
DATA> 3(4) 12(5) 13(6) 25(7) 24(8) 42(9) 23(10) 19(11) 16(12) 11(13) 5(14) 4(15) 1(16) 3(17) 1(21) 1(24) 1(32)
DATA> end
b. The accompanying frequency distribution is nearly identical to the one in the textbook, except that the three largest values are compacted into the "≥ 18" category. If this were the originally presented information, we could not create a histogram, because we would not know the upper boundary for the rectangle corresponding to the "≥ 18" category.
No. dir.:  4   5   6   7   8   9  10  11  12  13  14  15  16  17  ≥18
Freq.:     3  12  13  25  24  42  23  19  16  11   5   4   1   3    3
c. The sample size is 3 + 12 + … + 3 + 1 + 1 + 1 = 204. So, the proportion of these corporations that have at most 10 directors is (3 + 12 + 13 + 25 + 24 + 42 + 23)/204 = 142/204 = .696.
d. Similarly, the proportion of these corporations with more than 15 directors is (1 + 3 + 1 + 1 + 1)/204 = 7/204 = .034.
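The Minitab note in Exercise 18 expands grouped frequencies into one observation per corporation. As an added illustration (not part of the manual, assuming numpy is available), a Python equivalent that also reproduces the proportions in parts (c) and (d):

import numpy as np

# Value: frequency pairs from Exercise 18, mirroring the Minitab "3(4) 12(5) ..." shorthand.
values = [4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 21, 24, 32]
freqs  = [3, 12, 13, 25, 24, 42, 23, 19, 16, 11, 5, 4, 1, 3, 1, 1, 1]

directors = np.repeat(values, freqs)      # one entry per corporation
print(directors.size)                     # 204 corporations
print((directors <= 10).mean())           # proportion with at most 10 directors, about .696
print((directors > 15).mean())            # proportion with more than 15 directors, about .034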
19.
a. From this frequency distribution, the proportion of wafers that contained at least one particle is (100 – 1)/100 = .99, or 99%. Note that it is much easier to subtract 1 (which is the number of wafers that contain 0 particles) from 100 than it would be to add all the frequencies for 1, 2, 3, … particles. In a similar fashion, the proportion containing at least 5 particles is (100 – 1 – 2 – 3 – 12 – 11)/100 = 71/100 = .71, or 71%.
b. The proportion containing between 5 and 10 particles is (15 + 18 + 10 + 12 + 4 + 5)/100 = 64/100 = .64, or 64%. The proportion that contain strictly between 5 and 10 (meaning strictly more than 5 and strictly less than 10) is (18 + 10 + 12 + 4)/100 = 44/100 = .44, or 44%.
c. The following histogram was constructed using Minitab. The histogram is almost symmetric and unimodal; however, the distribution has a few smaller modes and has a very slight positive skew.
[Percent histogram of number of contaminating particles, 0 through 14]

20.
a. The following stem-and-leaf display was constructed:
0 | 123334555599            stem: thousands
1 | 00122234688             leaf: hundreds
2 | 1112344477
3 | 0113338
4 | 37
5 | 23778
A typical data value is somewhere in the low 2000's. The display is bimodal (the stem at 5 would be considered a mode, the stem at 0 another) and has a positive skew.
b. A histogram of this data, using class boundaries of 0, 1000, 2000, …, 6000, is shown below. The proportion of subdivisions with total length less than 2000 is (12 + 11)/47 = .489, or 48.9%. Between 2000 and 4000, the proportion is (10 + 7)/47 = .362, or 36.2%. The histogram shows the same general shape as depicted by the stem-and-leaf in part (a).
[Frequency histogram of total length of streets, classes of width 1000 from 0 to 6000]

21.
a. A histogram of the y data appears below. From this histogram, the proportion of subdivisions having no cul-de-sacs (i.e., y = 0) is 17/47 = .362, or 36.2%. The proportion having at least one cul-de-sac (y ≥ 1) is (47 – 17)/47 = 30/47 = .638, or 63.8%. Note that subtracting the number of subdivisions with y = 0 from the total, 47, is an easy way to find the number of subdivisions with y ≥ 1.
[Frequency histogram of number of culs-de-sac, y = 0 through 5]
b. A histogram of the z data appears below. From this histogram, the proportion of subdivisions with at most 5 intersections (i.e., z ≤ 5) is 42/47 = .894, or 89.4%. The proportion having fewer than 5 intersections (i.e., z < 5) is 39/47 = .830, or 83.0%.
[Frequency histogram of number of intersections, z = 0 through 8]

22. A very large percentage of the data values are greater than 0, which indicates that most, but not all, runners do slow down at the end of the race. The histogram is also positively skewed, which means that some runners slow down a lot compared to the others. A typical value for this data would be in the neighborhood of 200 seconds. The proportion of the runners who ran the last 5 km faster than they did the first 5 km is very small, about 1% or so.

23. Note: since the class intervals have unequal length, we must use a density scale.
[Density histogram of tantrum duration, class boundaries 0, 2, 4, 11, 20, 30, 40]
The distribution of tantrum durations is unimodal and heavily positively skewed. Most tantrums last between 0 and 11 minutes, but a few last more than half an hour! With such heavy skewness, it's difficult to give a representative value.

24. The distribution of shear strengths is roughly symmetric and bell-shaped, centered at about 5000 lbs and ranging from about 4000 to 6000 lbs.
[Frequency histogram of shear strength (lb), 4000 to 6000]
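Exercise 23's density histogram divides each relative frequency by its class width. A Python sketch of that mechanism follows; the bin edges are the ones quoted above, but the frequencies are placeholders invented purely for illustration (the manual's raw data are not reproduced here).

import numpy as np
import matplotlib.pyplot as plt

# Unequal class widths require a density scale: density = (relative frequency) / (class width).
edges = np.array([0, 2, 4, 11, 20, 30, 40])        # class boundaries from Exercise 23
freq  = np.array([30, 25, 30, 10, 4, 1])            # hypothetical counts, for illustration only
rel = freq / freq.sum()
density = rel / np.diff(edges)

plt.bar(edges[:-1], density, width=np.diff(edges), align='edge', edgecolor='black')
plt.xlabel('Tantrum duration (min)')
plt.ylabel('Density')
plt.show()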
Histogram of original data: 14 12 10 Frequency 8 6 4 2 0 10 20 30 40 50 60 70 80 IDT 12 Chapter 1: Overview and Descriptive Statistics Histogram of transformed data: 9 8 7 6 Frequency 5 4 3 2 1 0 1.1 1.2 1.3 1.4 1.5 1.6 1.7 1.8 1.9 log(IDT) 26. a. Yes: the proportion of sampled angles smaller than 15° is.177 +.166 +.175 =.518. b. The proportion of sampled angles at least 30° is.078 +.044 +.030 =.152. c. The proportion of angles between 10° and 25° is roughly.175 +.136 + (.194)/2 =.408. d. The distribution of misorientation angles is heavily positively skewed. Though angles can range from 0° to 90°, nearly 85% of all angles are less than 30°. Without more precise information, we cannot tell if the data contain outliers. Histogram of Angle 0.04 0.03 Density 0.02 0.01 0.00 0 10 20 40 90 Angle 13 Chapter 1: Overview and Descriptive Statistics 27. a. The endpoints of the class intervals overlap. For example, the value 50 falls in both of the intervals 0–50 and 50–100. b. The lifetime distribution is positively skewed. A representative value is around 100. There is a great deal of variability in lifetimes and several possible candidates for outliers. Class Interval Frequency Relative Frequency 0–< 50 9 0.18 50– 0, P(X ≤ x) = f ( y;θ )dy = e− y 1 − e − x /2θ. 2 dx = 2 2 2 2 d. −∞ 0 θ 2 0 127 Chapter 4: Continuous Random Variables and Probability Distributions 5. 2 ∞ 2 kx 3 8k 3 a. 1= ∫ −∞ f ( x)dx = ∫ 0 kx 2 dx = = 3 0 3 ⇒k= 8. 1.6 1.4 1.2 1.0 0.8 f(x) 0.6 0.4 0.2 0.0 0.0 0.5 1.0 1.5 2.0 x 1 ∫ 1 b. P(0 ≤ X ≤ 1) = 3 x 2 dx= 1 x 3 = =.125. 1 0 8 8 0 8 8 ( 2 ) − 8 (1) == 1.5 ∫ 1.5 1 3 P(1 ≤ X ≤ 1.5) = 3 x 2 dx = = 3 1 3 1 19 3 c. 8 8 x 64.296875. 1 1 2 ∫ 2 d. P(X ≥ 1.5) = 1 – 3 = x 2 dx 1 x 3 = 1 (2)3 − 18 (1.5)3 =.578125. 1.5 8 8 1.5 8 6. a. 0.8 0.7 0.6 0.5 0.4 f(x) 0.3 0.2 0.1 0.0 2.0 2.5 3.0 3.5 4.0 4.5 x 4 1 4k 3 b. 1= ∫ 2 k[1 − ( x − 3) 2 ]dx = ∫ −1 k[1 − u 2 ]du = = 3 ⇒k =. 4 4 c. P(X > 3) = ∫3 4 3 [1 − ( x − 3) 2 ]dx =.5. This matches the symmetry of the pdf about x = 3. 47 P ( 114 ≤ X ≤ 134 )= 13/4 1/4 d. ∫ 11/4 4 3 [1 − ( x − 3) 2 ]dx= 3 4 ∫ −1/4 [1 − u 2 ]du = 128 ≈.367. e. P(|X – 3| >.5) = 1 – P(|X – 3| ≤.5) = 1 – P(2.5 ≤ X ≤ 3.5) =.5 1− ∫ 3 [1 − u= 2 ]du = 1 −.6875 =.3125. −.5 4 128 Chapter 4: Continuous Random Variables and Probability Distributions 7. 1 1 1 a. f(x)= = = for.20 ≤ x ≤ 4.25 and = 0 otherwise. B − A 4.25 −.20 4.05 Distribution Plot Uniform, Lower=0.2, Upper=4.25 0.25 0.20 0.15 Density 0.10 0.05 0.00 0 1 2 3 4 X 4.25 b. P(X > 3) = ∫ 3 1 4.05 dx = 1.25 4.05 =.309. µ +1 c. P(µ – 1 ≤ X ≤ µ + 1) = ∫µ 1 −1 4.05 dx = 2 4.05 =.494. (We don’t actually need to know µ here, but it’s clearly the midpoint of 2.225 mm by symmetry.) a +1 d. P(a ≤ X ≤ a + 1) = ∫ 1 4.05 dx = 1 4.05 =.247. a 8. a. 0.20 0.15 f(y) 0.10 0.05 0.00 0 2 4 6 8 10 y 5 10 ∞ 5 10 y2 2 1 2 b. ∫−∞ f ( y )dy= ∫ 1 0 25 ydy + ∫ 5 ( − y )dy = 2 5 1 25 + y− 50 0 5 y = 50 5 25 1 1 1 + (4 − 2) − 2 − = + = 1 50 2 2 2 3 3 y2 9 c. P(Y ≤ 3) = ∫ 1 0 25 y dy = = 50 0 50 =.18. 5 8 23 d. P(Y ≤ 8) = ∫ 1 0 25 y dy + ∫ ( 52 − 251 y )dy = 5 25 =.92. 129 Chapter 4: Continuous Random Variables and Probability Distributions e. Use parts c and d: P(3 ≤ Y ≤ 8) = P(Y ≤ 8) – P(Y < 3) =.92 –.18 =.74. 2 10 f. ∫ 251 y dy + ∫ ( 52 − 251 y)dy = P(Y < 2 or Y > 6) = = 0 =.4. 6 9. 5 4 ∫.15e dx =.15∫ e −.15u du (after the substitution u = x – 1) −.15( x −1) a. P(X ≤ 5) = 1 0 4 = −e −.15u = 1 − e −.6 ≈.451. P(X > 5) = 1 – P(X ≤ 5) = 1 –.451 =.549. 0 5 4 ∫.15e ∫ 4 b. P(2 ≤ X ≤ 5) = −.15( x −1) dx =.15e −.15u du = −e −.15u =.312. 2 1 1 10. a. 
10.
a. The pdf is a decreasing function of x, beginning at x = θ.
b. ∫_{−∞}^{∞} f(x; k, θ) dx = ∫_θ^{∞} kθ^k/x^(k+1) dx = kθ^k ∫_θ^{∞} x^(−k−1) dx = θ^k·[−x^(−k)]_θ^{∞} = 0 − θ^k·(−θ^(−k)) = 1.
c. P(X ≤ b) = ∫_θ^b kθ^k/x^(k+1) dx = [−θ^k/x^k]_θ^b = 1 − (θ/b)^k.
d. P(a ≤ X ≤ b) = ∫_a^b kθ^k/x^(k+1) dx = [−θ^k/x^k]_a^b = (θ/a)^k − (θ/b)^k.

Section 4.2

11.
a. P(X ≤ 1) = F(1) = 1²/4 = .25.
b. P(.5 ≤ X ≤ 1) = F(1) − F(.5) = 1²/4 − .5²/4 = .1875.
c. P(X > 1.5) = 1 − P(X ≤ 1.5) = 1 − F(1.5) = 1 − 1.5²/4 = .4375.
d. .5 = F(µ̃) = µ̃²/4 ⇒ µ̃² = 2 ⇒ µ̃ = √2 ≈ 1.414.
e. f(x) = F′(x) = x/2 for 0 ≤ x < 2, and = 0 otherwise.
f. E(X) = ∫_{−∞}^{∞} x·f(x) dx = ∫_0^2 x·(x/2) dx = (1/2)∫_0^2 x² dx = x³/6 |_0^2 = 8/6 ≈ 1.333.
g. E(X²) = ∫_{−∞}^{∞} x²·f(x) dx = ∫_0^2 x²·(x/2) dx = (1/2)∫_0^2 x³ dx = x⁴/8 |_0^2 = 2, so V(X) = E(X²) − [E(X)]² = 2 − (8/6)² = 8/36 ≈ .222, and σ_X = √.222 = .471.
h. From g, E(X²) = 2.

12.
a. P(X < 0) = F(0) = .5.
b. P(−1 ≤ X ≤ 1) = F(1) − F(−1) = .6875.
c. P(X > .5) = 1 − P(X ≤ .5) = 1 − F(.5) = 1 − .6836 = .3164.
d. f(x) = F′(x) = d/dx [1/2 + (3/32)(4x − x³/3)] = (3/32)(4 − x²) = .09375(4 − x²).
e. By definition, F(µ̃) = .5. F(0) = .5 from a above, which is as desired.

13.
a. 1 = ∫_1^{∞} k/x⁴ dx = k∫_1^{∞} x^(−4) dx = k·[x^(−3)/(−3)]_1^{∞} = 0 − k(1)^(−3)/(−3) = k/3 ⇒ k = 3.
b. For x ≥ 1, F(x) = ∫_{−∞}^x f(y) dy = ∫_1^x (3/y⁴) dy = [−y^(−3)]_1^x = −x^(−3) + 1 = 1 − 1/x³. For x < 1, F(x) = 0, since f(y) = 0 for y < 1.
c. P(X > 2) = 1 − F(2) = 1 − 7/8 = 1/8, or .125; P(2 < X < 3) = F(3) − F(2) = (1 − 1/27) − (1 − 1/8) = .963 − .875 = .088.
d. The mean is E(X) = ∫_1^{∞} x·(3/x⁴) dx = ∫_1^{∞} (3/x³) dx = [−(3/2)x^(−2)]_1^{∞} = 0 + 3/2 = 1.5. Next, E(X²) = ∫_1^{∞} x²·(3/x⁴) dx = ∫_1^{∞} (3/x²) dx = [−3x^(−1)]_1^{∞} = 0 + 3 = 3, so V(X) = 3 − (1.5)² = .75. Finally, the standard deviation of X is σ = √.75 = .866.
e. P(1.5 − .866 < X < 1.5 + .866) = P(.634 < X < 2.366) = F(2.366) − F(.634) = .9245 − 0 = .9245.

14.
a. If X is uniformly distributed on the interval from A to B, then E(X) = ∫_A^B x·(1/(B − A)) dx = (A + B)/2, the midpoint of the interval. Also, E(X²) = (A² + AB + B²)/3, from which V(X) = E(X²) − [E(X)]² = … = (B − A)²/12. With A = 7.5 and B = 20, E(X) = 13.75 and V(X) = 13.02.
b. From Example 4.6, the complete cdf is F(x) = 0 for x < 7.5; (x − 7.5)/12.5 for 7.5 ≤ x < 20; and 1 for 20 ≤ x.
c. P(X ≤ 10) = F(10) = .200; P(10 ≤ X ≤ 15) = F(15) − F(10) = .4.
d. σ = √13.02 = 3.61, so µ ± σ = (10.14, 17.36). Thus, P(µ − σ ≤ X ≤ µ + σ) = P(10.14 ≤ X ≤ 17.36) = F(17.36) − F(10.14) = .5776. Similarly, P(µ − 2σ ≤ X ≤ µ + 2σ) = P(6.53 ≤ X ≤ 20.97) = 1.
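For the uniform model in Exercise 14, the same quantities can be obtained from scipy's uniform distribution, which is parameterized by loc = A and scale = B − A. An illustrative sketch, not part of the original solution:

from scipy.stats import uniform

# Exercise 14: X ~ Uniform[7.5, 20].
X = uniform(loc=7.5, scale=20 - 7.5)

print(X.mean(), X.var())            # 13.75 and (12.5^2)/12 = 13.02
print(X.cdf(10))                    # P(X <= 10) = .200
print(X.cdf(15) - X.cdf(10))        # P(10 <= X <= 15) = .400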
15.
a. Since X is limited to the interval (0, 1), F(x) = 0 for x ≤ 0 and F(x) = 1 for x ≥ 1. For 0 < x < 1,
F(x) = ∫_{−∞}^x f(y) dy = ∫_0^x 90y⁸(1 − y) dy = ∫_0^x (90y⁸ − 90y⁹) dy = [10y⁹ − 9y¹⁰]_0^x = 10x⁹ − 9x¹⁰.
[Graphs of the pdf and cdf of X on (0, 1)]
b. F(.5) = 10(.5)⁹ − 9(.5)¹⁰ = .0107.
c. P(.25 < X ≤ .5) = F(.5) − F(.25) = .0107 − [10(.25)⁹ − 9(.25)¹⁰] = .0107 − .0000 = .0107. Since X is continuous, P(.25 ≤ X ≤ .5) = P(.25 < X ≤ .5) = .0107.
d. The 75th percentile is the value of x for which F(x) = .75: 10x⁹ − 9x¹⁰ = .75 ⇒ x = .9036 using software.
e. E(X) = ∫_{−∞}^{∞} x·f(x) dx = ∫_0^1 x·90x⁸(1 − x) dx = ∫_0^1 (90x⁹ − 90x¹⁰) dx = 9 − 90/11 = 9/11 = .8182.
Similarly, E(X²) = ∫_{−∞}^{∞} x²·f(x) dx = ∫_0^1 x²·90x⁸(1 − x) dx = … = .6818, from which V(X) = .6818 − (.8182)² = .0124 and σ_X = .11134.
f. µ ± σ = (.7068, .9295). Thus, P(µ − σ ≤ X ≤ µ + σ) = F(.9295) − F(.7068) = .8465 − .1602 = .6863, and the probability X is more than 1 standard deviation from its mean value equals 1 − .6863 = .3137.

16.
a. The graph below shows f(x; θ, 80) for θ = 4 (green), θ = 1 (red), and θ = .5 (gold). For θ > 1, X has a right-skewed distribution on [0, 80]; for θ = 1, f is constant (i.e., X ~ Unif[0, 80]); and for θ < 1, X has a left-skewed distribution and f has an asymptote as x → 80.
[Graph of f(x; θ, 80) for θ = 4, 1, .5]
b. For 0 < x < τ, F(x) = ∫_0^x (θ/τ)(1 − y/τ)^(θ−1) dy. Make the substitution u = 1 − y/τ, from which dy = −τ du:
F(x) = ∫_1^{1−x/τ} θu^(θ−1)·(−du) = ∫_{1−x/τ}^1 θu^(θ−1) du = [u^θ]_{1−x/τ}^1 = 1 − (1 − x/τ)^θ. Also, F(x) = 0 for x ≤ 0 and F(x) = 1 for x ≥ τ.
c. Set .5 = F(η) and solve for η: .5 = 1 − (1 − η/τ)^θ ⇒ (1 − η/τ)^θ = .5 ⇒ 1 − η/τ = .5^(1/θ) ⇒ η = τ(1 − .5^(1/θ)).
d. P(50 ≤ X ≤ 70) = F(70) − F(50) = [1 − (1 − 70/80)⁴] − [1 − (1 − 50/80)⁴] = (3/8)⁴ − (1/8)⁴ = .0195.

17.
a. To find the (100p)th percentile, set F(x) = p and solve for x: (x − A)/(B − A) = p ⇒ x = A + (B − A)p.
b. E(X) = ∫_A^B x·(1/(B − A)) dx = (A + B)/2, the midpoint of the interval. Also, E(X²) = (A² + AB + B²)/3, from which V(X) = E(X²) − [E(X)]² = … = (B − A)²/12. Finally, σ_X = √V(X) = (B − A)/√12.
c. E(Xⁿ) = ∫_A^B xⁿ·(1/(B − A)) dx = [x^(n+1)/((n + 1)(B − A))]_A^B = (B^(n+1) − A^(n+1))/((n + 1)(B − A)).

18. f(x) = 1/[1 − (−1)] = 1/2 for −1 ≤ x ≤ 1.
a. P(Y = .5) = P(X ≥ .5) = ∫_{.5}^1 (1/2) dx = .25.
b. P(Y = −.5) = .25 as well, due to symmetry. For −.5 < y < .5, F(y) = .25 + ∫_{−.5}^y (1/2) dx = .25 + .5(y + .5) = .5 + .5y. Since Y ≤ .5, F(y) = 1 for all y ≥ .5. That is, F(y) = 0 for y < −.5; .5 + .5y for −.5 ≤ y < .5; and 1 for .5 ≤ y.
[Graph of the cdf F(y)]

19.
a. P(X ≤ 1) = F(1) = .25[1 + ln(4)] = .597.
b. P(1 ≤ X ≤ 3) = F(3) − F(1) = .966 − .597 = .369.
c. For x < 0 or x > 4, the pdf is f(x) = 0 since X is restricted to (0, 4). For 0 < x < 4, take the first derivative of the cdf:
F(x) = (x/4)[1 + ln(4/x)] = x/4 + (ln(4)/4)x − (1/4)x ln(x) ⇒
f(x) = F′(x) = 1/4 + ln(4)/4 − (1/4)ln(x) − (1/4)x·(1/x) = ln(4)/4 − (1/4)ln(x) = .3466 − .25 ln(x).

20.
a. For 0 ≤ y < 5, F(y) = ∫_0^y (u/25) du = y²/50. For 5 ≤ y ≤ 10,
F(y) = ∫_0^y f(u) du = ∫_0^5 (u/25) du + ∫_5^y (2/5 − u/25) du = 1/2 + [2u/5 − u²/50]_5^y = −1 + 2y/5 − y²/50.
So, the complete cdf of Y is F(y) = 0 for y < 0; y²/50 for 0 ≤ y < 5; −1 + 2y/5 − y²/50 for 5 ≤ y ≤ 10; and 1 for y > 10.

[…]

P(X > .50) = P(Z > 3.33) = 1 − Φ(3.33) = 1 − .9996 = .0004.
b. P(X ≤ .20) = Φ(−0.50) = .3085.
c. We want the 95th percentile, c, of this normal distribution, so that 5% of the values are higher. The 95th percentile of the standard normal distribution satisfies Φ(z) = .95, which from the normal table yields z = 1.645. So, c = .30 + (1.645)(.06) = .3987. The largest 5% of all concentration values are above .3987 mg/cm³.
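Where the solutions above read values of Φ from the z table, software returns them directly. A minimal scipy sketch for the concentration example just above (µ = .30, σ = .06); this is an added illustration, not the manual's method:

from scipy.stats import norm

X = norm(loc=.30, scale=.06)

print(X.sf(.50))      # P(X > .50), about .0004 (the table value uses z = 3.33)
print(X.ppf(.95))     # 95th percentile, about .3987 mg/cm^3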
35. µ = 8.46 min, σ = 0.913 min.
a. P(X ≥ 10) = P(Z ≥ 1.69) = 1 − Φ(1.69) = 1 − .9545 = .0455. Since X is continuous, P(X > 10) = P(X ≥ 10) = .0455.
b. P(X > 15) = P(Z > 7.16) ≈ 0.
c. P(8 ≤ X ≤ 10) = P(−0.50 ≤ Z ≤ 1.69) = Φ(1.69) − Φ(−0.50) = .9545 − .3085 = .6460.
d. P(8.46 − c ≤ X ≤ 8.46 + c) = .98, so 8.46 − c and 8.46 + c are at the 1st and the 99th percentile of the given distribution, respectively. The 99th percentile of the standard normal distribution satisfies Φ(z) = .99, which corresponds to z = 2.33. So, 8.46 + c = µ + 2.33σ = 8.46 + 2.33(0.913) ⇒ c = 2.33(0.913) = 2.13.
e. From a, P(X > 10) = .0455 and P(X ≤ 10) = .9545. For four independent selections, P(at least one haul time exceeds 10) = 1 − P(none of the four exceeds 10) = 1 − P(first doesn't ∩ … ∩ fourth doesn't) = 1 − (.9545)(.9545)(.9545)(.9545) by independence = 1 − (.9545)⁴ = .1700.

36.
a. P(X < 1500) = P(Z < 3) = Φ(3) = .9987; P(X ≥ 1000) = P(Z ≥ −.33) = 1 − Φ(−.33) = 1 − .3707 = .6293.
b. P(1000 < X < 1500) = P(−.33 < Z < 3) = Φ(3) − Φ(−.33) = .9987 − .3707 = .6280.
c. From the table, Φ(z) = .02 ⇒ z = −2.05 ⇒ x = 1050 − 2.05(150) = 742.5 μm. The smallest 2% of droplets are those smaller than 742.5 μm in size.
d. Let Y = the number of droplets, out of 5, that exceed 1500 µm. Then Y is binomial, with n = 5 and p = .0013 from a. So, P(Y = 2) = C(5, 2)(.0013)²(.9987)³ ≈ 1.68 × 10⁻⁵.

37.
a. P(X = 105) = 0, since the normal distribution is continuous; P(X < 105) = P(Z < 0.2) = P(Z ≤ 0.2) = Φ(0.2) = .5793; P(X ≤ 105) = .5793 as well, since X is continuous.
b. No, the answer does not depend on μ or σ. For any normal rv, P(|X − μ| > σ) = P(|Z| > 1) = P(Z < −1 or Z > 1) = 2P(Z < −1) by symmetry = 2Φ(−1) = 2(.1587) = .3174.
c. From the table, Φ(z) = .1% = .001 ⇒ z = −3.09 ⇒ x = 104 − 3.09(5) = 88.55 mmol/L. The smallest .1% of chloride concentration values are those less than 88.55 mmol/L.

38. Let X denote the diameter of a randomly selected cork made by the first machine, and let Y be defined analogously for the second machine. P(2.9 ≤ X ≤ 3.1) = P(−1.00 ≤ Z ≤ 1.00) = .6826, while P(2.9 ≤ Y ≤ 3.1) = P(−7.00 ≤ Z ≤ 3.00) = .9987. So, the second machine wins handily.

39. µ = 30 mm, σ = 7.8 mm.
a. P(X ≤ 20) = P(Z ≤ −1.28) = .1003. Since X is continuous, P(X < 20) = .1003 as well.
b. Set Φ(z) = .75 to find z ≈ 0.67. That is, 0.67 is roughly the 75th percentile of a standard normal distribution. Thus, the 75th percentile of X's distribution is µ + 0.67σ = 30 + 0.67(7.8) = 35.226 mm.
c. Similarly, Φ(z) = .15 ⇒ z ≈ −1.04 ⇒ η(.15) = 30 − 1.04(7.8) = 21.888 mm.
d. The values in question are the 10th and 90th percentiles of the distribution (in order to have 80% in the middle). Mimicking b and c, Φ(z) = .1 ⇒ z ≈ −1.28 and Φ(z) = .9 ⇒ z ≈ +1.28, so the 10th and 90th percentiles are 30 ± 1.28(7.8) = 20.016 mm and 39.984 mm.

40.
a. P(X < 40) = P(Z ≤ (40 − 43)/4.5) = P(Z < −0.667) = .2514. P(X > 60) = P(Z > (60 − 43)/4.5) = P(Z > 3.778) ≈ 0.
b. We desire the 25th percentile. Since the 25th percentile of a standard normal distribution is roughly z = −0.67, the answer is 43 + (−0.67)(4.5) = 39.985 ksi.

41. For a single drop, P(damage) = P(X < 100) = P(Z < (100 − 200)/30) = P(Z < −3.33) = .0004. So, the probability of no damage on any single drop is 1 − .0004 = .9996, and P(at least one among five is damaged) = 1 − P(none damaged) = 1 − (.9996)⁵ = 1 − .998 = .002.

42. The probability X is within .1 of its mean is given by P(µ − .1 ≤ X ≤ µ + .1) = P([(µ − .1) − µ]/σ ≤ Z ≤ [(µ + .1) − µ]/σ) = P(−.1/σ ≤ Z ≤ .1/σ) […]

43.
a. […] P(X > 73.24) = 1 − Φ((73.24 − µ)/σ), so (73.24 − µ)/σ = Φ⁻¹(.9) ≈ 1.28 ⇒ 73.24 − µ = 1.28σ. Subtract the top equation from the bottom one to get 34.12 = 2.925σ, or σ ≈ 11.665 mph. Then, substitute back into either equation to get µ ≈ 58.309 mph.
b. P(50 ≤ X ≤ 65) = Φ(.57) − Φ(−.72) = .7157 − .2358 = .4799.
c. P(X > 70) = 1 − Φ(1.00) = 1 − .8413 = .1587.
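Several parts above chain a normal tail probability into a follow-up calculation (for example, the haul-time part e and the droplet part d). A short illustrative Python sketch, assuming scipy; small differences from the printed answers come from rounding z to two decimals in the table work:

from scipy.stats import norm, binom

# Haul times ~ N(8.46, 0.913): probability at least one of four independent times exceeds 10.
p = norm(loc=8.46, scale=0.913).sf(10)
print(1 - (1 - p)**4)            # about .17, comparable to the .1700 above

# Droplet sizes ~ N(1050, 150): probability exactly 2 of 5 droplets exceed 1500 um.
q = norm(loc=1050, scale=150).sf(1500)
print(binom.pmf(2, 5, q))        # on the order of the 1.68e-5 computed above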
44.
a. P(µ − 1.5σ ≤ X ≤ µ + 1.5σ) = P(−1.5 ≤ Z ≤ 1.5) = Φ(1.50) − Φ(−1.50) = .8664.
b. P(X < µ − 2.5σ or X > µ + 2.5σ) = 1 − P(µ − 2.5σ ≤ X ≤ µ + 2.5σ) = 1 − P(−2.5 ≤ Z ≤ 2.5) = 1 − .9876 = .0124.
c. P(µ − 2σ ≤ X ≤ µ − σ or µ + σ ≤ X ≤ µ + 2σ) = P(within 2 sd's) − P(within 1 sd) = P(µ − 2σ ≤ X ≤ µ + 2σ) − P(µ − σ ≤ X ≤ µ + σ) = .9544 − .6826 = .2718.

45. With µ = .500 inches, the acceptable range for the diameter is between .496 and .504 inches, so unacceptable bearings will have diameters smaller than .496 or larger than .504. The new distribution has µ = .499 and σ = .002.
P(X < .496 or X > .504) = P(Z < (.496 − .499)/.002) + P(Z > (.504 − .499)/.002) = P(Z < −1.5) + P(Z > 2.5) = Φ(−1.5) + [1 − Φ(2.5)] = .073. So, 7.3% of the bearings will be unacceptable.

46.
a. P(67 < X < 75) = P((67 − 70)/3 < (X − 70)/3 < (75 − 70)/3) = P(−1 < Z < 1.67) = Φ(1.67) − Φ(−1) = .9525 − .1587 = .7938.
b. By the Empirical Rule, c should equal 2 standard deviations. Since σ = 3, c = 2(3) = 6. We can be a little more precise, as in Exercise 42, and use c = 1.96(3) = 5.88.
c. Let Y = the number of acceptable specimens out of 10, so Y ~ Bin(10, p), where p = .7938 from part a. Then E(Y) = np = 10(.7938) = 7.938 specimens.
d. Now let Y = the number of specimens out of 10 that have a hardness of less than 73.84, so Y ~ Bin(10, p), where p = P(X < 73.84) = P(Z < (73.84 − 70)/3) = P(Z < 1.28) = Φ(1.28) = .8997. Then
P(Y ≤ 8) = Σ_{y=0}^{8} C(10, y)(.8997)^y(.1003)^(10−y) = .2651.
You can also compute 1 − P(Y = 9 or 10) with the binomial formula, or round slightly to p = .9 and use the binomial table: P(Y ≤ 8) = B(8; 10, .9) = .265.

47. The stated condition implies that 99% of the area under the normal curve with µ = 12 and σ = 3.5 is to the left of c − 1, so c − 1 is the 99th percentile of the distribution. Since the 99th percentile of the standard normal distribution is z = 2.33, c − 1 = µ + 2.33σ = 20.155, and c = 21.155.

48.
a. By symmetry, P(−1.72 ≤ Z ≤ −.55) = P(.55 ≤ Z ≤ 1.72) = Φ(1.72) − Φ(.55).
b. P(−1.72 ≤ Z ≤ .55) = Φ(.55) − Φ(−1.72) = Φ(.55) − [1 − Φ(1.72)]. No, thanks to the symmetry of the z curve about 0.

49.
a. P(X > 4000) = P(Z > (4000 − 3432)/482) = P(Z > 1.18) = 1 − Φ(1.18) = 1 − .8810 = .1190;
P(3000 < X < 4000) = P((3000 − 3432)/482 < Z < (4000 − 3432)/482) […]
b. […] = Φ(−2.97) + [1 − Φ(3.25)] = .0015 + .0006 = .0021.
c. We will use the conversion 1 lb = 454 g, so 7 lbs = 3178 grams, and we wish to find P(X > 3178) = P(Z > (3178 − 3432)/482) = 1 − Φ(−.53) = .7019.
d. We need the top .0005 and the bottom .0005 of the distribution. Using the z table, both .9995 and .0005 have multiple z values, so we will use a middle value, ±3.295. Then 3432 ± 3.295(482) = 1844 and 5020. The most extreme .1% of all birth weights are less than 1844 g and more than 5020 g.
e. Converting to pounds yields a mean of 7.5595 lbs and a standard deviation of 1.0608 lbs. Then P(X > 7) = P(Z > (7 − 7.5595)/1.0608) = 1 − Φ(−.53) = .7019. This yields the same answer as in part c.

50. We use a normal approximation to the binomial distribution: Let X denote the number of people in the sample of 1000 who can taste the difference, so X ~ Bin(1000, .03). Because μ = np = 1000(.03) = 30 and σ = √(np(1 − p)) = 5.394, X is approximately N(30, 5.394).
a. Using a continuity correction, P(X ≥ 40) = 1 − P(X ≤ 39) = 1 − P(Z ≤ (39.5 − 30)/5.394) = 1 − P(Z ≤ 1.76) = 1 − Φ(1.76) = 1 − .9608 = .0392.
b. 5% of 1000 is 50, and P(X ≤ 50) = P(Z ≤ (50.5 − 30)/5.394) = Φ(3.80) ≈ 1.
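Exercise 50's continuity-corrected normal approximation can be compared against the exact binomial probability. An added Python sketch, not part of the manual:

from scipy.stats import norm, binom
import math

n, p = 1000, .03
mu, sigma = n * p, math.sqrt(n * p * (1 - p))     # 30 and 5.394

approx = norm(mu, sigma).sf(39.5)   # continuity-corrected P(X >= 40), about .0392
exact  = binom(n, p).sf(39)         # exact P(X >= 40), for comparison
print(approx, exact)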
51. P(|X − µ| ≥ σ) = 1 − P(|X − µ| < σ) = 1 − P(µ − σ < X < µ + σ) = 1 − P(−1 ≤ Z ≤ 1) = .3174. Similarly, P(|X − µ| ≥ 2σ) = 1 − P(−2 ≤ Z ≤ 2) = .0456 and P(|X − µ| ≥ 3σ) = .0026. These are considerably less than the bounds 1, .25, and .11 given by Chebyshev.

52.
a. P(20 ≤ X ≤ 30) = P(20 − .5 ≤ X ≤ 30 + .5) = P(19.5 ≤ X ≤ 30.5) = P(−1.1 ≤ Z ≤ 1.1) = .7286.
b. P(X ≤ 30) = P(X ≤ 30.5) = P(Z ≤ 1.1) = .8643, while P(X < 30) = P(X ≤ 29.5) = P(Z < .9) = .8159.

53. p = .5 ⇒ μ = 12.5 and σ² = 6.25; p = .6 ⇒ μ = 15 and σ² = 6; p = .8 ⇒ μ = 20 and σ² = 4. These mean and standard deviation values are used for the normal calculations below.
a. For the binomial calculation, P(15 ≤ X ≤ 20) = B(20; 25, p) − B(14; 25, p).
p     P(15 ≤ X ≤ 20)     P(14.5 ≤ Normal ≤ 20.5)
.5        .212           P(.80 ≤ Z ≤ 3.20) = .2112
.6        .577           P(−.20 ≤ Z ≤ 2.24) = .5668
.8        .573           P(−2.75 ≤ Z ≤ .25) = .5957
b. For the binomial calculation, P(X ≤ 15) = B(15; 25, p).
p     P(X ≤ 15)          P(Normal ≤ 15.5)
.5        .885           P(Z ≤ 1.20) = .8849
.6        .575           P(Z ≤ .20) = .5793
.8        .017           P(Z ≤ −2.25) = .0122
c. For the binomial calculation, P(X ≥ 20) = 1 − B(19; 25, p).
p     P(X ≥ 20)          P(Normal ≥ 19.5)
.5        .002           P(Z ≥ 2.80) = .0026
.6        .029           P(Z ≥ 1.84) = .0329
.8        .617           P(Z ≥ −0.25) = .5987

54. Use the normal approximation to the binomial, with a continuity correction. With p = .10 and n = 200, μ = np = 20 and σ² = npq = 18. So, Bin(200, .10) ≈ N(20, √18).
a. P(X ≤ 30) = Φ((30 + .5 − 20)/√18) = Φ(2.47) = .9932.
b. P(X < 30) = P(X ≤ 29) = Φ((29 + .5 − 20)/√18) = Φ(2.24) = .9875.
c. P(15 ≤ X ≤ 25) = P(X ≤ 25) − P(X ≤ 14) = Φ((25 + .5 − 20)/√18) − Φ((14 + .5 − 20)/√18) = Φ(1.30) − Φ(−1.30) = .9032 − .0968 = .8064.

55. Use the normal approximation to the binomial, with a continuity correction. With p = .75 and n = 500, μ = np = 375 and σ = 9.68. So, Bin(500, .75) ≈ N(375, 9.68).
a. P(360 ≤ X ≤ 400) = P(359.5 ≤ X ≤ 400.5) = P(−1.60 ≤ Z ≤ 2.58) = Φ(2.58) − Φ(−1.60) = .9409.
b. P(X < 400) = P(X ≤ 399.5) = P(Z ≤ 2.53) = Φ(2.53) = .9943.

56. Let z_{1−p} denote the (100p)th percentile of a standard normal distribution. The claim is that the (100p)th percentile of a N(μ, σ) distribution is μ + z_{1−p}σ. To verify this,
P(X ≤ μ + z_{1−p}σ) = P((X − µ)/σ ≤ z_{1−p}) = P(Z ≤ z_{1−p}) = p by definition of z_{1−p}. That establishes μ + z_{1−p}σ as the (100p)th percentile.

57.
a. For any a > 0, F_Y(y) = P(Y ≤ y) = P(aX + b ≤ y) = P(X ≤ (y − b)/a) = F_X((y − b)/a). This, in turn, implies
f_Y(y) = d/dy F_Y(y) = d/dy F_X((y − b)/a) = (1/a)·f_X((y − b)/a).
Now let X have a normal distribution. Applying this rule,
f_Y(y) = (1/(a√(2π)σ))·exp[−((y − b)/a − µ)²/(2σ²)] = (1/(√(2π)aσ))·exp[−(y − b − aµ)²/(2a²σ²)]. This is the pdf of a normal distribution. In particular, from the exponent we can read that the mean of Y is E(Y) = aμ + b and the variance of Y is V(Y) = a²σ². These match the usual rescaling formulas for mean and variance. (The same result holds when a < 0.)
b. Temperature in °F would also be normal, with a mean of 1.8(115) + 32 = 239°F and a variance of 1.8²·2² = 12.96 (i.e., a standard deviation of 3.6°F).

58.
a. P(Z ≥ 1) ≈ .5·exp{−(83 + 351 + 562)/(703 + 165)} = .1587, which matches 1 − Φ(1).
b. P(Z < −3) = P(Z > 3) ≈ .5·exp{−2362/399.3333} = .0013, which matches Φ(−3).
c. P(Z > 4) ≈ .5·exp{−3294/340.75} = .0000317, so P(−4 < Z < 4) = 1 − 2P(Z ≥ 4) ≈ 1 − 2(.0000317) = .999937.
d. P(Z > 5) ≈ .5·exp{−4392/305.6} = .00000029.
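Exercise 57(b)'s conclusion, that 1.8X + 32 is again normal with mean 1.8µ + 32 and standard deviation 1.8σ, is easy to sanity-check by simulation. An illustrative sketch (the sample size and seed are arbitrary choices of mine, not from the manual):

import numpy as np

# If temperature in Celsius is N(115, 2), then 1.8*X + 32 should be approximately N(239, 3.6).
rng = np.random.default_rng(0)
celsius = rng.normal(loc=115, scale=2, size=1_000_000)
fahrenheit = 1.8 * celsius + 32

print(fahrenheit.mean(), fahrenheit.std())   # close to 239 and 3.6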
Section 4.4

59.
a. E(X) = 1/λ = 1.
b. σ = 1/λ = 1.
c. P(X ≤ 4) = 1 − e^(−(1)(4)) = 1 − e^(−4) = .982.
d. P(2 ≤ X ≤ 5) = (1 − e^(−(1)(5))) − (1 − e^(−(1)(2))) = e^(−2) − e^(−5) = .129.

60.
a. P(X ≤ 100) = 1 − e^(−(100)(.01386)) = 1 − e^(−1.386) = .7499.
P(X ≤ 200) = 1 − e^(−(200)(.01386)) = 1 − e^(−2.772) = .9375.
P(100 ≤ X ≤ 200) = P(X ≤ 200) − P(X ≤ 100) = .9375 − .7499 = .1876.
b. First, since X is exponential, µ = 1/λ = 1/.01386 = 72.15 and σ = 72.15. Then
P(X > µ + 2σ) = P(X > 72.15 + 2(72.15)) = P(X > 216.45) = 1 − (1 − e^(−.01386(216.45))) = e^(−3) = .0498.
c. Remember the median is the solution to F(x) = .5. Use the formula for the exponential cdf and solve for x: F(x) = 1 − e^(−.01386x) = .5 ⇒ e^(−.01386x) = .5 ⇒ −.01386x = ln(.5) ⇒ x = −ln(.5)/.01386 = 50.01 m.

61. Note that a mean value of 2.725 for the exponential distribution implies λ = 1/2.725. Let X denote the duration of a rainfall event.
a. P(X ≥ 2) = 1 − P(X < 2) = 1 − P(X ≤ 2) = 1 − F(2; λ) = 1 − [1 − e^(−(1/2.725)(2))] = e^(−2/2.725) = .4800; P(X ≤ 3) = F(3; λ) = 1 − e^(−(1/2.725)(3)) = .6674; P(2 ≤ X ≤ 3) = .6674 − .4800 = .1874.
b. For this exponential distribution, σ = μ = 2.725, so P(X > μ + 2σ) = P(X > 2.725 + 2(2.725)) = P(X > 8.175) = 1 − F(8.175; λ) = e^(−(1/2.725)(8.175)) = e^(−3) = .0498. On the other hand, P(X < μ − σ) = P(X < 2.725 − 2.725) = P(X < 0) = 0, since an exponential random variable is non-negative.

62.
a. Clearly E(X) = 0 by symmetry, so V(X) = E(X²) = ∫_{−∞}^{∞} x²·(λ/2)e^(−λ|x|) dx = λ∫_0^{∞} x²e^(−λx) dx = λ·Γ(3)/λ³ = 2/λ². Solving 2/λ² = V(X) = (40.9)² yields λ = 0.034577.
b. P(|X − 0| ≤ 40.9) = ∫_{−40.9}^{40.9} (λ/2)e^(−λ|x|) dx = ∫_0^{40.9} λe^(−λx) dx = 1 − e^(−40.9λ) = .75688.

63.
a. If a customer's calls are typically short, the first calling plan makes more sense. If a customer's calls are somewhat longer, then the second plan makes more sense, viz. 99¢ is less than 20 min × (10¢/min) = $2 for the first 20 minutes under the first (flat-rate) plan.
b. h₁(X) = 10X, while h₂(X) = 99 for X ≤ 20 and 99 + 10(X − 20) for X > 20. With μ = 1/λ for the exponential distribution, it's obvious that E[h₁(X)] = 10E[X] = 10μ. On the other hand,
E[h₂(X)] = 99 + 10∫_{20}^{∞} (x − 20)λe^(−λx) dx = 99 + (10/λ)e^(−20λ) = 99 + 10μe^(−20/μ).
When μ = 10, E[h₁(X)] = 100¢ = $1.00 while E[h₂(X)] = 99 + 100e^(−2) ≈ $1.13. When μ = 15, E[h₁(X)] = 150¢ = $1.50 while E[h₂(X)] = 99 + 150e^(−4/3) ≈ $1.39. As predicted, the first plan is better when expected call length is lower, and the second plan is better when expected call length is somewhat higher.

64.
a. Γ(6) = 5! = 120.
b. Γ(5/2) = (3/2)·Γ(3/2) = (3/2)·(1/2)·Γ(1/2) = (3/4)√π ≈ 1.329.
c. F(4; 5) = .371 from row 4, column 5 of Table A.4. F(5; 4) = .735 from row 5, column 4 of Table A.4.
d. P(X ≤ 5) = F(5; 7) = .238.
e. P(3 < X < 8) = P(X < 8) − P(X ≤ 3) = F(8; 7) − F(3; 7) = .687 − .034 = .653.

65.
a. From the mean and sd equations for the gamma distribution, αβ = 37.5 and αβ² = (21.6)² = 466.56. Take the quotient to get β = 466.56/37.5 = 12.4416. Then, α = 37.5/β = 37.5/12.4416 = 3.01408….
b. P(X > 50) = 1 − P(X ≤ 50) = 1 − F(50/12.4416; 3.014) = 1 − F(4.0187; 3.014). If we approximate this by 1 − F(4; 3), Table A.4 gives 1 − .762 = .238. Software gives the more precise answer of .237.
c. P(50 ≤ X ≤ 75) = F(75/12.4416; 3.014) − F(50/12.4416; 3.014) = F(6.026; 3.014) − F(4.0187; 3.014) ≈ F(6; 3) − F(4; 3) = .938 − .762 = .176.
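The exponential calculations in Exercises 59–61 map directly onto scipy's expon distribution, which is parameterized by scale = 1/λ. An added sketch (not part of the manual) reproducing the Exercise 60 values:

from scipy.stats import expon

X = expon(scale=1 / .01386)      # exponential with lambda = .01386

print(X.cdf(200) - X.cdf(100))   # P(100 <= X <= 200), about .1876
print(X.sf(216.45))              # P(X > mu + 2*sigma) = e^(-3), about .0498
print(X.ppf(.5))                 # median, about 50.01 m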
66.
a. If X has a gamma distribution with parameters α, β, γ, then Y = X − γ has a gamma distribution with parameters α and β (i.e., threshold 0). So, write X = Y + γ, from which E(X) = E(Y) + γ = αβ + γ and SD(X) = SD(Y) = √(αβ²). For the given values, E(X) = 12(7) + 40 = 124 (10⁸ m³) and SD(X) = √(12(7)²) = 24.25 (10⁸ m³).
b. Use the same threshold-shift idea as in part a: P(100 ≤ X ≤ 150) = P(60 ≤ X − 40 ≤ 110) = P(60 ≤ Y ≤ 110) = F(110/7; 12) − F(60/7; 12). To evaluate these functions or the equivalent integrals requires software; the answer is .8582 − .1575 = .7007.
c. P(X > µ + σ) = P(X > 148.25) = P(X − 40 > 108.25) = P(Y > 108.25) = 1 − F(108.25/7; 12). From software, the answer is .1559.
d. Set .95 = P(X ≤ x) = P(Y ≤ x − 40) = F((x − 40)/7; 12). From software, the 95th percentile of the standard gamma distribution with α = 12 is 18.21, so (x − 40)/7 = 18.21, or x = 167.47 (10⁸ m³).

67. Notice that µ = 24 and σ² = 144 ⇒ αβ = 24 and αβ² = 144 ⇒ β = 144/24 = 6 and α = 24/β = 4.
a. P(12 ≤ X ≤ 24) = F(4; 4) − F(2; 4) = .424.
b. P(X ≤ 24) = F(4; 4) = .567, so while the mean is 24, the median is less than 24, since P(X ≤ 24) > .5.
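The incomplete gamma cdf F(x; α) tabulated in Table A.4, and the threshold (shifted) gamma of Exercise 66, are both available through scipy's gamma distribution. An illustrative sketch, not part of the original manual:

from scipy.stats import gamma

# Exercise 67: standard gamma cdf values, shape a = alpha.
print(gamma.cdf(4, a=4) - gamma.cdf(2, a=4))   # F(4; 4) - F(2; 4), about .424

# Exercise 66: alpha = 12, beta = 7, threshold gamma = 40 (scipy's loc).
X = gamma(a=12, scale=7, loc=40)
print(X.cdf(150) - X.cdf(100))                 # about .7007
print(gamma.ppf(.95, a=12))                    # about 18.21, the standard-gamma 95th percentile used in part (d)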