Numerical Methods in Engineering - B.Tech (EE) PDF
Document Details
Gati Shakti Vishwavidyalaya
P. Danumjaya
Summary
This document covers numerical methods in engineering, specifically focusing on error analysis. It details floating-point representation and related concepts, providing examples and explanations. The document is suitable for undergraduate engineering students.
Full Transcript
Numerical Methods in Engineering
B.Tech (EE), Semester III
P. Danumjaya
Gati Shakti Vishwavidyalaya

Introduction

A real number x can have infinitely many digits, but a digital calculating device can hold only a finite number of digits. Therefore, after a finite number of digits (depending on the capacity of the calculating device), the rest should be discarded in some sense. In this way, the representation of the real number x on a computing device is only approximate. Although the omitted part of x is very small in value, this approximation can lead to a considerably large error in the numerical computation.

Floating-Point Form

On a computer, real numbers are represented in floating-point form. Let x be a non-zero real number. An n-digit floating-point number in base β has the form

fl(x) = (-1)^s \times (.d_1 d_2 \cdots d_n)_\beta \times \beta^e,   (1)

where

(.d_1 d_2 \cdots d_n)_\beta = \frac{d_1}{\beta} + \frac{d_2}{\beta^2} + \cdots + \frac{d_n}{\beta^n}

is a β-fraction called the mantissa, s = 0 or 1 is called the sign, and e is an integer called the exponent. When β = 2, the floating-point representation (1) is called the binary floating-point representation, and when β = 10, it is called the decimal floating-point representation.

Normalization

A floating-point number is said to be normalized if d_1 ≠ 0. The number zero is represented as (.00 \cdots 0)_\beta \times \beta^0.

Example 1

Express the following real numbers in normalized decimal floating-point representation:
1. x = 7.2567
2. x = -0.008672.

Overflow and Underflow

The exponent e is limited to a range m < e < M. During a calculation, if some computed number has an exponent e > M, we say the memory overflows; if e < m, we say the memory underflows. In the case of overflow, the computer will usually produce meaningless results or simply print the symbol NaN, meaning that the quantity obtained from such a calculation is not a number. The symbol ∞ is also displayed as NaN on some computers. Underflow is less serious because in this case the computer will simply treat the number as zero.

Chopping and Rounding a number

Any real number x can be represented exactly as

x = (-1)^s \times (.d_1 d_2 \cdots d_n d_{n+1} \cdots)_\beta \times \beta^e, with d_1 ≠ 0.

Let us denote the n-digit approximation of x by fl(x). There are two ways to produce fl(x) from x, as defined below.

The chopped machine approximation of x is given by

fl(x) = (-1)^s \times (.d_1 d_2 \cdots d_n)_\beta \times \beta^e.

The rounded machine approximation of x is given by

fl(x) = (-1)^s \times (.d_1 d_2 \cdots d_n)_\beta \times \beta^e,         if 0 \le d_{n+1} < \beta/2,
fl(x) = (-1)^s \times (.d_1 d_2 \cdots (d_n + 1))_\beta \times \beta^e,   if \beta/2 \le d_{n+1} < \beta.
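For concreteness, the chopping and rounding rules above can be mimicked in a short Python sketch for base β = 10 using the standard decimal module; the helper name fl and the choice of Decimal rounding modes are my own illustration, not part of the notes.

from decimal import Decimal, ROUND_DOWN, ROUND_HALF_UP

def fl(x, n, rounding=ROUND_DOWN):
    """n-digit decimal (beta = 10) approximation fl(x).

    ROUND_DOWN drops d_{n+1}, d_{n+2}, ... (chopping);
    ROUND_HALF_UP increases d_n by one when d_{n+1} >= 5 (rounding).
    """
    d = Decimal(str(x)).normalize()
    sign, digits, exponent = d.as_tuple()
    # Quantize to the place value of digit d_n so that exactly n
    # significant digits of the mantissa are kept.
    place = Decimal(1).scaleb(len(digits) + exponent - n)
    return d.quantize(place, rounding=rounding)

print(fl("7.2567", 3))                    # 7.25   (chopped)
print(fl("7.2567", 3, ROUND_HALF_UP))     # 7.26   (rounded)
print(fl("-0.008672", 2, ROUND_HALF_UP))  # -0.0087

Working in Decimal rather than with Python floats avoids the binary round-off that would otherwise creep into a demonstration of decimal digit manipulation.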
Definition of errors

The absolute error is defined as the difference between the true value and the approximate value:

absolute error = true value - approximate value.

Thus, the error in fl(x) is

absolute error of fl(x) = x - fl(x).

The absolute error is often simply called the error.

Note 1

The absolute error may not reflect reality. A student who gives 995 correct answers out of 1000 problems is certainly better than one who gives 95 correct answers out of 100 problems, although both errors are 5.

Relative error

A more realistic error measurement is the relative error, defined as

relative error = absolute error / true value.

For example, if x ≠ 0, then

relative error = (x - fl(x)) / x.

If we denote the relative error in fl(x) by ε > 0, then we have fl(x) = (1 - ε)x, where x is a real number.

Significant digits

In place of the relative error, we often use the concept of significant digits. If x* is an approximation to x, then we say that x* approximates x to r significant β-digits if

|x - x^*| \le \frac{1}{2} \beta^{s-r+1},

with s the largest integer such that \beta^s \le |x|.

Example 2

Let x = 0.33333 and x* = 0.333; then the error is |x - x*| = 0.00033. We say that x* has three significant digits with respect to x. Put very simply, the number of leading non-zero digits of x* that are correct relative to the corresponding digits in the true value x is called the number of significant digits in x*.

Theorem

Let fl(x) be the n-digit base-β floating-point representation of a real number x, and let

ε = (x - fl(x)) / x.

Then
1. ε \le \beta^{-n+1}, if chopping is used,
2. ε \le \frac{1}{2} \beta^{-n+1}, if rounding is used.

Propagation of Errors

Propagated error in the arithmetic operations (+, -, *, /) occurs because a computer works with approximate values of numbers held to finitely many digits. Let x and y be the true (exact) values, and let x* and y* be the corresponding approximate values. Then we have

x = x* + ε,   y = y* + η.

Propagated Relative Errors

1. The propagated relative error in multiplication: r_{xy} ≈ r_x + r_y, for |r_x|, |r_y| ≪ 1, where r_x, r_y and r_{xy} denote the relative errors in x*, y* and x*y*, respectively.
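As a quick numerical check of this multiplication rule, here is a minimal Python sketch; the helper name rel_err and the sample values are illustrative choices, not taken from the notes.

def rel_err(true_value, approx):
    """Relative error (x - x*) / x of an approximation x* to x."""
    return (true_value - approx) / true_value

# True values and 4-digit chopped approximations (illustrative numbers).
x, x_star = 1.0 / 3.0, 0.3333
y, y_star = 2.0 / 7.0, 0.2857

r_x = rel_err(x, x_star)
r_y = rel_err(y, y_star)
r_xy = rel_err(x * y, x_star * y_star)

print(f"r_x       = {r_x:.3e}")
print(f"r_y       = {r_y:.3e}")
print(f"r_x + r_y = {r_x + r_y:.3e}")
print(f"r_xy      = {r_xy:.3e}")  # agrees with r_x + r_y to leading order

Both relative errors here are of order 10^-4, so the neglected cross term r_x r_y in the exact relation r_{xy} = r_x + r_y - r_x r_y is far smaller than either term, and the approximation is visibly accurate.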