Numerical Computing (CS-2008) Past Paper PDF May 2024
Document Details
Uploaded by ReverentTin
National University of Computer and Emerging Sciences
2024
Summary
This document is a past paper for Numerical Computing (CS-2008) from the National University of Computer and Emerging Sciences (May 2024). It contains questions on LU decomposition, the Gauss-Seidel method, and other numerical computing topics.
Full Transcript
National University of Computer and Emerging Sciences, Islamabad Campus
Numerical Computing (CS-2008) Final Exam
Date: May 21, 2024
Total Time (Hrs): 3
Total Marks: 84
Total Questions: 4
Course Instructors: Mukhtar Ullah, Muhammad Ali, Imran Ashraf, Almas Khan

Roll No ______  Section ______  Student Signature ______

Attempt all the questions. Use the answer sheet to answer all questions.
DO NOT WRITE BELOW THIS LINE

Question # 1 [Marks = 16]

(a) Perform LU factorization by hand to write matrices P, L, and U for the matrix: (8)

    A = [  1  -2   0
          -3  -2   1
           0  -2   8 ]

Solution: P is the identity matrix, since no row swapping is performed.

(b) Write Python code for LU decomposition with partial pivoting for a general matrix A to show P, L, and U. (8)

Python Code:

    import numpy as np

    def lu_decomposition_with_pivoting(A):
        n = A.shape[0]
        L = np.zeros((n, n))
        U = A.copy()
        P = np.eye(n)

        for i in range(n):
            # Partial pivoting: pick the largest pivot candidate in column i
            max_row = np.argmax(np.abs(U[i:n, i])) + i
            if i != max_row:
                # Swap rows in U
                U[[i, max_row], :] = U[[max_row, i], :]
                # Swap rows in P
                P[[i, max_row], :] = P[[max_row, i], :]
                if i > 0:
                    # Swap rows in L, but only the first i columns
                    L[[i, max_row], :i] = L[[max_row, i], :i]

            # Compute L and U: eliminate the entries below the pivot
            for j in range(i + 1, n):
                L[j, i] = U[j, i] / U[i, i]
                U[j, i:] -= L[j, i] * U[i, i:]

        np.fill_diagonal(L, 1)
        return P, L, U
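As a quick check (not required by the question), the function can be run on the matrix from part (a); with partial pivoting the factors should satisfy PA = LU. A minimal usage sketch, assuming the function above is defined in the same script:

    A = np.array([[1., -2., 0.],
                  [-3., -2., 1.],
                  [0., -2., 8.]])
    P, L, U = lu_decomposition_with_pivoting(A)
    # P @ A should reproduce L @ U up to rounding error
    print(np.allclose(P @ A, L @ U))  # expected: True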
Question # 2 [Marks = 18]

Consider the following data:

    A = [ 10  -1   0
          -1  10  -1
           0  -1  10 ] ,   b = (4, 8, 20)ᵀ ,   x⁽⁰⁾ = (1, 1, 1)ᵀ

(a) Using the Gauss-Seidel iterative method, approximate the solution of the linear system Ax = b up to two iterations, i.e., compute x⁽¹⁾ and x⁽²⁾. (6)

Solution

(b) Calculate ∥x⁽²⁾ − x⁽¹⁾∥₂ (4)

(c) Write the missing code in the following function (on the answer sheet): (8)

Solution

    import numpy as np
    import numpy.linalg as npl

    def gauss_seidel(A, b, x, tol=1.e-5, maxit=100):
        n = len(b)
        err = 1.0
        iters = 0

        # Initialize the solution with the initial guess
        xnew = np.zeros_like(x)
        # Extract the lower triangular part of A
        M = np.tril(A)
        # Construct the upper triangular part of A
        U = A - M

        while (err > tol and iters < maxit):
            iters += 1
            # Compute the new approximation
            xnew = np.dot(npl.inv(M), b - np.dot(U, x))
            # Estimate convergence
            err = npl.norm(xnew - x)
            x = np.copy(xnew)
        return x
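As a quick check (not required by the question), the completed function can be run on the data from this question. A minimal usage sketch, assuming gauss_seidel and the imports above are in scope; since A is strictly diagonally dominant, Gauss-Seidel is guaranteed to converge:

    A = np.array([[10., -1., 0.],
                  [-1., 10., -1.],
                  [0., -1., 10.]])
    b = np.array([4., 8., 20.])
    x0 = np.array([1., 1., 1.])
    # Iterates until ||xnew - x|| falls below tol or maxit is reached
    print(gauss_seidel(A, b, x0))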
Question # 3 [Marks = 20]

(a) Consider the following three vectors in R³:

    x1 = (1, 2, 0)ᵀ ,   x2 = (8, 1, -6)ᵀ ,   x3 = (0, 0, 1)ᵀ

(i) Apply the Gram-Schmidt process to generate orthogonal vectors u1, u2, u3. (9)
(ii) Using u1, u2 and u3, compute the orthonormal vectors v1, v2 and v3. (3)

Figure 1: Solution (a),(b)
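A quick numerical cross-check for part (a), not required by the question: numpy.linalg.qr performs an equivalent orthonormalization, so the columns of Q should agree with v1, v2, v3 up to sign. A minimal sketch, assuming NumPy is available:

    import numpy as np

    # Stack x1, x2, x3 as the columns of a 3x3 matrix
    X = np.column_stack(([1., 2., 0.], [8., 1., -6.], [0., 0., 1.]))
    Q, R = np.linalg.qr(X)
    print(Q)        # columns match v1, v2, v3 up to a sign flip
    print(Q.T @ Q)  # expected: 3x3 identity, up to rounding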
(b) Comprehend the following piece of Python code, which contains comments numbered from 1 to 10. This numbering is done so that you provide only the comments on the answer sheet (without reproducing the source code) to save time. Provide the missing comments with their comment numbers. (8)

    import imageio
    import numpy as np
    import numpy.linalg as npl
    import matplotlib.pyplot as plt

    # Comment 1: Read the image into the array photo
    photo = imageio.imread("Newton.jpg")

    # Comment 2: extract the Red, Green and Blue channels into separate matrices
    Red = photo[:, :, 0]
    Green = photo[:, :, 1]
    Blue = photo[:, :, 2]

    # Comment 3: perform SVD on each channel to get the corresponding U, S and V components
    U_r, S_r, V_r = npl.svd(Red)
    U_g, S_g, V_g = npl.svd(Green)
    U_b, S_b, V_b = npl.svd(Blue)

    # Comment 4: set the number of singular values to be used
    k = 100

    # Comment 5: perform compression by extracting only k dimensions from each component
    U_r_c = U_r[:, 0:k]; V_r_c = V_r[0:k, :]; S_r_c = np.diag(S_r[0:k])
    U_g_c = U_g[:, 0:k]; V_g_c = V_g[0:k, :]; S_g_c = np.diag(S_g[0:k])
    U_b_c = U_b[:, 0:k]; V_b_c = V_b[0:k, :]; S_b_c = np.diag(S_b[0:k])

    # Comment 6: compute each channel back by using the compressed components
    comp_img_r = np.dot(U_r_c, np.dot(S_r_c, V_r_c))
    comp_img_g = np.dot(U_g_c, np.dot(S_g_c, V_g_c))
    comp_img_b = np.dot(U_b_c, np.dot(S_b_c, V_b_c))

    # Comment 7: zero-initialize the result matrix which represents the computed image
    row, col = photo.shape[0], photo.shape[1]
    comp_img = np.zeros((row, col, 3))

    # Comment 8: add the Red, Green and Blue channels back into the single matrix representing the computed image
    comp_img[:, :, 0] = comp_img_r
    comp_img[:, :, 1] = comp_img_g
    comp_img[:, :, 2] = comp_img_b

    # Comment 9: clip values less than 0 and greater than 1
    comp_img[comp_img < 0] = 0; comp_img[comp_img > 1] = 1

    # Comment 10: show the comp_img
    plt.imshow(comp_img)
    plt.show()

Question # 4 [Marks = 30]

1. Gaussian elimination is
(a) a direct method with finite precision in theory
(b) a direct method with infinite precision in theory
(c) an iterative method with finite precision in practice
(d) an iterative method with infinite precision in theory

2. LU factorization is
(a) a modification of Gaussian elimination
(b) a decomposition into lower and upper triangular parts of a matrix
(c) a method for forward substitution
(d) a method for backward substitution

3. Pivoting strategies can resolve numerical issues arising in
(a) forward substitution
(b) backward substitution
(c) LU factorization
(d) all of the above

4. Naïve Gaussian elimination cannot be performed with
(a) A1 = [1 0; 3 2]
(b) A2 = [2 1; 0 2]
(c) A3 = [3 1; 2 0]
(d) A4 = [0 1; 3 2]

5. How many cubic polynomials are typically used to construct a cubic spline with n data points?
(a) n
(b) n - 1
(c) 2n - 1
(d) n + 1

6. What is a cubic spline used for?
(a) Interpolation
(b) Regression
(c) Integration
(d) Differentiation

7. In cubic spline interpolation, what condition must the spline satisfy at each data point?
(a) The first derivative must be continuous
(b) The second derivative must be continuous
(c) Both first and second derivatives must be continuous
(d) Only the function value must be continuous

8. Which of the following is a permutation matrix?
(a) P1 = [0 1; 1 0]
(b) P2 = [1 1; 0 1]
(c) P3 = [0 1; 1 1]
(d) P4 = [1 1; 1 0]

9. What is the approximate solution of the inconsistent system Ax = b?
(a) (AᵀA)⁻¹Aᵀb
(b) (AAᵀ)⁻¹Aᵀb
(c) Aᵀb(AᵀA)⁻¹
(d) Aᵀb(AAᵀ)⁻¹

10. What is the primary objective of least squares approximation?
(a) To maximize the sum of the residuals
(b) To minimize the sum of the squares of the residuals
(c) To maximize the correlation between variables
(d) To minimize the mean of the residuals

11. Failure of Cholesky factorization of a matrix A indicates that
(a) A is upper triangular
(b) A is lower triangular
(c) xᵀAx > 0 for all nonzero vectors x
(d) A is not positive definite

12. Which of the following is an iterative method?
(a) Jacobi method
(b) LU factorization
(c) Cholesky factorization
(d) Gaussian elimination

13. Which of the following is a numerically stable method?
(a) Jacobi method
(b) LU factorization
(c) Cholesky factorization
(d) Gauss-Seidel method

14. Which of the following can be used to decide when pivoting is needed?
(a) number of nonzero rows
(b) number of nonzero columns
(c) determinant
(d) condition number

15. The choice between the Jacobi and Gauss-Seidel methods is guided by the observation that
(a) the Jacobi method converges faster
(b) Gauss-Seidel converges faster
(c) Jacobi can be implemented on parallel computers
(d) both b and c

16. Under certain conditions, the pseudo-inverse of a matrix can be the same as its classical inverse.
(a) Always True
(b) Always False
(c) Conditionally True
(d) Conditionally False

17. QR factorization decomposes a matrix A into which two matrices?
(a) LU matrices
(b) Diagonal and triangular matrices
(c) Upper triangular and lower triangular matrices
(d) Orthogonal matrix and upper triangular matrix

18. Which of the following statements is true about the Gram-Schmidt process?
(a) It is computationally expensive for large sets of vectors.
(b) It can only be applied to square matrices.
(c) It can be numerically unstable for ill-conditioned sets of vectors.
(d) It always produces a unique set of orthonormal vectors.

19. Which of the following is NOT an advantage of using an orthonormal set of vectors in numerical computing?
(a) Improved stability in calculations involving the vectors
(b) Easier computation of vector norms
(c) Simpler projection operations onto the subspace spanned by the vectors
(d) Reduced storage requirements compared to the original set

20. Application domains of linear least squares fitting include
(a) Image processing
(b) Financial modeling
(c) Curve fitting
(d) All of the above

21. Which of the following functions in NumPy is used to generate evenly spaced numbers over a specified range?
(a) numpy.linspace
(b) numpy.arange
(c) numpy.random.rand
(d) numpy.zeros

22. Which of the following iterative schemes is always convergent?
(a) Newton-Raphson Method
(b) Fixed Point Iteration
(c) Bisection Method
(d) All of them

23. What function in NumPy can be used to implement the Gram-Schmidt process?
(a) numpy.linalg.orthogonalize
(b) numpy.orthogonalize
(c) numpy.linalg.qr
(d) numpy.linalg.gramschmidt

24. Given x = (1, 2, -3)ᵀ, ∥x∥₂ is
(a) Positive
(b) Negative
(c) Non-negative
(d) Non-positive

25. Given x = (1, 2, -3)ᵀ, ∥x∥₁ is
(a) Positive
(b) Negative
(c) Non-negative
(d) Non-positive

26. Which of the following statements about SVD is true?
(a) It can only be applied to square matrices.
(b) It can be used to determine the rank of any matrix.
(c) The singular values in Σ are always positive.
(d) The columns of U and V are always linearly independent.

27. Provided AᵀA is non-singular, the pseudo-inverse of A exists if A is
(a) a square non-singular matrix only
(b) a square singular matrix only
(c) any rectangular matrix
(d) a rectangular singular matrix only

28. What is the computational complexity of Gaussian elimination for solving a system of linear equations of order n?
(a) O(n)
(b) O(n²)
(c) O(n³)
(d) O(2ⁿ)

29. SVD is particularly useful for image compression because:
(a) It reduces the computational cost of storing pixel values.
(b) It separates image information into components with varying importance.
(c) It directly removes redundant information from the image.
(d) None of the above

30. Which of the following statements is TRUE about square matrices?
(a) A matrix must have all zero entries to be non-invertible.
(b) A matrix is invertible only if it has an equal number of rows and columns.
(c) A matrix is invertible if and only if its columns (or rows) are linearly independent.
(d) A matrix with a determinant of 0 is always invertible.

Useful Formulae and Algorithms

Figure 2: Projection of vector a on v
Figure 3: LU factorization algorithm
Figure 4: Gauss-Seidel iterative method
Figure 5: Gauss-Seidel iterative method
Figure 6: Lagrange polynomials
Figure 7: Hermite Interpolation