Calculus And Statistical Analysis AM121 Past Paper PDF
Document Details
Uploaded by CommendableCentaur
Chitkara University
Summary
This document is lecture/study material for Calculus and Statistical Analysis (AM121). It introduces matrices, their applications, types of matrices, and matrix operations, and also touches on solving linear equations and on linear combinations of quantum states in physics. It is not a past paper.
Full Transcript
CALCULUS AND STATISTICAL ANALYSIS AM121 Department of Applied Sciences, Chitkara University, Punjab
Matrix
⚫ A matrix is a collection of numbers arranged into a fixed number of rows and columns.
Applications of Matrices
⚫ Computer graphics – 4×4 transformation and rotation matrices are commonly used in computer graphics.
⚫ Graph theory – the adjacency matrix of a finite graph is a basic notion of graph theory.
⚫ Cryptography – one type of code, which is extremely difficult to break, uses a large matrix to encode a message. The receiver decodes it using the inverse of the matrix. The sender's matrix is called the encoding matrix, and its inverse, used by the receiver, is called the decoding matrix.
⚫ Solving linear equations – using row reduction, Cramer's rule (determinants), or the inverse matrix.
⚫ Linear combinations of quantum states in physics – the first model of quantum mechanics, by Heisenberg in 1925, represented the theory's operators by infinite-dimensional matrices acting on quantum states. This is also referred to as matrix mechanics.
Types of Matrices
⚫ Row matrix – a matrix with only one row, e.g. [1 2 3].
⚫ Column matrix – a matrix with only one column.
⚫ Square matrix – the number of rows equals the number of columns.
⚫ Null matrix – all elements are equal to 0.
⚫ Identity matrix – a square matrix whose main-diagonal entries are all 1 and whose other entries are all 0.
⚫ Diagonal matrix – a square matrix in which all elements except those on the leading diagonal are 0.
⚫ Scalar matrix – a diagonal matrix in which all the main-diagonal elements are equal.
⚫ Triangular matrix – there are two types: upper triangular (all elements below the diagonal are 0) and lower triangular (all elements above the diagonal are 0).
⚫ Transpose – the matrix obtained by interchanging the rows and columns of a matrix A; it is denoted A′ or Aᵀ.
⚫ Symmetric matrix – a square matrix A is symmetric if it is equal to its transpose (A′ = A).
⚫ Skew-symmetric matrix – a square matrix A with A′ = −A.
Elementary Matrix Operations. Elementary Operations.
Determinant of a Matrix
Find the determinant of the matrix:
Rank of a Matrix
Let A be any m×n matrix. Then A consists of n column vectors a₁, a₂, …, aₙ, which are m-vectors.
DEFINITION: The rank of A is the maximal number of linearly independent column vectors in A, i.e. the maximal number of linearly independent vectors among {a₁, a₂, …, aₙ}. If A = 0, then the rank of A is 0. We write rk(A) for the rank of A. Note that we may compute the rank of any matrix, square or not.
Computing Rank by Various Methods
1. By the determinant method
2. By normal form and echelon form
Determinant Method: rank of a 2×2 matrix. Practice question: Rank(A) = 2.
Row Echelon Form: Examples 1–3 (reducing to the desired echelon form). Normal Form: Examples 1–3.
Inverse of a Matrix (Gauss–Jordan Method)
Example 1: Find the inverse of the given matrix by the Gauss–Jordan method. Question 2.
Eigenvalues and Eigenvectors
Definition: Let A be an n × n matrix. A scalar λ is called an eigenvalue of A if there exists a nonzero vector x in Rⁿ such that Ax = λx. The vector x is called an eigenvector corresponding to λ.
Computation of Eigenvalues and Eigenvectors
Let A be an n × n matrix with eigenvalue λ and corresponding eigenvector x. Thus Ax = λx.
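The defining relation Ax = λx can be checked numerically. The matrix below is a hypothetical 2×2 example chosen for illustration (the slides' own example matrices did not survive the transcript); it has eigenvalue λ = 2 with eigenvector (−1, 1):

```python
# Verify A x = λ x by direct multiplication (pure Python).
# The matrix A is a hypothetical illustration, not from the slides.
A = [[-1.0, -3.0],
     [0.0,  2.0]]

def matvec(M, v):
    """Multiply a 2x2 matrix by a 2-vector."""
    return [M[0][0]*v[0] + M[0][1]*v[1],
            M[1][0]*v[0] + M[1][1]*v[1]]

# x = (-1, 1) is an eigenvector for λ = 2, so A x should equal 2 x.
x = [-1.0, 1.0]
lam = 2.0
Ax = matvec(A, x)
print(Ax)                      # [-2.0, 2.0]
print([lam*x[0], lam*x[1]])    # [-2.0, 2.0]
```

Since A here is upper triangular, its eigenvalues (2 and −1) can also be read directly off the diagonal, which matches property 2 listed later in these notes.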
This equation may be written Ax − λx = 0, giving (A − λIₙ)x = 0. Solving the equation |A − λIₙ| = 0 for λ leads to all the eigenvalues of A. On expanding the determinant |A − λIₙ|, we get a polynomial in λ. This polynomial is called the characteristic polynomial of A, and the equation |A − λIₙ| = 0 is called the characteristic equation of A.
Example 1: Find the eigenvalues and eigenvectors of the matrix A.
Let us first derive the characteristic polynomial of A, then solve the characteristic equation. The eigenvalues of A are 2 and −1. The corresponding eigenvectors are found by using these values of λ in the equation (A − λI₂)x = 0; there are many eigenvectors corresponding to each eigenvalue.
⚫ For λ = 2 we solve the equation (A − 2I₂)x = 0 for x. The matrix A − 2I₂ is obtained by subtracting 2 from the diagonal elements of A. This leads to a system of equations giving x₁ = −x₂, so the solutions are x₁ = −r, x₂ = r, where r is a scalar. Thus the eigenvectors of A corresponding to λ = 2 are the nonzero vectors of the form (−r, r).
Example 2. Example 3.
Example 4: Find the eigenvalues and eigenvectors of the matrix A.
Solution: The matrix A − λI₃ is obtained by subtracting λ from the three diagonal elements of A. The characteristic polynomial of A is |A − λI₃|; using row and column operations to simplify the determinant, we solve the characteristic equation of A. The eigenvalues of A are 10 and 1. The corresponding eigenvectors are found by using these values of λ in the equation (A − λI₃)x = 0.
⚫ λ₁ = 10: The solutions to the resulting system of equations are x₁ = 2r, x₂ = 2r, x₃ = r, where r is a scalar. Thus the eigenspace of λ₁ = 10 is the one-dimensional space of vectors of the form (2r, 2r, r).
⚫ λ₂ = 1: Let λ = 1 in (A − λI₃)x = 0. The solutions to the resulting system of equations can be shown to be x₁ = −s − t, x₂ = s, x₃ = 2t, where s and t are scalars. Thus the eigenspace of λ₂ = 1 is the space of vectors of the form (−s − t, s, 2t).
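The matrix of Example 4 is missing from the transcript. The 3×3 matrix below is an assumed stand-in that has exactly the stated eigenvalues (10 and 1) and eigenvectors, so the computation can be verified by checking A v = λ v directly:

```python
# Assumed stand-in for the missing Example 4 matrix: it has eigenvalues
# 10 and 1 with the eigenvectors described in the text.
A = [[5, 4, 2],
     [4, 5, 2],
     [2, 2, 2]]

def matvec(M, v):
    """Multiply a 3x3 matrix by a 3-vector."""
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

# λ = 10 with eigenvector (2, 2, 1):
print(matvec(A, [2, 2, 1]))   # [20, 20, 10] = 10 * (2, 2, 1)

# λ = 1 with basis eigenvectors (-1, 1, 0) and (-1, 0, 2)
# (i.e. (x1, x2, x3) = (-s - t, s, 2t) with (s, t) = (1, 0) and (0, 1)):
print(matvec(A, [-1, 1, 0]))  # [-1, 1, 0]
print(matvec(A, [-1, 0, 2]))  # [-1, 0, 2]
```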
Separating the parameters s and t, we can write these vectors as s(−1, 1, 0) + t(−1, 0, 2). Thus the eigenspace of λ = 1 is a two-dimensional subspace of R³ with basis {(−1, 1, 0), (−1, 0, 2)}.
If an eigenvalue occurs as a k-times repeated root of the characteristic equation, we say that it has multiplicity k. Thus λ = 10 has multiplicity 1, while λ = 1 has multiplicity 2 in this example.
Properties of Eigenvalues and Eigenvectors
1. A square matrix A and its transpose have the same eigenvalues.
2. The eigenvalues of a diagonal or triangular matrix are its diagonal elements.
3. An n × n matrix is invertible if and only if it does not have 0 as an eigenvalue.
4. If a matrix A has eigenvalue λ with corresponding eigenvector x, then for any k = 1, 2, …, Aᵏ has eigenvalue λᵏ corresponding to the same eigenvector x.
5. If A is an invertible matrix with eigenvalue λ corresponding to eigenvector x, then A⁻¹ has eigenvalue λ⁻¹ corresponding to the same eigenvector x.
Diagonalization of a Matrix
Definition: A square matrix A is said to be diagonalizable if there exists a matrix C such that D = C⁻¹AC is a diagonal matrix.
Solving Systems of Linear Equations by Matrix Methods
Objectives:
1. Define a matrix.
2. Write the augmented matrix for a system.
3. Use row operations to solve a system of equations by echelon form.
4. Check for consistency using the rank method.
5. Find the values of the variables if the system is consistent.
Write the augmented matrix for a system: the coefficients of the unknowns, augmented by the column of constants.
Solutions of Linear Systems: Existence, Uniqueness
Fundamental Theorem for Linear Systems
(a) Existence. A linear system (1) of m equations in n unknowns x₁, …, xₙ is consistent, that is, has solutions, if and only if the coefficient matrix A and the augmented matrix Ã have the same rank. It is inconsistent, that is, has no solutions, if and only if A and Ã do not have the same rank.
(b) Uniqueness. The system (1) has precisely one solution if and only if this common rank r of A and Ã equals n.
(c) Infinitely many solutions. If this common rank r is less than n, the system (1) has infinitely many solutions. All of these solutions are obtained by determining r suitable unknowns (whose submatrix of coefficients must have rank r) in terms of the remaining n − r unknowns, to which arbitrary values can be assigned.
Using row operations to solve a system with three variables: reduce the augmented matrix to echelon form; once a value for z is obtained, back-substitution gives the remaining variables. Recognizing inconsistent or dependent systems.
Functions of Two Variables
Definition: A function of several variables is called a function of two variables if its domain is a set of points in the plane: z = F(x, y), where z is called the dependent variable.
Limit: working rule to find the limit. Practice.
Continuity: working rule for continuity at a point (a, b). Practice.
Partial Derivatives of First Order
If f(x, y) is a function of two variables, the first-order partial derivative of f with respect to x at a point (x, y) is
∂f/∂x = lim (h→0) [f(x + h, y) − f(x, y)] / h
and the first-order partial derivative of f with respect to y at a point (x, y) is
∂f/∂y = lim (k→0) [f(x, y + k) − f(x, y)] / k.
∂f/∂x and ∂f/∂y are called the first-order partial derivatives of f.
Example: Compute the first-order partial derivatives of f(x, y) = 3x²y − 2 + y³.
fx = 6xy, fy = 3x² + 3y².
Example: Compute the first-order partial derivatives of f(x, y) = 2x + 3y − 4.
fx = 2, fy = 3.
Partial Derivatives of Higher Order
Definition: If z = f(x, y) is a function of two variables, then ∂z/∂x and ∂z/∂y are also functions of two variables, and their partials can be taken. Hence we can differentiate with respect to x and y again and find the four second-order partial derivatives:
1. Take the partial with respect to x, and then with respect to x again: ∂²z/∂x² = ∂²f/∂x² = fxx.
2. Take the partial with respect to x, and then with respect to y: ∂²z/∂y∂x = ∂²f/∂y∂x = fxy.
3. Take the partial with respect to y, and then with respect to x: ∂²z/∂x∂y = ∂²f/∂x∂y = fyx.
4. Take the partial with respect to y, and then with respect to y again: ∂²z/∂y² = ∂²f/∂y² = fyy.
Example: Find the second-order partial derivatives of the function f(x, y) = 3x²y + x ln y.
fx = 6xy + ln y, fy = 3x² + x(1/y);
fxx = 6y, fyy = −x/y², fxy = 6x + 1/y, fyx = 6x + 1/y.
Example: Find the second-order partial derivatives of the function f(x, y) = 3x − 2y².
fx = 3, fy = −4y; fxx = 0, fyy = −4, fxy = 0, fyx = 0.
Example: Find the higher-order partial derivatives of the function f(x, y) = e^(xy²).
fx = y²e^(xy²), fy = 2xy·e^(xy²);
fxx = y⁴e^(xy²), fxy = fyx = 2y(1 + xy²)e^(xy²), fyy = 2x(1 + 2xy²)e^(xy²).
Homogeneous Functions. Euler's Theorem on Homogeneous Functions. Practice.
Tangent and Normal Plane to the Surface. Total Derivative: Case I, Case II.
Error Determination
Example: The diameter and altitude of a can in the shape of a right circular cylinder are measured as 4 cm and 6 cm respectively. The possible error in each measurement is 0.1 cm. Find, approximately, the maximum possible error in the values computed for the volume and the lateral surface.
Jacobian. Properties of the Jacobian. Practice.
Taylor's Series of Two Variables. Practice.
Maxima and Minima of Functions of Two Variables: maximum value, minimum value.
Lagrange's Method of Undetermined Multipliers
A method to find the local minima and maxima of a function of two variables subject to conditions or constraints on the variables involved. Suppose that, subject to the constraint g(x, y) = 0, the function z = f(x, y) has a local maximum or a local minimum at the point (x₀, y₀).
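Before continuing with Lagrange's method, the first-order partial derivatives worked above can be spot-checked with central finite differences. The sample point (2.0, 3.0) is an arbitrary choice for illustration:

```python
import math

# Spot-check the partial derivatives of f(x, y) = 3x^2 y + x ln y from the
# example above, using central finite differences.
def f(x, y):
    return 3 * x**2 * y + x * math.log(y)

def fx(x, y, h=1e-6):
    """Numerical ∂f/∂x via a central difference."""
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

def fy(x, y, h=1e-6):
    """Numerical ∂f/∂y via a central difference."""
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

x0, y0 = 2.0, 3.0  # arbitrary sample point
# Analytic results from the text: fx = 6xy + ln y, fy = 3x^2 + x/y.
print(abs(fx(x0, y0) - (6*x0*y0 + math.log(y0))) < 1e-4)   # True
print(abs(fy(x0, y0) - (3*x0**2 + x0/y0)) < 1e-4)          # True
```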
Form the function F(x, y, λ) = f(x, y) + λg(x, y). Then there is a value of λ such that (x₀, y₀, λ) is a solution of the system of equations
∂F/∂x = ∂f/∂x + λ ∂g/∂x = 0 … (1)
∂F/∂y = ∂f/∂y + λ ∂g/∂y = 0 … (2)
∂F/∂λ = g(x, y) = 0 … (3)
provided all the partial derivatives exist.
Step 1: Write the function to be maximized (or minimized) and the constraint in the form: find the maximum (or minimum) value of z = f(x, y) subject to the constraint g(x, y) = 0.
Step 2: Construct the function F: F(x, y, λ) = f(x, y) + λg(x, y).
Step 3: Set up the system of equations ∂F/∂x = 0, ∂F/∂y = 0, ∂F/∂λ = g(x, y) = 0.
Step 4: Solve the system of equations for x, y and λ.
Step 5: Test each solution (x₀, y₀, λ) to determine whether it gives a maximum or a minimum point. Find D* = Fxx·Fyy − (Fxy)².
If D* > 0 and Fxx < 0, the point is a maximum; if D* > 0 and Fxx > 0, the point is a minimum; if D* ≤ 0, the test is inconclusive.
Step 6: Evaluate z = f(x, y) at each solution (x₀, y₀) found in Step 4.
Practice.
Curve Tracing. Multiple Integration (Practice). Change of Order of Integration. Triple Integration. Types of Discontinuity.
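The six steps of Lagrange's method can be illustrated end to end on a small worked example of my own choosing (the slides' practice problems did not survive the transcript): minimize f(x, y) = x² + y² subject to g(x, y) = x + y − 2 = 0.

```python
# Worked Lagrange-multiplier example (assumed, not from the slides):
# minimize f(x, y) = x^2 + y^2 subject to g(x, y) = x + y - 2 = 0.
#
# Step 2: F(x, y, λ) = x^2 + y^2 + λ(x + y - 2)
# Step 3: Fx = 2x + λ = 0, Fy = 2y + λ = 0, Fλ = x + y - 2 = 0
# Step 4: Fx = Fy implies x = y; the constraint then gives x = y = 1, λ = -2.
x0, y0, lam = 1.0, 1.0, -2.0

# Check that (x0, y0, λ) really solves the system of Step 3:
print(2*x0 + lam)      # 0.0  (Fx = 0)
print(2*y0 + lam)      # 0.0  (Fy = 0)
print(x0 + y0 - 2)     # 0.0  (constraint g = 0)

# Step 5: second-derivative test with D* = Fxx*Fyy - (Fxy)^2.
Fxx, Fyy, Fxy = 2.0, 2.0, 0.0
D_star = Fxx * Fyy - Fxy**2
print(D_star > 0 and Fxx > 0)  # True -> minimum point

# Step 6: the constrained minimum value is f(1, 1) = 2.
print(x0**2 + y0**2)   # 2.0
```

Here the test is conclusive because D* = 4 > 0 with Fxx > 0, so (1, 1) is a constrained minimum.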