Optimality Conditions PDF
Summary
This document contains true or false questions about optimality conditions in unconstrained and constrained optimization. It includes questions regarding functions, gradients, Hessians, Taylor series, quadratic forms, and other related mathematical concepts.
Full Transcript
Optimality Conditions

1. Answer true or false.
a. A function can have several local minimum points in a small neighborhood of x*.
b. A function cannot have more than one global minimum point.
c. The value of the function having a global minimum at several points must be the same.
d. A function defined on an open set cannot have a global minimum.
e. The gradient of a function f(x) at a point is normal to the surface defined by the level surface f(x) = constant.
f. The gradient of a function at a point gives a local direction of maximum decrease in the function.
g. The Hessian matrix of a continuously differentiable function can be asymmetric.
h. The Hessian matrix for a function is calculated using only the first derivatives of the function.
i. Taylor series expansion for a function at a point uses the function value and its derivatives.
j. Taylor series expansion can be written at a point where the function is discontinuous.
k. Taylor series expansion of a complicated function replaces it with a polynomial function at the point.
l. Linear Taylor series expansion of a complicated function at a point is only a good local approximation for the function.
m. A quadratic form can have first-order terms in the variables.
n. For a given x, the quadratic form defines a vector.
o. Every quadratic form has a symmetric matrix associated with it.
p. A symmetric matrix is positive definite if its eigenvalues are nonnegative.
q. A matrix is positive semidefinite if some of its eigenvalues are negative and others are nonnegative.
r. All eigenvalues of a negative definite matrix are strictly negative.
s. The quadratic form appears as one of the terms in Taylor's expansion of a function.
t. A positive definite quadratic form must have positive value for any x ≠ 0.
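Items p through t hinge on classifying a matrix by the signs of its eigenvalues and on the sign of the quadratic form x^T A x. The NumPy sketch below, using made-up matrices rather than any from the exercises, illustrates both checks.

```python
# Minimal sketch: classify a symmetric matrix by its eigenvalue signs and check that a
# positive definite quadratic form x^T A x is positive for every nonzero x.
# The matrices are arbitrary examples, not taken from the exercises.
import numpy as np

def classify(A):
    """Classify a symmetric matrix from the signs of its eigenvalues."""
    eig = np.linalg.eigvalsh(A)            # eigenvalues of a symmetric matrix, ascending
    if np.all(eig > 0):
        return "positive definite"
    if np.all(eig >= 0):
        return "positive semidefinite"
    if np.all(eig < 0):
        return "negative definite"
    if np.all(eig <= 0):
        return "negative semidefinite"
    return "indefinite"

A_pd  = np.array([[2.0, -1.0], [-1.0, 2.0]])   # eigenvalues 1 and 3: positive definite
A_psd = np.array([[1.0,  1.0], [ 1.0, 1.0]])   # eigenvalues 0 and 2: only semidefinite
print(classify(A_pd), "|", classify(A_psd))

# Item t: a positive definite quadratic form is positive for every nonzero x.
rng = np.random.default_rng(0)
for _ in range(1000):
    x = rng.standard_normal(2)
    assert x @ A_pd @ x > 0.0
```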
Section 4.4 Optimality Conditions: Unconstrained Problems
4.21 Answer True or False.
1. If the first-order necessary condition at a point is satisfied for an unconstrained problem, it can be a local maximum point for the function.
2. A point satisfying first-order necessary conditions for an unconstrained function may not be a local minimum point.
3. A function can have a negative value at its maximum point.
4. If a constant is added to a function, the location of its minimum point is changed.
5. If a function is multiplied by a positive constant, the location of the function's minimum point is unchanged.
6. If curvature of an unconstrained function of a single variable at the point x* is zero, then it is a local maximum point for the function.
7. The curvature of an unconstrained function of a single variable at its local minimum point is negative.
8. The Hessian of an unconstrained function at its local minimum point must be positive semidefinite.
9. The Hessian of an unconstrained function at its minimum point is negative definite.
10. If the Hessian of an unconstrained function is indefinite at a candidate point, the point may be a local maximum or minimum.

2. Indicate whether the following statements are True or False:
(a) A regular point of the feasible region is defined as a point where the cost function gradient is independent of the gradients of active constraints.
(b) A point satisfying KKT conditions for a general optimum design problem can be a local maximum for the cost function.
(c) At the optimum point, the number of active independent constraints is always more than the number of design variables.
(d) In the general optimum design problem formulation, the number of independent equality constraints must be less than or equal to the number of design variables.
(e) In the general optimum design problem formulation, the number of inequality constraints cannot exceed the number of design variables.
(f) At the optimum point, the Lagrange multipliers for the "≤ type" inequality constraints must be non-negative.
(g) At the optimum point, the Lagrange multiplier for a "≤ type" constraint can be zero.
(h) While solving an optimum design problem by using the KKT conditions, each case defined by the switching conditions can have multiple solutions.
(i) In optimum design problem formulation, "≥ type" constraints cannot be treated.
(j) Optimum design points for constrained optimization problems render a stationary value of the Lagrange function with respect to design variables.

1. All optimum design algorithms require a starting point to initiate the iterative process.
2. A vector of design changes must be computed at each iteration of the iterative process.
3. The design change calculation can be divided into step size determination and direction finding subproblems.
4. The search direction requires evaluation of the gradient of the cost function.
5. Step size along the search direction is always negative.
6. Step size along the search direction can be zero.
7. In unconstrained optimization, the cost function can increase for an arbitrary small step along the descent direction.
8. A descent direction always exists if the current point is not a local minimum.
9. In unconstrained optimization, a direction of descent can be found at a point where the gradient of the cost function is zero.
10. The descent direction makes an angle of 0-90° with the gradient of the cost function.

10.7 Search Direction Determination: Conjugate Gradient Method
10.66 Answer True or False.
1. The conjugate gradient method usually converges faster than the steepest-descent method.
2. Conjugate directions are computed from gradients of the cost function.
3. Conjugate directions are normal to each other.
4. The conjugate direction at the kth point is orthogonal to the gradient of the cost function at the (k+1)th point when an exact step size is calculated.
5. The conjugate direction at the kth point is orthogonal to the gradient of the cost function at the (k-1)th point.

Section 11.4 Search Direction Determination: Newton's Method
11.9 Answer True or False.
1. In Newton's method, it is always possible to calculate a search direction at any point.
2. The Newton direction is always that of descent for the cost function.
3. Newton's method is convergent starting from any point with a step size of 1.
4. Newton's method needs only gradient information at any point.
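The Newton's method items above turn on two facts: the Newton direction requires solving a linear system with the Hessian, which may be singular, and the resulting direction is not guaranteed to point downhill unless the Hessian is positive definite. A small NumPy sketch with made-up numbers:

```python
# Sketch: the Newton direction p = -H^{-1} grad(f) is undefined when the Hessian is
# singular and need not be a descent direction when the Hessian is indefinite.
# The gradient and Hessians below are hypothetical values at some current point.
import numpy as np

grad = np.array([0.1, 1.0])

# Indefinite Hessian: the Newton step exists but points uphill (grad . p > 0).
H_indef = np.diag([1.0, -4.0])
p = np.linalg.solve(H_indef, -grad)          # p = (-0.1, 0.25)
print("descent direction?", grad @ p < 0)    # False

# Singular Hessian: the Newton system has no unique solution, so no direction.
H_sing = np.array([[1.0, 1.0], [1.0, 1.0]])
try:
    np.linalg.solve(H_sing, -grad)
except np.linalg.LinAlgError:
    print("singular Hessian: Newton direction cannot be computed at this point")
```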
Chapter 2 - Optimum Design
1. Design of system implies specification of the design variable values. True
2. All design problems have only linear inequality constraints. False
3. All design variables should be independent of each other as far as possible. True
4. If there is an equality constraint in the design problem, the optimum solution must satisfy it. True
5. Each optimization problem must have certain parameters called the design variables. True
6. A feasible design may violate equality constraints. False
7. A feasible design may violate "≥ type" constraints. False
8. A "≥ type" constraint expressed in the standard form is active at a design point if it has zero value there. True
9. The constraint set for a design problem consists of all feasible points. True
10. The number of independent equality constraints can be larger than the number of design variables for the problem. True
11. The number of "≤ type" constraints must be less than the number of design variables for a valid problem formulation. False
12. The feasible region for an equality constraint is a subset of that for the same constraint expressed as an inequality. True
13. Maximization of f(x) is equivalent to minimization of 1/f(x). False
14. A lower minimum value for the cost function is obtained if more constraints are added to the problem formulation. False
15. Let fn be the minimum value for the cost function with n design variables for a problem. If the number of design variables for the same problem is increased to, say, m = 2n, then fm > fn, where fm is the minimum value for the cost function with m design variables. False

Chapter 4
1. A function can have several local minimum points in a small neighborhood of x*. True
2. A function cannot have more than one global minimum point. False
3. The value of the function having a global minimum at several points must be the same. True
4. A function defined on an open set cannot have a global minimum. False
5. The gradient of a function f(x) at a point is normal to the surface defined by the level surface f(x) = constant. True
6. The gradient of a function at a point gives a local direction of maximum decrease in the function. False
7. The Hessian matrix of a continuously differentiable function can be asymmetric. False
8. The Hessian matrix for a function is calculated using only the first derivatives of the function. False
9. Taylor series expansion for a function at a point uses the function value and its derivatives. True
10. Taylor series expansion can be written at a point where the function is discontinuous. False
11. Taylor series expansion of a complicated function replaces it with a polynomial function at the point. True
12. Linear Taylor series expansion of a complicated function at a point is only a good local approximation for the function. True
13. A quadratic form can have first-order terms in the variables. False
14. For a given x, the quadratic form defines a vector. False
15. Every quadratic form has a symmetric matrix associated with it. True
16. A symmetric matrix is positive definite if its eigenvalues are non-negative. False
17. A matrix is positive semidefinite if some of its eigenvalues are negative and others are non-negative. False
18. All eigenvalues of a negative definite matrix are strictly negative. True
19. The quadratic form appears as one of the terms in Taylor's expansion of a function. True
20. A positive definite quadratic form must have positive value for any x ≠ 0. True
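Items 9 through 12 above concern how a Taylor expansion is built from the function value and its derivatives at a point, and why the linear expansion is only accurate close to that point. A quick sketch using f(x) = e^x expanded about x0 = 0, chosen arbitrarily:

```python
# Sketch: linear and quadratic Taylor expansions of f(x) = exp(x) about x0 = 0.
# Both match well for a small step; the linear model drifts badly for a large one.
import math

x0 = 0.0
f0, df0, d2f0 = math.exp(x0), math.exp(x0), math.exp(x0)   # f(x0), f'(x0), f''(x0)

def linear(h):
    return f0 + df0 * h

def quadratic(h):
    return f0 + df0 * h + 0.5 * d2f0 * h ** 2

for h in (0.1, 1.0, 3.0):
    exact = math.exp(x0 + h)
    print(f"h={h}: exact={exact:.3f}  linear={linear(h):.3f}  quadratic={quadratic(h):.3f}")
# At h = 0.1 both expansions are very close to exp(0.1); at h = 3.0 the linear model
# gives 4.0 while exp(3) is about 20.1, so the approximation is only good locally.
```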
Chapter 4
1. If the first-order necessary condition at a point is satisfied for an unconstrained problem, it can be a local maximum point for the function. True
2. A point satisfying first-order necessary conditions for an unconstrained function may not be a local minimum point. True
3. A function can have a negative value at its maximum point. True
4. If a constant is added to a function, the location of its minimum point is changed. False
5. If a function is multiplied by a positive constant, the location of the function's minimum point is unchanged. True
6. If curvature of an unconstrained function of a single variable at the point x* is zero, then it is a local maximum point for the function. False
7. The curvature of an unconstrained function of a single variable at its local minimum point is negative. False
8. The Hessian of an unconstrained function at its local minimum point must be positive semidefinite. False
9. The Hessian of an unconstrained function at its minimum point is negative definite. False
10. If the Hessian of an unconstrained function is indefinite at a candidate point, the point may be a local maximum or minimum. False

Chapter 4
1. A regular point of the feasible region is defined as a point where the cost function gradient is independent of the gradients of active constraints. False
2. A point satisfying KKT conditions for a general optimum design problem can be a local maximum point for the cost function. True
3. At the optimum point, the number of active independent constraints is always more than the number of design variables. False
4. In the general optimum design problem formulation, the number of independent equality constraints must be less than or equal to the number of design variables. True
5. In the general optimum design problem formulation, the number of inequality constraints cannot exceed the number of design variables. False
6. At the optimum point, Lagrange multipliers for the "≤ type" inequality constraints must be non-negative. True
7. At the optimum point, the Lagrange multiplier for a "≤ type" constraint can be zero. True
8. While solving an optimum design problem by KKT conditions, each case defined by the switching conditions can have multiple solutions. True
9. In optimum design problem formulation, "≥ type" constraints cannot be treated. False
10. Optimum design points for constrained optimization problems give stationary value to the Lagrange function with respect to design variables. True
11. Optimum design points having at least one active constraint give stationary value to the cost function. False
12. At a constrained optimum design point that is regular, the cost function gradient is linearly dependent on the gradients of the active constraint functions. True
13. If a slack variable has zero value at the optimum, the inequality constraint is inactive. False
14. Gradients of inequality constraints that are active at the optimum point must be zero. False
15. Design problems with equality constraints have the gradient of the cost function as zero at the optimum point. False

Chapter 4
1. A linear inequality constraint always defines a convex feasible region. True
2. A linear equality constraint always defines a convex feasible region. True
3. A nonlinear equality constraint cannot give a convex feasible region. True
4. A function is convex if and only if its Hessian is positive definite everywhere. False
5. An optimum design problem is convex if all constraints are linear and the cost function is convex. True
6. A convex programming problem always has an optimum solution. False
7. An optimum solution for a convex programming problem is always unique. False
8. A nonconvex programming problem cannot have a global optimum solution. False
9. For a convex design problem, the Hessian of the cost function must be positive semidefinite everywhere. False
10. Checking for the convexity of a function can actually identify a domain over which the function may be convex. True
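The KKT items above say that, at a regular constrained optimum, the cost gradient is a linear combination of the active constraint gradients and that the multipliers of active "≤ type" constraints must be non-negative. Both can be checked numerically at a candidate point; the sketch below uses a made-up problem, not one from the exercises.

```python
# Sketch of a first-order KKT check for "≤ type" constraints on a made-up problem:
# minimize f = (x1 - 3)^2 + (x2 - 3)^2  subject to  g = x1 + x2 - 4 <= 0.
# At the candidate x* = (2, 2) the constraint is active; the multiplier u solving
# grad(f) + u * grad(g) = 0 must be non-negative for the KKT conditions to hold.
import numpy as np

x = np.array([2.0, 2.0])                                  # candidate point (g active here)
grad_f = np.array([2 * (x[0] - 3), 2 * (x[1] - 3)])       # grad(f) = (-2, -2)
grad_g = np.array([1.0, 1.0])                             # gradient of the active constraint

# Solve grad(f) + u * grad(g) = 0 for u in a least-squares sense.
u, *_ = np.linalg.lstsq(grad_g.reshape(-1, 1), -grad_f, rcond=None)
stationary = np.allclose(grad_f + u[0] * grad_g, 0.0)
print("u =", u[0])                                        # 2.0
print("KKT satisfied?", stationary and u[0] >= 0.0)       # True
```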
Chapter 5
1. A convex programming problem always has a unique global minimum point. False
2. For a convex programming problem, KKT necessary conditions are also sufficient. True
3. The Hessian of the Lagrange function must be positive definite at constrained minimum points. False
4. For a constrained problem, if the sufficiency condition of Theorem 5.2 is violated, the candidate point x* may still be a minimum point. True
5. If the Hessian of the Lagrange function at x*, ∇²L(x*), is positive definite, the optimum design problem is convex. False
6. For a constrained problem, the sufficient condition at x* is satisfied if there are no feasible directions in a neighborhood of x* along which the cost function reduces. True

Chapter 5
1. Candidate minimum points for a constrained problem that do not satisfy second-order sufficiency conditions can be global minimum designs. True
2. Lagrange multipliers may be used to calculate the sensitivity coefficient for the cost function with respect to the right side parameters even if Theorem 4.7 cannot be used. True
3. Relative magnitudes of the Lagrange multipliers provide useful information for practical design problems. True

Chapter 8
1. A linear programming problem having maximization of a function cannot be transcribed into the standard LP form. False
2. A surplus variable must be added to a "≤ type" constraint in the standard LP formulation. False
3. A slack variable for an LP constraint can have a negative value. False
4. A surplus variable for an LP constraint must be non-negative. True
5. If a "≤ type" constraint is active, its slack variable must be positive. False
6. If a "≥ type" constraint is active, its surplus variable must be zero. True
7. In the standard LP formulation, the resource limits are free in sign. False
8. Only "≤ type" constraints can be transcribed into the standard LP form. False
9. Variables that are free in sign can be treated in any LP problem. True
10. In the standard LP form, all the cost coefficients must be positive. False
11. All variables must be non-negative in the standard LP definition. True

Chapter 8
1. In the standard LP definition, the number of constraint equations (i.e., rows in the matrix A) must be less than the number of variables. True
2. In an LP problem, the number of "≤ type" constraints cannot be more than the number of design variables. False
3. In an LP problem, the number of "≥ type" constraints cannot be more than the number of design variables. False
4. An LP problem has an infinite number of basic solutions. False
5. A basic solution must have zero value for some of the variables. True
6. A basic solution can have negative values for some of the variables. True
7. A degenerate basic solution has exactly m variables with nonzero values, where m is the number of equations. False
8. A basic feasible solution has all variables with non-negative values. True
9. A basic feasible solution must have m variables with positive values, where m is the number of equations. False
10. The optimum point for an LP problem can be inside the feasible region. False
11. The optimum point for an LP problem lies at a vertex of the feasible region. True
12. The solution to any LP problem is only a local optimum. False
13. The solution to any LP problem is a unique global optimum. False
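The Chapter 8 items above (maximization transcribed to minimization, slack variables for "≤ type" rows, and the optimum sitting at a vertex) can be seen on a tiny example. The sketch below uses scipy.optimize.linprog on arbitrary data, not a problem from the text.

```python
# Sketch: maximize 4*x1 + 5*x2 subject to x1 + 2*x2 <= 8, 3*x1 + 2*x2 <= 12, x1, x2 >= 0.
# Transcription to standard form: negate the cost to minimize; the solver adds slack
# variables to the "<=" rows internally and reports their values.
from scipy.optimize import linprog

c = [-4.0, -5.0]                      # negated cost coefficients (maximize -> minimize)
A_ub = [[1.0, 2.0], [3.0, 2.0]]
b_ub = [8.0, 12.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)], method="highs")

print(res.x, -res.fun)                # optimum (2, 3) with value 23, a vertex of the region
print(res.slack)                      # zero slack marks an active constraint
```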
Chapter 8
1. A pivot step of the Simplex method replaces a current basic variable with a nonbasic variable. True
2. The pivot step brings the design point to the interior of the constraint set. False
3. The pivot column in the Simplex method is determined by the largest reduced cost coefficient corresponding to a basic variable. False
4. The pivot row in the Simplex method is determined by the largest ratio of right-side parameters with the positive coefficients in the pivot column. False
5. The criterion for a current basic variable to leave the basic set is to keep the new solution basic and feasible. False
6. A move from one basic feasible solution to another corresponds to extreme points of the convex polyhedral set. True
7. A move from one basic feasible solution to another can increase the cost function value in the Simplex method. False
8. The right sides in the Simplex tableau can assume negative values. False
9. The right sides in the Simplex tableau can become zero. True
10. The reduced cost coefficients corresponding to the basic variables must be positive at the optimum. False
11. If a reduced cost coefficient corresponding to a nonbasic variable is zero at the optimum point, there may be multiple solutions to the problem. True
12. If all elements in the pivot column are negative, the problem is infeasible. False
13. The artificial variables must be positive in the final solution. False
14. If artificial variables are positive at the final solution, the artificial cost function is also positive. True
15. If the artificial cost function is positive at the optimum solution, the problem is unbounded. False

Chapter 10
1. All optimum design algorithms require a starting point to initiate the iterative process. True
2. A vector of design changes must be computed at each iteration of the iterative process. True
3. The design change calculation can be divided into step size determination and direction finding subproblems. True
4. The search direction requires evaluation of the gradient of the cost function. True
5. Step size along the search direction is always negative. False
6. Step size along the search direction can be zero. False
7. In unconstrained optimization, the cost function can increase for an arbitrary small step along the descent direction. False
8. A descent direction always exists if the current point is not a local minimum. True
9. In unconstrained optimization, a direction of descent can be found at a point where the gradient of the cost function is zero. False
10. The descent direction makes an angle of 0-90° with the gradient of the cost function. False

Chapter 10
1. Step size determination is always a one-dimensional problem. True
2. In unconstrained optimization, the slope of the cost function along the descent direction at zero step size is always positive. False
3. The optimum step lies outside the interval of uncertainty. False
4. After initial bracketing, the golden section search requires two function evaluations to reduce the interval of uncertainty. False

Chapter 10
1. The steepest-descent method is convergent. True
2. The steepest-descent method can converge to a local maximum point starting from a point where the gradient of the function is nonzero. False
3. Steepest-descent directions are orthogonal to each other. True
4. Steepest-descent direction is orthogonal to the cost surface. True

Chapter 10
1. The conjugate gradient method usually converges faster than the steepest-descent method. True
2. Conjugate directions are computed from gradients of the cost function. True
3. Conjugate directions are normal to each other. False
4. The conjugate direction at the kth point is orthogonal to the gradient of the cost function at the (k+1)th point when an exact step size is calculated. True
5. The conjugate direction at the kth point is orthogonal to the gradient of the cost function at the (k-1)th point. False
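The steepest-descent and conjugate gradient items above (orthogonality of the search direction to the next gradient under exact step sizes, and the typically faster convergence of CG) can be reproduced on a convex quadratic. A sketch with made-up data:

```python
# Sketch: steepest descent vs. Fletcher-Reeves conjugate gradient with exact step sizes
# on an arbitrary convex quadratic f(x) = 0.5 * x^T Q x. With exact steps, the current
# direction is orthogonal to the next gradient, and CG finishes a 2-variable quadratic
# in about two iterations while steepest descent zigzags for many more.
import numpy as np

Q = np.array([[10.0, 0.0], [0.0, 1.0]])        # ill-conditioned quadratic (arbitrary)
grad = lambda x: Q @ x

def run(method, x0, tol=1e-8, max_iter=1000):
    x, g = x0.copy(), grad(x0)
    d = -g
    for k in range(max_iter):
        if np.linalg.norm(g) < tol:
            return k                            # number of iterations used
        alpha = -(g @ d) / (d @ Q @ d)          # exact step size for a quadratic
        x = x + alpha * d
        g_new = grad(x)
        assert abs(d @ g_new) < 1e-8            # d_k is orthogonal to the gradient at k+1
        if method == "steepest":
            d = -g_new
        else:                                   # Fletcher-Reeves conjugate gradient
            beta = (g_new @ g_new) / (g @ g)
            d = -g_new + beta * d
        g = g_new
    return max_iter

x0 = np.array([1.0, 10.0])
print("steepest descent iterations:", run("steepest", x0))
print("conjugate gradient iterations:", run("cg", x0))
```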
Chapter 11
1. In Newton's method, it is always possible to calculate a search direction at any point. False
2. The Newton direction is always that of descent for the cost function. False
3. Newton's method is convergent starting from any point with a step size of 1. False
4. Newton's method needs only gradient information at any point. False

Chapter 11
1. The DFP method generates an approximation to the inverse of the Hessian. True
2. The DFP method generates a positive definite approximation to the inverse of the Hessian. True
3. The DFP method always gives a direction of descent for the cost function. True
4. The BFGS method generates a positive definite approximation to the Hessian of the cost function. True
5. The BFGS method always gives a direction of descent for the cost function. True
6. The BFGS method always converges to the Hessian of the cost function. False

Chapter 12
1. The basic numerical iterative philosophy for solving constrained and unconstrained problems is the same. True
2. Step size determination is a one-dimensional problem for unconstrained problems. True
3. Step size determination is a multidimensional problem for constrained problems. False
4. An inequality constraint g_i(x) ≤ 0 is violated at x^(k) if g_i(x^(k)) > 0. True
5. An inequality constraint g_i(x) ≤ 0 is active at x^(k) if g_i(x^(k)) > 0. False
6. An equality constraint h_i(x) = 0 is violated at x^(k) if h_i(x^(k)) < 0. True
7. An equality constraint is always active at the optimum. True
8. In constrained optimization problems, search direction is found using the cost gradient only. False
9. In constrained optimization problems, search direction is found using the constraint gradients only. False
10. In constrained problems, the descent function is used to calculate the search direction. False
11. In constrained problems, the descent function is used to calculate a feasible point. False
12. Cost function can be used as a descent function in unconstrained problems. True
13. One-dimensional search on a descent function is needed for convergence of algorithms. True
14. A robust algorithm guarantees convergence. True
15. A feasible set must be closed and bounded to guarantee convergence of algorithms. True
16. A constraint x1 + x2 = -2 can be normalized as (x1 + x2)/(-2) ≤ -1.0. False
17. A constraint x1² + x2² ≤ 9 is active at x1 = 3 and x2 = 3. False

Chapter 12
1. Linearization of cost and constraint functions is a basic step for solving nonlinear optimization problems. True
2. General constrained problems cannot be solved by solving a sequence of linear programming subproblems. False
3. In general, the linearized subproblem without move limits may be unbounded. True
4. The sequential linear programming method for general constrained problems is ... False
5. Move limits are essential in the sequential linear programming procedure. True
6. Equality constraints can be treated in the sequential linear programming algorithm. True

Chapter 12
1. The constrained steepest-descent (CSD) method, when there are active constraints, is based on using the cost function gradient as the search direction. False
2. The constrained steepest-descent method solves two subproblems: the search direction and step size determination. True
3. The cost function is used as the descent function in the CSD method. False
4. The QP subproblem in the CSD method is strictly convex. True
5. The search direction, if one exists, is unique for the QP subproblem in the CSD method. True
6. Constraint violations play no role in step size determination in the CSD method. False
7. Lagrange multipliers of the subproblem play a role in step size determination in the CSD method. True
8. Constraints must be evaluated during line search in the CSD method. True

All quotes are from Engineering Design Optimization.
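The DFP/BFGS items above say that quasi-Newton methods need only gradient information and build a positive definite approximation of the Hessian (or its inverse). A short sketch using SciPy's BFGS implementation on the standard Rosenbrock test function, which is not a problem from the text:

```python
# Sketch: BFGS in scipy.optimize only receives the function and its gradient, and it
# accumulates an approximation of the inverse Hessian that stays positive definite.
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

res = minimize(rosen, x0=np.array([-1.2, 1.0]), jac=rosen_der, method="BFGS")
print(res.x)                                   # close to the true minimizer (1, 1)
eigs = np.linalg.eigvalsh(res.hess_inv)        # eigenvalues of the inverse-Hessian estimate
print("positive definite approximation?", bool(np.all(eigs > 0)))
```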
Questions in yellow – answer not found

Optimum Design Concepts:
a. A function can have several local minimum points in a small neighborhood of x*. True
b. A function cannot have more than one global minimum point. False
c. The value of the function having a global minimum at several points must be the same. True
d. A function defined on an open set cannot have a global minimum. False
e. The gradient of a function f(x) at a point is normal to the surface defined by the level surface f(x) = constant. True
f. The gradient of a function at a point gives a local direction of maximum decrease in the function. False
g. The Hessian matrix of a continuously differentiable function can be asymmetric. False
h. The Hessian matrix for a function is calculated using only the first derivatives. False
i. Taylor series expansion for a function at a point uses the function value and its derivatives. True
j. Taylor series expansion can be written at a point where the function is discontinuous. False
k. Taylor series expansion of a complicated function replaces it with a polynomial function at the point. True
l. Linear Taylor series expansion of a complicated function at a point is only a good local approximation for the function. True. "The Taylor series provides a local approximation to a function and is the foundation for gradient-based optimization algorithms." (p. 88)
m. A quadratic form can have first-order terms in the variables. False. A quadratic form is d^T H_f(x*) d = a; the Hessian matrix contains only second-order terms, and the result a is a scalar.
n. For a given x, the quadratic form defines a vector. False
o. Every quadratic form has a symmetric matrix associated with it. True. "Also, in a quadratic form, we assume that A is symmetric (even if it is not, only the symmetric part of A contributes, so effectively, it acts like a symmetric matrix)." (p. 547) A is the Hessian matrix in this case.
p. A symmetric matrix is positive definite if its eigenvalues are nonnegative. False
q. A matrix is positive semidefinite if some of its eigenvalues are negative and others are nonnegative. False
r. All eigenvalues of a negative definite matrix are strictly negative. True. Positive definite: eigenvalues > 0; positive semidefinite: eigenvalues ≥ 0; negative semidefinite: eigenvalues ≤ 0; negative definite: eigenvalues < 0.
s. The quadratic form appears as one of the terms in Taylor's expansion of a function. True. Second-order Taylor series: f(x + p) = f(x) + ∇f(x)^T p + (1/2) p^T H_f(x) p.
t. A positive definite quadratic form must have a positive value for any x ≠ 0. True
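Regarding item o and the quoted remark that only the symmetric part of A contributes to the quadratic form x^T A x, a quick numerical check with an arbitrary nonsymmetric matrix:

```python
# Sketch: the quadratic form of a nonsymmetric A equals that of its symmetric part,
# because the skew-symmetric part contributes nothing. A and x are random examples.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))         # generally nonsymmetric
A_sym = 0.5 * (A + A.T)                 # symmetric part of A
x = rng.standard_normal(3)
print(np.isclose(x @ A @ x, x @ A_sym @ x))   # True
```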
Unconstrained Optimum Design Problems:
1. If the first-order necessary condition at a point is satisfied for an unconstrained problem, it can be a local maximum point for the function. True
2. A point satisfying first-order necessary conditions for an unconstrained function may not be a local minimum point. True
3. A function can have a negative value at its maximum point. True
4. If a constant is added to a function, the location of its minimum point is changed. False
5. If a function is multiplied by a positive constant, the location of the function's minimum point is unchanged. True
6. If the curvature of an unconstrained function of a single variable at the point x* is zero, then it is a local maximum point for the function. False
7. The curvature of an unconstrained function of a single variable at its local minimum point is negative. False
8. The Hessian of an unconstrained function at its local minimum point must be positive semidefinite. False. It can be positive definite. If it is only positive semidefinite, the point may not even be a local minimum: "These conditions on the gradient and curvature are necessary conditions for a local minimum but are not sufficient. They are not sufficient because if the curvature is zero in some direction p, i.e., p^T H_f(x*) p = 0, we have no way of knowing if it is a minimum unless we check the third-order term. In that case, even if it is a minimum, it is a weak minimum." (p. 90) The Hessian matrix corresponds to the curvature of a function (the slope of the gradient). Therefore, if the Hessian is positive definite, the curvature will also be positive.
9. The Hessian of an unconstrained function at its minimum point is negative definite. False
10. If the Hessian of an unconstrained function is indefinite at a candidate point, the point may be a local maximum or minimum. False

Constrained Optimum Design Problems:
(a) A regular point of the feasible region is defined as a point where the cost function gradient is independent of the gradients of active constraints. False. "the objective function gradient must be a linear combination of the gradients of the constraints." (p. 160)
(b) A point satisfying KKT conditions for a general optimum design problem can be a local maximum for the cost function. True. "In addition to the conditions for a stationary point of the Lagrangian (Eqs. 5.22 to 5.25), recall that we require the Lagrange multipliers for the active constraints to be nonnegative. Putting all these conditions together in matrix form, the first-order constrained optimality conditions are as follows:
∇f + J_h^T λ + J_g^T σ = 0
h = 0
g + s ⊙ s = 0
σ ⊙ s = 0
σ ≥ 0
These are called the Karush–Kuhn–Tucker (KKT) conditions." (p. 168)
(c) At the optimum point, the number of active independent constraints is always more than the number of design variables. False
(d) In the general optimum design problem formulation, the number of independent equality constraints must be less than or equal to the number of design variables. True
(e) In the general optimum design problem formulation, the number of inequality constraints cannot exceed the number of design variables. False. "Thus, the number of independent equality constraints must be less than or equal to the number of design variables (nh ≤ nx). There is no limit on the number of inequality constraints." (p. 13) Here nx is the number of design variables and nh the number of equality constraints.
(f) At the optimum point, the Lagrange multipliers for the "≤ type" inequality constraints must be non-negative. True
(g) At the optimum point, the Lagrange multiplier for a "≤ type" constraint can be zero. True. "In addition to the conditions for a stationary point of the Lagrangian (Eqs. 5.22 to 5.25), recall that we require the Lagrange multipliers for the active constraints to be nonnegative." (p. 168)
(h) While solving an optimum design problem by using the KKT conditions, each case defined by the switching conditions can have multiple solutions.
(i) In optimum design problem formulation, "≥ type" constraints cannot be treated. False. Carlos Alberto Conceição António, 2024
(j) Optimum design points for constrained optimization problems render a stationary value of the Lagrange function with respect to design variables. True. "Similar to the equality constrained case, we seek a stationary point for the Lagrangian" (p. 167)
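Item (h) concerns the case-by-case analysis driven by the switching condition σ ⊙ s = 0 above: each multiplier is either zero or its constraint is active, and every resulting case must be solved and checked. A hand-sized sketch of that enumeration on a made-up one-variable problem:

```python
# Sketch of the switching-condition case analysis on a made-up problem:
# minimize f = (x - 2)^2 subject to g = x - 1 <= 0.
# Stationarity of the Lagrangian: 2*(x - 2) + u = 0, with u*g = 0 and u >= 0.

# Case 1: u = 0 (constraint assumed inactive) -> 2*(x - 2) = 0 -> x = 2
x = 2.0
print("case 1: x =", x, "feasible?", x - 1 <= 0)      # g(2) = 1 > 0, so reject this case

# Case 2: g = 0 (constraint assumed active) -> x = 1, then u from stationarity
x = 1.0
u = -2 * (x - 2)                                      # u = 2
print("case 2: x =", x, "u =", u, "u >= 0?", u >= 0)  # u = 2 >= 0, so x* = 1 is the KKT point
```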
General Concepts Related to Numerical Algorithms:
1. All optimum design algorithms require a starting point to initiate the iterative process. True
2. A vector of design changes must be computed at each iteration of the iterative process. True
3. The design change calculation can be divided into step size determination and direction finding subproblems. True. "As mentioned in the previous section, there are two main subproblems in line search gradient-based optimization algorithms: choosing the search direction and determining how far to step in that direction." (p. 96)
4. The search direction requires evaluation of the gradient of the cost function. True. "We start by introducing two first-order methods that only require the gradient and then explain two second-order methods that require the Hessian, or at least an approximation of the Hessian." (p. 110)
5. Step size along the search direction is always negative. False
6. Step size along the search direction can be zero. False. The step size is always a positive scalar.
7. In unconstrained optimization, the cost function can increase for an arbitrary small step along the descent direction. False. For any sufficiently small step along a descent direction, the cost function decreases.
8. A descent direction always exists if the current point is not a local minimum. True
9. In unconstrained optimization, a direction of descent can be found at a point where the gradient of the cost function is zero. False. When ∇f = 0, no direction d can satisfy the descent condition ∇f · d < 0, so there is no direction of descent at such a point.
10. The descent direction makes an angle of 0-90° with the gradient of the cost function. False. The descent direction can make an angle of 180° with the gradient of the cost function. That is the case in the steepest-descent method, where the search direction is opposite to the gradient: p_k = -∇f_k / ‖∇f_k‖. (p. 111)

By João Cunha & Tiago Cardoso
WE THINK THIS WAS NOT COVERED? (it is not in the AE's little yellow book – source of knowledge)