Summary

This document covers geometric programming, a mathematical optimization technique for minimizing posynomials subject to posynomial constraints. It presents the primal and dual problem formulations, along with the key duality theorems and worked examples.

Full Transcript


# CHAPTER 1

## INTRODUCTION

In engineering design problems, total cost can be represented as a sum of component costs in the following manner:

$g_k(t) = \sum_{i=1}^{n_k} C_{ki} \prod_{j=1}^m t_j^{a_{kij}}$, where $k = 0, 1, \ldots, p$.

The coefficients $C_{ki}$ are positive constants and the exponents $a_{kij}$ are real constants that can be either positive or negative. The design variables $t_j$ are assumed to be strictly positive. From this point on, $t$ will denote the vector $t = (t_1, \ldots, t_m)$. Functions $g_k(t)$ that have the above properties are called positive polynomials, or posynomials for short.

Geometric programming is a technique used to optimize posynomials subject to posynomial constraints. It splits the optimization problem into two components. The first, termed the primal problem, minimizes a posynomial $g_0(t)$ subject to posynomial constraints bounded above by 1. The second component is a maximization problem called the dual problem, which maximizes the dual function $v(\delta)$ subject to linear constraints. As will be shown in Chapter 2, the primal and dual problems share a special relationship. This relationship enables one to first determine the optimal distribution of cost among the terms of $g_0(t)$ and then to recover the optimal design variables.

# CHAPTER 2

## MINIMIZING POSYNOMIALS

This chapter contains the major theorems and definitions associated with geometric programming. The general geometric programming problem has the following form:

**Minimize**

$g_0(t) = \sum_{i=1}^{n_0} C_{0i} \prod_{j=1}^m t_j^{a_{0ij}}$ (2-1)

**Subject to**

$g_k(t) = \sum_{i=1}^{n_k} C_{ki} \prod_{j=1}^m t_j^{a_{kij}} \le 1$ for $k = 1, 2, \ldots, p$. (2-2)

In the above problem, each coefficient $C_{ki}$ and each variable $t_j$ must be strictly greater than zero, and the exponents $a_{kij}$ are real constants that can be either positive or negative; that is, each $g_k$ is a posynomial. Geometric programming is thus the minimization of a posynomial subject to posynomial constraints bounded by 1. Problem (2-1)-(2-2) is the primal problem associated with geometric programming. The corresponding dual problem is given by:

**Maximize**

$v(\delta) = \prod_{k=0}^p \prod_{i=1}^{n_k} \left( \frac{C_{ki}}{\delta_{ki}} \right)^{\delta_{ki}} \prod_{k=1}^p \lambda_k^{\lambda_k}$ (2-3)

**Subject to**

$\sum_{i=1}^{n_0} \delta_{0i} = 1$ (normality) (2-4)

$\sum_{k=0}^p \sum_{i=1}^{n_k} a_{kij} \delta_{ki} = 0$ for $j = 1, \ldots, m$ (orthogonality) (2-5)

$\lambda_k = \sum_{i=1}^{n_k} \delta_{ki}$, with $\delta_{ki} \ge 0$. (2-6)

Since all of the constraints of the dual problem are linear, the feasible region is a convex set. In general, the objective function (2-3) is neither convex nor concave, but maximizing $v(\delta)$ is equivalent to maximizing $z(\delta) = \ln v(\delta)$. This transformation can be made because $v(\delta)$ is positive for every feasible $\delta$ and the logarithm is a strictly increasing function. The function $z(\delta)$ is concave with respect to the weights, since it is a sum of concave functions of $\delta$. Therefore, the dual problem is a convex programming problem, and any local maximum is also a global maximum.

**Definition 2.1.** The *degree of difficulty* of a geometric program is $D = n - m - 1$, where $n = \sum_{k=0}^p n_k$ is the total number of terms. When $D = 0$, the normality and orthogonality conditions (2-4)-(2-5) form a square linear system that, when nonsingular, determines the dual vector $\delta$ uniquely.
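To see the dual in computational terms, here is a minimal sketch of maximizing $z(\delta) = \ln v(\delta)$ over the linear constraints (2-4)-(2-6) with SciPy's SLSQP solver. The tiny problem it encodes (minimize $1/(t_1 t_2)$ subject to $t_1 + t_2 \le 1$) is an invented illustration, not one of the text's examples; since its degree of difficulty is zero, the equality constraints already pin down $\delta$, and the solver simply confirms the unique feasible point.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative problem (invented for this sketch):
#   minimize   g0(t) = 1/(t1*t2)        -- one term,  C = 1, exponents (-1, -1)
#   subject to g1(t) = t1 + t2 <= 1     -- two terms, C = 1, exponents (1,0), (0,1)
# Dual variables d = (d_01, d_11, d_12).
C = np.array([1.0, 1.0, 1.0])
A = np.array([[-1.0, -1.0],   # a_{01j}
              [ 1.0,  0.0],   # a_{11j}
              [ 0.0,  1.0]])  # a_{12j}

def neg_z(d):
    """Negated dual objective z(d) = ln v(d); compare with (2-3)."""
    lam1 = d[1] + d[2]                       # lambda_1, from (2-6)
    return -(np.sum(d * np.log(C / d)) + lam1 * np.log(lam1))

constraints = [
    {"type": "eq", "fun": lambda d: d[0] - 1.0},  # normality (2-4)
    {"type": "eq", "fun": lambda d: A.T @ d},     # orthogonality (2-5), one row per j
]
res = minimize(neg_z, x0=np.array([0.9, 0.6, 0.6]),
               bounds=[(1e-9, None)] * 3,         # keep delta strictly positive
               constraints=constraints, method="SLSQP")
print("delta* =", res.x)                # approx (1, 1, 1)
print("v(delta*) =", np.exp(-res.fun))  # approx 4 = g0 at t = (1/2, 1/2)
```

The same code shape applies when $D > 0$ and the feasible set has positive dimension, so that a genuine search over the weights is required.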
The following results link the primal and the dual problems together.

**Theorem 2.1.** Suppose that $t$ is a vector feasible for the primal geometric programming problem (2-1)-(2-2) and that $\delta$ is feasible for the dual problem (2-3)-(2-6). Then $g_0(t) \ge v(\delta)$.

**Proof:** Define

$u_{ki} = C_{ki} \prod_{j=1}^m t_j^{a_{kij}}$ for $i = 1, \ldots, n_k$, $k = 0, 1, \ldots, p$. (2-7)

The arithmetic-geometric mean inequality can be written as

$\delta_{01}V_1 + \delta_{02}V_2 + \cdots + \delta_{0n_0}V_{n_0} \ge V_1^{\delta_{01}} V_2^{\delta_{02}} \cdots V_{n_0}^{\delta_{0n_0}}$, (2-8)

where $V_1, V_2, \ldots, V_{n_0}$ are positive variables and the $\delta_{0i}$ are positive weights that sum to one, as the normality condition (2-4) guarantees. Now let $u_{0i} = \delta_{0i} V_i$ for $i = 1, 2, \ldots, n_0$. Then (2-8) can be rewritten as

$g_0(t) = u_{01} + u_{02} + \cdots + u_{0n_0} \ge \left(\frac{u_{01}}{\delta_{01}}\right)^{\delta_{01}} \left(\frac{u_{02}}{\delta_{02}}\right)^{\delta_{02}} \cdots \left(\frac{u_{0n_0}}{\delta_{0n_0}}\right)^{\delta_{0n_0}}$. (2-9)

Applying the same inequality to each constraint posynomial $g_k(t) = u_{k1} + u_{k2} + \cdots + u_{kn_k}$ with the normalized weights $\delta_{ki}/\lambda_k$, which sum to one by (2-6), and raising the result to the power $\lambda_k$ gives

$1 \ge g_k(t)^{\lambda_k} \ge \lambda_k^{\lambda_k} \prod_{i=1}^{n_k} \left(\frac{u_{ki}}{\delta_{ki}}\right)^{\delta_{ki}}$ for $k = 1, \ldots, p$. (2-19)

Multiplying (2-9) by the product of the inequalities (2-19) over $k = 1, \ldots, p$ gives

$g_0(t) \ge \prod_{k=0}^p \prod_{i=1}^{n_k} \left(\frac{u_{ki}}{\delta_{ki}}\right)^{\delta_{ki}} \prod_{k=1}^p \lambda_k^{\lambda_k}$. (2-20)

Substituting (2-7) for the $u_{ki}$,

$\prod_{k=0}^p \prod_{i=1}^{n_k} \left(\frac{u_{ki}}{\delta_{ki}}\right)^{\delta_{ki}} = \prod_{k=0}^p \prod_{i=1}^{n_k} \left(\frac{C_{ki}}{\delta_{ki}}\right)^{\delta_{ki}} \prod_{j=1}^m t_j^{\sum_{k=0}^p \sum_{i=1}^{n_k} a_{kij} \delta_{ki}}$, (2-21)

and by the orthogonality condition (2-5) each exponent of $t_j$ vanishes. Hence $g_0(t) \ge v(\delta)$. $\blacksquare$

**Theorem 2.2.** Suppose that $t$ is feasible for the primal problem (2-1)-(2-2), that $\delta$ is feasible for the dual problem (2-3)-(2-6), and that $g_0(t) = v(\delta)$. Then $t$ and $\delta$ are optimal for their respective problems, and

$\delta_{0i} = \frac{C_{0i} \prod_{j=1}^m t_j^{a_{0ij}}}{g_0(t)}$ for $i = 1, \ldots, n_0$, (2-23)

$\delta_{ki} = \lambda_k C_{ki} \prod_{j=1}^m t_j^{a_{kij}}$ for $i = 1, \ldots, n_k$, $k = 1, \ldots, p$. (2-24)

**Proof:** Optimality of $t$ and $\delta$ follows at once from Theorem 2.1. The equality $g_0(t) = v(\delta)$ forces equality throughout (2-9) and (2-19), and equality holds in the arithmetic-geometric mean inequality precisely when the ratios $u_{ki}/\delta_{ki}$ are constant in $i$, that is, when

$\delta_{ki} = B_k C_{ki} \prod_{j=1}^m t_j^{a_{kij}}$ for $i = 1, \ldots, n_k$, (2-26)

for some $B_k > 0$, $k = 0, 1, \ldots, p$. Summing (2-26) over $i$,

$\sum_{i=1}^{n_k} \delta_{ki} = B_k \sum_{i=1}^{n_k} C_{ki} \prod_{j=1}^m t_j^{a_{kij}} = B_k g_k(t)$. (2-27)

When $k = 0$, the normality condition reduces (2-27) to

$1 = B_0 \sum_{i=1}^{n_0} C_{0i} \prod_{j=1}^m t_j^{a_{0ij}} = B_0 g_0(t)$. (2-28)

Thus

$B_0 = \frac{1}{g_0(t)}$. (2-29)

Substituting (2-29) into (2-26) gives $\delta_{0i} = \frac{1}{g_0(t)} C_{0i} \prod_{j=1}^m t_j^{a_{0ij}}$, which shows that (2-23) is true. From the feasibility condition (2-2), $g_k(t) \le 1$ for $k = 1, \ldots, p$, and equality in (2-19) forces $g_k(t)^{\lambda_k} = 1$, so that

$g_k(t) = \sum_{i=1}^{n_k} C_{ki} \prod_{j=1}^m t_j^{a_{kij}} = 1$ whenever $\lambda_k > 0$, $k = 1, \ldots, p$.

Hence (2-27) becomes $\sum_{i=1}^{n_k} \delta_{ki} = B_k$, and using (2-6), $\lambda_k = \sum_{i=1}^{n_k} \delta_{ki} = B_k$, which establishes (2-24). Finally, substituting (2-23) and (2-24) into the dual function (2-3),

$v(\delta) = g_0(t)^{\sum_{i=1}^{n_0} \delta_{0i}} \prod_{k=1}^p \lambda_k^{-\lambda_k} \prod_{j=1}^m t_j^{-\sum_{k=0}^p \sum_{i=1}^{n_k} a_{kij}\delta_{ki}} \prod_{k=1}^p \lambda_k^{\lambda_k}$,

and by normality (2-4) and orthogonality (2-5) this reduces to $v(\delta) = g_0(t)$, as required. $\blacksquare$

**Theorem 2.3.** Let $\delta^*$ be a solution to the dual problem (2-3)-(2-6). Then any vector $t > 0$ that satisfies the system of equations

$C_{0i} \prod_{j=1}^m t_j^{a_{0ij}} = \delta_{0i}^* \, v(\delta^*)$ for $i = 1, \ldots, n_0$, (2-35)

$C_{ki} \prod_{j=1}^m t_j^{a_{kij}} = \frac{\delta_{ki}^*}{\lambda_k^*}$ for $i = 1, \ldots, n_k$ and each $k$ with $\lambda_k^* > 0$, (2-36)

is a minimizing point for the primal problem (2-1)-(2-2).

Equations (2-35) and (2-36) are used to obtain an optimal vector $t$ from the dual solution. Taking the natural logarithm of both sides of (2-35) and (2-36) results in a system of equations that is linear in the quantities $\ln t_j$. Specifically,

$\sum_{j=1}^m a_{0ij} \ln t_j = \ln v(\delta^*) + \ln \delta_{0i}^* - \ln C_{0i}$ for $i = 1, \ldots, n_0$,

$\sum_{j=1}^m a_{kij} \ln t_j = \ln \delta_{ki}^* - \ln \lambda_k^* - \ln C_{ki}$ for $i = 1, \ldots, n_k$ and each $k$ with $\lambda_k^* > 0$.

Linear algebra techniques can then be used to solve this system for the $t_j$, as sketched below.
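Continuing the small invented illustration from earlier, the following sketch carries out this recovery step numerically: it takes logarithms of (2-35)-(2-36) and solves the resulting linear system for the $\ln t_j$ by least squares. The data are those of the toy problem, not of the text's examples.

```python
import numpy as np

# Recover t from the dual solution of the toy problem above via (2-35)-(2-36).
# Row 1 is the single g0 term, equation (2-35); rows 2-3 are the g1 terms, (2-36).
A = np.array([[-1.0, -1.0],
              [ 1.0,  0.0],
              [ 0.0,  1.0]])          # exponents a_{kij}
C = np.array([1.0, 1.0, 1.0])         # coefficients C_{ki}
d = np.array([1.0, 1.0, 1.0])         # dual solution delta*
v_star = 4.0                          # optimal dual value v(delta*)
lam1 = d[1] + d[2]                    # lambda_1*

# Right-hand sides: ln(delta* v / C) for the g0 term, ln(delta* / (lambda* C)) for g1.
b = np.log(np.array([d[0] * v_star / C[0],
                     d[1] / (lam1 * C[1]),
                     d[2] / (lam1 * C[2])]))

log_t, *_ = np.linalg.lstsq(A, b, rcond=None)  # solve A @ ln(t) = b
print("t* =", np.exp(log_t))                   # approx (0.5, 0.5)
```

Least squares is used because the system is overdetermined (one equation per term, one unknown per variable); for a consistent dual solution the residual is zero.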
To illustrate the theory previously developed, consider the following examples.

**Example 2.1.**

**Minimize** $g_0(t) = t_3^{0.8} t_4^{1.4}$

**Subject to**

$g_1(t) = t_1^{-2} t_3^{-1} + t_2 t_3^{-1} \le 1$

$g_2(t) = t_1 t_4^{-1} + t_2^{-1} t_4^{-1} \le 1$

**Solution.** From Definition 2.1, $D = 5 - 4 - 1 = 0$, so the normality and orthogonality conditions determine the dual vector uniquely: $\delta^* = (\delta_{01}^*, \delta_{11}^*, \delta_{12}^*, \delta_{21}^*, \delta_{22}^*) = (1, 0.6, 0.2, 1.2, 0.2)$, with $\lambda_1^* = 0.8$ and $\lambda_2^* = 1.4$. Equation (2-36) applied to the last term gives $t_2^{-1} t_4^{-1} = \delta_{22}^*/\lambda_2^* = 0.143$, and solving the full system (2-35)-(2-36) yields $t^* = (0.055, 108.3, 433.21, 0.065)$.

**Example 2.2.**

**Minimize** $g_0(t) = t_1 t_2 t_3 + 25 t_1 t_2^{-1} t_3$

**Subject to**

$g_1(t) = 0.1 t_1 t_4 \le 1$

$g_2(t) = 5 t_1 t_3^{-1} + t_2^2 t_3^{-1} \le 1$

**Solution.** Here $D = 1$. The dual problem is to

**Maximize** $v(\delta) = \left(\frac{1}{\delta_{01}}\right)^{\delta_{01}} \left(\frac{25}{\delta_{02}}\right)^{\delta_{02}} \left(\frac{0.1}{\delta_{11}}\right)^{\delta_{11}} \left(\frac{5}{\delta_{21}}\right)^{\delta_{21}} \left(\frac{1}{\delta_{22}}\right)^{\delta_{22}} \lambda_1^{\lambda_1} \lambda_2^{\lambda_2}$

**Subject to** the normality condition and the orthogonality conditions, among them

$-\delta_{01} + \delta_{02} + \delta_{11} + \delta_{21} = 0.$

Solving the dual and recovering $t$ from (2-35)-(2-36) gives $v(\delta^*) = 10.5 = g_0(t^*)$ and $t^* = (1.32, 7.58, 61.56)$.
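As a quick numerical check of Example 2.1, the reported point can be substituted back into the primal functions; using the rounded values of $t^*$ quoted above, both constraints come out active to within rounding error.

```python
# Check the reported solution of Example 2.1 by direct substitution,
# using the rounded values of t* quoted above.
t1, t2, t3, t4 = 0.055, 108.3, 433.21, 0.065

g0 = t3**0.8 * t4**1.4                    # objective
g1 = t1**-2 * t3**-1 + t2 * t3**-1        # first constraint posynomial
g2 = t1 * t4**-1 + t2**-1 * t4**-1        # second constraint posynomial

print(f"g0(t*) = {g0:.3f}")   # ~2.80, the minimum cost
print(f"g1(t*) = {g1:.3f}")   # ~1.01, active up to rounding
print(f"g2(t*) = {g2:.3f}")   # ~0.99, active up to rounding
```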
