
DSD Full PPT Module1-6.pdf




DIGITAL SYSTEM DESIGN Text Book: Stephen Brown and Zvonko Vranesic, Fundamentals of Digital Logic with Verilog Design (3e), Tata McGraw Hill 2014. The slides in this document are prepared using the above text book.

Variables and Functions
We say that L(x) = x is a logic function and that x is an input variable.
L(x1, x2) = x1 · x2, where L = 1 if x1 = 1 and x2 = 1, and L = 0 otherwise. The "·" symbol is called the AND operator, and the circuit in Figure (a) is said to implement a logical AND function.
L(x1, x2) = x1 + x2, where L = 1 if x1 = 1 or x2 = 1 or if x1 = x2 = 1, and L = 0 if x1 = x2 = 0. The + symbol is called the OR operator, and the circuit in Figure 2.3b is said to implement a logical OR function.
L(x1, x2, x3) = (x1 + x2) · x3
L(x) = x'. The complement operation can be applied to a single variable or to more complex expressions. For example, if f(x1, x2) = x1 + x2, then the complement of f is f'(x1, x2) = (x1 + x2)'.

Truth Tables
x | x'
0 | 1
1 | 0
An AND gate gives a HIGH output if and only if all of its inputs are HIGH. An OR gate output is LOW if and only if all of its inputs are LOW.

NAND            NOR
A B | Q         A B | Q
0 0 | 1         0 0 | 1
0 1 | 1         0 1 | 0
1 0 | 1         1 0 | 0
1 1 | 0         1 1 | 0
A NAND gate gives a LOW output if and only if all of its inputs are HIGH. A NOR gate output is HIGH if and only if all of its inputs are LOW.

Exclusive OR (XOR): an XOR gate output is HIGH if it has an odd number of HIGH inputs.

Boolean Algebra
Axioms
1a. 0 · 0 = 0                1b. 1 + 1 = 1
2a. 1 · 1 = 1                2b. 0 + 0 = 0
3a. 0 · 1 = 1 · 0 = 0        3b. 1 + 0 = 0 + 1 = 1
4a. If x = 0, then x' = 1    4b. If x = 1, then x' = 0
Single-Variable Theorems
5a. x · 0 = 0    5b. x + 1 = 1
6a. x · 1 = x    6b. x + 0 = x
7a. x · x = x    7b. x + x = x
8a. x · x' = 0   8b. x + x' = 1
9. (x')' = x

Duality: Given a logic expression, its dual is obtained by replacing all + operators with · operators, and vice versa, and by replacing all 0s with 1s, and vice versa. The dual of any true statement in Boolean algebra is also a true statement.
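Since each single-variable theorem involves only one variable, all of them can be checked exhaustively. A small Python sketch (the encoding of AND, OR, and NOT as &, |, and 1 - x is mine, not from the slides):

```python
# Exhaustively verify theorems 5a-9 over both values of x,
# using & for AND, | for OR, and 1 - x for the complement x'.
for x in (0, 1):
    nx = 1 - x            # x'
    assert x & 0 == 0     # 5a
    assert x | 1 == 1     # 5b
    assert x & 1 == x     # 6a
    assert x | 0 == x     # 6b
    assert x & x == x     # 7a
    assert x | x == x     # 7b
    assert x & nx == 0    # 8a
    assert x | nx == 1    # 8b
    assert 1 - nx == x    # 9: (x')' = x
print("all single-variable theorems hold")
```

The same exhaustive style extends to the two- and three-variable properties, since a function of n variables only needs 2^n checks.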
Two- and Three-Variable Properties

Precedence of Operations: In the absence of parentheses, operations in a logic expression must be performed in the order NOT, AND, and then OR. Given f = a·b + a'·b', first a' and b' are evaluated, then ab and a'b', and finally ab + a'b'.

Qn: Prove the validity of the given logic equation. Solution: applying the distributive property to the left-hand side gives an expression that is the same as the right-hand side of the initial equation.

Qn: Prove the validity of the given logic equation, using x + x' = 1 and x + x'y = x + y.

Sum-of-Products and Product-of-Sums Forms: If a function f is specified in the form of a truth table, then an expression that realizes f can be obtained by considering either the rows in the table for which f = 1, or the rows for which f = 0.

Minterms: For a function of n variables, a product term in which each of the n variables appears once, either in uncomplemented or complemented form, is called a minterm. For a given row of the truth table, the minterm is formed by including xi if xi = 1 and by including xi' if xi = 0.

Sum-of-Products Form: Any function f can be represented by a sum of minterms that correspond to the rows in the truth table for which f = 1. A logic expression consisting of product (AND) terms that are summed (ORed) is said to be in the sum-of-products (SOP) form. If each product term is a minterm, then the expression is called a canonical sum-of-products for the function f.

Example 2: This is the minimum-cost sum-of-products expression for f. The cost of a logic circuit is the total number of gates plus the total number of inputs to all gates in the circuit. The cost of the previous network is 13, because there are five gates and eight inputs to the gates.
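The canonical SOP can be generated mechanically from the minterm numbers. A Python sketch (the function names are mine), using the function f = Σm(1, 4, 5, 6) that appears just below in the text:

```python
# Build the canonical sum-of-products for a function given as a
# set of minterm numbers, writing x1, x2, ... with ' for NOT.
def minterm_expr(m, nvars):
    bits = format(m, f"0{nvars}b")   # bit i gives the value of x_(i+1)
    return "".join(f"x{i + 1}" + ("" if b == "1" else "'")
                   for i, b in enumerate(bits))

def canonical_sop(minterms, nvars):
    return " + ".join(minterm_expr(m, nvars) for m in sorted(minterms))

print(canonical_sop({1, 4, 5, 6}, 3))
# x1'x2'x3 + x1x2'x3' + x1x2'x3 + x1x2x3'
```

Each row number is simply read off bit by bit, including xi for a 1 and xi' for a 0, exactly as the minterm definition above prescribes.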
The previous function f can also be specified as f(x1, x2, x3) = Σ(m1, m4, m5, m6), or even more simply as f(x1, x2, x3) = Σm(1, 4, 5, 6).

Qn1: Simplify f(x1, x2, x3) = Σm(2, 3, 4, 6, 7).
Qn2: Simplify f(x1, x2, x3, x4) = Σm(3, 7, 9, 12, 13, 14, 15).

Maxterms: The principle of duality suggests that if it is possible to synthesize a function f by considering the rows in the truth table for which f = 1, then it should also be possible to synthesize f by considering the rows for which f = 0. This alternative approach uses the complements of minterms, which are called maxterms.

Product-of-Sums Form: If a given function f is specified by a truth table, then its complement f' can be represented by a sum of minterms for which f' = 1, which are the rows where f = 0. If we complement this expression using DeMorgan's theorem, the result is a product of sum terms, and f can be expressed in that form. A logic expression consisting of sum (OR) terms that are the factors of a logical product (AND) is said to be of the product-of-sums (POS) form. If each sum term is a maxterm, then the expression is called a canonical product-of-sums for the given function.

Using the commutative property and the associative property, together with the identity (x + y)(x + y') = x, the expression can be simplified; the cost of this network is 13. An alternative way of specifying our sample function is f(x1, x2, x3) = Π(M0, M2, M3, M7), or more simply f = ΠM(0, 2, 3, 7). Another way of deriving this product-of-sums expression is to use the sum-of-products form of f'.

SOP Exercises:
1.
2. f = x2'(x1 + x1'x3') + x3(x1 + x1'x2)
     = x2'(x1 + x3') + x3(x1 + x2)                      (using x + x'y = x + y)
     = x1x2' + x2'x3' + x1x3 + x2x3
     = x2'x3' + x2x3 + x1x2'  or  x2x3 + x1x3 + x2'x3'  (using xy + yz + x'z = xy + x'z)
3. (x1 + x2)(x2' + x3)

Karnaugh map: It is an alternative to the truth-table form for representing a function. The map consists of cells that correspond to the rows of the truth table.
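For the sample function, the minterm and maxterm specifications describe the same rows from opposite sides, which a short exhaustive check confirms (the minterm and maxterm numbers are taken from the example above):

```python
# f(x1,x2,x3) = sum m(1,4,5,6) and f = PI M(0,2,3,7) describe the
# same function: f = 1 exactly on the minterm rows and f = 0
# exactly on the maxterm rows.
minterms = {1, 4, 5, 6}
maxterms = {0, 2, 3, 7}
for row in range(8):
    assert (row in minterms) == (row not in maxterms)
print("minterm and maxterm specifications agree on all 8 rows")
```

This is just the observation in the text: the maxterm list is the set of rows where f' = 1.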
it allows easy recognition of minterms that can be combined.

Three-Variable Map: Minterms in any two cells that are adjacent, either in the same row or the same column, can be combined. The adjacent cells must differ in the value of only one variable. Thus the columns are identified by the sequence of (x1, x2) values 00, 01, 11, and 10. This makes the second and third columns differ only in variable x1. Also, the first and the fourth columns differ only in variable x1, which means that these columns can be considered as being adjacent. (A sequence of codes, or valuations, where consecutive codes differ in one variable only is known as the Gray code.)

Four-Variable Map (two examples; rows x3x4, columns x1x2):

x3x4 \ x1x2   00  01  11  10
00             1   0   0   1
01             0   0   0   0
11             1   1   1   0
10             1   1   0   1

x3x4 \ x1x2   00  01  11  10
00             1   1   1   0
01             1   1   1   0
11             0   0   1   1
10             0   0   1   1

Five-Variable Map (two four-variable maps, one for x1 = 0 and one for x1 = 1; rows x4x5, columns x2x3):

x1 = 0:
x4x5 \ x2x3   00  01  11  10
00             0   1   1   0
01             0   1   1   0
11             0   1   1   0
10             0   1   1   0

x1 = 1:
x4x5 \ x2x3   00  01  11  10
00             0   0   0   0
01             1   0   0   0
11             1   0   0   1
10             1   0   0   1

Terminology: Some of the terminology that is useful for describing the minimization process:

Literal: Each appearance of a variable, either uncomplemented or complemented, in a logical term is called a literal. For example, the product term x1x2x3 has three literals, and the sum term (x1' + x3 + x4' + x6) has four literals.

Implicant: A product term that indicates the input valuation(s) for which a given function is equal to 1 is called an implicant of the function. The most basic implicants are the minterms. Example: f(x1, x2, x3) = Σm(0, 1, 2, 3, 7). There are 11 implicants for this function: five minterms (x1'x2'x3', x1'x2'x3, x1'x2x3', x1'x2x3, and x1x2x3); five implicants that correspond to all possible pairs of adjacent minterms (x1'x2' for m0 and m1, x1'x3' for m0 and m2, x1'x3 for m1 and m3, x1'x2 for m2 and m3, and x2x3 for m3 and m7); and one implicant that covers a group of four minterms: x1'.
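The Gray-code column ordering is what makes physically adjacent K-map cells differ in exactly one variable. A Python sketch of the standard reflected Gray code (the i ^ (i >> 1) construction is a well-known formula, not taken from the slides):

```python
# Reflected binary Gray code: consecutive n-bit labels (including
# the wrap-around pair) differ in exactly one bit.
def gray(n):
    return [i ^ (i >> 1) for i in range(2 ** n)]

cols = gray(2)
print([format(c, "02b") for c in cols])   # ['00', '01', '11', '10']
for i in range(len(cols)):
    diff = cols[i] ^ cols[(i + 1) % len(cols)]
    assert bin(diff).count("1") == 1      # only one variable changes
```

The wrap-around check is what justifies treating the first and fourth columns of the map as adjacent.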
Prime Implicant: An implicant is called a prime implicant if it cannot be combined into another implicant that has fewer literals. Here there are two prime implicants: x1' and x2x3.

Cover: A collection of implicants that account for all valuations for which a given function is equal to 1 is called a cover of that function. A number of different covers exist for most functions: the set of all minterms for which f = 1 is a cover, and so is the set of all prime implicants. A cover defines a particular implementation of the function. While all of these expressions represent the function f correctly, the cover consisting of prime implicants leads to the lowest-cost implementation.

Cost: The cost of a logic circuit is the number of gates plus the total number of inputs to all gates in the circuit. If we assume that the input variables are available in both true and complemented forms, then the cost of the expression is 9; otherwise its cost is 13. If an inversion is needed inside a circuit, then the corresponding NOT gate and its input are included in the cost. For example, the expression given has a cost of 14.

Essential Prime Implicant: If a prime implicant includes a minterm for which f = 1 that is not included in any other prime implicant, then it is called an essential prime implicant.

Minimization Procedure: Consider the following example, in which there is a choice as to which prime implicants to include in the final cover. There are five prime implicants: x1'x3, x2'x3, x3x4', x2x3'x4, and x1'x2x4. The essential ones are x2'x3 (because of m11), x3x4' (because of m14), and x2x3'x4 (because of m13); they must be included in the cover. These three prime implicants cover all minterms for which f = 1 except m7. Clearly, m7 can be covered by either x1'x3 or x1'x2x4; because x1'x3 has a lower cost, it is chosen for the cover, which gives the minimum-cost realization.

The process of finding a minimum-cost circuit involves the following steps:
1. Generate all prime implicants for the given function f.
2. Find the set of essential prime implicants.
3. If the set of essential prime implicants covers all valuations for which f = 1, then this set is the desired cover of f. Otherwise, determine the nonessential prime implicants that should be added to form a complete minimum-cost cover.

Consider the following example: only x3'x4' is essential, and the best choice for implementing the minimum-cost circuit follows from it. Sometimes there may not be any essential prime implicants at all; in that case there are two (or more) alternative minimum-cost covers.

Minimization of Product-of-Sums Forms

Example 1: Its cost is greater than the cost of the equivalent SOP implementation. SOP cost: 6 (assuming input variables are available in both true and complemented forms); POS cost: 9.

Example 2: Assuming that both the complemented and uncomplemented versions of the input variables x1 to x4 are available, SOP cost: 18; POS cost: 15. SOP and POS implementations of a given function may or may not entail the same cost.

EXERCISES: Find the POS implementation for the following and compare the cost with the SOP form.

1. SOP (either of two alternative expressions):
   Cost: four 3-input AND gates (12 inputs + 4 gates = 16) and one 4-input OR gate (4 inputs + 1 gate = 5), so cost = 21.
   POS: (x1' + x2 + x3)(x1' + x2' + x4)(x1 + x2' + x3')(x1 + x2 + x4') or (x1' + x3 + x4)(x2 + x3 + x4')(x1 + x3' + x4')(x2' + x3' + x4).
   Cost: four 3-input OR gates (12 + 4 = 16) and one 4-input AND gate (4 + 1 = 5), so cost = 21.

2. SOP cost: one 2-input AND gate (2 + 1 = 3), two 3-input AND gates (6 + 2 = 8), and one 3-input OR gate (3 + 1 = 4), so cost = 15.
   POS: (x1 + x4')(x1 + x3')(x2 + x3 + x4')(x2' + x3' + x4). Cost: 19.

Qn: Obtain the simplest SOP and POS for f(x1, ..., x5) = ΠM(1, 4, 6, 7, 9, 12, 15, 17, 20, 21, 22, 23, 28, 31).

x1 = 0:
x4x5 \ x2x3   00  01  11  10
00             1   0   0   1
01             0   1   1   0
11             1   0   0   1
10             1   0   1   1

x1 = 1:
x4x5 \ x2x3   00  01  11  10
00             1   0   0   1
01             0   0   1   1
11             1   0   0   1
10             1   0   1   1

Incompletely Specified Functions: In digital systems it often happens that certain input conditions can never occur.
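The definitions of implicant and prime implicant can be turned into a brute-force search for small functions. A Python sketch (the representation and names are mine), applied to f(x1, x2, x3) = Σm(0, 1, 2, 3, 7) from the implicant example:

```python
# Brute-force implicant / prime-implicant search.  A product term
# is a tuple with one entry per variable: 0, 1, or None (absent).
from itertools import product

minterms = {0, 1, 2, 3, 7}
NVARS = 3

def covers(term, m):
    bits = format(m, f"0{NVARS}b")
    return all(t is None or t == int(b) for t, b in zip(term, bits))

def rows(term):
    return [m for m in range(2 ** NVARS) if covers(term, m)]

# an implicant is 1 only on rows where f = 1 (and on at least one row)
implicants = [t for t in product((0, 1, None), repeat=NVARS)
              if rows(t) and set(rows(t)) <= minterms]

def nlits(t):
    return sum(v is not None for v in t)

# prime: not absorbed by an implicant with fewer literals
primes = [t for t in implicants
          if not any(nlits(o) < nlits(t) and set(rows(t)) <= set(rows(o))
                     for o in implicants)]
print(len(implicants), primes)
# 11 implicants; primes (0, None, None) and (None, 1, 1), i.e. x1' and x2x3
```

This reproduces the counts stated earlier: 11 implicants, of which only x1' and x2x3 are prime.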
For example, suppose that x1 and x2 control two interlocked switches such that both switches cannot be closed at the same time. Then the input valuation (x1, x2) = 11 is guaranteed not to occur, and we say that (x1, x2) = 11 is a don't-care condition, meaning that a circuit with x1 and x2 as inputs can be designed by ignoring this condition. A function that has don't-care condition(s) is said to be incompletely specified.

NAND and NOR Implementation

Qn: Use Karnaugh maps to find the minimum-cost SOP and POS expressions for the function f(x1, ..., x4) = x1'x3'x4' + x3x4 + x1'x2'x4 + x1x2x3'x4, assuming that there are also don't-cares defined as D = Σ(9, 12, 14).

Qn: A three-variable logic function that is equal to 1 if any two or all three of its variables are equal to 1 is called a majority function. Design a minimum-cost SOP circuit that implements this majority function.

a b c | f
0 0 0 | 0
0 0 1 | 0
0 1 0 | 0
0 1 1 | 1
1 0 0 | 0
1 0 1 | 1
1 1 0 | 1
1 1 1 | 1

Multilevel Synthesis

Factoring and the Fan-in Problem: The number of inputs that a gate has is called its fan-in. Factoring can be used to deal with the fan-in problem, i.e., with functions that would otherwise call for gates with more inputs than the technology allows.

Functional Decomposition: Using factoring, multilevel circuits can deal with fan-in limitations. However, such circuits may be preferable even if fan-in is not a problem: in some cases multilevel circuits reduce the cost of implementation. On the other hand, they usually imply longer propagation delays, because they use multiple stages of logic gates. The complexity of a logic circuit can often be reduced by decomposing a two-level circuit into subcircuits, where one or more subcircuits implement functions that may be used in several places to construct the final circuit.

Example 1: Assuming that inputs are available only in their true form, this expression has seven gates and 18 inputs to gates, for a total cost of 25. To perform functional decomposition we need to find one or more subfunctions that depend on only a subset of the input variables.
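For the majority exercise, the standard minimum-cost SOP answer is f = ab + ac + bc (the expression itself is not reproduced in the slides); a quick check against the truth table above confirms it:

```python
# Verify f = ab + ac + bc against the majority truth table.
for v in range(8):
    a, b, c = (v >> 2) & 1, (v >> 1) & 1, v & 1
    majority = int(a + b + c >= 2)        # 1 when two or three inputs are 1
    assert ((a & b) | (a & c) | (b & c)) == majority
print("f = ab + ac + bc implements the majority function")
```

Each product term covers one pair of adjacent 1-cells in the K-map of the table.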
Now let g(x3, x4) = x3'x4 + x3x4' and observe that g' = x3'x4' + x3x4. The decomposed function f can be written as f = x1g + x2g'.

Example 2:
f = x3'x4(x1 + x2 + x5) + x3x4'(x1 + x2 + x5) + x1'x2'x5'(x3'x4' + x3x4)
  = (x1 + x2 + x5)(x3'x4 + x3x4') + x1'x2'x5'(x3'x4' + x3x4)
Let g = x1 + x2 + x5 and k = x3'x4 + x3x4'. Then f = gk + g'k' = (g XOR k)'. This form requires a total of three gates and seven inputs to gates, for a total cost of 10, and the largest fan-in of any gate is three. The minimum-cost SOP has a cost of 55 (14 gates and 41 inputs), with a largest fan-in of 8.

Qn: Implement the XOR function using only NAND gates. Starting from x1 XOR x2 = x1'x2 + x1x2', exploit functional decomposition to find a better implementation of XOR using only NAND gates. Let the symbol ↑ represent the NAND operation, so that x1 ↑ x2 = (x1 · x2)'. To find a decomposition, we can manipulate the term (x1 ↑ x2') as follows:
(x1 ↑ x2') = (x1x2')' = (x1(x1' + x2'))' = (x1 ↑ (x1' + x2'))
We can perform a similar manipulation for (x1' ↑ x2):
(x1' ↑ x2) = (x1'x2)' = (x2(x1' + x2'))' = (x2 ↑ (x1' + x2'))
DeMorgan's theorem states that x1' + x2' = x1 ↑ x2; hence we can write
x1 XOR x2 = (x1 ↑ (x1 ↑ x2)) ↑ (x2 ↑ (x1 ↑ x2))
Now we have a decomposition that uses only four NAND gates.

Optimal NAND gate implementation: Find the minimum-cost circuit for the function f(x1, ..., x4) = Σm(0, 4, 8, 13, 14, 15). Assume that the input variables are available in uncomplemented form only.

ARITHMETIC CIRCUITS

Addition of Unsigned Numbers: One-bit addition entails four possible input combinations. The sum bit s is the XOR function of x and y, and the carry c is the AND function of x and y. A circuit realization of these functions is shown in Fig. c. This circuit, which implements the addition of only two bits, is called a half-adder.

Multibit Addition: For each bit position i, the addition operation may include a carry-in from bit position i − 1. This observation leads to the design of a logic circuit that has three inputs xi, yi, and ci, and produces the two outputs si and ci+1. This circuit is known as a full-adder.
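The four-NAND decomposition of XOR derived above can be checked exhaustively. A small Python sketch, modeling a NAND gate as 1 - (a & b):

```python
# Check  x1 XOR x2 = (x1 NAND t) NAND (x2 NAND t),  where t = x1 NAND x2.
def nand(a, b):
    return 1 - (a & b)

for x1 in (0, 1):
    for x2 in (0, 1):
        t = nand(x1, x2)
        assert nand(nand(x1, t), nand(x2, t)) == x1 ^ x2
print("four-NAND XOR decomposition verified")
```

The shared intermediate signal t = x1 ↑ x2 is exactly what the functional decomposition saves: it is computed once and used twice.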
Decomposed Full-Adder:
s1 = xi XOR yi
c1 = xiyi
si = s1 XOR ci = xi XOR yi XOR ci
c2 = s1ci = (xi XOR yi)ci = xi'yici + xiyi'ci
ci+1 = c1 + c2 = xiyi + xi'yici + xiyi'ci
     = xiyi(ci + ci') + xi'yici + xiyi'ci
     = xiyici + xiyici' + xi'yici + xiyi'ci
     = xiyi + yici + xici

Ripple-Carry Adder: For each bit position we can use a full-adder circuit, connected as shown in the figure. When the operands X and Y are applied as inputs to the adder, it takes some time before the output sum, S, is valid, because each full-adder introduces a certain delay before its si and ci+1 outputs are valid. Let this delay be denoted as Δt. Thus the carry-out from the first stage, c1, arrives at the second stage Δt after the application of the x0 and y0 inputs. The carry-out from the second stage, c2, arrives at the third stage with a 2Δt delay, and so on. The signal cn−1 is valid after a delay of (n − 1)Δt, which means that the complete sum is available after a delay of nΔt. Because of the way the carry signals "ripple" through the full-adder stages, the circuit is called a ripple-carry adder.

Adder and Subtractor Unit: Each bit yi of Y passes through an XOR gate driven by the Add/Sub control signal (yi XOR Add/Sub). When Add/Sub = 0, the unit performs X + Y. When Add/Sub = 1, it performs X + Y' + 1 = X + (2's complement of Y) = X − Y.

Arithmetic Overflow: The result of an addition or subtraction is supposed to fit within the significant bits used to represent the numbers. If n bits are used to represent signed numbers, then the result must be in the range −2^(n−1) to 2^(n−1) − 1. If the result does not fit in this range, then we say that arithmetic overflow has occurred. In the four-bit example, when both numbers have the same sign, the magnitude of the result is 9, which cannot be represented with 4 bits in 2's-complement representation; therefore, overflow occurs. When the numbers have opposite signs, there is no overflow. The key to determining whether overflow occurs is the carry-out from the sign-bit position, called c4 in the figure, and the carry-out from the previous bit position, called c3.
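The full-adder equations above extend directly to an n-bit ripple-carry adder. A Python sketch (the function name is mine):

```python
# Ripple-carry adder built from the full-adder equations
#   s_i = x_i XOR y_i XOR c_i,   c_(i+1) = x_i y_i + y_i c_i + x_i c_i.
def ripple_add(x, y, n, c0=0):
    s, c = 0, c0
    for i in range(n):
        xi, yi = (x >> i) & 1, (y >> i) & 1
        s |= (xi ^ yi ^ c) << i
        c = (xi & yi) | (yi & c) | (xi & c)
    return s, c          # n-bit sum and the final carry-out c_n

# check against ordinary integer addition for all 4-bit operands
for x in range(16):
    for y in range(16):
        s, cout = ripple_add(x, y, 4)
        assert (cout << 4) | s == x + y
print("4-bit ripple-carry adder verified")
```

The loop mirrors the hardware structure: stage i cannot produce its outputs until stage i − 1 has produced c_i, which is the source of the nΔt delay discussed above.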
The figure indicates that overflow occurs when these carry-outs have different values, and a correct sum is produced when they have the same value; that is, Overflow = c3 ⊕ c4. Indeed, this is true in general for both addition and subtraction of 2's-complement numbers.

An alternative and more intuitive way of detecting arithmetic overflow is to observe that overflow occurs if both summands have the same sign but the resulting sum has a different sign. Let X = x3x2x1x0 and Y = y3y2y1y0 represent four-bit 2's-complement numbers, and let S = s3s2s1s0 be the sum S = X + Y. Then

Overflow = x3y3s3' + x3'y3's3

The carry-out and overflow signals indicate whether the result of a given addition is too large to fit into the number of bits available to represent the sum. The carry-out is meaningful only when unsigned numbers are involved, while the overflow is meaningful only in the case of signed numbers.

Fast Adders: The addition and subtraction of numbers are fundamental operations that are performed frequently in the course of a computation, and the speed with which they are performed has a strong impact on the overall performance of a computer. Let us take a closer look at the speed of the adder/subtractor unit. We are interested in the largest delay from the time the operands X and Y are presented as inputs until the time all bits of the sum S and the final carry-out, cn, are valid. The delay for the carry-out signal in the full-adder circuit, Δt, is equal to two gate delays, so the final result of the addition is valid after a delay of nΔt, which equals 2n gate delays. In addition to the delay in the ripple-carry path, there is also a delay in the XOR gates that feed either the true or complemented value of Y to the adder inputs. If this delay is equal to one gate delay, then the total delay of the circuit is 2n + 1 gate delays. For a large n, say n = 32 or n = 64, this delay would lead to unacceptably poor performance.
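The two overflow tests (comparing c3 with c4, and comparing the operand signs with the sum sign) can be shown to agree by exhaustive simulation:

```python
# For every pair of 4-bit operands, compare the two overflow
# formulas:  c3 XOR c4  versus  x3 y3 s3' + x3' y3' s3.
def add4(x, y):
    s, c, carries = 0, 0, []
    for i in range(4):
        xi, yi = (x >> i) & 1, (y >> i) & 1
        s |= (xi ^ yi ^ c) << i
        c = (xi & yi) | (yi & c) | (xi & c)
        carries.append(c)          # c1, c2, c3, c4
    return s, carries

for x in range(16):
    for y in range(16):
        s, (c1, c2, c3, c4) = add4(x, y)
        x3, y3, s3 = (x >> 3) & 1, (y >> 3) & 1, (s >> 3) & 1
        assert (c3 ^ c4) == ((x3 & y3 & (1 - s3)) | ((1 - x3) & (1 - y3) & s3))
print("both overflow formulas agree for all 4-bit operand pairs")
```

The case analysis is short: if x3 = y3, then c4 equals x3 while s3 equals c3; if x3 ≠ y3, then c4 = c3 and neither formula can signal overflow.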
The longest delay is along the path from the yi input, through the XOR gate and through the carry circuit of each adder stage. The longest delay is often referred to as the critical-path delay, and the path that causes it is called the critical path.

Carry-Lookahead Adder: To reduce the delay caused by carry propagation through the ripple-carry adder, we can attempt to evaluate quickly, for each stage, whether the carry-in from the previous stage will have the value 0 or 1. If a correct evaluation can be made in a relatively short time, then the performance of the complete adder will be improved. The carry can be written as

ci+1 = gi + pici

where gi = xiyi is called the generate function and pi = xi + yi is called the propagate function. Expanding in terms of stage i − 1 gives

ci+1 = gi + pigi−1 + pipi−1ci−1

The same expansion for other stages, ending with stage 0, gives

ci+1 = gi + pigi−1 + pipi−1gi−2 + · · · + pipi−1 · · · p1g0 + pipi−1 · · · p0c0

This expression represents a two-level AND-OR circuit in which ci+1 is evaluated very quickly. An adder based on this expression is called a carry-lookahead adder.

(Figure: a ripple-carry adder.) In the ripple-carry adder, the critical path is from inputs x0 and y0 to the output c2; it passes through five gates, and the path through the other stages of an n-bit adder is the same as in stage 1. Therefore, the total number of gate delays along the critical path is 2n + 1.

(Figure: the first two stages of a carry-lookahead adder.) In the carry-lookahead adder, all carry signals are produced after three gate delays: one gate delay is needed to produce the generate and propagate signals g0, g1, p0, and p1, and two more gate delays are needed to produce c1 and c2 concurrently. Extending the circuit to n bits, the final carry-out signal cn would also be produced after only three gate delays. The total delay in the n-bit carry-lookahead adder is four gate delays: the values of all gi and pi signals are determined after one gate delay, two more gate delays are needed to evaluate all carry signals, and one final gate delay (XOR) generates all sum bits. The key to the good performance of the adder is quick evaluation of the carry signals.
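The recurrence ci+1 = gi + pici can be checked against ordinary addition. A Python sketch with gi = xiyi and pi = xi + yi (OR):

```python
# Carries from the generate/propagate recurrence, compared with
# the true carries of x + y.
def cla_carries(x, y, n, c0=0):
    cs = [c0]
    for i in range(n):
        g = (x >> i) & (y >> i) & 1          # g_i = x_i y_i
        p = ((x >> i) | (y >> i)) & 1        # p_i = x_i + y_i
        cs.append(g | (p & cs[-1]))          # c_(i+1) = g_i + p_i c_i
    return cs                                # [c0, c1, ..., cn]

for x in range(16):
    for y in range(16):
        cs = cla_carries(x, y, 4)
        for i in range(1, 5):
            # true carry into bit i: overflow of the low i bits
            assert cs[i] == ((x % (1 << i)) + (y % (1 << i))) >> i
print("generate/propagate carries match ordinary addition")
```

The hardware advantage does not come from this sequential recurrence but from expanding it into the flat two-level AND-OR form shown above, so that every carry is computed directly from the gi, pi, and c0 signals.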
The complexity of an n-bit carry-lookahead adder increases rapidly as n becomes larger. To reduce the complexity, we can use a hierarchical approach in designing large adders. Suppose that we want to design a 32-bit adder. We can divide this adder into four eight-bit blocks, such that block 0 adds bits 7...0, block 1 adds bits 15...8, block 2 adds bits 23...16, and block 3 adds bits 31...24, and then implement each block as an eight-bit carry-lookahead adder. The carry-out signals from the four blocks are c8, c16, c24, and c32. Now we have two possibilities.

We can connect the four blocks as four stages in a ripple-carry adder; thus, while carry-lookahead is used within each block, the carries ripple between the blocks. (Figure: a hierarchical carry-lookahead adder with ripple-carry between blocks.) The delays are:
gi, pi: 1 gate delay
c1–c8: 2 gate delays
c9–c16: 2 gate delays
c17–c24: 2 gate delays
c25–c32: 2 gate delays
plus one more gate delay to produce the sum bits from the last block.
Total: 10 gate delays.

Instead of using a ripple-carry approach between blocks, a faster circuit can be designed in which a second-level carry-lookahead is performed to produce the carry signals between blocks quickly. The structure of this hierarchical carry-lookahead adder is shown in the figure below. (Figure: a hierarchical carry-lookahead adder.) Each block in the top row includes an eight-bit carry-lookahead adder, based on generate and propagate signals for each stage in the block. Instead of producing a carry-out signal from the most-significant bit of the block, each block produces generate and propagate signals for the entire block. Let Gj and Pj denote these signals for each block j. Now Gj and Pj can be used as inputs to a second-level carry-lookahead circuit at the bottom of the figure, which evaluates all carries between blocks.
We can derive the block generate and propagate signals for block 0 by examining the expression for c8:

G0 = g7 + p7g6 + p7p6g5 + · · · + p7p6p5p4p3p2p1g0
P0 = p7p6p5p4p3p2p1p0

so that c8 = G0 + P0c0. For block 1 the expressions for G1 and P1 have the same form as for G0 and P0, except that each subscript i is replaced by i + 8. The expressions for G2, P2, G3, and P3 are derived in the same way. The expression for the carry-out of block 1, c16, is

c16 = G1 + P1G0 + P1P0c0

Similarly, the expressions for c24 and c32 are

c24 = G2 + P2G1 + P2P1G0 + P2P1P0c0
c32 = G3 + P3G2 + P3P2G1 + P3P2P1G0 + P3P2P1P0c0

Using this scheme, it takes two more gate delays to produce the carry signals c8, c16, c24, and c32 than the time needed to generate the Gj and Pj functions. Therefore, since Gj and Pj require three gate delays, c8, c16, c24, and c32 are available after five gate delays. The time needed to add two 32-bit numbers involves these five gate delays, plus two more to produce the internal carries in blocks 1, 2, and 3, plus one more gate delay (XOR) to generate each sum bit. This gives a total of eight gate delays.

It takes 2n + 1 gate delays to add two numbers using a ripple-carry adder; for 32-bit numbers this implies 65 gate delays. It is clear that the carry-lookahead adder offers a large performance improvement.

Technology Considerations: Consider the expressions for the first eight carries in a CLA:

c1 = g0 + p0c0
c2 = g1 + p1g0 + p1p0c0
...
c8 = g7 + p7g6 + p7p6g5 + p7p6p5g4 + p7p6p5p4g3 + p7p6p5p4p3g2 + p7p6p5p4p3p2g1 + p7p6p5p4p3p2p1g0 + p7p6p5p4p3p2p1p0c0

Suppose that the maximum fan-in of the gates is four inputs. Then it is impossible to implement all of these expressions with a two-level AND-OR circuit. The biggest problem is c8, where one of the AND gates requires nine inputs; moreover, the OR gate also requires nine inputs. To meet the fan-in constraint, we can rewrite the expression for c8 as

c8 = (g7 + p7g6 + p7p6g5 + p7p6p5g4) + [(p7p6p5p4)(g3 + p3g2 + p3p2g1 + p3p2p1g0)] + (p7p6p5p4)(p3p2p1p0)c0

To implement this expression we need ten AND gates and three OR gates.
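The fan-in-limited factoring of c8 is equivalent to the flat two-level expression by distributivity, which a randomized check in Python confirms (the helper names are mine):

```python
# Compare the flat expression for c8 with the factored form that
# meets a fan-in limit of four.
import random

def flat_c8(g, p, c0):
    out = 0
    for i in range(8):
        term = g[i]
        for j in range(i + 1, 8):
            term &= p[j]                 # g_i AND p_(i+1) ... p_7
        out |= term
    tail = c0
    for j in range(8):
        tail &= p[j]                     # p_7 ... p_0 c_0
    return out | tail

def factored_c8(g, p, c0):
    hi = g[7] | (p[7] & g[6]) | (p[7] & p[6] & g[5]) | (p[7] & p[6] & p[5] & g[4])
    lo = g[3] | (p[3] & g[2]) | (p[3] & p[2] & g[1]) | (p[3] & p[2] & p[1] & g[0])
    p74 = p[7] & p[6] & p[5] & p[4]
    p30 = p[3] & p[2] & p[1] & p[0]
    return hi | (p74 & lo) | (p74 & p30 & c0)

random.seed(1)
for _ in range(1000):
    g = [random.randint(0, 1) for _ in range(8)]
    p = [random.randint(0, 1) for _ in range(8)]
    c0 = random.randint(0, 1)
    assert flat_c8(g, p, c0) == factored_c8(g, p, c0)
print("factored c8 matches the flat two-level expression")
```

Note that no gate in factored_c8 combines more than four signals, at the cost of the two extra gate delays discussed next.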
The propagation delay in generating c8 consists of one gate delay to develop all gi and pi, two gate delays to produce the sum-of-products terms in parentheses, one gate delay to form the product term in square brackets, and one delay for the final ORing of terms. Hence c8 is valid after five gate delays, rather than the three gate delays that would be needed without the fan-in constraint.

Multiplication: Two binary numbers can be multiplied using the same method as we use for decimal numbers.

Binary-Coded-Decimal Representation

BCD Addition: The K-map of the correction condition for the four-bit sum z8z4z2z1 (rows z8z4, columns z2z1) is:

z8z4 \ z2z1   00  01  11  10
00             0   0   0   0
01             0   0   0   0
11             1   1   1   1
10             0   0   1   1

Correction should be done when the carry k = 1 or when the expression z8z4 + z8z2 evaluates to 1; therefore, the expression for the correction circuit is k + z8z4 + z8z2.

(Figure: a two-digit BCD adder. Each digit pair Xi, Yi is four bits wide and feeds a single-digit BCD adder; Cin enters adder 0, Cout0 is the carry into adder 1, and the outputs are the digits S1 and S0 with the carry-out Cout1.)

Exercise 1: Design a circuit using full adders to generate the 2's complement of a 4-bit number X = x3x2x1x0; the full adders add 0001 to the complemented bits of X, and y3y2y1y0 is the 2's complement of X.

Exercise 2: Design a circuit using full adders to find the number of 1s in a three-bit unsigned number; a single full adder with inputs x2, x1, x0 produces carry c and sum s, and cs is the result in binary.

Example Problem: In computer computations it is often necessary to compare numbers. Two four-bit signed numbers, X = x3x2x1x0 and Y = y3y2y1y0, can be compared by using the subtractor circuit. The three outputs denote the following:
Z = 1 if the result is 0; otherwise Z = 0.
N = 1 if the result is negative; otherwise N = 0.
V = 1 if arithmetic overflow occurs; otherwise V = 0.
Show how Z, N, and V can be used to determine the cases X = Y, X < Y, X ≤ Y, X > Y, and X ≥ Y.

Consider first the case X < Y, where the following possibilities may arise: if X and Y have the same sign there will be no overflow, hence V = 0; then, for both positive and negative X and Y, the difference will be negative (N = 1).
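The BCD correction rule can be exercised for one digit in Python (the function name is mine):

```python
# One-digit BCD addition with the correction rule from the text:
# add the digits in binary; if the carry k = 1 or z8z4 + z8z2 = 1,
# add 6 to correct the result.
def bcd_digit_add(x, y, cin=0):
    z = x + y + cin                 # binary sum, 0..19
    k = z >> 4                      # carry out of the 4-bit adder
    z &= 0xF
    z8, z4, z2 = (z >> 3) & 1, (z >> 2) & 1, (z >> 1) & 1
    adjust = k | (z8 & z4) | (z8 & z2)
    if adjust:
        z = (z + 6) & 0xF
    return adjust, z                # carry-out and corrected digit

for x in range(10):
    for y in range(10):
        for cin in (0, 1):
            cout, s = bcd_digit_add(x, y, cin)
            assert cout * 10 + s == x + y + cin
print("single-digit BCD adder verified for all digit pairs")
```

The adjust signal is exactly k + z8z4 + z8z2, and chaining this function digit by digit mirrors the two-digit adder in the figure.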
If X is negative and Y is positive, the difference will be negative (N = 1) if there is no overflow (V = 0), but the result will be positive (N = 0) if there is overflow (V = 1). Therefore, if X < Y then N ⊕ V = 1. The case X = Y is detected by Z = 1. Then:
X ≤ Y is detected by Z + (N ⊕ V) = 1;
X > Y by (Z + (N ⊕ V))' = 1;
X ≥ Y by (N ⊕ V)' = 1.

Arithmetic Comparison Circuits: Let A = a3a2a1a0 and B = b3b2b1b0. Define a set of intermediate signals called i3, i2, i1, and i0, where each signal ik is 1 if the bits of A and B with the same index are equal; that is, ik = (ak ⊕ bk)'. The comparator's AeqB output is then given by AeqB = i3i2i1i0.

Introduction to Verilog HDL: Verilog was originally intended for simulation and verification of digital circuits. Subsequently, using CAD (Computer-Aided Design) tools, Verilog code came to be synthesized into a hardware implementation of the described circuit. We use Verilog in the lab for simulation and verification of digital circuits.

There are two ways of describing a circuit in Verilog:
1. Using Verilog constructs, describe the structure of the circuit in terms of circuit elements, such as logic gates. A larger circuit is defined by writing code that connects such elements together. This approach is referred to as the structural representation of logic circuits.
2. Using logic expressions and Verilog programming constructs that define the desired behavior of the circuit, but not its actual structure in terms of gates, describe the circuit more abstractly. This is called the behavioral representation.

Structural Specification of Logic Circuits: Verilog includes a set of gate-level primitives that correspond to commonly used logic gates. A gate is represented by indicating its functional name, output, and inputs.
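The comparison rules above can be verified by simulating the four-bit subtractor and its Z, N, V flags in Python (the overflow test for subtraction, "operand signs differ and the result sign differs from X", is my formulation of the rule in the text):

```python
# Compute Z, N, V from the 4-bit subtraction X - Y = X + Y' + 1
# and check the comparison conditions derived in the text.
def flags(x, y):
    r = (x + ((~y) & 0xF) + 1) & 0xF     # 4-bit result of X - Y
    z = int(r == 0)
    n = (r >> 3) & 1
    xs, ys, rs = (x >> 3) & 1, (y >> 3) & 1, (r >> 3) & 1
    v = int(xs != ys and rs != xs)       # overflow for subtraction
    return z, n, v

def signed(v):
    return v - 16 if v & 0x8 else v      # 4-bit 2's-complement value

for x in range(16):
    for y in range(16):
        z, n, v = flags(x, y)
        sx, sy = signed(x), signed(y)
        assert (sx < sy) == bool(n ^ v)      # X < Y  iff  N XOR V
        assert (sx == sy) == bool(z)         # X = Y  iff  Z
        assert (sx >= sy) == (not (n ^ v))   # X >= Y iff (N XOR V)'
print("Z, N, V comparison rules verified for all 4-bit pairs")
```

Note how the N ⊕ V test remains correct even in the overflow cases that a bare sign check would get wrong.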
For example, a two-input AND gate, with output y and inputs x1 and x2, is denoted as
and (y, x1, x2);
A four-input OR gate is specified as
or (y, x1, x2, x3, x4);
The available Verilog gate-level primitives include and, nand, or, nor, xor, xnor, not, and buf.

A logic circuit is specified in the form of a module that contains the statements that define the circuit. A module has inputs and outputs, which are referred to as its ports. The word port is a commonly used term that refers to an input or output connection to an electronic circuit.

module example1 (x1, x2, x3, f);
    input x1, x2, x3;
    output f;
    and (g, x1, x2);
    not (k, x2);
    and (h, k, x3);
    or (f, g, h);
endmodule

The first statement gives the module a name, example1, and indicates that there are four port signals. The next two statements declare that x1, x2, and x3 are to be treated as input signals, while f is the output. The actual structure of the circuit is specified in the four statements that follow. The NOT gate gives k = x2'. The AND gates produce g = x1x2 and h = kx3. The outputs of the AND gates are combined in the OR gate to form f = g + h = x1x2 + kx3 = x1x2 + x2'x3. The module ends with the endmodule statement.

Example 2 defines a circuit that has four input signals, x1, x2, x3, and x4, and three output signals, f, g, and h. It implements the logic functions
g = x1x3 + x2x4
h = (x1 + x3')(x2' + x4)
f = g + h

module example2 (x1, x2, x3, x4, f, g, h);
    input x1, x2, x3, x4;
    output f, g, h;
    and (z1, x1, x3);
    and (z2, x2, x4);
    or (g, z1, z2);
    or (z3, x1, ~x3);
    or (z4, ~x2, x4);
    and (h, z3, z4);
    or (f, g, h);
endmodule

Instead of using explicit NOT gates to define x2' and x3', we have used the Verilog operator "~" (the tilde character on the keyboard) to denote complementation. Thus, x2' is indicated as ~x2 in the code.

Verilog Syntax: The names of modules and signals in Verilog code follow two simple rules: the name must start with a letter, and it can contain any letter or number plus the "_" (underscore) and "$" characters.
Verilog is case sensitive. Indentation and blank lines can be used to make separate parts of the code easily recognizable, and comments may be included in the code to improve its readability. A comment begins with the double slash "//" and continues to the end of the line.

Behavioral Specification of Logic Circuits: Using gate-level primitives can be tedious when large circuits have to be designed. An alternative is to use more abstract expressions and programming constructs to describe the behavior of a logic circuit. One possibility is to define the circuit using logic expressions. A behavioral specification defines only the circuit's behavior; CAD synthesis tools use this specification to construct the actual circuit, and the detailed structure of the synthesized circuit will depend on the technology used.

module example1 (x1, x2, x3, f);
    input x1, x2, x3;
    output f;
    assign f = (x1 & x2) | (~x2 & x3);
endmodule

The AND and OR operations are indicated by the "&" and "|" Verilog operators, respectively. The assign keyword provides a continuous assignment for the signal f.

Behavioral code for example 2 is as follows:

module example2 (x1, x2, x3, x4, f, g, h);
    input x1, x2, x3, x4;
    output f, g, h;
    assign g = (x1 & x3) | (x2 & x4);
    assign h = (x1 | ~x3) & (~x2 | x4);
    assign f = g | h;
endmodule

Verilog operators:

Operator type   Operator symbols   Operation performed        Number of operands
Bitwise         ~                  1's complement             1
                &                  Bitwise AND                2
                |                  Bitwise OR                 2
                ^                  Bitwise XOR                2
                ~^ or ^~           Bitwise XNOR               2
Logical         !                  NOT                        1
                &&                 AND                        2
                ||                 OR                         2
Reduction       &                  Reduction AND              1
                ~&                 Reduction NAND             1
                |                  Reduction OR               1
                ~|                 Reduction NOR              1
                ^                  Reduction XOR              1
                ~^ or ^~           Reduction XNOR             1
Arithmetic      +                  Addition                   2
                -                  Subtraction                2
                -                  2's complement             1
                *                  Multiplication             2
                /                  Division                   2
Relational      >                  Greater than               2
                <                  Less than                  2
                >=                 Greater than or equal to   2
                <=                 Less than or equal to      2
Equality        ==                 Equality                   2
                !=                 Inequality                 2
Shift           >>                 Right shift                2
                <<                 Left shift                 2

(A comparator example is truncated in the source; the surviving fragment reads: "... B) AgtB = 1; else AltB = 1; end endmodule".)

Equality Operators: The expression (A == B) is evaluated as true if A is equal to B, and false otherwise. The != operator has the opposite effect.
The result is ambiguous (x) if either operand contains x or z values.

Shift Operators
A vector operand can be shifted to the right or left by a number of bits specified as a constant. When bits are shifted, the vacant bit positions are filled with 0s. For example, B = A >> 2; yields b2 = b1 = 0 and b0 = a2.

Concatenate Operator
This operator concatenates two or more vectors to create a larger vector. For example, D = {A, B}; defines the six-bit vector D = a2a1a0b2b1b0. Similarly, the concatenation E = {3’b111, A, 2’b00}; produces the eight-bit vector E = 111a2a1a000.

Replication Operator
This operator allows repetitive concatenation of the same vector, which is replicated the number of times indicated in the replication constant. For example, {3{A}} is equivalent to writing {A, A, A}. The specification {4{2’b10}} produces the eight-bit vector 10101010. The replication operator may be used in conjunction with the concatenate operator. For instance, {2{A}, 3{B}} is equivalent to {A, A, B, B, B}.

Synchronous Sequential Circuits
Text book: Digital Design by Morris Mano, 2nd edition, Chapter 6

The digital circuits considered thus far have been combinational, i.e., the outputs at any instant of time are entirely dependent upon the inputs present at that time. Although every digital system is likely to have combinational circuits, most systems encountered in practice also include memory elements, which require that the system be described in terms of sequential circuits. In the diagram, the memory elements are devices capable of storing binary information within them. The binary information stored in the memory elements at any given time defines the state of the sequential circuit. The sequential circuit receives binary information from external inputs. These inputs, together with the present state of the memory elements, determine the binary value at the output terminals. They also determine the condition for changing the state in the memory elements.
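Returning to the vector operators above, their effect can be sketched in Python with bit-vectors modeled as lists (MSB first); the symbolic bit names a2…b0 follow the text:

```python
A = ['a2', 'a1', 'a0']    # three-bit vector A, MSB first
B = ['b2', 'b1', 'b0']

# B = A >> 2: shift right by two; vacated positions are filled with 0s
shifted = [0, 0] + A[:-2]
assert shifted == [0, 0, 'a2']          # b2 = b1 = 0, b0 = a2

# D = {A, B}: concatenation into a six-bit vector
D = A + B
assert D == ['a2', 'a1', 'a0', 'b2', 'b1', 'b0']

# {3{A}} is equivalent to {A, A, A}
assert A * 3 == A + A + A

# {4{2'b10}} produces the eight-bit vector 10101010
assert [1, 0] * 4 == [1, 0, 1, 0, 1, 0, 1, 0]
```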
The block diagram demonstrates that the external outputs in a sequential circuit are a function not only of external inputs, but also of the present state of the memory elements. The next state of the memory elements is also a function of external inputs and the present state. Thus, a sequential circuit is specified by a time sequence of inputs, outputs, and internal states.

There are two main types of sequential circuits:
Synchronous sequential circuit
Asynchronous sequential circuit

A synchronous sequential circuit is a system whose behavior can be defined from the knowledge of its signals at discrete instants of time. Synchronization is achieved by a timing device called a clock generator, which provides a clock signal having the form of a periodic train of clock pulses. The activity within the circuit and the resulting updating of stored values is synchronized to the occurrence of clock pulses. The behavior of an asynchronous sequential circuit depends upon the order in which its input signals change and can be affected at any instant of time.

Flip Flop
A flip-flop is a binary storage device capable of storing one bit of information. In a stable state, the output of a flip-flop is either 0 or 1. A sequential circuit may use many flip-flops to store as many bits as necessary.

Basic Flip Flop Circuit
A FF circuit can be constructed from two cross-coupled NOR gates or two cross-coupled NAND gates, with two inputs labeled S for set and R for reset. Each FF has two outputs, Q and Q’. This type of FF is also called an SR latch.

Basic FF circuit with NOR gates
The output of a NOR gate is 0 if any input is 1, and the output is 1 only when all the inputs are 0. The latch has two useful states. When output Q = 1 and Q’ = 0, it is said to be in the set state. When Q = 0 and Q’ = 1, it is in the reset state. Outputs Q and Q’ are normally the complement of each other. The binary state of the FF is taken to be the value of the normal output.
When both inputs are equal to 1 at the same time, a condition in which both outputs are equal to 0 (rather than be mutually complementary) occurs. If both inputs are then switched to 0 simultaneously, the device will enter an unpredictable or undefined state or a metastable state. Consequently, in practical applications, setting both inputs to 1 is forbidden. Under normal conditions, both inputs remain at 0 unless the state has to be changed. The application of a momentary 1 to the S input causes the FF to go to the set state. The S input must go back to 0 before any other changes take place, in order to avoid the occurrence of an undefined next state that results from the forbidden input condition. When both inputs S and R are equal to 0, the FF can be in either the set or the reset state, depending on which input was most recently a 1. If a 1 is applied to both the S and R inputs, both outputs go to 0. This action produces an undefined next state, because the state that results from the input transitions depends on the order in which they return to 0. It also violates the requirement that outputs be the complement of each other. In normal operation, this condition is avoided by making sure that 1’s are not applied to both inputs simultaneously. Basic FF circuit with NAND gates The output of a NAND gate is 1 if any input is 0, and that the output is 0 only when all the inputs are 1. It operates with both inputs normally at 1, unless the state has to be changed. The application of 0 to the S input causes output Q to go to 1, putting the FF in the set state. When the S input goes back to 1, the circuit remains in the set state. After both inputs go back to 1, we are allowed to change the state of the FF by placing a 0 in the R input. This action causes the circuit to go to the reset state and stay there even after both inputs return to 1. 
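The set/reset/hold behavior of the NOR latch described above can be reproduced with a small simulation. This sketch (ours, not from the text) iterates the two cross-coupled NOR gates until the feedback loop settles:

```python
def nor(a, b):
    return 1 - (a | b)

def sr_latch(S, R, Q, Qbar):
    # Relax the cross-coupled feedback a few times until it is stable
    for _ in range(4):
        Q, Qbar = nor(R, Qbar), nor(S, Q)
    return Q, Qbar

Q, Qbar = sr_latch(1, 0, 0, 1)      # momentary S = 1: latch is set
assert (Q, Qbar) == (1, 0)
Q, Qbar = sr_latch(0, 0, Q, Qbar)   # S = R = 0: the state is held
assert (Q, Qbar) == (1, 0)
Q, Qbar = sr_latch(0, 1, Q, Qbar)   # momentary R = 1: latch is reset
assert (Q, Qbar) == (0, 1)
```

Note that applying S = R = 1 drives both outputs to 0, the forbidden condition discussed above.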
The condition that is forbidden for the NAND FF is both inputs being equal to 0 at the same time, an input combination that should be avoided.

SR FF with NAND gates and Clock
FFs with a clock signal are called clocked FFs, or simply FFs.

b) Characteristic Table

Q  S  R  Q(t+1)
0  0  0  0
0  0  1  0
0  1  0  1
0  1  1  Indeterminate
1  0  0  1
1  0  1  0
1  1  0  1
1  1  1  Indeterminate

c) Graphical Symbol

D Flip Flop
JK Flip Flop
T Flip Flop
T FF Logic Diagram

Triggering of flip-flops
The state of a flip-flop is switched by a momentary change in the input signal. This momentary change is called a trigger, and the transition it causes is said to trigger the flip-flop. Clocked FFs are triggered by clock pulses. A clock is a signal that oscillates between a high and a low state (positive pulse / negative pulse). A positive clock source remains at 0 during the interval between pulses and goes to 1 during the occurrence of a pulse. The pulse transition from 0 to 1 is called the positive edge, and that from 1 to 0 is called the negative edge.

Master Slave FF
Edge Triggered FF
When the input clock in the positive-edge-triggered flip-flop makes a positive transition, the value of D is transferred to Q. A negative transition of the clock (i.e., from 1 to 0) does not affect the output, nor is the output affected by changes in D when Clock is in the steady logic-1 level or the logic-0 level. Hence, this type of flip-flop responds to the transition from 0 to 1 and nothing else.

There is a minimum time called the setup time during which the D input must be maintained at a constant value prior to the occurrence of the clock transition. It is equal to the propagation delay through gates 4 and 1, since a change in D causes a change in the outputs of these two gates. Similarly, there is a minimum time called the hold time during which the D input must not change after the application of the positive transition of the clock.
When D = 0, it is the propagation delay of gate 3, since it must be ensured that R becomes 0 in order to maintain the output of gate 4 at 1, regardless of the value of D. When D = 1, it is the propagation delay of gate 2, since it must be ensured that S becomes 0 in order to maintain the output of gate 1 at 1, regardless of the value of D. The propagation delay time of the flip-flop is defined as the interval between the trigger edge and the stabilization of the output to a new state.

Graphic Symbols
Negative edge triggered D FF
Positive edge triggered FFs

Characteristic Tables

(a) JK FF        (b) T FF       (c) D FF       (d) SR FF
J  K  Q(t+1)     T  Q(t+1)      D  Q(t+1)      S  R  Q(t+1)
0  0  Q(t)       0  Q(t)        0  0           0  0  Q(t)
0  1  0          1  Q(t)’       1  1           0  1  0
1  0  1                                        1  0  1
1  1  Q(t)’                                    1  1  ?

DESIGN PROCEDURE
1. The problem may be given as a word description, a state diagram, or a state table.
2. From the given information, obtain the state table.
3. Reduce the number of states if necessary.
4. Assign binary values to the states.
5. Determine the number of FFs needed and assign a letter symbol to each.
6. Choose the type of flip-flops to be used.
7. Derive the circuit excitation and output tables.
8. Derive the simplified flip-flop input equations and output equations.
9. Draw the logic diagram.

Example 1: Design the synchronous sequential circuit for the following state diagram using JK FFs.
Solution:

Example 2: Design the synchronous sequential circuit for the following state diagram using T FFs (exercise for the students to solve).
Solution:
Ta = A’B + Bx’
Tb = B’x’ + Ax’ + A’Bx

Design with D FFs
Design if A(t+1) =    B(t+1) =
Solution:

State Reduction
The reduction in the number of flip-flops in a sequential circuit is referred to as the state-reduction problem. Since m flip-flops produce 2^m states, a reduction in the number of states may (or may not) result in a reduction in the number of flip-flops.
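The characteristic tables above can be expressed as next-state functions and checked row by row. This Python sketch uses our own helper names for each flip-flop type:

```python
def jk_next(J, K, Q):   # JK: 00 hold, 01 reset, 10 set, 11 toggle
    # Characteristic equation: Q(t+1) = JQ' + K'Q
    return (J & (1 - Q)) | ((1 - K) & Q)

def t_next(T, Q):       # T: 0 hold, 1 toggle
    return T ^ Q

def d_next(D, Q):       # D: next state is simply D
    return D

for Q in (0, 1):
    assert jk_next(0, 0, Q) == Q          # Q(t+1) = Q(t)
    assert jk_next(0, 1, Q) == 0          # reset
    assert jk_next(1, 0, Q) == 1          # set
    assert jk_next(1, 1, Q) == 1 - Q      # Q(t+1) = Q(t)'
    assert t_next(0, Q) == Q and t_next(1, Q) == 1 - Q
    assert d_next(0, Q) == 0 and d_next(1, Q) == 1
```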
An unpredictable effect of reducing the number of flip-flops is that sometimes the equivalent circuit (with fewer flip-flops) may require more combinational gates to realize its next-state and output logic.

Example: There are an infinite number of input sequences that may be applied to the circuit; each results in a unique output sequence. As an example, consider the input sequence 01010110100 starting from the initial state a. Each input of 0 or 1 produces an output of 0 or 1 and causes the circuit to go to the next state.

State:  a a b c d e f f g f g a
Input:  0 1 0 1 0 1 1 0 1 0 0
Output: 0 0 0 0 0 1 1 0 1 0 0

In each column, we have the present state, input value, and output value. The next state is written at the top of the next column.

The following algorithm for the state reduction of a completely specified state table is used: “Two states are said to be equivalent if, for each member of the set of inputs, they give exactly the same output and send the circuit either to the same state or to an equivalent state.” When two states are equivalent, one of them can be removed without altering the input–output relationships.

We look for two present states that go to the same next state and have the same output for both input combinations. States e and g are two such states: they both go to states a and f and have outputs of 0 and 1 for x = 0 and x = 1, respectively. Therefore, states g and e are equivalent, and one of these states can be removed.

State:  a a b c d e d d e d e a
Input:  0 1 0 1 0 1 1 0 1 0 0
Output: 0 0 0 0 0 1 1 0 1 0 0

The sequential circuit of this example was reduced from seven states to five, but the number of FFs is not reduced, because 5 states require 3 FFs, the same as 7 states. In general, reducing the number of states in a state table may result in a circuit with less equipment, because the unused states are taken as don’t-cares in the design process.
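The trace above can be replayed mechanically. The following sketch encodes only the transitions that actually appear in the example (a partial state table; the dictionary encoding is our own) and reproduces the output sequence:

```python
# (present state, input) -> (next state, output), read off the trace above
step = {('a', 0): ('a', 0), ('a', 1): ('b', 0), ('b', 0): ('c', 0),
        ('c', 1): ('d', 0), ('d', 0): ('e', 0), ('e', 1): ('f', 1),
        ('f', 1): ('f', 1), ('f', 0): ('g', 0), ('g', 1): ('f', 1),
        ('g', 0): ('a', 0)}

state, outputs = 'a', []
for x in [0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0]:   # input sequence 01010110100
    state, z = step[(state, x)]
    outputs.append(z)

assert outputs == [0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0]
assert state == 'a'    # the trace ends back in state a
```

Since e and g behave identically in this table (both go to a with output 0 on x = 0, and to f with output 1 on x = 1), replacing g by e cannot change any output sequence, which is exactly the equivalence argument made above.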
However, the fact that a state table has been reduced to fewer states does not guarantee a saving in the number of flip-flops or the number of gates.

State Assignment
In order to design a sequential circuit with physical components, it is necessary to assign unique coded binary values to the states. For a circuit with m states, the codes must contain n bits, where 2^n ≥ m. For example, with three bits it is possible to assign codes to eight states, denoted by binary numbers 000 through 111. For 5 states, we are left with three unused states. Unused states are treated as don’t-care conditions during the design. Since don’t-care conditions usually help in obtaining a simpler circuit, it is more likely, but not certain, that the circuit with five states will require fewer combinational gates than the one with seven states.

The simplest way to code five states is to use the first five integers in binary counting order. Another similar assignment is the Gray code. Here, only one bit in the code group changes when going from one number to the next. This code makes it easier for the Boolean functions to be placed in the map for simplification. There are many possibilities.

Design the synchronous sequential circuit for the following state table:

Example: Design a sequential circuit with two JK flip-flops A and B and two inputs E and F. If E = 0, the circuit remains in the same state regardless of the value of F. When E = 1 and F = 1, the circuit goes through the state transitions from 00 to 01, to 10, to 11, back to 00, and repeats. When E = 1 and F = 0, the circuit goes through the state transitions from 00 to 11, to 10, to 01, back to 00, and repeats.

Solution:
JA = KA = (B’F’ + BF)E
JB = KB = E

Home work:
1.
2.
3. Design the sequential circuit for the following state diagram.

Design of synchronous counters
Design a three-bit counter that counts in the sequence 0, 1, 2, 4, 5, 6, 0, …

Problem: Design with the following repeated binary sequence: 0, 1, 3, 7, 6, 4.
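The derived input equations can be checked by simulation. This sketch (ours) applies the JK characteristic equation to the solution JA = KA = (B’F’ + BF)E, JB = KB = E, and confirms the specified count sequences:

```python
def jk(J, K, Q):
    # JK characteristic equation: Q(t+1) = JQ' + K'Q
    return (J & (1 - Q)) | ((1 - K) & Q)

def step(A, B, E, F):
    JA = KA = ((1 - B) & (1 - F) | (B & F)) & E   # (B'F' + BF)E
    JB = KB = E
    return jk(JA, KA, A), jk(JB, KB, B)

def run(E, F, n=4):
    A = B = 0
    seq = []
    for _ in range(n):
        A, B = step(A, B, E, F)
        seq.append((A, B))
    return seq

assert run(1, 1) == [(0, 1), (1, 0), (1, 1), (0, 0)]   # E=1, F=1: 00,01,10,11,00
assert run(1, 0) == [(1, 1), (1, 0), (0, 1), (0, 0)]   # E=1, F=0: 00,11,10,01,00
assert step(1, 0, 0, 1) == (1, 0)                      # E=0: state is held
```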
Use T FFs. Check if it is self-correcting.

Home work: Check all of your designs to see if they are self-correcting or not.

Registers
Shift Register

Home Work
1. Draw the logic diagram of a four-bit register with four D flip-flops and four 4 × 1 multiplexers with mode selection inputs s1 and s0. The register operates according to the following function table.
2.

Ripple Counters
Synchronous up/down counter

Ring Counter
Timing signals that control the sequence of operations in a digital system can be generated by a shift register or by a counter with a decoder. A ring counter is a circular shift register with only one flip-flop being set at any particular time; all others are cleared. The single bit is shifted from one flip-flop to the next to produce the sequence of timing signals. This figure shows a four-bit shift register connected as a ring counter. The initial value of the register is 1000, which produces the variable T0. The single bit is shifted right with every clock pulse and circulates back from T3 to T0. Each flip-flop is in the 1 state once every four clock cycles and produces one of the four timing signals. Each output becomes a 1 after the negative-edge transition of a clock pulse and remains 1 during the next clock cycle.

For an alternative design, the timing signals can be generated by a two-bit counter that goes through four distinct states and a decoder. The decoder decodes the four states of the counter and generates the required sequence of timing signals. To generate 2^n timing signals, we need either a shift register with 2^n flip-flops or an n-bit binary counter together with an n-to-2^n-line decoder.

Johnson Counter
A Johnson counter, or switch-tail ring counter, is a circular shift register with the complemented output of the last flip-flop connected to the input of the first flip-flop. The decoding of a k-bit Johnson counter to obtain 2k timing signals follows a regular pattern.
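Before looking at the Johnson decoding in detail, the ring counter described above can be simulated directly. This sketch (ours) models the one-hot register and its circular right shift:

```python
state = [1, 0, 0, 0]      # initial value 1000; positions are T0 T1 T2 T3
seen = []
for _ in range(4):
    seen.append(list(state))
    state = [state[-1]] + state[:-1]    # shift right; T3 circulates back to T0

assert seen == [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
assert state == [1, 0, 0, 0]            # period of four clock pulses
assert all(sum(s) == 1 for s in seen)   # exactly one FF is set at any time
```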
The all-0’s state is decoded by taking the complement of the two extreme flip-flop outputs. The all-1’s state is decoded by taking the normal outputs of the two extreme flip-flops. All other states are decoded from an adjacent 1, 0 or 0, 1 pattern in the sequence. For example, sequence 7 has an adjacent 0, 1 pattern in flip-flops B and C. The decoded output is then obtained by taking the complement of B and the normal output of C, or B’C.

Starting from a cleared state, the Johnson counter goes through a sequence of eight states. In general, a k-bit Johnson counter will go through a sequence of 2k states. Starting from all 0’s, each shift operation inserts 1’s from the left until the register is filled with all 1’s. In the next sequences, 0’s are inserted from the left until the register is again filled with all 0’s.

A k-bit Johnson counter with 2k decoding gates provides 2k timing signals. The eight AND gates listed in the table, when connected to the circuit, will complete the construction of the Johnson counter. Since each gate is enabled during one particular state sequence, the outputs of the gates generate eight timing signals in succession.

Mealy and Moore Models of Finite State Machines
The most general model of a sequential circuit has inputs, outputs, and internal states. It is customary to distinguish between two models of sequential circuits: the Mealy model and the Moore model. They differ only in the way the output is generated.

Mealy Model
In the Mealy model, the output is a function of both the present state and the input.
Example:
State Diagram: Both the input and output values are shown, separated by a slash, along the directed lines between the states.

Moore Model
In the Moore model, the output is a function of only the present state. A circuit may have both types of outputs. The two models of a sequential circuit are commonly referred to as a finite state machine, abbreviated FSM.
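Returning to the Johnson counter, its 2k-state sequence and the B’C decoding of sequence 7 can be verified by simulation. In this sketch (ours), the four flip-flops are a list, and sequences are numbered 1 through 8 starting from the cleared state:

```python
state = [0, 0, 0, 0]          # four flip-flops, starting cleared
states = []
for _ in range(8):
    states.append(list(state))
    state = [1 - state[-1]] + state[:-1]   # complemented last FF feeds the first

assert states == [[0, 0, 0, 0], [1, 0, 0, 0], [1, 1, 0, 0], [1, 1, 1, 0],
                  [1, 1, 1, 1], [0, 1, 1, 1], [0, 0, 1, 1], [0, 0, 0, 1]]
assert state == [0, 0, 0, 0]   # 2k = 8 states, then the sequence repeats

# Decoding sequence 7 (state 0011) with B'C: true in exactly one state
decoded = [(1 - s[1]) & s[2] for s in states]
assert decoded == [0, 0, 0, 0, 0, 0, 1, 0]
```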
Example:
State Diagram: It has only inputs marked along the directed lines. The outputs are the flip-flop states marked inside the circles. There may be other outputs also, which are functions of the present state only.

Design a circuit that meets the following specification:
1. The circuit has one input, w, and one output, z.
2. All changes in the circuit occur on the positive edge of the clock signal.
3. The output z is equal to 1 if during two immediately preceding clock cycles the input w was equal to 1. Otherwise, the value of z is equal to 0.

From this specification it is apparent that the output z depends on both present and past values of w. Consider the sequence of values of the w and z signals for the clock cycles shown in Figure 6.2. As seen in the figure, for a given value of input w the output z may be either 0 or 1. For example, w = 0 during clock cycles t2 and t5, but z = 0 during t2 and z = 1 during t5. Similarly, w = 1 during t1 and t8, but z = 0 during t1 and z = 1 during t8. This means that z is not determined only by the present value of w, so there must exist different states in the circuit that determine the value of z. The Reset input is used to force the circuit into state A, regardless of what state the circuit happens to be in.

Using Verilog Constructs for Storage Elements

module flipflop (D, Clock, Q);
  input D, Clock;
  output reg Q;
  always @(posedge Clock)
    Q = D;
endmodule

Blocking and Non-Blocking Assignments
f = x1 & x2;
This notation is called a blocking assignment. A Verilog compiler evaluates the statements in an always block in the order in which they are written. If a variable is given a value by a blocking assignment statement, then this new value is used in evaluating all subsequent statements in the block.

module example5_3 (D, Clock, Q1, Q2);
  input D, Clock;
  output reg Q1, Q2;
  always @(posedge Clock)
  begin
    Q1 = D;
    Q2 = Q1;
  end
endmodule

Incorrect code for two cascaded flip-flops.
Since the always block is sensitive to the positive clock edge, both Q1 and Q2 will be implemented as the outputs of D flip-flops. However, because blocking assignments are involved, these two flip-flops will not be connected in cascade. The first statement, Q1 = D;, sets Q1 to the value of D. This new value is used in evaluating the subsequent statement, Q2 = Q1;, which results in Q2 = Q1 = D. Verilog also provides a non-blocking assignment, denoted with the <= operator. The right-hand sides of all non-blocking assignments in an always block are evaluated using the values the variables have when the block is entered, so writing Q1 <= D; Q2 <= Q1; produces two flip-flops connected in cascade.
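The difference between the two assignment styles can be emulated outside Verilog. In this Python sketch (ours), each function models what the two statements compute at one positive clock edge:

```python
def edge_blocking(D, Q1, Q2):
    # Q1 = D; Q2 = Q1;  -- Q2 sees the NEW value of Q1, so both become D
    Q1 = D
    Q2 = Q1
    return Q1, Q2

def edge_nonblocking(D, Q1, Q2):
    # Q1 <= D; Q2 <= Q1;  -- right-hand sides use the values held before
    # the clock edge, so the flip-flops form a cascade (shift register)
    return D, Q1

assert edge_blocking(1, 0, 0) == (1, 1)       # not a cascade: Q2 = Q1 = D
assert edge_nonblocking(1, 0, 0) == (1, 0)    # D needs two edges to reach Q2

# Two consecutive edges with non-blocking semantics propagate D to Q2
Q1, Q2 = edge_nonblocking(1, 0, 0)
Q1, Q2 = edge_nonblocking(1, Q1, Q2)
assert (Q1, Q2) == (1, 1)
```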
