Lecture 5 - Data Representation 2 (1) PDF
Document Details
Uploaded by StatuesqueTrumpet1577
Egypt University of Informatics
Summary
This document contains lecture notes on data representation in computer architecture and organization, covering addition, negation, multiplication, and shifting. It details arithmetic on unsigned and signed (two's-complement) integers, as well as floating-point representation and the corresponding encoding and conversion schemes.
Full Transcript
C-CS214 Computer Architecture and Organization
Data Representation II
http://lol-rofl.com/computer-cartoon/

References
❑ Many slides of this lecture are either from or adapted from:
▪ Jingyan Li
▪ Bryant and O'Hallaron
▪ Clark Barrett
▪ "Machine Architecture and Organization" lectures, University of Minnesota, https://cse.umn.edu/
▪ Prof. Mohamed Zahran's lectures

Educational Material in Textbook
Topics discussed in this lecture are covered in:
Textbook: R. Bryant, D. O'Hallaron. Computer Systems: A Programmer's Perspective. Prentice Hall, 3rd Edition, 2015:
▪ Chapter 2: sections 2.3 and 2.4 (it is important to refer to the textbook)

Addition, Negation, Multiplication, and Shifting (Chapter 2: section 2.3)

Negation: Complement & Increment
The two's complement of x satisfies Two'sComp(x) + x = 0, and Two'sComp(x) = ~x + 1.
Proof sketch (shown on the slide with x = 10011101, ~x = 01100010, -1 = 11111111):
▪ Observation: ~x + x = 1111...111 = -1
▪ Therefore ~x + x + 1 = 0
▪ Therefore (~x + 1) + x = 0
▪ Therefore Two'sComp(x) + x = 0

Unsigned Addition
▪ Operands u, v: w bits each
▪ True sum u + v: up to w+1 bits
▪ Discarding the carry leaves the w-bit result UAdd_w(u, v)
❑ Standard addition function: ignores the carry output.
❑ Implements modular arithmetic: s = UAdd_w(u, v) = (u + v) mod 2^w
  This means any bit with weight greater than 2^(w-1) is discarded from the bit-level representation of the sum.

Principle: Unsigned Addition
For u and v such that 0 ≤ u, v < 2^w:
▪ If u + v < 2^w, then sum = u + v (normal)
▪ If 2^w ≤ u + v < 2^(w+1), then sum = u + v - 2^w (overflow)
"Overflow" means that the result of an arithmetic operation requires more bits than the original data type of the operands can represent.

Unsigned Addition Example
❑ Consider the 4-bit variables x = 9 and y = 12.
❑ Their bit representations are x = 9 = 1001_2 and y = 12 = 1100_2.
❑ Their sum is s = x + y = 21 = 10101_2.
❑ The sum has a 5-bit representation.
❑ If the high-order bit is discarded, the sum becomes 0101_2 = 5.
❑ This is equivalent to the value 21 mod 16 = 21 mod 2^4 = 5.
❑ Overflow has occurred.

Unsigned Addition Hardware
Rules for addition/subtraction:
▪ The hardware must work with two operands of the same length.
▪ The hardware produces a result of the same length as the operands.
▪ The hardware does not differentiate between signed and unsigned.

Unsigned Subtraction
  1000101        0000101
- 0000101      - 1000101
---------      ---------
  1000000        ???

Two's Complement Addition
▪ Operands u, v: w bits each
▪ True sum u + v: up to w+1 bits
▪ Discarding the carry leaves the w-bit result TAdd_w(u, v)
Principle: Two's-Complement Addition
For u and v such that -2^(w-1) ≤ u, v ≤ 2^(w-1) - 1:
▪ If -2^(w-1) ≤ u + v < 2^(w-1), then sum = u + v (normal)
▪ If 2^(w-1) ≤ u + v, then sum = u + v - 2^w (positive overflow → result becomes negative)
▪ If u + v < -2^(w-1), then sum = u + v + 2^w (negative overflow → result becomes positive)

Two's Complement Addition: Detecting Overflow
1. If the sum of two positive numbers gives a negative result, overflow has occurred.
2. If the sum of two negative numbers gives a positive result, overflow has occurred.
3. Otherwise, no overflow has occurred.
"It is important to note the overflow and carry out can each occur without the other. In unsigned numbers, carry out is equivalent to overflow. In two's complement, carry out tells you nothing about overflow."
(Source: http://sandbox.mc.edu/~bennet/cs110/tc/add.html)

Two's Complement Addition Examples (4-bit)
▪ -8 + -5 = -13:  1000 + 1011 = 0011  → negative overflow, carry out; sum is not correct.
▪ -8 + -8 = -16:  1000 + 1000 = 0000  → negative overflow, carry out; sum is not correct.
▪ -8 +  5 =  -3:  1000 + 0101 = 1101  → no overflow, no carry out; sum is correct.
▪  2 +  5 =   7:  0010 + 0101 = 0111  → no overflow, no carry out; sum is correct.
▪  5 +  5 =  10:  0101 + 0101 = 1010  → positive overflow, no carry out; sum is not correct.
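These rules can be checked mechanically. The C sketch below is not from the slides: it simulates 4-bit addition by masking the sum to its low four bits (UAdd_4) and reinterpreting the result as two's complement, and it adds a full-width overflow test in the spirit of detection rules 1 and 2. The helper names uadd4, t4, and tadd_ok are invented for this illustration.

```c
#include <stdio.h>
#include <limits.h>

/* UAdd_4: keep only the low 4 bits, i.e. (u + v) mod 2^4. */
static unsigned uadd4(unsigned u, unsigned v) {
    return (u + v) & 0xFu;
}

/* Reinterpret a 4-bit pattern as a two's-complement value in [-8, 7]. */
static int t4(unsigned bits) {
    return (bits & 0x8u) ? (int)bits - 16 : (int)bits;
}

/* Full-width version of overflow rules 1 and 2 above, written so the
   check itself never overflows. */
static int tadd_ok(int x, int y) {
    if (x > 0 && y > 0 && x > INT_MAX - y) return 0;   /* positive overflow */
    if (x < 0 && y < 0 && x < INT_MIN - y) return 0;   /* negative overflow */
    return 1;
}

int main(void) {
    /* The slide's 4-bit cases: (-8)+(-5), (-8)+(-8), (-8)+5, 2+5, 5+5. */
    int cases[][2] = { {-8, -5}, {-8, -8}, {-8, 5}, {2, 5}, {5, 5} };
    for (size_t i = 0; i < sizeof cases / sizeof cases[0]; i++) {
        int x = cases[i][0], y = cases[i][1];
        unsigned xb = (unsigned)x & 0xFu;            /* 4-bit pattern of x */
        unsigned yb = (unsigned)y & 0xFu;            /* 4-bit pattern of y */
        unsigned s  = uadd4(xb, yb);
        int carry   = (int)(((xb + yb) >> 4) & 1u);  /* bit discarded by uadd4 */
        printf("%2d + %2d: sum bits %X, value %3d, carry out %d, overflow: %s\n",
               x, y, s, t4(s), carry,
               t4(s) == x + y ? "no" : "yes");
    }
    printf("tadd_ok(INT_MAX, 1) = %d\n", tadd_ok(INT_MAX, 1));
    return 0;
}
```

Compiled with any standard C compiler, this should reproduce the five 4-bit cases above, including the examples where a carry out occurs without overflow and vice versa.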
Multiplication
❑ Exact product of w-bit numbers x, y:
▪ Can be computed as either signed or unsigned.
▪ The low-order w bits of the product (the truncated result) have an identical bit-level representation for unsigned and signed (two's-complement) multiplication.

Unsigned Multiplication
▪ Operands u, v: w bits each
▪ True product u * v: up to 2w bits
▪ Discarding the high-order w bits leaves the w-bit result UMult_w(u, v)
Principle: Unsigned Multiplication
❑ For u and v such that 0 ≤ u, v ≤ 2^w - 1:
▪ Range: 0 ≤ u * v ≤ (2^w - 1)^2 = 2^(2w) - 2^(w+1) + 1
▪ Standard multiplication function: ignores the high-order w bits.
▪ Implements modular arithmetic: P = UMult_w(u, v) = (u * v) mod 2^w
  This means the high-order w bits of the bit-level representation of the product are discarded.

Signed Multiplication
▪ Operands u, v: w bits each
▪ True product u * v: up to 2w bits
▪ Discarding the high-order w bits leaves the w-bit result TMult_w(u, v)
Principle: Signed Multiplication
❑ For u and v such that -2^(w-1) ≤ u, v ≤ 2^(w-1) - 1:
▪ Range: (-2^(w-1)) * (2^(w-1) - 1) ≤ u * v ≤ (-2^(w-1))^2
  Two's-complement minimum: u * v = (-2^(w-1)) * (2^(w-1) - 1) = -2^(2w-2) + 2^(w-1)
  Two's-complement maximum: u * v = (-2^(w-1))^2 = 2^(2w-2)
▪ Standard multiplication function: ignores the high-order w bits.
▪ Implements modular arithmetic, then converts from unsigned to two's complement to get the truncated two's-complement number (see the example below):
  P = TMult_w(u, v) = U2T_w((u * v) mod 2^w)

Multiplication Examples (3-bit)
Mode             |  u |  v | u*v | Truncated u*v
Unsigned         |  5 |  3 |  15 |  7
Two's complement | -3 |  3 |  -9 | -1
Unsigned         |  4 |  7 |  28 |  4
Two's complement | -4 | -1 |   4 | -4
Unsigned         |  3 |  3 |   9 |  1
Two's complement |  3 |  3 |   9 |  1

Walking through the second row:
▪ Get the product: u * v = -3 * 3 = -9
▪ Implement modular arithmetic: (u * v) mod 2^w = -9 mod 2^3 = 7
▪ The bit representation of 7 is 111 (unsigned).
▪ Convert from unsigned to two's complement: 111 represents -1 (signed).
▪ Thus, the number -9 becomes -1 after truncation, while the bit representation of both numbers is the same.

Power-of-2 Multiply and Divide with Shift
❑ Operation:
▪ u << k gives u * 2^k (truncated to w bits)
▪ x >> k gives ⌊x / 2^k⌋, using an arithmetic shift for signed values
❑ Examples (y = -15213, 16 bits):
       | Division    | Computed | Hex   | Binary
y      | -15213      | -15213   | C4 93 | 11000100 10010011
y >> 1 | -7606.5     | -7607    | E2 49 | 11100010 01001001
y >> 4 | -950.8125   | -951     | FC 49 | 11111100 01001001
y >> 8 | -59.4257813 | -60      | FF C4 | 11111111 11000100
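As a quick check of the table above, here is a small C sketch, not taken from the slides, that reproduces the y = -15213 rows. Right-shifting a negative signed value is implementation-defined in C; the comparison with the / operator assumes the usual arithmetic-shift behaviour.

```c
#include <stdio.h>

int main(void) {
    short y = -15213;   /* 16-bit pattern 0xC493 */

    /* On most platforms, >> on a negative signed value is an arithmetic shift,
       so y >> k computes floor(y / 2^k), rounding toward negative infinity. */
    printf("y >> 1 = %d   (floor(-15213/2)   = -7607)\n", y >> 1);
    printf("y >> 4 = %d   (floor(-15213/16)  = -951)\n",  y >> 4);
    printf("y >> 8 = %d   (floor(-15213/256) = -60)\n",   y >> 8);

    /* C's division operator rounds toward zero instead, so it disagrees
       with the shift for negative operands. */
    printf("y / 2  = %d, y / 16 = %d, y / 256 = %d\n", y / 2, y / 16, y / 256);
    return 0;
}
```

The last two lines show the design point behind the table: the arithmetic shift rounds toward negative infinity, while integer division in C rounds toward zero.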
Floating Point (Chapter 2: section 2.4)
Some slides and information about floating point are adopted from Prof. Michael Overton's book: Numerical Computing with IEEE Floating Point Arithmetic.

Background: Fractional Binary Numbers
What is 1011.101_2?
▪ Positional notation: the bits b_i b_(i-1) ... b_1 b_0 . b_(-1) b_(-2) ... b_(-j) carry the weights 2^i, 2^(i-1), ..., 4, 2, 1, 1/2, 1/4, 1/8, ..., 2^(-j).
▪ Bits to the right of the binary point represent fractional powers of 2.
▪ Value: the sum of b_k * 2^k over all bit positions k.
▪ So 1011.101_2 = 8 + 2 + 1 + 1/2 + 1/8 = 11 5/8 = 11.625.

Examples:
Value   | Representation
5 3/4   | 101.11_2
2 7/8   | 10.111_2
1 7/16  | 1.0111_2

Observations
▪ Divide by 2 by shifting right (unsigned).
▪ Multiply by 2 by shifting left.
▪ Numbers of the form 0.111111..._2 are just below 1.0:
  1/2 + 1/4 + 1/8 + ... + 1/2^i + ... → 1.0

Why Not Fractional Binary Numbers?
❑ Not efficient:
▪ 5 * 2^100 → 101000...0 (100 zeros)
▪ Given a finite length (e.g., 32 bits), we cannot represent very large nor very small numbers (ε → 0).
▪ Can only exactly represent numbers of the form x / 2^k.
▪ Some fractions have repeating bit patterns:
  1/3 = 0.0101010101..._2
  1/5 = 0.001100110011..._2

IEEE Floating Point
❑ IEEE Standard 754
▪ Supported by all major CPUs.
▪ The IEEE standards committee consisted mostly of hardware people, plus a few academics led by W. Kahan at Berkeley.
❑ Main goals:
▪ Consistent representation of floating-point numbers by all machines.
▪ Correctly rounded floating-point operations.
▪ Consistent treatment of exceptional situations such as division by zero.

Floating Point Representation
Numerical form: V = (-1)^s * M * 2^E
A number is represented by:
▪ Sign bit s determines whether the number is negative or positive (the interpretation of the sign bit for the numeric value 0 is handled as a special case).
▪ Significand M is a fractional binary value in the range [1, 2) or [0, 1).
▪ Exponent E weights the value by a (possibly negative) power of 2.
Encoding: s | exp | frac
▪ The MSB s is the sign bit.
▪ The k-bit exp field encodes E (but is not equal to E).
▪ The n-bit frac field encodes M (but is not equal to M).

Precisions
How many bits are required to encode E and M?
▪ Single precision: 32 bits = 1 sign bit + 8-bit exp + 23-bit frac
▪ Double precision: 64 bits = 1 sign bit + 11-bit exp + 52-bit frac
▪ Extended precision (Intel only): 80 bits = 1 sign bit + 15-bit exp + 63- or 64-bit frac

Encoding Schemes
How do we perform the encoding? In other words, how do we represent a number using floating point?
Based on exp there are 3 encoding schemes:
▪ exp ≠ 00...0 and exp ≠ 11...1 → normalized encoding
▪ exp = 00...0 → denormalized encoding
▪ exp = 11...1 → special value encoding
  – frac = 00...0
  – frac = something else

1. Normalized Encoding (V = (-1)^s * M * 2^E)
Condition: exp ≠ 00...0 and exp ≠ 11...1
❑ The exponent field is coded as a signed integer in biased form: E = e - Bias
▪ e is the unsigned value of the exp field, which has bit representation e_(k-1) ... e_1 e_0.
▪ Bias = 2^(k-1) - 1, where k is the number of exponent bits.
▪ Single precision: E = e - 127 (e: 1...254, E: -126...127), i.e., Range(E) = [-126, 127].
▪ Double precision: E = e - 1023 (e: 1...2046, E: -1022...1023), i.e., Range(E) = [-1022, 1023].
❑ The significand is coded with an implied leading 1: M = 1.xxx...x_2, where xxx...x are the bits of frac.
▪ Minimum when frac = 00...0 (M = 1.0).
▪ Maximum when frac = 11...1 (M = 2.0 - ε).
▪ We get the extra leading bit for free because M can be viewed as the number with binary representation 1.f_(n-1) f_(n-2) ... f_0.

Normalized Encoding Example (single precision)
Value: float F = 15213.0;
15213 = 11101101101101_2 = 1.1101101101101_2 x 2^13
Significand:
▪ M = 1.1101101101101_2
▪ frac = 11011011011010000000000_2 (the 13 fraction bits plus 10 padding zeros give the 23 bits required for the frac field in single precision)
Exponent:
▪ E = e - Bias = e - 127 = 13
▪ e = 140 → exp = 10001100_2 (the 8 bits required for the exp field in single precision)
Result: 0 10001100 11011011011010000000000
        (s, 8-bit exp, 23-bit frac)
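The encoding above can be verified on a real machine. The following C sketch, not part of the lecture, copies the bytes of 15213.0f into a 32-bit integer and extracts the s, exp, and frac fields; it assumes float is the IEEE 754 single-precision format, which holds on essentially all current platforms.

```c
#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void) {
    float f = 15213.0f;
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);        /* reinterpret the float's bytes */

    uint32_t s    = bits >> 31;            /* 1 sign bit   */
    uint32_t exp  = (bits >> 23) & 0xFFu;  /* 8 exponent bits */
    uint32_t frac = bits & 0x7FFFFFu;      /* 23 fraction bits */

    printf("s = %u, exp = %u (E = %d), frac = 0x%06X\n",
           (unsigned)s, (unsigned)exp, (int)exp - 127, (unsigned)frac);
    return 0;
}
```

On such a platform it should print s = 0, exp = 140 (so E = 13), and frac = 0x6DA000, matching the hand-worked encoding 0 10001100 11011011011010000000000.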
2. Denormalized Encoding (called subnormal in the revised standard)
Condition: exp = 00...0
❑ Exponent value: E = 1 - Bias (instead of E = 0 - Bias).
❑ Significand: M = 0.xxx...x_2 (instead of M = 1.xxx...x_2), where xxx...x are the bits of frac.
❑ Cases:
▪ exp = 00...0, frac = 00...0
  Represents zero; note the distinct values +0 and -0.
▪ exp = 00...0, frac ≠ 00...0
  Represents numbers very close to 0.0.

3. Special Values Encoding
Condition: exp = 11...1
❑ Case: exp = 11...1, frac = 00...0
▪ Represents the value ∞ (infinity).
▪ Produced by an operation that overflows.
▪ E.g., 1.0/0.0 = -1.0/-0.0 = +∞, 1.0/-0.0 = -∞.
❑ Case: exp = 11...1, frac ≠ 00...0
▪ Not-a-Number (NaN).
▪ Represents the case when no numeric value can be determined.
▪ E.g., sqrt(-1), ∞ - ∞, ∞ * 0.

Visualization: Floating Point Encodings
The encodings tile the real line, from -∞ to +∞:
NaN | -∞ | -Normalized | -Denorm | -0 | +0 | +Denorm | +Normalized | +∞ | NaN

Example (from textbook)
[Textbook tables, not reproduced here, that apply V = (-1)^s * M * 2^E with E = e - Bias for normalized values and E = 1 - Bias for denormalized values.]
Now, it is your turn to fill this table. This is the best way to understand floating point.

Floating Point in C
❑ C provides two levels of precision:
▪ float: single precision
▪ double: double precision
❑ Conversions/casting:
▪ Casting between int, float, and double changes the bit representation.
❑ Examples:
▪ double/float → int: truncates the fractional part; not defined when out of range or NaN.
▪ int → double: exact conversion.
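To close, a brief C sketch (not from the slides) illustrating the casting rules and the special values just described; the infinity and NaN results assume the usual IEEE 754 semantics for floating-point division by zero and the square root of a negative number.

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    /* double/float -> int truncates the fractional part (toward zero). */
    printf("(int)  3.9 = %d\n", (int)3.9);    /* prints 3  */
    printf("(int) -3.9 = %d\n", (int)-3.9);   /* prints -3 */

    /* int -> double is exact: the 52-bit frac field can hold any 32-bit int. */
    int big = 2147483647;
    printf("(double)2147483647 = %.1f\n", (double)big);

    /* Special values from the encoding schemes above. */
    double inf = 1.0 / 0.0;          /* overflows to +infinity under IEEE rules */
    double nan_value = sqrt(-1.0);   /* no numeric value can be determined: NaN */
    printf("1.0/0.0 = %f, sqrt(-1.0) = %f\n", inf, nan_value);
    printf("isinf = %d, isnan = %d\n", isinf(inf), isnan(nan_value));
    return 0;
}
```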