DCIT 207 Computer Organisation and Architecture Course Outline PDF

Summary

This document provides a course outline for DCIT 207 Computer Organisation and Architecture. It covers topics such as data representation (including binary, decimal, hexadecimal conversions), digital logic (AND, OR, NOT gates), architecture design (ALU, control units, memory hierarchy), instruction sets, and course assessment details. The document is accompanied by recommended textbooks.

Full Transcript


DCIT 207 Computer Organisation and Architecture COURSE OUTLINE 1. Data representation: I’s and 2’s Complement, Signed and unsigned values, Big-endian and little- endian 2. Digital logic: Gates, Combinatorial circuits, Transistors, Clocks 3. Architecture Design ALU, Control Units, Data and address buses, Registers, Single and double clock Pipelines Memory hierarchy and addressing, Von-Neumann architecture and bottleneck 4. Instruction sets Intel Processor 8086 instruction set LECTURER: Dr. ERNEST GYEBI Course Assessment Activity Mark % Attendance to Lectures and Tutorials 10 Mid semester Test 10 Tutorial Exercises and Assignments 10 End of Semester Final Examination 70 Total Marks for Grading 100 Recommended Textbooks 1. Thomas Floyd L. Digital Fundamentals, 8th Edition, Pearson Education, Inc. Upper Saddle River, New Jersey, 2008 Data Representation Data Representation refers to the form in which data is stored, processed, and transmitted. Data refers to the symbols that represent people, events, things, and ideas. Data can be a name, a number, or photograph ALL data stored and processed by digital computers is in binary code (1s and 0s). The 0s and 1s are referred to as bits The 1s and 0s are represented by voltages in the computer e.g. +5 volts = binary 1 and 0 volts = binary 0. Devices such as smartphones, iPods, and computers store data in digital f formats that can be handled by electronic circuitry Number Systems We use number systems based on positional number systems where the position of a digit within the number determines the digits value. For example, in the number 123410, the 2 represents 2x102 = 200 For the decimal number system the positional values are: 103 102 101 100 The Radix or Base determines the number of different values in the number system. For example, decimal radix = 10 digits = 0-9; Binary radix=2 digits = 0,1; Octal radix = 8 digits = 0-7; Hexadecimal radix=16 digits = 0-F. Binary Numbers Radix/Base = 2 Symbols: 0, 1 Positional value 26 25 24 23 22 21 20 Decimal equivalent 64 32 16 8 4 2 1 Examples 1012= 1x22+0x21+1x20= 4 + 0 + 1 = 510 To indicate a number is binary, use the prefix % or the subscript 2. e.g. % 1110 or 11102 Exercises Convert to decimal: 11012, %0110, 11101102 Conversion from Decimal to Binary Repeatedly divide by 2 To convert decimal number 53 to the binary system, we divide 53 successively by 2 until the result is less than 2.i.e 1 To get the result we work backwards as shown by the arrow. The answer for the above example is 1101012. Let’s do a check to see if this result is right. To do this we convert the result back to base 10. We should get 53. %110101 =1 x 25 + 1 x 24 + 0 x 23 + 1 x 22 + 0 x 21 + 1 x 20 =32 + 16 + 0 + 4 + 0 + 1 = 53 Using sum of weights method for Decimal to Binary conversion Another method used in the conversion of a decimal number to binary is to use the sum of weights method. It is more convenient to use in cases where the number is very large and will make repeatedly dividing by two too cumbersome. The following steps are followed in this method: 1. Write the number in powers of 2. The highest power plus 1 gives the number of bits in the binary. 2. Check the positions and place 1s in the appropriate weight positions where there are powers of 2. 3. Place 0s in the remaining positions (where there are no powers of 2) Examples 1. Convert the following decimal numbers into binary: 129, 59 and 417 Solution 1. 
129
By writing 129 in powers of 2, we have 129 = 128 + 1 = 2^7 + 2^0. There is a power at position 2^7, therefore put a 1 at this position. There are no powers from 2^6 to 2^1, so we assign zeros in these positions. There is a power at position 2^0, so we assign a 1 at that position. The result is therefore 129 = %10000001 (the highest power is 7; 7 + 1 = 8; this gives 8 bits in the result, as you can see).

Examples
2. 59 = 32 + 16 + 8 + 2 + 1 = 2^5 + 2^4 + 2^3 + 2^1 + 2^0 = %111011
3. 417 = 256 + 128 + 32 + 1 = 2^8 + 2^7 + 2^5 + 2^0 = %110100001

Exercises
Convert the following decimal numbers to binary: 8, 17, 32, 53, 64, 99, 127, 13,679 and 63,397

OCTAL and HEXADECIMAL Numbers
You have probably realised from the few examples given here how unsuited people are to the routine handling of binary numbers. Their length (six digits to represent the decimal value 43) and their monotony (nothing but zeros and ones) make dealing with them tedious and error-prone for humans. Therefore two compromises between binary and decimal - the octal system (base 8) and the hexadecimal system (base 16) - are often used for communication between people and computers.

Hexadecimal
Base or Radix = 16
Symbols are: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F
Positional values are: 16^4, 16^3, 16^2, 16^1, 16^0
Decimal values are: 65,536; 4,096; 256; 16; 1
For example, 1D7E (base 16) = 1 x 16^3 + D x 16^2 + 7 x 16^1 + E x 16^0 = 1 x 4096 + 13 x 256 + 7 x 16 + 14 x 1 = 7,550 (decimal)
To indicate a hexadecimal value use the prefix $ or the subscript 16, e.g. $1FED or 1FED (base 16).

Exercises
Convert to decimal: $9E, $C8 and $13BF

Conversion from Hexadecimal to Binary
Since each hexadecimal digit is equivalent to four binary digits (2^4 is equal to 16), the hexadecimal system is also a shorthand way to express binary numbers. Therefore, to convert a hexadecimal number to binary, write each hexadecimal digit as a 4-bit binary number.
Example
$FD69A = %1111 1101 0110 1001 1010; since F = 15 = 1111; D = 13 = 1101; 6 = 0110; 9 = 1001 and A = 10 = 1010

Exercises
Convert the following hexadecimal values to binary: 1. $FE45 2. $BBB775 3. ABCDEF (base 16)

Conversion from Binary to Hexadecimal
Write each group of 4 binary bits as the equivalent hexadecimal value.
Example
Convert %101 1110 1010 1011 1111 0001 to hexadecimal.
Solution
NOTE: Start at the right-hand end and fill the left-hand end with zeros to get groups of 4 bits. That is:
0001 = 2^0 = 1; 1111 = 2^3 + 2^2 + 2^1 + 2^0 = 8 + 4 + 2 + 1 = 15 = F; 1011 = 2^3 + 2^1 + 2^0 = 11 = B; 1010 = 2^3 + 2^1 = 10 = A; 1110 = 2^3 + 2^2 + 2^1 = 14 = E; and 0101 = 2^2 + 2^0 = 5
The answer is $5EABF1 (write the answer starting from the left).

Exercises
Convert the following binary values to hexadecimal: 1. %100001010111011110010 2. 000100010011101010111 (base 2)

Signed Numbers
Digital systems, such as the computer, must be able to handle both positive and negative numbers. A signed binary number consists of both sign and magnitude information. The sign indicates whether a number is positive or negative and the magnitude is the value of the number. There are three forms in which signed integer (whole) numbers can be represented in binary: sign-magnitude, 1's complement and 2's complement.

The Sign Bit
The left-most bit in a signed binary number is the sign bit, which tells you whether the number is positive or negative. This is also referred to as the Most Significant Bit (MSB). Positive = 0 and Negative = 1.

Sign-Magnitude Form
When a signed binary number is represented in sign-magnitude, the left-most bit is the sign bit and the remaining bits are the magnitude bits. e.g.
+25 is expressed as an 8-bit signed binary number -25=1 0011001 the sign bit this time is 1 and the magnitude bits is 0011001, which is the same as that of +25. Complement The complement of a number is the number derived by subtracting it from a base number. For example, the tens complement of 8 is (10-8) which is 2. Complements are used in digital circuits, because it’s faster to subtract by adding complements than by performing true subtraction. The binary complement of a number is created by reversing all bits and adding 1. The carry from the high-order position is discarded. 9’s and 10’s Complements Assume that X and Y are positive (decimal) digits. THEN: X – Y is equal to X -10 + (10 - Y). Note that (10 - Y) is the "10's complement" of Y. Also, (10 -Y) = ((9 - Y) + 1) where (9 - Y) is the "9's complement". Therefore 10’s complement = 9’s complement + 1. 9’s and 10’s Complements The 9’s complement of a decimal number is obtained by subtracting each digit of the number from 9. To obtain the 10’s complement of the number just add 1 to the result. I’s and 2’s Complement For binary representation in the computer the problem is easier since the number of digits is only 2, and we always know how many digits are in each representation. The two complementary representations are called “2's complement” which represents 10’s complement And 9’s complement represents “1's complement”. Therefore from the relation 10’s complement = 9’s complement + 1, we can say that 2’s complement = 1’s complement + 1 I’s and 2’s Complement The 1’s complement and 2’s complement are represented in the tables below. It shows that finding the 1’s complement is just reversing the bit in the digit. That is 0 becomes 1 and 1 becomes 0 and also adding 1 to the 1’s complement gives the 2’s complement which agrees with the relation 2’s complement = 1’s complement + 1 1’s Complement Form Positive numbers in 1’s complement form are represented the same way as the positive sign magnitude numbers. Negative numbers, however, are the 1’s complements of the corresponding positive numbers. Finding the 1’s complement of a number is to invert each of the bits in the number. Examples Example: Find the 1’s complement of -25 and +25 (use 8-bits) Solution To find 1’s complement of -25 we use +25 By Converting +25 into 8-bits, we have 00011001 Then find its 1’s complement by flipping the bits 11100110 Therefore 1’s complement of –25 = 11100110. Note that the sign bit is 1 which indicates that the number is negative. But the 1’s complement of +25 = 00011001 which is the same as its sign magnitude. The sign bit is 0 because the number is positive. Note that the positive number +25 was used to find 1’s complement of -25. 2’s Compliment Form Computers will find the 1’s complement and add 1 to the result to obtain the 2’s complement. The following steps show how the computer’s circuit will find the 2’s complement of a number. Steps 1. Convert the number into binary (use 8 bits if the number of bits is not specified). Always use positive number because the idea of using the 2’s complement is to avoid negative numbers in the computer’s circuit. 2. Convert to 1’s complement by flipping the bits. 3. Add 1 at the LSB (Least significant bit) position. 4. Result is the 2’s complement Example Example: Find the 2’s complement of –35 Solution 1. First, write the 8-bit number for +35. +3510=0010 00112 2. 1’s comp = 1101 11002 after flipping each of the bits above 3. add 1 at the extreme right bit or the least significant bit (LSB) position 4. 
1101 1100 + 1 1101 1101 2’complement of –35 2’s Complement of a positive number 2’s complement of a positive is just the sign magnitude form NB: For a positive number, The 1’s complement = 2’s complement = The sign magnitude of the number. Positive numbers in 2’s complement form are represented the same way as in the sign- magnitude and 1’s complement forms. Negative numbers are the 2’s complements of the corresponding positive numbers. Computers use the 2’s complement for negative integer numbers in all arithmetic operations. The reason is that subtraction of a number from another number is the same as adding the 2’s complement of the number to the first one. (The carry out bit is disregarded) Proof Let’s see how an 8-bit computer will calculate 84 – 35. Steps 84 = 01010100 in binary And 35 = 00100011 1’s complement of 35 = 11011100 2’s complement of 35 = 11011101 (we need to find the 2’s complement because the number is negative (-35)) By adding the 2’complement of 35 to 84, we have 01010100 + 11011101 = 00110001 = 4910. (The carry out bit which is 1 is discarded as it is an 8- bit computer and therefore has a data size of 8) 84 – 35 = 49 that proves it. Why computers use 2’s complement instead of 1’s complement 1. Signed numbers are represented using 2’s complement representation because it simplifies the hardware (the same circuit that adds unsigned integers can be used to add signed integers as the minus sign is not stored in memory). 2. Also the 2’s complement system is preferred to the 1’s complement because the 1’s complement system requires adding 1 to the summation of weights for negative numbers when converting into decimal but not for positive numbers. To convert to decimal in 2’s complement simply needs the summation of weights regardless of whether the number is positive or negative. 3. Again using 2’s complement to represent numbers allows only one representation of zero, and has effective addition and subtraction while still having the most significant bit as the sign bit. 1’s complement has two representations of zero as can be seen from the table next. Zero representations in 1’s and 2’s complement Exercises 1. Convert the following decimal values to 2’s compliment format. +47, +63, -123, -77, 96 Binary Fractions Binary fractions can also be represented: Position Value: 2-1 2-2 2-3 2-4 2-5 etc Decimal:.5.25.125.0625.03125 fractions 1/2 1/4 1/8 1/16 1/32 Conversion of Binary Fractions to Decimal Decimal whole numbers can be converted to binary by repeated division by 2. Therefore decimal fractions can be converted to binary by repeated multiplication by 2. To convert a fraction to binary, multiply the fraction by 2 until the fractional product is zero or until the desired number of bits, say 8 for an 8-bit computer is obtained. Let’s consider the examples below. Examples Example 1: Convert 0.5 in decimal to binary Solution Examples Example 2: Convert 0.63710 to binary Solution.63710 x 2 1.274 x 2 Write the answer 0.548 x 2 in this direction 1.096.. 0.192.. 0.384.. 0.768.. 1.536.. 1.072.. Answer = 0.101000112 Exercises Convert decimal to binary fraction (8-bit max) 0.75, 0.123, 0.994, 0.001 Conversion of Binary Fractions to Decimal Example1: Convert 0.101000112 to decimal Solution 0.101000112 = 2-1+2-3+2-7+2-8=½ +1/8+1/128+1/256 =0.5+0.125+0.0078125+0.00390625=0.6367187510 Examples Example 3: Convert 0.1101010 to hexadecimal and octal respectively. 
Solution Group into four bits starting from left to right for hexadecimal, since the number is a fraction We have 1101 = 23 + 22 + 20 = 8 + 4 + 1 = 13 = $D And 010 which is 3 bits so add 0 to get 0100 = 22 = $4 0.1101010 = $0.D4 For Octal, group the bits into 3 groups starting from left to right. We have 110 = 22 + 21 = 68 101 = 58 And 0008 = 0 (note that two zeros have been added to get 3 bits) 0.1101010 = 0.6508 Examples Example 4: Convert 11011. 1101010 into hexadecimal. Solution 11011. 1101010 is made up of two parts (whole number and fraction parts). For the whole number part 11011 we group into four bits starting from right to left. That is 1011 = $11 = $B And 0001 = $1 (Three 0s are added to make the bits 4) 11011 = $1B For the fraction part 0.1101010 We have 1101 = 23 + 22 + 20 = 8 + 4 + 1 = 13 = $D And 010 which is 3 bits so add 0 to get 0100 = 22 = $4 0.1101010 = $0.D4 11011. 1101010 = $1B.D4 FIXED-POINT and FLOATING-POINT Numbers A fixed-point number is one whose point (decimal or binary) is in a fixed place in relation to the digits. For example, both integers and fractions can be represented as fixed-point numbers. For integers, the point is fixed at the extreme right; for fractions, the point is fixed at the extreme left. Since the point position is fixed and predefined, it does not have to be stored in memory with the value. A floating-point number (also called a real number, one in which the position of the point can vary from number to number), in contrast, can be used to represent a mixed number (integer part plus fractional part). One number is used for the fractional part, called the mantissa, and another number is used for the power of the base, or the exponent. For example the mixed number 53.95, which can't be stored as a fixed-point number because it has both an integer part and a fractional part, can be stored as a floating- point number like this: 0.5395 (Mantissa) 2 (exponent), since 53.95 = 0.5395 x 102 FIXED-POINT and FLOATING-POINT Numbers The decimal point in the mantissa is assumed to be at the extreme left, and thus is not explicitly stored. The exponent indicates that the decimal point is really located two positions from the left of the mantissa. In other words, the exponent indicates that the mantissa (.5395) is to be multiplied by 102 to get the real value. Most of the time, it is desirable to have the mantissa shifted all the way to the left, that is, to have a mantissa with no leading zeros. A floating-point number stored in this way is said to be normalized. Binary Integers in a computer are of fixed and limited size and are unable to represent very small or very large values. Floating point numbers or scientific notation allows an almost limitless range of small and large numbers to be represented. Floating-point numbers lose accuracy, as they are only approximations to the actual value. Format: a x re a=mantissa, r=radix/base, e=exponent In the computer two fields are stored as exponent and mantissa. The exponent and mantissa can use any of the binary systems. The Radix is not stored, as it is known value. INSTITUTE OF ELECTRICAL AND ELECTRONIC ENGINEERS (IEEE) Standard 754 Floating Point Format This is the standard used to represent floating point numbers. Computers use coprocessors in addition to the CPU to perform complicated mathematical calculations using floating-point numbers. The purpose is to increase performance by freeing up the CPU for other tasks. 
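The mantissa-exponent format a x r^e described above can be explored directly in Python. This is a supplementary sketch, not part of the original notes: math.frexp() splits a float into a normalised base-2 mantissa and exponent, math.ldexp() reassembles it, and the decimal lines simply mirror the 53.95 = 0.5395 x 10^2 normalisation discussed earlier.

```python
import math

# Decimal normalisation: 53.95 = 0.5395 x 10^2 (mantissa shifted fully left).
value = 53.95
exponent10 = math.floor(math.log10(abs(value))) + 1   # 2
mantissa10 = value / 10 ** exponent10                  # ~0.5395
print(mantissa10, exponent10)

# Base-2 form (what the hardware actually uses): value = m * 2**e with 0.5 <= m < 1.
m, e = math.frexp(value)
print(m, e)                  # ~0.8430, 6  (53.95 = m * 2**6)
print(math.ldexp(m, e))      # 53.95 -- reassembled from mantissa and exponent
```

Note that frexp normalises the mantissa to 0.5 <= m < 1, whereas the IEEE 754 formats described next normalise to 1.F, so the two exponents differ by one.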
The mathematical coprocessor, also known as the floating-point unit (FPU), is incorporated into the modern CPU chip.

Floating Point Number Precisions
Since floating-point numbers are approximations, precisions describe how these numbers are stored in memory and indicate how the numbers are rounded.

Single precision
According to the IEEE standard, 32 bits are used for single precision. A floating-point number N is defined as N = (-1)^S x 1.F x 2^(E - 127), where S is the sign bit, F the fraction (mantissa) and E the stored exponent. This means for the 32 bits: 1 bit for the sign stored as S, an 8-bit exponent in excess-127 format stored as E, and a 23-bit mantissa stored as F.

Double precision
It uses 64 bits to represent data, distributed as follows: 1 bit for the sign, an 11-bit exponent in excess-1023 format, and a 52-bit mantissa.

Extended precision
It uses 80 bits to represent data, distributed as follows: 1 bit for the sign, a 15-bit exponent in excess-16383 format, and a 64-bit mantissa. The computer's arithmetic unit uses it internally to reduce rounding errors.

Conversion from decimal to IEEE 754 Floating Point Format
Example 1: Convert 10.5 (decimal) to single precision format.
Example 2: Convert -13.25 (decimal) to single precision format.
Exercises
Convert the following decimal numbers to IEEE 754 FP format: 1. 56.125 2. -73.01 3. -65.666

Conversion from IEEE 754 Floating Point Format to decimal
Example 3: Convert the IEEE 754 single precision floating point value $40C4 0000 to decimal.
Exercises
Convert the following IEEE 32-bit FP values to decimal: $C013 0000, $BEC0 0000, $C3C0 0000

ALPHANUMERIC CODES (A Coding System to Represent Alphanumeric Characters)
In order to store and represent text in a computer, some scheme must be employed to convert letters, punctuation marks, and special characters to binary numbers. Character data is composed of letters, symbols, and numerals that are not used in calculations. Examples of character data include names and addresses. Character data is commonly referred to as "text" and must be changed to binary. As we know, the octal and hexadecimal systems can be used for binary representations. In fact, one way to think of octal numbers is as a set of 3-bit codes that represent the characters 0 through 7 (2^3 = 8 characters). Similarly, the hexadecimal system can be thought of as a 4-bit system for representing the characters 0 through 9 and A through F (2^4 = 16 characters). To represent more than 16 characters, some scheme is required that uses codes of more than 4 bits. Various coding systems have been developed over the years using 6-bit, 7-bit, and 8-bit codes. Of these, two are common today: EBCDIC and ASCII.

The EBCDIC System
The EBCDIC (Extended Binary Coded Decimal Interchange Code) system is an 8-bit BCD code that allows 256 (2^8) possible bit combinations. This code can be used to represent uppercase and lowercase letters, decimal digits, punctuation marks, and special characters; eight bits represent each character in this system. EBCDIC is used on IBM mainframe operating systems, like z/OS, OS/360, VM and VSE, as well as IBM minicomputer operating systems like OS/400 and i5/OS. It is also employed on various non-IBM platforms such as Fujitsu Siemens' BS2000/OSD, HP MPE/iX, and Unisys MCP. It descended from punched cards and the corresponding six-bit binary-coded decimal code that most of IBM's computer peripherals of the late 1950s and early 1960s used.
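Stepping back briefly to the floating-point formats above before moving on to character codes: the IEEE 754 conversion examples and exercises can be checked with Python's standard struct module, which packs a value into the 32-bit single-precision layout and back. This is a supplementary sketch for checking answers, not part of the original notes.

```python
import struct

def to_single_hex(x: float) -> str:
    """Encode x as IEEE 754 single precision and return the 32 bits as hex."""
    return struct.pack(">f", x).hex().upper()

def from_single_hex(h: str) -> float:
    """Decode a 32-bit hex string (e.g. '40C40000') back to a float."""
    return struct.unpack(">f", bytes.fromhex(h))[0]

print(to_single_hex(10.5))          # 41280000 -> sign 0, exponent 130 (excess-127), fraction .0101...
print(to_single_hex(-13.25))        # C1540000
print(from_single_hex("40C40000"))  # 6.125  (Example 3)
print(from_single_hex("C0130000"))  # -2.296875 (first exercise value)
```

The ">" in the format string forces big-endian byte order, which matches the way the hex patterns are written in these notes.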
ASCII (American Standard Code for Information Interchange) Code
It is a 7-bit code co-operatively developed by several computer manufacturers whose objective was to produce a standard code for all computers. Their objective has been fulfilled, at least for microcomputers; virtually every such machine marketed today uses the ASCII code. Although ASCII is officially a 7-bit code, in practice it is an 8-bit code because an extra bit is almost invariably added as a parity bit (for error detection).
The ASCII code is built into the computer system, which allows the keyboard to convert keystrokes into alphanumeric codes. The computer keyboard is equipped with a dedicated microprocessor that constantly scans the keyboard circuits to detect when a key has been pressed and released. A unique scan code is produced representing that particular key. The scan code is then converted to an alphanumeric code (ASCII) for use by the computer.

ASCII Code
The following are the characteristics of the ASCII code:
1. It is a 7-bit code used worldwide.
2. It uses 128 different characters (2^7), distributed as follows: 10 digits (0-9), 26 uppercase letters, 26 lowercase letters, 7 punctuation marks, 33 control characters, 26 other characters.

Extended ASCII (ASCII-8)
It is a superset of ASCII that uses eight bits for each character. For example, Extended ASCII represents the uppercase letter A as 01000001. Using eight bits instead of seven allows Extended ASCII to provide codes for 256 characters. ASCII codes are used for numerals, such as Social Security numbers and phone numbers. Plain, unformatted text is sometimes called ASCII text and is stored in a text file with a name ending in .txt. On Apple devices these files are labelled "Plain Text"; in Windows, they are labelled "Text Document".

ASCII-8 and EBCDIC
In ASCII-8 and EBCDIC, the first 4 bits are known as zone bits and the remaining 4 bits represent digit values for weight. The table below shows the zone bits for ASCII and EBCDIC.
CODE    LOWERCASE  UPPERCASE
ASCII   0110       0100
EBCDIC  1000       1100
The example below shows how an ASCII code is formulated.
Example 1: Determine the ASCII code representation of S and s.
Solution
S is the 19th letter of the alphabet and therefore has a weight of 19. 19 = 16 + 2 + 1 = 2^4 + 2^1 + 2^0 = 10011.
The zone bits for capital letters in ASCII are 0100, giving 0100 (zone bits) and 0011 (lower four weight bits); the leading 1 of the weight shifts in to replace the final 0 of the zone bits, giving 0101 0011.
For s the zone bits are 0110, so shifting the leading 1 of the weight in the same way gives 0111 0011 for 's'. Note that the zone bits are written first, before the weight, which should be 4 bits.

Unicode
The ASCII and EBCDIC coding schemes are sufficient for English and many other languages but are not sufficient to represent languages, such as the Asian languages, that use different alphabets. To represent these languages Unicode is used. It is a 16-bit coding scheme that can represent 2^16 = 65,536 characters and symbols. This coding scheme is very useful because it is capable of representing almost all of the world's current written languages, as well as classic and historical languages like classical Persian, Greek and Latin.
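The zone-bit construction of 'S' and 's' above, and the Unicode scheme just introduced, can be checked with Python's built-in ord() and chr(), which convert between characters and code points. This is an illustrative sketch only, not part of the original notes.

```python
# ASCII: 'S' and 's' differ only in the zone bits (0101 0011 vs 0111 0011).
print(format(ord("S"), "08b"))   # 01010011  ($53)
print(format(ord("s"), "08b"))   # 01110011  ($73)
print(chr(0b01000001))           # A -- the Extended ASCII example above

# Unicode code points go beyond 8 bits, e.g. a Greek capital omega:
print(ord("Ω"), hex(ord("Ω")))   # 937 0x3a9
```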
Unicode reserves 30,000 codes for future use and 6,000 codes for private use. Unicode is implemented in operating system languages, including Windows Xp, Windows 10, Mac OS X and Linux. BITS, BYTES, AND WORDS All data handled by computers are ultimately reduced to bits. The byte is simply a group of 8 bits. A single character coded in EBCDIC or ASCII constitutes a byte. A nibble is half of a byte (or 4 bits). Bits, bytes, and nibbles are constant from machine to machine A bit is always a 0 or a 1; a byte is always 8 bits; a nibble is always 4 bits. A computer word is a group of adjacent bits that are manipulated and stored as a unit. It is transferred as a unit between primary storage and the registers of the ALU and the control unit. A computer with 32-bit word length might have registers with a capacity of 32 bits, and transfer data and instructions within the computer in groupings of 32 bits. BITS, BYTES, AND WORDS The number of bits in a word depends on the computer. A computer's word length is a key element of its design. The smallest computers handle 4-bit words, and the largest can deal with words of 128 bits. Computers that use fixed-length words store a fixed number of characters at each address. For example, a fixed-length word of 16 bits contains 2 bytes (characters) that are stored at a single address. On the other hand, computers that use variable-length words have a separate address for each byte of storage. In this type of machine, the word length can be tailored to the individual job, and each byte of a word is accessed by its own address. BITS, BYTES, AND WORDS Finally, words can be defined by the types of numbers they hold. Fixed-point words hold fixed-point numbers, and the binary point is assumed to be at one end or the other. Floating-point words hold floating-point numbers and are comprised of two parts: one part for the mantissa, the other for the exponent. Big-endian and little-endian As we have learnt, computers operate using binary code, a language made up of 0s and 1s. This binary code forms the foundation of all computer operations A single bit is a 0 or a 1, and eight bits make up a byte. While some data, such as certain English characters, can be represented by a single byte, other data types require multiple bytes. The concept of endianness explains how these bytes are read and interpreted by computers. Endianness Endianness refers to the order in which bytes are arranged in memory. Different languages read their text in different orders. For example, English reads from left to right, while Arabic reads from right to left. Endianness works similarly for computers. Thus, if one computer reads bytes from left to right and another reads them from right to left, issues arise when these computers need to communicate. Therefore, endianness ensures that bytes in computer memory are read in a specific order. Each computer system is internally consistent with its own data, but the advent of the internet has led to more data sharing than ever before, and not all systems read data in the same order. Endianness comes in two primary forms: Big-endian (BE) and Little-endian (LE). Endianness Big-endian (BE): Stores the most significant byte (the “big end”) first. This means that the first byte (at the lowest memory address) is the largest, which makes the most sense to people who read left to right. Little-endian (LE): Stores the least significant byte (the “little end”) first. 
This means that the first byte (at the lowest memory address) is the smallest, which makes the most sense to people who read right to left. Memory addressing Computer memory addressing shows the lowest address at the top and the highest address at the bottom as indicated in the fig below. This means location $00 will contain the Most Significant byte and $04 the Least Significant byte for Big-endian, whilst location $00 will contain the Least Significant byte and $04 the Most Significant byte for Little-endian. $00 $01 $02 $03 $04 Example Represent the 32-bit integer $0A0B0C0D in memory using Big-endian and Little-endian respectively. Solution For $0A0B0C0D, Most significant Byte is 0A. This is stored in first memory location (lowest memory, $00) and 0D (Least Significant byte) is stored in the last memory location (highest memory, $03) in Big-endian Memory location $00 will contain the Least Significant byte, 0D and $03 the highest memory location will contain the Most Significant byte 0A for Little- endian. So we have 0A $00 0D $00 0B $01 0C $01 0C $02 0B $02 0D $03 0A $03 Big-endian Little-endian What do the solutions tell you in each case? Applications of Endianness To ensure interoperability among computer systems endianness is used for data operations in computers This makes it easy for computers to communicate with each other For example, Big-endianness is the dominant ordering in networking protocols, such as in the Internet protocol suite, and it is used in computer networks Little-endianness, on the other hand is the dominant ordering for processor architectures. For instance, Intel x86 processors and base RISC-V implementations and their associated memories use little-endianness. DIGITAL LOGIC Digital logic is a fundamental concept in electronics and telecommunications It is basically used for data representation, manipulation, and processing using discrete signals or binary digits (bits). Digital logic operates on binary values, which are represented by two states: 0 (low) and 1 (high). This binary representation allows for the encoding of information in a sequence of bits This makes it possible to perform logical operations, data retrieval and storage Logic Gates A core component of digital logic is the use of logic gates. The gates are the building blocks of digital circuits and allows current to pass through them For example, logic systems, which include computers, are built using logic gates. There are three basic logic gates: AND, OR, NOT and the derived gates: NAND, NOR, XOR, XNOR. The Basic gates –AND, OR, NOT AND Gate The logic circuit that implements the AND operation is known as an AND gate, shown below. Symbol A AB B The Boolean Expression is AB and it is read as, A and B There are two (A and B) or more inputs (on the left side of the gate) and only one output (AB) on the right side of the gate. For the AND gate: The output is true only when ALL inputs are true. Thus, when all inputs are a 1, then the output will be a 1 (for True=1). It therefore produces a HIGH output only if all the inputs are HIGH. When any or all inputs are LOW, the output is LOW or False or 0. OR GATE Symbol A A + B (output) B Inputs Boolean Expression: A+B read as, A or B The OR operation produces a HIGH output when any of the inputs is HIGH. When one input is HIGH or the other input is HIGH or both inputs are HIGH, the output is HIGH or True or 1. When both inputs are LOW, the output is LOW or false or 0. 
The OR operation is implemented by a logic circuit called OR gate, shown above NOT GATE Symbol Output X A means bubble and indicates inversion Input Boolean Expression: X = A (read as not A) The NOT operation changes one logic level to the opposite logic level. When the input is HIGH (1), the output is LOW (0). When the input is LOW, the output is HIGH. The NOT operation is implemented by using a logic circuit known as an inverter. The NOT gate is used to implement single inputs. Truth Table A Truth Table defines the output for ALL possible combinations of input. The total number of possible combinations of binary inputs to a gate is determined by the following formula: M = 2m Where M is the number of possible input combinations and m is the number of input variables. 2-Inputs AND gate Truth Table The number of Inputs are A and B. Therefore the number of possible combinations = 22 = 4 as can be seen from the table below ROW NO INPUTS OUTPUT A B X 0 0 0 0 1 0 1 0 2 1 0 0 3 1 1 1 Truth Table 0 and 1 are referred to as logic levels where 0 represents low voltage and 1 a high voltage. Note the sequence in which the inputs variables A and B have been written. This will be the standard adopted in this course The Row Number indicates the binary sequence. For example, 3 =1 12. This gives inputs of A = 1 and B = 1 as shown in Table. In much the same way, a Row Number 7 will give inputs A = 1, B = 1 and C = 1 since 7 = 1 1 12 for a 3-input gate. 2-Inputs OR gate Truth Table ROW NO INPUTS OUTPUT A B X 0 0 0 0 1 0 1 1 2 1 0 1 3 1 1 1 NOT gate Truth Table INPUT OUTPUT 0 1 1 0 Combinatorial or Combinational Logic Combinational logic consists of a logic circuit built with logic gates where the output is determined by the logic gates and the current set of inputs. There is no memory unlike sequential logic which has memory and the output does not depend only on the inputs. Logic gates are usually implemented using logic Circuits. Fig 2.1 shows the logic circuit for the Boolean expression A.B + C.D. It is made up of two AND gates joined together with an OR gate. A B AB + CD AB C D CD Fig 2.1 Implementation of a combinational Logic Circuit Example 1 Draw the logic circuit for the Boolean expression (CA+B)D Solution Start drawing from the left noting that the first term in brackets is made up of both an AND gate and OR gate. This then combines with another AND gate. C CA A CA + B (CA + B) D B D Example 2 Draw logic circuit for the logic function AB + CA and represent the function on a truth table. Solution A AB AB + AC B B AC C Notice that A is common to both terms in the logic function therefore a line is picked from a line connecting the first A point instead of drawing another A line. The Truth Table To draw the truth table, we first find the number of possible combinations by using the inputs A, B and C. Possible combinations = 23 = 8 since the number of inputs is 3. Row INPUTS OUTPUT No A B C B C AB AC AB + AC 0 0 0 0 1 1 0 0 0 1 0 0 1 1 0 0 0 0 2 0 1 0 0 1 0 0 0 3 0 1 1 0 0 0 0 0 4 1 0 0 1 1 1 0 1 5 1 0 1 1 0 1 1 1 6 1 1 0 0 1 1 0 1 7 1 1 1 0 0 1 1 1 Exercises Draw both logic circuits and truth tables for the following Boolean expressions: 1. AB + BC + AD 2. AB + B.C.D 3. AB + C D UNIVERSAL LOGIC GATES The term Universal logic gate refers to the property of a gate that permits any logic function to be implemented by that gate or by the combination of gates of that kind. There are two main types of such gates. The NAND and the NOR gates. 
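Before looking at the universal gates in detail, note that truth tables like the ones above can be generated mechanically. The sketch below is supplementary to the notes and uses Python's bitwise operators; it enumerates the 2^m input combinations in the same row-number order used here, shown for the Fig 2.1 function A.B + C.D and for the 2-input AND gate.

```python
from itertools import product

def truth_table(expr, names):
    """Print a truth table for a Boolean function of the named inputs.
    Rows are ordered by the binary value of the inputs, as in the notes."""
    print("ROW", " ".join(names), "X")
    for row, bits in enumerate(product((0, 1), repeat=len(names))):
        print(row, *bits, expr(*bits))

# Fig 2.1: X = A.B + C.D   (AND -> &, OR -> |, NOT -> 1 - value)
truth_table(lambda A, B, C, D: (A & B) | (C & D), ["A", "B", "C", "D"])

# 2-input AND gate: output is 1 only when all inputs are 1 (M = 2**2 = 4 rows).
truth_table(lambda A, B: A & B, ["A", "B"])
```

Writing NOT as 1 - A (or A ^ 1), the same generator can also be used later to confirm DeMorgan's theorems by comparing the columns of two expressions row by row.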
The universality of the NAND gate means it can be used as an inverter and that combinations of NAND gates can be used to implement the AND, OR and NOR operations. Similarly, the NOR gate can be used to implement the inverter, AND, OR and NAND operations. The universal logic gates are used to build logic systems instead of the basic AND, OR, NOT as they are more flexible. NAND gate Symbol: OR Boolean Expression: X = AB (read as, Not (A and B) Truth Table INPUTS OUTPUT A B AB X = AB 0 0 0 1 0 1 0 1 1 0 0 1 1 1 1 0 NOR gate Symbol: OR Boolean expression for NOR gate: X = A + B Truth Table INPUTS OUTPUT A B A+B X =A+ B 0 0 0 1 0 1 1 0 1 0 1 0 1 1 1 0 Exclusive OR Gate Symbol: Boolean expression: X =A B Truth Table: INPUTS OUTPUT A B X = AB The output is true when 0 0 0 the inputs are different 0 1 1 1 0 1 1 1 0 Exclusive NOR Gate Symbol: Boolean Expression: AB Truth Table: INPUTS OUTPUT A B X = AB The output is true when 0 0 1 the inputs are the same. 0 1 0 1 0 0 1 1 1 Exercise (i) Write a Boolean expression for the output X of the logic circuit below. (ii) Represent your logic expression on a truth table and mention the significance of representing logic functions on truth tables. A B X C Laws and Rules of Boolean Algebra Single Variable Rules Since we are familiar with the use of Sets, let’s use the set theorem to explain the single variable rules. The universal set will be represented by  and the null set as . The null set is the set which contains no elements whilst the universal set contains all the elements that can be found in all the other sets. The following properties of intersection and union of sets can be established for Set A as illustrated in the Venn diagram below. ’  (The Universal set) is the combination A of A’ and A. A   Laws and Rules of Boolean Algebra Intersection of Sets 1. A   =  2. A   = A 3. A  A = A 4. A  A =  where A is the complement of set A. Union of Sets 1. A   = A 2. A  =  3. A  A = A 4. A  A =  where A is the complement of set A. Laws and Rules of Boolean Algebra In Boolean Algebra,  is represented as 1 and  as 0.  (.) as AND gate and  (+) as OR Gate and complement as NOT Gate. We therefore have the following rules from above. AND GATE OR GATE 1. 0.A = 0 1. 0 + A = A 2. 1.A = A 2. 1 + A = 1 3. A.A = A 3. A + A = A 4. A.A = 0 4. A + A = 1 These Boolean Algebra rules are referred to as single variable rules as they involve only one variable. DEMORGAN’s THEOREMS DeMorgan, a mathematician, proposed two theorems that are an important part of Boolean Algebra. The theorems are: 1. The complement of a product of variables is equal to the sum of the complements of the variable. OR the complement of two or more variables ANDed is equivalent to the OR of the complements of the individual variables. The formula for expressing this theorem is AB..N = A + B + C +..+ N For example, AB = A + B DEMORGAN’s THEOREMS Row INPUTS OUTPUT No A B AB A B AB A+B 0 0 0 0 1 1 1 1 1 0 1 0 1 0 1 1 2 1 0 0 0 1 1 1 3 1 1 1 0 0 0 0 Since the entries of AB and (A + B) are the same from table 3.6, we say that the two expressions are equivalent. Therefore AB = A + B DEMORGAN’s THEOREMS 2. The complement of a sum of variables is equal to the product of the complements of the variables. OR the complement of two or more variables ORed is equivalent to the AND of the complement s of individual variables. The formula for expressing this theorem is A + B +.. 
+ N = A B C..N For example, A + B = A B DEMORGAN’s THEOREMS Row INPUTS OUTPUT No A B A+B A B A +B A B 0 0 0 0 1 1 1 1 1 0 1 1 1 0 0 0 2 1 0 1 0 1 0 0 3 1 1 1 0 0 0 0 The entries for A + B and A B are the same and are therefore both expressions are equivalent. Simplifying Boolean Functions A simplified Boolean function uses fewest gates possible to build in a given function. This makes it less expensive and complicated to implement. All the rules and theorems of Boolean algebra are applied to simplify the functions. The simplified function is equivalent to the original and for any combination of levels on the inputs; the same output from original and simplified circuits is obtained. Simplifying Boolean Functions (i) A + AB = A (1 + B) 1+ B = 1 A + AB = A.1 = A (ii) (A + B)(A + C) = AA + AC + AB + BC = A + AC + AB + BC = A(1 + C + B) + BC = A + BC Generally, A + BC = (A + B) (A + C) and this is a very important property that is used in the simplification of Logic functions. Simplifying Boolean Functions Apply De Morgan’s theorem to the following expression: (A + B + C) (D ) Solution: Let A + B + C = X and D = Y (A + B + C) (D) = X Y = X + Y From De Morgan’s law = (A + B + C) + D by substitution = ABC+ D Simplifying Boolean Functions Simplify the Boolean expression below and draw the corresponding circuit diagram: (E F G) G +E F G + H E Solution (E F G) G +E F G + H E = (E + F + G) G + E + F + G + H E = (E + F + G) G + E + F + G + H E =EG+FG+GG +E+F+G+HE = G (E + F + 1 +1) + E + F + H E = G + F + E + H E = G + F+(E+H)(E+E) =E+F+G+H E F E+F+G+H G H Simplified Circuit diagram Exercises 1. Draw the logic diagrams for each of the following Boolean expressions: (i) a.b.c + a.b.x (ii) PQW+RES+ PQS (iii) ABCD + A B C D + A B C D 2. Simplify the logic function below and draw the corresponding logic circuit AB + A C + A B C + C 3. Use Boolean Algebra to minimize the expression: A B D + A C D + ABD 4. Use DeMorgan’s theorems to simplify the logic function below: (A + B ) (C + D) + AC Draw the simplified logic diagram. 5. Simplify the following Boolean expressions and represent each result on a logic diagram: A B C + A B C + A B C + A B C + ABC (A + B)(A + C) Architecture Design Computer architecture describes the structure of a computer system that determines how its components interact with each other to execute the machine’s purpose of processing data. It provides the attributes of a system that are visible to the user and deals with what the system can do Some of the attributes include: memory hierarchy, addressing techniques, instruction sets, and bits used for data. In this section we will study the basic aspects of architecture design. Topics include: Memory organisation, Systems buses, Registers, Pipelining techniques, Von-Neumann architecture and Harvard architecture Memory Organisation Memory is the portion of a system for storing binary data in large quantities. Computer memories are basically made from semiconductor materials. Semiconductor memories consist of arrays of storage elements that are generally either latches (flip flop) or capacitors. Unit of Binary Data: Bits, Byte, Nibble and Words As a recap, memories store data in unit that have from one to eight bits. The smallest unit of binary data is the bit. In many applications, data are handled in an 8-bit called a byte or in multiples of 8-bit units. The byte can be split into two 4-bit units that are called nibbles. A complete unit of information is called a word and generally consists of one or more bytes. 
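As a concrete illustration of bits, nibbles, bytes and words (a supplementary sketch, not from the notes), bitwise masking and shifting extract the two nibbles of a byte and pack two bytes into a 16-bit word. The byte 1000 1101 is the same data byte used in the memory write example later in these notes.

```python
byte = 0b10001101                  # $8D
high_nibble = (byte >> 4) & 0x0F
low_nibble = byte & 0x0F
print(format(high_nibble, "04b"), format(low_nibble, "04b"))   # 1000 1101

# Packing two bytes into a 16-bit word, then splitting it again.
word = (0x12 << 8) | 0x34
print(hex(word))                          # 0x1234
print(hex(word >> 8), hex(word & 0xFF))   # 0x12 0x34
```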
The general definition of a word is a complete unit of information consisting of a unit of binary data. Unit of Binary Data When applied to computer instructions, a word is more specifically defined as two bytes (16 bits). As a rule, the following assembly language instructions used in computers are defined: 1. DW (Define Word) directive means to define data in 16-bit units. This definition is independent of the particular microprocessor (CPU) or the size of its data bus. 2. DB directive in Assembly language also allows definitions of bytes (8-bits) 3. DD directive defines double word (32-bits) 4. QD directive defines quad-words (64 bit) in Assembly language Memory address Each storage element in a memory can retain a 1 or 0 and is called a cell. Memories are made up of arrays of cells. The location of a unit of data in a memory array is called its address. We can think of memory as a series of little boxes each of which can store a piece of computer data. These little boxes are the locations or the addresses ( refer to Fig 3.1) Normally, an address may contain data (number or letter) and an instruction. Memory address a 1 2 3 (i) Address of shaded cell ‘b’ is 4 row 5, column 4 5 b (ii) The address of the shaded byte ‘a’ is row 6 b 1 7 8 1 2 3 4 5 6 7 8 1 2 3 4 5 6 7 8 Fig 3.1 Memory address Memory capacity The capacity of a memory is the total number of data unit that can be stored. For example, in the fig 3.1, the capacity is 8 x 8 = 64 bits (number of cells). This also gives 8-bytes capacity. Computer memory is measured in units of thousands of locations and in units of millions of locations. The following can be defined: Memory capacity 1024 locations  1 KB (kilobytes) i.e. 1 KB = 1024 bytes  1000 bytes (each location contains 1 byte) This is equivalent to ½ page of text 1 MB = 1024 x 1024 bytes = 1,048, 576 bytes 1,000,000 bytes= approximately 106 (Million) bytes This is equivalent to 500 pages of text 1GB (Gigabyte) = 109 (billion) bytes approximately = 1,073,741,824 exact amount of bytes This is equivalent to 500,000 pages of text; 1TB (Terabyte) = 1012 (Trillion) bytes approximately = 1,099,511,627,776 exact amount of bytes Which is equivalent to 500,000,000 number of pages of text. Basic Memory Operations Since a memory stores binary data, data must be put into the memory and data must be copied from the memory when needed. The write operation puts data into a specified address in the memory The read operation on the other hand, takes data out of a specified address in the memory. Data units go into the memory during a write operation and come out of memory during the read operation on a set of lines called the data bus. The data bus is bi-directional, which means that data can go into the memory or out of the memory. In the case of byte-organised memory, the data bus has at least eight lines so that all eight bits in a selected address are transferred in parallel (together). Basic Memory Operations For a write or read operation, an address is selected by placing a binary code representing the desired address on a set of lines called address bus. The address code is decoded internally, and the appropriate address is selected. Note that the address bus is unidirectional. That’s always to the memory. This is because it’s responsible for the location of the data in memory. The number of lines in the address bus depends on the capacity of the memory. 
For example, a 15-bit address code can select 32,768 (215) locations in the memory, A 16-bit address code can select 65,536 locations in the memory, and so on. Block diagram of a memory Address Memory Array Decoder Data Bus Address (2-way) Bus Read Write Fig 3.2 Block diagram of a memory showing address bus, address decoder, bidirectional data bus and read/write inputs. The Write operation To store a byte of data in the memory, a code held in the address register is placed on the address bus. (Refer to Fig 3.3) Once the address code is on the bus, the address decoder decodes the address and selects the specified location which is 2 (0102 = 210) in the memory. The memory then gets a write command and the data byte held in the data register is placed on the data bus and stored in the selected memory address, 2 When a new data byte is written into a memory address, the current data byte stored at that address is overwritten and destroyed. Illustration of the write operation Address Register Data Register 010 Byte-organised memory array 10001101 Address 0 1 1 0 0 0 0 1 0 Decoder 1 1 1 0 1 0 0 0 1 1 2 2 1 0 0 0 1 1 0 1 3 1 1 0 0 0 0 0 1 4 1 1 1 0 1 0 0 0 5 0 0 0 0 1 1 0 1 Address bus Data bus 6 0 1 1 0 0 1 1 0 77 0 0 0 0 0 0 0 0 write 3 Fig 3.3 Illustration of the write operation The Read Operation Again, a code held in the address register is placed on the address bus. Once the address code is on the bus, the address decoder decodes the address and selects the specified location in the memory. The memory then gets a read command and a ‘copy’ of the data byte that is stored in the selected memory address is placed on the data bus and loaded into the data register. When a data byte is read from a memory address, it also remains stored at that address 4 and not destroyed. This is called a non-destructive read. (Note that address location 4 was selected after the decoding because 100 in binary is equivalent to 4 in decimal) The Read operation Fig 3.4 Illustration of the Read operation Exercise Draw a block diagram to illustrate how the contents of a register of 1000 1101 will be stored in memory given that the address register Contains 1011 The Memory Stack (Last In-First Out (LIFO) memories) A stack is a storage device in which the information or item stored last is retrieved first. It is therefore referred to as Last In-First Out (LIFO) memories Basically, a computer system follows a memory stack organisation, and it can be implemented using registers or the RAM The LIFO memory is found in application involving microprocessors and other computing systems. It allows data to be stored and then recalled in reverse order, that is, the last data byte to be stored is the first data byte to be retrieved. Register Stacks A LIFO memory is commonly referred to as a pushdown stack. In some systems, it is implemented with a group of registers. A stack can consist of any number of registers, but the register at the top is called the top-of-stack as shown in Fig 3.5 1 Top-of-stack 2.. nth Register Fig 3.5 Register stack Loading Data into the Register Stack To show how data is loaded into the stack, consider a byte of data 11011011, which is to be loaded into the stack. It is loaded in parallel onto the top of the stack. Each successive btye pushes the previous one down into the next register as shown in the fig 3.6. Notice that the new data byte is always loaded into the top register and the previously stored bytes are pushed deeper into the stack. The name push down stack comes from this characteristic. 
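The last-in-first-out behaviour illustrated in Figs 3.6 and 3.7 below can be modelled with a Python list used as a pushdown stack. This is an illustrative sketch only; real register stacks and RAM stacks behave as described in the notes.

```python
stack = []                      # the top of the stack is the end of the list

# Push the three data bytes from Fig 3.6.
for byte in ("11011011", "11110110", "01010011"):
    stack.append(byte)          # push: the new byte goes on top
    print("pushed", byte, "stack:", stack)

# Pop them back: the last byte stored is the first byte retrieved (Fig 3.7).
while stack:
    print("popped", stack.pop())
# popped 01010011, then 11110110, then 11011011
```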
Loading Data into the Register Stack 1 1 0 1 1 0 1 1 1 1 1 1 0 1 1 0 01 0 1 0 0 11 1 0 1 1 0 1 1 0 1 1 11 1 1 0 1 1 0 01 0 1 0 0 1 1 1 11 0 1 1 0 1 1 11 1 1 0 1 1 0 1 1 0 1 1 0 1 1 Fig 3.6 First data byte Second data byte Third data byte 01010011 11011011 is pushed onto the stack 11110110 pushed onto stack pushed onto stack Pulling data from the stack The bytes are retrieved in the reversed order. The last byte entered is always at the top of the stack, the other bytes pop up into the next higher locations. This is shown in fig 3.7. 0 1 0 1 0 0 1 1 1 1 1 1 0 1 1 0 1 0 0 1 0 0 1 1 1 0 1 0 1 0 0 1 1 1 1 1 1 0 1 1 0 1 0 0 1 0 0 1 1 1 1 1 1 1 0 1 1 0 1 0 0 1 0 0 1 1 1 1 0 1 1 0 1 1 1111 1 1 Fig 3.7 Storage of 3 data bytes. Third byte is pulled from stack. After second byte was pulled from The last byte is at the top of The stored second byte then stack, the first byte that was stored, stack pops up to the top of stack pops up to the top of stack RAMSTACK Another approach to LIFO memory used in microprocessor-based systems is the allocation of a section of RAM as the stack rather than the use of dedicated set of registers. In the RAM stack, unlike the register stack, the data itself does not move but the top of the stack moves under control of a register called the stack pointer (SP). This special separate register contains the address of the top of the stack. Consider the illustration in Fig 3.8 The stack pointer therefore points at the top of the stack RAMSTACK Fig 3.8a The stack Pointer is initially at Fig 3.8b The stack pointer is decremented FFCD before the word by two and the data word is placed in 0001001000110100 is two locations prior to the pushed onto the stack original stack printer location How Data is pushed onto the RAMSTACK The stack pointer is initially at address $FFCD, which is the top of the stack as shown in Fig. 3.8a. The stack pointer is then decremented (decreased) by two to $FFCB. This moves the top of the stack to a lower memory address as shown in Fig 3.8b. Notice that the top of the stack is not stationery as in the fixed register stack but moves downward (to a lower memory addresses) in the RAM Data words are then pushed onto the stack. After the data is stored, the top of the stack is at $FFCB as shown in fig 3.8b. The POP Operation for the RAMSTACK ▪ The POP operation reads data from the RAMSTACK In the POP operation for RAM stack, the last word stored in the stack is read first. The stack printer FFCB is incremented (increased) by two to address $FFCD and a POP operation is performed as shown in fig 3.9 The POP Operation for the RAMSTACK Data word stored, 0001001000110100 is copied (popped) from the stack. Keep in mind that RAMs are non-destructive when read, so the data word still remains in the memory after a pop operation. A data word is destroyed only when a new word is written over it. A RAM stack can be of any depth, depending on the number of continuous memory addresses assigned for that purpose. POP OPERATION IN THE RAMSTACK Byte Organised RAM Let’s consider a random-access memory that is byte organized, that is one in which each address contains eight bits. If a 16-bit address is used, then the binary address 0000 0000 0000 1111, for example, which can be written, as 000F in hexadecimal, will be one of the memory addresses. A 16-bit address can have a minimum hexadecimal value of 000016 ($0000) and a maximum value of FFFF16 ($FFFF). In this notation, a 64kbyte memory array can be represented as in fig. 
3.10, with the lowest memory address $0000 and the highest memory address $FFFF. Note also that the number of memory locations is 2^16: 1 KB = 1024 bytes, so 64 KB = 1024 x 64 = 65,536 = 2^16. This corresponds to the highest memory location $FFFF = F x 16^3 + F x 16^2 + F x 16^1 + F x 16^0 = 61,440 + 3,840 + 240 + 15 = 65,535, which gives 65,536 locations once $0000 is included.

Byte Organised RAM
Fig 3.10 Representation of a 64-kbyte memory with 16-bit addresses, running from $0000 (low memory) at the top to $FFFF (high memory) at the bottom.

Example
Calculate the capacity in megabytes of a memory that has 5,196 addresses and can store a word at each address.
Solution
A word usually consists of 16 bits, so every memory location contains 16 bits.
Total memory locations = 5,196
Memory capacity = 5,196 x 16 = 83,136 bits
1 MB = 1024 x 1024 bytes = 1,048,576 bytes, and 1 byte = 8 bits, so 1 MB = 1,048,576 x 8 = 8,388,608 bits.
Capacity = 83,136 / 8,388,608 = approximately 0.01 MB
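The addressing and capacity arithmetic above is easy to script. A supplementary sketch (not part of the notes) that reproduces the address-line figures and the worked example, taking 1 MB as 8,388,608 bits:

```python
# Number of locations addressable with n address lines: 2**n.
for n in (15, 16):
    print(n, "address lines ->", 2 ** n, "locations")   # 32768, 65536

# Worked example: 5,196 addresses, one 16-bit word per address.
locations = 5196
bits_per_word = 16
capacity_bits = locations * bits_per_word                # 83,136 bits
bits_per_mb = 1024 * 1024 * 8                            # 8,388,608 bits in 1 MB
print(capacity_bits, "bits =", round(capacity_bits / bits_per_mb, 4), "MB")  # ~0.0099 MB
```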
