Summative Architecture and Organization Reviewer PDF



Summary

This document covers Karnaugh maps, a graphical method for simplifying Boolean expressions, including the rules and procedures for K-map simplification, example truth tables, and comparisons to Boolean algebra techniques. It also covers CPU basics and organization (registers, the ALU, the control unit, buses, clocks, the I/O subsystem, and memory organization and addressing) and instruction set architecture, including the SAP-1 computer.



Summative Architecture and Organization Reviewer: Module 5: Karnaugh Maps

💡 Using Boolean identities for reduction is awkward and can be very difficult. Kmaps, on the other hand, provide a precise set of steps to follow to find the minimal representation of a function, and thus the minimal circuit that function represents.

Karnaugh Maps, or Kmaps, are a graphical way to represent Boolean functions. A map is simply a table used to enumerate the values of a given Boolean expression for different input values. The rows and columns correspond to the possible values of the function's inputs. Each cell represents the output of the function for those possible inputs.

A minterm is a Boolean expression resulting in 1 for the output of a single cell, and 0s for all other cells, in a Karnaugh map or truth table. A product term is a minterm if it includes all of the variables exactly once, either complemented or not complemented. For example, if there are two input variables, x and y, there are four minterms: x'y', x'y, xy', and xy, which represent all of the possible input combinations for the function. If the input variables are x, y, and z, then there are eight minterms: x'y'z', x'y'z, x'yz', x'yz, xy'z', xy'z, xyz', and xyz.

Minterm   x   y
x'y'      0   0
x'y       0   1
xy'       1   0
xy        1   1

Minterm   x   y   z
x'y'z'    0   0   0
x'y'z     0   0   1
x'yz'     0   1   0
x'yz      0   1   1
xy'z'     1   0   0
xy'z      1   0   1
xyz'      1   1   0
xyz       1   1   1

Example: In a truth table we write:

x   y   xy
0   0   0
0   1   0
1   0   0
1   1   1

In a Kmap we arrange the same output values in a grid indexed by the input values. (figure)

Who developed Karnaugh maps? Maurice Karnaugh, a telecommunications engineer, developed the Karnaugh map at Bell Labs in 1953 while designing digital logic-based telephone switching circuits.

The use of Karnaugh maps: Karnaugh maps reduce logic functions more quickly and easily compared to Boolean algebra. By reduce we mean simplify, reducing the number of gates and inputs.
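As an illustrative sketch (not part of the reviewer itself), the minterm enumeration described above can be generated in Python; the function name `minterms` is my own:

```python
from itertools import product

def minterms(variables):
    """Enumerate all minterms for the given variable names.

    Each minterm is the product term that is 1 for exactly one row of
    the truth table: a variable appears complemented (') where its
    input bit is 0, and uncomplemented where its input bit is 1.
    """
    terms = []
    for bits in product((0, 1), repeat=len(variables)):
        term = "".join(v if b else v + "'" for v, b in zip(variables, bits))
        terms.append((term, bits))
    return terms

# Two inputs give 2^2 = 4 minterms; three inputs give 2^3 = 8.
for term, bits in minterms(["x", "y"]):
    print(bits, term)
```

Running this lists x'y', x'y, xy', and xy alongside their input combinations, matching the two-variable table above.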
We like to simplify logic to a lowest-cost form to save costs by eliminating components. We define lowest cost as being the lowest number of gates with the lowest number of inputs per gate. Given a choice, most students do logic simplification with Karnaugh maps rather than Boolean algebra once they learn this tool.

Rules for Kmap Simplification

1. The groups can contain only 1s; no 0s.
2. Only 1s in adjacent cells can be grouped; diagonal grouping is not allowed.
3. The number of 1s in a group must be a power of 2.
4. The groups must be as large as possible while still following all rules.
5. All 1s must belong to a group, even if it is a group of one.
6. Overlapping groups are allowed.
7. Wrap-around groups are allowed.
8. Use the fewest number of groups possible.

Example: Two-Variable Map, F(x,y) = xy

Truth table:

x   y   xy
0   0   0
0   1   0
1   0   0
1   1   1

Kmap: (figure)

Kmap classifications: (figure)

Kmap simplification example: (figure)

Kmap example for 3 variables: (figure)

Module 6: CPU Basics and Organization, The Bus, Clocks, I/O Subsystem, Memory Organization and Addressing

CPU Basics and Organization

We know that a computer must manipulate binary-coded data. We also know that memory is used to store both data and program instructions (also in binary). Somehow, the program must be executed and the data must be processed correctly. The central processing unit (CPU) is responsible for fetching program instructions, decoding each instruction that is fetched, and performing the indicated sequence of operations on the correct data. To understand how computers work, you must first become familiar with their various components and the interaction among these components. All computers have a central processing unit.
This unit can be divided into two pieces. The first is the datapath, which is a network of storage units (registers) and arithmetic and logic units (for performing various operations on data) connected by buses (capable of moving data from place to place), where the timing is controlled by clocks. The second CPU component is the control unit, a module responsible for sequencing operations and making sure the correct data is where it needs to be at the correct time. Together, these components perform the tasks of the CPU: fetching instructions, decoding them, and finally performing the indicated sequence of operations. The performance of a machine is directly affected by the design of the datapath and the control unit.

Registers are used in computer systems as places to store a wide variety of data, such as addresses, program counters, or data necessary for program execution. A register is a hardware device that stores binary data. Registers are located on the processor so information can be accessed very quickly. Data processing on a computer is usually done on fixed-size binary words that are stored in registers. Therefore, most computers have registers of a certain size. Common sizes include 16, 32, and 64 bits. The number of registers in a machine varies from architecture to architecture, but is typically a power of 2, with 16 and 32 being most common. Registers contain data, addresses, or control information. Some registers are specified as "special purpose" and may contain only data, only addresses, or only control information. Other registers are more generic and may hold data, addresses, and control information at various times. Information is written to registers, read from registers, and transferred from register to register. Registers are not addressed in the same way memory is addressed (each memory word has a unique binary address beginning with location 0).
Registers are addressed and manipulated by the control unit itself. In modern computer systems, there are many types of specialized registers: registers to store information, registers to shift values, registers to compare values, and registers that count. There are "scratchpad" registers that store temporary values, index registers to control program looping, stack pointer registers to manage stacks of information for processes, status registers to hold the status or mode of operation (such as overflow, carry, or zero conditions), and general-purpose registers that are the registers available to the programmer. Most computers have register sets, and each set is used in a specific way. For example, the Pentium architecture has a data register set and an address register set. Certain architectures have very large sets of registers that can be used in quite novel ways to speed up execution of instructions.

The arithmetic logic unit (ALU) carries out the logic operations (such as comparisons) and arithmetic operations (such as add or multiply) required during program execution. Generally, an ALU has two data inputs and one data output. Operations performed in the ALU often affect bits in the status register (bits are set to indicate actions such as whether an overflow has occurred). The ALU knows which operations to perform because it is controlled by signals from the control unit.

The control unit (CU) is the "policeman" or "traffic manager" of the CPU. It monitors the execution of all instructions and the transfer of all information. The control unit extracts instructions from memory, decodes these instructions, makes sure data is in the right place at the right time, tells the ALU which registers to use, services interrupts, and turns on the correct circuitry in the ALU for the execution of the desired operation.
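The way ALU operations set status-register bits (carry, zero, and overflow, as mentioned above) can be illustrated with a small sketch. This is an 8-bit model of my own for illustration, not any particular architecture:

```python
def alu_add(a, b, width=8):
    """Add two registers of the given bit width and set status flags:
    carry (unsigned overflow out of the top bit), zero (result is all
    zeros), and overflow (signed overflow in two's complement)."""
    mask = (1 << width) - 1
    raw = a + b
    result = raw & mask
    sign = 1 << (width - 1)
    flags = {
        "carry": raw > mask,
        "zero": result == 0,
        # Signed overflow: operands share a sign, result's sign differs.
        "overflow": (a & sign) == (b & sign) and (result & sign) != (a & sign),
    }
    return result, flags

value, flags = alu_add(200, 100)  # 300 does not fit in 8 bits
print(value, flags["carry"])      # 44 True
```

Here 200 + 100 = 300 wraps to 44 in 8 bits, so the carry bit is set; the control unit (or a subsequent conditional-branch instruction) can then inspect these bits.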
The control unit uses a program counter register to find the next instruction for execution and a status register to keep track of overflows, carries, borrows, and the like.

A bus can be point-to-point, connecting two specific components, or it can be multipoint, a common pathway that connects a number of devices, requiring these devices to share the bus.

Point-to-point buses: (figure)

Multipoint buses: (figure)

Components of a typical bus: (figure)

The figure above shows a typical bus consisting of data lines, address lines, control lines, and power lines. Often the lines of a bus dedicated to moving data are called the data bus. These data lines contain the actual information that must be moved from one location to another. Control lines indicate which device has permission to use the bus and for what purpose (reading or writing from memory or from an I/O device, for example). Control lines also transfer acknowledgments for bus requests, interrupts, and clock synchronization signals. Address lines indicate the location (in memory, for example) that the data should be either read from or written to. The power lines provide the electrical power necessary.

Due to the different types of information buses transport and the various devices that use the buses, buses themselves have been divided into different types. Processor-memory buses are short, high-speed buses that are closely matched to the memory system on the machine to maximize the bandwidth (transfer of data) and are usually very design specific. I/O buses are typically longer than processor-memory buses and allow for many types of devices with varying bandwidths. These buses are compatible with many different architectures. A backplane bus is actually built into the chassis of the machine and connects the processor, the I/O devices, and the memory (so all devices share one bus).
Many computers have a hierarchy of buses, so it is not uncommon to have two buses (for example, a processor-memory bus and an I/O bus) or more in the same system. High-performance systems often use all three types of buses.

A backplane bus: (figure)

With asynchronous buses, control lines coordinate the operations, and a complex handshaking protocol must be used to enforce timing. To read a word of data from memory, for example, the protocol would require steps similar to the following:

1. ReqREAD: This bus control line is activated, and the data memory address is put on the appropriate bus lines at the same time.
2. ReadyDATA: This control line is asserted when the memory system has put the required data on the data lines for the bus.
3. ACK: This control line is used to indicate that the ReqREAD or the ReadyDATA has been acknowledged.

Using a protocol instead of the clock to coordinate transactions means that asynchronous buses scale better with technology and can support a wider variety of devices.

To use a bus, a device must reserve it, because only one device can use the bus at a time. Bus masters are devices that are allowed to initiate transfer of information (control bus), whereas bus slaves are modules that are activated by a master and respond to requests to read and write data (so only masters can reserve the bus). Both follow a communications protocol to use the bus, working within very specific timing requirements. In a very simple system, the processor is the only device allowed to become a bus master. This is good in terms of avoiding chaos, but bad because the processor now is involved in every transaction that uses the bus. In systems with more than one master device, bus arbitration is required. Bus arbitration schemes must provide priority to certain master devices while, at the same time, making sure lower-priority devices are not starved out.
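The three-step asynchronous read handshake above can be sketched as a toy simulation. The control-line names (ReqREAD, ReadyDATA, ACK) come from the protocol steps in the text; the memory dictionary and the trace format are illustrative assumptions:

```python
def async_read(memory, address):
    """Simulate a master reading one word from a slave (memory)
    over an asynchronous bus, recording each protocol step."""
    trace = []
    # Step 1: assert ReqREAD and put the address on the bus lines.
    trace.append(("ReqREAD", address))
    # Step 2: memory asserts ReadyDATA once data is on the data lines.
    data = memory[address]
    trace.append(("ReadyDATA", data))
    # Step 3: ACK indicates the transfer has been acknowledged.
    trace.append(("ACK", None))
    return data, trace

data, trace = async_read({0x1F: 99}, 0x1F)
print(data)                          # 99
print([step for step, _ in trace])   # ['ReqREAD', 'ReadyDATA', 'ACK']
```

Because each step waits on the previous one rather than on a shared clock edge, a slow device simply delays step 2; nothing else in the protocol has to change.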
Bus arbitration schemes fall into four categories:

Daisy chain arbitration: This scheme uses a "grant bus" control line that is passed down the bus from the highest-priority device to the lowest-priority device. (Fairness is not ensured, and it is possible that low-priority devices are "starved out" and never allowed to use the bus.) This scheme is simple but not fair.

Centralized parallel arbitration: Each device has a request control line to the bus, and a centralized arbiter selects who gets the bus. Bottlenecks can result using this type of arbitration.

Distributed arbitration using self-selection: This scheme is similar to centralized arbitration, but instead of a central authority selecting who gets the bus, the devices themselves determine who has highest priority and who should get the bus.

Distributed arbitration using collision detection: Each device is allowed to make a request for the bus. If the bus detects any collisions (multiple simultaneous requests), the device must make another request. (Ethernet uses this type of arbitration.)

The Clock

Every computer contains an internal clock that regulates how quickly instructions can be executed. The clock also synchronizes all of the components in the system. As the clock ticks, it sets the pace for everything that happens in the system, much like a metronome or a symphony conductor. The CPU uses this clock to regulate its progress, checking the otherwise unpredictable speed of the digital logic gates. The CPU requires a fixed number of clock ticks to execute each instruction. Therefore, instruction performance is often measured in clock cycles (the time between clock ticks) instead of seconds. The clock frequency (sometimes called the clock rate or clock speed) is measured in MHz, where 1 MHz is equal to 1 million cycles per second (and 1 hertz is 1 cycle per second). The clock cycle time (or clock period) is simply the reciprocal of the clock frequency.
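This reciprocal relationship can be sketched with two small helper functions (names of my own choosing):

```python
def cycle_time_ns(freq_mhz):
    """Clock period is the reciprocal of clock frequency.
    1 MHz = 1 million cycles/second, so period in ns = 1000 / MHz."""
    return 1000.0 / freq_mhz

def frequency_mhz(period_ns):
    """The inverse direction: frequency from cycle time."""
    return 1000.0 / period_ns

print(cycle_time_ns(800))  # 1.25 ns for an 800 MHz machine
print(frequency_mhz(2))    # 500.0 MHz for a 2 ns cycle time
```

These are the same conversions worked through in the prose examples that follow.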
For example, an 800 MHz machine has a clock cycle time of 1/800,000,000 seconds, or 1.25 ns. If a machine has a 2 ns cycle time, then it is a 500 MHz machine. Most machine instructions require 1 or 2 clock cycles, but some can take 35 or more. We present the following formula to relate seconds to cycles:

CPU time = seconds / program = (instructions / program) × (average cycles / instruction) × (seconds / cycle)

It is important to note that the architecture of a machine has a large effect on its performance. Two machines with the same clock speed do not necessarily execute instructions in the same number of cycles. For example, a multiply operation on an older Intel 286 machine required 20 clock cycles, but on a new Pentium, a multiply operation can be done in 1 clock cycle, which implies the newer machine would be 20 times faster than the 286 even if they both had the same internal system clock. In general, multiplication requires more time than addition, floating-point operations require more cycles than integer ones, and accessing memory takes longer than accessing registers.

Generally, when we mention the term clock, we are referring to the system clock, or the master clock that regulates the CPU and other components. However, certain buses also have their own clocks. Bus clocks are usually slower than CPU clocks, causing bottleneck problems.

System components have defined performance bounds, indicating the maximum time required for the components to perform their functions. Manufacturers guarantee their components will run within these bounds in the most extreme circumstances. When we connect all of the components together in a serial fashion, where one component must complete its task before another can function properly, it is important to be aware of these performance bounds so we are able to synchronize the components properly. However, many people push the bounds of certain system components in an attempt to improve system performance. Overclocking is one method people use to achieve this goal.
Although many components are potential candidates, one of the most popular components for overclocking is the CPU. The basic idea is to run the CPU at clock and/or bus speeds above the upper bound specified by the manufacturer. Although this can increase system performance, one must be careful not to create system timing faults or, worse yet, overheat the CPU. The system bus can also be overclocked, which results in overclocking the various components that communicate via the bus. Overclocking the system bus can provide considerable performance improvements, but can also damage the components that use the bus or cause them to perform unreliably.

The Input/Output Subsystem

Input and output (I/O) devices allow us to communicate with the computer system. I/O is the transfer of data between primary memory and various I/O peripherals. Input devices such as keyboards, mice, card readers, scanners, voice recognition systems, and touch screens allow us to enter data into the computer. Output devices such as monitors, printers, plotters, and speakers allow us to get information from the computer. These devices are not connected directly to the CPU. Instead, there is an interface that handles the data transfers. This interface converts the system bus signals to and from a format that is acceptable to the given device. The CPU communicates with these external devices via input/output registers.

This exchange of data is performed in two ways. In memory-mapped I/O, the registers in the interface appear in the computer's memory map, and there is no real difference between accessing memory and accessing an I/O device. Clearly, this is advantageous from the perspective of speed, but it uses up memory space in the system. With instruction-based I/O, the CPU has specialized instructions that perform the input and output.
Although this does not use memory space, it requires specific I/O instructions, which implies it can be used only by CPUs that can execute these specific instructions. Interrupts play a very important part in I/O, because they are an efficient way to notify the CPU that input or output is available for use.

Memory Organization and Addressing

You can envision memory as a matrix of bits. Each row, implemented by a register, has a length typically equivalent to the word size of the machine. Each register (more commonly referred to as a memory location) has a unique address; memory addresses usually start at zero and progress upward. The figure below illustrates this.

a) N 8-Bit Memory Locations, b) M 16-Bit Memory Locations: (figure)

An address is almost always represented by an unsigned integer. Normally, memory is byte-addressable, which means that each individual byte has a unique address. Some machines may have a word size that is larger than a single byte. For example, a computer might handle 32-bit words (which means it can manipulate 32 bits at a time through various instructions), but still employ a byte-addressable architecture. In this situation, when a word uses multiple bytes, the byte with the lowest address determines the address of the entire word. It is also possible that a computer might be word-addressable, which means each word (not necessarily each byte) has its own address, but most current machines are byte-addressable (even though they have 32-bit or larger words). A memory address is typically stored in a single machine word.

High-order interleaving, the more intuitive organization, distributes the addresses so that each module contains consecutive addresses, as we see with the 32 addresses in the figure above. Low-order interleaved memory places consecutive words of memory in different memory modules. The figure below shows low-order interleaving on 32 addresses.
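The addressing and interleaving ideas above can be sketched in Python. The 32-address example follows the figures; the four-module count and the helper names are assumptions of mine for illustration:

```python
import math

def address_bits(num_items):
    """The number of items to be addressed determines the number of
    bits in an address: n items need ceil(log2(n)) bits."""
    return max(1, math.ceil(math.log2(num_items)))

def high_order_module(addr, num_addresses, num_modules):
    """High-order interleaving: each module holds a block of
    consecutive addresses, so the high-order bits select the module."""
    return addr // (num_addresses // num_modules)

def low_order_module(addr, num_modules):
    """Low-order interleaving: consecutive addresses land in
    different modules, so the low-order bits select the module."""
    return addr % num_modules

# 32 addresses spread over an assumed 4 modules:
print(address_bits(32))             # 5 bits address 32 items
print(high_order_module(10, 32, 4)) # address 10 -> module 1 (block 8-15)
print(low_order_module(10, 4))      # address 10 -> module 2 (10 mod 4)
```

Note how the same address maps to different modules under the two schemes; low-order interleaving is what lets consecutive accesses hit different modules in parallel.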
The key concepts to focus on are: (1) memory addresses are unsigned binary values (although we often view them as hex values because it is easier), and (2) the number of items to be addressed determines the number of bits in the address. Although we could always use more bits for the address than required, that is seldom done, because minimization is an important concept in computer design.

Module 7: Instruction Set Architecture

The ISA specifies what the processor is capable of doing and how it gets accomplished. So the instruction set architecture is basically the interface between your hardware and the software. The only way that you can interact with the hardware is through the instruction set of the processor. To command the computer, you need to speak its language: the instructions are the words of a computer's language, and the instruction set is basically its vocabulary. Unless you know the vocabulary, and you have a very good vocabulary, you cannot gain good benefits out of the machine.

The ISA is the portion of the machine which is visible to either the assembly language programmer, a compiler writer, or an application programmer. It is the only interface that you have, because the instruction set architecture is the specification of what the computer can do, and the machine has to be fabricated in such a way that it will execute whatever has been specified in your ISA. The only way that you can talk to your machine is through the ISA. This gives you an idea of the interface between the hardware and software.

Let us assume you have a high-level program written in C which is independent of the architecture on which you want to work. This high-level program has to be translated into an assembly language program which is specific to a particular architecture.
Let us say you find that this consists of a number of instructions like LOAD, STORE, ADD, etc., where whatever you had written in terms of a high-level language has now been translated into a set of instructions which are specific to a particular architecture. All of these instructions are part of the instruction set architecture of the MIPS architecture. These are all English-like, and this is not understandable to the processor, because the processor is, after all, made up of digital components which can understand only zeros and ones. So this assembly language will have to be further translated into machine language, object code which consists of zeros and ones. So the translation from your high-level language to your assembly language and the binary code will have to be done with the compiler and the assembler.

SAP-1 (Simple As Possible-1) Architecture

The Simple-As-Possible (SAP)-1 computer is a very basic model of a microprocessor explained by Albert Paul Malvino. The SAP-1 design contains the basic necessities for a functional microprocessor. Its primary purpose is to develop a basic understanding of how a microprocessor works and interacts with memory and other parts of the system, like input and output. The instruction set is very limited and simple. The features of the SAP-1 computer are:

1. W bus: a single 8-bit bus for address and data transfer.
2. 16 bytes of memory (RAM).
3. Registers: an accumulator and a B register, each of 8 bits.
4. Program counter: counts from 0000 to 1111 during program execution.
5. Memory Address Register (MAR) to store memory addresses.
6. Adder/subtractor for addition and subtraction instructions.
7. A control unit.
8. A simple output.
9. Six machine states reserved for each instruction.

The instruction format of the SAP-1 computer is (XXXX)(XXXX): a 4-bit opcode followed by a 4-bit operand address.
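The (XXXX)(XXXX) instruction format above can be sketched as an encoder/decoder. The opcode values follow Malvino's standard SAP-1 assignment (LDA, ADD, SUB, OUT, HLT); treat them as illustrative if your course uses a different table:

```python
# Each 8-bit SAP-1 instruction packs a 4-bit opcode in the upper
# nibble and a 4-bit RAM address (0-15, since SAP-1 has 16 bytes of
# memory) in the lower nibble.
OPCODES = {"LDA": 0b0000, "ADD": 0b0001, "SUB": 0b0010,
           "OUT": 0b1110, "HLT": 0b1111}

def encode(mnemonic, address=0):
    """Pack a mnemonic and operand address into one instruction byte."""
    assert 0 <= address <= 0xF, "SAP-1 addresses are 4 bits (0-15)"
    return (OPCODES[mnemonic] << 4) | address

def decode(byte):
    """Split an instruction byte back into (mnemonic, address)."""
    opcode, address = byte >> 4, byte & 0xF
    name = {v: k for k, v in OPCODES.items()}[opcode]
    return name, address

print(bin(encode("LDA", 0x9)))  # LDA 9 encodes as 0000 1001
print(decode(0b00011010))       # ('ADD', 10)
```

For example, a tiny program such as LDA 9, ADD 10, OUT, HLT assembles to four bytes, one per 16-byte RAM location, which is exactly why the program counter only needs to count from 0000 to 1111.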
