Computer Organization and Architecture PDF
Summary
This document is a collection of notes on computer organization and architecture, covering Boolean algebra, types of computer, processor and bus organization, addressing modes, computer arithmetic (adders, Booth's algorithm, division) and floating-point representation including the IEEE 754 standard. It provides an overview of key concepts in digital computer design.
Full Transcript
Video chapters

Chapter-1 (Introduction): Boolean algebra, types of computer, functional units of a digital system and their interconnections, buses, bus architecture, types of buses and bus arbitration. Register, bus and memory transfer. Processor organization, general register organization, stack organization and addressing modes.

Chapter-2 (Arithmetic and Logic Unit): Look-ahead carry adders. Multiplication: signed-operand multiplication, Booth's algorithm and the array multiplier. Division and logic operations. Floating-point arithmetic operations, arithmetic and logic unit design, and the IEEE standard for floating-point numbers.

Chapter-3 (Control Unit): Instruction types, formats, instruction cycles and sub-cycles (fetch, execute, etc.), micro-operations, execution of a complete instruction. Program control, Reduced Instruction Set Computers. Hardwired and microprogrammed control: microprogram sequencing, and the concepts of horizontal and vertical microprogramming.

Chapter-4 (Memory): Basic concepts and hierarchy, semiconductor RAM memories, 2D and 2½D memory organization, ROM memories. Cache memories: concept, design issues, performance, address mapping and replacement. Auxiliary memories: magnetic disk, magnetic tape and optical disks. Virtual memory: concept and implementation.

Chapter-5 (Input/Output): Peripheral devices, I/O interface, I/O ports, interrupts: interrupt hardware, types of interrupts and exceptions. Modes of data transfer: programmed I/O, interrupt-initiated I/O and Direct Memory Access; I/O channels and processors. Serial communication: synchronous and asynchronous communication, standard communication interfaces.

Chapter-6 (Pipelining): Uniprocessing, multiprocessing, pipelining, speed-up, structural hazards, control hazards, data hazards, operand forwarding.

Boolean algebra

The signals in most present-day electronic digital systems use just two discrete values and are therefore said to be binary. Boolean algebra was introduced by George Boole in his first book, The Mathematical Analysis of Logic (1847), and set forth more fully in An Investigation of the Laws of Thought (1854). Boole introduced the concept of the binary number system into the mathematical theory of logic and developed its algebra, now known as Boolean algebra.

In mathematics and mathematical logic, Boolean algebra is the branch of algebra in which the values of the variables are the truth values true and false, usually denoted 1 and 0 respectively. This contrasts with elementary algebra, where the values of the variables are numbers and the primary operations are addition and multiplication. The main operations of Boolean algebra are:
- conjunction (AND), denoted ∧
- disjunction (OR), denoted ∨
- negation (NOT), denoted ¬

Turing Machine

The Church-Turing thesis (1936) states that any algorithmic procedure that can be carried out by a human being or a computer can be carried out by a Turing machine. It is universally accepted by computer scientists that the Turing machine provides an ideal theoretical model of a computer.

Digital System

In the 1930s, while studying switching circuits, Claude Shannon observed that the rules of Boole's algebra could also be applied in this setting, and he introduced switching algebra as a way to analyze and design circuits by algebraic means in terms of logic gates.
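As a quick, concrete illustration of the three Boolean operations listed above, here is a minimal Python sketch that models them on 0/1 values and checks De Morgan's laws by exhaustive enumeration. The helper names (b_and, b_or, b_not) are invented for illustration; De Morgan's laws are among the Boolean laws referred to later in these notes.

```python
from itertools import product

# Boolean operations on 0/1 values
def b_and(a, b):  # conjunction (AND)
    return a & b

def b_or(a, b):   # disjunction (OR)
    return a | b

def b_not(a):     # negation (NOT)
    return 1 - a

# Check De Morgan's laws for every combination of two variables
for a, b in product((0, 1), repeat=2):
    assert b_not(b_and(a, b)) == b_or(b_not(a), b_not(b))   # NOT(a AND b) = (NOT a) OR (NOT b)
    assert b_not(b_or(a, b)) == b_and(b_not(a), b_not(b))   # NOT(a OR b)  = (NOT a) AND (NOT b)

print("De Morgan's laws hold for all 0/1 combinations")
```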
These logic concepts have been adapted for the design of digital hardware since 1938, when Claude Shannon (the father of information theory) organized and systematized Boole's work. Because Shannon already had Boolean algebra at his disposal, he cast his switching algebra as the two-element Boolean algebra. Efficient implementation of Boolean functions is a fundamental problem in the design of combinational logic circuits.

Boolean constants are denoted 0 or 1. Boolean variables are quantities that can take different values at different times; they may represent input, output or intermediate signals. We can have any number of variables, usually written in lower case as a, b, c, ..., and they satisfy all the Boolean laws, which will be discussed later.

Types of computer

Historically there have been two types of computers:
- Fixed-program computers / dedicated devices / embedded systems: their function is very specific and they cannot be reprogrammed, e.g. calculators and washing machines.
- Stored-program computers / general-purpose computers / von Neumann architecture: these can be programmed to carry out many different tasks; applications are stored on them, hence the name.

Fixed-program computers / dedicated devices / embedded systems

They are designed to perform a specific task; their functionality is written as a program that is permanently fused into a chipset, e.g. a washing machine or a microwave. Embedded systems are simple computers designed to do a specific task, with small microprocessors inside them. These microprocessors are hardwired and cannot do anything except that specific task.

Stored-program computers / general-purpose computers / von Neumann architecture

Modern computers are based on the stored-program concept introduced by John von Neumann, and can perform anything that can theoretically be done by a Turing machine. These computers work on the stored-program concept: a programmer writes a program and stores it in memory, and the computer executes it. Based on the requirement, the program can be changed, and with it the functionality.

Machine / CPU architecture

In general a CPU contains three important components:
- Control unit: generates control signals (the master generator) to control every other part of the CPU. It directs all input and output flow, fetches code for instructions, and controls how data moves around the system.
- Registers: a few important registers are present in every processor, such as the program counter, which contains the address of the next instruction; the instruction register, which holds the current instruction; and the base register, which contains the base address of the program.
- ALU: a complex combinational circuit that can perform arithmetic operations, bit-shifting operations and logical operations, e.g. addition, subtraction and comparisons.
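As a rough illustration of the kinds of operations such an ALU carries out, here is a minimal Python sketch. It is a hypothetical 8-bit model; the alu() helper and its operation names are invented for illustration and do not describe any particular processor.

```python
MASK = 0xFF  # model 8-bit registers

def alu(op, a, b=0):
    """A toy ALU: arithmetic, shift and logic operations on 8-bit values."""
    if op == "ADD":
        return (a + b) & MASK
    if op == "SUB":                  # two's-complement subtraction
        return (a - b) & MASK
    if op == "AND":
        return a & b
    if op == "OR":
        return a | b
    if op == "NOT":
        return ~a & MASK
    if op == "SHL":                  # shift left by one bit
        return (a << 1) & MASK
    if op == "SHR":                  # logical shift right by one bit
        return a >> 1
    if op == "CMP":                  # comparison: 1 if equal, else 0
        return 1 if a == b else 0
    raise ValueError("unknown operation: " + op)

print(alu("ADD", 200, 100))  # 44, because the result wraps around at 8 bits
print(alu("SUB", 5, 9))      # 252, i.e. -4 in 8-bit two's complement
```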
Now the question is: what are the main elements we need?
- Memory: to store a program.
- ALU: the circuit that performs operations.
- Registers: fast memory built from flip-flops, with load, clear and increment inputs.
- Timing circuit (sequence counter): orders operations such as fetch, decode and execute, and generates the timing signals.
- Control unit: generates control signals to select registers and other circuits and to drive their inputs; we need this special unit to give signals to all the components.
- Flags: one-bit status information.
- Bus: connects the different components together; data transfer over it can be implemented with multiplexers.

How, in general, are operations performed? Memory -> register -> ALU (perform operation) -> register -> memory.

Address bus: used to identify the correct I/O device among the many I/O devices. The CPU puts the address of a specific I/O device on the address lines; all devices monitor the address bus and decode it, and the device whose address matches activates its control and data lines.

Control bus: after selecting a specific I/O device, the CPU sends a function code on the control lines. The selected device (interface) reads that function code and executes it, e.g. an I/O command, control command or status command.

Data bus: in the final step, depending on the operation, either the CPU puts data on the data lines and the device stores it, or the device puts data on the data lines and the CPU stores it.

Bus arbitration

Bus arbitration is the method used to decide which device gets access to the common bus when multiple devices request it simultaneously, ensuring data integrity and system stability. Without bus arbitration, simultaneous access could result in data corruption and system malfunction, making this mechanism essential for orderly and reliable data transfer.

Daisy-chaining method: a simple and cheap method in which all the bus masters share the same line for making bus requests. The bus grant signal propagates serially through each master until it reaches the first one that is requesting the bus. That master blocks further propagation of the grant signal, so no other requesting module receives the grant and none of them can access the bus.

Polling method: a controller generates a unique address for each master (device) based on its priority; the number of address lines correlates with the number of masters in the system. The controller cycles through the generated addresses, and when a master recognizes its own address it activates a "busy" signal and gains access to the bus for data transfer.

Fixed-priority or independent-request method: each master has a separate pair of bus request and bus grant lines, and each pair has a priority assigned to it. The built-in priority decoder within the controller selects the highest-priority request and asserts the corresponding bus grant signal.
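To make the arbitration schemes above more concrete, here is a minimal Python sketch of a fixed-priority (independent-request) arbiter and of a daisy-chain grant pass. The function names and the list-of-booleans request model are illustrative assumptions, not part of the original notes.

```python
def fixed_priority_arbiter(requests):
    """requests[i] is True if master i is requesting the bus.
    Lower index means higher priority. Returns the index granted, or None."""
    for master, requesting in enumerate(requests):
        if requesting:
            return master
    return None

def daisy_chain_grant(requests):
    """The grant signal ripples from master 0 onwards; the first requesting
    master absorbs it, so masters further down the chain never see it."""
    grant = True
    granted = None
    for master, requesting in enumerate(requests):
        if grant and requesting:
            granted = master
            grant = False        # grant is blocked from propagating further
    return granted

requests = [False, True, False, True]    # masters 1 and 3 want the bus
print(fixed_priority_arbiter(requests))  # 1
print(daisy_chain_grant(requests))       # 1 (same winner, different mechanism)
```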
Processor organization

Registers refer to high-speed storage areas in the CPU; the data processed by the CPU are fetched from the registers. There are different types of registers used in the architecture.

The data register (DR) is used to store any data that is to be transferred to memory or that has been fetched from memory. The accumulator (AC) is a general-purpose processing register; it holds the intermediate arithmetic and logic results of operations performed in the ALU, since the ALU is not directly connected to memory.

The instruction register (IR) is used to store the instruction fetched from memory, so that we can analyse it with a decoder and work out what the instruction is meant to do. The temporary register (TR) is used for holding temporary data during processing.

The memory address register (AR) has 12 bits, since this is the width of a memory address (it uses the least significant bits of the bus). It stores the memory locations of instructions or data that need to be fetched from or stored to memory. The program counter (PC) also has 12 bits and holds the address of the next instruction to be read from memory after the current instruction is executed. The PC goes through a counting sequence and causes the computer to read sequential instructions previously stored in memory.

Two registers are used for input and output. The input register (INPR) receives an 8-bit character from an input device and passes it to the ALU, then to the accumulator, and then to memory. The output register (OUTR) holds an 8-bit character for an output device such as a screen or printer.

The outputs of seven registers and memory are connected to the common bus. The specific output that is selected onto the bus lines at any given time is determined by the binary value of the selection variables S2, S1 and S0. The number along each output shows the decimal equivalent of the required binary selection; for example, the number along the output of DR is 3, so the 16-bit outputs of DR are placed on the bus lines when S2 S1 S0 = 011. (Figure in the original: a 2×4 decoder.)

The particular register whose LD (load) input is enabled receives the data from the bus during the next clock-pulse transition. The memory receives the contents of the bus when its write input is activated, and places its 16-bit output onto the bus when its read input is activated and S2 S1 S0 = 111.

The input register INPR and the output register OUTR have 8 bits each and communicate with the eight least significant bits of the bus. INPR is connected to provide information to the bus, but OUTR can only receive information from the bus; there is no transfer from OUTR to any of the other registers.

Some additional points to note:
- The 16 lines of the common bus receive information from six registers and the memory unit.
- The bus lines are connected to the inputs of six registers and the memory.
- Five registers have three control inputs: LD (load), INR (increment) and CLR (clear).

General register organization (figures only in the original notes)

Stack organization

A stack is an ordered list in which addition of a new data item and deletion of an existing data item are done from only one end, known as the top of stack (TOS). The element added last is the first to be removed, and the element inserted first is removed last, so it is called a last-in, first-out (LIFO) or first-in, last-out (FILO) list.
The most frequently accessible element in the stack is the topmost element, whereas the least accessible element is at the bottom of the stack.

Stack pointer register (SP): contains a 6-bit binary value, which is the address of the top of the stack. Since SP has only 6 bits, it cannot hold a value greater than 111111, i.e. 63.
Full register: stores one bit of information; it is set to 1 when the stack is full.
Empty register: stores one bit of information; it is set to 1 when the stack is empty.
Data register: holds the data to be written into or read from the stack.

Memory stack

Memory stacks operate on a last-in, first-out (LIFO) principle, where the most recently added element is the first to be removed. A special register called the stack pointer (SP) points to the top of the stack; it can contain binary values up to a limit determined by the architecture, for example 6 bits, giving a maximum of 63 in decimal. To monitor the stack's status, full and empty registers are used, each storing a single bit to indicate whether the stack is full or empty. A data register holds the data to be written into or read from the stack, serving as an intermediary between the CPU and the stack. These elements collectively form the essential organization of a memory stack.

Addressing modes

An addressing mode specifies the different ways in which a reference to the operand can be made. The effective address is the final address of the location where the operand is stored. Calculation of the effective address can be done in two ways: non-computable addressing, and computable addressing (which involves arithmetic).

Criteria for a good addressing mode:
- It should be fast.
- The length of the instruction should be small.
- It should support pointers.
- It should support looping constructs and indexing of data structures.
- It should support program relocation.

Immediate mode addressing

The operand is itself part of the instruction, e.g. ADD 3 means add 3 to the accumulator.
Advantages: can be used for constants whose values are already known; extremely fast, since no memory reference is required.
Disadvantages: cannot be used with variables whose values are unknown at the time the program is written; cannot be used for large constants whose values do not fit in the small operand field of the instruction.
Application: mostly used when the required data is moved directly to a register or memory location.

Direct mode addressing (absolute addressing)

The instruction contains the address of the memory location where the data is present (the effective address). Only one memory reference is required to access the data.
Advantages: for variables whose values are unknown, direct addressing is the simplest option; there is no restriction on the range of data values, up to the largest value the system can hold; it can be used to access global variables whose addresses are known at compile time.
Disadvantages: relatively slow compared to immediate mode; the number of variables that can be used is limited; it does not scale to large computations.

Indirect mode addressing

The instruction stores the address at which the effective address (the address of the variable) is stored; using that, we can access the actual data. Two memory references are required: the first to get the effective address and the second to access the data.
Advantages: no limitation on the number or size of variables; implementation of pointers is feasible and relatively more secure.
Disadvantage: relatively slow, as memory must be referenced more than once.

Implied mode addressing

The operands are specified implicitly in the definition of the instruction. All register-reference instructions that use an accumulator are implied-mode instructions. Zero-address instructions in a stack-organized computer are also implied-mode instructions, so this is also known as stack addressing mode. Examples: increment accumulator, complement accumulator.

Register mode addressing

Variables are stored in CPU registers instead of memory, and the instruction gives the register number.
Advantages: extremely fast, as register access time is less than cache access time; because the number of registers is small, the number of bits required to specify a register is also small.
Disadvantage: because the number of registers is very small, this method can only be used for a few variables.

Register indirect mode addressing

The instruction specifies a register, and that register holds the actual memory address of the variable, i.e. the effective address. When the same address is required again and again, this approach is very useful. The same register can be made to supply different data by changing the value held in the register, so it can reference memory without paying the price of holding a full memory address in the instruction. In pointer arithmetic it provides more flexibility than register mode. Register indirect mode can be further improved by auto-increment and auto-decrement variants, used when we read or work through a contiguous data structure such as an array or matrix.

Base register (offset) mode

In a multiprogramming environment, the location of a process in memory keeps changing from one place to another, so using direct addresses in the process would create a problem. To solve this, we save the starting address of the program in a register called the base register. This is a very popular approach in most computers. Instead of giving a direct branch address, the instruction gives an offset:

Effective address = base address + offset (from the instruction)

The advantage is that even if the process is shifted in memory, we only need to change the content of the base register; the final address is the sum of what is given in the instruction and what is held in the base register.

Index addressing mode

This mode is generally used when the CPU has a number of registers, one of which can be used as an index register. It is especially useful when accessing the elements of a large array. The idea is to give the base address of the array in the instruction, while the index we want to access is held in the register; by changing the index in the index register, the same instruction can access different elements of the array. So the base is present inside the instruction and the index is present inside the register.

Relative addressing mode

The effective address of the operand is obtained by adding the content of the program counter to the address part of the instruction.
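To tie the addressing modes together, here is a minimal Python sketch that computes the effective address and fetches the operand for several of the modes above, using a toy memory and register file. The memory contents, register names and the operand() helper are illustrative assumptions, not part of the original notes.

```python
memory = {100: 555, 200: 100, 300: 42}   # toy memory: address -> contents
registers = {"R1": 100, "BASE": 250, "X": 50, "PC": 20}

def operand(mode, field):
    """Return (effective_address, operand) for a toy instruction field."""
    if mode == "immediate":                     # operand is in the instruction itself
        return None, field
    if mode == "direct":                        # field is the operand's address
        return field, memory[field]
    if mode == "indirect":                      # field points at the effective address
        ea = memory[field]
        return ea, memory[ea]
    if mode == "register":                      # operand is in a register
        return None, registers[field]
    if mode == "register_indirect":             # register holds the effective address
        ea = registers[field]
        return ea, memory[ea]
    if mode == "base":                          # EA = base register + offset
        ea = registers["BASE"] + field
        return ea, memory[ea]
    if mode == "relative":                      # EA = PC + offset
        ea = registers["PC"] + field
        return ea, memory[ea]
    raise ValueError("unknown mode")

print(operand("immediate", 3))             # (None, 3)
print(operand("direct", 100))              # (100, 555)
print(operand("indirect", 200))            # (100, 555): 200 -> 100 -> 555
print(operand("register_indirect", "R1"))  # (100, 555)
print(operand("base", 50))                 # (300, 42): 250 + 50
print(operand("relative", 80))             # (100, 555): 20 + 80
```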
Full adder

A full adder is a combinational logic circuit that performs the arithmetic sum of three input bits: A (first operand bit), B (second operand bit) and Cin (the carry from the previous, lower significant position). If An and Bn are the nth-order bits of the numbers A and B, then Cn is the carry generated from the addition of the (n−1)th-order bits. The two output bits are the same as for a half adder: Sum and Cout. When the augend and addend contain more significant digits, the carry obtained from the addition of two bits is added to the next-higher-order pair of significant bits.

Truth table:
A B Cin | Sum Cout
0 0 0   |  0   0
0 0 1   |  1   0
0 1 0   |  1   0
0 1 1   |  0   1
1 0 0   |  1   0
1 0 1   |  0   1
1 1 0   |  0   1
1 1 1   |  1   1

From the Karnaugh maps in the original notes, the simplified expressions are Sum = A ⊕ B ⊕ Cin and Cout = AB + ACin + BCin. (The original notes also cover the full subtractor, presented as figures only.)

Four-bit parallel binary adder / ripple adder

A full adder can add two 1-bit numbers and one incoming carry, so to add binary numbers with more than one bit, additional full adders must be employed. For example, a four-bit binary adder can be constructed from four full adders connected in cascade, with the carry output of each adder connected to the carry input of the next-higher-order adder. In general, an n-bit parallel adder is constructed from n full adders. The main scope for improvement in the parallel (ripple) adder is that it is very slow because of carry propagation delay; the remedy is the look-ahead carry generator.

Look-ahead carry adder

The carry propagation time is an important attribute of the adder because it limits the speed with which two numbers can be added. The solution to the delay is to increase the complexity of the hardware in such a way that the carry delay time is reduced. The most widely used technique employs the principle of look-ahead carry: logic gates look at the lower-order bits of the augend and addend to determine whether a higher-order carry will be generated. For inputs A (A3 A2 A1 A0) and B (B3 B2 B1 B0) it uses two functions:

Gi = Ai · Bi, the carry generate: it produces a carry of 1 when both Ai and Bi are 1, regardless of the input carry Ci.
Pi = Ai ⊕ Bi, the carry propagate: it determines whether a carry into stage i will propagate into stage i + 1.

The sum and carry outputs are Si = Pi ⊕ Ci and Ci+1 = Gi + Pi·Ci. Substituting the value of each Ci from the previous equation (with C0 = 0 for plain addition) gives:

C1 = G0 + P0·C0
C2 = G1 + P1·G0 + P1·P0·C0
C3 = G2 + P2·G1 + P2·P1·G0 + P2·P1·P0·C0
C4 = G3 + P3·G2 + P3·P2·G1 + P3·P2·P1·G0 + P3·P2·P1·P0·C0

Since the Boolean function for each output carry is expressed in sum-of-products form, each function can be implemented with one level of AND gates followed by an OR gate (or by two levels of NAND). All output carries are therefore generated after a delay of only two gate levels, so outputs S1 through S3 have equal propagation delay times.
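As a check on these equations, here is a minimal Python sketch of a 4-bit carry look-ahead addition built directly from the Gi and Pi terms, compared against ordinary integer addition. It illustrates the logic only; the helper name cla_add_4bit is invented for this sketch.

```python
def cla_add_4bit(a, b, c0=0):
    """4-bit carry look-ahead addition using Gi = Ai·Bi and Pi = Ai xor Bi."""
    A = [(a >> i) & 1 for i in range(4)]
    B = [(b >> i) & 1 for i in range(4)]
    G = [A[i] & B[i] for i in range(4)]   # carry generate
    P = [A[i] ^ B[i] for i in range(4)]   # carry propagate

    # All carries computed directly from G, P and c0 (two gate levels in hardware)
    c = [c0, 0, 0, 0, 0]
    c[1] = G[0] | (P[0] & c[0])
    c[2] = G[1] | (P[1] & G[0]) | (P[1] & P[0] & c[0])
    c[3] = G[2] | (P[2] & G[1]) | (P[2] & P[1] & G[0]) | (P[2] & P[1] & P[0] & c[0])
    c[4] = (G[3] | (P[3] & G[2]) | (P[3] & P[2] & G[1])
            | (P[3] & P[2] & P[1] & G[0]) | (P[3] & P[2] & P[1] & P[0] & c[0]))

    S = [P[i] ^ c[i] for i in range(4)]   # Si = Pi xor Ci
    return sum(S[i] << i for i in range(4)) | (c[4] << 4)

# Exhaustive check against ordinary addition for all 4-bit operands
assert all(cla_add_4bit(a, b) == a + b for a in range(16) for b in range(16))
print(cla_add_4bit(0b1011, 0b0110))   # 17, i.e. 1011 + 0110 = 10001
```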
Four-bit ripple adder/subtractor

The subtraction A − B can be done by taking the 2's complement of B and adding it to A, i.e. A + (−B). The circuit for subtracting A − B consists of an adder with XOR gates placed between each data input B and the corresponding input of the full adder, and a mode input M that controls the operation. When M = 0, B ⊕ 0 = B: the full adders receive the value of B, the input carry is 0, and the circuit performs A plus B, so it acts as an adder. When M = 1, B ⊕ 1 = B' and C0 = 1: the B inputs are all complemented and a 1 is added through the input carry, so the circuit performs A plus the 2's complement of B, acting as a subtractor.

Booth's algorithm

Andrew Donald Booth (11 February 1918 – 29 November 2009) was a British electrical engineer, physicist and computer scientist who was an early developer of magnetic drum memory for computers; he is known for Booth's multiplication algorithm. Booth's algorithm optimizes binary multiplication by reducing the number of additions and subtractions based on the multiplier bits. The process involves examining pairs of multiplier bits and then either adding, subtracting or leaving the multiplicand unchanged before shifting the partial product (a small sketch of this recoding appears at the end of this section, after the division examples).

Array multiplier

An array multiplier uses a grid of full and half adders to perform nearly simultaneous addition of product terms. AND gates form the product terms before they are fed into the adder array. Unlike sequential multipliers that examine bits one at a time, array multipliers form the product bits all at once, making the operation faster. The speed advantage comes at the cost of a large number of gates, which became economical with the advent of integrated circuits. (The original slides work through an example with operands 1100 and 0110.)

Restoring division algorithm

Example: dividend Q = 1011, divisor M = 00011 (? marks the quotient bit not yet determined after the shift).

n | A     | Q    | Action/Operation
4 | 00000 | 1011 | initialization
  | 00001 | 011? | shift left AQ
  | 11110 | 011? | A = A − M
3 | 00001 | 0110 | A < 0: restore A, Q0 ← 0
  | 00010 | 110? | shift left AQ
  | 11111 | 110? | A = A − M
2 | 00010 | 1100 | A < 0: restore A, Q0 ← 0
  | 00101 | 100? | shift left AQ
  | 00010 | 100? | A = A − M
1 | 00010 | 1001 | A ≥ 0: Q0 ← 1
  | 00101 | 001? | shift left AQ
  | 00010 | 001? | A = A − M
0 | 00010 | 0011 | A ≥ 0: Q0 ← 1

Result: quotient Q = 0011, remainder A = 00010.

Non-restoring division algorithm

Same operands: dividend Q = 1011, divisor M = 00011.

n | A     | Q    | Action/Operation
4 | 00000 | 1011 | initialization
  | 00001 | 011? | shift left AQ
  | 11110 | 011? | A = A − M
3 | 11110 | 0110 | A < 0: Q0 ← 0 (no restore)
  | 11100 | 110? | shift left AQ
  | 11111 | 110? | A = A + M
2 | 11111 | 1100 | A < 0: Q0 ← 0
  | 11111 | 100? | shift left AQ
  | 00010 | 100? | A = A + M
1 | 00010 | 1001 | A ≥ 0: Q0 ← 1
  | 00101 | 001? | shift left AQ
  | 00010 | 001? | A = A − M
0 | 00010 | 0011 | A ≥ 0: Q0 ← 1

Result: quotient Q = 0011, remainder A = 00010.

Q: Add −35 and −31 in binary using 8-bit registers, in signed 1's complement and signed 2's complement.
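As promised above, here is a minimal Python sketch of Booth's recoding for signed multiplication. It is an illustrative implementation on fixed-width two's-complement values, not taken from the original notes; the register layout (A, Q, Q−1, M) follows the usual textbook description.

```python
def booth_multiply(m, q, bits=8):
    """Multiply two signed integers with Booth's algorithm on `bits`-bit values."""
    mask = (1 << bits) - 1
    A = 0                     # accumulator (upper half of the partial product)
    Q = q & mask              # multiplier
    Q_1 = 0                   # the bit most recently shifted out of Q
    M = m & mask              # multiplicand

    for _ in range(bits):
        pair = (Q & 1, Q_1)
        if pair == (1, 0):            # 10: subtract the multiplicand from A
            A = (A - M) & mask
        elif pair == (0, 1):          # 01: add the multiplicand to A
            A = (A + M) & mask
        # 00 or 11: no arithmetic, just shift

        # arithmetic right shift of the combined register A, Q, Q-1
        Q_1 = Q & 1
        Q = ((Q >> 1) | ((A & 1) << (bits - 1))) & mask
        A = ((A >> 1) | (A & (1 << (bits - 1)))) & mask   # keep the sign bit

    product = (A << bits) | Q         # 2*bits-wide two's-complement product
    if product & (1 << (2 * bits - 1)):
        product -= 1 << (2 * bits)    # convert back to a signed Python int
    return product

print(booth_multiply(-7, 3))    # -21
print(booth_multiply(12, -12))  # -144
```

The two-bit pattern (Q0, Q−1) decides whether to add, subtract or do nothing before each arithmetic right shift, which is exactly the recoding rule described above.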
Floating-point representation

The problem with the representations we have studied so far is that they do not work well if the number to be stored is either very small or very large, because it takes a very large amount of memory: a number such as 6.023 × 10^23 would require around 70 bits to store. So in scientific or statistical applications, storing very small or very large numbers is a problem.

Floating-point representation is a special kind of sign-magnitude representation. A floating-point number is stored in mantissa/exponent form, i.e. m × r^e. The mantissa is a signed-magnitude fraction in most cases, and the exponent is stored in biased form.

The biased exponent is an unsigned number representing the signed true exponent. If the biased exponent field contains k bits, the bias is 2^(k−1).
- Explicit representation: V = (−1)^S × (0.M)_2 × 2^(E − bias)
- Implicit (normalized) representation: V = (−1)^S × (1.M)_2 × 2^(E − bias); this has more precision than explicit normalization.

Biased exponent E = true exponent e + bias. The range of the true exponent is from −2^(k−1) to +2^(k−1) − 1, where k is the number of bits assigned to the exponent; after adding the bias 2^(k−1), the new range goes from 0 to 2^k − 1.

How to convert a signed number into floating-point representation

The floating-point normalized number distribution is not uniform: representable numbers are dense towards zero and sparse towards the maximum value. This uneven distribution means that the effect of rounding is negligible near zero and dominant near the maximum value.

Q: Represent −21.75 with s = 1, k = 7, m = 8 (sign, exponent and mantissa bits respectively).

IEEE 754 floating-point standard

The IEEE Standard for Floating-Point Arithmetic (IEEE 754) is a technical standard for floating-point arithmetic established in 1985 by the Institute of Electrical and Electronics Engineers (IEEE). The standard addressed many problems found in the diverse floating-point implementations of the time that made them difficult to use reliably and portably. Many hardware floating-point units use the IEEE 754 standard.

Key points of the standard:
- It gives a provision for ±0 and ±∞ by reserving certain exponent/mantissa patterns.
- There are several storage formats, from half precision (16 bits) up to much longer formats.
- The base of the system is 2; the floating-point number can be stored either in implicit normalized form or in fractional form.
- If the biased exponent field contains k bits, the bias is 2^(k−1) − 1.
- Certain exponent/mantissa patterns do not denote any number (NaN, "not a number").

Name      | Common name         | Mantissa bits | Exponent bits | Exponent bias         | E min    | E max
binary16  | Half precision      | 10            | 5             | 2^(5−1) − 1 = 15      | −14      | +15
binary32  | Single precision    | 23            | 8             | 2^(8−1) − 1 = 127     | −126     | +127
binary64  | Double precision    | 52            | 11            | 2^(11−1) − 1 = 1023   | −1022    | +1023
binary128 | Quadruple precision | 112           | 15            | 2^(15−1) − 1 = 16383  | −16382   | +16383
binary256 | Octuple precision   | 236           | 19            | 2^(19−1) − 1 = 262143 | −262142  | +262143

Single precision special values:
Sign bit (1) | Exponent (8)   | Mantissa (23) | Value
0            | 00…0 (E = 0)   | 00…0 (M = 0)  | +0
1            | 00…0 (E = 0)   | 00…0 (M = 0)  | −0
0            | 11…1 (E = 255) | 00…0 (M = 0)  | +∞
1            | 11…1 (E = 255) | 00…0 (M = 0)  | −∞
0/1          | 11…1 (E = 255) | M ≠ 0         | NaN
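To make the single-precision layout concrete, here is a minimal Python sketch that packs a value with the standard struct module and pulls the sign, biased exponent and mantissa fields back out. The choice of −21.75 echoes the exercise above, but encoded here in IEEE binary32 rather than in the custom s = 1, k = 7, m = 8 format; the helper name is invented for this sketch.

```python
import struct

def float_to_binary32_fields(x):
    """Encode x as IEEE 754 single precision and return its bit pattern and fields."""
    (bits,) = struct.unpack(">I", struct.pack(">f", x))    # raw 32-bit pattern
    sign = bits >> 31
    biased_exponent = (bits >> 23) & 0xFF                   # 8-bit exponent field
    mantissa = bits & 0x7FFFFF                              # 23-bit fraction field
    return bits, sign, biased_exponent, mantissa

bits, s, e, m = float_to_binary32_fields(-21.75)
print(hex(bits))          # 0xc1ae0000
print(s, e, m)            # 1 131 3014656
print(e - 127)            # true exponent: 4, since -21.75 = -1.010111 x 2^4

# Special patterns reserved by the standard
print(hex(float_to_binary32_fields(float("inf"))[0]))   # 0x7f800000 (+infinity)
print(hex(float_to_binary32_fields(float("nan"))[0]))   # typically 0x7fc00000 (a NaN)
```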