Chapter 1: Computer System Overview

Summary

This is a textbook excerpt titled 'Chapter 1: Computer System Overview', from Operating Systems: Internals and Design Principles. It describes what an operating system does (exploiting the hardware resources of one or more processors, providing a set of services to system users, and managing secondary memory and I/O devices) and surveys the basic elements of a computer system: the processor, main memory, I/O modules, and the system bus. Later sections cover interrupts, the memory hierarchy and cache memory, I/O techniques (programmed I/O, interrupt-driven I/O, and DMA), and the evolution of the microprocessor.

Full Transcript


Operating Systems: Internals and Design Principles
Chapter 1: Computer System Overview

Operating System
◼ Exploits the hardware resources of one or more processors
◼ Provides a set of services to system users
◼ Manages secondary memory and I/O devices

Basic Elements
◼ Processor, main memory, I/O modules, and the system bus

Processor
◼ Controls the operation of the computer
◼ Performs the data processing functions
◼ Referred to as the Central Processing Unit (CPU)

Main Memory
◼ Volatile: the contents of memory are lost when the computer is shut down
◼ Referred to as real memory or primary memory

I/O Modules
◼ Move data between the computer and its external environment, such as:
  ▪ storage (e.g. hard drive)
  ▪ communications equipment
  ▪ terminals

System Bus
◼ Provides for communication among processors, main memory, and I/O modules

Microprocessor
◼ Invention that brought about desktop and handheld computing
◼ Processor on a single chip
◼ Fastest general-purpose processor
◼ Multiprocessors: each chip (socket) contains multiple processors (cores)

Graphical Processing Units (GPUs)
◼ Provide efficient computation on arrays of data using Single-Instruction Multiple Data (SIMD) techniques
◼ Used for general numerical processing, e.g.:
  ▪ physics simulations for games
  ▪ computations on large spreadsheets

Digital Signal Processors (DSPs)
◼ Deal with streaming signals such as audio or video
◼ Used to be embedded in devices like modems
◼ Encoding/decoding speech and video (codecs)
◼ Support for encryption and security

System on a Chip (SoC)
◼ To satisfy the requirements of handheld devices, the microprocessor is giving way to the SoC
◼ Components such as DSPs, GPUs, codecs and main memory, in addition to the CPUs and caches, are on the same chip

Instruction Execution
◼ A program consists of a set of instructions stored in memory
◼ Two steps:
  ▪ the processor reads (fetches) instructions from memory, one instruction at a time
  ▪ the processor executes each instruction

Basic Instruction Cycle
(figure: the fetch stage and the execute stage repeat for each instruction until the program halts)

Interrupts
◼ An interrupt is a mechanism used by the operating system and hardware to pause the current CPU task and handle more urgent or higher-priority tasks.
◼ It is like an emergency stop button that can interrupt whatever the CPU is doing so it can handle more important work (e.g., hardware events or user input).
◼ Provided to improve processor utilization:
  ▪ most I/O devices are slower than the processor
  ▪ without interrupts, the processor must pause and wait for the device
  ▪ this is a wasteful use of the processor
◼ Interrupts help the CPU focus on important tasks while letting the OS handle less urgent tasks in the background.

Common Classes of Interrupts
◼ I/O: generated by an I/O controller, to signal normal completion of an operation or to signal a variety of error conditions.
  Example: a keyboard sends an interrupt when a key is pressed.
◼ Software: triggered by software instructions; used by programs or the OS to request system-level services or handle errors.
  Example: a program performs a system call, like reading a file or allocating memory.
◼ Timer: generated by a timer within the processor at regular intervals; allows the OS to perform certain functions on a regular basis.
  Example: the OS uses timer interrupts to implement process scheduling, allowing multiple programs to run concurrently.
◼ Hardware failure: generated by a failure in hardware.
  Example: power failure or memory parity error.
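To make the fetch-execute cycle and the interrupt check concrete, here is a minimal C sketch (not from the textbook) of a toy machine that runs the basic instruction cycle and checks an interrupt_pending flag after each instruction; the two opcodes, the memory array, and the ISR are all invented for illustration.

```c
#include <stdio.h>
#include <stdbool.h>

/* Hypothetical toy machine: two "instructions" and a pending-interrupt flag. */
enum { OP_HALT = 0, OP_PRINT = 1 };

static int  memory[] = { OP_PRINT, OP_PRINT, OP_HALT };   /* the "program"        */
static bool interrupt_pending = false;                    /* set by a "device"    */

static void interrupt_service_routine(void)
{
    printf("ISR: handling device request\n");
    interrupt_pending = false;            /* acknowledge the interrupt */
}

int main(void)
{
    int  pc = 0;                          /* program counter */
    bool running = true;

    while (running) {
        int instr = memory[pc++];         /* fetch stage   */
        switch (instr) {                  /* execute stage */
        case OP_PRINT: printf("executing instruction at %d\n", pc - 1); break;
        case OP_HALT:  running = false;  break;
        }

        if (pc == 2)                      /* pretend a device raised an IRQ here */
            interrupt_pending = true;

        /* interrupt stage: checked once per cycle, after execution */
        if (interrupt_pending) {
            /* a real CPU would push the PC and flags here before vectoring to the ISR */
            interrupt_service_routine();
        }
    }
    return 0;
}
```

The key point is the third stage: the interrupt is only recognized at an instruction boundary, which is why the CPU can always save a consistent state before running the ISR.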
How Do Interrupts Work?
◼ Event occurs
  ▪ A hardware device (keyboard, mouse, timer, etc.) or software (system call, error) signals the CPU with an interrupt.
◼ Interrupt signal
  ▪ The signal (interrupt request, or IRQ) tells the CPU that a device needs attention.
◼ CPU pauses the current task
  ▪ The CPU finishes the current instruction and saves its state (the process it was working on).
◼ Interrupt Service Routine (ISR)
  ▪ The CPU jumps to a special routine, the Interrupt Service Routine (ISR), to handle the interrupt.
  ▪ This routine is a small piece of code that performs the required task (e.g., reading a keypress or transferring data from a disk).
◼ CPU resumes the previous task
  ▪ After the ISR finishes, the CPU restores its previous state and continues the interrupted task.

Instruction Cycle With Interrupts
(figure: the fetch and execute stages are followed by an interrupt check; if an interrupt is pending, the processor runs the interrupt handler before the next fetch)

Memory Hierarchy
◼ Major constraints on memory:
  ▪ amount (capacity)
  ▪ speed
  ▪ expense
◼ Memory must be able to keep up with the processor
◼ The cost of memory must be reasonable in relationship to the other components

Memory Relationships
◼ Greater capacity = smaller cost per bit
◼ Greater capacity = slower access speed
◼ Faster access time = greater cost per bit

The Memory Hierarchy
◼ Going down the hierarchy:
  ▪ decreasing cost per bit
  ▪ increasing capacity
  ▪ increasing access time
  ▪ decreasing frequency of access to the memory by the processor

Cache Memory
◼ Small, high-speed memory
◼ Intermediate buffer between normal main memory and the CPU
◼ Stores frequently used data and instructions
◼ May be located on the CPU chip
◼ Small capacity and fast, in contrast to main memory, which has large capacity but is slow

Cache and Main Memory
(figure: the cache sits between the processor and main memory)

Cache/Main-Memory Structure
◼ The CPU requests the contents of a memory location
◼ The cache is checked for this data
  ▪ if present, the data is delivered from the cache (fast); this is known as a cache hit
  ▪ if not present, the required block is first read from main memory into the cache; this is known as a cache miss
◼ The data is then delivered from the cache to the CPU
◼ The cache includes tags to identify which block of main memory is in each cache slot

Cache Read Operation
(figure: flowchart of the cache read operation just described)
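As a concrete illustration of the hit/miss check and the role of tags, here is a minimal C sketch of a cache lookup. The 8-line size, 16-byte blocks, and field names are invented for the example, and it uses direct mapping (block number modulo the number of lines), which is only the simplest of the mapping functions discussed next.

```c
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical direct-mapped cache: 8 lines, 16-byte blocks (illustrative sizes). */
#define NUM_LINES  8
#define BLOCK_SIZE 16

struct cache_line {
    bool     valid;   /* does this slot hold a block at all?          */
    uint32_t tag;     /* identifies which main-memory block it holds  */
};

static struct cache_line cache[NUM_LINES];

/* Returns true on a cache hit, false on a miss (after filling the line). */
bool cache_access(uint32_t address)
{
    uint32_t block = address / BLOCK_SIZE;   /* which main-memory block            */
    uint32_t index = block % NUM_LINES;      /* mapping function: block mod lines   */
    uint32_t tag   = block / NUM_LINES;      /* distinguishes blocks sharing a slot */

    if (cache[index].valid && cache[index].tag == tag)
        return true;                          /* hit: serve the word from the cache */

    /* miss: fetch the whole block from main memory into this line, then serve it */
    cache[index].valid = true;
    cache[index].tag   = tag;
    return false;
}

int main(void)
{
    uint32_t addrs[] = { 0x0040, 0x0044, 0x0200, 0x0040 };
    for (int i = 0; i < 4; i++)
        printf("0x%04x -> %s\n", (unsigned)addrs[i],
               cache_access(addrs[i]) ? "hit" : "miss");
    return 0;
}
```

Running it prints miss, hit, miss, hit: the second address falls in the same block as the first, and the last access finds that block still resident in its cache line.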
Cache Design
◼ The main categories of cache design elements are: cache size, block size, number of cache levels, mapping function, replacement algorithm, and write policy.

Cache and Block Size
◼ Cache size: small caches have a significant impact on performance
◼ Block size: the unit of data exchanged between the cache and main memory

Mapping Function
◼ The cache is small compared to main memory, so there must be some mechanism for deciding which data goes into the cache.
◼ Mapping functions decide which main-memory block occupies which line of the cache, and so dictate the organization of the cache.
◼ A mapping function determines which cache location a block will occupy.
◼ Two constraints affect the design:
  ▪ when one block is read in, another may have to be replaced
  ▪ the more flexible the mapping function, the more complex is the circuitry required to search the cache

Replacement Algorithm
◼ When the address accessed by the CPU is not in the cache, an access has to be made to main memory.
◼ Along with the required word, the entire block is transferred into the cache.
◼ But if the cache is full, some existing cache line must be evicted to create space for the new entry.
◼ So, a replacement algorithm is needed. The four most common:
  1. Least Recently Used (LRU)
     ▪ replaces the candidate line that has been in the cache the longest with no reference to it
  2. First In First Out (FIFO)
     ▪ replaces the candidate line that has been in the cache the longest
  3. Least Frequently Used (LFU)
     ▪ replaces the candidate line that has had the fewest references
     ▪ a counter is associated with each line
  4. Random
     ▪ randomly chooses a line to be replaced from among the candidate lines

Write Policy
◼ A cache block must not be overwritten unless main memory is up to date.
◼ Two cases to consider when a block that is in the cache needs to be updated:
  1. Write through
     ▪ write the result to both main memory and the cache
  2. Write back
     ▪ write to the cache only, to minimize memory writes
     ▪ mark a flag (a dirty bit) in the cache to remember that the corresponding main-memory contents are obsolete and cannot be used as they stand
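The following C sketch (my own illustration, not the book's) combines LRU replacement with a write-back dirty flag in a tiny fully associative cache; the four-line size, the logical clock used to track recency, and all field names are assumptions made for the example.

```c
#include <stdio.h>
#include <stdbool.h>

#define NUM_LINES 4

struct line {
    bool valid;
    bool dirty;        /* write-back: block modified, main memory is stale  */
    int  block;        /* which main-memory block this line holds           */
    int  last_used;    /* time of the most recent reference (for LRU)       */
};

static struct line cache[NUM_LINES];
static int now = 0;    /* logical clock, incremented on every access */

void access_block(int block, bool is_write)
{
    now++;

    /* 1. Look for the block in the cache. */
    for (int i = 0; i < NUM_LINES; i++) {
        if (cache[i].valid && cache[i].block == block) {
            cache[i].last_used = now;
            if (is_write) cache[i].dirty = true;  /* write-back: defer the memory write */
            printf("block %d: hit\n", block);
            return;
        }
    }

    /* 2. Miss: pick a victim, preferring an empty line, else the LRU line. */
    int victim = 0;
    for (int i = 0; i < NUM_LINES; i++) {
        if (!cache[i].valid) { victim = i; break; }
        if (cache[i].last_used < cache[victim].last_used) victim = i;
    }

    /* 3. A dirty victim must go back to main memory before being overwritten. */
    if (cache[victim].valid && cache[victim].dirty)
        printf("block %d: write back to main memory\n", cache[victim].block);

    /* 4. Load the requested block into the freed line. */
    cache[victim] = (struct line){ .valid = true, .dirty = is_write,
                                   .block = block, .last_used = now };
    printf("block %d: miss, loaded\n", block);
}

int main(void)
{
    access_block(1, false);
    access_block(2, true);    /* modified in cache only                        */
    access_block(3, false);
    access_block(4, false);
    access_block(5, false);   /* cache full: evicts block 1, the LRU line      */
    access_block(2, false);   /* still resident, and still marked dirty        */
    return 0;
}
```

In this run block 1 is evicted when block 5 arrives; dirty block 2 stays in the cache and would only be written back to main memory when it is eventually chosen as a victim.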
Secondary Memory
◼ Also referred to as auxiliary memory
◼ External and nonvolatile
◼ Used to store program and data files

I/O Commands
◼ When the processor encounters an instruction relating to I/O, it executes that instruction by issuing a command to the appropriate I/O device.
◼ The CPU and I/O devices speak different "languages"; I/O commands act as a bridge that lets the CPU control, send data to, or receive data from devices like the keyboard, mouse, or printer.

Types of I/O Commands
1. Control command: tells the device what to do (e.g., "start printing", "initialize").
2. Status command: asks the device about its state (e.g., "Are you ready?", "Is there an error?").
3. Read command: the CPU gets data from the device (e.g., keyboard input).
4. Write command: the CPU sends data to the device (e.g., send a document to the printer).

I/O Techniques
◼ Several I/O techniques are used by the OS to manage data transfer between devices and memory.
◼ Three techniques are possible for I/O operations: programmed I/O, interrupt-driven I/O, and direct memory access (DMA). A short sketch contrasting them follows the DMA discussion below.

Programmed I/O
◼ In programmed I/O, the CPU is directly responsible for managing the data transfer between the I/O device (such as a disk or keyboard) and main memory.
◼ How it works:
  ▪ The CPU issues an I/O command to the device.
  ▪ The CPU then continuously polls (checks) the I/O device to see whether it is ready to send or receive data.
  ▪ Once the device is ready, the CPU manually transfers the data between the device and memory.
  ▪ The CPU is actively involved in every step of the I/O operation, and the device waits for the CPU to complete the transfer.
◼ Example: when a program reads from a keyboard, the CPU checks whether a key has been pressed; if so, the CPU fetches the key and stores it in memory.
◼ Drawback: since the CPU is fully involved, it spends a lot of time on the transfer that could be used for other tasks, making this technique inefficient.

Interrupt-Driven I/O
◼ In interrupt-driven I/O, instead of the CPU constantly polling the I/O device, the device signals the CPU when it is ready to transfer data.
◼ How it works:
  ▪ The CPU issues an I/O command to the device and goes on to do other useful work.
  ▪ The device sends an interrupt signal to the CPU when it is ready for the data transfer.
  ▪ The CPU pauses its current task and responds to the interrupt by executing the Interrupt Service Routine (ISR), which performs the data transfer.
  ▪ After completing the transfer, the CPU resumes its former task.
◼ Example: when you click a mouse, the mouse sends an interrupt to the CPU, notifying it that there is a click event. The CPU stops its current task, handles the mouse click, and then continues its work.
◼ Advantage: more efficient than programmed I/O, because less CPU time is spent checking devices; the CPU acts only when needed (when an interrupt occurs).
◼ Disadvantage: still requires the active intervention of the processor to transfer data between memory and an I/O module, and any data transfer must traverse a path through the processor.
  ▪ The transfer rate is limited by the speed with which the processor can test and service a device.
  ▪ The processor is tied up managing the I/O transfer: a number of instructions must be executed for each transfer.

Direct Memory Access (DMA)
◼ DMA allows I/O devices to transfer data directly to or from memory without involving the CPU in every data transfer operation.
◼ How it works:
  ▪ The device or CPU sends a request to the DMA controller to transfer data directly between the device and memory.
  ▪ The DMA controller manages the data transfer without needing the CPU to intervene, so the CPU remains free to handle other tasks.
  ▪ The entire block of data is transferred directly to or from memory without going through the processor.
  ▪ The processor is involved only at the beginning and end of the transfer.
  ▪ The processor executes more slowly during a transfer when it needs access to the bus.
◼ More efficient than interrupt-driven or programmed I/O.
◼ Example: when a network card receives data from the internet, it can use DMA to place that data directly into memory without burdening the CPU with each individual byte transfer.
◼ Advantages:
  ▪ Frees up CPU resources, allowing the CPU to perform other tasks while data is being transferred.
  ▪ Speeds up the transfer process, especially for large data blocks, because the DMA controller works in parallel with the CPU.
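To contrast the techniques, here is a minimal C sketch in which the device "registers" are ordinary variables and the DMA controller is just a function standing in for dedicated hardware; all names are invented. Interrupt-driven I/O is not shown separately: it would replace the polling loop with an ISR like the one sketched earlier, while still leaving the per-word copy to the CPU.

```c
#include <stdio.h>
#include <stdbool.h>
#include <string.h>

/* Simulated device registers (in real hardware these would be memory-mapped). */
static bool device_ready = true;        /* status register: ready for the next byte  */
static char device_data  = 'A';         /* data register: byte produced by the device */

/* Programmed I/O: the CPU busy-waits on the status register and copies
   every byte itself, doing no other work in the meantime. */
char programmed_io_read(void)
{
    while (!device_ready)               /* poll: "are you ready?" (status command) */
        ;                               /* CPU cycles are wasted here              */
    return device_data;                 /* read command: the CPU moves the data    */
}

/* DMA-style transfer: the CPU only programs the controller with a source,
   a destination, and a length, then goes on with other work. */
void dma_transfer(const char *src, char *dst, int len)
{
    memcpy(dst, src, len);              /* done by the DMA controller, not the CPU */
    /* on completion the controller raises one interrupt to notify the CPU */
}

int main(void)
{
    char c = programmed_io_read();
    printf("programmed I/O read: %c\n", c);

    char device_buffer[] = "hello";     /* data sitting in the device */
    char memory_buffer[6];
    dma_transfer(device_buffer, memory_buffer, 6);
    printf("DMA transferred: %s\n", memory_buffer);
    return 0;
}
```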
Evolution of the Microprocessor
◼ A microprocessor is a compact integrated circuit (IC) that contains the CPU (central processing unit) of a computer. It performs arithmetic, logic, control, and input/output (I/O) operations.

◼ 1st Generation (1971–1975): The Beginning
  ▪ Example: Intel 4004 (1971)
  ▪ Key features: 4-bit processor; clock speed ~740 kHz; 2,300 transistors; could process simple instructions
  ▪ Usage: calculators, basic embedded systems
  ▪ Milestone: first commercially available microprocessor

◼ 2nd Generation (1975–1978): 8-bit Revolution
  ▪ Examples: Intel 8080, Zilog Z80, Motorola 6800
  ▪ Key features: 8-bit data bus; clock speed 2–8 MHz; up to 6,000–8,500 transistors; more instructions and better performance
  ▪ Usage: early computers like the Altair 8800

◼ 3rd Generation (1978–1982): 16-bit Microprocessors
  ▪ Examples: Intel 8086, 8088, Motorola 68000
  ▪ Key features: 16-bit data bus (8086), 8-bit (8088); clock speeds up to 10 MHz; memory addressing up to 1 MB; more powerful instruction sets
  ▪ Usage: IBM PC (used the Intel 8088)

◼ 4th Generation (1982–1990): 32-bit Era Begins
  ▪ Examples: Intel 80286, 80386
  ▪ Key features: 32-bit internal architecture; clock speeds 12–33 MHz; introduced multitasking and protected mode; could run OSes such as early versions of Windows
  ▪ Usage: IBM PCs, early workstations

◼ 5th Generation (1990–2000): High Performance
  ▪ Examples: Intel Pentium, Pentium Pro, AMD K5
  ▪ Key features: superscalar architecture (multiple instructions per clock); 64-bit internal registers (in some cases); clock speeds up to 1 GHz; on-chip cache memory (L1, L2)
  ▪ Usage: PCs, servers, gaming systems

◼ 6th Generation (2000–2010): Multi-Core Processors
  ▪ Examples: Intel Core 2 Duo, AMD Athlon 64
  ▪ Key features: dual-core and multi-core processors; enhanced parallelism; clock speeds 1–3 GHz; introduction of 64-bit computing
  ▪ Usage: consumer and business desktops/laptops

◼ 7th Generation and Beyond (2010–Present): Smart, Efficient, AI-ready
  ▪ Examples: Intel Core i3/i5/i7/i9, Apple M1/M2 chips, AMD Ryzen
  ▪ Key features: multi-core and multi-threading (up to 16 cores and more); turbo boost and power-saving features; integrated graphics and AI accelerators; advanced manufacturing (7 nm, 5 nm processes)
  ▪ Usage: PCs, laptops, smartphones, data centers, IoT, AI, gaming

Early vs. Modern Microprocessors
◼ Data width: 4-bit or 8-bit (early) vs. 64-bit (modern)
◼ Clock speed: < 1 MHz (early) vs. > 5 GHz with turbo boost (modern)
◼ Cores: single-core (early) vs. multi-core, up to 64+ in servers (modern)
◼ Transistors: thousands (early) vs. billions (modern)
◼ Features: basic computing (early) vs. AI, graphics, virtualization, etc. (modern)

Summary
◼ Basic elements: processor, main memory, I/O modules, system bus
◼ GPUs, SIMD, DSPs, SoC
◼ Instruction execution: processor-memory, processor-I/O, data processing, control
◼ Interrupts
◼ Memory hierarchy
◼ Cache memory
◼ I/O techniques
◼ Evolution of the microprocessor