Computer System Organization Lecture

Summary

This document contains a recorded video lecture on computer system organization, discussing storage structures, I/O structures, and operating system concepts. It covers topics such as file systems, the memory hierarchy, direct memory access, and multi-processor systems. The lecture is intended as preparation for the midterm examinations.

Full Transcript

Hello everyone, as I have said, I am going to send you a recorded video lecture on our discussion of Computer System Organization and its different structures: the storage structure, the I/O structure, the direct memory access structure, and also the operating system structure. I hope that you will find time to watch this video because it will be part of your midterm examinations later on. By the way, guys, if you hear some unnecessary noises as I record this lecture, please bear with me. So let's start. When we say computer system organization in operating systems, this refers to the structure and components of a computer system that work together to execute tasks and manage resources efficiently. Computer system organization encompasses not only our hardware elements but also our software components. I will be directly discussing each slide for you. When we say storage structure, this defines how our data is managed, stored, and retrieved in our system. Each storage device uses a different method, for example magnetic or flash, to store this data. Of course, your choice of storage technology and architecture will significantly impact both system speed and the integrity of your data. Since we have a variety of data across the system, how do you think we are going to manage, access, modify, and organize this data? It is through what we call file systems. So what are these file systems, guys? A file system is a method or structure that our OS uses to organize, store, manage, and retrieve data on a storage device. The file system defines how data is stored, how we classify files in terms of their file types, how they are named, and how they can be accessed or modified.
So essentially, the file system acts as an interface between the user and the physical storage device, enabling easy manipulation of files. We have here some key functions of a file system. In terms of file organization, the file system organizes data into files and directories, and these directories serve as our folders. A file system allows users to organize files into directories or folders with a hierarchical structure. That is what I showed you earlier: if you open a document folder, for example, you can see the directory path at the top; inside the home directory you will find the user's folder, documents, and so on. Each file has a name, and each file also has attributes like size, permissions, and timestamps, along with the contents of the file itself. These files are stored in blocks or clusters, which are managed by the file system to optimize storage and retrieval. In terms of access control, the file system controls access to files based on permissions. These permissions define who can read, write, or execute a file. Different operating systems have different permission models. Each file also has metadata. What is this metadata? Metadata is data about data, or information about a certain file.
The metadata includes the file name, the size of the file, the permissions set on it, its creation date, its last modified date, and the physical location of the file on the storage device. The metadata is typically stored in a table known as the inode, or the file allocation table (FAT) in older file systems. The file system also decides how to allocate and manage space for files. Storage is divided into blocks or sectors, which are the smallest units of data storage. Files are stored in one or more blocks, and the file system keeps track of which blocks are in use and which are free. So those are some of the key functions of our file system. In terms of other file operations, the file system is also in charge of renaming files and moving them between directories; that is another key function of the file system. Now, we have different types of file systems. Remember that different operating systems use different file systems. Some of the most common include FAT32, which is usually used by older operating systems; NTFS, or the New Technology File System, which is used by the Windows OS and supports large files, file compression, encryption, and better access control; and exFAT, or the Extended File Allocation Table, a file system for flash drives and SD cards that is optimized for larger file sizes and volumes. Now, why do you think storage structure is important? Storage structures, including main memory, secondary storage, and magnetic disks, hold significant importance in our OS in terms of data management. They facilitate the allocation, retrieval, and organization of data in main memory and secondary storage to ensure efficient access by applications and users.
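To make the idea of metadata concrete, here is a small Python sketch that asks the file system for the metadata of a file using the standard library. The file name "example.txt" is just a placeholder created for the demonstration; the inode number is a POSIX concept and may be reported as 0 on some platforms.

```python
import os
import stat
import time

# Create a throwaway file so the example is self-contained.
path = "example.txt"
with open(path, "w") as f:
    f.write("hello file system")

# os.stat() returns the metadata the file system keeps for this file.
info = os.stat(path)
print("size (bytes):", info.st_size)                # file size
print("permissions:", stat.filemode(info.st_mode))  # e.g. -rw-r--r--
print("last modified:", time.ctime(info.st_mtime))  # timestamp
print("inode number:", info.st_ino)                 # index into the inode table (POSIX)
```

Notice that everything the lecture lists as metadata (size, permissions, timestamps, physical location via the inode) comes back from a single `os.stat` call, because the file system stores it all together in one table entry.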
In terms of memory management, they are in charge of allocating and deallocating memory space in main memory for the running processes. The storage structure is also important in terms of resource allocation, because it is the one in charge of allocating storage resources effectively to meet the demands of the running processes and applications. So that is the storage structure, the file system, and why it is important to the management of our whole computer system. Storage systems are often organized in a hierarchical manner based on factors such as speed, cost, volatility, and caching. When we say volatility, this is a crucial aspect of the storage hierarchy, with the higher levels being volatile, like our registers, our cache, and our main memory, and the lower levels being considered non-volatile. To bridge the speed gap between these storage tiers, caching is employed. Caching involves copying frequently accessed data from slower storage systems into faster, more accessible memory, for example from a hard drive to RAM, or from RAM to the CPU cache. You will see the hierarchy later on. By employing this caching method, the CPU reduces the time it takes to retrieve data that is repeatedly and frequently used. In this context, our main memory or RAM can be viewed as the last cache for secondary storage, where frequently accessed data is stored temporarily for faster access by the CPU. As we store frequently used data in high-speed cache, the system can maintain efficient operation despite the slower speeds of secondary or tertiary storage systems.
Thus, storage systems in a hierarchical structure are designed to balance speed, cost, and volatility, with faster and more expensive storage at the top and slower but more affordable storage at the bottom, all while utilizing caching mechanisms to enhance the performance of these different storage levels. Okay, so let's proceed with our next slide. Computer systems have a memory hierarchy that includes multiple levels of memory with varying access speeds and capacities. The hierarchy you see here represents a layered approach to data storage, with each level of the hierarchy offering varying speed, cost, capacity, and volatility. Volatility in the storage hierarchy is determined by the technology used at each level. The higher levels, like our main memory or RAM, the cache, and the registers, are volatile because they are designed for speed and quick access. The data stored in RAM is temporary and intended for the current, active processing tasks, so that memory is cleared when the power is turned off, which enables faster and more efficient computation by the CPU. On the other hand, the lower levels of secondary storage, such as HDDs or SSDs, are considered non-volatile. Why? Because these secondary storage devices are designed to store data persistently over the long term. These storage systems retain the data even when the power is lost, making them suitable for storing important files, applications, and operating systems that need to be preserved even when the computer is turned off for a long time. So the higher levels are volatile to prioritize speed and efficiency for active tasks, while the lower-level storage systems are non-volatile to ensure that data is safely stored for longer periods. Let's go through the hierarchy, starting with the registers. At the top of this hierarchy are the CPU registers, which play a vital role in enhancing our CPU's performance.
Registers are considered the fastest and smallest storage units because they are located within the CPU itself. These registers hold the data and instructions that are currently being processed by our CPU. Once again, registers are small, high-speed storage units located inside the computer's CPU. They are used as temporary storage for data that is actively being processed: the registers hold the operands for arithmetic, logic, and other operations so that the CPU can quickly access and manipulate them during computation. Since registers are directly integrated into our CPU, they provide extremely fast access times, which is crucial for efficient processing. These registers act as an intermediate stage between the CPU's internal operations and larger, slower memory systems like RAM. Because of their small size and high speed, registers are essential for optimizing the performance of our CPU. Let's proceed to the second storage level, which is the cache. Cache memory is a small but faster form of volatile memory located between the CPU and our main memory; it is the next level after the registers in our hierarchy. So what does this cache do? The cache serves as a buffer between the CPU and main memory, storing frequently accessed data and instructions to reduce access times. Cache memory is divided into multiple levels (level one, level two, and level three), with each level providing progressively larger capacity but slower access times compared to registers. The most important use of cache memory is to reduce the average time to access data stored in the main memory. Cache memory holds frequently requested data and instructions so that they are immediately available to the CPU when needed.
For example, when you visit a website for the first time, your CPU cache stores parts of the instructions needed to load and render that webpage (the instructions loaded from RAM are kept in the CPU cache), and the browser similarly caches the images, the CSS files, the JavaScript, and the fonts used on the website to speed up future visits. This process speeds up future visits by not downloading those images, CSS files, JavaScript, fonts, and other resources all over again. So when you visit that website once again, instead of downloading those resources, for example the logo, the background image, or the other styles, the browser loads them from the local cache, which means the page will load faster, especially if you have a slow internet connection. One good example of this is when you open Facebook or Instagram or any other application: images and profile pictures load instantly because they are cached from your last or previous visit. Without this caching, the application would download those resources all over again every time you open it. So how do we explain cache performance? When the processor needs to read or write a location in main memory, it first checks for a corresponding entry in the cache. If the processor finds that the memory location is in the cache, a cache hit occurs and the data that is needed is read from the cache. If the processor does not find the memory location in the cache, a cache miss occurs. On a cache miss, the cache allocates a new entry and copies in the data from the main memory; then the request is fulfilled from the contents of the cache. So that is cache performance. Now let's proceed to the main memory. Main memory is the fundamental storage unit in a computer system. It is a comparatively large and quick memory and holds programs and data during computer operation.
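The cache hit and cache miss flow just described can be sketched in a few lines of Python. This is only an illustration, not how hardware caches are actually built: a dictionary stands in for the cache, another dictionary stands in for the slower main memory, and the addresses and data values are made up.

```python
# `slow_memory` stands in for main memory; the dict `cache` is the cache.
slow_memory = {0x10: "instruction A", 0x20: "instruction B"}
cache = {}
hits, misses = 0, 0

def read(address):
    """Return the data at `address`, checking the cache first."""
    global hits, misses
    if address in cache:          # cache hit: served from the fast memory
        hits += 1
    else:                         # cache miss: allocate an entry and copy
        misses += 1               # the data in from main memory
        cache[address] = slow_memory[address]
    return cache[address]

read(0x10)   # first access: miss, block is loaded into the cache
read(0x10)   # repeated access: hit, no trip to main memory
read(0x20)   # different location: miss
print(hits, misses)   # prints: 1 2
```

Notice the pattern the lecture describes: the repeated access is the one that benefits, which is exactly why frequently requested data belongs in the cache.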
Our main memory, also known as RAM, is the primary form of volatile memory in our computer system. Main memory holds the data and instructions that the CPU needs to access quickly during program execution. But remember that above this main memory we still have the cache and the registers. All the program instructions needed to fulfill a user's request are stored in this RAM, which acts as a bridge providing faster access to those instructions than secondary storage. Main memory offers faster access times compared to secondary storage devices like SSDs, HDDs, and magnetic disks, but it is more expensive and has limited capacity. If you compare 8 GB RAM and 16 GB RAM, you will notice that the higher the RAM of your computer system, the faster it handles even your heaviest requests and operations. We have two types of RAM: DRAM and SRAM. DRAM is Dynamic Random Access Memory. DRAM is the most common type of RAM used in modern computers, and it is further divided into various generations such as DDR, DDR2, DDR3, DDR4, and DDR5, each offering improvements in speed, efficiency, and capacity compared to its predecessors. DRAM requires refreshing of its data every few milliseconds to maintain its contents. On the other hand, SRAM, or Static Random Access Memory, is faster and more expensive than DRAM. Why is that? Because unlike DRAM, SRAM doesn't require refreshing to maintain its data, making it faster but also more expensive and less dense. It is commonly used in cache memory, where speed is essential. RAM is volatile because it relies on electrical charges to store information.
When the power of our computer system is off, the charge dissipates, causing the data in RAM, our main memory, to be lost. Some examples of volatile memory usage in our computer applications: when you open a program on your computer, the program's code and data are loaded into RAM. While you are using the application, the data or contents you work with in that application or program, for example when you are typing a document or your browser has many tabs open, are stored in RAM for quick access. When you close that program or turn off the computer, the data in RAM is erased. Another example of volatile memory usage is, of course, multitasking. If you are multitasking on your computer system, you are switching between different programs and tasks, right? Volatile memory is used to store the data and state of each program. For example, if you have multiple browser tabs open along with a document editor while playing music or playing games, the data for each of those programs is stored in our RAM. So that's our main memory. Let's proceed to SSDs. SSDs are non-volatile storage devices that use flash memory to store data. They offer faster access times and higher reliability compared to traditional magnetic disks or HDDs. SSDs are commonly used as primary storage devices in modern computers; they are faster than magnetic disks but typically more expensive on a per-gigabyte basis. If you compare the price of an SSD to an HDD, the SSD is far more expensive because it uses flash memory to store data, which gives it faster access times and higher reliability compared to an HDD that stores data on rotating disks. So the SSD is a popular alternative to traditional HDDs.
As I have said, they offer several advantages, including faster access times, lower power consumption, and improved durability due to the absence of moving parts, unlike our HDDs, which have rotating disks for reading and writing data. There are several types of SSDs based on the type of flash memory they use and their form factor. One type of SSD is the SATA SSD. These SSDs use the Serial ATA (SATA) interface to connect to the computer. They are commonly found in laptops and desktop computers and offer significantly faster read and write speeds compared to HDDs. However, their performance is limited by the SATA interface bandwidth. The second type of SSD is the NVMe SSD. NVMe stands for Non-Volatile Memory Express. Not all SSDs use the NVMe protocol; it is optimized for flash memory to achieve even higher performance than SATA SSDs. NVMe SSDs connect directly to the computer's PCIe, or Peripheral Component Interconnect Express, bus, allowing for faster data transfer speeds. NVMe SSDs are commonly used in high-performance desktops, workstations, and servers. Another type of SSD is the M.2 SSD. M.2 is a form factor for SSDs that connect directly to the computer's motherboard. So if an NVMe drive connects to the PCIe bus of our computer, an M.2 drive is connected directly to our motherboard via the M.2 slot. M.2 SSDs can use either the SATA or NVMe interface, offering a compact and versatile storage solution for laptops, ultrabooks, and small form factor desktops. We also have PCIe SSDs. These SSDs connect to the computer's PCIe slots and use the NVMe protocol for high-speed data transfer. PCIe SSDs come in various form factors, including add-in cards and U.2 drives, and are commonly used in high-performance desktops, workstations, and servers. Another type of SSD is the enterprise SSD. These SSDs are designed for use in data centers and enterprise environments, where reliability, performance, and endurance are critical.
They offer advanced features such as power loss protection and end-to-end data protection. Another category of SSD is the consumer SSD. These are SSDs designed for use in consumer electronics devices such as laptops, desktops, and gaming consoles. They offer a balance of performance, reliability, and affordability for mainstream users. Next is the magnetic disk. Magnetic disks, commonly known as hard drives, use rotating magnetic platters to store data. They offer relatively slower access times compared to SSDs but provide higher storage capacity at a lower cost: the storage capacity of an HDD is larger compared to an SSD and, at the same time, cheaper. Still, these HDDs are widely used for secondary storage in computers if you have large or enormous amounts of data to store for longer periods of time. Next is the optical disk. Optical disks, such as CDs, DVDs, and Blu-ray discs, use laser technology to read and write data stored on the disk surface. They store data in the form of pits and lands on that surface. If you look at the texture of a CD, you will see lines or circles, and in the middle there is something like a gap, right? Those are what we call the pits and lands on the surface of our disk. These pits and lands on an optical disk represent binary information, zeros and ones, and a laser beam is used to read and write data on the disk by detecting them. Compared to the previous storage devices we have discussed, the optical disk offers slower access times and lower storage capacity. Still, optical disks can offer fairly large storage capacities: CDs typically store around 700 MB, DVDs can store up to 4.7 GB in a single layer or 8.5 GB in a dual layer, and Blu-ray discs can store up to 25 GB in a single layer or 50 GB in a dual layer.
These are commonly used for distributing software, music, movies, and other types of multimedia content. Remember, back in the day, on one CD you could watch movies, or you could have a playlist of different songs from different artists. You could also distribute software through CDs; for example, if you wanted to upgrade from Windows 7 to Windows 10, it came on a disc. That is what optical disks are used for. Usually, these optical disks are also used for backup and archival purposes. Next, and last on our storage hierarchy, are the magnetic tapes. Magnetic tape is another form of secondary memory that is used for long-term data storage. Magnetic tape consists of a thin strip of plastic coated with a magnetic material, typically iron oxide. If you remember the reel that spins around inside a cassette tape, that is an example of our magnetic tapes. One characteristic of tape is that it is sequential-access in nature. Unlike our random access memory or our SSDs, which allow direct access to any location in memory, our magnetic tape must be read or written sequentially from one end to the other. Magnetic tape is best suited for applications where the data is written once and read infrequently, such as your data backups and long-term archival storage. So what do we mean when we say sequential in nature? Unlike our RAM or our SSDs, once again, which allow us direct access to the data stored in them at any location, with magnetic tape you need to read through the data from start to end: you need to go through all the data stored on the tape before you can retrieve the data that you want. What if the data you want is at the very end of the tape?
You have to go through, one by one, the data stored on the magnetic tape before reaching it. That is the last storage level in our hierarchy. Now let's proceed to our I/O structure. The I/O structure is the way data is moved between the computer's CPU and external devices, sending and receiving data to and from the various devices. The I/O structure is the way for the system to interact with external devices connected to the computer system, or simply to interact with users. In our I/O structure, we have the input devices, the output devices, and the storage devices. Our I/O structure also includes hardware components such as ports, controllers, and buses, as well as software components such as device drivers and operating system services that manage and coordinate the flow of data between the CPU and external devices. Where do we connect our external devices? To ports. On our laptops we have the USB port, the HDMI port, the charger port. In terms of software components, there are the device drivers: for example, if you install a printer on your computer, you will need a driver, and that driver is part of our I/O structure. Now, the input/output structure, once again, describes how the computer system handles the transfer of data between these external devices and the system's memory or CPU. When an I/O operation is initiated, control may return to the user's program only after the I/O operation is complete, or control can be given back to the user's program without waiting for the completion of the I/O operation. So what happens in each scenario? Let's dissect the first one. In synchronous I/O, or "wait for I/O completion", control returns to the user program only upon I/O completion, meaning that when an I/O operation is initiated, like reading data from a disk or receiving data over the network, our CPU will wait for the I/O operation to finish before continuing to execute the user's program.
What happens is a wait instruction: the CPU is idled and cannot execute any other processes until the I/O operation is completed. The CPU may also sit in what is called a wait loop, where it stays in a loop continually checking whether the I/O operation has completed or not. During this process, there may be contention for memory access, meaning the CPU can't use the memory for other tasks while waiting. Only one I/O request is processed at a time, so there is no simultaneous I/O operation happening here. The system is inefficient in this approach because it wastes CPU resources while waiting for external operations to complete. Imagine an I/O operation initiated by the user: the CPU must wait for that operation before it can continue executing the user's tasks. On the other hand, in asynchronous I/O, control returns to the user program without waiting for the I/O completion, so the program can continue its execution while the I/O operation is being processed in the background. How does that happen? How can control return to the user program while the I/O runs in the background? We have a system call. What is this system call? It is a request to our operating system to handle the I/O. Since we have a system call, the task of handling the I/O operation is delegated to the OS. As a result, the user program can request to be notified once the I/O is completed, often through an interrupt: the OS tells the user program that its I/O is done through an interrupt, if you still remember the concept of interrupts. This prevents the CPU from being blocked and allows it to perform other tasks while waiting for the I/O to finish. The device status table keeps track of the current state of all I/O devices.
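The two I/O models above can be sketched in Python. This is only an analogy, not a real OS implementation: a sleeping function plays the role of a slow device, a background thread plays the role of the OS handling the request, and a `threading.Event` stands in for the completion interrupt.

```python
import threading
import time

def slow_device_read():
    time.sleep(0.1)                # simulate a slow I/O operation
    return "data from device"

# Synchronous I/O: the program blocks here until the read completes.
result_sync = slow_device_read()

# Asynchronous I/O: the read runs in the background and signals
# completion; the Event plays the role of the interrupt.
done = threading.Event()
result_async = []

def io_worker():
    result_async.append(slow_device_read())
    done.set()                     # "interrupt": the I/O has finished

threading.Thread(target=io_worker).start()

other_work = 0
while not done.is_set():           # the CPU keeps doing useful work meanwhile
    other_work += 1

print(result_sync)
print(result_async[0], "| overlapped iterations:", other_work)
```

In the synchronous case nothing else runs during the 0.1-second wait; in the asynchronous case the loop counter shows how much other work fit into the same window.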
Every entry in the device status table contains information about one I/O device: the type of the device, its address, and its state, that is, whether the device is busy or idle. The OS uses the device status table to index and determine the status of each device. When an I/O operation completes, the operating system updates the table entry and triggers an interrupt, signaling the completion of the task. So the key difference between synchronous and asynchronous I/O: synchronous I/O, where control returns to the user program only upon I/O completion, blocks the CPU from executing other tasks and from using the main memory for other tasks, making it wait for I/O completion. Asynchronous I/O, where control returns to the user program without waiting for the I/O completion, allows the CPU to continue working and notifies it once the operation is complete through an interrupt. The system call mechanism and the device status table are used to manage and track the status of I/O operations in the asynchronous model. This improves the efficiency of our computer system because the CPU is not idle. Now let's proceed to our direct memory access structure. Okay, so let's first understand the basic concept of input and output in computing and then delve into the direct memory access, or DMA, structure. Once again, let's review the previous slide. Our I/O structure refers to the process of transferring data between the computer's CPU and external devices; these external devices are what we use to send or receive data to or from our computer. The I/O structure manages this data transfer, ensuring that the data moves between the CPU and external devices. That is our I/O structure.
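A toy version of the device status table can make the busy/idle bookkeeping concrete. The device IDs, types, and addresses below are invented for illustration; a real OS keeps this structure in kernel memory and updates it from interrupt handlers.

```python
# Each entry records the device type, its address, and whether it is
# busy or idle, as in the lecture's description.
device_status_table = {
    0: {"type": "disk",    "address": 0x1F0, "state": "idle"},
    1: {"type": "printer", "address": 0x378, "state": "idle"},
}

def start_io(device_id):
    # The OS marks the device busy when an operation is initiated.
    device_status_table[device_id]["state"] = "busy"

def io_interrupt(device_id):
    # On the completion interrupt, the OS updates the entry back to idle.
    device_status_table[device_id]["state"] = "idle"

start_io(0)
print(device_status_table[0]["state"])   # busy while the transfer runs
io_interrupt(0)
print(device_status_table[0]["state"])   # idle again once it completes
```

The OS indexes into this table exactly as the lecture says: initiate an operation, mark the entry busy, and flip it back to idle when the completion interrupt arrives.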
Now, when we say direct memory access structure, or DMA structure, it is a technique used to improve the efficiency of data transfer between external devices and main memory without involving the CPU. The traditional I/O operations are just what we discussed in the previous slide; those are the traditional methods, and then we have this DMA structure, which is a technique to improve data transfer between external devices and our main memory without involving the CPU. You will see in the diagram how it is represented. Once again, in traditional I/O operations, the CPU is responsible for managing the data transfer between the external devices and memory; it is the CPU that carries the data between the external device and our main memory. However, this process can be inefficient and time consuming, especially for large data transfers. That is why the DMA mechanism was developed. DMA allows external devices to transfer data directly to or from main memory without the CPU's intervention. The direct memory access structure has a DMA controller. This DMA controller is a specialized hardware component that is responsible for managing DMA transfers. The controller coordinates the data transfer between external devices and memory independently of the CPU. When an external device needs to transfer data to or from our main memory, it sends a request to the DMA controller. The DMA controller then accesses the system bus and initiates the data transfer directly to or from the memory, bypassing our CPU.
So by offloading the data transfer task to the DMA controller, our CPU is freed up to perform other processing tasks. This can significantly improve system performance and efficiency, especially for high-speed data transfers or multitasking scenarios. Once again, our I/O structure manages the data transfer between the CPU and external devices, while DMA is a technique used to enhance the efficiency of these data transfers by allowing the devices to access the memory directly without CPU intervention. It is a mechanism used to facilitate high-speed I/O operations between peripheral devices and the main memory. In a system utilizing this DMA mechanism, the device controller gains direct access to the system memory. It reads blocks of data from the device's buffer storage and writes them directly to the main memory without requiring the CPU to oversee each byte of the transfer. Only one interrupt is generated per block. That is our DMA structure. Let's look at the depiction. It shows a number of devices connected to our computer system, and it points to the instructions and data held in RAM. Through this DMA mechanism, the external devices have direct access to the main memory for the instructions of that operation, bypassing our CPU. Instead of the CPU being involved in every step of the data transfer between the device and the memory, the controller can now handle the transfer independently. This offloads the CPU from the burden of managing the data transfer, thereby increasing the overall system efficiency. So I hope that, even if it was a bit confusing, you understood how the DMA mechanism, like the caching mechanism earlier, works. Now let's proceed to our computer system architecture.
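The "one interrupt per block" point can be illustrated with a small Python sketch. This is a simulation, not real hardware: plain lists stand in for the device buffer and main memory, and a counter stands in for the interrupt line.

```python
# Simulated memory and device buffer; the data values are invented.
main_memory = [0] * 8
device_buffer = [11, 22, 33, 44, 55, 66, 77, 88]
interrupts = 0

def dma_transfer(src, dst, start, length):
    """DMA controller moves a whole block, then raises one interrupt."""
    global interrupts
    # The block is copied without the CPU handling each byte...
    dst[start:start + length] = src[:length]
    # ...and only one interrupt is generated for the whole block.
    interrupts += 1

dma_transfer(device_buffer, main_memory, 0, 8)
print(main_memory)   # [11, 22, 33, 44, 55, 66, 77, 88]
print(interrupts)    # 1
```

Contrast this with byte-at-a-time programmed I/O, where the CPU would be interrupted or polled for each of the eight values instead of once per block.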
Okay, so most systems use a single general-purpose processor, from PDAs through mainframes. In computer system architecture, a general-purpose processor refers to the CPU. Our CPU is designed to handle a wide range of computing tasks and instructions. So from handheld devices like PDAs, or Personal Digital Assistants, to large-scale mainframe computers, many systems rely on a single general-purpose processor as the primary computing unit. Consider a typical laptop or desktop computer. These systems typically utilize a single general-purpose processor, like an Intel Core or AMD Ryzen CPU. These processors serve as the central computing unit. They are the ones handling tasks like running our OS, executing applications, and managing memory, and they are tasked with coordinating the input and output operations. So whether you are browsing the web, using your editors, or playing video games, it is the general-purpose processor, or CPU, that handles those computing tasks in your computer system. Most systems have special-purpose processors as well. While a general-purpose processor forms the core of most computer systems, many systems also incorporate special-purpose processors for specific tasks or functions. Unlike the general-purpose processor, or our CPU, special-purpose processors are designed to excel at particular types of computations or operations. So in our modern smartphones, we often find a general-purpose processor combined with special-purpose processors that are tailored for specific tasks. One example is the inclusion of a GPU in the computer system. Smartphones use GPUs to handle tasks like rendering HD graphics for games, accelerating video playback, and enhancing user interface animations.
Our GPUs are optimized for the parallel processing tasks that are common in graphics rendering, offering better performance and energy efficiency compared to relying solely on the general-purpose processor, which is your CPU. So these special-purpose processors are intended for the tasks or operations where they excel, for example, the GPUs that we integrate into our computer systems when we want to render high-definition graphics. Okay, so multiprocessor systems are growing in use and importance. These multiprocessor systems are also known as parallel systems or tightly coupled systems. They refer to computer architectures that consist of multiple processors, also known as CPUs or cores. So if we look at the specs of a computer system, we see multi-core or octa-core; those are the ones we often look for when we buy our units. They work together to execute tasks simultaneously. These systems are often referred to as parallel systems because the processors are closely interconnected and collaborate on processing tasks. One example of a multiprocessor system is a modern server used in data centers. These servers often contain multiple CPUs or CPU cores working together to handle numerous simultaneous requests from clients or users. The CPUs are interconnected through high-speed buses or interconnects to facilitate efficient communication and coordination. So usually, multiple CPUs or multi-core processors are used in modern servers in the data centers; these are the ones handling the numerous simultaneous requests from clients or users. The advantages of using multiprocessor systems include increased throughput. By distributing tasks among multiple processors, multiprocessor systems can handle a higher volume of computational workloads simultaneously. This results in increased overall system throughput, or the number of tasks completed per unit of time.
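As a rough illustration of the throughput idea, here is a minimal Python sketch. The request handler is a made-up stand-in for one CPU-bound client request, and the pool of four processes stands in for four cores sharing the workload:

```python
# Minimal sketch of distributing independent requests across CPU cores.
# handle_request is a hypothetical stand-in for one client request.
from multiprocessing import Pool

def handle_request(n):
    """Pretend request handler: some CPU-bound computation."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    requests = [10_000] * 8            # eight simultaneous "client requests"
    with Pool(processes=4) as pool:    # four cores share the workload
        results = pool.map(handle_request, requests)
    print(len(results))                # all eight requests served
```

With four workers, pairs of requests run in parallel rather than queuing behind a single CPU, which is exactly the throughput gain described above.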
One good example is when many of us are accessing our portals at the same time as other clients or users. Another example is when a web server handles multiple requests from users concurrently: the multiprocessor system can distribute these requests among its CPUs or cores, allowing them to be processed simultaneously. This results in higher throughput, meaning the server can serve more users or process more requests in a given amount of time. The second advantage is the economy of scale. Multiprocessor systems offer economies of scale in terms of both hardware and software. Instead of investing in multiple individual systems, organizations or institutions can consolidate their computing resources into a single multiprocessor system, reducing the overall hardware cost and simplifying system management. Our data centers often deploy multiprocessor servers to consolidate computing resources. By utilizing multiprocessor systems, the data centers can reduce the number of physical servers required to handle the workload. As a result, there are cost savings in terms of hardware acquisition, power consumption, and maintenance. And lastly, we have increased reliability, also called graceful degradation or fault tolerance. Multiprocessor systems can provide increased reliability compared to single-processor systems. In the event of a hardware failure or fault, the system can often continue operating with reduced performance rather than experiencing a complete shutdown. This can be achieved through techniques such as graceful degradation, where the system adjusts its performance in response to failures, or fault tolerance mechanisms, which allow the system to continue functioning despite hardware failures by employing redundancy or error detection and correction techniques. In a fault-tolerant multiprocessor system, if one CPU or core fails, the remaining processors can continue to operate.
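Here is a small toy sketch of graceful degradation. The "processors" are just made-up Python functions, not real hardware: when one faults, it is taken offline and its tasks run on the survivors, so the system keeps working at reduced capacity instead of shutting down:

```python
# Toy model of graceful degradation (all names hypothetical): a failed
# "processor" is taken offline and its tasks run on the survivors.

def run_with_failover(tasks, processors):
    """Run each task on the first working processor; drop ones that fail."""
    alive = list(processors)
    results = []
    for task in tasks:
        while alive:
            try:
                results.append(alive[0](task))
                break
            except RuntimeError:
                alive.pop(0)          # mark the faulty processor offline
        else:
            raise SystemError("all processors failed")
    return results, len(alive)

def good_cpu(task):
    return task * 2                   # a healthy core doing real work

def faulty_cpu(task):
    raise RuntimeError("hardware fault")

results, remaining = run_with_failover([1, 2, 3], [faulty_cpu, good_cpu])
print(results, remaining)  # all tasks complete on the surviving core
```

The faulty core is dropped on its first failure, and every task still completes on the remaining core, which mirrors the reduced-performance-but-still-running behavior described above.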
This ensures that critical services remain available even in the presence of hardware failures. So that is our increased reliability. Here in our multiprocessor systems, we have two types: asymmetric multiprocessing and symmetric multiprocessing. Okay, so in an asymmetric multiprocessing system, each processor is assigned specific tasks or functions. One processor, known as the master processor, typically handles system-level tasks such as scheduling and managing I/O operations, while the other processors, known as slave processors, are dedicated to executing application-specific tasks. This asymmetric division of labor can be advantageous for workloads that require specialized processing capabilities. One example of asymmetric multiprocessing is the smartphone example we used earlier. In these devices, the primary processor handles general-purpose computing tasks such as running the OS and applications, while specialized co-processors or accelerators, such as our GPUs, handle specific tasks like graphics rendering, audio processing, or sensor data processing. The other type of multiprocessing is symmetric multiprocessing. In symmetric multiprocessing, all processors have equal access to the system's resources and can execute any task or function. These systems distribute tasks dynamically among the available processors; the aim is to achieve load balancing and maximize overall system performance. A symmetric multiprocessor is characterized by its symmetric architecture, where each processor has equal access to memory and peripheral devices. This type of multiprocessor is well suited for general-purpose computing tasks and is commonly used in servers, high-performance computing clusters, and modern desktops. Okay, so those are the two types of multiprocessor systems. These multiprocessor systems offer several advantages, including increased throughput, economies of scale, and improved reliability, and they come in two main types.
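The symmetric model's dynamic load balancing can be sketched like this. It is a toy Python example with hypothetical names: four equal worker threads stand in for CPUs, all pulling from one shared queue, so any "processor" can run any task:

```python
# Sketch of symmetric scheduling (hypothetical setup): four equal worker
# threads stand in for CPUs, all pulling tasks from one shared queue, so
# the load balances dynamically and any "processor" can run any task.
import queue
import threading

task_queue = queue.Queue()
results = []
results_lock = threading.Lock()

def worker():
    while True:
        try:
            n = task_queue.get_nowait()
        except queue.Empty:
            return                      # queue drained, worker exits
        with results_lock:
            results.append(n * n)       # any worker can handle any task

for n in range(10):
    task_queue.put(n)

workers = [threading.Thread(target=worker) for _ in range(4)]
for t in workers:
    t.start()
for t in workers:
    t.join()

print(sorted(results))  # all ten tasks done; order depends on scheduling
```

In an asymmetric version, by contrast, one master thread would hand each task to a designated worker instead of letting every worker pull from a shared queue.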
We have the asymmetric, where a master processor assigns specific tasks to the other processors, and the symmetric, where all processors have equal access to the system resources. So that is it, guys, for our lecture on computer system organization, most especially the storage, I/O, and DMA structures. I hope that you find time to watch this video lecture because these topics will be covered in your midterm examination. So happy holidays, and have a nice day everyone. Goodbye!