Computer Evolution Module 1 (Part 2) PDF

Summary

This document provides an introduction to computer architecture, focusing on the hierarchical structure and functions of computer components. It explores the fundamental concepts of data processing, storage, movement, and control. The document also discusses multicore computer structures and the role of principal components like the CPU, memory, and I/O in a typical computer system.

Full Transcript


A computer is a complex system; contemporary computers contain millions of elementary electronic components. How, then, can one clearly describe them? The key is to recognize the hierarchical nature of most complex systems, including the computer [SIMO96]. A hierarchical system is a set of interrelated subsystems; each subsystem may, in turn, contain lower-level subsystems, until we reach some lowest level of elementary subsystem.

Structure and Function: The Difference

The hierarchical nature of complex systems is essential to both their design and their description. The designer need only deal with a particular level of the system at a time. At each level, the system consists of a set of components and their interrelationships. The behavior at each level depends only on a simplified, abstracted characterization of the system at the next lower level. At each level, the designer is concerned with structure and function:

Structure: The way in which the components are interrelated.
Function: The operation of each individual component as part of the structure.

Functions

1. Data processing: Data may take a wide variety of forms, and the range of processing requirements is broad. However, we shall see that there are only a few fundamental methods or types of data processing.
2. Data storage: Even if the computer is processing data on the fly (i.e., data come in and get processed, and the results go out immediately), the computer must temporarily store at least those pieces of data that are being worked on at any given moment. Thus, there is at least a short-term data storage function. Equally important, the computer performs a long-term data storage function: files of data are stored on the computer for subsequent retrieval and update.
3. Data movement: The computer's operating environment consists of devices that serve as either sources or destinations of data. 
When data are received from or delivered to a device that is directly connected to the computer, the process is known as input–output (I/O), and the device is referred to as a peripheral. When data are moved over longer distances, to or from a remote device, the process is known as data communications.
4. Control: Within the computer, a control unit manages the computer's resources and orchestrates the performance of its functional parts in response to instructions.

Structure of a Computer

Hierarchical View of the Internal Structure of a Traditional Single-Processor Computer

Central processing unit (CPU): Controls the operation of the computer and performs its data processing functions; often simply referred to as the processor.
Main memory: Stores data.
I/O: Moves data between the computer and its external environment.
System interconnection: Some mechanism that provides for communication among CPU, main memory, and I/O. A common example of system interconnection is a system bus, consisting of a number of conducting wires to which all the other components attach.

Multicore Computer Structure

Contemporary computers generally have multiple processors. When these processors all reside on a single chip, the term multicore computer is used, and each processing unit (consisting of a control unit, ALU, registers, and perhaps cache) is called a core.

What Are a CPU, a Core, and a Processor?

Central processing unit (CPU): That portion of a computer that fetches and executes instructions. It consists of an ALU, a control unit, and registers. In a system with a single processing unit, it is often simply referred to as a processor.
Core: An individual processing unit on a processor chip. A core may be equivalent in functionality to a CPU on a single-CPU system. Other specialized processing units, such as one optimized for vector and matrix operations, are also referred to as cores.
Processor: A physical piece of silicon containing one or more cores. 
The processor is the computer component that interprets and executes instructions. If a processor contains multiple cores, it is referred to as a multicore processor.

Principal Components of a Typical Multicore Computer

Another prominent feature of contemporary computers is the use of multiple layers of memory, called cache memory, between the processor and main memory.
❑ Cache memory is smaller and faster than main memory and is used to speed up memory access by placing in the cache data from main memory.
❑ A greater performance improvement may be obtained by using multiple levels of cache, with level 1 (L1) closest to the core and additional levels (L2, L3, and so on) progressively farther from the core. In this scheme, level n is smaller and faster than level n + 1.
❑ A printed circuit board (PCB) is a rigid, flat board that holds and interconnects chips and other electronic components.
❑ The board is made of layers, typically two to ten, that interconnect components via copper pathways etched into the board.
❑ The main printed circuit board in a computer is called a system board or motherboard, while smaller ones that plug into slots in the main board are called expansion boards.
❑ Most computers, including embedded computers in smartphones and tablets as well as personal computers, laptops, and workstations, are built around a motherboard.
❑ The most prominent elements on the motherboard are the chips.
❑ A chip is a single piece of semiconducting material, typically silicon, upon which electronic circuits and logic gates are fabricated. The resulting product is referred to as an integrated circuit.

Brief History of Computers

1. The First Generation: Vacuum Tubes

The first generation of computers used vacuum tubes for digital logic elements and memory. A number of research and then commercial computers were built using vacuum tubes.
❑ The most famous first-generation computer is the IAS computer. 
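The multilevel cache scheme described above is often summarized by the average memory access time (AMAT) formula, AMAT = hit time + miss rate × miss penalty, applied level by level. A minimal sketch, where all latencies (in cycles) and miss rates are illustrative assumptions, not figures from the text:

```python
# Illustrative two-level cache model: AMAT = hit_time + miss_rate * miss_penalty.
# All latencies (in cycles) and miss rates below are assumed example values.

def amat(hit_time, miss_rate, miss_penalty):
    """Average memory access time for one cache level."""
    return hit_time + miss_rate * miss_penalty

# Level n is smaller and faster than level n + 1, as described above:
# a miss in L1 pays the cost of going to L2, and a miss in L2 pays
# the cost of going to main memory.
L2_PENALTY = amat(hit_time=12, miss_rate=0.20, miss_penalty=100)   # L2 backed by main memory
L1_AMAT = amat(hit_time=1, miss_rate=0.05, miss_penalty=L2_PENALTY)  # L1 backed by L2

print(round(L1_AMAT, 2))  # -> 2.6 average cycles per access
```

With these assumed numbers, the two cache levels bring the average access cost from 100 cycles (main memory alone) down to a few cycles, which is the point of the hierarchy.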
❑ A fundamental design approach first implemented in the IAS computer is known as the stored-program concept.
❑ The first publication of the idea was in a 1945 proposal by the mathematician John von Neumann for a new computer, the EDVAC (Electronic Discrete Variable Automatic Computer).
❑ In 1946, von Neumann and his colleagues began the design of a new stored-program computer, referred to as the IAS computer, at the Institute for Advanced Study in Princeton. The IAS computer, although not completed until 1952, is the prototype of all subsequent general-purpose computers.
❑ It consists of a main memory, which stores both data and instructions; an arithmetic and logic unit (ALU) capable of operating on binary data; a control unit, which interprets the instructions in memory and causes them to be executed; and input–output (I/O) equipment operated by the control unit.

2. The Second Generation: Transistors

❑ The first major change in the electronic computer came with the replacement of the vacuum tube by the transistor.
❑ The transistor, which is smaller, cheaper, and generates less heat than a vacuum tube, can be used in the same way as a vacuum tube to construct computers.
❑ Unlike the vacuum tube, which requires wires, metal plates, a glass capsule, and a vacuum, the transistor is a solid-state device, made from silicon.
❑ The transistor was invented at Bell Labs in 1947 and by the 1950s had launched an electronic revolution.
❑ In the late 1950s, fully transistorized computers became commercially available.
❑ Each new generation is characterized by greater processing performance, larger memory capacity, and smaller size than the previous one.
❑ The second generation saw the introduction of more complex arithmetic and logic units and control units, the use of high-level programming languages, and the provision of system software with the computer.

3. The Third Generation: Integrated Circuits

❑ A single, self-contained transistor is called a discrete component. 
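The stored-program concept described above, with one memory holding both instructions and data and a control unit fetching and executing them in sequence, can be sketched as a toy interpreter. The opcodes and sample program below are invented for illustration and do not correspond to the actual IAS instruction set:

```python
# Toy stored-program machine: instructions and data share one memory,
# and a fetch-decode-execute loop interprets them.
memory = [
    ("LOAD", 6),   # acc <- memory[6]
    ("ADD", 7),    # acc <- acc + memory[7]
    ("STORE", 8),  # memory[8] <- acc
    ("HALT", 0),
    0, 0,          # unused padding
    40, 2,         # data at addresses 6 and 7
    0,             # result will be stored at address 8
]

pc, acc = 0, 0             # program counter and accumulator
while True:
    op, addr = memory[pc]  # fetch and decode the next instruction
    pc += 1
    if op == "LOAD":
        acc = memory[addr]
    elif op == "ADD":
        acc += memory[addr]
    elif op == "STORE":
        memory[addr] = acc
    elif op == "HALT":
        break

print(memory[8])  # -> 42
```

Because the program itself lives in memory, it can be loaded, replaced, or even modified like any other data, which is exactly what distinguished stored-program machines from earlier hard-wired designs.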
❑ Throughout the 1950s and early 1960s, electronic equipment was composed largely of discrete components: transistors, resistors, capacitors, and so on.
❑ Discrete components were manufactured separately, packaged in their own containers, and soldered or wired together onto Masonite-like circuit boards, which were then installed in computers, oscilloscopes, and other electronic equipment.
❑ Whenever an electronic device called for a transistor, a little tube of metal containing a pinhead-sized piece of silicon had to be soldered to a circuit board.
❑ The entire manufacturing process, from transistor to circuit board, was expensive.
❑ In 1958 came the achievement that revolutionized electronics and started the era of microelectronics: the invention of the integrated circuit.
❑ Examples: IBM System/360, DEC PDP-8

4. Later Generations

❑ Beyond the third generation there is less general agreement on defining generations of computers.
❑ There have been a number of later generations, based on advances in integrated circuit technology.
❑ With the introduction of large-scale integration (LSI), more than 1,000 components could be placed on a single integrated circuit chip.
❑ Very-large-scale integration (VLSI) achieved more than 10,000 components per chip, while current ultra-large-scale integration (ULSI) chips can contain more than one billion components.
❑ The first application of integrated circuit technology to computers was the construction of the processor (the control unit and the arithmetic and logic unit) out of integrated circuit chips.
❑ Microprocessors: Just as the density of elements on memory chips has continued to rise, so has the density of elements on processor chips. As time went on, more and more elements were placed on each chip, so that fewer and fewer chips were needed to construct a single computer processor.
❑ A breakthrough was achieved in 1971, when Intel developed its 4004, a 4-bit microprocessor. 
The 4004 was the first chip to contain all of the components of a CPU on a single chip: the microprocessor was born.
❑ The next major step in the evolution of the microprocessor was the introduction in 1972 of the Intel 8008. This was the first 8-bit microprocessor and was almost twice as complex as the 4004.
❑ Next came the introduction in 1974 of the Intel 8080, the first general-purpose microprocessor. Whereas the 4004 and the 8008 had been designed for specific applications, the 8080 was designed to be the CPU of a general-purpose microcomputer.
❑ The 8080 is also an 8-bit microprocessor, but it is faster, has a richer instruction set, and has a larger addressing capability.
❑ About the same time, 16-bit microprocessors began to be developed. However, it was not until the end of the 1970s that powerful, general-purpose 16-bit microprocessors appeared. One of these was the 8086.
❑ The next step in this trend occurred in 1981, when both Bell Labs and Hewlett-Packard developed 32-bit, single-chip microprocessors. Intel introduced its own 32-bit microprocessor, the 80386, in 1985.

The Evolution of the Intel x86 Architecture

Consider two processor families: the Intel x86 and the ARM architectures.
1. The current x86 offerings represent the results of decades of design effort on complex instruction set computers (CISCs).
2. The x86 incorporates the sophisticated design principles once found only on mainframes and supercomputers and serves as an excellent example of CISC design.
3. An alternative approach to processor design is the reduced instruction set computer (RISC). The ARM architecture is used in a wide variety of embedded systems and is one of the most powerful and best-designed RISC-based systems on the market.
4. In terms of market share, Intel has ranked as the number one maker of microprocessors for non-embedded systems for decades.
5. The evolution of its flagship microprocessor product serves as a good indicator of the evolution of computer technology in general.

Highlights of the Evolution of the Intel Product Line

8080: The world's first general-purpose microprocessor. This was an 8-bit machine, with an 8-bit data path to memory. The 8080 was used in the first personal computer, the Altair.
8086: A far more powerful, 16-bit machine. In addition to a wider data path and larger registers, the 8086 sported an instruction cache, or queue, that prefetches a few instructions before they are executed. A variant of this processor, the 8088, was used in IBM's first personal computer, securing the success of Intel. The 8086 was the first appearance of the x86 architecture.
80286: This extension of the 8086 enabled addressing a 16-MB memory instead of just 1 MB.
80386: Intel's first 32-bit machine, and a major overhaul of the product line. With a 32-bit architecture, the 80386 rivaled the complexity and power of minicomputers and mainframes introduced just a few years earlier. This was the first Intel processor to support multitasking, meaning it could run multiple programs at the same time.
80486: The 80486 introduced much more sophisticated and powerful cache technology and sophisticated instruction pipelining. The 80486 also offered a built-in math coprocessor, offloading complex math operations from the main CPU.
Pentium: With the Pentium, Intel introduced the use of superscalar techniques, which allow multiple instructions to execute in parallel.
Pentium Pro: The Pentium Pro continued the move into superscalar organization begun with the Pentium, with aggressive use of register renaming, branch prediction, data flow analysis, and speculative execution.
Pentium II: The Pentium II incorporated Intel MMX technology, which is designed specifically to process video, audio, and graphics data efficiently. 
Pentium III: The Pentium III incorporated additional floating-point instructions: the Streaming SIMD Extensions (SSE) instruction set extension added 70 new instructions designed to increase performance when exactly the same operation is to be performed on multiple data objects. Typical applications are digital signal processing and graphics processing.
Pentium 4: The Pentium 4 included additional floating-point and other enhancements for multimedia.
Core: This was the first Intel x86 microprocessor with a dual core, referring to the implementation of two cores on a single chip.
Core 2: The Core 2 extends the Core architecture to 64 bits. The Core 2 Quad provides four cores on a single chip. More recent Core offerings have up to 10 cores per chip. An important addition to the architecture was the Advanced Vector Extensions (AVX) instruction set, which provided a set of 256-bit, and then 512-bit, instructions for efficient processing of vector data.

Embedded Systems

An embedded system is a microprocessor-based computer hardware system with software that is designed to perform a dedicated function, either as an independent system or as part of a larger system. Embedded systems are tightly coupled to their environment. This can give rise to real-time constraints imposed by the need to interact with the environment.

Types of devices with embedded systems are almost too numerous to list. Examples include cell phones, digital cameras, video cameras, calculators, microwave ovens, home security systems, washing machines, lighting systems, thermostats, printers, various automotive systems (e.g., transmission control, cruise control, fuel injection, anti-lock brakes, and suspension systems), tennis rackets, toothbrushes, and numerous types of sensors and actuators in automated systems.

When considering the general architecture of an embedded system, in addition to the processor and memory, there are a number of elements that differ from the typical desktop or laptop computer. 
There may be a variety of interfaces that enable the system to measure, manipulate, and otherwise interact with the external environment. Embedded systems often interact (sense, manipulate, and communicate) with the external world through sensors and actuators, and hence are typically reactive systems; a reactive system is in continual interaction with the environment and executes at a pace determined by that environment.

The human interface may be as simple as a flashing light or as complicated as real-time robotic vision. In many cases, there is no human interface.

The diagnostic port may be used for diagnosing the system that is being controlled, not just for diagnosing the computer.

A special-purpose field-programmable gate array (FPGA), an application-specific integrated circuit (ASIC), or even nondigital hardware may be used to increase performance or reliability.

Software often has a fixed function and is specific to the application. Efficiency is of paramount importance for embedded systems: they are optimized for energy, code size, execution time, weight and dimensions, and cost.

ARM Architecture

The Advanced RISC (reduced instruction set computer) Machine (ARM) processor is a family of central processing units used in music players, smartphones, wearables, tablets, and other consumer electronic devices. The architecture was created by the company Advanced RISC Machines, hence the name ARM. It needs relatively few instructions and transistors and is very small in size, which makes it a perfect fit for small devices. It has low power consumption along with reduced complexity in its circuits. ARM designs can be applied to a variety of targets, such as 32-bit devices and embedded systems, and they can be adapted according to user needs.

ARM Started With Microcomputing

Understanding the applications of ARM processors starts with a little history. Before ARM, x86 processors, first launched in 1978, were the norm. 
When we remove complex, hard-to-implement instructions from an instruction set, the remaining instructions take less power and space and run faster; this approach is called the reduced instruction set computer (RISC) architecture. x86, by contrast, is a complex instruction set computer (CISC) architecture.

What Makes the ARM Architecture Valuable?

The Advanced RISC Machine architecture is one of the most common architectural designs on the market, in many segments even more widespread than x86, which dominates the server market. The ARM architecture is widely used in smartphones, feature phones, and also in laptops. Though x86 processors are optimized for raw performance, ARM delivers cost-effective processors that are small, draw less power, and give better battery life. ARM processors are not limited to mobile phones: ARM is also used in Fugaku, one of the world's fastest supercomputers. ARM also gives hardware designers more design flexibility and gives them control over their supply chains.

Features of the ARM Architecture:

1. Multiprocessing Systems: ARM processors are designed for multiprocessing systems, where more than one processor is used to process information. The first multiprocessing-capable ARM design, the ARMv6K, could support 4 CPUs in hardware.
2. Tightly Coupled Memory: ARM processors support tightly coupled memory, which has a very fast response time and low latency, and which can be used where cache behavior would be unpredictable.
3. Memory Management: ARM processors include a memory management section, comprising a memory management unit (MMU) and a memory protection unit (MPU). These systems are very important for managing memory efficiently.
4. Thumb-2 Technology: Thumb-2, introduced in 2003, provides a variable-length instruction set. It extends the 16-bit instructions of the initial Thumb technology with 32-bit instructions. 
It has better performance than the earlier Thumb technology.
5. One-Cycle Execution Time: ARM processors are optimized so that each instruction executes in effectively one cycle. Each instruction is of a fixed length, which allows future instructions to be fetched while the present instruction is being executed. ARM has a CPI (clocks per instruction) of one cycle.
6. Pipelining: Instructions are processed in parallel using pipelines: each instruction is broken down and decoded in a pipeline stage, and the pipeline advances one step at a time to increase throughput (the rate of processing).
7. A Large Number of Registers: ARM processors provide a large number of registers to reduce the amount of memory interaction. Registers hold data and addresses and act as a local store for all operations.

Cloud Computing

Cloud computing is the on-demand access of computing resources over the internet with pay-per-use pricing: physical or virtual servers, data storage, networking capabilities, application development tools, software, AI-powered analytic tools, and more. The cloud computing model offers customers greater flexibility and scalability compared to traditional on-premises infrastructure. A cloud services provider (CSP) manages cloud-based technology services hosted at a remote data center and typically makes these resources available for a pay-as-you-go or monthly subscription fee.

Benefits:

1. Unlimited scalability: Cloud computing provides elasticity and self-service provisioning, so instead of purchasing excess capacity that sits unused during slow periods, you can scale capacity up and down in response to spikes and dips in traffic. You can also use your cloud provider's global network to spread your applications closer to users worldwide.
2. Enhanced strategic value: Cloud computing enables organizations to use various technologies and the most up-to-date innovations to gain a competitive edge. 
For instance, in retail, banking, and other customer-facing industries, generative AI-powered virtual assistants deployed over the cloud can deliver better customer response times and free up teams to focus on higher-level work. In manufacturing, teams can collaborate and use cloud-based software to monitor real-time data across logistics and supply chain processes.
3. Cost-effectiveness: Cloud computing lets you offload some or all of the expense and effort of purchasing, installing, configuring, and managing mainframe computers and other on-premises infrastructure. You pay only for cloud-based infrastructure and other computing resources as you use them.
4. Increased speed and agility: With cloud computing, your organization can use enterprise applications in minutes instead of waiting weeks or months for IT to respond to a request, purchase and configure supporting hardware, and install software. This empowers users, particularly DevOps and other development teams, to leverage cloud-based software and supporting infrastructure.

Origins:

The origins of cloud computing technology go back to the early 1960s, when Dr. Joseph Carl Robnett Licklider, an American computer scientist and psychologist known as the "father of cloud computing", introduced the earliest ideas of global networking in a series of memos discussing an Intergalactic Computer Network. However, it wasn't until the early 2000s that modern cloud infrastructure for business emerged. In 2002, Amazon Web Services started cloud-based storage and computing services. In 2006, it introduced Elastic Compute Cloud (EC2), an offering that allowed users to rent virtual computers to run their applications. Google also introduced the Google Apps suite (now called Google Workspace), a collection of SaaS productivity applications. In 2009, Microsoft launched its first SaaS application, Microsoft Office 2011. 
Today, Gartner predicts that worldwide end-user spending on the public cloud will total USD 679 billion, and that it will eventually exceed USD 1 trillion.

Components

1. Data Centers: CSPs own and operate remote data centers that house physical or bare-metal servers, cloud storage systems, and other physical hardware that create the underlying infrastructure and provide the physical foundation for cloud computing.
2. Virtualization: Cloud computing relies heavily on the virtualization of IT infrastructure: servers, operating system software, networking, and other infrastructure abstracted using special software so that it can be pooled and divided irrespective of physical hardware boundaries. For example, a single hardware server can be divided into multiple virtual servers. Virtualization enables cloud providers to make maximum use of their data center resources.
3. Networking Capabilities: In cloud computing, high-speed networking connections are crucial. Typically, an internet connection known as a wide-area network (WAN) connects front-end users (for example, the client-side interface made visible through web-enabled devices) with back-end functions (for example, data centers and cloud-based applications and services). Other advanced cloud networking technologies, including load balancers, content delivery networks (CDNs), and software-defined networking (SDN), are also incorporated to ensure data flows quickly, easily, and securely between front-end users and back-end resources.

Types

1. Public Cloud: A public cloud is a type of cloud computing in which a cloud service provider makes computing resources available to users over the public internet. These resources include SaaS applications, individual virtual machines (VMs), bare-metal computing hardware, complete enterprise-grade infrastructures, and development platforms. They might be accessible for free or according to subscription-based or pay-per-usage pricing models.
2. Private Cloud: A private cloud is a cloud environment where all cloud infrastructure and computing resources are dedicated to one customer only. A private cloud combines many benefits of cloud computing, including elasticity, scalability, and ease of service delivery, with the access control, security, and resource customization of on-premises infrastructure.
3. Hybrid Cloud: A hybrid cloud is just what it sounds like: a combination of public cloud, private cloud, and on-premises environments. Specifically (and ideally), a hybrid cloud connects these three environments into a single, flexible infrastructure for running the organization's applications and workloads.
4. Multicloud: A multicloud uses two or more clouds from two or more different cloud providers. A multicloud environment can be as simple as email SaaS from one vendor and image-editing SaaS from another. But when enterprises talk about multicloud, they typically mean using multiple cloud services, including SaaS, PaaS, and IaaS, from two or more leading public cloud providers.

Services:

IaaS (Infrastructure-as-a-Service) provides on-demand access to fundamental computing resources (physical and virtual servers, networking, and storage) over the internet on a pay-as-you-go basis. IaaS enables end users to scale and shrink resources on an as-needed basis, reducing the need for high up-front capital expenditures, for unnecessary on-premises or "owned" infrastructure, and for overbuying resources to accommodate periodic spikes in usage.

PaaS (Platform-as-a-Service) provides software developers with an on-demand platform (hardware, a complete software stack, infrastructure, and development tools) for running, developing, and managing applications without the cost, complexity, and inflexibility of maintaining that platform on premises. With PaaS, the cloud provider hosts everything at their data center: servers, networks, storage, operating system software, middleware, and databases. 
Developers simply pick from a menu to spin up the servers and environments they need to run, build, test, deploy, maintain, update, and scale applications.

SaaS (Software-as-a-Service), also known as cloud-based software or cloud applications, is application software hosted in the cloud. Users access SaaS through a web browser, a dedicated desktop client, or an API that integrates with a desktop or mobile operating system. Cloud service providers offer SaaS on a monthly or annual subscription fee, and may also provide these services through pay-per-usage pricing.
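The elasticity and pay-per-use pricing described above can be illustrated with a toy cost comparison: on premises you must provision for peak demand whether or not the capacity is used, while in the cloud you pay only for what you consume. All demand figures and rates below are invented for illustration:

```python
# Toy comparison: fixed on-premises capacity vs pay-per-use cloud pricing.
# All numbers (demand, prices) are invented example values.
hourly_demand = [2, 2, 3, 10, 4, 2]   # servers needed in each hour
ON_PREM_RATE = 1.0                    # cost per server-hour, owned capacity
CLOUD_RATE = 1.5                      # cost per server-hour, rented on demand

# On premises you provision for the peak for every hour, used or not.
on_prem_cost = max(hourly_demand) * len(hourly_demand) * ON_PREM_RATE

# In the cloud you scale capacity with demand and pay per use.
cloud_cost = sum(hourly_demand) * CLOUD_RATE

print(on_prem_cost, cloud_cost)  # -> 60.0 34.5
```

Even at a higher per-hour rate, the pay-per-use model wins here because demand is spiky; with flat, sustained demand the comparison can go the other way, which is why hybrid strategies exist.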
