Computer Architecture Introduction PDF

Summary

This document provides an introduction to computer architecture, outlining its course structure and covering key concepts like machine language, layered design, and the human-computer gap. It explains the importance of abstraction in computer design and the role of virtual machines. The document details different levels of abstraction in computer architecture used to manage complexity and aid in comprehension.

Full Transcript


COMPUTER ARCHITECTURE COURSE OUTLINE
• Introduction
• Milestones in Computer Architecture
• The Computer Zoo
• Processors
• Memory
• Input/Output
• The Instruction Set
• Parallel Computer Architectures

CHAPTER 1: INTRODUCTION

Understanding Digital Computers:
• A digital computer follows a sequence of instructions, known as a program, to perform tasks.
• Each computer can directly execute only a limited set of basic instructions, such as:
  - Adding numbers
  - Checking whether a number is zero
  - Copying data between memory locations

Machine Language:
• These basic instructions form a machine language: the fundamental language for communication between people and computers.
• Machine language is kept simple to hold hardware costs down, but it is difficult for people to use directly.

Challenges and Solutions:
• Because using machine language is complex and tedious, computers are designed as a series of layers, or abstractions.
• Each layer builds on the previous one, helping manage complexity.

Structured Computer Organization:
• This layered approach to computer design is called structured computer organization.
• It helps designers create systematic, organized computer systems.

Abstractions in Computer Design:
• Over time, designers realized that organizing computers as a sequence of layers or abstractions helps manage complexity.
• Each layer builds on the one below it, simplifying the design process.
• This layered structure keeps computer systems systematic, organized, and easier to work with, even as technology grows more complex.

The Human-Computer Gap:
• There is a significant gap between what is easy for people and what computers are designed to do.
• People want to perform complex tasks (X), but computers can only process simple instructions (Y).

The Challenge:
• This difference creates a problem in making computers useful and accessible for human needs.

Goal of Structured Computer Organization / Computer Architecture:
• This lecture explains how structured computer organization (computer architecture) can bridge this gap.
• By using layers of abstraction, we can make computers more convenient and powerful for human use.

LANGUAGES, LEVELS, AND VIRTUAL MACHINES

Addressing the Human-Computer Gap:
• To make computers easier for people to use, a new set of instructions (a new language) is introduced.
• L0: the machine's built-in language (low-level instructions).
• L1: a higher-level language designed to be more convenient for people.

Two Approaches to Executing L1 Programs:
• Translation: convert each instruction in L1 into an equivalent sequence of L0 instructions; the computer then executes this new L0 program.
• Interpretation: use a program called an interpreter, written in L0, that reads and executes L1 instructions directly without ever creating a new L0 program.

Bridging Human and Machine Needs:
• Computers are designed with basic instructions (L0) that are efficient for machines but hard for humans to use.
• People need higher-level languages (L1, L2, etc.) that are closer to human logic to create and manage complex programs.

Virtual Machines as a Solution:
• Virtual machines are hypothetical layers that simulate these higher-level languages (L1, L2, …) on a computer that can only process L0.
• They allow people to write in more convenient languages without modifying the hardware.
• For example, a virtual machine (M1) enables execution of L1 code as though it were native to the machine.
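To make the two approaches concrete, here is a minimal Python sketch. The toy languages, the names l1_program, translate, interpret, and the three-instruction "L0 machine" are all invented for illustration and do not come from the lecture. Translation produces a complete L0 program and then runs it; interpretation carries out each L1 instruction directly.

```python
# A toy illustration (not any real machine): L1 is a tiny "human-friendly"
# language with named variables; L0 is a made-up machine that only knows
# numbered memory cells and two operations.

# L1 program: assignment statements over variables.
l1_program = [
    ("set", "a", 3),          # a = 3
    ("set", "b", 4),          # b = 4
    ("add", "c", "a", "b"),   # c = a + b
]

# --- Approach 1: translation -------------------------------------------
# Convert the whole L1 program into an equivalent L0 program first,
# then run the L0 program on the "hardware".
def translate(l1):
    cells = {}                          # variable name -> memory cell number
    def cell(name):
        return cells.setdefault(name, len(cells))
    l0 = []
    for ins in l1:
        if ins[0] == "set":             # SETI cell, constant
            l0.append(("SETI", cell(ins[1]), ins[2]))
        elif ins[0] == "add":           # ADDM dst, src1, src2
            l0.append(("ADDM", cell(ins[1]), cell(ins[2]), cell(ins[3])))
    return l0

def run_l0(l0):
    memory = [0] * 16
    for op, *args in l0:
        if op == "SETI":
            memory[args[0]] = args[1]
        elif op == "ADDM":
            memory[args[0]] = memory[args[1]] + memory[args[2]]
    return memory

# --- Approach 2: interpretation ----------------------------------------
# An interpreter (which on a real machine would itself be written in L0)
# reads each L1 instruction and carries it out directly; no L0 program
# is ever produced.
def interpret(l1):
    env = {}
    for ins in l1:
        if ins[0] == "set":
            env[ins[1]] = ins[2]
        elif ins[0] == "add":
            env[ins[1]] = env[ins[2]] + env[ins[3]]
    return env

print(run_l0(translate(l1_program))[:3])   # [3, 4, 7]
print(interpret(l1_program))               # {'a': 3, 'b': 4, 'c': 7}
```

Both routes produce the same result; they differ only in whether an L0 program is generated as an intermediate step.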
Translation and Interpretation:
• Virtual machines operate either by translating (converting the entire L1 program to L0) or by interpreting (running L1 code step by step through an interpreter).
• These techniques make it possible to use advanced languages on hardware designed for basic instructions.

Layered Design for Effective Computing:
• By creating multiple virtual machine layers, we can keep adding higher-level languages (e.g., L2, L3), making programming progressively easier and more human-friendly.
• This layered approach allows complex systems to function without requiring prohibitively costly hardware.

A MULTILEVEL MACHINE

Simplifying Programming with Virtual Machines:
• Programmers working at a high level (the level-n virtual machine) do not need to understand the complex layers below.
• The virtual machine structure ensures programs are executed, regardless of whether they run directly on hardware or through layers of interpreters and translators.

Focus on the Top Level:
• Most developers care only about the top-level language, which is user-friendly and far removed from low-level machine code.
• This abstraction allows developers to concentrate on problem solving without worrying about hardware specifics.

CONTEMPORARY MULTILEVEL MACHINES IN MODERN COMPUTERS

Multilevel Architecture:
• Modern computers often consist of multiple levels, up to six in some cases.
• Each level represents a layer of abstraction, from high-level programming down to the machine's physical circuits.

Understanding Each Level:
• Level 0 is the hardware level, where circuits execute the machine-language instructions coming from Level 1.
• Each successive level builds on the one below it, allowing complex operations at higher levels while keeping the hardware relatively simple.

Digital Logic Level (Level 0):
• Core component: gates (AND, OR, etc.), each built from transistors.
• Function: gates combine to form memory and the fundamental processing units of the computer (a small gate-simulation sketch appears after the six level descriptions below).
• Purpose: foundation for all higher computing functions.

Microarchitecture Level (Level 1):
• Core components: registers (typically 8 to 32) and the Arithmetic Logic Unit (ALU).
• Function: performs arithmetic operations on data, using the registers to hold data temporarily.
• Control: may be exercised by a microprogram (software) or directly by hardware; this controls the flow of data and the execution of instructions.

Instruction Set Architecture (ISA) Level (Level 2):
• Definition: the machine language unique to each computer model, e.g., the "language" described in a machine's manual.
• Execution: programs at the ISA level are executed by the microprogram or by hardwired circuits.

Operating System Machine Level (Level 3):
• Hybrid level: supports the instructions of the ISA level plus additional features such as memory organization and multitasking.
• Execution: some instructions are interpreted by the operating system, while others are executed by the microprogram.

Assembly Language Level (Level 4):
• Purpose: a symbolic language that is translated into lower-level machine code.
• Tool: programs written in assembly are converted by an assembler for execution by the lower levels.

High-Level Languages (Level 5):
• Examples: C, Java, Python, etc.
• Function: application-oriented languages that make programming simpler and more accessible.
• Execution: typically translated to lower levels by compilers or interpreters.
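As promised above, here is a minimal sketch of the digital logic level. The gate functions and the full_adder/add_4bit helpers are invented for illustration (real gates are transistor circuits, not Python functions), but they show how simple gates combine into the arithmetic building blocks from which the microarchitecture level's ALU is constructed.

```python
# Purely illustrative: model Level-0 gates as Python functions and combine
# them into a 1-bit full adder, the kind of building block an ALU is made of.
# (Real gates are transistor circuits; this only mirrors their logic.)

def AND(a, b): return a & b
def OR(a, b):  return a | b
def XOR(a, b): return a ^ b

def full_adder(a, b, carry_in):
    """Add three 1-bit inputs; return (sum_bit, carry_out)."""
    s1 = XOR(a, b)
    sum_bit = XOR(s1, carry_in)
    carry_out = OR(AND(a, b), AND(s1, carry_in))
    return sum_bit, carry_out

def add_4bit(a_bits, b_bits):
    """Ripple-carry addition of two 4-bit numbers given as bit lists, LSB first."""
    result, carry = [], 0
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    return result, carry

# 0b0110 (6) + 0b0011 (3) = 0b1001 (9); bits are listed least-significant first.
print(add_4bit([0, 1, 1, 0], [1, 1, 0, 0]))   # ([1, 0, 0, 1], 0)
```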
Key Takeaways:
• Separation of concerns: lower levels focus on machine operations, while upper levels cater to application development.
• Increasing abstraction: each level abstracts away complexity, making programming more user-friendly at higher levels.
• Role of systems programmers: Levels 1-3 are managed by systems programmers to support functionality and translation for the higher levels.

SUMMARY

• Computers are designed as a series of layers or levels, each built upon the previous one, with each level representing a distinct abstraction.
• This hierarchical structure helps tame the complexity of computer systems by allowing us to focus on higher-level concepts and ignore unnecessary details.

Key points include:
• Levels of abstraction: each level provides a different layer of functionality and operations, making the system easier to understand and work with. The lower levels (hardware, circuits) are complex but crucial for the operation of the higher levels.
• Architecture: the architecture of a level defines its visible features, such as data types, operations, and programming interfaces. These are the elements a programmer interacts with, such as the instruction set or memory organization.
• Implementation vs. architecture: the architecture concerns how the system is used by the programmer, while the implementation refers to how the system is built (e.g., the type of memory technology used). Implementation details are not part of the architecture.
• Computer architecture: the field of study focused on designing the parts of a computer system that are visible to programmers, including the instruction set, memory management, and how the components interact. It is crucial in determining how efficiently a computer can perform tasks.
• Virtual machines: in modern computing, virtual machines (VMs) often act as a layer above the hardware, providing an abstraction that makes systems more flexible and easier to work with. Different programming languages may target different virtual machines, improving compatibility and simplifying development.
• In essence, computer systems are composed of multiple interdependent layers, each with its own purpose, and the architecture defines how these layers interact and how programmers use them to execute tasks effectively. Computer architecture and organization are key concepts in designing systems that are both functional and efficient.

EVOLUTION OF MULTILEVEL MACHINES

Hardware vs. Software:
• Hardware refers to the physical components of a computer (e.g., integrated circuits, memory, input/output devices). It is tangible and directly executes machine-level instructions (Level 1).
• Software, in contrast, is a set of algorithms and instructions that tell the hardware what to do. It is stored on media such as hard disks or CDs, but its essence lies in the instructions that make up the programs.

Historically Clear Boundary:
• In the early days of computing, the line between hardware and software was sharply defined.
• Over time this has become less distinct: with the evolution of multilevel machines, some operations that were once embedded in hardware are now handled by software, and vice versa.

Hardware and Software Equivalence:
• A key idea in modern computing is that hardware and software are logically equivalent: any function carried out by hardware can also be implemented in software, and vice versa. As Karen Panetta put it, "Hardware is just petrified software."
• Conversely, functions that were once considered the domain of hardware can instead be simulated in software.
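A small illustration of the equivalence point (the function below is my own sketch, not from the lecture): shift-and-add multiplication is an algorithm that can equally well be wired into a hardware multiplier circuit or written as a few lines of software; the choice between the two is one of cost, speed, and flexibility rather than capability.

```python
# Shift-and-add multiplication: the same logical algorithm a hardware
# multiplier implements, expressed here in software.

def shift_add_multiply(a: int, b: int) -> int:
    """Multiply two non-negative integers using only shifts and additions."""
    product = 0
    while b:
        if b & 1:            # lowest bit of b set: add the current a into the result
            product += a
        a <<= 1              # shift a left (multiply by 2)
        b >>= 1              # shift b right (examine the next bit)
    return product

print(shift_add_multiply(13, 11))   # 143, same as 13 * 11
```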
Decisions between Hardware and Software:
• The decision to implement a function in hardware or in software depends on several factors, including cost, speed, reliability, and how often the function is expected to change. These decisions are not fixed; they shift with technological trends, economic considerations, and evolving user demands.
• Trends in technology: as technology evolves, the roles of hardware and software change. This ongoing evolution leads to multilevel machines in which the number of layers between the hardware and the highest-level programming languages continues to grow.
• In summary, the evolution of multilevel machines reflects the growing flexibility and interdependence of hardware and software. What was once distinct has become more integrated, allowing greater adaptability and efficiency in computer design.

THE INVENTION OF MICROPROGRAMMING

• The concept of microprogramming emerged as a way to simplify the hardware design of early digital computers, which originally had only two levels: the Instruction Set Architecture (ISA) level (where all programming was done) and the digital logic level (which executed the programs).

Early Computer Design (1940s):
• Early computers had a simple two-level architecture in which the ISA level (where programs were written) was carried out directly by the digital logic level (the hardware).
• This design was challenging because the digital logic circuits were complex, difficult to build, and prone to failure, especially since they relied on unreliable vacuum tubes.

Maurice Wilkes and the Three-Level Machine (1951):
• Maurice Wilkes, a researcher at the University of Cambridge, proposed a radical change in 1951: introducing a third level, the microprogramming level.
• The key idea was a microprogram, a built-in interpreter that executes ISA-level programs. Instead of the hardware directly executing complex ISA-level instructions, it would only need to execute the much simpler microprogram, which uses a smaller set of instructions.
• This simplification promised to reduce the number of electronic circuits (and vacuum tubes) required, leading to improved reliability and easier maintenance.

Impact of Microprogramming:
• The introduction of microprogramming significantly reduced hardware complexity by offloading the interpretation of complex ISA instructions to a simpler microprogram.
• By 1970 the concept was widely adopted: most major computers of the time used microprogramming, with microprograms acting as interpreters for ISA-level instructions rather than relying on direct hardware execution (a toy sketch of such an interpreter follows below).
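To give a feel for what "a built-in interpreter for ISA-level programs" means, here is a toy fetch-decode-execute loop in Python. The instruction names (LOADI, ADD, DEC, JNZ, HALT) and the run function are invented for illustration and are not Wilkes' actual design; a real microprogram does this job in hardware-level steps rather than Python.

```python
# A toy fetch-decode-execute loop. A microprogram plays roughly this role:
# it repeatedly fetches an ISA-level instruction, decodes it, and carries it
# out using a small set of simpler steps.

def run(program):
    registers = [0] * 4      # R0..R3
    pc = 0                   # program counter
    while pc < len(program):
        op, *args = program[pc]          # fetch + decode
        pc += 1
        if op == "LOADI":                # LOADI reg, constant
            registers[args[0]] = args[1]
        elif op == "ADD":                # ADD dst, src1, src2
            registers[args[0]] = registers[args[1]] + registers[args[2]]
        elif op == "DEC":                # DEC reg: subtract 1
            registers[args[0]] -= 1
        elif op == "JNZ":                # JNZ reg, target: jump if reg != 0
            if registers[args[0]] != 0:
                pc = args[1]
        elif op == "HALT":
            break
    return registers

# Compute 5 * 3 by repeated addition: add R0 into R2, R1 times.
program = [
    ("LOADI", 0, 5),    # R0 = 5
    ("LOADI", 1, 3),    # R1 = 3 (loop counter)
    ("LOADI", 2, 0),    # R2 = 0 (accumulator)
    ("ADD", 2, 2, 0),   # R2 = R2 + R0
    ("DEC", 1),         # R1 -= 1
    ("JNZ", 1, 3),      # repeat while R1 != 0
    ("HALT",),
]
print(run(program))     # [5, 0, 15, 0]
```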
The Invention of the Operating System (OS):

Early Computers (Pre-1960):
• Early computers required programmers to operate the machines themselves. Programmers ran their programs by hand, working with punched cards and handling errors directly, which often led to idle machine time and inefficiency.

Introduction of the Operating System (1960):
• To reduce idle time, operating systems were introduced to automate tasks previously handled by human operators. The first widespread OS, FMS (FORTRAN Monitor System), streamlined processing by managing job queues and loading compilers automatically.

Evolution of the OS:
• As operating systems evolved, they added new instructions and features, turning into sophisticated systems that formed a new level of abstraction above the ISA. This included macros and system calls that extended functionality beyond simple machine instructions.

The Migration of Functionality to Microcode:
• With the rise of microprogramming (by 1970), machine designers could add new instructions in software, allowing instruction sets to expand. Additions included more efficient arithmetic operations, memory management, process switching, and specialized features such as multimedia processing.
• Microcode evolution allowed new instructions to be added by modifying the microprogram rather than changing the hardware.
• Many new instructions were not strictly necessary but offered performance optimizations (e.g., an INC instruction instead of an ADD).
• Key additions to instruction sets:
  - Instructions for integer multiplication and division.
  - Floating-point arithmetic instructions.
  - Instructions for calling and returning from procedures.
  - Instructions for speeding up looping.
  - Instructions for handling character strings.
• Microprogramming's decline: as microprograms grew more complex through the 1960s and 1970s, they became slow.
• Simplification: researchers proposed eliminating microprogramming and reducing instruction sets so that instructions could be executed directly by hardware, improving performance.
• Modern processors still use microprogramming to translate complex instructions into internal microcode for hardware execution.
• Fluid boundaries: the distinction between hardware and software is not fixed; today's software may become tomorrow's hardware, and vice versa. Programmers are abstracted from how instructions are implemented.

MILESTONES IN COMPUTER ARCHITECTURE
• The Zeroth Generation—Mechanical Computers (1642–1945)
• The First Generation—Vacuum Tubes (1945–1955)
• The Second Generation—Transistors (1955–1965)
• The Third Generation—Integrated Circuits (1965–1980)
• The Fourth Generation—Very Large Scale Integration (1980–?)
• The Fifth Generation—Low-Power and Invisible Computers

THE COMPUTER ZOO

Moore's Law:
• Driving progress: the computer industry's rapid growth is fueled by the ability to fit more transistors onto chips each year. This results in more powerful processors, larger memories, and reduced manufacturing costs, enabling innovations across sectors from consumer electronics to enterprise systems.
• Gordon Moore's prediction: Gordon Moore, co-founder of Intel, observed in 1965 that the number of transistors on a chip was doubling regularly; in its widely quoted form, Moore's Law puts the doubling at roughly every 18 months, equivalent to about a 60% annual increase in transistor count. The observation has held for decades, shaping the trajectory of the computer industry by enabling smaller, faster, and more affordable devices (a quick worked calculation follows below).
• Industry transformation: Moore's Law has been the cornerstone of the personal computer, mobile phone, and broader semiconductor industries. It has made the miniaturization of technology possible, increased the affordability of computing devices, and driven mass adoption.
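The doubling rule is easy to check with a little compound-growth arithmetic. The sketch below (the function name growth_factor is mine, and the time spans are illustrative rather than actual chip data) shows how an 18-month doubling compounds over years and decades.

```python
# Simple compound-growth arithmetic for the 18-month doubling rule quoted above.

def growth_factor(years: float, doubling_years: float = 1.5) -> float:
    """How many times larger a quantity becomes if it doubles every `doubling_years`."""
    return 2 ** (years / doubling_years)

# Doubling every 18 months is roughly a 59% increase per year ...
print(f"per year: x{growth_factor(1):.2f}")        # x1.59

# ... which compounds dramatically over the life of an industry.
for years in (10, 20, 30, 40):
    print(f"over {years} years: x{growth_factor(years):,.0f}")
# over 10 years: x102; over 20 years: x10,321;
# over 30 years: x1,048,576; over 40 years: x106,528,681
```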
TECHNOLOGICAL AND ECONOMIC FORCES

• Moore's Law: the rapid progress of the computer industry is largely driven by the ability to put ever more transistors on a chip, enhancing processing power and memory capacity. Moore's prediction that transistor counts double roughly every 18 months has held true, leading to significant advances in chip technology over decades.
• Limitations and future challenges: while Moore's Law has been a key driver, shrinking transistors are nearing physical limits. Issues such as energy dissipation and current leakage may soon hinder further progress, although technologies such as quantum computing and carbon nanotubes might provide alternatives.

Economic Impact and the Virtuous Circle:
• Economic growth from technological advancement: as the number of transistors on a chip increases, products improve and prices fall, opening the door to new applications and markets. This creates a virtuous circle in which technological innovation drives economic demand, which in turn funds further improvements and new business opportunities.
• Nathan Myhrvold's law of software: software keeps expanding to include more features, which demands more processing power and memory. This ongoing need for better hardware fuels further advances in both processor and memory technologies, contributing to the industry's growth.

Memory and Storage Evolution:
• Storage growth: disk storage has improved dramatically over the past few decades. For example, the IBM PC/XT shipped in 1982 with a 10 MB hard disk, while today's systems commonly feature 1 TB disks. The gains are not only in capacity but also in speed, with price/performance improving by roughly 50% per year.
• Shift to flash memory: traditional hard disks are gradually being replaced by faster and more reliable silicon-based flash memory.

Telecommunications and Networking:
• Exponential growth in networking: the telecommunications sector has also seen extraordinary progress, from 300-bit/sec modems in the 1980s to fiber-optic networks capable of transmitting around 1 trillion bits per second. The growth of the Internet is a key driver of the demand for faster and more efficient communication technology.

THE COMPUTER SPECTRUM

Over the past four decades, computer technology has improved by factors of millions, not merely tens. This exponential growth in processing power, storage, and connectivity has not simply produced bigger or faster computers; it has transformed the very nature of computing, enabling applications such as artificial intelligence, big-data processing, and real-time global communication that would have been unimaginable with earlier machines. These advances create new capabilities and industries, shifting the entire technology landscape. (The short calculation at the end of this transcript works out two of these growth factors.)

THE CURRENT SPECTRUM OF COMPUTERS AVAILABLE
• Disposable computer
• Microcontroller
• Mobile and game computers
• Personal computer
• Server
• Mainframe

EXAMPLES OF COMPUTER FAMILIES
• Introduction to the x86 Architecture
• Introduction to the ARM Architecture
• Introduction to the AVR Architecture
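As referenced under THE COMPUTER SPECTRUM above, here is a quick back-of-the-envelope calculation of two of the growth factors quoted in this transcript. The figures 10 MB, 1 TB, 300 bit/s, and 1 Tbit/s come from the text; everything else is illustrative.

```python
# Quick arithmetic on the growth factors mentioned above.

MB = 10**6
TB = 10**12

storage_factor = (1 * TB) / (10 * MB)      # today's 1 TB disk vs. the PC/XT's 10 MB
bandwidth_factor = 10**12 / 300            # ~1 Tbit/s fiber vs. a 300 bit/s modem

print(f"storage:   x{storage_factor:,.0f}")     # x100,000
print(f"bandwidth: x{bandwidth_factor:,.0f}")   # x3,333,333,333
```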
