CSC 101 - Introduction to Computer

Federal University of Technology Akure

Michael Cole


Summary

This document provides a basic introduction to computers. It covers what a computer is, its components, functions, and characteristics, including speed, accuracy, diligence, versatility, memory, and more.


Provided by Michael Cole

What is a Computer?

A computer is an electronic device that is designed to work with information. The term computer is derived from the Latin word 'computare', which means to calculate, or a programmable machine. A computer cannot do anything without a program. It represents decimal numbers through strings of binary digits. The word 'computer' usually refers to the central processing unit plus internal memory. Charles Babbage is called the "grandfather" of the computer. The first mechanical computer designed by Charles Babbage was called the Analytical Engine. It used read-only memory in the form of punched cards. A computer is an advanced electronic device that takes raw data as input from the user, processes these data under the control of a set of instructions (called a program), gives the result (output), and saves the output for future use. It can process both numerical and non-numerical (arithmetic and logical) calculations.

Digital Computer Definition

The basic components of a modern digital computer are: input device, output device, central processing unit (CPU), mass storage device and memory. A typical modern computer uses Large Scale Integration (LSI) chips. The four functions of a computer are to accept data (input), process data (processing), produce output and store the results.

Input is the raw information entered into a computer from the input devices. It is the collection of letters, numbers, images and so on. Process is the operation on data as per the given instructions; it is a totally internal process of the computer system. Output, also known as the result, is the processed data given by the computer after data processing. Results can be saved in the storage devices for future use.

The basic characteristics of computers are:

1. Speed: A computer can work very fast. It takes only a few seconds for calculations that would take us hours to complete. You will be surprised to know that a computer can perform millions (1,000,000) of instructions and even more per second. Therefore, we determine the speed of a computer in terms of microseconds (10^-6 of a second) or nanoseconds (10^-9 of a second). From this you can imagine how fast your computer performs work.

2. Accuracy: The degree of accuracy of a computer is very high and every calculation is performed with the same accuracy. The accuracy level is determined on the basis of the design of the computer. The errors in a computer are due to human mistakes and inaccurate data.

3. Diligence: A computer is free from tiredness, lack of concentration, fatigue, etc. It can work for hours without creating any error. If millions of calculations are to be performed, a computer will perform every calculation with the same accuracy. Due to this capability it overpowers human beings in routine types of work.

4. Versatility: This means the capacity to perform completely different types of work. You may use your computer to prepare payroll slips. The next moment you may use it for inventory management or to prepare electricity bills.

5. Power of Remembering: A computer has the power of storing any amount of information or data. Any information can be stored and recalled as long as you require it, for any number of years. It depends entirely upon you how much data you want to store in a computer and when to erase or retrieve these data.

6. No IQ: A computer is a dumb machine and it cannot do any work without instructions from the user. It performs the instructions at tremendous speed and with accuracy. The user decides what to do and in what sequence.
So a computer cannot take decisions independent of the user.

7. No Feeling: It does not have feelings, emotions, taste, knowledge or experience. Thus it does not get tired even after long hours of work. It does not distinguish between users.

8. Storage: The computer has an in-built memory where it can store a large amount of data. You can also store data in secondary storage devices such as floppies, which can be kept outside your computer and can be carried to other computers.

Computer and its Various Components

A computer can process data, pictures, sound and graphics. It can solve highly complicated problems quickly and accurately. As shown in Figure 1, a computer performs basically five major operations or functions irrespective of its size and make. It accepts data or instructions by way of input, stores data, processes data as required by the user, gives results in the form of output, and controls all operations inside the computer.

Input: This is the process of entering data and programs into the computer system. A computer, like any other electronic machine, takes raw data as input, performs some processing and gives out processed data. Therefore, the input unit of the computer takes data from us in an organized manner for processing.

Figure 1: Basic computer operations

Storage: The process of saving data and instructions permanently is known as storage. Data has to be fed into the system before the actual processing starts, because the processing speed of the Central Processing Unit (CPU) is so fast that the data has to be provided to the CPU at a matching speed. Therefore the data is first stored in the storage unit for faster access and processing. This storage unit, or the primary storage of the computer system, is designed to do the above functionality. It provides space for storing data and instructions. The function of the storage unit is to store all data and instructions before and after processing. It also stores intermediate results of processing.

Processing: The task of performing operations such as arithmetic and logical operations is called processing. The Central Processing Unit (CPU) takes data and instructions from the storage unit and makes all sorts of calculations based on the instructions given and the type of data provided. The result is then sent back to the storage unit.

Output: This is the process of producing results from the data to obtain useful information. The output produced by the computer after processing must also be kept somewhere inside the computer before being presented in human-readable form. The output is also stored inside the computer for further processing.

Control: This determines the manner in which instructions are executed and how the operations are performed. Controlling of all operations such as input, processing and output is performed by the control unit. It takes care of the step-by-step processing of all operations inside the computer.

COMPUTER FUNCTIONAL UNITS

The computer system is divided into three separate units for its operation. They are:
a. Arithmetic logical unit
b. Control unit
c. Central processing unit

Arithmetic Logical Unit (ALU)

After data is entered through the input device, it is stored in the primary storage unit. The actual processing of the data and instructions is performed by the ALU. The major operations performed by the ALU are addition, subtraction, multiplication, division, logic and comparison. Data is transferred to the ALU from the storage unit when required.
After processing, the output is returned to the storage unit for further processing or storage.

Control Unit (CU)

The next component of the computer is the Control Unit, which acts like a supervisor seeing that things are done in the proper fashion. The Control Unit is responsible for coordinating the various operations using timing signals. It determines the sequence in which computer programs and instructions are executed. It oversees the processing of programs stored in the main memory, the interpretation of the instructions, and the issuing of signals for the other units of the computer to execute them. It also acts as a switchboard operator when several users access the computer simultaneously, thereby coordinating the activities of the computer's peripheral equipment as they perform the input and output operations.

Central Processing Unit (CPU)

The ALU and the CU of a computer system are jointly known as the central processing unit. You may call the CPU the brain of any computer system. It is just like a brain that takes all major decisions, makes all sorts of calculations and directs the different parts of the computer by activating and controlling their operations. Everything a computer does is controlled by the CPU, which is sometimes referred to simply as the central processor, nerve centre or heart, but is more commonly called the processor. In terms of computing power, the CPU is the most important element of a computer system. It adds and compares data within the CPU chip. The CPU or processor of any computer, whether micro, mini or mainframe, must have three elements or parts, namely primary storage, the arithmetic logic unit (ALU) and the control unit. The Control Unit (CU) decodes the program instructions. The CPU chip used in a computer is partially made out of silicon; in other words, the silicon chip used for data processing is called a microprocessor.

The central processing unit (CPU) is the central component of the PC. Sometimes it is simply called the processor. It is the brain that runs the show inside the PC. All work that is done on a computer is performed directly or indirectly by the processor. Obviously, it is one of the most important components of the PC. It is also, scientifically, not only one of the most amazing parts of the PC, but one of the most amazing devices in the world of technology. The processor plays a significant role in the following important aspects of your computer system:

Performance: The processor is probably the most important single determinant of system performance in the PC. While other components also play a key role in determining performance, the processor's capabilities dictate the maximum performance of a system. The other devices only allow the processor to reach its full potential.

Software Support: Newer, faster processors enable the use of the latest software. In addition, new processors such as the Pentium with MMX Technology enable the use of specialized software not usable on earlier machines.

Reliability and Stability: The quality of the processor is one factor that determines how reliably your system will run. While most processors are very dependable, some are not. This also depends to some extent on the age of the processor and how much energy it consumes.

Energy Consumption and Cooling: Originally, processors consumed relatively little power compared to other system devices. Newer processors can consume a great deal of power. Power consumption has an impact on everything from cooling method selection to overall system reliability.
Motherboard Support: The processor that you decide to use in your system will be a major determining factor in what sort of chipset you must use, and hence what motherboard you buy. The motherboard in turn dictates many facets of the system's capabilities and performance.

Uses of Computers

The computer is used in the following areas:

Education: Getting the right kind of information is a major challenge, as is getting information to make sense. College students spend an average of 5-6 hours a week on the internet. Research shows that computers can significantly enhance performance in learning. Students exposed to the internet say they think the web has helped them improve the quality of their academic research and of their written work. One revolution in education is the advent of distance learning, which offers a variety of internet and video-based online courses.

Health and Medicine: Computer technology is radically changing the tools of medicine. All medical information can now be digitized. Software is now able to compute the risk of a disease. Mental health researchers are using computers to screen troubled teenagers in need of psychotherapy. A patient paralyzed by a stroke has received an implant that allows communication between his brain and a computer; as a result, he can move a cursor across a screen by brainpower and convey simple messages.

Science: Scientists have long been users of computers. A new adventure among scientists is the idea of a "collaboratory", an internet-based collaborative laboratory in which researchers all over the world can work easily together even at a distance. An example is space physics, where space physicists are able to band together to measure the earth's ionosphere from instruments in four parts of the world.

Business: Business clearly sees the internet as a way to enhance productivity and competitiveness. Some areas of business that are undergoing rapid changes are sales and marketing, retailing, banking, stock trading, etc. Sales representatives not only need to be better educated and more knowledgeable about their customers' businesses, but also must be comfortable with computer technology. The internet has become a popular marketing tool. The world of cybercash has come to banking - not only smart cards but internet banking, electronic deposit, bill paying, online stock and bond trading, etc.

Recreation and Entertainment: Our entertainment and pleasure time have also been affected by computerization. For example: In movies, computer-generated graphics give freedom to designers so that special effects and even imaginary characters can play a part in making movies, videos and commercials. In sports, computers compile statistics, sell tickets, create training programs and diets for athletes, and suggest game-plan strategies based on a competitor's past performance. In restaurants, almost everyone has eaten food where the clerk enters an order by indicating choices on a rather unusual-looking cash register; the device directly enters the actual data into a computer, calculates the cost and then prints a receipt.

Government: Various departments of the government use computers for their planning, control and law enforcement activities. To name a few - Traffic, Tourism, Information & Broadcasting, Education, Aviation and many others.

Defence: There are many uses of computers in defence, such as controlling UAVs or unmanned aircraft, an example being the Predator. If you have cable I would recommend watching the shows "Future Weapons" and "Modern Marvels".
The show Future Weapons gives an entire hour to the Predator. Computers are also used on Intercontinental Ballistic Missiles (ICBMs), which use GPS and computers to help the missile get to the target. Computers are used to track incoming missiles and help slew weapons systems onto the incoming target to destroy them. Computers are used in helping the military find out where all their assets are (situational awareness) and in communications/battle management systems. Computers are used in the logistics and ordering functions of getting equipment to and around the battlefield. Computers are used in tanks, planes and ships to target enemy forces, help run the platform and, more recently, to help diagnose any problems with the platforms. Computers help design and test new systems.

Sports: In today's technologically growing society, computers are being used for several purposes, including:

Recording Information: Official statistics keepers and some scouts use computers to record statistics, take notes and chat online while attending and working at a sports event.

Analyzing Movements: The best athletes pay close attention to detail. Computers can slow recorded video and allow people to study their specific movements, to try to improve their tendencies and repair poor habits.

Writers: Many sportswriters attend several sporting events a week, and they take their computers with them to write during the game or shortly after, while their thoughts are fresh in their minds.

Scoreboard: While some scoreboards are manually updated, most professional sports venues have very modern scoreboards that are programmed to update statistics and information immediately after the information is entered into the computer.

Safety: Computers have aided in the design of safety equipment in sports, from football helmets to shoes to mouth guards.

Impact of Computers on Society

During the last decade computers have become an integral part of our daily lives. There is hardly any activity which does not make use of computers at some stage or other. Even when someone on holiday wishes to call a friend using his/her cell phone, he/she is using computers indirectly, as the messages are handled and directed by them. Similarly, on any given day, even if we are not directly working on the computers on our desks, we make use of computers many times while using a mobile or a landline phone, purchasing from a modern outlet, and doing other such activities. Facilities such as e-mail and the web have become the lifeline of our modern society as well as of the world of business. Almost all the normal activities in present-day society are controlled by computers, including the functioning of offices, factories, and research and development laboratories, along with developments in the fields of teaching, sports, telecommunication, entertainment, etc. Activities that are closely linked with the developments in computer systems include weather forecasts, developments in science and technology, outer-space exploration, breakthroughs in medical sciences, and so on.

The term 'computer system' includes both the hardware and the software. The hardware of a system consists of the physical components that are connected together to function as a computer. The working and functioning of a computer is governed by the software. All computers operate by carrying out the instructions contained in operating systems and other programs which comprise the software. The last decade has seen rapid development in computer hardware as well as software systems.
As a result, the speed with which present-day computers carry out instructions has increased tremendously. For instance, supercomputers are capable of carrying out trillions of instructions per second. At the same time, the costs of operation have been reduced to a large extent. This has made it possible for a common man to possess a computer at home, office or shop in the form of a desktop, or to carry it with him/her in the form of a laptop or a notebook while travelling. Modern desktops and laptops which a common person can now afford are much more powerful than the dream computers that a few top scientists of the world visualized just two decades ago. Due to the widespread applications and facilities associated with computers, everyone wants to possess them. Besides the computational facilities that a computer provides, one can also avail of facilities such as reservations for train and air travel, payment of bills and taxes, filing of tax returns, watching a movie using a DVD or on the internet, chatting with friends using video chat, conducting a conference between people sitting thousands of miles apart, ordering an essential drug or grocery, e-shopping and e-selling, etc. Students can make use of internet knowledge sources such as e-tutorials and other e-learning resources such as CDs, DVDs and e-libraries, which greatly aid in garnering useful information and gathering study material. Because of these advantages the computer has become an extremely essential tool for every student, and is not necessarily restricted to students of computer science.

CLASSIFICATION OF COMPUTERS

Computing machines can be classified in many ways depending on their purpose, baseline technology, usage, capacity or size, the era in which they were used, their basic operating principle and the kinds of data they process.

Classification by Purpose

General purpose computers are designed to perform a range of tasks. They have the ability to store numerous programs, but lack in speed and efficiency. Specific purpose computers are designed to handle a specific problem or to perform a specific task. A set of instructions is built into the machine.

Classification by Technology

This classification is a historical one and it is based on what performs the computer operation, or the technology behind the computing skill. Before the advent of any kind of computing device at all, human beings performed computation by themselves. This involved the use of fingers, toes and any other part of the body. Wood became a computing device when it was first used to design the abacus. Schickard in 1621 and Poleni in 1709 were both instrumental to this development. Metals were used in the early machines of Pascal, Thomas, and the production versions from firms such as Brunsviga, Monroe and so on. Electromechanical devices such as differential analyzers were present in the early machines of Zuse, Aiken, Stibitz and many others. Electronic elements were equally used in the Colossus, ABC, ENIAC, and the stored program computers. Several kinds of new electro-technological devices have been used in the past decades.

Classification by Capacity

Computers can be classified according to their capacity. The term 'capacity' refers to the volume of work or the data processing capability a computer can handle. Their performance is determined by the amount of data that can be stored in memory, the speed of internal operation of the computer, the number and type of peripheral devices, and the amount and type of software available for use with the computer.
The capacity of early generation computers was determined by their physical size - the larger the size, the greater the volume. Recent computer technology however is tending to create smaller machines, making it possible to package equivalent speed and capacity in a smaller format. Computer capacity is currently measured by the number of applications that it can run rather than by the volume of data it can process. This classification is therefore done as follows: MICROCOMPUTERS The microcomputer is a digital computer system that is controlled by a stored program that uses a microprocessor, a programmable read-only memory (ROM) and a random-access memory (RAM). The ROM defines the instructions to be executed by the computer while RAM is the functional equivalent of computer memory. A Microcomputer has the lowest level capacity and memories that are generally made of semiconductors fabricated on silicon chips. Large-scale production of silicon chips began in 1971 and this has been of great use in the production of microcomputers. The Apple IIe, the Radio Shack TRS-80, and the Genie III are examples of microcomputers and are essentially fourth generation devices. Microcomputers have from 4k to 64k storage location and are capable of handling small, single-business application such as sales analysis, inventory, billing and payroll. MINICOMPUTERS In the 1960s, the growing demand for a smaller stand-alone machine brought about the manufacture of the minicomputer, to handle tasks that large computers could not perform economically. Minicomputer systems provide faster operating speeds and larger storage capacities than microcomputer systems. Operating systems developed for minicomputer systems generally support both multiprogramming and virtual storage. This means that many programs can be run concurrently. This type of computer system is very flexible and can be expanded to meet the needs of users. Minicomputers usually have from 8k to 256k memory storage location, and relatively established application software. The PDP-8, the IBM systems 3 and the Honeywell 200 and 1200 computer are typical examples of minicomputers. MEDIUM-SIZE COMPUTERS Medium-size computer systems provide faster operating speeds and larger storage capacities than minicomputer systems. They support large number of high-speed input/output devices and several disk drives can be used to provide online access to large data files as required for direct access processing and their operating systems also support both multiprogramming and virtual storage. This allows the running of variety of programs concurrently. A medium-size computer can support a management information system and can therefore serve the needs of a large bank, insurance company or university. They usually have memory sizes ranging from 32k to 512k. The IBM System 370, Burroughs 3500 System and NCR Century 200 system are examples of medium-size computers. LARGE COMPUTERS Large computers are next to Super Computers and have bigger capacity than the Medium- size computers. They usually contain full control systems with minimal operator intervention. Large computer system ranges from single-processing configurations to nationwide computer-based networks involving general large computers. Large computers have storage capacities from 512k to 8192k, and these computers have internal operating speeds measured in terms of nanosecond, as compared to small computers where speed is measured in terms of microseconds. 
Expandability to 8 or even 16 million characters is possible with some of these systems. Such characteristics permit many data processing jobs to be accomplished concurrently. Large computers are usually used in government agencies, large corporations and computer services organizations. They are used in complex modelling, or simulation, business operations, product testing, design and engineering work and in the development of space technology. Large computers can serve as server systems where many smaller computers can be connected to it to form a communication network. SUPERCOMPUTERS The supercomputers are the biggest and fastest machines today and they are used when billion or even trillions of calculations are required. These machines are applied in nuclear weapon development, accurate weather forecasting and as host processors for local computer and time sharing networks. Super computers have capabilities far beyond even the traditional large-scale systems. Their speed ranges from 100 million-instruction-per-second to well over three billion. Because of their size, supercomputers sacrifice a certain amount of flexibility. They are therefore not ideal for providing a variety of user services. For this reason, supercomputers may need the assistance of a medium-size general purpose machines (usually called front-end processor) to handle minor programs or perform slower speed or smaller volume operation. Classification On the basis of Size Super Computer: The fastest and most powerful type of computer Supercomputers are very expensive and are employed for specialized applications that require immense amounts of mathematical calculations. For example, weather forecasting requires a supercomputer. Other uses of supercomputers include animated graphics, fluid dynamic calculations, nuclear energy research, and petroleum exploration. The chief difference between a supercomputer and a mainframe is that a supercomputer channels all its power into executing a few programs as fast as possible, whereas a mainframe uses its power to execute many programs concurrently. Mainframe Computer: A very large and expensive computer capable of supporting hundreds, or even thousands, of users simultaneously. In the hierarchy that starts with a simple microprocessor (in watches, for example) at the bottom and moves to supercomputers at the top, mainframes are just below supercomputers. In some ways, mainframes are more powerful than supercomputers because they support more simultaneous programs. But supercomputers can execute a single program faster than a mainframe. Mini Computer: A midsized computer. In size and power, minicomputers lie between workstations and mainframes. In the past decade, the distinction between large minicomputers and small mainframes has blurred, however, as has the distinction between small minicomputers and workstations. But in general, a minicomputer is a multiprocessing system capable of supporting from 4 to about 200 users simultaneously. Micro Computer or Personal Computer: Micro or personal computers include: Desktop Computer: a personal or micro-mini computer sufficient to fit on a desk. Laptop Computer: a portable computer complete with an integrated screen and keyboard. It is generally smaller in size than a desktop computer and larger than a notebook computer. Palmtop Computer/Digital Diary /Notebook /PDAs: a hand-sized computer. Palmtops have no keyboard but the screen serves both as an input and output device. Workstations: A terminal or desktop computer in a network. 
In this context, workstation is just a generic term for a user's machine (client machine), in contrast to a "server" or "mainframe".

Classification by Basic Operating Principle

Using this classification technique, computers are divided into Analog, Digital and Hybrid systems.

Analog Computers: These computers were well known in the 1940s, although they are now uncommon. An analog computer accepts inputs which vary with respect to time and are directly applied to various devices which perform the computing operations of addition, subtraction, multiplication, division, integration and function generation. Numbers to be used in a calculation were represented by physical quantities, such as electrical voltages. The computing units of analog computers respond immediately to the changes which they detect in the input variables. Analog computers excel in solving differential equations and are faster than digital computers.

Digital Computers: Digital computers represent information discretely and use a binary (two-step) system that represents each piece of information as a series of zeroes and ones for calculation. They are designed to process data in numerical form and manipulate it more easily, with their circuits performing directly the mathematical operations of addition, subtraction, multiplication and division. Due to the discrete form of digital information, it can be copied exactly, while it is difficult to make exact copies of analog information.

Hybrid Computers: These are machines that can work as both analog and digital computers. During the period from the early 1960s to the early 1970s, electronic analog computers were increasingly combined with a digital computer in hybrid systems. The idea was to combine the easy programmability of a general purpose digital computer with the ability of a large electronic analog computer to solve substantial, complex problems, notably large sets of nonlinear differential equations, or to simulate challenging spaceflights and "person-in-the-loop" situations in real time. A number of specialty hybrid systems emerged in the mid- to late 1960s and early 1970s. An unusual such system was the Trice digital analog computer developed by the Packard Bell Company and used by NASA for spaceflight simulation.

THE COMPUTER EVOLUTION

The computer evolution is indeed an interesting topic that has been explained in several different ways over the years, by many authors. Each generation of computer is characterized by a major technological development that fundamentally changed the way computers operate, resulting in increasingly smaller, cheaper, more powerful and more efficient and reliable devices. The various generations of computers are listed below.

The Mechanical Era (1623-1945)

Attempts to use machines to solve mathematical problems can be traced to the early 17th century. Wilhelm Schickard, Blaise Pascal, and Gottfried Leibniz were among the mathematicians who designed and implemented calculators that were capable of addition, subtraction, multiplication and division. The first multi-purpose or programmable computing device was probably Charles Babbage's Difference Engine, which was begun in 1823 but never completed. In 1842, Babbage designed a more ambitious machine, called the Analytical Engine, but unfortunately it also was only partially completed. Babbage, together with Ada Lovelace, recognized several important programming techniques, including conditional branches, iterative loops and index variables.
Babbage designed the machine which is arguably the first to be used in computational science. In 1833, George Scheutz and his son Edvard began work on a smaller version of the difference engine, and by 1853 they had constructed a machine that could process 15-digit numbers and calculate fourth-order differences. The US Census Bureau was one of the first organizations to use mechanical computers, which used punch-card equipment designed by Herman Hollerith to tabulate data for the 1890 census. In 1911 Hollerith's company merged with a competitor to found the corporation which in 1924 became International Business Machines (IBM).

First Generation (1946-1954)

In 1946 there was no 'best' way of storing instructions and data in a computer memory. There were four competing technologies for providing computer memory: electrostatic storage tubes, acoustic delay lines (mercury or nickel), magnetic drums (and disks), and magnetic core storage. Digital computers using electronic valves (vacuum tubes) are known as first generation computers. The high cost of vacuum tubes prevented their use for main memory; acoustic delay lines, which store information in the form of propagating sound waves, were used instead. The vacuum tube consumes a lot of power. The vacuum tube was developed by Lee DeForest in 1908. These computers were large in size and writing programs on them was difficult. Some of the computers of this generation were:

Mark I: The IBM Automatic Sequence Controlled Calculator (ASCC), called the Mark I by Harvard University, was an electro-mechanical computer. The Mark I was the first machine to successfully perform a long series of arithmetic and logical operations, and the first operating machine that could execute long computations automatically. The Mark I is a first generation computer. It was built as a partnership between Harvard and IBM in 1944 and was the first programmable digital computer made in the U.S. But it was not a purely electronic computer; instead the Mark I was constructed out of switches, relays, rotating shafts, and clutches. The machine weighed 5 tons, incorporated 500 miles of wire, was 8 feet tall and 51 feet long, and had a 50 ft rotating shaft running its length, turned by a 5 horsepower electric motor.

ENIAC: It was the first general-purpose electronic computer, built in 1946 at the University of Pennsylvania, USA, by John Mauchly and J. Presper Eckert. The completed machine was announced to the public on the evening of February 14, 1946. It was named the Electronic Numerical Integrator and Calculator (ENIAC). ENIAC contained 17,468 vacuum tubes, 7,200 crystal diodes, 1,500 relays, 70,000 resistors, 10,000 capacitors and around 5 million hand-soldered joints. It weighed more than 30 short tons (27 t), was roughly 8 by 3 by 100 feet (2.4 m × 0.9 m × 30 m), took up 1800 square feet (167 m²), and consumed 150 kW of power. Input was possible from an IBM card reader, and an IBM card punch was used for output. These cards could be used to produce printed output offline using an IBM accounting machine, such as the IBM 405. Today your favorite computer is many times as powerful as ENIAC, yet its size is very small.

EDVAC: It stands for Electronic Discrete Variable Automatic Computer and was developed in 1950. It was to be a vast improvement upon ENIAC: it was binary rather than decimal, and it was a stored program computer. The concept of storing data and instructions inside the computer was introduced here.
This allowed much faster operation, since the computer had rapid access to both data and instructions. The other advantage of storing instructions was that the computer could make logical decisions internally. The EDVAC was a binary serial computer with automatic addition, subtraction, multiplication, programmed division and automatic checking, with an ultrasonic serial memory. EDVAC's addition time was 864 microseconds and its multiplication time was 2900 microseconds (2.9 milliseconds). The computer had almost 6,000 vacuum tubes and 12,000 diodes, and consumed 56 kW of power. It covered 490 ft² (45.5 m²) of floor space and weighed 17,300 lb (7,850 kg).

EDSAC: It stands for Electronic Delay Storage Automatic Computer and was developed by M.V. Wilkes at Cambridge University in 1949. Two groups were working at the same time to develop the first stored-program computer: in the United States, at the University of Pennsylvania, the EDVAC (Electronic Discrete Variable Automatic Computer) was being worked on, while in England at Cambridge, the EDSAC (Electronic Delay Storage Automatic Computer) was being developed. The EDSAC won the race as the first stored-program computer, beating the United States' EDVAC by two months. The EDSAC performed computations in the three-millisecond range. It performed arithmetic and logical operations without human intervention. The key to its success was in the stored instructions, which it depended upon solely for its operation. This machine marked the beginning of the computer age. EDSAC was the first computer used to store a program.

UNIVAC-1: Eckert and Mauchly produced it in 1951 (UNIVAC stands for Universal Automatic Computer). It was the first commercial computer produced in the United States. It was designed principally by J. Presper Eckert and John Mauchly, the inventors of the ENIAC. The machine was 25 feet by 50 feet in length, contained 5,600 tubes, 18,000 crystal diodes and 300 relays. It utilized serial circuitry, a 2.25 MHz bit rate, and had an internal storage capacity of 1,000 words or 12,000 characters. It utilized a mercury delay line, magnetic tape, and typewriter output. The UNIVAC was used for general purpose computing with large amounts of input and output. Power consumption was about 120 kVA. Its reported processing speed was 0.525 milliseconds for arithmetic functions, 2.15 milliseconds for multiplication and 3.9 milliseconds for division. The UNIVAC was also the first computer to come equipped with a magnetic tape unit and was the first computer to use buffer memory.

Other Important Computers of the First Generation

Some other computers of this time worth mentioning are the Whirlwind, developed at the Massachusetts Institute of Technology, and the JOHNNIAC, built by the Rand Corporation. The Whirlwind was the first computer to display real-time video and use core memory. The JOHNNIAC was named in honor of John von Neumann. Computers at this time were usually kept in special locations like government and university research labs or military compounds.

Limitations of First Generation Computers

The following are the major drawbacks of first generation computers. They used valves or vacuum tubes as their main electronic component. They were large in size, slow in processing and had less storage capacity. They consumed lots of electricity and produced lots of heat. Their computing capabilities were limited. They were not so accurate and reliable. They used machine level language for programming. They were very expensive. Example: ENIAC, UNIVAC, IBM 650 and so on.
Second Generation (1955-1964)

Second-generation computers used transistors for CPU components, ferrite cores for main memory and magnetic disks for secondary memory. They used high-level languages such as FORTRAN (1956), ALGOL (1960) and COBOL (1960-1961). An I/O processor was included to control I/O operations. Around 1955 a device called the transistor replaced the bulky vacuum tubes of the first generation computers. Transistors are smaller than vacuum tubes and have a higher operating speed. They have no filament and require no heating, and their manufacturing cost was also very low. Thus the size of the computer was reduced considerably. It is in the second generation that the concepts of the Central Processing Unit (CPU), memory, programming languages and input and output units were developed. Programming languages such as COBOL and FORTRAN were developed during this period. Some of the computers of the second generation were the following:

IBM 1620: Its size was smaller compared to first generation computers and it was mostly used for scientific purposes.

IBM 1401: Its size was small to medium and it was used for business applications.

CDC 3600: Its size was large and it was used for scientific purposes.

Features: Transistors were used instead of vacuum tubes. Processing speed was faster than first generation computers (microseconds). They were smaller in size (51 square feet). The input and output devices were faster. Example: IBM 1400 and 7000 series, Control Data 3600, etc.

Third Generation (1964-1977)

This generation began with the development of a small chip consisting of 300 or more transistors. These integrated circuits (ICs) are popularly known as chips. A single IC has many transistors, registers and capacitors built on a single thin slice of silicon. So it is quite obvious that the size of the computer got further reduced. Some of the computers developed during this period were the IBM-360, ICL-1900, IBM-370 and VAX-750. Higher level languages such as BASIC (Beginners All-purpose Symbolic Instruction Code) were developed during this period. Computers of this generation were small in size and low cost, had large memory, and their processing speed was very high. Very soon ICs were replaced by LSI (Large Scale Integration), which consisted of about 100 components; an IC containing about 100 components is called LSI.

Features: They used Integrated Circuit (IC) chips in place of transistors. Semiconductor memory devices were used. The size was greatly reduced, the speed of processing was high, and they were more accurate and reliable. Large Scale Integration (LSI) and Very Large Scale Integration (VLSI) were also developed. The minicomputers were introduced in this generation. They used high level languages for programming. Example: IBM 360, IBM 370, etc.

Fourth Generation

An IC containing about 100 components is called LSI (Large Scale Integration), and one which has more than 1000 such components is called VLSI (Very Large Scale Integration). This generation uses large scale integrated circuits (LSICs) built on a single silicon chip, called microprocessors. Due to the development of the microprocessor it is possible to place a computer's central processing unit (CPU) on a single chip. These computers are called microcomputers. Later, very large scale integrated circuits (VLSICs) replaced LSICs. Thus the computer which occupied a very large room in earlier days can now be placed on a table. The personal computer (PC) that you see in your school is a fourth generation computer. Main memory used fast semiconductor chips of up to 4 Mbit size. Hard disks were used as secondary memory.
Keyboards, dot matrix printers, etc. were developed. Operating systems such as MS-DOS, UNIX and Apple's Macintosh were available. Object oriented languages such as C++ were developed.

Features: They used the microprocessor (VLSI) as their main switching element. They are also called microcomputers or personal computers. Their size varies from desktop to laptop or palmtop. They have a very high speed of processing; they are 100% accurate, reliable, diligent and versatile. They have very large storage capacity. Example: IBM PC, Apple Macintosh, etc.

Fifth Generation (1991 - continued)

The fifth generation computers use ULSI (Ultra-Large Scale Integration) chips, in which millions of transistors are placed in a single IC. 64-bit microprocessors have been developed during this period. Data flow and EPIC architectures for these processors have been developed. RISC and CISC, both types of designs, are used in modern processors. Memory chips and flash memory up to 1 GB, hard disks up to 600 GB and optical disks up to 50 GB have been developed. Fifth generation digital computers will be characterized by artificial intelligence.

The Active Players in Computer Technology

Hundreds of people from different parts of the world played prominent roles in the history of the computer. This section highlights some of those roles as played in several parts of the world.

The American Participation

America indeed played big roles in the history of the computer. John Atanasoff invented the Atanasoff-Berry Computer (ABC), which introduced electronic binary logic in the late 1930s. Atanasoff and Berry completed the computer by 1942, but it was later dismantled. Howard Aiken is regarded as one of the pioneers who introduced the computer age, and he completed the design of four calculators (or computers). Aiken started what is known as computer science today and was one of the first explorers of the application of the new machines to business purposes and machine translation of foreign languages. His first machine was known as the Mark I (or the Harvard Mark I), originally named the IBM ASCC, and this was the first machine that could solve complicated mathematical problems by being programmed to execute a series of controlled operations in a specific sequence. The ENIAC (Electronic Numerical Integrator and Computer) was displayed to the public on February 14, 1946, at the Moore School of Electrical Engineering at the University of Pennsylvania, and about fifty years after, a team of students and faculty started the reconstruction of the ENIAC, which was done using state-of-the-art solid-state CMOS technology.

The German Participation

The DEHOMAG D11 tabulator was invented in Germany. It had a decisive influence on the diffusion of punched card data processing in Germany. The invention took place between 1926 and 1931. Konrad Zuse is popularly recognized in Germany as the father of the computer, and his Z1, a programmable automaton built from 1936 to 1938, is said to be the world's 'first programmable calculating machine'. He built the Z4, a relay computer with a mechanical memory of unique design, during the war years in Berlin. Eduard Stiefel, a professor at the Swiss Federal Institute of Technology (ETH) who was looking for a computer suitable for numerical analysis, discovered the machine in Bavaria in 1949. Around 1938, Konrad Zuse began work on the creation of the Plankalkul, while working on the Z3.
He wanted to build a Planfertigungsgerat, and made some progress in this direction in 1943 and in 1944, he prepared a draft of the Plankalkul, which was meant to become a doctoral dissertation some day. The Plankalkul is the first fully-fledged algorithmic programming language. Years later, a small group under the direction of Dr. Heinz Billing constructed four different computers, the G1 (1952), the G2 (1955), the Gla (1958) and the G3 (1961), at the Max Planck Institute in Gottingen. Lastly, during the World war II, a young German engineer, Helmut Hoelzer studied the application of electronic analog circuits for the guidance and control system of liquid- propellant rockets and developed a special purpose analog computer, the ‘Mischgerat’ and integrated it into the rocket. The development of the fully electronic, general purpose, analog computer was a spin-off of this work. It was used to simulate ballistic paths by solving the equations of motion. The British Participation The Colossus was designed and constructed at the Post Office Research Laboratories at Dollis Hill in North London in 1943 to help Bletchley Park in decoding intercepted German telegraphic messages. Colossus was the world’s first large electronic valve programmable logic calculator and ten of them were built and were operational in Bletchley Park, home of Allied World War II code-breaking. Between 1948 and 1951, four related computers were designed and constructed in Manchester and each machine has its innovative peculiarity. The SSEM (June 1948) was the first such machine to work. The Manchester Mark 1 (Intermediate Version, April 1949) was the first full-sized computer available for use. The completed Manchester Mark 1 (October 1949), with a fast random access magnetic drum, was the first computer with a classic two-level store. The Ferranti Mark 1 (February 1951) was the first production computer delivered by a manufacturer. The University of Manchester Small-Scale Experimental Machine, the ‘Baby’ first ran a stored program on June 21, 1948, thus claiming to be the first operational general purpose computer. The Atlas computer was constructed in the Department of Computer Science at the University of Manchester. After its completion in December 1962, it was regarded as the most powerful computer in the world and it had many innovative design features of which the most important were the implementation of virtual addressing and the one-level store. The Japanese Participation In the second half of the 1950s, many experimental computers were designed and produced by Japanese national laboratories, universities and private companies. In those days, many experiments were carried out using various electronic and mechanical techniques and materials such as relays, vacuum tubes, parametrons, transistors, mercury delay lines, cathode ray tubes, magnetic cores and magnetic drums. These provided a great foundation for the development of electronics in Japan. Between the periods of 1955 and 1959, computers like ETL-Mark 2, JUJIC, MUSASINO I, ETL-Mark-4, PC-1, ETL-Mark-4a, TAC, Handai- Computer and K-1 were built. The African Participation Africa evidently did not play any major roles in the recorded history of computer, but indeed it has played big roles in the last few decades. Particularly worthy of mention is the contribution of a Nigerian who made a mark just before the end of the twentieth century. 
Former American President – Bill Clinton (2000) said “One of the great minds of the Information Age is a Nigerian American named Philip Emeagwali. He had to leave school because his parents couldn't pay the fees. He lived in a refugee camp during your civil war. He won a scholarship to university and went on to invent a formula that lets computers make 3.1 billion calculations per second”. Philip Emeagwali, supercomputer and Internet pioneer, was born in 1954, in Nigeria, Africa. In 1989, he invented the formula that used 65,000 separate computer processors to perform 3.1 billion calculations per second. Emeagwali is regarded as one of the fathers of the internet because he invented an international network which is similar to, but predates that of the Internet. He also discovered mathematical equations that enable the petroleum industry to recover more oil. Emeagwali won the 1989 Gordon Bell Prize, computation's Nobel prize, for inventing a formula that lets computers perform the fastest computations, a work that led to the reinvention of supercomputers. Computer System Components A block diagram of the basic setup of a typical computer system appears in Figure 2. The major components are as follows: Figure 2: Components of a computer system CPU As mentioned earlier, this is the central processing unit, often called simply the processor, where the actual execution of a program takes place. (Since only machine language programs can execute on a computer, the word program will usually mean a machine language program. Recall that we might write such a program directly, or it might be produced indirectly, as the result of compiling a source program written in a high-level language (HLL) such as C.) Memory This was described in Chapter 1. A program’s data and machine instructions are stored here during the time the program is executing. Memory consists of cells called words, each of which is identifiable by its address. If the CPU fetches the contents of some word of memory, we say that the CPU reads that word. On the other hand, if the CPU stores a value into some word of memory, we say that it writes to that word. Reading is analogous to watching a video cassette tape, while writing is analogous to recording onto the tape. Ordinary memory is called RAM, for Random Access Memory, a term which means that the access time is the same for each word.1 There is also ROM (Read-Only Memory), which as its name implies, can be read but not written. ROM is used for programs which need to be stored permanently in main memory, staying there even after the power is turned off. For example, an autofocus camera typically has a computer in it, which runs only one program, a program to control the operation of the camera. Think of how inconvenient—to say the least—it would be if this program had to be loaded from a disk drive everytime you took a picture! It is much better to keep the program in ROM. I/O Devices A typical computer system will have several input/output devices, possibly even hundreds of them (Figure 2.1 shows two of them). Typical examples are keyboards/monitor screens, floppy and fixed disks, CDROMs, modems, printers, mice and so on. Specialized applications may have their own special I/O devices. For example, consider a vending machine, say for tickets for a regional railway system such as the San Franciso Bay Area’s BART, which is capable of accepting dollar bills. The machine is likely to be controlled by a small computer. 
One of its input devices might be an optical sensor which senses the presence of a bill, and collects data which will be used to analyze whether the bill is genuine. One of the system’s output devices will control a motor which is used to pull in the bill; a similar device will control a motor to dispense the railway ticket. Yet another output device will be a screen to give messages to the person buying the ticket, such as “please deposit 25 cents more.” The common feature of all of these examples is that they serve as interfaces between the computer and the “outside world.” Note that in all cases, they are communicating with a program which is running on the computer. Just as you have in the past written programs which input from a keyboard and output to a monitor screen, programs also need to be written in specialized applications to do input/output from special I/O devices, such as the railway ticket machine application above. For example, the optical sensor would collect data about the bill, which would be input by the program. The program would then analyze this data to verify that the bill is genuine. System Bus A bus is a set of parallel wires (usually referred to as “lines”), used as communication between components. Our system bus plays this role in Figure 2.1—the CPU communicates with memory and I/O devices via the bus. It is also possible for I/O devices to communicate directly with memory, an action which is called direct memory access (DMA), and again this is done through the bus. The bus is broken down into three sub-buses: Data Bus: As its name implies, this is used for sending data. When the CPU reads a memory word, the memory sends the contents of that word along the data bus to the CPU; when the CPU writes a value to a memory word, the value flows along the data bus in the opposite direction. Since the word is the basic unit of memory, a data bus usually has as many lines as there are bits in a memory word. For instance, a machine with 32-bit word size would have a data bus consisting of 32 lines. Address Bus: When the CPU wants to read or write a certain word of memory, it needs to have some mechanism with which to tell memory which words it wants to read or write. This is the role of the address bus. For example, if the CPU wants to read Word 504 of memory, it will put the value 504 on the address bus, along which it will flow to the memory, thus informing memory that Word 504 is the word the CPU wants. The address bus usually has the same number of lines as there are bits in the computer’s addresses. Control Bus: How will the memory know whether the CPU wants to read or write? This is one of the functions of the control bus. For example, the control bus in typical PCs includes lines named MEMR and MEMW, for “memory read” and “memory write.” If the CPU wants to read memory, it will assert the MEMR line, by putting a low voltage on it, while for a write, it will assert MEMW. Again, this signal will be noticed by the memory, since it too is connected to the control bus, and so it can act accordingly. As an example, consider a machine with both address and word size equal to 32 bits. Let us denote the 32 lines in the address bus as A31 through A0, corresponding to Bits 31 through 0 of the 32-bit address, and denote the 32 lines in the data bus by D31 through D0, corresponding to Bits 31 through 0 of the word being accessed. Suppose the CPU executes an instruction to fetch the contents of Word 0x000d0126 of memory. 
CPU Components

Figure 3 shows the components that make up a typical CPU. Included are an arithmetic and logic unit (ALU) and various registers.

Figure 3: Components of a typical CPU

The ALU, as its name implies, performs arithmetic operations, such as addition, subtraction, multiplication and division, and also several logical operations. The latter category of operations is similar to the &&, || and ! operators in the C language, used in logical expressions such as

    if (a < b && c == 3) x = y;

The ALU does not store anything. Values are input to the ALU, and results are then output from it, but nothing is retained, in contrast to memory words, which do store values. An analogy might be made to telephone equipment. A telephone inputs sound, in the form of mechanical vibrations in the air, and converts the sounds to electrical pulses to be sent to the listener’s phone, but it does not store these sounds. A telephone answering machine with a tape recorder, on the other hand, does store the sounds which are input to it.

Registers are storage cells similar in function to memory words. The number of bits in a register is typically the same as that for memory words. We will even use the same c( ) notation for the contents of a register as we have used for the contents of a memory word. For example, c(PC) will denote the contents of the register PC described below, just as, for instance, c(0x22c4) means the contents of memory word 0x22c4. (For convenience, we are assuming 16-bit words and addresses here.) Keep in mind, though, that registers are not in memory; they are inside the CPU. Here are some details concerning the registers shown in Figure 3.

PC: This is the program counter. Recall that a program’s machine instructions must be stored in memory while the program is executing. The PC contains the address of the currently executing instruction.

SP: The stack pointer contains the address of the “top” of a certain memory region which is called the stack. A stack is a type of data structure which the machine uses to keep track of function calls and other information, as we will see in Chapter 5.

XR: An index register helps programs access arrays. Its name comes from the fact that in an array element, say y[i] for an int array y, the subscript i is often called the index. The instruction itself would specify the address of y, and i*sizeof(int) would be placed in XR. The circuitry in the CPU adds these two quantities, thus producing the proper location from which to access y[i].

PS: The processor status register contains miscellaneous pieces of information, including the condition codes.
The latter are indicators of information such as whether the most recent computation produced a negative, positive or zero result. Note that there are wires leading out of the ALU to the PS (shown as just one line in Figure 3). These lines keep the condition codes up to date: each time the ALU is used, the condition codes are immediately updated according to the result of the ALU operation. Generally the PS will contain other information in addition to condition codes. For example, it was mentioned in Chapter 1 that MIPS and PowerPC processors give the operating system a choice as to whether the machine will run in big-endian or little-endian mode. A bit in the PS records which mode is in use.

DRs: Data registers are usually used as “fast memory,” i.e. as temporary places to store data to which we need quick access. Because they are in the CPU, an instruction executing within the CPU can access them much faster than it can access memory, since memory is outside the CPU (see Figure 2). Different CPU types have different numbers of DRs; two are pictured in Figure 3.

MAR: The memory address register is used as the CPU’s connection to the address bus. For example, if the currently executing instruction needs to read Word 0x0054 from memory, the CPU will put 0x0054 into the MAR, from which it will flow onto the address bus.

MDR: The memory data register is used as the CPU’s connection to the data bus. For example, if the currently executing instruction needs to read Word 0x0054 from memory, the memory will put c(0x0054) onto the data bus, from which it will flow into the MDR in the CPU. On the other hand, if we are writing to Word 0x0054, say writing the value 0x0019, the CPU will put 0x0019 in the MDR, from which it will flow out onto the data bus and then to memory. At the same time, the CPU will put 0x0054 into the MAR, so that the memory will know to which word the 0x0019 is to be written.

IR: This is the instruction register. When the CPU is ready to start execution of a new instruction, it fetches the instruction from memory. The instruction is returned along the data bus, and thus is deposited in the MDR. The CPU needs to use the MDR for further accesses to memory, so it copies the fetched instruction into the IR, allowing the original copy in the MDR to be overwritten.

The PC, SP, XR and MAR all contain addresses, and thus typically have sizes equal to the address size of the machine. Similarly, the DRs and the MDR typically have sizes equal to the word size of the machine. The PS stores miscellaneous information, and thus its size has no particular relation to the machine’s address or word size. The IR must be large enough to store the longest possible instruction for that machine.

A CPU also has internal buses, similar in function to the system bus, which serve as pathways along which data can be transferred from one register to another. Figure 3 shows a CPU having only one such bus, but some CPUs have two or more. Internal buses are beyond the scope of this book, and thus any reference to a “bus” from this point onward will mean the system bus.

The reader should pay particular attention to the MAR and MDR. They will be referred to at a number of points in the following chapters, both in the text and in the exercises, not because they are so vital in their own right, but rather because they serve as excellent vehicles for clarifying various concepts that we will cover in this book. In particular, phrasing some discussions in terms of the MAR and MDR will clarify the fact that some CPU instructions access memory while others do not.
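To make the roles of the MAR and MDR concrete, here is a minimal C sketch that models memory as an array of 16-bit words (matching the 16-bit assumption above) and expresses a read and a write in terms of the two registers. The names cpu_read, cpu_write, mar and mdr are illustrative only; a real CPU implements this handshake in hardware.

    #include <stdio.h>
    #include <stdint.h>

    #define MEM_WORDS 0x10000          /* 16-bit addresses: 65,536 words */

    static uint16_t memory[MEM_WORDS]; /* main memory, one 16-bit word per cell */
    static uint16_t mar;               /* memory address register */
    static uint16_t mdr;               /* memory data register */

    /* Read: the CPU places the address in the MAR; the memory responds by
       placing the addressed word in the MDR. */
    static uint16_t cpu_read(uint16_t addr)
    {
        mar = addr;
        mdr = memory[mar];
        return mdr;
    }

    /* Write: the CPU places the address in the MAR and the value in the MDR;
       the memory stores the MDR contents at that address. */
    static void cpu_write(uint16_t addr, uint16_t value)
    {
        mar = addr;
        mdr = value;
        memory[mar] = mdr;
    }

    int main(void)
    {
        cpu_write(0x0054, 0x0019);                 /* the write example above */
        printf("c(0x0054) = 0x%04x\n", cpu_read(0x0054));
        return 0;
    }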
Again, the CPU structure shown above should only be considered “typical”; there are many variations. RISC CPUs, not surprisingly, tend to be somewhat simpler than the above model, though still similar.

Software Components of the Computer “Engine”

There are many aspects of a computer system which people at the learning stage typically take for granted as being controlled by hardware, but which are actually controlled by software. An example of this is the backspace action when you press the backspace key on the keyboard. You are accustomed to seeing the last character you typed disappear from the screen and the cursor move one position to the left. You might have had the impression that this is an inherent property of the keyboard and the screen, i.e. that their circuitry was designed to do this. However, for most computer systems today this is not the case. The bare hardware will not take any special action when you hit the backspace key. Instead, the special action is taken by whichever operating system (OS) is being used on the computer.

The OS is software: a program, which a person or group of people wrote to provide various services to user programs. One of those services is to monitor keystrokes for the backspace key and to take the special actions (move the cursor leftward one position, and put a blank where it used to be) when that key is encountered. When you write a program, say in C, you do not have to do this monitoring yourself, which is a tremendous convenience. Imagine what a nuisance it would be if you were forced to handle the backspace key yourself: you would have to include statements in each program you write to check for the backspace key and to update the screen if this character is encountered. The OS relieves you of this burden.

This backspace processing is an example of one of the many services that an OS provides. Another example is maintenance of a file system. Again the theme is convenience. When you create a file, you do not have to burden yourself with knowing the physical location of your file on the disk. You merely give the file a name. The OS finds unused space on the disk to store your file, and enters the name and physical location in a table that the OS maintains. Subsequently, you may access the file merely by specifying the name, and the OS service will translate that into the physical location and access the file on your behalf. In fact, a typical OS will offer a large variety of services for accessing files.

So, a user program will make use of many OS services, usually by calling them as functions. For example, consider the C-language function scanf(). Though of course you did not write this function yourself, someone did, and in doing so that person (or group of people) relied heavily on calls to an OS subroutine, read(). In terms of our “look under the hood” theme, we might phrase this by saying that a look under the hood of the C scanf() source code would reveal system calls to the OS function read(). For this reason, the OS is often referred to as “low-level” software. Also, this reliance of user programs on OS services shows why the OS is included in our “computer engine” metaphor: the OS is indeed one of the sources of “power” for user programs, just as the hardware is the other source of power.
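As an illustration of what a look under the hood of scanf() would reveal, the following minimal sketch, assuming a Unix-like system, obtains keyboard input by calling the OS read() service directly and echoes it back with write(). It only moves raw bytes; a library routine such as scanf() adds buffering and format conversion on top of calls like these.

    #include <unistd.h>   /* the OS services read() and write() */

    int main(void)
    {
        char buf[128];

        /* Ask the OS to read up to 127 bytes from file descriptor 0 (the keyboard). */
        ssize_t n = read(0, buf, sizeof(buf) - 1);
        if (n > 0) {
            /* Ask the OS to write the same bytes to file descriptor 1 (the screen). */
            write(1, "You typed: ", 11);
            write(1, buf, (size_t)n);
        }
        return 0;
    }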
To underscore that OS services do form a vital part of the computer’s “engine,” consider the following example. Suppose we have a machine-language program, which we either wrote ourselves or produced by compiling from C, for a DEC computer with a MIPS CPU. Could that program be run without modification on a Silicon Graphics machine, which also uses the MIPS chip? The answer is no. Even though both machines run a Unix OS, there are many different “flavors” of Unix. The DEC version of Unix, called Ultrix, differs somewhat from the SGI version, called IRIX. The program in question would probably include a number of calls to OS services (recall from above that even reads from the keyboard and writes to the screen are implemented as OS services), and those services would be different under the two OSs. Thus, even though the individual instructions of the program written for the DEC would make sense on the SGI machine, since both machines use the same type of CPU, some of those instructions would be devoted to OS calls, which would differ.

Since an OS is a program, written to provide a group of services, it follows that several different OSs, i.e. several different programs offering different groups of services, could be run on the same hardware. For instance, this is the case for PCs. The most widely used OS for PCs is Microsoft Windows, but there are also several versions of Unix for PCs, notably the free, open-source Linux and the commercial SCO.

Figure 4: Simplified block diagram of one of the first-generation microprocessors

PROGRAMMING METHODOLOGY AND ALGORITHM

Programming languages are languages through which we can instruct the computer to carry out some processes or tasks. They are also designed to communicate ideas about algorithms between human beings and computers. Programming languages can be used to express a wide range of algorithms; that is, the same task can often be carried out through more than one procedure of execution. The full concept of an algorithm will be explained later. A program is a set of codes that instructs the computer to carry out some processes, and programming is the process of writing programs.
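As a minimal illustration of what such a set of codes looks like, the short C program below accepts input, processes it, and displays output. (C is one of the high-level languages discussed in the next subsection; the variable names here are chosen only for this example.)

    #include <stdio.h>

    int main(void)
    {
        int first, second;

        /* Input: get two whole numbers from the user. */
        printf("Enter two whole numbers: ");
        if (scanf("%d %d", &first, &second) != 2)
            return 1;

        /* Process: compute their sum. */
        int sum = first + second;

        /* Output: display the result. */
        printf("Their sum is %d\n", sum);
        return 0;
    }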
LEVELS OF PROGRAMMING LANGUAGES

Programs and programming languages have existed since the invention of computers, and there are three levels of programming languages. These are:

Machine Language: Machine language is a set of binary-coded instructions, consisting of zeros (0) and ones (1). Machine language is peculiar to each type of computer; the first generation of computers was coded in machine language specific to each model of computer. Some of the shortcomings of machine language were:
- Coding in machine language was a very tedious and boring job.
- Machine language was not user-friendly: the user had to remember a long list of operation codes and know where instructions were stored in computer memory.
- Debugging any set of codes was very difficult, since it required going through the program instructions from the beginning to the end.
The major advantage of machine language is that it requires no translation, since it is already in the machine's own code, and is therefore faster to execute.

Low Level Language: This is a level of programming language different from machine language; that is, the instructions are not entirely in binary-coded form. It consists of symbolic codes, which are easier to remember than machine codes. In assembly language, memory addresses are referenced by symbols rather than by the numeric addresses used in machine language. A low level programming language is also called assembly language, because it makes use of an assembler to translate its codes into machine language. Examples of assembly language statements are:

    MOVE A1, A2   (move the contents of Register A2 to A1)
    JMP b         (go to the process with label b)

The disadvantages of assembly language are that:
- It is specific to particular machines.
- It requires a translator called an assembler.
The major advantage of assembly language is that programs written in it are easier to read and more user-friendly than those written in machine language, especially when comments are inserted in the code.

High Level Language: This level of programming language consists of English-like codes. A high-level language is largely independent of the computer, because the programmer only needs to pay attention to the steps or procedures involved in solving the problem, not to the details of the machine that will execute the program. A high-level language program is usually broken into one or more units such as main programs, sub-programs, classes, blocks, functions and procedures; the name given to each unit differs from one language to another. Some advantages of high-level languages:
- They are more user-friendly, that is, easy to learn and write.
- They are very portable, that is, they can be used on almost any computer.
- They save much time and effort compared with lower-level languages.
- Code written in a high-level language can easily be debugged.

FEATURES OF PROGRAMMING LANGUAGES

There are some conventional features which a programming language must possess. These features are:
- It must have syntactic rules for forming statements.
- It must have a vocabulary built from the letters of an alphabet.
- It must have a language structure, which consists of keywords, expressions and statements.
- It may require a translator before it can be understood by a computer.
Programming languages are written and processed by the computer for the purpose of communicating data between the human being and the computer.
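To connect these features to a concrete program, the fragment below labels the keywords, expressions and statements in a few lines of C; the values and names are illustrative only.

    #include <stdio.h>

    int main(void)                    /* "int" and "void" are keywords          */
    {
        int score = 65;               /* a declaration statement; "65" is an
                                         expression assigned to score           */
        if (score >= 50)              /* "if" is a keyword; "score >= 50" is a
                                         logical expression                     */
            printf("Pass\n");         /* a function-call statement              */
        else
            printf("Fail\n");
        return 0;                     /* "return" is a keyword; this whole line
                                         is a statement                         */
    }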
PROGRAMMING METHODOLOGIES AND APPLICATION DOMAIN

Some programming methodologies are stated below:

Procedural Programming: A procedural program is a series of steps, each of which performs a calculation, retrieves input, or produces output. Concepts like assignments, loops, sequences and conditional statements are the building blocks of procedural programming. Major procedural programming languages are COBOL, FORTRAN, C and C++.

Object-Oriented (OO) Programming: An OO program is a collection of objects that interact with each other by passing messages that transform their state. The fundamental building blocks of OO programming are object modelling, classification and inheritance. Major object-oriented languages are C++, Java, etc.

Functional Programming: A functional program is a collection of mathematical functions, each with an input (domain) and a result (range). Functions interact and combine with one another through functional composition, conditionals and recursion. Major functional programming languages are Lisp, Scheme, Haskell and ML.

Logic (Declarative) Programming: A logic program is a collection of logical declarations about what outcome a function should accomplish rather than how that outcome should be accomplished. Logic programming provides a natural vehicle for expressing non-determinism, since the solutions to many problems are often not unique but manifold. The major logic programming language is Prolog.

Event-Driven Programming: An event-driven program is a continuous loop that responds to events that are generated in an unpredictable order. These events originate from user actions on the screen (mouse clicks or keystrokes, for example) or from other sources (such as readings from sensors on a robot). Major event-driven programming languages include Visual Basic and Java.

Concurrent Programming: A concurrent program is a collection of cooperating processes, sharing information with each other from time to time but generally operating asynchronously. Concurrent programming languages include SR, Linda and High Performance Fortran.

APPLICATION AREAS

The programming communities that represent distinct application areas can be grouped in the following way:

Scientific Computing: This is concerned with making complex calculations very fast and very accurately. The calculations are defined by mathematical models which represent scientific phenomena. Examples of scientific programming languages include Fortran 90, C and High Performance Fortran.

Management Information Systems (MIS): Programs written for institutions to manage their information systems are probably the most prolific in the world. These systems include an organisation’s payroll system, online sales and marketing systems, inventory and manufacturing systems, and so forth. Traditionally, MIS have been developed in programming languages like COBOL, RPG and SQL.

Artificial Intelligence: The artificial intelligence programming community has been active since the early 1960s. This community is concerned with developing programs that model human intelligent behaviour, logical deduction and cognition. Examples of AI programming languages are prominent functional and logic programming languages like Prolog, CLP, ML, Lisp, Scheme and Haskell.

Systems: System programmers are those who design and maintain the basic software that runs systems: operating system components, network software, programming language compilers and debuggers, virtual machines and interpreters, and so on. Some of these programs are written in the assembly language of the machine, while many others are written in a language specifically designed for systems programming. The primary example of a systems programming language is C.

Web-centric: The most dynamic area of new programming community growth is the World Wide Web, which is the enabling vehicle for electronic commerce and a wide range of applications in academia, government and industry. The notion of Web-centric computing, and hence Web-centric programming, is motivated by an interactive model in which a program remains in an infinite loop waiting for the next request or event to arrive, responding to that event, and returning to its looping state. Programming languages that support Web-centric computing require a paradigm that encourages system-user interaction, or event-driven programming. Programming languages that support Web-centric computing include Perl, Tcl/Tk, Visual Basic and Java.
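The infinite loop described above, which waits for an event, responds to it and returns to waiting, can be sketched in a few lines of C. In this simplified illustration, lines typed at the keyboard stand in for events; real Web servers and GUI programs follow the same overall shape.

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char event[64];

        /* The event loop: wait for an event, respond to it, then loop again. */
        for (;;) {
            printf("waiting for event> ");
            if (scanf("%63s", event) != 1)
                break;                       /* end of input: stop the loop     */

            if (strcmp(event, "quit") == 0)
                break;                       /* a "quit" event ends the program */
            else if (strcmp(event, "click") == 0)
                printf("handling a mouse click\n");
            else
                printf("ignoring unknown event '%s'\n", event);
        }
        return 0;
    }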
TRANSLATORS

A translator is a program that translates a program written in any programming language other than machine language into codes the computer can understand, thereby producing a program that may be executed on the computer. The need for a translator arises because the only programs that are directly executable on a computer are machine language programs. Examples of translators are:

Assembler: This is a program that converts programs written in assembly (low-level) language into machine language.

Interpreters and Compilers: These are programs that convert programs written in a high-level programming language into machine language. The major difference between them is that a compiler converts the entire source program into object code before the program is executed, while an interpreter translates the source instructions line by line, executing each instruction immediately before translating the next.

Features of Translators
- They exist to make programs understandable by the computer.
- Different translators exist for different levels and types of programming languages.
- Without them, programs other than machine language programs cannot be executed.

The Programming Environment
A programming environment comprises the following:

The Editor: In order to type a program at the keyboard and save it on a disk, it is necessary to run a program called an editor. The editor also allows a program to be retrieved from the disk and amended as necessary.

The Compiler: This translates a program written in a high-level language, stored in text form on a disk, into a machine-oriented form, also stored on disk.

The Linker/Loader: A linker/loader picks up the machine-oriented program and combines it with any necessary supporting software (already in machine-oriented form) to enable the program to be run. Before a compiled program can be run or executed by the computer, it must be converted into an executable form.

ALGORITHMS

An algorithm is a procedure through which we obtain the solution of a problem; in other words, a sequence of statements that, when executed one after the other, allow one to calculate the solution of the problem starting from the information provided as input. An algorithm is characterized by:
- Non-ambiguity: the statements must be interpretable in a unique way by whoever executes them.
- Executability: it must be possible to execute each statement (in a finite amount of time) given the available resources.
- Finiteness: the execution of the algorithm must terminate in a finite amount of time for every possible set of input data.

Example of an algorithm: scan the person names, one after the other as they appear in the registry, until you have found the requested one; then return the associated telephone number. Different algorithms exist for solving the same problem. Once we have found or developed an algorithm, we have to code it in the selected programming language.
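Coding the telephone-registry algorithm in C might look like the sketch below. The registry data and the names find_number and registry are invented purely for illustration; the point is that each statement of the algorithm maps directly onto a statement of the program.

    #include <stdio.h>
    #include <string.h>

    struct entry { const char *name; const char *phone; };

    /* A small, made-up registry used only for illustration. */
    static const struct entry registry[] = {
        { "Ada",     "0803-000-0001" },
        { "Babbage", "0803-000-0002" },
        { "Turing",  "0803-000-0003" },
    };
    static const int registry_size = 3;

    /* Scan the names one after the other until the requested one is found,
       then return the associated telephone number (or NULL if absent). */
    static const char *find_number(const char *wanted)
    {
        for (int i = 0; i < registry_size; i++)
            if (strcmp(registry[i].name, wanted) == 0)
                return registry[i].phone;
        return NULL;
    }

    int main(void)
    {
        const char *phone = find_number("Babbage");
        printf("%s\n", phone != NULL ? phone : "not found");
        return 0;
    }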
COMPUTER PROBLEM SOLVING STRATEGIES

Computer problem-solving strategies are the approaches adopted for obtaining a satisfactory solution to a problem. The following are the major strategies:

Decomposition
The first step in solving any problem is to decompose the problem description. A good way to do this is to perform syntactic analysis on the description. We can do this in the following steps.

1. Identify all of the nouns in the sentence. Given the 3 dimensions of a box (length, width, and height), calculate the volume. The nouns in the problem specification identify descriptions of information that you will need to either identify or keep track of. Once these nouns are identified, they should be grouped into one of two categories:
- Input (items that are already known or are expected from the user)
- Output (items that are determined through manipulation of the input)

2. Eliminate redundant or irrelevant information. There may be some information in the problem description that made it into our input/output chart that we really don’t need to solve the problem (that is, not all of the nouns may be relevant). Also, there may be some nouns that appear redundant (information we already have in our table, just in a different form). You may ask why we eliminated “dimensions” instead of “length,” “width,” and “height.” The rule of thumb for eliminating redundant information is to always eliminate the most general item. In other words, you wish to keep the most specific nouns possible in your table. When in doubt, try to piece it together logically: when figuring out the volume, which nouns would be the most useful to you?

3. Identify all of the verbs in the sentence. Given the 3 dimensions of a box (length, width, and height), calculate the volume. The verbs in the problem specification identify what actions your program will need to take. These actions, known as processing, are the steps between your input and your output.

4. Link your inputs, processes, and output. This step is as simple as drawing lines between the relevant information in your chart. Your lines show what inputs need to be processed to get the desired output. In our example, we need to take our length, width, and height and multiply them to give us our desired volume.

Figure 5: Linking input, process and output

5. Use external knowledge to complete your solution. In the solution, we have used the general verb calculate. It is at this point that we are required to determine what “calculate” means. In some arbitrary problem, calculate could refer to applying some mathematical formula or other transformation to some input data in order to reach the desired output. External knowledge (such as your background in mathematics) often provides the basis to “fill in the blanks.” In this case, by elementary geometry, the volume of a box can be found using the following formula:

    Volume = length * width * height

Generally, computer problem solving will involve the following:

(1) Algorithm formulation. The steps in algorithm formulation are:
- analysis of the problem
- decomposition of the problem into sub-problems
- stepwise refinement
- review of the proposed solution procedure
The algorithm can further be made clearer using flowcharts and pseudocode.

Flowcharting
The second step in solving our problem involves the use of flowcharting. Flowcharting is a graphical way of depicting a problem in terms of its inputs, outputs, and processes. Though the shapes we will use in our flowcharts will be expanded as we cover more topics, some of the basic elements are presented in Figure 6:
- Oval (start/end of a program)
- Parallelogram (program input and output)
- Uni-directional arrow (indicates the flow of the program)
- Rectangle (processing)
- Diamond (condition/looping)

Pseudocode
The final step in analyzing a problem is to move from the flowchart to pseudocode. Pseudocode involves writing down all of the major steps you will use in the program, as depicted in your flowchart. This is similar to writing final statements in your programming language, without needing to worry about program syntax, but retaining the flexibility of program design. Like flowcharting, there are many elements to pseudocode design; only the most rudimentary are described here:
- Get: used to get information from the user
- Display: used to display information for the user
- Compute: perform an arithmetic operation
- +, -, *, /, =, ( ): standard arithmetic operators
- Store: store a piece of information for later use

A typical pseudocode is presented as follows:

    Get length, width, height
    Compute volume
        volume = length * width * height
    Store volume
    Display volume
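Translating this pseudocode into C gives a short, complete program. This is an illustrative sketch rather than the only possible coding; the variable names follow the pseudocode, and the use of double and scanf() is simply one reasonable choice.

    #include <stdio.h>

    int main(void)
    {
        double length, width, height;

        /* Get length, width, height */
        printf("Enter length, width and height: ");
        if (scanf("%lf %lf %lf", &length, &width, &height) != 3)
            return 1;

        /* Compute (and store) volume = length * width * height */
        double volume = length * width * height;

        /* Display volume */
        printf("Volume = %g\n", volume);
        return 0;
    }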
Exercise: Work through the three steps of decomposition, flowcharting, and pseudocode for a program on the following example. Lemons and oranges are sold in a store. Oranges are N30 each and lemons are N15 each. The program gets from the user the numbers of oranges and lemons he/she wants and outputs the total amount of money to be paid.

Finally, writing or developing a computer program to realise the steps in the algorithm that will lead to the solution of the problem will involve some program development stages/phases.

Computer program development stages/phases:
- requirement and specification
- design
- coding/implementation
- testing
- maintenance

SUMMARY, CONCLUSION AND RECOMMENDATION

Researching, studying and writing on ‘History of the Computer’ has indeed been a fulfilling but challenging task, and it has brought about a greater appreciation of the work done by scientists of old, the great developmental research carried out by more recent scientists, and of course the impact all such innovations have made on the development of the human race. It has generated greater awareness of the need to study the history of the computer as a means of knowing how to develop or improve on existing computer technology. It is therefore strongly recommended that science and engineering students should develop greater interest in the history of their profession. The saying that ‘there is nothing absolutely new under the sun’ is indeed real, because the same world resources but fresh ideas have been used over the years to improve on existing technologies. Finally, it is hoped that this paper is found suitable as a good summary of the technological history and development of the computer, and challenging to upcoming scientists and engineers to study the history of their profession.

LIST OF REFERENCES

Computer Notes. http://ecomputernotes.com/fundamental/introduction-to-computer/
Programming and Algorithms. http://www.nou.edu.ng/uploads/NOUN_OCL/pdf/pdf2/CIT%20237.pdf
Alacritude, LLC. (2003). http://www.infoplease.com
Allison, Joanne (1997). http://www.computer50.org/mark1/ana-dig.html
Brain, Marshall (2003). How Microcontrollers Work. http://electronics.howstuffworks.com/microcontroller1.htm
Brown, Donita (2003). Reinventing Supercomputers. http://emeagwali.com/education/inventions-discoveries/
Computational Science Education Project (1996). http://csep1.phy.ornl.gov/csep.html
Ceruzzi, Paul E. (2000). A History of Modern Computing. London: The MIT Press.
Crowther, Jonathan (1995), ed. Oxford Advanced Learner’s Dictionary of Current English. Oxford: Oxford University Press.
Dick, Maybach (2001). BCUG - Brookdale Computer Users Group. http://www.bcug.com/
Dick, Pountain (2003). Penguin Dictionary of Computing. Australia: Penguin.
Encyclopedia Britannica (2003). http://www.britannica.com
Joelmreyes website (2002). http://comp100.joelmreyes.cjb.net
Leven Antov (1996). History of the MS-DOS. California: Maxframe Corporation.
Moreau, R. (1984). The Computer Comes of Age – The People, the Hardware, and the Software. London: The MIT Press.
Morris, William (1980), ed. The American Heritage Dictionary. Boston: Houghton Mifflin Company.
Reedy, Jerry (1984), ed. Notable Quotables. Chicago: World Book Encyclopedia, Inc.
Rojas, Raul and Ulf Hashagen (2000), eds. The First Computers – History and Architecture. London: The MIT Press.
Steve Ditlea (1984), ed. Digital Deli. New York: Workman Publishing Company, Inc.
Techencyclopedia (2003). http://www.techweb.com/encyclopedia. The Computer Language Company.
The World Book Encyclopedia (1982). C-Ch, Volume 3. World Book-Childcraft International, Inc.
Weiner, Mike and others (1990), eds. The Pocket Word Finder Thesaurus. New York: Pocket Books.
Layman, Thomas (1990), ed. The Pocket Webster School & Office Dictionary. New York: Pocket Books.
