Five Generations of Computers
Summary
This document discusses the five generations of computers, ranging from the vacuum tube era to the artificial intelligence era. It covers the technical advancements, characteristics, and applications associated with each generation, along with an overview of CPU components and storage devices.
The Five Generations of Computers

The computer has evolved from a large, simple calculating machine into a smaller but much more powerful machine. The evolution of the computer to its current state is described in terms of generations of computers. Each generation of computer is designed around a new technological development, resulting in better, cheaper, and smaller computers that are more powerful, faster, and more efficient than their predecessors. Currently, there are five generations of computers. In the following subsections, we discuss each generation in terms of the technology it used (hardware and software), its computing characteristics (speed, i.e., the number of instructions executed per second), its physical appearance, and its applications.

First Generation Computers (1940-1956)

The first computers used vacuum tubes for circuitry and magnetic drums for memory. They were often enormous, taking up entire rooms. First-generation computers relied on machine language. They were very expensive to operate and, in addition to using a great deal of electricity, generated a lot of heat, which was often the cause of malfunctions. The UNIVAC and ENIAC computers are examples of first-generation computing devices.

Advantages:
- The only electronic devices of their time
- First devices to hold memory

Disadvantages:
- Too bulky, i.e., large in size
- Vacuum tubes burned out frequently
- Produced a great deal of heat
- Maintenance problems

Second Generation Computers (1956-1963)

Transistors replaced vacuum tubes and ushered in the second generation of computers. Second-generation computers moved from cryptic binary machine language to symbolic, or assembly, languages. High-level programming languages were also being developed at this time, such as early versions of COBOL and FORTRAN. These were also the first computers that stored their instructions in their memory.

Advantages:
- Size reduced considerably
- Very fast
- Much more reliable

Disadvantages:
- Overheated quickly
- Maintenance problems

Third Generation Computers (1964-1971)

The development of the integrated circuit was the hallmark of the third generation of computers. Transistors were miniaturized and placed on silicon chips, called semiconductors. Instead of punched cards and printouts, users interacted with third-generation computers through keyboards and monitors and interfaced with an operating system, which allowed the device to run many different applications at one time.

Advantages:
- ICs are very small in size
- Improved performance
- Cheap to produce

Disadvantages:
- ICs are sophisticated and complex to manufacture

Fourth Generation Computers (1971-present)

The microprocessor brought the fourth generation of computers, as thousands of integrated circuits were built onto a single silicon chip. The Intel 4004 chip, developed in 1971, located all the components of the computer, from the central processing unit and memory to input/output controls, on a single chip. Fourth-generation computers also saw the development of GUIs, the mouse, and handheld devices.

Fifth Generation Computers (present and beyond)

Fifth-generation computing devices, based on artificial intelligence, are still in development, though some applications, such as voice recognition, are already in use. The use of parallel processing and superconductors is helping to make artificial intelligence a reality.
The goal of fifth-generation computing is to develop devices that respond to natural language input and are capable of learning and self-organization.

CPU

The central processing unit (CPU) is the brain of any computer system. It controls the functioning of the other units and processes the data. The CPU is sometimes called the processor or, in the personal computer field, the microprocessor. It is a single integrated circuit that contains all the electronics needed to execute a program. The processor calculates (adds, multiplies, and so on), performs logical operations (compares numbers and makes decisions), and controls the transfer of data among devices. The processor acts as the controller of all actions and services provided by the system.

Processor actions are synchronized to its clock input. A clock signal consists of clock cycles. The time to complete a clock cycle is called the clock period. Normally, we use the clock frequency, which is the inverse of the clock period, to specify the clock. The clock frequency is measured in hertz (Hz), which represents one cycle per second. Usually, we use megahertz (MHz) and gigahertz (GHz), as in a 1.8 GHz Pentium.

The processor can be thought of as executing the following cycle forever:
1. Fetch an instruction from the memory,
2. Decode the instruction (i.e., determine the instruction type),
3. Execute the instruction (i.e., perform the action specified by the instruction).

Execution of an instruction involves fetching any required operands, performing the specified operation, and writing the results back. This process is often referred to as the fetch-execute cycle, or simply the execution cycle. The execution cycle is repeated as long as there are more instructions to execute.

This raises several questions. Who provides the instructions to the processor? Who places these instructions in the main memory? How does the processor know where in memory these instructions are located? When we write programs, whether in a high-level language or in an assembly language, we provide a sequence of instructions to perform a particular task (i.e., solve a problem). A compiler or assembler will eventually translate these instructions into an equivalent sequence of machine language instructions that the processor understands. The operating system, which provides instructions to the processor whenever a user program is not executing, loads the user program into the main memory. The operating system then indicates the location of the user program to the processor and instructs it to execute the program.

The actions of the CPU during an execution cycle are defined by micro-orders issued by the control unit. These micro-orders are individual control signals sent over dedicated control lines. For example, let us assume that we want to execute an instruction that moves the contents of register X to register Y. Let us also assume that both registers are connected to the data bus, D. The control unit will issue a control signal to tell register X to place its contents on the data bus D. After some delay, another control signal will be sent to tell register Y to read from data bus D.
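Since the clock frequency is the inverse of the clock period, a 1.8 GHz processor has a clock period of 1/(1.8 × 10^9) s, or roughly 0.56 nanoseconds. The fetch-decode-execute loop itself can be sketched in a few lines of Python. This is a minimal simulation of an invented three-instruction machine; the opcode names (LOAD, ADD, HALT), the accumulator layout, and the instruction format are assumptions made for illustration, not any real instruction set.

```python
# Minimal sketch of the fetch-decode-execute cycle on a made-up machine.
# Instructions are (opcode, operand) pairs held in "memory".
memory = [
    ("LOAD", 5),    # put the constant 5 into the accumulator
    ("ADD", 3),     # add the constant 3 to the accumulator
    ("HALT", None), # stop execution
]

pc = 0        # program counter: address of the next instruction
acc = 0       # accumulator register
running = True

while running:
    opcode, operand = memory[pc]  # 1. fetch the instruction at PC
    pc += 1                       #    and advance the program counter
    if opcode == "LOAD":          # 2-3. decode, then execute
        acc = operand
    elif opcode == "ADD":
        acc += operand
    elif opcode == "HALT":
        running = False

print(acc)  # prints 8
```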
The components of the CPU. A typical CPU has three major components: (1) the register set, (2) the arithmetic logic unit (ALU), and (3) the control unit (CU). The register set differs from one computer architecture to another. It is usually a combination of general-purpose and special-purpose registers. General-purpose registers can be used for any purpose, hence the name. Special-purpose registers have specific functions within the CPU. For example, the program counter (PC) is a special-purpose register that holds the address of the instruction to be executed next. Another example is the instruction register (IR), which holds the instruction that is currently being executed.

Figure 12: Central processing unit main components and interactions with the memory system and input/output devices.

The ALU provides the circuitry needed to perform the arithmetic, logical, and shift operations demanded by the instruction set. The control unit is the entity responsible for fetching the instruction to be executed from the main memory, decoding it, and then executing it.

The CPU can be divided into a data section and a control section. The data section, which is also called the datapath, contains the registers (known as the register file) and the ALU. The datapath is capable of performing certain operations on data items. The register file can be thought of as a small, fast memory, separate from the system memory, which is used for temporary storage during computation. The control section is basically the control unit, which issues control signals to the datapath. The control unit of a computer is responsible for executing the program instructions stored in the main memory. It can be thought of as a "computer within a computer" in the sense that it makes decisions as to how the rest of the machine behaves.

Like the system memory, each register in the register file is assigned an address in sequence, starting from zero. These register "addresses" are much smaller than main memory addresses: a register file containing 32 registers needs only a 5-bit address, for example. The major difference between the register file and the system memory is that the register file is contained within the CPU and is therefore much faster. An instruction that operates on data from the register file can often run ten times faster than the same instruction operating on data in memory. For this reason, register-intensive programs are faster than the equivalent memory-intensive programs, even if it takes more register operations to do the same tasks that would require fewer operations with the operands located in memory.
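As a quick check of the register-address arithmetic above: the number of address bits needed for a register file is the base-2 logarithm of the register count. A short sketch (the register counts are arbitrary examples):

```python
import math

# n registers need ceil(log2(n)) address bits.
for n in (8, 16, 32, 64):
    print(f"{n} registers -> {math.ceil(math.log2(n))}-bit register address")
# 32 registers -> 5-bit register address, matching the text.
```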
Computer Storage Devices

A storage device enables a computer's user to store and safely access data and applications on a computer. Knowing about these storage devices is necessary, as they form one of the core components of the system.

Types of Computer Storage

Computer storage devices can be classified in various ways, and the computer storage unit is commonly divided into three parts. Given below are details about the three types of computer storage:

1. Primary Storage: This is the direct memory which is accessible to the Central Processing Unit (CPU).
○ Also known as the main memory; it is volatile
○ It is temporary: as soon as the device turns off or is rebooted, the memory is erased
○ It is smaller in size
○ Primary storage comprises only internal memory
○ Examples of primary storage include RAM, cache memory, etc.

2. Secondary Storage: This type of storage does not have direct accessibility to the Central Processing Unit.
○ Input and output channels are used to connect such storage devices to the computer, as they are mainly external
○ It is non-volatile and has a larger storage capacity than primary storage
○ This type of storage is permanent until removed by an external factor
○ It comprises both internal and external memory
○ Examples of secondary storage are USB drives, floppy disks, etc.

3. Tertiary Memory: This type of storage is generally not considered important and is generally not a part of personal computers.
○ It involves the mounting and unmounting of removable mass-storage media
○ This type of storage relies on robotic mechanisms
○ It does not always require human intervention and can function automatically

List of Computer Storage Devices
- Magnetic Storage Devices
- Optical Storage Devices
- Flash Memory Devices
- Online Cloud Storage

Magnetic Storage Devices

The most commonly used storage devices today are magnetic storage devices. These are affordable and easily accessible. A large amount of data can be stored on them through magnetized media. A magnetic field is created when the device is attached to the computer, and with the help of the two magnetic polarities the device is able to read the binary language and store the information. Given below are examples of magnetic storage devices.

Floppy Disk - Also known as a floppy diskette, it is a removable, square-shaped storage device containing magnetic elements. When placed in the disk reader of the computer, it spins and can store information. Floppy disks have since been replaced by CDs, DVDs, and USB drives.

Hard Drive - This primary storage device is directly attached to the motherboard's disk controller. It is an integral storage space, as it is required to install any new program or application on the device. Software programs, images, videos, etc. can all be saved on a hard drive, and hard drives with storage space measured in terabytes are now easily available.

Zip Disk - Introduced by Iomega, this is a removable storage device which was initially released with a storage space of 100 MB, later increased to 250 MB and finally 750 MB.

Magnetic Strip - A magnetic strip is attached to a device and holds digital data. The most familiar example is a debit card, which has a strip on one side that stores digital data.

Optical Storage Devices

Such devices use lasers and light to detect and store data. They are cheaper than USB drives and can store more data. Discussed below are a few commonly used optical storage devices.

CD-ROM - This stands for Compact Disc - Read-Only Memory; it is an external device which can store and read data such as audio or software data.

Blu-Ray Disc - Introduced in 2006, the Blu-ray disc was backed by major IT and computer companies. It can store up to 25 GB of data on a single-layer disc and 50 GB on a dual-layer disc.

DVD - The Digital Versatile Disc is another type of optical storage device. It can be readable, recordable, or rewritable. Recordings can be made on such devices, which can then be attached to the system.

CD-R - A recordable Compact Disc, which uses a photosensitive organic dye to record and store data.
They are a low-cost medium for storing software and applications.

Flash Memory Devices

These storage devices have now largely replaced both magnetic and optical storage devices. They are easy to use, portable, and easily available and accessible. They have become a cheaper and more convenient option for storing data. Discussed below are the major flash memory devices in common use.

USB Drive - Also known as a pen drive, this storage device is small and portable, with storage capacities ranging from 2 GB to 1 TB. It comprises an integrated circuit which allows it to store data and also to replace it.

Memory Card - Usually used with smaller electronic and computerised devices like mobile phones or digital cameras, a memory card can store images, videos, and audio, and is compact and widely compatible.

Memory Stick - Originally launched by Sony, a memory stick can store more data and makes transferring data easy and quick. Later on, various other versions of the memory stick were also released.

SD Card - The Secure Digital card is used in various electronic devices to store data and is available in mini and micro sizes. Generally, computers have a separate slot for an SD card. If they do not, separate USB adapters are available into which these cards can be inserted and then connected to the computer.

Various other flash memory drives are also readily available in the market and are easy to use.

Online Cloud Storage

The term cloud computing describes data centres available to users over the Internet where they can save their databases and files. This data can easily be accessed over the internet anytime and anywhere, and cloud storage has become a common way to store data. Devices from the largest to the smallest can use online cloud storage to save their data files. The option is also available on mobile phones, where backups of files and data are managed in the cloud.

Input and Output Devices

Unit 4: Input and Output Devices

Introduction

Unit 4 presents information on input and output devices. A number of input/output devices are used with many types of microcomputers. Many of these are less complex versions of I/O devices that have been available for larger computer systems. The principal difference is that, because they are intended for use with microcomputers, they are significantly slower and substantially cheaper. A few of these devices are discussed in this unit.

Lesson 1: Input Devices

1.1 Learning Objectives

On completion of this lesson you will be able to:
- understand the functions of input devices
- know different types of input devices.

1.2 Keyboards

The most common of all input devices is the keyboard. Several versions of keyboards are available. The best and most expensive of these is the full-stroke keyboard, which is ideal for word processing and other volume data and program entry activities. This type of keyboard is available with most mainframe computer terminals and the more expensive microcomputer systems.

Some popular microcomputers offer an enhanced keyboard for easy entry of numbers. This is accomplished with a smaller group of keys, known as a numeric keypad, at the right of the keyboard. These keys generally consist of the digits, a decimal point, a negative sign, and an ENTER key. This type of keyboard is ideal for accounting operations, which require a large volume of numbers to be entered.
Keyboards generally utilize integrated circuits to perform essential functions, such as determining the combination of 1s and 0s (the binary code) to send to the CPU for each key depressed, switching between shifted and nonshifted keys, repeating a key code if a key is held down for a prolonged period of time, and temporarily storing or "buffering" input when keys are typed too fast.

The keyboard arrangement provided as standard on most keyboards is the QWERTY arrangement, named for the six letters beginning the row at the top left of the keyboard (Figure 4.1). This arrangement was chosen intentionally to slow expert typists, since those who typed too fast would cause the keys on a mechanical typewriter to jam. Slowing down the typist was accomplished by scattering the most-used keys around the keyboard, making frequently used combinations of letters awkward and slower to type. The QWERTY keyboard arrangement has been used for nearly a century.

The Dvorak Simplified Keyboard (DSK) arrangement, designed in 1932 by August Dvorak, is the result of extensive ergonomic studies. Dvorak noted that with the QWERTY arrangement, typists used the weakest fourth and fifth fingers of their left hand a large proportion of the time. Thus, Dvorak rearranged the keyboard so that the five most frequently used vowels (a, o, e, u, and i) and the five most frequently used consonants (d, h, t, n, and s) were positioned on the home row, where the fingers of the left and right hands rest, respectively (Figure 4.2). As a result, 70 percent of the typing is done on the home row. He then placed the next most frequently used characters in the row above the home row and the least frequently used characters in the row below it. This resulted in a reduction of finger movement of approximately 80 percent and an overall increase in productivity of nearly 40 percent. Expert typists and word processors generally agree that using the Dvorak arrangement increases productivity while simultaneously decreasing fatigue. The world's fastest typing speed, nearly 200 words per minute, was achieved on a Dvorak keyboard. Despite these improvements, the QWERTY arrangement is still the most common because of the difficulty of overcoming inertia and retraining. Meanwhile, microcomputer manufacturers and software vendors are producing software that will convert a keyboard from QWERTY to Dvorak, and back again, at will. To date, larger computer systems employ the traditional QWERTY arrangement only.

Figure 4.1: QWERTY keyboard.
Figure 4.2: Dvorak keyboard.

1.3 Other Input Devices

Punched Card

The punched card has long served as an input medium to automated computational devices. It has undergone little or no change since its introduction, and most companies have phased it out in favour of more efficient data entry media. Among the punched card devices still in use is the punched card reader. The reading of punched cards takes place at speeds ranging from 150 to more than 2,500 cards per minute.

Key-to-Tape and Key-to-Disk Systems

In a key-to-tape system, data entered at a keyboard are recorded directly on magnetic tape. The magnetic tape used is similar to the tape cartridge or cassette used with home recorders. Accuracy is verified by placing the recording tape into a magnetic tape verifier and having the original data retyped.
Magnetic tape encoders and verifiers are generally housed in the same physical unit. Errors detected are corrected simply by erasing the mistakes and substituting the correct character(s).

Character Readers

A character reader is capable of accepting printed or typed characters from source documents and converting these data into a computer-acceptable code. Currently available high-speed character readers are capable of reading source documents at rates of up to several thousand documents per minute, and they are costly. The three basic types of character readers are magnetic-ink character readers, optical mark readers, and optical character readers.

Magnetic-Ink Character Readers

Magnetic-Ink Character Recognition (MICR) was developed by the Stanford Research Institute for use by the world's largest bank, the Bank of America. This system can read data prerecorded on checks and deposit slips with a special ferrite-impregnated ink. The magnetized characters can be read and interpreted by MICR equipment.

Figure 4.3: Portion of a special-purpose optical mark form.

Optical Mark Readers

Optical mark readers (OMR) optically read marks on carefully printed forms. Optical mark forms are relatively expensive, as they must be printed with exact tolerances so that the marks will line up under the optical sensing devices when read (Figure 4.3). The most popular use of such devices is for scoring examinations in educational institutions.

Optical Character Readers (OCR)

Optical character recognition (OCR) devices can convert data from source documents to a machine-recognizable form. Current applications of optical scanning include billing, insurance premium notices, and charge sales invoices. At present, no OCR device can reliably read and interpret script or handwriting. However, some can read handwriting provided that certain general guidelines are observed when the data are written. Generally, optical character readers are limited with respect to handwritten characters and can only read handwritten digits and some symbols. Many OCR devices are available for reading typed characters, including digits, letters, and some special characters. Not all printed characters can be read reliably by OCR readers; generally, each reader is capable of reading only selected character styles. Even if the character style and spacing are acceptable, errors can result from reading a character that is not written perfectly. To reduce such errors, OCR devices generally compare the pattern read with the patterns of all acceptable characters. The read character is assumed to be the character whose stored pattern most closely matches the read pattern. This process is shown in Figure 4.4.

Figure 4.4: Character readers compare the digitized matrix of an unknown character against a stored set of templates, counting the discrepancies against each template; the template with the fewest discrepancies is taken as the match.

Because of the high cost of OCR devices, they are uneconomical unless a substantial number of documents are to be processed each day. CDs, web cameras, disk drives, ATMs, scanners, and bar code scanners can all be used as input devices.
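The template-matching idea behind Figure 4.4 is simple enough to sketch in a few lines of Python. The tiny 5x5 bitmaps below are invented stand-ins for the digitized character matrices; a real reader would use much finer grids and many more templates.

```python
# Sketch of OCR template matching: count cell-by-cell discrepancies
# between an unknown character bitmap and each stored template, then
# pick the template with the fewest mismatches.
# The 5x5 bitmaps are invented for illustration.

TEMPLATES = {
    "I": ["..#..",
          "..#..",
          "..#..",
          "..#..",
          "..#.."],
    "L": ["#....",
          "#....",
          "#....",
          "#....",
          "#####"],
}

def discrepancies(a, b):
    """Number of grid cells where the two bitmaps differ."""
    return sum(ca != cb for ra, rb in zip(a, b) for ca, cb in zip(ra, rb))

def recognize(unknown):
    # Score the unknown pattern against every stored template.
    scores = {ch: discrepancies(unknown, tpl) for ch, tpl in TEMPLATES.items()}
    return min(scores, key=scores.get), scores

# A slightly noisy "L" (one extra dot in the top row):
noisy_l = ["##...",
           "#....",
           "#....",
           "#....",
           "#####"]
print(recognize(noisy_l))  # ('L', {'I': 13, 'L': 1})
```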
Pointing Systems

Computer users frequently find it easier to point at something on a screen, or at an item of text or graphical material they are entering into the computer. A number of devices are available to fulfil this need (Figure 4.5).

Figure 4.5: Various pointing input devices.

Light Pen

The earliest pointing device is the light pen. This device is placed close to a screen or monitor and turned on. A photosensor inside the light pen detects the scanning beam sweeping back and forth across the screen, and accompanying circuitry converts the pen's reading into the position of the pen on the screen. Light pens are used to select items from a list or menu displayed on the screen and to draw graphic displays on the video screen.

Digitizer Pad

A digitizer pad looks like a graph pad with a pointer. It functions like a light pen on a display screen, except that the pad is mounted horizontally. As the pointer is moved on the pad, the corresponding point on the screen is illuminated. The digitizer pad is useful for converting graphic input, such as charts, graphs, and blueprints, into patterns that can be manipulated and stored by the computer.

Mouse

A mouse is a hand-movable device that controls the position of the cursor on a screen. It has a box with buttons on the top and a ball on the bottom. The box is placed on a flat surface, with the user's hand over it. The ball's movement on the surface causes the cursor to move.

Joystick and Trackball

Joysticks are used with video games for user input. These devices may also be used to move the cursor around a screen to facilitate input to a graphical display. A trackball is similar in operation to the joystick. It uses a billiard-sized ball to position the cursor. Several keyboard manufacturers have integrated trackballs directly into their keyboards.

Touchscreen

A touchscreen detects the touch of a human finger. One popular technique used to detect the touch of a finger utilizes infrared light beams. In this technique, infrared light beams shine horizontally and vertically across the face of the screen. A pointing finger interrupts both a horizontal and a vertical beam, pinpointing its exact location.

Pen Drive

A pen drive is another name for a USB flash drive; other names are flash drive, thumb drive, etc. These devices allow storage of computer files that you can remove and take from computer to computer. The price of a drive is determined by the size of its memory, measured in megabytes or gigabytes. While 128-megabyte drives used to be considered large, current pen drive capacities reach 1, 2, 4, or more gigabytes. The drive is inserted into the computer's USB port and is automatically recognized by PC operating systems beyond Windows 98 (which needs a separate installation of drivers). Pen drives can also carry full-blown applications written in what is called U3-compatible software.

Figure 4.6: A pen drive.

Scanner

In computing, an image scanner (often abbreviated to just scanner) is a device that optically scans images, printed text, handwriting, or an object, and converts it to a digital image. Common examples found in offices are variations of the desktop (or flatbed) scanner, where the document is placed on a glass window for scanning.
Hand-held scanners, where the device is moved by hand, have evolved from text-scanning "wands" to 3D scanners used for industrial design, reverse engineering, test and measurement, orthotics, gaming, and other applications. Mechanically driven scanners that move the document are typically used for large-format documents, where a flatbed design would be impractical.

Figure 4.7: Scanner.

CD-ROM

Pronounced see-dee-rom, and short for Compact Disc Read-Only Memory, a CD-ROM is a type of optical disc capable of storing large amounts of data, up to 1 GB, although the most common size is 650 MB (megabytes). A single CD-ROM has the storage capacity of 700 floppy disks, enough memory to store about 300,000 text pages. CD-ROMs are stamped by the vendor, and once stamped, they cannot be erased and filled with new data. To read a CD, you need a CD-ROM player. All CD-ROMs conform to a standard size and format, so you can load any type of CD-ROM into any CD-ROM player. In addition, CD-ROM players are capable of playing audio CDs, which share the same technology. CD-ROMs are particularly well suited to information that requires large storage capacity. This includes large software applications that support colour, graphics, sound, and especially video, and they are well suited for tutoring.

Figure 4.8: A CD.
Figure 4.9: Composition of a CD.

1.4 Exercise

1. Multiple choice questions
a. The increase in productivity using the Dvorak simplified keyboard is nearly
(i) 60 percent (ii) 30 percent (iii) 40 percent (iv) 50 percent.
b. Which one is used for scoring examinations?
(i) MICR (ii) OMR (iii) OCR (iv) none of them.
c. Which one is used with video games for user input?
(i) Touchscreen (ii) Mouse (iii) Digitizer pad (iv) Joystick.
d. A touchscreen is usually used to detect the touch of a
(i) Human finger (ii) Pen (iii) Wooden stick (iv) none of them.

2. Questions for short answers
a. Briefly describe the advantages of the Dvorak simplified keyboard.
b. What is the basic difference between OMR and OCR?
c. What is a mouse in a computer system?
d. Write down the applications of the digitizer pad and the touchscreen.

3. Analytical questions
a. Describe the keyboard as an input device.
b. Describe the basic types of character readers.
c. Describe different pointing systems.

Lesson 2: Output Devices

2.1 Learning Objectives

On completion of this lesson you will be able to:
- understand the functions and characteristics of output devices
- know different types of output devices.

2.2 Monitors

The monitor is the most commonly used display device. The monitor utilizes a cathode ray tube (CRT). CRT monitors generally produce images by the raster-scan method. In this method, an electron beam, varying in intensity, is moved back and forth horizontally across the face of the monitor. As the beam is directed to each spot on the phosphor-coated screen, it illuminates the spot in proportion to the voltage applied to the beam. Each spot represents a picture element, or pixel. When the electron beam has scanned the entire screen and illuminated each pixel, one can see a complete image. The image that can be seen is the one traced on the retinas of the eyes by the light beam. However, this image will fade unless it is refreshed. Thus, the electron beam must scan the screen very rapidly (a minimum of 60 times per second), so that the intensity of the image remains approximately the same and the screen does not appear to flicker.
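A quick arithmetic check of that refresh requirement: at a minimum of 60 full-screen scans per second, the beam must visit every pixel 60 times a second. The sketch below uses an assumed 640 × 480 resolution for illustration; the figure is not one given in the text.

```python
# How fast must the beam address pixels to refresh the screen at 60 Hz?
width, height = 640, 480   # assumed screen resolution for illustration
refresh_hz = 60            # minimum scans per second, per the text

pixels_per_second = width * height * refresh_hz
print(f"{pixels_per_second:,} pixel visits per second")  # 18,432,000
# i.e. roughly 18 million pixels per second, about 54 ns per pixel.
```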
The screen resolution of a particular monitor is determined by the number of pixels that make up the screen. Monitors are currently available with 64,000 to more than 2 million pixels per screen. The greater the resolution of a monitor, the greater the storage demand on the computer, because the image must be stored in memory before it can be displayed. Two techniques used to store computer images are bit-mapped and character-addressable (a rough worked estimate of their storage demands appears at the end of this subsection). In a bit-mapped display, each pixel is uniquely addressable, and information must be stored for each pixel on the screen. This technique needs quite a large computer memory and provides the most detailed display. For graphical applications, such as CAD/CAM, this detail is essential. However, for applications such as word processing, a character-addressable display is appropriate. In a character-addressable display, the screen is divided into character positions, and only the characters to be displayed are stored in memory. As each character is retrieved from memory, it is converted into a pattern of dots, or pixels, by a special character generator module.

Monochrome or colour: Some monitors display images in only one colour, while others are capable of producing images in colours. Monochrome monitors use a single electron beam and display one colour, generally green, amber, or white, on a black background; the phosphor composition of the screen determines the colour. Colour monitors produce multi-colour images by combining the red, blue, and green colours in varying intensities. Each pixel is made up of three colour dots: red, blue, and green. It will appear to glow in different colours depending on the intensity of each individual dot in the pixel. Colour monitors are commonly referred to as RGB monitors, since they employ three electron beams, one for each colour. Colour monitors are categorized as CGA, EGA, VGA, and SVGA depending on the resolution. CGA monitors provide the least resolution (approximately 300 × 200 pixels) and SVGA monitors provide the greatest resolution (1000 × 800 pixels and greater).

Monitor interface: A monitor requires an appropriate interface to communicate with a computer. For example, a colour graphics interface board is needed for a colour monitor. This interface will generally not work with a monochrome monitor and might even damage it. Dozens of monitor interface boards are available for use with microcomputers, so care must be taken to match the interface to both the monitor and the computer.

Using a television: Some smaller microcomputer systems can be used with a standard television. The basic difference between a monitor and a television set is that the resolution of a television is substantially less than that of a monitor. Also, the television requires the use of a modulator to interface the computer output with the television. The modulator combines the separate audio and visual signals sent by the microcomputer into a single modulated signal, as required by a television. Most inexpensive computer systems designed for use with a television set generally have a built-in modulator.
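Returning to the storage demands mentioned above, here is a rough back-of-the-envelope comparison of bit-mapped and character-addressable storage. The figures (an SVGA-class 1000 × 800 screen, 8 bits per pixel, an 80 × 25 character grid with one attribute byte per character) are illustrative assumptions, not specifications from the text.

```python
# Rough estimate of display memory: bit-mapped vs character-addressable.
width, height = 1000, 800      # SVGA-class resolution from the text
bits_per_pixel = 8             # assume 256 colours

bitmapped_bytes = width * height * bits_per_pixel // 8
print(f"Bit-mapped: {bitmapped_bytes / 1024:.0f} KiB")        # ~781 KiB

cols, rows = 80, 25            # classic text-mode character grid
bytes_per_char = 2             # character code + attribute byte
char_bytes = cols * rows * bytes_per_char
print(f"Character-addressable: {char_bytes / 1024:.1f} KiB")  # ~3.9 KiB
```

The two-orders-of-magnitude gap is why character-addressable displays sufficed for word processing on memory-constrained machines.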
Flat-Panel Displays

For laptop computers, more compact, low-power, durable monitors are used. A number of flat-panel display technologies are available for this purpose. The most common are plasma and liquid crystal displays.

Plasma displays: A plasma display consists of an ionized neon or argon gas (plasma) sealed between two glass plates. One plate encases a set of fine horizontal wires and the other a set of vertical wires. Pixels are formed by the intersections of the horizontal and vertical wires. A single pixel can be turned on by sending a current through its horizontal and vertical wires, which causes the gas between the wires to produce an amber glow. The images produced by plasma displays are generally very clear and not subject to flicker. Plasma displays are generally more expensive than CRT displays.

Liquid crystal displays: Liquid crystal displays (LCDs) have been used for several years in calculators and digital watches. A thin layer of a liquid crystal substance is suspended between two thin sheets of polarized glass and separated by a wire grid into tiny squares. As current is applied to the wires, the liquid crystal substance within a square changes from clear to opaque or black. The thousands of clear and black squares produce patterns of characters. The disadvantage of LCD displays is a lack of brightness and resolution compared to CRT and plasma displays. The quality of an LCD display depends on the surrounding light and the viewing angle; it is sharpest and clearest when viewed in bright light from the front.

2.3 Printers

The printer is the most common output device. It produces a permanent visual record of the data output from a computer and is capable of producing the business reports and documents currently required. Printers are capable of printing from 150 to over 20,000 lines per minute, with each line having up to 150 characters; thus, a maximum printing speed of approximately 50,000 characters per second is possible. Printers print on plain paper or on specially prepared single- or multiple-copy forms, such as invoices, stationery, labels, checks, bills, and other special-purpose forms used in business and industry. They can print both text and graphics, in black and white or in colour. Printers can be subdivided into two broad categories, impact and non-impact, of which impact printers are the most common.

2.4 Impact Printers

In impact printers, printing occurs as a result of a hammer striking a character form, and the character form in turn striking an inked ribbon, causing the ribbon to press an image of the character onto the paper. Character printers print one character at a time, at speeds of about 10 to 500 characters per second. The fastest of these printers is the wire or dot-matrix printer. It prints characters made up of a pattern of dots formed by the ends of small wires. Figure 4.10 shows the letter "A" as printed with different densities. By extending certain wires beyond the others, a dot pattern can be created that gives the appearance of numbers, letters, or special characters.

Figure 4.10: Dot-matrix printers form characters with an array of dots. The letter A is shown printed by (a) a 9-pin printer, (b) a 24-pin printer, and (c) a 9-pin letter-quality dot-matrix printer capable of overlapped dot printing.

These extended wires are pressed against an inked ribbon to print the characters on the paper.
Some slower and less expensive matrix printers print a character as a series of columns, each one dot wide. These can also print special character shapes for use with graphics. For typewriter-quality output, a daisy-wheel print element is used: a metal print element similar in appearance to the arrangement of petals on a daisy flower. This element is rotated until the correct character is in position and then pressed against an inked ribbon. The process is repeated for each character to be printed on a line. Typical speeds for such printers range from 25 to 100 characters per second. Impact character printers are the common output devices used with personal and small business microcomputer systems; they are significantly cheaper than line printers.

Impact line printers, capable of printing a whole line at a time, employ print wheels or a moving chain or drum. The print-wheel printer consists of print wheels, each containing a full complement of digits and alphabetic characters in addition to a set of special characters. For printing, all print wheels are positioned to represent the data to be printed on one line; they then impact simultaneously, at a speed of about 150 lines per minute. Chain and drum printers are also commonly used: as the print chain or drum revolves, each character is printed as it comes into position. Up to 150 characters per line can be printed, at speeds of up to 2,500 lines per minute. Impact line printers are used almost exclusively to support larger computer systems.

2.5 Nonimpact Printers

Nonimpact line printers, using laser, xerographic, electrostatic, or ink jet methods, are the fastest printers. Before the development of the ink jet and laser printers, nonimpact printers were not heavily used, for several reasons:
- Special and more expensive paper was required.
- Printed output was not as sharp or as clear as with impact printers.
- Only a single-part form could be printed at a time.
- Output could not be easily or satisfactorily copied on office copiers.

Electrostatic and xerographic printers place a pattern of the desired character on sensitized paper by means of an electric current or a beam of light. The paper then passes through a powdery black substance called toner, which contains dry ink particles. The ink particles are attracted to the exposed paper, and the character becomes visible. These printers can print at speeds of from 3,500 to 20,000 lines per minute.

The laser printer forms characters by projecting a laser beam in a dot-matrix pattern onto a drum surface. Toner is then attracted to the area exposed by the laser and transferred to the paper. The paper is then passed over a heating element, which melts the toner to form a permanent character.

Many types of ink jet printers are available. The simplest of these contains a series of ink jet nozzles in the form of a matrix. Vibrating crystals force ink droplets, roughly the diameter of a human hair, from selected nozzles to form an image in the same manner as an image is formed by a matrix printer. Different coloured inks may be used and combined to form additional colours. Several hundred nozzles are employed in the more sophisticated ink jet printers to direct a continuous stream of droplets across the page to form an image. These charged ink droplets travel at speeds of up to 40 miles per hour as they move between a set of plates that deflect the droplets.
Droplets not needed are electrostatically attracted away from the paper for reuse. A stream of more than 100,000 droplets can form approximately 200 characters per second.

2.6 Plotters

An inexpensive portable plotter is capable of generating multicolor plots from data stored on magnetic tape or disk. Plotters with multicolor capabilities generally use a writing mechanism containing several pens, each capable of producing a different color. Some devices for automated drafting are equipped with plotting surfaces larger than 10 square feet and cost as much as a minicomputer system. Whether an application is a general one (such as designing, mapping, or plotting schematics) or more specialized (such as three-dimensional data presentation, structural analysis, contouring, or business charts), there are plotters to do the trick.

2.7 Microfilm Devices

Computer output microfilm (COM) devices convert computer output to a human-readable form stored on rolls of microfilm, or as microfilm frames stored on cards called microfiche. At speeds of 10,000 to over 30,000 lines per minute, COM is one of the fastest computer output techniques, more than ten times faster than the fastest impact printer. A single roll of microfilm can store approximately 2,000 frames and costs less than half as much as printing the same amount of data on paper. Because of the high cost of COM equipment, it is generally practical only for larger businesses or industries generating several thousand documents per day. COM devices are commonly used in libraries, mail-order concerns, defense installations, government agencies, and similar large operations.

2.8 Exercise

1. Multiple choice questions
a. Multi-colour images are produced by combining
(i) Red, green and blue (ii) Yellow, red and blue (iii) Black, blue and white (iv) none of the above combinations.
b. A dot matrix printer is
(i) an impact printer (ii) a non-impact printer (iii) a laser printer (iv) none of these categories.
c. In general, which one of the following is the best quality printer?
(i) Dot matrix (ii) Ink-jet (iii) Desk-jet (iv) Laser.

2. Questions for short answers
a. What is a pixel of a monitor?
b. What do the following acronyms stand for: COM and LCD?
c. Describe flat-panel displays.
d. What is the difference between ink-jet and laser printers?
e. What is the difference between a printer and a plotter?

3. Analytical questions
a. Describe the monitor display of computer systems.
b. Describe details of different impact and non-impact printers.
c. Draw the diagram of an ink-jet printer process and explain it briefly.

Lesson 3: Other Peripheral Devices

3.1 Learning Objectives

On completion of this lesson you will be able to:
- know some special peripheral devices
- understand the characteristics and mechanisms of such devices.

3.2 Terminals

The terminal is a popular input/output device. Terminals are used for two-way communications with the CPU or with other terminals a few feet or thousands of miles away. With the aid of a terminal, a user can access computers around the world. Terminals, also called workstations, allow users to interact with a computer. A keyboard is used to enter data, and output is received on a cathode ray tube (CRT) display screen, or monitor.
Because data must be keyed into these devices one character at a time, the possibility of error is high and the data transmission rate very low, limiting the use of these terminals to small-volume input and inquiries.

Terminal Functions

Some of the functions that can be performed using terminals are the following:

Message switching: The communication of information from one terminal to one or more remote terminals.

Data collection: Data are input at one or more terminals and recorded on a secondary storage medium for subsequent processing. This eliminates the need to record the information on a source document and then key the information from the source document into the computer.

Inquiry or transaction processing: Data stored in central data files can be accessed from remote terminals for updating or to determine answers to inquiries about the information stored in these files. The system employed by most airlines to maintain and update flight information is an example of such a function.

Remote job processing: Programs can be input from remote terminals directly to the CPU for processing. After execution, the results can be transmitted back to the terminal, or to other terminals, for output.

Graphic display and design: Data can be displayed in graphic form, and can also be manipulated and modified. Interactive graphic displays, from simple home video games displayed on a television set to sophisticated computerized systems, provide complex designs and three-dimensional displays in either black and white or colour.

Terminals are available with features to suit the multitude of applications to which they are applied. In general, the three broad types of terminals are point-of-sale, interactive remote, and intelligent terminals.

3.3 Speech Recognition and Voice Response Devices

Speech recognition devices were introduced in the early 1970s. Typically, these systems contain a database of stored voice patterns, held in a recognition unit or in secondary storage. A microphone, attached to the keyboard or recognition unit, records the spoken word patterns. A built-in microprocessor then compares, word by word, these patterns with the stored patterns and transmits the results of the comparisons to a computer for processing. A sentence must be spoken as a series of disjoint words, and numbers must be spoken as a series of digits, not as a single number. Speech recognition devices are generally used in situations where access to a switch or control is not possible or where a user's hands are otherwise occupied.

Because voice patterns vary greatly from person to person, most speech recognition systems are speaker-dependent and must be fine-tuned to each operator. This is generally accomplished by having the operator speak each of the words or digits to be stored in the recognition unit's dictionary several times. An average of the spoken voice patterns is taken and stored as the standard, or mask, against which future voice input will be compared (a small sketch of this train-and-match scheme appears at the end of this section). Speaker-independent systems are less common and have a very restricted vocabulary, generally the ten digits and a "yes" or "no" response. Despite their restricted vocabulary, speaker-independent systems are widely usable, since they do not have to be fine-tuned but can be understood by anyone. Clearly, speaker-independent systems are more desirable than speaker-dependent systems,
but their great expense, large database requirements and the limitations of current technology have made their development tiresomely slow. Speech recognition devices are currently employed in the preparation of numeric control tapes and in airline baggage sorting. Manufactures are beginning to offer very sophisticated speech recognition devices for the 91 Computer Basics more popular microcomputers. For example, more than a dozen such devices are available for the IBM microcomputers alone. Voice response devices are commonplace in today's automated world. Warning sounds like "Warning! Warning! Your oil pressure is low" are being "spoken" by the voice response device in cars. The audio response is generally composed from a prerecorded vocabulary maintained in an external disk file. As an inquiry is received by the device it is sent to the computer for decoding. The computer then decodes and evaluates the inquiry and, from the prerecorded vocabulary on disk, constructs an appropriate digitally coded voice message, which is sent back to the audio response unit. The audio response unit then converts this message to a vocal reply, which is "spoken" to the inquirer. Such systems are not limited to one language. Vortrax, for example, manufactures an audio response unit that is capable of speaking in English French, German and Spanish. Computer generated voice output devices cannot reproduce the subtle shading of intonation commonly used in everyday speech. Their main advantage lies in the fact that they can be understood more than 99 percent of the time and that people respond more quickly to the spoken word than to the written word. Areas of application are generally characterized by situations that require responses to inquiries or Computer generated voice. verification of data entered directly into a computer system. Audio- response devices are used in banks for reporting bank account balance information, in large businesses for credit checking and inventory status reporting. and in experimental research to alert a worker who might otherwise be distracted or involved. One of the strongest impacts made on the use of voice response has come from the manufacturers of microcomputers. The pricing and availability of voice response units are economically feasible for even the smallest concern. Voice response is no longer an isolated, esoteric discipline but another among the multitude of computer output techniques. 3.4 Vision Systems A vision system utilizes a camera, digitizer, computer, and a technique known as image processing. Image processing is concerned with A vision system utilizes a digitizing and storing of computer-processed images and with pattern camera, digitizer, computer, recognition. and a technique known as image processing. Familiar examples of computer-processed images are: computer generated digitized portraits for a few dollars at most amusement parks, computer-produced special effects in movies such as Star Wars, 92 Input and Output Devices digitized images of Jupiter and Saturn beamed from image processors onboard spacecraft to earth etc. All of these examples have one thing in common that is to digitize an image. In a visual system, all images that must be recognized or interpreted must first be digitized and stored in a database. Only after the database has been established the visual system can be applied to pattern recognition. Pattern recognition, the process of interpreting images, begins when the system digitizes the image of the object to be interpreted. 
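The speaker-dependent train-by-averaging scheme described above can be sketched in a few lines. Here each "voice pattern" is reduced to a short feature vector of numbers; the vectors, their length, and the squared-distance measure are invented stand-ins for real acoustic features, chosen only to illustrate the averaging-and-matching idea.

```python
# Sketch of speaker-dependent training: average several spoken samples
# of each word into a stored "mask", then match new input to the
# nearest mask. Feature vectors are made-up stand-ins for real data.

def average(samples):
    """Element-wise mean of several feature vectors."""
    return [sum(vals) / len(vals) for vals in zip(*samples)]

def distance(a, b):
    """Sum of squared differences between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

# The operator says each word several times during training:
training = {
    "yes": [[0.9, 0.1, 0.4], [1.0, 0.2, 0.5], [0.8, 0.1, 0.3]],
    "no":  [[0.2, 0.9, 0.7], [0.1, 1.0, 0.6], [0.3, 0.8, 0.8]],
}
masks = {word: average(samples) for word, samples in training.items()}

# Later, an unknown utterance is matched against the stored masks:
utterance = [0.85, 0.15, 0.45]
best = min(masks, key=lambda w: distance(utterance, masks[w]))
print(best)  # "yes"
```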
3.4 Vision Systems

A vision system utilizes a camera, a digitizer, a computer, and a technique known as image processing. Image processing is concerned with the digitizing and storing of computer-processed images and with pattern recognition. Familiar examples of computer-processed images are the computer-generated digitized portraits sold for a few dollars at most amusement parks, the computer-produced special effects in movies such as Star Wars, and the digitized images of Jupiter and Saturn beamed to earth from image processors onboard spacecraft. All of these examples have one thing in common: digitizing an image. In a vision system, all images that must be recognized or interpreted must first be digitized and stored in a database. Only after the database has been established can the vision system be applied to pattern recognition.

Pattern recognition, the process of interpreting images, begins when the system digitizes the image of the object to be interpreted. This digitized image is then compared to those in the database to determine a probable match. As it is unlikely that a perfect match will be achieved, there is always a small possibility of error.

3.5 Exercise

1. Multiple choice questions
(a) The terminal is an
(i) input device (ii) output device (iii) input/output device (iv) none of the above.
(b) Which one is a function of a terminal?
(i) vision system (ii) message switching (iii) CRT (iv) CPU.

2. Questions for short answers
(a) What is a terminal?
(b) Briefly describe the functions of a terminal.
(c) What is the purpose of a vision system?
(d) What do you understand by speech recognition?

3. Analytical question
(a) Explain in detail the I/O devices that can be used as both input and output devices.

Introduction to Basic Gates and Functions

Logic Gates

A logic gate is an elementary building block of a digital circuit. Most logic gates have two inputs and one output. At any given moment, every terminal is in one of the two binary conditions, low (0) or high (1), represented by different voltage levels. Digital systems are said to be constructed using logic gates. These gates are the AND, OR, NOT, NAND, NOR, EX-OR, and EX-NOR gates. The basic operations are described below with the aid of truth tables, which show the function of a logic gate by listing, for each combination of input values, the resulting output.

AND gate
The AND gate is an electronic circuit that gives a high output (1) only if all its inputs are high. A dot (.) is used to show the AND operation, i.e., A.B. Bear in mind that this dot is sometimes omitted, i.e., AB.

OR gate
The OR gate is an electronic circuit that gives a high output (1) if one or more of its inputs are high. A plus (+) is used to show the OR operation.

NOT gate
The NOT gate is an electronic circuit that produces an inverted version of the input at its output. It is also known as an inverter. If the input variable is A, the inverted output is known as NOT A. This is also shown as A', or A with a bar over the top. A NAND gate can be configured to produce a NOT gate; the same can be done with NOR gates.

NAND gate
This is a NOT-AND gate, which is equal to an AND gate followed by a NOT gate. The output of a NAND gate is high if any of the inputs are low. The symbol is an AND gate with a small circle on the output; the small circle represents inversion.

NOR gate
This is a NOT-OR gate, which is equal to an OR gate followed by a NOT gate. The output of a NOR gate is low if any of the inputs are high. The symbol is an OR gate with a small circle on the output; the small circle represents inversion.

EX-OR gate
The Exclusive-OR gate is a circuit which will give a high output if either, but not both, of its two inputs is high. An encircled plus sign (⊕) is used to show the EX-OR operation.

EX-NOR gate
The Exclusive-NOR gate gives a high output when its two inputs are equal; it is the inverse of the EX-OR gate.
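The gate functions above are easy to verify programmatically. The following sketch prints the two-input truth table for each gate using Python's bitwise operators on 0/1 values; it is a plain illustration, not tied to any particular hardware.

```python
# Print two-input truth tables for the basic logic gates.
gates = {
    "AND":    lambda a, b: a & b,
    "OR":     lambda a, b: a | b,
    "NAND":   lambda a, b: 1 - (a & b),
    "NOR":    lambda a, b: 1 - (a | b),
    "EX-OR":  lambda a, b: a ^ b,
    "EX-NOR": lambda a, b: 1 - (a ^ b),
}

print("A B | " + " ".join(f"{name:6}" for name in gates))
for a in (0, 1):
    for b in (0, 1):
        outputs = " ".join(f"{fn(a, b):<6}" for fn in gates.values())
        print(f"{a} {b} | {outputs}")

# NOT has a single input:
for a in (0, 1):
    print(f"NOT {a} = {1 - a}")
```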
Information Technology For Managers

UNIT-1: Computer Software

Topics covered in this unit:
- Application and System Software
- Assemblers
- Compilers and Interpreters
- Process of Software Development
- Data Analysis using Spreadsheets

Definition of Software

Software is a set of instructions, data, or programs used to operate computers and execute specific tasks. Software is a set of computer programs and associated documentation and data. The entire set of applications, protocols, and processes involved with a computer system's operation is called software.

History of Software

The term software was not used until the late 1950s. During this time, although different types of programming software were being created, they were typically not commercially available. Consequently, users, mostly scientists and large enterprises, often had to write their own software.

The following is a brief timeline of the history of software:

June 21, 1948. Tom Kilburn, a computer scientist, writes the world's first piece of software for the Manchester Baby computer at the University of Manchester in England.

Early 1950s. General Motors creates the first OS, for the IBM 701 Electronic Data Processing Machine. It is called the General Motors Operating System, or GM OS.

1958. Statistician John Tukey coins the word software in an article about computer programming.

Late 1960s. Floppy disks are introduced and are used in the 1980s and 1990s to distribute software.

Early software was written for specific computers and sold with the hardware it ran on. In the 1980s, software began to be sold on floppy disks, and later on CDs and DVDs. Today, most software is purchased and directly downloaded over the internet. Software can be found on vendor websites or application service provider websites.

Types of Software

The two main categories of software are:

1) Application Software
a) Desktop Applications
b) Web Applications

2) System Software
a) Operating system
b) Device driver
c) Firmware
d) Translator
e) Utility

Definitions

Application software: a computer software package that performs a specific function for a user.

System software: a program designed to coordinate the activities and functions of the hardware and the various programs throughout the computer system.

How it works: the user interacts with application software, application software runs on top of system software, and system software controls the hardware (User, then Application Software, then System Software, then Hardware).

Types of Application Software

A) Desktop Applications: These applications are installed on a user's computer and use the computer's memory to carry out tasks. They take up space on the computer's hard drive and do not need an internet connection to work. However, desktop applications must adhere to the requirements of the hardware devices they run on.

B) Web Applications: These only require internet access to work; they do not rely on the local hardware and system software to run. Consequently, users can launch web applications from any device that has a web browser.

Types of System Software

The five types of system software are all designed to control and coordinate the procedures and functions of computer hardware. They enable functional interaction between hardware, software, and the user. System software carries out middleman tasks to ensure communication between other software and hardware, allowing harmonious coexistence with the user.

System software can be categorized as follows:

Operating system: Harnesses communication between hardware, system programs, and other applications.

Device driver: Enables device communication with the OS and other programs.
1. Operating System (OS)
The operating system is the system software kernel that sits between the computer hardware and the end user. It is installed first on a computer to allow devices and applications to be identified and therefore functional. System software is the first layer of software to be loaded into memory every time a computer is powered up.

Suppose a user wants to write and print a report on an attached printer. A word processing application is required to accomplish this task. Data input is done using a keyboard or other input devices and then displayed on the monitor. The prepared data is then sent to the printer. In order for the word processor, keyboard, and printer to accomplish this task, they must work with the OS, which controls input and output functions, memory management, and printer spooling.

Today, the user interacts with the operating system through the graphical user interface (GUI) on a monitor or touchscreen. The desktop in modern OSs is a graphical workspace which contains menus, icons, and apps that are manipulated by the user through a mouse-driven cursor or the touch of a finger. The disk operating system (DOS) was a popular interface used in the 1980s.

Examples of Operating Systems
Popular OSs for computers are:
Windows 10
Mac OS X
Ubuntu

2. Device Drivers
Driver software is a type of system software which brings computer devices and peripherals to life. Drivers make it possible for all connected components and external add-ons to perform their intended tasks as directed by the OS. Without drivers, the OS could not assign any duties to the devices.

Examples of devices which require drivers:
Mouse
Keyboard
Sound card
Display card
Network card
Printer

Usually, the operating system ships with drivers for most devices already on the market. By default, input devices such as the mouse and keyboard have their drivers installed and may never require third-party installations.

3. Firmware
Firmware is operational software embedded within a flash, ROM, or EPROM memory chip so that the OS can identify it. It directly manages and controls all activities of a single piece of hardware.

Traditionally, firmware meant fixed software, as denoted by the word firm. It was installed on non-volatile chips and could be upgraded only by swapping them with new, preprogrammed chips. This was done to differentiate it from high-level software, which could be updated without having to swap components. Today, firmware is stored in flash chips, which can be upgraded without swapping semiconductor chips.

BIOS and UEFI
The most important firmware in computers today is installed by the manufacturer on the motherboard and can be accessed through the old BIOS (Basic Input/Output System) or the newer UEFI (Unified Extensible Firmware Interface) platforms.

4. Programming Language Translators
These are intermediate programs relied on by software programmers to translate high-level language source code into machine language code. The former is the collection of programming languages that are easy for humans to comprehend and code in (e.g., Java, C++, Python, PHP, BASIC). The latter is complex code understood only by the processor. Popular kinds of translators are compilers, assemblers, and interpreters. They are usually designed by computer manufacturers.
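To see such a translation happen, note that CPython (the standard Python implementation) first compiles source code into an intermediate bytecode, and the standard library's dis module can display the result. A minimal sketch:

    import dis

    def add(a, b):
        # One high-level statement...
        return a + b

    # ...is translated into several low-level bytecode instructions.
    dis.dis(add)

Running this prints instructions such as LOAD_FAST (the exact opcode names vary between Python versions), giving a feel for the gap between human-readable code and the instructions a machine actually executes.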
Translator programs may perform a complete translation of the program code at once, or translate one instruction at a time. Machine code is written in the base-2 (binary) number system, written out as 0s and 1s. This is the lowest-level language possible. While seemingly meaningless to humans, these sequences of zeros and ones encode every instruction and data value the processor can handle.

5. Utilities
Utilities are types of system software which sit between system and application software. These are programs intended for diagnostic and maintenance tasks for the computer. They come in handy to ensure the computer functions optimally. Their tasks vary from crucial data security to disk drive defragmentation. Most are third-party tools, but they may come bundled with the operating system. Third-party tools are available individually or bundled together, such as with Hiren's BootCD, Ultimate Boot CD, and Kaspersky Rescue Disk.

Examples and features of utility software include:
Antivirus and security software for the security of files and applications, e.g., Malwarebytes, Microsoft Security Essentials, and AVG.
Disk partition services such as Windows Disk Management, EaseUS Partition Master, and Partition Magic.
Disk defragmentation to organize scattered files on the drive, e.g., Disk Defragmenter, PerfectDisk, and Diskeeper.
Firewalls to filter network traffic, e.g., Comodo Free Firewall and Little Snitch.
File compression to optimize disk space, e.g., WinRAR, WinZip, and 7-Zip.

What is an Assembler?
An assembler is a program that takes basic computer instructions and converts them into a pattern of bits that the computer's processor can use to perform its basic operations. Some people call these instructions assembler language and others use the term assembly language.

Here's how it works:
Most computers come with a specified set of very basic instructions that correspond to the basic machine operations that the computer can perform.
The programmer can write a program using a sequence of these assembler instructions.
This sequence of assembler instructions, known as the source code or source program, is then specified to the assembler program when that program is started.
The assembler program takes each program statement in the source program and generates a corresponding bit stream or pattern (a series of 0's and 1's of a given length).
The output of the assembler program is called the object code or object program relative to the input source program. The sequence of 0's and 1's that constitute the object program is sometimes called machine code.
In the earliest computers, programmers actually wrote programs in machine code, but assembler languages or instruction sets were soon developed to speed up programming.
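The mnemonic-to-bits step can be imitated in a few lines of Python. The instruction set below is invented purely for illustration (it is not any real processor's), but the translation it performs is exactly the assembler's job:

    # A made-up instruction set: each mnemonic maps to an 8-bit opcode.
    OPCODES = {"LOAD": 0x01, "ADD": 0x02, "STORE": 0x03}

    def assemble(source):
        """Translate each 'MNEMONIC operand' line into two bytes."""
        machine_code = []
        for line in source.strip().splitlines():
            mnemonic, operand = line.split()
            machine_code.append(OPCODES[mnemonic])  # opcode byte
            machine_code.append(int(operand))       # operand byte
        return bytes(machine_code)

    program = "LOAD 7\nADD 5\nSTORE 9"
    print(assemble(program).hex())   # prints 010702050309

The printed hex string is the "object code" of this toy source program: three two-byte instructions, each an opcode followed by its operand.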
What is a Compiler?
A compiler is a special program that translates a programming language's source code into machine code, bytecode or another programming language. The source code is typically written in a high-level, human-readable language such as Java or C++. A programmer writes the source code in a code editor or an integrated development environment (IDE) that includes an editor, saving the source code to one or more text files. A compiler that supports the source programming language reads the files, analyzes the code, and translates it into a format suitable for the target platform. Some compilers translate source code into bytecode instead of machine code. Bytecode, which was popularized by the Java programming language, is an intermediate language that can be executed on any system platform running a Java virtual machine (JVM) or a bytecode interpreter.

What is an Interpreter?
An interpreter is a computer program that directly executes instructions written in a programming or scripting language, without requiring them to have been previously compiled into a machine-language program.

Differences between an Interpreter and a Compiler
An interpreter translates just one statement of the program at a time into machine code; a compiler scans the entire program and translates the whole of it into machine code at once.
An interpreter takes very little time to analyze the source code, but the overall time to execute the program is much slower; a compiler takes a long time to analyze the source code, but the overall execution time is much faster.
An interpreter does not generate an intermediary code, so it is highly efficient in terms of memory; a compiler always generates an intermediary object code which needs further linking, so more memory is needed.
An interpreter keeps translating the program continuously until the first error is encountered; it then stops working, which makes debugging easy. A compiler generates its error messages only after it scans the complete program, so debugging is relatively harder.
Interpreters are used by programming languages such as Ruby and Python; compilers are used by programming languages such as C and C++.
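Continuing the made-up "LOAD/ADD/STORE" language from the assembler sketch above, the following toy interpreter shows the other side of the contrast: instead of translating the whole program in advance, it executes each statement as soon as it reads it, and stops at the first error it meets.

    # A toy interpreter for the same made-up language.
    def interpret(source):
        accumulator = 0
        memory = {}
        for line in source.strip().splitlines():
            mnemonic, operand = line.split()
            if mnemonic == "LOAD":
                accumulator = int(operand)          # load a value
            elif mnemonic == "ADD":
                accumulator += int(operand)         # add to it
            elif mnemonic == "STORE":
                memory[int(operand)] = accumulator  # store at an address
            else:
                # Like real interpreters, stop at the first error found.
                raise SyntaxError("unknown instruction: " + mnemonic)
        return memory

    print(interpret("LOAD 7\nADD 5\nSTORE 9"))   # prints {9: 12}

Note that no object code is produced at any point: the program's effect appears directly, one statement at a time.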
Software Development Process Steps
The software development process consists of four major steps. Each of these steps is detailed below.
Step 1: Planning
Step 2: Implementation
Step 3: Testing
Step 4: Deployment and Maintenance

Planning
An important task in creating a software program is requirements analysis. Customers typically have an abstract idea of what they want as an end result, but not of what the software should do. Skilled and experienced software engineers recognize incomplete, ambiguous, or even contradictory requirements at this point. Frequently demonstrating live code may help reduce the risk that the requirements are incorrect. Once the general requirements are gathered from the client, an analysis of the scope of the development should be determined and clearly stated. This is often called a Statement of Objectives (SOO).

Implementation
Implementation is the part of the process where software engineers actually program the code for the project.

Testing
Software testing is an integral and important phase of the software development process. This part of the process ensures that defects are recognized as soon as possible. It can also provide an objective, independent view of the software to allow users to appreciate and understand the risks of software deployment. Software testing can be stated as the process of validating and verifying that a software product works as expected and meets the requirements that guided its design.

Deployment and Maintenance
Deployment starts after the code is appropriately tested, approved for release, and sold or otherwise distributed into a production environment. This may involve installation, customization, testing, and possibly an extended period of evaluation. Software training and support are important, as the software is only effective if it is used correctly. Maintaining and enhancing software to cope with newly discovered faults or requirements can take substantial time and effort, as missed requirements may force a redesign of the software.

Software Development Life Cycle
Software Development Life Cycle (SDLC) is a process used by the software industry to design, develop and test high-quality software. The SDLC aims to produce high-quality software that meets or exceeds customer expectations and reaches completion within time and cost estimates. SDLC is the acronym of Software Development Life Cycle; it is also called the software development process. SDLC is a framework defining the tasks performed at each step in the software development process. ISO/IEC 12207 is an international standard for software life-cycle processes; it aims to be the standard that defines all the tasks required for developing and maintaining software.

Software Development Life Cycle (SDLC) Objectives
The SDLC aims to produce high-quality software that meets or exceeds customer expectations and reaches completion within time and cost estimates.
It aims to build solutions using different technologies, architectures and life-cycle approaches in the context of different organizational structures.

Software Development Life Cycle (SDLC) Outcomes
One is able to develop and conduct appropriate experimentation, analysis and data interpretation, and use engineering judgment to draw conclusions in choosing an apt software development model.
One is able to satisfy customer expectations, reach completion within time and cost estimates, and work effectively and efficiently within the current and planned information technology infrastructure by choosing a suitable software development model.
One is able to acquire and apply new knowledge as needed, using appropriate learning strategies.

Software Development Life Cycle (SDLC) Pre-requisites
Basic knowledge of a systematic and operational language.
Basic knowledge of sound engineering principles.

SDLC is a process followed for a software project within a software organization. It consists of a detailed plan describing how to develop, maintain, replace and alter or enhance specific software. The life cycle defines a methodology for improving the quality of software and the overall development process.

Stages of a typical SDLC

Stage 1: Planning and Requirement Analysis
Requirement analysis is the most important and fundamental stage in the SDLC. It is performed by the senior members of the team with inputs from the customer, the sales department, market surveys and domain experts in the industry. This information is then used to plan the basic project approach and to conduct a product feasibility study in the economic, operational and technical areas. Planning for the quality assurance requirements and identification of the risks associated with the project is also done in the planning stage. The outcome of the technical feasibility study is to define the various technical approaches that can be followed to implement the project successfully with minimum risks.

Stage 2: Defining Requirements
Once the requirement analysis is done, the next step is to clearly define and document the product requirements and get them approved by the customer or the market analysts.
This is done through an SRS (Software Requirement Specification) document, which consists of all the product requirements to be designed and developed during the project life cycle.

Stage 3: Designing the Product Architecture
The SRS is the reference for product architects to come out with the best architecture for the product to be developed. Based on the requirements specified in the SRS, usually more than one design approach for the product architecture is proposed and documented in a DDS (Design Document Specification). This DDS is reviewed by all the important stakeholders and, based on various parameters such as risk assessment, product robustness, design modularity, and budget and time constraints, the best design approach is selected for the product. A design approach clearly defines all the architectural modules of the product along with its communication and data-flow representation with the external and third-party modules (if any). The internal design of all the modules of the proposed architecture should be clearly defined, down to the minutest detail, in the DDS.

Stage 4: Building or Developing the Product
In this stage of the SDLC the actual development starts and the product is built. The programming code is generated as per the DDS during this stage. If the design was performed in a detailed and organized manner, code generation can be accomplished without much hassle. Developers must follow the coding guidelines defined by their organization, and programming tools like compilers, interpreters, debuggers, etc. are used to generate the code. Different high-level programming languages such as C, C++, Pascal, Java and PHP are used for coding. The programming language is chosen with respect to the type of software being developed.

Stage 5: Testing the Product
In modern SDLC models the testing activities are mostly involved in all the stages, so this stage is usually a subset of all of them. However, this stage refers to the testing-only stage of the product, where product defects are reported, tracked, fixed and retested until the product reaches the quality standards defined in the SRS.

Stage 6: Deployment in the Market and Maintenance
Once the product is tested and ready to be deployed, it is released formally in the appropriate market. Sometimes product deployment happens in stages, as per the business strategy of the organization. The product may first be released in a limited segment and tested in the real business environment (UAT - user acceptance testing). Then, based on the feedback, the product may be released as it is, or with suggested enhancements, in the targeted market segment. After the product is released in the market, its maintenance is done for the existing customer base.

Software Development Life Cycle Models
Waterfall Model (as a sample, only this model is explained in detail here)
Iterative Model
Evolutionary Model
Prototype Model
Spiral Model
RAD Model
Agile Model
Incremental Model

Waterfall Model
The Waterfall Model was the first process model to be introduced. It is also referred to as a linear-sequential life cycle model. It is very simple to understand and use. In a waterfall model, each phase must be completed before the next phase can begin, and there is no overlapping of the phases. The Waterfall Model is the earliest SDLC approach that was used for software development. It illustrates the software development process in a linear sequential flow: any phase in the development process begins only if the previous phase is complete.
In this model, the phases do not overlap. The Waterfall approach was the first SDLC model to be used widely in software engineering to ensure the success of a project. In the "Waterfall" approach, the whole process of software development is divided into separate phases. Typically, the outcome of one phase acts as the input for the next phase, sequentially.

Sequential phases in the Waterfall Model
Requirement gathering and analysis − All possible requirements of the system to be developed are captured in this phase and documented in a requirement specification document.
System design − The requirement specifications from the first phase are studied in this phase and the system design is prepared. This system design helps in specifying hardware and system requirements and in defining the overall system architecture.
Implementation − With inputs from the system design, the system is first developed in small programs called units, which are integrated in the next phase. Each unit is developed and tested for its functionality, which is referred to as unit testing.
Integration and testing − All the units developed in the implementation phase are integrated into a system after testing of each unit. Post-integration, the entire system is tested for any faults and failures.
Deployment of system − Once the functional and non-functional testing is done, the product is deployed in the customer environment or released into the market.
Maintenance − Some issues come up in the client environment. To fix those issues, patches are released. Also, to enhance the product, better versions are released. Maintenance is done to deliver these changes in the customer environment.

Waterfall Model - Application
Every software product developed is different and requires a suitable SDLC approach to be followed based on internal and external factors. Some situations where the use of the Waterfall Model is most appropriate are −
Requirements are very well documented, clear and fixed.
Product definition is stable.
Technology is understood and is not dynamic.
There are no ambiguous requirements.
Ample resources with the required expertise are available to support the product.
The project is short.

Waterfall Model - Advantages
The advantages of waterfall development are that it allows for departmentalization and control. A schedule can be set with deadlines for each stage of development, and the product can proceed through the development process model phases one by one. Development moves from concept through design, implementation, testing, installation and troubleshooting, and ends up at operation and maintenance. Each phase of development proceeds in strict order.

Some of the major advantages of the Waterfall Model are as follows −
Simple and easy to understand and use.
Easy to manage due to the rigidity of the model; each phase has specific deliverables and a review process.
Phases are processed and completed one at a time.
Works well for smaller projects where requirements are very well understood.
Clearly defined stages.
Well-understood milestones.
Easy to arrange tasks.
Process and results are well documented.

Waterfall Model - Disadvantages
The disadvantages