Computer Engineering as a Discipline
This document provides a detailed overview of computer hardware, different types of computers, and their history. It covers a wide range of computer classifications, from personal computers to supercomputers, along with their associated characteristics and applications. The history section traces the key milestones and innovations in computer development, offering insights into the progression from early mechanical concepts to modern electronic devices.
LESSON 1: COMPUTER HARDWARE

LEARNING OBJECTIVES
- Define what a computer is
- Explain the different types of computers
- Know the different parts of computers and their peripheral devices
- Know the history of the development of computers

WHAT IS A COMPUTER
A computer is a machine or device that performs processes, calculations and operations based on instructions provided by a software or hardware program. It is designed to execute applications and provides a variety of solutions by combining integrated hardware and software components. The term computer is derived from the Latin word 'computare', which means to calculate; in this sense a computer is a programmable calculating machine. A computer cannot do anything without a program. It represents decimal numbers internally as strings of binary digits. The word "computer" usually refers to the central processing unit plus internal memory.

The term "computer" was originally applied to humans (human computers) who performed numerical calculations using mechanical aids such as the abacus and slide rule. The term was later transferred to mechanical devices as they began replacing the human computers. Today's computers are electronic devices that accept data (input), process that data, produce output, and store (storage) the results.

COMPUTER TYPES
Computers are classified into five main categories based on the size and power of the unit:

1. Personal Computer (PC) - A small, single-user computer based on a microprocessor, defined as a small, relatively inexpensive computer designed for an individual user. In price, personal computers range anywhere from a few hundred pounds to over five thousand pounds. All are based on the microprocessor technology that enables manufacturers to put an entire CPU on one chip. Businesses use personal computers for word processing, accounting, desktop publishing, and for running spreadsheet and database management applications. At home, the most popular uses for personal computers are playing games and, more recently, surfing the Internet.
It is further classified into these subtypes:

a. Tower Model - The term refers to a computer in which the power supply, motherboard, and mass storage devices are stacked on top of each other in a cabinet. This is in contrast to desktop models, in which these components are housed in a more compact box. The main advantage of tower models is that there are fewer space constraints, which makes installation of additional storage devices easier.

b. Desktop Model - A computer designed to fit comfortably on top of a desk, typically with the monitor sitting on top of the computer. Desktop model computers are broad and low, whereas tower model computers are narrow and tall. Because of their shape, desktop model computers are generally limited to three internal mass storage devices. Desktop models designed to be very small are sometimes referred to as slimline models.

c. Notebook Computer - An extremely lightweight personal computer. Notebook computers typically weigh less than 6 pounds and are small enough to fit easily in a briefcase. Aside from size, the principal difference between a notebook computer and a personal computer is the display screen. Notebook computers use a variety of techniques, known as flat-panel technologies, to produce a lightweight and non-bulky display screen. The quality of notebook display screens varies considerably. In terms of computing power, modern notebook computers are nearly equivalent to personal computers. They have the same CPUs, memory capacity, and disk drives. However, all this power in a small package is expensive: notebook computers cost about twice as much as equivalent regular-sized computers. Notebook computers come with battery packs that enable you to run them without plugging them in, although the batteries need to be recharged every few hours.

d. Laptop - A small, portable computer, small enough that it can sit on your lap. Nowadays, laptop computers are more frequently called notebook computers.

2.
Workstation - A powerful, single-user computer. A workstation is like a personal computer, but it has a more powerful microprocessor and, in general, a higher-quality monitor. It is a type of computer used for engineering applications (CAD/CAM), desktop publishing, software development, and other types of applications that require a moderate amount of computing power and relatively high-quality graphics capabilities. Workstations generally come with a large, high-resolution graphics screen, a large amount of RAM, built-in network support, and a graphical user interface. Most workstations also have a mass storage device such as a disk drive, but a special type of workstation, called a diskless workstation, comes without a disk drive. The most common operating systems for workstations are UNIX and Windows NT. Like personal computers, most workstations are single-user computers. However, workstations are typically linked together to form a local-area network, although they can also be used as stand-alone systems.

3. Minicomputer - A multi-user computer capable of supporting up to hundreds of users simultaneously. It is a midsize computer. The distinction between large minicomputers and small mainframes has blurred in the past decade, as has the distinction between small minicomputers and workstations. In general, however, a minicomputer is a multiprocessing system capable of supporting up to 200 users simultaneously.

4. Mainframe - A powerful multi-user computer capable of supporting many hundreds or thousands of users simultaneously. Mainframe was a term originally referring to the cabinet containing the central processing unit or "main frame" of a room-filling Stone Age batch machine. After the emergence of smaller "minicomputer" designs in the early 1970s, the traditional big iron machines were described as "mainframe computers" and eventually just as mainframes.
Nowadays a mainframe is a very large and expensive computer capable of supporting hundreds, or even thousands, of users simultaneously.

5. Supercomputer - An extremely fast computer that can perform hundreds of millions of instructions per second. Supercomputer is a broad term for one of the fastest computers currently available. Supercomputers are very expensive and are employed for specialized applications that require immense amounts of mathematical calculation (number crunching). For example, weather forecasting requires a supercomputer. Other uses of supercomputers include scientific simulations, (animated) graphics, fluid dynamics calculations, nuclear energy research, electronic design, and analysis of geological data (e.g. in petrochemical prospecting).

HISTORY OF COMPUTERS

1801 - In France, Joseph Marie Jacquard invents a loom that uses punched wooden cards to automatically weave fabric designs. Early computers would use similar punch cards.
1890 - Herman Hollerith designs a punch card system to calculate the 1880 census, accomplishing the task in just three years and saving the government $5 million. He establishes a company that would ultimately become IBM.
1936 - Alan Turing presents the notion of a universal machine, later called the Turing machine, capable of computing anything that is computable. The central concept of the modern computer was based on his ideas.
1937 - J.V. Atanasoff, a professor of physics and mathematics at Iowa State University, attempts to build the first computer without gears, cams, belts or shafts. Hewlett-Packard is founded by David Packard and Bill Hewlett in a Palo Alto, California, garage, according to the Computer History Museum.
1941 - Atanasoff and his graduate student, Clifford Berry, design a computer that can solve 29 equations simultaneously. This marks the first time a computer is able to store information in its main memory.
1943-44 - Two University of Pennsylvania professors, John Mauchly and J. Presper Eckert, build the Electronic Numerical Integrator and Calculator (ENIAC). Considered the grandfather of digital computers, it fills a 20-foot by 40-foot room and has 18,000 vacuum tubes.
1946 - Mauchly and Eckert leave the University of Pennsylvania and receive funding from the Census Bureau to build the UNIVAC, the first commercial computer for business and government applications.
1947 - William Shockley, John Bardeen and Walter Brattain of Bell Laboratories invent the transistor. They discovered how to make an electric switch with solid materials and no need for a vacuum.
1953 - Grace Hopper develops the first computer language, which eventually becomes known as COBOL. Thomas Johnson Watson Jr., son of IBM CEO Thomas Johnson Watson Sr., conceives the IBM 701 EDPM to help the United Nations keep tabs on Korea during the war.
1954 - The FORTRAN programming language, an acronym for FORmula TRANslation, is developed by a team of programmers at IBM led by John Backus, according to the University of Michigan.
1958 - Jack Kilby and Robert Noyce unveil the integrated circuit, known as the computer chip. Kilby was awarded the Nobel Prize in Physics in 2000 for his work.
1964 - Douglas Engelbart shows a prototype of the modern computer, with a mouse and a graphical user interface (GUI). This marks the evolution of the computer from a specialized machine for scientists and mathematicians to technology that is more accessible to the general public.
1969 - A group of developers at Bell Labs produce UNIX, an operating system that addressed compatibility issues. Written in the C programming language, UNIX was portable across multiple platforms and became the operating system of choice among mainframes at large companies and government entities. Due to the slow nature of the system, it never quite gained traction among home PC users.
1971 - Alan Shugart leads a team of IBM engineers who invent the "floppy disk," allowing data to be shared among computers.
1973 - Robert Metcalfe, a member of the research staff at Xerox, develops Ethernet for connecting multiple computers and other hardware.
1974-1977 - A number of personal computers hit the market, including the Scelbi & Mark-8 Altair, the IBM 5100, Radio Shack's TRS-80 (affectionately known as the "Trash 80") and the Commodore PET.
1975 - The January issue of Popular Electronics magazine features the Altair 8080, described as the "world's first minicomputer kit to rival commercial models." Two "computer geeks," Paul Allen and Bill Gates, offer to write software for the Altair, using the new BASIC language. On April 4, after the success of this first endeavor, the two childhood friends form their own software company, Microsoft.
1976 - Steve Jobs and Steve Wozniak start Apple Computers on April Fool's Day and roll out the Apple I, the first computer with a single-circuit board, according to Stanford University.
1977 - Radio Shack's initial production run of the TRS-80 was just 3,000. It sold like crazy. For the first time, non-geeks could write programs and make a computer do what they wished.
1977 - Jobs and Wozniak incorporate Apple and show the Apple II at the first West Coast Computer Faire. It offers color graphics and incorporates an audio cassette drive for storage.
1978 - Accountants rejoice at the introduction of VisiCalc, the first computerized spreadsheet program.
1979 - Word processing becomes a reality as MicroPro International releases WordStar. "The defining change was to add margins and word wrap," said creator Rob Barnaby in an email to Mike Petrie in 2000. "Additional changes included getting rid of command mode and adding a print function. I was the technical brains — I figured out how to do it, and did it, and documented it."
1981 - The first IBM personal computer, code-named "Acorn," is introduced on Aug. 12, 1981. It uses Microsoft's MS-DOS operating system and has an Intel chip, two floppy disks and an optional color monitor. Sears & Roebuck and Computerland sell the machines, marking the first time a computer is available through outside distributors. It also popularizes the term PC.
1983 - Apple's Lisa is the first personal computer with a GUI. It also features a drop-down menu and icons. It flops but eventually evolves into the Macintosh. The Gavilan SC is the first portable computer with the familiar flip form factor and the first to be marketed as a "laptop."
1985 - Microsoft announces Windows, according to Encyclopedia Britannica. This was the company's response to Apple's GUI. Commodore unveils the Amiga 1000, which features advanced audio and video capabilities.
1985 - The first dot-com domain name is registered on March 15, years before the World Wide Web would mark the formal beginning of Internet history. The Symbolics Computer Company, a small Massachusetts computer manufacturer, registers Symbolics.com. More than two years later, only 100 dot-coms had been registered.
1986 - Compaq brings the Deskpro 386 to market. Its 32-bit architecture provides speed comparable to that of mainframes.
1990 - Tim Berners-Lee, a researcher at CERN, the high-energy physics laboratory in Geneva, develops HyperText Markup Language (HTML), giving rise to the World Wide Web.
1993 - The Pentium microprocessor advances the use of graphics and music on PCs.
1994 - PCs become gaming machines as "Command & Conquer," "Alone in the Dark 2," "Theme Park," "Magic Carpet," "Descent" and "Little Big Adventure" are among the games to hit the market.
1996 - Sergey Brin and Larry Page develop the Google search engine at Stanford University.
1997 - Microsoft invests $150 million in Apple, which was struggling at the time, ending Apple's court case against Microsoft in which it alleged that Microsoft copied the "look and feel" of its operating system.
1999 - The term Wi-Fi becomes part of the computing language and users begin connecting to the Internet without wires.
2001 - Apple unveils the Mac OS X operating system, which provides protected memory architecture and pre-emptive multi-tasking, among other benefits. Not to be outdone, Microsoft rolls out Windows XP, which has a significantly redesigned GUI.
2003 - The first 64-bit processor, AMD's Athlon 64, becomes available to the consumer market.
2004 - Mozilla's Firefox 1.0 challenges Microsoft's Internet Explorer, the dominant Web browser. Facebook, a social networking site, launches.
2005 - YouTube, a video sharing service, is founded. Google acquires Android, a Linux-based mobile phone operating system.
2006 - Apple introduces the MacBook Pro, its first Intel-based, dual-core mobile computer, as well as an Intel-based iMac. Nintendo's Wii game console hits the market.
2007 - The iPhone brings many computer functions to the smartphone.
2009 - Microsoft launches Windows 7, which offers the ability to pin applications to the taskbar and advances in touch and handwriting recognition, among other features.
2010 - Apple unveils the iPad, changing the way consumers view media and jumpstarting the dormant tablet computer segment.
2011 - Google releases the Chromebook, a laptop that runs the Google Chrome OS.
2015 - Apple releases the Apple Watch. Microsoft releases Windows 10.
2016 - The first reprogrammable quantum computer is created. "Until now, there hasn't been any quantum-computing platform that had the capability to program new algorithms into their system. They're usually each tailored to attack a particular algorithm," said study lead author Shantanu Debnath, a quantum physicist and optical engineer at the University of Maryland, College Park.
2017 - The Defense Advanced Research Projects Agency (DARPA) develops a new "Molecular Informatics" program that uses molecules as computers.
"Chemistry offers a rich set of properties that we may be able to harness for rapid, scalable information storage and processing," Anne Fischer, program manager in DARPA's Defense Sciences Office, said in a statement. "Millions of molecules exist, and each molecule has a unique three-dimensional atomic structure as well as variables such as shape, size, or even color. This richness provides a vast design space for exploring novel and multi-value ways to encode and process data beyond the 0s and 1s of current logic-based, digital architectures."

Activity 1. TRUE OR FALSE
Directions: Answer the following questions with True or False on the space provided.
1. In 1954, scientists were able to predict exactly what computers would look like today.
2. Logging off the computer will close any open programs.
3. Search engines make it harder to find information on the internet.
4. John Lovelace created a machine called the Analytical Engine. His ideas were some of the first that led to the creation of computers.
5. Charles Babbage created the first computer program. The program was made to help the Analytical Engine calculate numbers.
6. Steve Jobs was known as the inventor of the modern computer. He actually created the first fully electronic computer.
7. The invention was 1,000 times faster than any machines built before it. It was big and was known as Apple.
8. The transistor replaced vacuum tubes and made computers much smaller and faster.
9. Al Gore invented the internet.
10. The resistor was invented to help make computers much smaller and faster.

LESSON 2: COMPUTER PROCESSOR AND AUTHENTICATION/IDENTIFICATION FUNCTIONS

LEARNING OBJECTIVES
- Know what a computer processor is
- Know the types of computer processors
- Know the history of the development of the computer processor

COMPUTER PROCESSOR
A processor, or "microprocessor," is a small chip that resides in computers and other electronic devices. Its basic job is to receive input and provide the appropriate output.
While this may seem like a simple task, modern processors can handle trillions of calculations per second.

A computer processor is the part of a computer that analyzes, controls and disperses data. Commonly referred to as the central processing unit or CPU, a computer processor acts as the brain of the computer, telling each program and application what to do at a specific time and interval. Modern computer processors operate at speeds of 2.6 to 3.66 gigahertz; the most advanced models are even faster. The processor takes the form of a small microchip that fits into a socket on the motherboard. The more powerful the processor, the faster and more efficiently the machine will run.

Besides the central processing unit, most desktop and laptop computers also include a GPU. This processor is specifically designed for rendering the graphics that are output on a monitor. Desktop computers often have a video card that contains the GPU, while mobile devices usually contain a graphics chip that is integrated into the motherboard. By using separate processors for system and graphics processing, computers are able to handle graphics-intensive applications more efficiently.

FUNCTION OF A COMPUTER PROCESSOR
A microprocessor is a silicon chip containing millions of microscopic transistors. This chip functions as the computer's brain. It processes the instructions or operations contained within executable computer programs. Instead of taking instructions directly off the hard drive, the processor takes its instructions from memory. This greatly increases the computer's speed.

TYPES OF COMPUTER PROCESSOR
Modern processors are designed by two distinct companies: Intel and Advanced Micro Devices (AMD). Intel processors are most commonly used in prefabricated computer systems, such as those from Dell and HP. The company focuses on two different lines of processors: the Pentium and the Celeron. Pentium processors are the larger microchip style that works in most desktops and some laptops. They can handle high-demand processing, such as that found in 3D gaming, video editing and other multimedia-intense applications. Celeron processors are more compact models with the ability to run a basic computer efficiently and cost-effectively.

AMD's line of computer processors can be found in prefabricated models but is most commonplace in home-built systems or specially designed machines. AMD was the first to build a 64-bit processor, capable of running high-end applications with graphics-intensive operations. The previous industry standard had been 32-bit processing. Some AMD processors offer built-in virus protection.

FEATURES OF A COMPUTER PROCESSOR
Each processor has a clock speed, which is measured in gigahertz (GHz). A processor also has a front side bus which connects it with the system's random access memory (RAM). CPUs also typically have two or three levels of cache. Cache is a type of fast memory which serves as a buffer between RAM and the processor. The processor's socket type determines the motherboard type where it can be installed.

When it comes to processors, speed matters. Whether you're buying a new computer or upgrading your old one, get the fastest processor you can afford, because the processor will become obsolete very quickly. Choosing a 3.6 GHz processor over a 2 GHz one today can buy you several years of cheap computing time. Also check the speed of the front side bus (FSB) when purchasing your new computer or CPU. A front side bus of 800 MHz or greater is essential for fast processing speeds.
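The clock-speed and front-side-bus figures discussed above can be turned into rough back-of-the-envelope numbers. The sketch below is illustrative only: the function names are invented for this example, and the assumption that the bus moves a 64-bit (8-byte) word once per bus cycle is a simplification, not a specification of any particular chip.

```python
# Rough, illustrative arithmetic for clock speed and FSB bandwidth.
# Assumption: a 64-bit (8-byte) bus transferring once per bus clock.

def cycles_per_second(clock_ghz: float) -> float:
    """A 3.6 GHz clock ticks 3.6 billion times per second."""
    return clock_ghz * 1e9

def peak_fsb_bandwidth_bytes(fsb_mhz: float, bus_width_bytes: int = 8) -> float:
    """Peak bytes per second if the bus moves bus_width_bytes every cycle."""
    return fsb_mhz * 1e6 * bus_width_bytes

print(cycles_per_second(3.6))          # 3.6 billion cycles each second
print(peak_fsb_bandwidth_bytes(800))   # an 800 MHz, 8-byte bus: 6.4e9 B/s
```

On this simplified model, the 800 MHz front side bus recommended above could move at most about 6.4 GB of data per second between the processor and RAM.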
The processor's cache is also important. Make sure it has at least 1 MB of last-level cache if your computing needs are average. If you're an extreme gamer or if you run intensive graphics programs, get the processor with the largest cache that fits your budget. There can be hundreds of dollars' difference between the cheapest processors and the most expensive ones, but investing just a little extra cash can get you a much better processor.

HISTORY OF DEVELOPMENT OF THE COMPUTER PROCESSOR
The earliest forms of computer processors were built from vacuum tubes and electrical relays. By the 1950s, these had been replaced by the transistor. Transistors were built onto printed circuit boards (copper etched onto a non-conductive board) to which various components were added. These computer processors were large and bulky, sometimes taking up whole rooms. During the construction of the Apollo guidance computer for NASA, scientists were able to construct integrated circuits that allowed large numbers of transistors to be manufactured into a single semiconductor. This was found to be more reliable than previous designs and much more compact. The microprocessor was invented by Intel in 1971. The 4004 was as fast as its larger cousins but could be used in much smaller devices. Since the advent of the personal computer, the majority of processor technology has used the microprocessor model.

Engineers and technicians routinely reach a point in processor design at which they face limits in making the device faster, challenged by size and materials. At one time, designers believed they could not get past the 1 gigahertz speed level; that barrier was broken by the AMD Athlon in 2000. The 64-bit barrier was broken by the same company in 2003. Processors have since become dual-core and quad-core, meaning they are capable of handling nearly twice as much data transfer and flow as a single-core design.
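The "nearly twice as much" claim above holds only for work that actually parallelizes. A standard way to estimate this is Amdahl's law, which is added here as context (it is not part of the lesson text): the speedup from multiple cores is limited by whatever fraction of the work must stay serial.

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Estimated speedup (Amdahl's law) when `parallel_fraction` of the
    work spreads across `cores` cores and the remainder stays serial."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# A perfectly parallel job on a dual-core chip doubles throughput...
print(amdahl_speedup(1.0, 2))   # 2.0
# ...but if only half the work parallelizes, two cores give ~1.33x.
print(amdahl_speedup(0.5, 2))
```

This is why a dual-core processor delivers "nearly" twice the throughput rather than exactly twice: some coordination and serial work always remains.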
Many motherboards now come equipped for two or more processors to work in unison.

The most advanced research being done uses new technologies to expand the speed and capability of the processor. IBM has designed computer processor technology using lasers, much like fiber optics. The Georgia Institute of Technology has developed biological computer processors using the brain cells of leeches. Other scientists are developing ways to pass data along gaseous phenomena.

Other lines of processors were used in older models of computers. Macintosh computers used Apple's own line of processors for many years, between 1984 and 2006; after this period, the company switched to Intel processors in all its new machines. During the early years of Apple Computer, 1984 to 1996, the company used Motorola-branded computer processors, known as the 68000 series, to handle its operating systems and data flow; these featured speeds between 16 and 33 megahertz. After 1996, Apple used IBM-designed processors in nearly all of its machines, ranging in speed from 66 megahertz to 2.5 gigahertz by 2006.

AUTHENTICATION AND IDENTIFICATION FUNCTIONS

IDENTIFICATION - the ability to uniquely identify a user of a system or an application that is running in the system.

AUTHENTICATION - the ability to prove that a user or application is genuinely who that person or what that application claims to be.

For example, consider a user who logs on to a system by entering a user ID and password. The system uses the user ID to identify the user. The system authenticates the user at the time of logon by checking that the supplied password is correct.

BIOMETRICS
Verifies an individual's identity by analyzing a unique personal attribute or behavior. It is the most effective and accurate method for verifying identification
(and also the most expensive).

TYPES OF BIOMETRICS
- Fingerprint - based on the ridge endings and bifurcations exhibited by the friction ridges, and other minutiae of the finger. (most common)
- Palm Scan - based on the creases, ridges, and grooves that are unique in each individual's palm.
- Hand Geometry - based on the shape (length, width) of a person's hand and fingers.
- Hand Topography - based on the different peaks, valleys, overall shape and curvature of the hand.
- Retina Scan - based on the blood vessel pattern of the retina on the back of the eyeball.
- Iris Scan - based on the colored portion of the eye that surrounds the pupil. The iris has unique patterns, rifts, colors, rings, coronas and furrows.
- Facial Scan - based on the different bone structures, nose ridges, eye widths, forehead sizes and chin shapes of the face.
- Voice Print - based on the human voice.
- Signature Dynamics - based on electrical signals generated by the physical motion of the hand while signing a document.
- Keyboard Dynamics - based on electrical signals generated while the user types the keys (passphrase) on the keyboard.

PASSWORD
A password is a protected string of characters that is used to authenticate an individual. (the most common form of system identification and authentication mechanism)

TWO TYPES OF PASSWORD
- Cognitive Passwords - fact- or opinion-based information used to verify an individual's identity (e.g., mother's maiden name).
- One-Time or Dynamic Passwords - a token-based system used for authentication purposes in which each password is used only once. (used in environments that require a higher level of security)

PASSPHRASE
A sequence of characters that is longer than a password and, in some cases, takes the place of a password during an authentication process. It is more secure than a password.

CRYPTOGRAPHIC KEYS
Use private keys and digital signatures. They provide a higher level of security than passwords.
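The user-ID/password logon described under identification and authentication can be sketched in a few lines. This is a minimal illustration, not production code: the in-memory user store and the `register`/`authenticate` names are invented for the example, and the salted SHA-256 digests are used only to keep the sketch self-contained; a real system should use a purpose-built password-hashing scheme such as bcrypt, scrypt or Argon2.

```python
import hashlib
import hmac
import os

# Identification: the user ID picks out which account is being claimed.
# Authentication: the password proves that claim.
_users = {}  # user_id -> (salt, digest); illustrative in-memory store

def register(user_id, password):
    salt = os.urandom(16)  # per-user random salt
    digest = hashlib.sha256(salt + password.encode()).digest()
    _users[user_id] = (salt, digest)

def authenticate(user_id, password):
    record = _users.get(user_id)
    if record is None:                 # identification fails: unknown user ID
        return False
    salt, digest = record
    candidate = hashlib.sha256(salt + password.encode()).digest()
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

register("alice", "correct horse battery staple")
print(authenticate("alice", "correct horse battery staple"))  # True
print(authenticate("alice", "wrong password"))                # False
print(authenticate("bob", "anything"))                        # False: never identified
```

Note how the two concepts separate cleanly in code: the dictionary lookup is identification, while the digest comparison is authentication.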
MEMORY CARDS
Hold information but cannot process it. More secure than passwords but costly. (e.g., swipe cards, ATM cards)

SMART CARDS
Hold information and have the capability to process it; they can provide two-factor authentication (something the user knows and something the user has).
Categories of smart cards: contact and contactless.

TIMELINE OF COMPUTER PROCESSOR DEVELOPMENT

1823 - Baron Jöns Jacob Berzelius discovered silicon (Si), which today is the basic component of processors.
1903 - Nikola Tesla patented electrical logic circuits called "gates" or "switches."
1947 - John Bardeen, Walter Brattain, and William Shockley invented the first transistor at the Bell Laboratories on December 23, 1947.
1948 - John Bardeen, Walter Brattain, and William Shockley patented the first transistor.
1956 - John Bardeen, Walter Brattain, and William Shockley were awarded the Nobel Prize in Physics for their work on the transistor.
1958 - The first working integrated circuit was developed by Robert Noyce of Fairchild Semiconductor and Jack Kilby of Texas Instruments; the first IC was demonstrated on September 12, 1958. (Geoffrey Dummer is credited as being the first person to conceptualize and build a prototype of the integrated circuit.)
1960 - IBM developed the first automatic mass-production facility for transistors, in New York.
1968 - Intel Corporation was founded by Robert Noyce and Gordon Moore.
1969 - AMD (Advanced Micro Devices) was founded on May 1, 1969.
1971 - Intel, with the help of Ted Hoff, introduced the first microprocessor, the Intel 4004, on November 15, 1971. The 4004 had 2,300 transistors, performed 60,000 OPS (operations per second), addressed 640 bytes of memory, and cost $200.00.
1972 - Intel introduced the 8008 processor on April 1, 1972.
1974 - Intel's improved microprocessor chip was introduced on April 1, 1974; the 8080 became a standard in the computer industry.
1976 - Intel introduced the 8085 processor in March 1976.
1978 - The Intel 8086 was introduced on June 8, 1978.
1979 - The Intel 8088 was released on June 1, 1979.
1979 - The Motorola 68000, a 16/32-bit processor, was released and was later chosen as the processor for the Apple Macintosh and Amiga computers.
1982 - The Intel 80286 was introduced on February 1, 1982.
1985 - Intel introduced the first 80386 in October 1985.
1987 - The SPARC processor was first introduced by Sun.
1988 - The Intel 80386SX was introduced in 1988.
1989 - Cyrix released its first coprocessors, the FasMath 83D87 and 83S87, in 1989. These were x87-compatible and designed for 386 computers. The FasMath coprocessors were up to 50% faster than the Intel 80387 processor.
1991 - AMD introduced the AM386 microprocessor family in March 1991.
1991 - Intel introduced the Intel 486SX chip in April, in an effort to bring a lower-cost processor to the PC market; it sold for $258.00.
1992 - Intel released the 486DX2 chip on March 2, 1992, with a clock-doubling ability that generates higher operating speeds.
1993 - Intel released the Pentium processor on March 22, 1993. The processor ran at 60 MHz, incorporated 3.1 million transistors and sold for $878.00.
1994 - Intel released the second generation of Intel Pentium processors on March 7, 1994.
1995 - Cyrix released the Cx5x86 processor in 1995, in an attempt to compete with the Intel Pentium processors.
1995 - Intel introduced the Intel Pentium Pro in November 1995.
1996 - Cyrix released its MediaGX processor in 1996. It combined a processor with sound and video processing on one chip.
1996 - Intel announced the availability of the Pentium 150 MHz with 60 MHz bus and the 166 MHz with 66 MHz bus on January 4, 1996.
1996 - AMD introduced the K5 processor on March 27, 1996, with speeds of 75 MHz to 133 MHz and bus speeds of 50 MHz, 60 MHz, or 66 MHz. The K5 was the first processor developed completely in-house by AMD.
1997 - AMD released its K6 processor line in April 1997, with speeds of 166 MHz to 300 MHz and a 66 MHz bus speed.
1997 - The Intel Pentium II was introduced on May 7, 1997.
1998 - AMD introduced its new K6-2 processor line on May 28, 1998, with speeds of 266 MHz to 550 MHz and bus speeds of 66 MHz to 100 MHz. The K6-2 processor was an enhanced version of AMD's K6 processor.
1998 - Intel released the first Xeon processor, the Pentium II Xeon 400 (512 K or 1 M cache, 400 MHz, 100 MHz FSB), in June 1998.
1999 - Intel released the Celeron 366 MHz and 400 MHz processors on January 4, 1999.
1999 - AMD released its K6-III processors on February 22, 1999, with speeds of 400 MHz or 450 MHz and bus speeds of 66 MHz to 100 MHz. They also featured an on-die L2 cache.
1999 - The Intel Pentium III 500 MHz was released on February 26, 1999.
1999 - The Intel Pentium III 550 MHz was released on May 17, 1999.
1999 - AMD introduced the Athlon processor series on June 23, 1999. The Athlon would be produced for the next six years in speeds ranging from 500 MHz up to 2.33 GHz.
1999 - The Intel Pentium III 600 MHz was released on August 2, 1999.
1999 - The Intel Pentium III 533B and 600B MHz were released on September 27, 1999.
1999 - The Intel Pentium III Coppermine series was first introduced on October 25, 1999.
2000 - On January 5, 2000, AMD released the 800 MHz Athlon processor.
2000 - Intel released the Celeron 533 MHz processor with a 66 MHz bus on January 4, 2000.
2000 - AMD first released the Duron processor on June 19, 2000, with speeds of 600 MHz to 1.8 GHz and bus speeds of 200 MHz to 266 MHz. The Duron was built on the same K7 architecture as the Athlon processor.
2000 - Intel announced on August 28 that it would recall its 1.3 GHz Pentium III processors due to a glitch; users with these processors were advised to contact their vendors for additional information about the recall.
2001 - On January 3, 2001, Intel released the 800 MHz Celeron processor with a 100 MHz bus.
2001 - On January 3, 2001, Intel released the 1.3 GHz Pentium 4 processor.
2001 - AMD announced a new branding scheme on October 9, 2001.
Instead of identifying processors by their clock speed, the AMD Athlon XP processors would bear monikers of 1500+, 1600+, 1700+, 1800+, 1900+, 2000+, etc. Each higher model number represents a higher clock speed.
2002 Intel released the Celeron 1.3 GHz with a 100 MHz bus and 256 KB of level 2 cache.
2003 The Intel Pentium M was introduced in March 2003.
2003 AMD released the first single-core Opteron processors, with speeds of 1.4 GHz to 2.4 GHz and 1024 KB L2 cache, on April 22, 2003.
2003 AMD released the first Athlon 64 processor, the 3200+ model, and the first Athlon 64 FX processor, the FX-51 model, on September 23, 2003.
2004 AMD released the first Sempron processor on July 28, 2004, with a 1.5 GHz to 2.0 GHz clock speed and 166 MHz bus speed.
2005 AMD released their first dual-core processor, the Athlon 64 X2 3800+ (2.0 GHz, 512 KB L2 cache per core), on April 21, 2005.
2006 AMD released their new Athlon 64 FX-60 processor, featuring 2x 1024 KB L2 cache, on January 9, 2006.
2006 Intel released the Core 2 Duo processor E6320 (4 M cache, 1.86 GHz, 1066 MHz FSB) on April 22, 2006.
2006 Intel introduced the Intel Core 2 Duo processors with the Core 2 Duo processor E6300 (2 M cache, 1.86 GHz, 1066 MHz FSB) on July 27, 2006.
2006 Intel introduced the Intel Core 2 Duo processor for laptop computers with the Core 2 Duo processor T5500, as well as other Core 2 Duo T-series processors, in August 2006.
2007 Intel released the Core 2 Quad processor Q6600 (8 M cache, 2.40 GHz, 1066 MHz FSB) in January 2007.
2007 Intel released the Core 2 Duo processor E4300 (2 M cache, 1.80 GHz, 800 MHz FSB) on January 21, 2007.
2007 Intel released the Core 2 Quad processor Q6700 (8 M cache, 2.67 GHz, 1066 MHz FSB) in April 2007.
2007 Intel released the Core 2 Duo processor E4400 (2 M cache, 2.00 GHz, 800 MHz FSB) on April 22, 2007.
2007 AMD renamed the Athlon 64 X2 processor line to Athlon X2 and released the first in that line, the Brisbane series (1.9 to 2.6 GHz, 512 KB L2 cache), on June 1, 2007.
2007 Intel released the Core 2 Duo processor E4500 (2 M cache, 2.20 GHz, 800 MHz FSB) on July 22, 2007.
2007 Intel released the Core 2 Duo processor E4600 (2 M cache, 2.40 GHz, 800 MHz FSB) on October 21, 2007.
2007 AMD released the first Phenom X4 processors (2 M cache, 1.8 GHz to 2.6 GHz, 1066 MHz FSB) on November 19, 2007.
2008 Intel released the Core 2 Quad processor Q9300 and the Core 2 Quad processor Q9450 in March 2008.
2008 Intel released the Core 2 Duo processor E4700 (2 M cache, 2.60 GHz, 800 MHz FSB) on March 2, 2008.
2008 AMD released the first Phenom X3 processors (2 M cache, 2.1 GHz to 2.5 GHz, 1066 MHz FSB) on March 27, 2008.
2008 Intel released the first of the Intel Atom series of processors, the Z5xx series, in April 2008. They were single-core processors with a 200 MHz GPU.
2008 Intel released the Core 2 Duo processor E7200 (3 M cache, 2.53 GHz, 1066 MHz FSB) on April 20, 2008.
2008 Intel released the Core 2 Duo processor E7300 (3 M cache, 2.66 GHz, 1066 MHz FSB) on August 10, 2008.
2008 Intel released several Core 2 Quad processors in August 2008: the Q8200, the Q9400, and the Q9650.
2008 Intel released the Core 2 Duo processor E7400 (3 M cache, 2.80 GHz, 1066 MHz FSB) on October 19, 2008.
2008 Intel released the first Core i7 desktop processors in November 2008: the i7-920, the i7-940, and the i7-965 Extreme Edition.
2009 AMD released the first Phenom II X4 (quad-core) processors (6 M cache, 2.5 to 3.7 GHz, 1066 MHz or 1333 MHz FSB) on January 8, 2009.
2009 AMD released the first Athlon Neo processor, the MV-40 model (1.6 GHz and 512 KB L2 cache), on January 8, 2009.
2009 Intel released the Core 2 Duo processor E7500 (3 M cache, 2.93 GHz, 1066 MHz FSB) on January 18, 2009.
2009 AMD released the first Phenom II X3 (triple-core) processors (6 M cache, 2.5 to 3.0 GHz, 1066 MHz or 1333 MHz FSB) on February 9, 2009.
2009 Intel released the Core 2 Quad processor Q8400 (4 M cache, 2.67 GHz, 1333 MHz FSB) in April 2009.
2009 Intel released the Core 2 Duo processor E7600 (3 M cache, 3.06 GHz, 1066 MHz FSB) on May 31, 2009.
2009 AMD released the first Athlon II X2 (dual-core) processors (1024 KB L2 cache, 1.6 to 3.5 GHz, 1066 MHz or 1333 MHz FSB) in June 2009.
2009 AMD released the first Phenom II X2 (dual-core) processors (6 M cache, 3.0 to 3.5 GHz, 1066 MHz or 1333 MHz FSB) on June 1, 2009.
2009 AMD released the first Athlon II X4 (quad-core) processors (512 KB L2 cache, 2.2 to 3.1 GHz, 1066 MHz or 1333 MHz FSB) in September 2009.
2009 Intel released the first Core i7 mobile processor, the i7-720QM, in September 2009. It uses the Socket G1 socket type, runs at 1.6 GHz, and features 6 MB L3 cache.
2009 Intel released the first Core i5 desktop processor with four cores, the i5-750 (8 M cache, 2.67 GHz, 1333 MHz FSB), on September 8, 2009.
2009 AMD released the first Athlon II X3 (triple-core) processors in October 2009.
2010 Intel released the Core 2 Quad processor Q9500 (6 M cache, 2.83 GHz, 1333 MHz FSB) in January 2010.
2010 Intel released the first Core i5 mobile processors, the i5-430M and the i5-520E, in January 2010.
2010 Intel released the first Core i5 desktop processor over 3.0 GHz, the i5-650, in January 2010.
2010 Intel released the first Core i3 desktop processors, the i3-530 and i3-540, on January 7, 2010.
2010 Intel released the first Core i3 mobile processors, the i3-330M (3 M cache, 2.13 GHz, 1066 MHz FSB) and the i3-350M, on January 7, 2010.
2010 AMD released the first Phenom II X6 (hex/six-core) processors on April 27, 2010.
2010 Intel released a Core i7 desktop processor with six cores, the i7-970, in July 2010. It runs at 3.2 GHz and features 12 MB L3 cache.
2011 Intel released seven new Core i5 processors with four cores, the i5-2xxx series, in January 2011.
2017 Intel released the first desktop processor with 16 cores, the Core i9-7960X, in September 2017. It runs at 2.8 GHz and features 22 MB L3 cache.
2017 Intel released the first desktop processor with 18 cores, the Core i9-7980XE, in September 2017. It runs at 2.6 GHz and features 24.75 MB L3 cache.
2018 Intel released the first Core i9 mobile processor, the i9-8950HK, in April 2018. It uses the BGA 1440 socket, runs at 2.9 GHz, has six cores, and features 12 MB L3 cache.

Activity 2: Identification
Directions: Identify the following. Write your answer on the space provided. Be ready for submission.
1. Refers to the technology company that invented the graphical user interface.
2. Refers to the transfer rate of a standard USB 2.0 device.
3. HyperTransport has all but replaced this in current hardware, but what used to be the single most important factor in overall system speed?
4. When was the first commercial microprocessor introduced?
5. Refers to the unit in which the width of the smallest wire on a computer chip is typically measured.
6. The external system bus architecture is created from the _________ architecture.
7. Examples of accumulator-based microprocessors are _________________.
8. A computer has a built-in system clock that emits millions of regularly spaced electric pulses per __________, called clock cycles.
9. The circuitry that responds to and processes the basic instructions required to drive a computer system is _____________.
10. The CPU controls the transfer of data between ___________ and other devices.
II. Answer the question: What can you say about the development of processors in terms of size, speed, process, and durability?

LESSON 3: COMPUTER COMPONENTS
LEARNING OBJECTIVES
Know what a Motherboard is
Know the different parts and ports of a Motherboard
Know the basic working principle of Motherboard.

PARTS OF A COMPUTER
1.
INPUT DEVICES - Data and instructions must enter the computer system before any computation can be performed on the supplied data. The input unit that links the external environment with the computer system performs this task. Data and instructions enter input units in forms that depend upon the particular device used. For example, data is entered from a keyboard in a manner similar to typing, and this differs from the way in which data is entered through a mouse, which is another type of input device. However, regardless of the form in which they receive their inputs, all input devices must provide a computer with data that are transformed into the binary codes that the primary memory of the computer is designed to accept. This transformation is accomplished by units called input interfaces. Input interfaces are designed to match the unique physical or electrical characteristics of input devices to the requirements of the computer system.

Keyboard is the most common and very popular input device which helps to input data to the computer. The layout of the keyboard is like that of a traditional typewriter, although there are some additional keys provided for performing additional functions.

Mouse is the most popular pointing device. It is a very famous cursor-control device having a small palm-size box with a round ball at its base, which senses the movement of the mouse and sends corresponding signals to the CPU when the mouse buttons are pressed. Generally, it has two buttons called the left and the right button, and a wheel is present between the buttons. A mouse can be used to control the position of the cursor on the screen, but it cannot be used to enter text into the computer.

Microphone is an input device to input sound that is then stored in a digital form. The microphone is used for various applications such as adding sound to a multimedia presentation or for mixing music.
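The binary transformation performed by input interfaces, described above, can be sketched in a few lines of Python. ASCII codes are used here as a simplified stand-in for the codes a real keyboard interface produces; actual keyboards emit hardware scan codes that the interface translates.

```python
# Sketch: how a keystroke becomes the binary code the computer stores.
# ASCII is used as a simplified stand-in for real keyboard scan codes.

def keystroke_to_binary(char):
    """Return the 8-bit binary string for a single typed character."""
    code = ord(char)            # numeric code for the character, e.g. 'A' -> 65
    return format(code, "08b")  # fixed-width 8-bit binary representation

for key in "Hi":
    print(key, "->", keystroke_to_binary(key))
# H -> 01001000
# i -> 01101001
```

Whatever the physical device, the data that reaches primary memory is ultimately a pattern of bits like these.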
Joystick is also a pointing device, which is used to move the cursor position on a monitor screen. It is a stick having a spherical ball at both its lower and upper ends. The lower spherical ball moves in a socket. The joystick can be moved in all four directions. The function of the joystick is similar to that of a mouse. It is mainly used in Computer-Aided Design (CAD) and playing computer games.

Scanner is an input device which works more like a photocopy machine. It is used when some information is available on paper and is to be transferred to the hard disk of the computer for further manipulation. A scanner captures images from the source, which are then converted into a digital form that can be stored on the disk. These images can be edited before they are printed.

2. OUTPUT DEVICES - The job of an output unit is just the reverse of that of an input unit. It supplies information and results of computation to the outside world. Thus it links the computer with the external environment. As computers work with binary code, the results produced are also in binary form. Hence, before supplying the results to the outside world, they must be converted to a human-readable form. This task is accomplished by units called output interfaces.

Monitors, commonly called Visual Display Units (VDU), are the main output device of a computer. A monitor forms images from tiny dots, called pixels, that are arranged in a rectangular form. The sharpness of the image depends upon the number of pixels.

Printers are another common output device found in homes and offices. In computing terms, they take electronic data stored on a computer and generate a hard copy of it. Usually that means printing images and text onto paper. There are numerous different types of printers, with inkjet and laser printers being two of the most common. Modern printers usually connect to a computer with a USB cable or via Wi-Fi.
Computer speakers are hardware devices that transform the signal from the computer's sound card into audio. Speakers are essential if you want a louder sound, surround sound, fuller bass, or just a better quality of audio. External computer speakers began to appear in stores in the early 1990s when computer gaming, digital music, and other forms of media became popular. Some computer speakers are wireless nowadays, connecting to the computer via Bluetooth.

3. STORAGE UNIT - The data and instructions that are entered into the computer system through input units have to be stored inside the computer before the actual processing starts. Similarly, the results produced by the computer after processing must also be kept somewhere inside the computer system before being passed on to the output units. Moreover, the intermediate results produced by the computer must also be preserved for ongoing processing. The storage unit, or the primary/main storage of a computer system, is designed to do all these things. It provides space for storing data and instructions, space for intermediate results, and also space for the final results.

Cache memory is a very high-speed semiconductor memory which can speed up the CPU. It acts as a buffer between the CPU and the main memory. It is used to hold those parts of data and program which are most frequently used by the CPU. The parts of data and programs are transferred from the disk to cache memory by the operating system, from where the CPU can access them.

Primary memory holds only those data and instructions on which the computer is currently working. It has a limited capacity, and data is lost when power is switched off. It is generally made up of semiconductor devices. These memories are not as fast as registers. The data and instructions required to be processed reside in the main memory. It is divided into two subcategories, RAM and ROM.

Secondary memory, by contrast, is also known as external memory or non-volatile memory.
It is slower than the main memory. These are used for storing data/information permanently. The CPU does not access these memories directly; instead, they are accessed via input-output routines. The contents of secondary memories are first transferred to the main memory, and then the CPU can access them. Examples include disk, CD-ROM, DVD, etc.

4. CENTRAL PROCESSING UNIT - The main unit inside the computer is the CPU. This unit is responsible for all events inside the computer. It controls all internal and external devices and performs "arithmetic and logical operations". The operations a microprocessor performs are called the "instruction set" of the processor. The instruction set is "hard-wired" in the CPU and determines the machine language for the CPU. The more complicated the instruction set is, the slower the CPU works. Processors differ from one another in their instruction sets. If the same program can run on two different computer brands, they are said to be compatible. Programs written for IBM-compatible computers will not run on Apple computers because these two architectures are not compatible.

The Control Unit and the Arithmetic and Logic Unit of a computer system are jointly known as the Central Processing Unit (CPU). The CPU is the brain of any computer system. In a human body, all major decisions are taken by the brain, and the other parts of the body function as directed by the brain. Similarly, in a computer system, all major calculations and comparisons are made inside the CPU, and the CPU is also responsible for activating and controlling the operations of other units of a computer system.

Activity 3: TEST YOUR COMPREHENSION
Enumeration:
1. Examples of input devices
2. Examples of output devices ________________
3. 2 basic types of memory storage unit _____
4.
Mention briefly the steps involved in the execution of a program by the CPU ______

LESSON 4: COMPUTER MOTHERBOARD
LEARNING OBJECTIVES
Know what a Motherboard is
Know the different parts and ports of a Motherboard
Know the basic working principle of Motherboard.

THE MOTHERBOARD
A computer has many components, each with its own role and function. The role of the motherboard is to allow all these components to communicate with each other. Considering the fact that all the other components are installed on the motherboard or connected to it, it is safe to say that the motherboard is the central piece of a PC, the component that brings it all together.

DIFFERENT TYPES OF MOTHERBOARD
1. AT MOTHERBOARD - The oldest of the main boards, these motherboards were used in earlier 286/386 or 486 computers. The AT means the board consists of advanced technology (AT) power connectors. There are two power connectors of 6 pins each mounted on AT motherboards. The AT motherboards were available in the early 1980s.
2. ATX MOTHERBOARD - (Motherboard for P1/P2 processors) The ATX motherboards started in the 1990s and are still available. The ATX connector on the motherboard consists of a single connector. These boards are used for P2/P3 or P4 processors.

COMPONENTS OF A MOTHERBOARD
1. EXPANSION SLOTS - Expansion slots have the role of letting you install additional components to enhance or expand the functionality of your PC. You can install a TV tuner, a video capture card, a better sound card, etc. - you get the idea. These ports are located under the video card slot, and come in the form of PCI slots (on older motherboards) or a scaled-down version of PCI-Express slots (on newer motherboards). Some motherboards come with both types of expansion slots. The number of slots is usually dependent on the format of the motherboard - larger motherboards (full ATX) have more, while smaller formats (micro-ATX) have fewer, if any.

ISA slots.
These were the oldest expansion slots in the history of motherboards. They were found in AT boards and are identified by their black color. Conventional display cards or sound cards were installed in these slots. The full form of ISA is Industry Standard Architecture, and it is a 16-bit bus.

PCI slots. The full form of PCI is Peripheral Component Interconnect. The PCI slot is one of the important motherboard components today and is vastly used to install add-on cards on the motherboard. The PCI supports a 64-bit high-speed bus.

PCI Express. Also known as PCIe, these are the latest and the fastest components of the motherboard to support add-on cards. It supports a full-duplex serial bus.

AGP slot. The Accelerated Graphics Port (AGP) is specifically used to install the latest graphics card. AGP runs on a 32-bit bus, and both PCIe and AGP can be used to install high-end gaming display cards.

2. RAM (MEMORY) SLOTS - Located in the upper-right part of the motherboard, the memory slots are used to house the computer's memory modules. The number of slots can vary, depending on the motherboard, from 2 in low-end motherboards all the way up to 8 memory slots on high-end and gaming motherboards.

SIMM slots. The full form is Single In-line Memory Module. These slots were found in older motherboards, up to 486 boards. The SIMM supports a 32-bit bus.

DIMM slots. The full form of DIMM is Dual In-line Memory Module. These are the latest RAM slots, which run on a faster 64-bit bus. The DIMMs used on laptop boards are called SO-DIMMs.

3. CENTRAL PROCESSING UNIT (CPU) SOCKET - Another vital motherboard component is the CPU socket, which is used to install the processor on the motherboard. Some important sockets are explained below.

Socket 7. It is a 321-pin socket that supported older processors like the Intel Pentium/Pentium MMX, AMD K5/K6, and Cyrix M2.

Socket 370. It is a 370-pin socket that supports Celeron processors and Pentium III processors.

Socket 775.
It is a 775-pin socket that supports Intel dual-core, Core 2 Duo, Pentium 4, and Xeon processors.

Socket 1156. Found on the latest types of motherboards, it is an 1156-pin socket that supports the latest Intel Core i3, i5, and i7 processors.

Socket 1366. The socket has 1366 pins and supports the latest Core i7 900-series processors.

4. BASIC INPUT/OUTPUT SYSTEM (BIOS) - The full form of BIOS is Basic Input/Output System. It is a motherboard component in the form of an integrated chip. This chip contains all the information and settings of the motherboard, which you can modify by entering the BIOS mode from your computer.

5. COMPLEMENTARY METAL-OXIDE SEMICONDUCTOR (CMOS) BATTERY - The battery is a 3.0-volt lithium-type cell. The cell is responsible for retaining the information stored by the BIOS.

6. POWER CONNECTORS - No computer component can operate without power, and a motherboard is no exception. The power connector, commonly a 20- or 24-pin connector, can be situated either near the right edge of the motherboard or somewhere close to the processor socket on older motherboards. This is where the power supply's main connector gets attached, providing power to the motherboard and all the other components. Newer motherboards have an additional 4-pin or 8-pin connector near the processor, used to supply additional power directly to the processor.

7. IDE AND SATA PORTS - IDE and SATA ports are used to provide connectivity for the storage devices and optical drives. The IDE interface is somewhat outdated, so you shouldn't be surprised if you see a lot of new motherboards coming without this type of port. It was replaced by the smaller and much faster SATA interface, which has currently reached its third revision, being able to achieve maximum speeds of up to 600 MB/s, as opposed to the IDE interface, which can reach a maximum of 133 MB/s.

8. PROCESSOR SOCKET - The processor socket is the central piece of a motherboard, usually being located near the center of the motherboard.
It's also the central piece because it holds the processor - the brain of your computer.

9. NORTHBRIDGE AND SOUTHBRIDGE - If you have a look at your motherboard, chances are you'll see a square metal component somewhere in the lower-right part of the board. This metal component is actually a heatsink, and its role is to provide thermal protection for the Northbridge - one of the most important components of a motherboard. The Northbridge is responsible for coordinating the data flow between the memory, the video card, and the processor. A secondary chip, known as the Southbridge, has a similar function, coordinating the data flow between the processor and peripherals such as sound cards or network cards.

10. CABINET CONNECTIONS - The cabinet in which the motherboard is installed has many buttons that connect to the motherboard. Some of the common connectors are the Power Switch, Reset Switch, Front USB, Front Audio, Power indicator (LED), and HDD LED.

11. INPUT/OUTPUT INTERFACE CONNECTORS - The input-output interface connects the computer to the outside world. It decodes the address and identifies the unique computer peripheral with which a data transfer operation is to be executed. The interface also has to interpret the command on the control bus so that the timing of data transfer is correct. One further very important function of the input-output interface is to provide a physical electronic highway for the flow of data between the computer data bus and the external peripheral.

Activity 4: Specs Design Challenge. Prepare a matrix (table format) of a complete computer system unit specification. The choice of processor to put in the table is up to you.

LESSON 5: E-WASTE
LEARNING OBJECTIVES
Learn about Technological Disposal
Know what E-Waste is
Know the impacts of E-Waste on Society

What is E-waste?
Electronic waste, also called e-waste, comprises various forms of electric and electronic equipment that have ceased to be of value to their users or no longer satisfy their original purpose. Electronic waste (e-waste) products have exhausted their utility value through either redundancy, replacement, or breakage and include both "white goods" such as refrigerators, washing machines, and microwaves and "brown goods" such as televisions, radios, computers, and cell phones. Given that the information and technology revolution has exponentially increased the use of new electronic equipment, it has also produced growing volumes of obsolete products; e-waste is one of the fastest-growing waste streams. Although e-waste contains complex combinations of highly toxic substances that pose a danger to health and the environment, many of the products also contain recoverable precious materials, making it a different kind of waste compared with traditional municipal waste.

Globally, e-waste constitutes more than 5 percent of all municipal solid waste and is increasing with the rise of sales of electronic products in developing countries. The majority of the world's e-waste is recycled in developing countries, where informal and hazardous setups for the extraction and sale of metals are common. Recycling companies in developed countries face strict environmental regulatory regimes and an increasing cost of waste disposal and thus may find exportation to small traders in developing countries more profitable than recycling in their own countries. There is also significant illegal transboundary movement of e-waste in the form of donations and charity from rich industrialized nations to developing countries.
E-waste profiteers can harvest substantial profits owing to lax environmental laws, corrupt officials, and poorly paid workers, and there is an urgent need to develop policies and strategies to dispose of and recycle e-waste safely in order to achieve a sustainable future. Impacts On Human Health The complex composition and improper handling of e-waste adversely affect human health. A growing body of epidemiological and clinical evidence has led to increased concern about the potential threat of e-waste to human health, especially in developing countries such as India and China. The primitive methods used by unregulated backyard operators to reclaim, reprocess, and recycle e-waste materials expose the workers to a number of toxic substances. Processes such as dismantling components, wet chemical processing, and incineration are used and result in direct exposure and inhalation of harmful chemicals. Safety equipment such as gloves, face masks, and ventilation fans are virtually unknown, and workers often have little idea of what they are handling. For instance, in terms of health hazards, open burning of printed wiring boards increases the concentration of dioxins in the surrounding areas. These toxins cause an increased risk of cancer if inhaled by workers and residents. Toxic metals and poison can also enter the bloodstream during the manual extraction and collection of tiny quantities of precious metals, and workers are continuously exposed to poisonous chemicals and fumes of highly concentrated acids. Recovering resalable copper by burning insulated wires causes neurological disorders, and acute exposure to cadmium, found in semiconductors and chip resistors, can damage the kidneys and liver and cause bone loss. Long-term exposure to lead on printed circuit boards and computer and television screens can damage the central and peripheral nervous system and kidneys, and children are more susceptible to these harmful effects. 
Environmental Impacts
Although electronics constitute an indispensable part of everyday life, their hazardous effects on the environment cannot be overlooked or underestimated. The interface between electrical and electronic equipment and the environment takes place during the manufacturing, reprocessing, and disposal of these products. The emission of fumes, gases, and particulate matter into the air, the discharge of liquid waste into water and drainage systems, and the disposal of hazardous wastes contribute to environmental degradation. In addition to tighter regulation of e-waste recycling and disposal, there is a need for policies that extend the responsibility of all stakeholders, particularly the producer.

Activity 5: Critical Thinking Analysis
Answer the question on short bond paper (printed output). Prepare 2 slides of PowerPoint, or captured pictures for those without an internet connection. The third page will be your explanation, written using MS Word. Develop your best idea for presenting e-waste.
Question: Why is it important to recycle e-waste?

LESSON 6: SOFTWARE CONCEPTS
LEARNING OBJECTIVES
To understand fundamental software concepts
To understand and identify different kinds of software
To know the importance of debugging in computer systems and software development
Understand the usage of CAPTCHA
Identify and understand the various computer components
To understand and identify security threats and the importance of security

Software
Software is a set of instructions, data or programs used to operate computers and execute specific tasks. Opposite of hardware, which describes the physical aspects of a computer, software is a generic term used to refer to applications, scripts and programs that run on a device. Software can be thought of as the variable part of a computer and hardware the invariable part.
Software is often divided into application software, or user-downloaded programs that fulfill a want or need, and system software, which includes operating systems and any program that supports application software. The term middleware is sometimes used to describe programming that mediates between application and system software or between two different kinds of application software. For example, middleware could be used to send a remote work request from an application in a computer that has one kind of operating system to an application in a computer with a different operating system.

An additional category of software is the utility, which is a small, useful program with limited capability. Some utilities come with operating systems. Like applications, utilities tend to be separately installable and capable of being used independently from the rest of the operating system. Similarly, applets are small applications that sometimes come with the operating system as accessories. They can also be created independently using Java or other programming languages.

Software can be purchased or acquired in the following ways:
Shareware - usually distributed on a free or trial basis with the intention of sale when the period is over.
Liteware - a type of shareware with some capabilities disabled until the full version is purchased.
Freeware - can be downloaded for free but with copyright restrictions.
Public domain software - can be downloaded for free without restrictions.
Open source - a type of software where the source code is furnished and users agree not to limit the distribution of improvements.

Today, much of the purchased software, shareware and freeware is directly downloaded over the Internet. In these cases, software can be found on specific vendor websites or application service providers. However, software can also be packaged on CD-ROMs or diskettes and sold physically to a consumer.
Some general kinds of application software include:
Productivity software, which includes tools such as word processors and spreadsheets.
Presentation software, also known as slideware.
Graphics software.
CAD/CAM.
Vertical market or industry-specific software, for example, banking, insurance and retail applications.

Examples and types of software
Below is a list of the different kinds of software a computer may have installed, with examples of related programs. It should be noted that although application software is thought of as a program, it can be anything that runs on a computer. The table below also includes a program column to clarify any software that is not a program.

A specialized type of software that allows hardware to run is firmware. This is a type of programming that is embedded onto a special area of the hardware's nonvolatile memory, such as a microprocessor or read-only memory, on a one-time or infrequent basis so that thereafter it seems to be part of the hardware.

Software can be purchased at a retail computer store or online and come in a box containing all the disks (floppy diskette, CD, DVD, or Blu-ray), manuals, warranty, and other documentation. Software can also be downloaded to a computer over the Internet. Once downloaded, setup files are run to start the installation process on your computer.

Free software
There are also a lot of free software programs available, separated into different categories.
Shareware or trial software gives you a few days to try the software before you have to buy the program. After the trial time expires, you'll be asked to enter a code or register the product before you can continue to use it.
Freeware is completely free software that never requires payment, as long as it is not modified.
Open source software is similar to freeware.
Not only is the program given away for free, but the source code used to make the program is as well, allowing anyone to modify the program or view how it was created.

How do you use computer software?

Once the software is installed on the computer hard drive, the program can be used anytime by finding the program on the computer. On a Windows computer, a program icon is added to the Start menu or Start screen, depending on your version of Windows.

How to maintain software

After the software is installed on your computer, it may need to be updated to fix any discovered errors. Updating a program is done using software patches. Once updates are installed, problems that may have been experienced in the program should no longer occur.

How is software created and how does it work?

A computer programmer (or several computer programmers) writes the instructions using a programming language, defining how the software should operate on structured data. The program may then be interpreted, or compiled into machine code.

When I save a document, is that file also considered software?

When you create or edit a file using your software (a Microsoft Word document, for instance, or a Photoshop image), that file is considered a "resource" or "asset" used by the software. However, the file itself is not considered "software," even though it is an essential part of what your software is doing.

What was the first piece of computer software?

The first software program held in electronic memory was written by Tom Kilburn. The program calculated the highest factor of 2^18 = 262,144 and was successfully executed on June 21, 1948, at the University of Manchester, England. The computer that held the program was called the SSEM (Small-Scale Experimental Machine), otherwise known as the "Manchester Baby." This event is widely celebrated as the birth of software. 
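The Manchester Baby's calculation can be sketched in modern terms. This is an illustrative reconstruction, not the original program, which was written directly in machine code and worked by repeated subtraction:

```python
# Illustrative sketch: find the highest proper factor of 2**18 = 262,144,
# the computation run on the Manchester Baby in 1948. The original program
# did this in machine code; this is a modern rewrite of the same idea.

def highest_proper_factor(n: int) -> int:
    """Return the largest factor of n that is smaller than n itself."""
    for d in range(2, n + 1):
        if n % d == 0:          # d is the smallest divisor greater than 1...
            return n // d       # ...so n // d is the largest proper factor
    return 1                    # n is 1 (no proper factor other than 1)

print(highest_proper_factor(2**18))  # → 131072
```

For a power of two, the highest proper factor is simply half the number, so the answer here is 131,072.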
SYSTEM SOFTWARE

This type of software allows direct interaction between the user and the hardware components of the computer system. Because humans and machines speak and understand different languages, there must be an interface that allows the end user to interact with the computer system. System software is also called the main or alpha software of a computer system because it handles the major operations of the running hardware. System software is further divided into four major types:

1. The Operating System (OS) - a major program responsible for governing and maintaining the cooperation of the components of the computer system, e.g., Microsoft Windows, Linux, Mac OS.

2. The Language Processor - the language of a computer system's hardware components is not understandable by humans. Three languages are involved in human-machine interaction:

o Machine-level Language: machines can only understand digital signals, i.e., the binary codes of 0s and 1s. This is a machine-dependent language.

o Assembly-level Language: also referred to as Low-level Language (LLL), it forms a correspondence between machine-level instructions and general assembly-level statements. Mnemonics are used to represent low-level machine instructions or operation codes (op-codes), e.g., ADD for adding two entities, HALT for stopping a process. It is also machine dependent and varies from processor to processor.

o High-level Language: the language understandable by humans, used to program and code. It is easy to read and understand. Examples of this language are Java, C, C++, and Python.

Machine-level language is very complex, so users more often choose a high-level language because of its convenience for coding. The code is then converted to machine language so the computer system can understand it. 
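As a small illustration of this conversion, Python (itself a high-level language) lets you inspect the low-level instructions its compiler produces. Python bytecode is not true CPU machine code, but it shows the same idea of human-readable statements being translated into opcodes. A sketch using the standard library's `dis` module:

```python
# Sketch: inspect the low-level instructions Python's compiler generates
# for a high-level statement. Bytecode is not CPU machine code, but it
# illustrates the translation from readable source into opcodes.
import dis

code = compile("total = price * quantity", "<example>", "exec")
dis.dis(code)  # lists opcodes, e.g. LOAD_NAME steps and a binary-multiply step
```

The exact opcode names vary between Python versions, but the pattern is the same: load the operands, apply the operation, store the result.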
This conversion process is performed by the Language Processor, which is made up of three components:

o Assembler: a language processor that converts assembly language into machine language. [Assembly language → Machine language]

o Compiler: this language processor converts a high-level language into machine-level language in one go, so execution is fast. It checks the errors of the whole program at once; thus error detection is more difficult. Languages like C, C++, and Scala use a compiler.

o Interpreter: this language processor also converts a high-level language into machine-level language, but line by line, so execution time is longer. However, error detection is easier because the interpreter reports the line where the error occurred. Programming languages such as Python, Ruby, and Java use an interpreter.

3. The Device Drivers - these act as an interface between various input-output devices and the users or the operating system. Devices such as printers and external web cameras come with a driver disk that must be installed before the device can be used in the system.

4. The BIOS - stands for Basic Input Output System. It is a small firmware that controls the peripheral or input-output devices attached to the system. It is also responsible for starting the OS, initiating the booting process, and running the POST (Power-On Self-Test).

Application Software

These are software used to accomplish a particular action or task; they are dedicated to performing simple, single tasks. For example, a single piece of software cannot serve both a reservation system and a banking system. They are divided into two types:

1. The General-Purpose Application Software: these are application software that come built-in and ready to use, manufactured by a company or individual. For example:
Microsoft Excel - used to prepare spreadsheets.
VLC Media Player - used to play audio/video files.
Adobe Photoshop - used for designing and animation, and many more.

2. 
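To make the mnemonic idea concrete, here is a hypothetical sketch: a tiny interpreter for an invented LOAD/ADD/HALT-style mnemonic language, written in Python. The instruction set and its behavior are made up for illustration and do not correspond to any real processor:

```python
# Hypothetical sketch: a toy interpreter for an invented mnemonic language.
# Each instruction is a mnemonic plus operands, loosely echoing how mnemonics
# like ADD and HALT stand in for machine operations. The instruction set
# here is invented for illustration only.

def run(program):
    """Execute a list of (mnemonic, *operands) tuples; return the registers."""
    registers = {}
    for instruction in program:
        op, *args = instruction
        if op == "LOAD":            # LOAD reg, value
            reg, value = args
            registers[reg] = value
        elif op == "ADD":           # ADD dest, src  (dest = dest + src)
            dest, src = args
            registers[dest] += registers[src]
        elif op == "HALT":          # stop execution
            break
        else:
            raise ValueError(f"unknown mnemonic: {op}")
    return registers

result = run([
    ("LOAD", "A", 2),
    ("LOAD", "B", 3),
    ("ADD", "A", "B"),   # A becomes 5
    ("HALT",),
])
print(result["A"])  # → 5
```

A real assembler would translate each mnemonic into a numeric op-code for the CPU rather than executing it in a loop, but the one-mnemonic-per-operation correspondence is the same.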
The Specific-Purpose Application Software: these are software that are customizable and mostly used in real-time or business environments. For example:
Ticket Reservation System
Healthcare Management System
Hotel Management System
Payroll Management System

WHAT IS A WORD PROCESSOR?

Sometimes abbreviated as WP, a word processor is a software program capable of creating, storing, and printing typed documents. Today, the word processor is one of the most frequently used software programs on a computer, with Microsoft Word being the most popular word processor. Word processors can be used to create multiple types of files, including text files (.txt), rich text files (.rtf), HTML files (.htm & .html), and Word files (.doc & .docx). Some word processors can also be used to create XML files (.xml).

OVERVIEW OF A WORD PROCESSOR

In a word processor, you are presented with a blank white sheet as shown below. Text is added to the document area and, after it has been inserted, can be formatted or adjusted to your preference. Below is an example of a blank Microsoft Word window with areas of the window highlighted.

FEATURES OF A WORD PROCESSOR

A word processor offers dozens of additional features that can give your document or other text a more professional appearance. Below is a listing of some of the most popular features of a word processor.

Résumé - create or maintain your résumé.

WHAT IS A PRESENTATION TOOL?

A presentation tool is a software package used to display information in the form of a slide show. It has three major functions: an editor that allows text to be inserted and formatted, a method for inserting and manipulating graphic images, and a slide-show system to display the content. Presentation software can be viewed as enabling a functionally specific category of electronic media, with its own distinct culture and practices as compared to traditional presentation media. 
OVERVIEW OF A PRESENTATION TOOL

In a presentation tool, you are presented with a blank white sheet as shown below. Text is added to the document area and, after it has been inserted, can be formatted or adjusted to your preference. Below is an example of a blank Microsoft PowerPoint window.

FEATURES OF A PRESENTATION TOOL

PowerPoint is the presentation software of the Microsoft Office software suite. One of the most widely used office programs, PowerPoint has applications for personal use, academics, and business. Below are five features you should be using, if you aren't already. Learn everything about these tips: they will improve your presentation skills and allow you to communicate your message successfully.

ADDING SMART ART

SmartArt is a comprehensive and flexible business diagram tool that greatly improves upon the 'Diagram Gallery' feature found in previous versions of Office. SmartArt can be used to create professional diagrams that include pictures, text, or combinations of the two. An obvious use of SmartArt would be to create an organization chart, but it can be used for many different kinds of diagrams and even to provide some variety to slides using text bullet points.

INSERTING SHAPES

If you need to include some sort of diagram in your presentation, then the quickest and easiest way is probably to use SmartArt. However, it is important to be able to include shapes independently of SmartArt, and it is worth being familiar with the various Drawing Tools format options. Not only will they be useful if you do need to manually draw a diagram (and SmartArt doesn't suit all diagrams), but they can also be applied to objects on a slide that you might not immediately think of as shapes. As you can see, the gallery of available shapes is very extensive. 
Once you have selected your chosen shape, you can just click in your slide to insert a default version of the shape or, to set a particular size and position, click and drag with the mouse to create the shape and size you want.

INSERTING AN IMAGE

There are two content type icons which appear in new content placeholders for inserting pictures: Insert Picture from File and Insert Clip Art. Alternatively, the Illustrations group of the Insert ribbon tab includes the same two tools. Insert Picture from File allows you to browse to an image file saved somewhere on your system, whereas Clip Art is held in an indexed gallery of different media types. Clip Art is not limited to pictures; it also includes the following:

Illustrations
Photographs
Video
Audio

Once you have found the image you want to use, click on it to insert it into the current slide. You can now re-size and move the image accordingly, with further editing options available when you right-click the desired image.

SLIDE TRANSITIONS

Properly used, slide transitions can make your presentations clearer and more interesting and, when appropriate, more fun. Badly used, the effect of slide transitions can be closer to irritating or even nauseating. Simple animation effects are often used to add interest to bullet point text. Much more extreme animation effects are available but, in most cases, should be used sparingly if at all. Two main kinds of animation are available in a PowerPoint presentation: the transition from one slide to the next, and the animation of images/text on a specific slide. In PowerPoint 2010 & 2013 there is also a separate Transitions ribbon tab that includes a gallery of different transition effects. These can be applied to selected slides or all slides. If you want to apply different transition effects to different groups of slides, then you might want to choose 'Slide Sorter' view from the Presentation Views group of the View ribbon. 
ADDING ANIMATION

Whereas transition effects are limited to a single event per slide, animations can be applied to every object on a slide, including titles and other text boxes. Many objects can even have animation applied to different components, for example, each shape in a SmartArt graphic, each paragraph in a text box, and each column in a chart. Animations can be applied to three separate 'events' for each object:

Entrance - how the object arrives on the slide
Emphasis - an effect to focus attention on an object while it is visible
Exit - how the object disappears from the slide

To apply an animation effect, choose the object or objects to be animated, then choose Animation Styles or Add Animation from the Animations toolbar. Where an animation is applied to an object with different components (for instance, a SmartArt graphic made up of several boxes), the Effect Options tool becomes available to control how each component will be animated. So, for example, your animation can be used to introduce elements of an organization chart to your slide one by one.

EXAMPLES AND TOP USES OF PRESENTATION TOOLS

Presenting a topic
Business presentations
Teaching

SOME OTHER EXAMPLES OF PRESENTATION TOOLS

These are some of the other presentation tools that are available:

VISME - a cloud-based presentation tool that allows you to create highly visual presentations to engage viewers and communicate your ideas. It features an intuitive, drag-and-drop design method for creating presentations. The business version also prioritizes brand consistency and company-wide image storage. When you or your employees create a presentation, it will feature colors, logos, and images that are on brand for your organization.

HAIKU DECK - a platform that prioritizes simplicity. Business owners can create elegant, basic presentations with high-quality images. The spartan approach allows for connecting with audiences instead of losing them in information overload due to text-heavy slides. 
PITCHERIFIC - not only a presentation solution, but also a platform for building and practicing your presentation. It's a template-based program that guides you through the presentation creation process. Instead of drafting a few slides, Pitcherific prompts you to write out each part of your speech. The outline for an elevator pitch, for example, includes a hook, problem, solution, and closing.

CANVA - an online platform that provides templates for a wide range of business-related publications, like resumes, newsletters, business cards, media kits, brochures, and infographics. You can also use it to construct presentations.

SLIDECAMP - provides slide templates for creating company presentations. You can adjust color schemes, add company logos, import charts and data, build infographics, and organize presentations into sections with SlideCamp. This is a great solution for maintaining presentation consistency across multiple presentations from your organization.

POWTOON - an animated presentation and video platform for creating short informational videos and presentations about your brand or product. Explainer videos are an important part of a brand's message, and Powtoon is an affordable tool for creating animated videos and presentations to educate consumers and clients about your business. You can easily edit presentations and videos, add voiceover, and build a professional experience for your customers.

VIDEOSCRIBE - a whiteboard video presentation platform that allows small businesses to customize their presentations to fit their needs. These videos, which feature a whiteboard and a hand that "draws" different objects and slides in the presentation, are ideal for quick explainers and marketing videos about your business or product. You can easily place objects, insert text, and even draw your own objects or text with VideoScribe's platform. 
PREZI - another template-based presentation solution that you can use to create persuasive and engaging presentations with unique movement between "slides" and key points. Prezi maps out your whole presentation on an overall track that you decide. When you switch slides, it doesn't simply advance to the next one; it takes the viewer through the track to the point that needs to be made. This allows your audience to visualize the progression of your presentation. You can arrange content under different sections and create an overview so your audience can see your entire presentation plan. This method keeps the presentation organized and your audience engaged. You can also navigate freely through your presentation; your track is not locked in, and you can adjust when you address which points as you're presenting.

WHAT IS A SPREADSHEET?

A spreadsheet or worksheet is a file made of rows and columns that helps sort data, arrange data easily, and calculate numerical data. What makes a spreadsheet software program unique is its ability to calculate values using mathematical formulas and the data in cells. A good example of how a spreadsheet may be utilized is creating an overview of your bank's balance.

OVERVIEW OF A SPREADSHEET

Below is a basic example of what a Microsoft Excel spreadsheet looks like, with all the important features of a spreadsheet highlighted. In the example, the spreadsheet lists three different checks: the date, description, and value of each check. These values are added together to get the total of $162.00 in cell D6. That value is subtracted from the check balance to give an available $361.00 in cell D8.

Difference between a workbook, worksheet, and spreadsheet

Because the terms spreadsheet, workbook, and worksheet are so similar, there can be a lot of confusion when trying to understand their differences. When you open Microsoft Excel (a spreadsheet program), you're opening a workbook. 
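The check-register arithmetic in the overview above can be sketched in plain Python. The total ($162.00) and available balance ($361.00) come from the text; the individual check amounts and starting balance are invented here so the numbers work out, and the cell names in the comments are only labels:

```python
# Sketch of the check-register example: three checks are summed, and the
# total is subtracted from the balance. The individual amounts (50, 62, 50)
# and the 523.00 starting balance are hypothetical; only the 162.00 total
# and 361.00 available figures appear in the text.
checks = [50.00, 62.00, 50.00]   # hypothetical check amounts (sum to 162.00)
balance = 523.00                  # hypothetical starting balance

total = sum(checks)               # what a formula like =SUM(D3:D5) computes
available = balance - total       # what a formula like =D7-D6 computes

print(total)      # → 162.0
print(available)  # → 361.0
```

The point of the spreadsheet, of course, is that these formulas recalculate automatically whenever a check value changes; in plain code you would have to rerun the script.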
A workbook can contain one or more different worksheets that can be accessed through the tabs at the bottom of the worksheet you're currently viewing. What's often most confusing is that a worksheet is synonymous with a spreadsheet. In other words, a spreadsheet and a worksheet mean the same thing. However, most people only refer to the program as a spreadsheet program and the files it creates as spreadsheet files or worksheets.

Although spreadsheets are most often used with anything containing numbers, the uses of a spreadsheet are almost endless. Below are some other popular uses of spreadsheets.

Finance - spreadsheets are ideal for financial data, such as your checking account information, budgets, taxes, transactions, billing, invoices, receipts, forecasts, and any payment system.

Forms - form templates can be created to handle inventory, evaluations, performance reviews, quizzes, time sheets, patient information, and surveys.

School and grades - teachers can use spreadsheets to track students, calculate grades, and identify relevant data, such as high and low scores, missing tests, and students who are struggling.

Lists - managing a list in a spreadsheet is a great example of data that does not contain numbers but still can be used in a spreadsheet. Great examples of spreadsheet lists include telephone, to-do, and grocery lists.

Sports - spreadsheets can keep track of your favorite player's stats or stats for the whole team. With the collected data, you can also find averages, high scores, and statistical data. Spreadsheets can even be used to create tournament brackets.

What is an active worksheet?

An active worksheet is the worksheet that is currently open. For example, in the Excel picture above, the sheet tabs at the bottom of the window show "Sheet1," "Sheet2," and "Sheet3," with Sheet1 being the active worksheet. The active tab usually has a white background behind the tab name.

How many worksheets open by default? 
In Microsoft Excel 2016 and earlier, and in OpenOffice Calc, three sheet tabs open by default (Sheet1, Sheet2, and Sheet3). In Google Sheets, your spreadsheets start with one sheet (Sheet1). In Microsoft Excel 365, by default, only one sheet tab opens (Sheet1).

What is the length limit of a worksheet name?

Not to be confused with the file name, in Microsoft Excel there is a 31-character limit for each worksheet name.

How are rows and columns labeled?

In all spreadsheet programs, including Microsoft Excel, rows are labeled using numbers (e.g., 1 to 1,048,576). All columns are labeled with letters from A to Z, then with two letters. For example, after the letter Z, the next columns are AA, AB, AC, ..., AZ, then BA, BB, BC, etc., up to the last column, XFD. When working with a cell, you combine the column with the row. For example, the very first cell is in column A on row 1, so the cell is labeled A1.

Why not use a word processor instead of a spreadsheet?

While it may be true that some of the things mentioned above could be done in a word processor, spreadsheets have a huge advantage over word processors when it comes to numbers. It would be impossible to calculate multiple numbers in a word processor and have the value of the calculation immediately appear. Spreadsheets are also much more dynamic with the data and can hide, show, and sort information to make processing lots of information easier.

WHAT IS LIBRARY MANAGEMENT SOFTWARE?

Library management software is an application that allows automation of libraries and book databases. The software is commonly used by libraries and librarians to manage and access their library resources through a single, computer-based platform. Such applications make it easy for library staff to manage books and records. Self-service or web-based library management applications allow users to efficiently search online libraries for a desired book or material and read it online. 
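Returning to the spreadsheet column-labeling rule above: A through Z, then AA onward, ending at XFD for column 16,384 in modern Excel. This is a bijective base-26 numbering, and the conversion can be sketched as a small function (an illustration only, not part of any spreadsheet's API):

```python
# Sketch: convert a 1-based column number into its spreadsheet letter label
# (A..Z, AA..AZ, BA.., up to XFD for column 16,384 in modern Excel).
# The scheme is bijective base 26: there is no "zero" letter, hence the n - 1.

def column_label(n: int) -> str:
    """Convert a 1-based column number to its spreadsheet letter label."""
    label = ""
    while n > 0:
        n, remainder = divmod(n - 1, 26)
        label = chr(ord("A") + remainder) + label
    return label

print(column_label(1), column_label(26), column_label(27), column_label(16384))
# → A Z AA XFD
```

The `n - 1` adjustment is the subtle part: after Z (26) the labels roll over to AA rather than to a two-letter label containing a zero digit, which is what distinguishes this from ordinary base-26 arithmetic.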
FEATURES OF LIBRARY MANAGEMENT SOFTWARE

Library management software gives you many options and features that you can use to make the system more accessible:

Acquisition management: helping a library keep track of new print and digital additions to the collection
Barcode scanning: simply being able to check items in and out
Barcoding: the capacity to add a barcode to a new or damaged acquisition
Catalog management: keeping track digitally of what is available in the collection or, when interlibrary loan is relevant, in the broader available system
Circulation management: tracking who has what and when items are due
Fee collection: keeping track of fines owed to the library
OPAC: access to an online public access catalog of various public libraries
Patron management: keeping track of information about patrons and their records
Periodicals management: managing journals and magazines available digitally or in print
Reserve shelf management: for libraries that allow teachers to keep items on reserve
Search functions: allowing patrons and librarians to complete a catalog search on various levels
Self check-in/check-out: allowing patrons to check their own items in and out
Serials management: keeping track of the serials in the library

Activity 6 (ASSIGNMENT)

Directions: Self-assessment. Write your answers on a SEPARATE PAPER.

1. What are some examples of application software?
2. How do I pass a software engineer interview?
3. What is the most useful application software for you? 
Note: After this chapter, review for a while and take the midterm exam.

LESSON 7: UTILITY SOFTWARE

LEARNING OBJECTIVES

Know what utility software is
Know the types of utility software and their uses
Know the basic principles and strategies of software utilities

Utility Software

Utility software, often referred to as a utility, is system software designed to help analyze, configure, optimize, or maintain a computer and enhance the computer's performance. It is a program that performs a specific task, usually related to managing system resources. Utilities are sometimes also installed as memory-resident programs. Utility software usually focuses on how the computer infrastructure (including computer hardware, application software, operating system, and data storage programs) operates. These utilities range from the small and simple to the large and complex, and can perform either a single task or multiple tasks. Some of the functions performed by these utilities are data compression, disk defragmentation, data recovery, management of computer resources and files, system diagnosis, virus detection, and many more.

Utility software is designed specifically to help manage and tune the operating system, computer hardware, and application software of a system. It:

Performs a specific and useful function to maintain and increase the efficiency of a computer system
Aids in keeping the computer free from unwanted software threats such as viruses or spyware
Adds functionality that allows the user to customize the desktop and user interface
Manages computer memory and enhances performance; in general, these programs assist the user in making and running their computer better. 
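One of the utility functions listed above, data compression, can be sketched with Python's standard-library zlib module. This illustrates the general idea of a compression utility, not any particular product:

```python
# Sketch: lossless data compression and recovery with Python's built-in
# zlib module, illustrating the "data compression" utility function.
# Highly repetitive data compresses very well; decompression restores
# the original bytes exactly.
import zlib

original = b"utility software, utility software, utility software " * 20
compressed = zlib.compress(original)
restored = zlib.decompress(compressed)

print(len(original) > len(compressed))  # → True (the data shrank)
print(restored == original)             # → True (compression is lossless)
```

Real compression utilities (archivers, disk compressors) wrap the same kind of codec in file handling, directory traversal, and a user interface.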
They are also used for password protection, memory management, virus protection, and file compression, in order to manage all the computer's functions, resources, and files efficiently.

Types of Utility Software

These are the different types of utility software:

Application launchers - an application launcher is a computer program that helps a user locate and start other computer programs.

Antivirus software - antivirus software, or anti-virus software (abbreviated to AV software), also known as anti-malware, is a computer program used to prevent, detect, and remove malware. Antivirus software was originally developed to detect and remove computer viruses, hence the name. However, with the proliferation of other kinds of malware, antivirus software started to provide protection from other computer threats.

Backup software - backup software are computer programs used to perform backups; they create supplementary exact copies of files, databases, or entire computers. These programs may later use the supplementary copies to restore the original contents in the eve