Full Transcript

History of Computing

A computer might be described with deceptive simplicity as "an apparatus that performs routine calculations automatically." Such a definition would owe its deceptiveness to a naive and narrow view of calculation as a strictly mathematical process. In fact, calculation underlies many activities that are not normally thought of as mathematical. Walking across a room, for instance, requires many complex, albeit subconscious, calculations. Computers, too, have proved capable of solving a vast array of problems, from balancing a checkbook to even—in the form of guidance systems for robots—walking across a room. Before the true power of computing could be realized, therefore, the naive view of calculation had to be overcome. The inventors who labored to bring the computer into the world had to learn that the thing they were inventing was not just a number cruncher, not merely a calculator. For example, they had to learn that it was not necessary to invent a new computer for every new calculation and that a computer could be designed to solve numerous problems, even problems not yet imagined when the computer was built. They also had to learn how to tell such a general problem-solving computer what problem to solve. In other words, they had to invent programming. They had to solve all the heady problems of developing such a device, of implementing the design, of actually building the thing. The history of the solving of these problems is the history of the computer.

The Abacus

The earliest known calculating device is probably the abacus. It dates back at least to 1100 BCE and is still in use today, particularly in Asia. Now, as then, it typically consists of a rectangular frame with thin parallel rods strung with beads. Long before any systematic positional notation was adopted for the writing of numbers, the abacus assigned different units, or weights, to each rod.
This scheme allowed a wide range of numbers to be represented by just a few beads and, together with the invention of zero in India, may have inspired the invention of the Hindu-Arabic number system. In any case, abacus beads can be readily manipulated to perform the common arithmetical operations—addition, subtraction, multiplication, and division—that are useful for commercial transactions and in bookkeeping.

From Napier's Logarithms to the Slide Rule

Calculating devices took a different turn when John Napier, a Scottish mathematician, published his discovery of logarithms in 1614. As any person can attest, adding two 10-digit numbers is much simpler than multiplying them together, and the transformation of a multiplication problem into an addition problem is exactly what logarithms enable. This simplification is possible because of the following logarithmic property: the logarithm of the product of two numbers is equal to the sum of the logarithms of the numbers. By 1624, tables with 14 significant digits were available for the logarithms of numbers from 1 to 20,000, and scientists quickly adopted the new labour-saving tool for tedious astronomical calculations. Most significant for the development of computing, the transformation of multiplication into addition greatly simplified the possibility of mechanization. Analog calculating devices based on Napier's logarithms—representing digital values with analogous physical lengths—soon appeared. In 1620 Edmund Gunter, the English mathematician who coined the terms cosine and cotangent, built a device for performing navigational calculations: the Gunter scale, or, as navigators simply called it, the gunter. About 1632 an English clergyman and mathematician named William Oughtred built the first slide rule, drawing on Napier's ideas. That first slide rule was circular, but Oughtred also built the first rectangular one in 1633.
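The logarithmic property described above is exactly what a slide rule exploits mechanically: it adds lengths proportional to logarithms. A minimal Python sketch of the same idea (an illustration, not from the source text):

```python
import math

def multiply_via_logs(a, b):
    # A slide rule adds lengths proportional to log(a) and log(b);
    # their sum corresponds to log(a * b), so raising 10 to that
    # power recovers the product.
    return 10 ** (math.log10(a) + math.log10(b))

product = multiply_via_logs(37, 59)
print(round(product))  # 2183, the same as 37 * 59
```

Within the precision of the log tables (or the slide rule's scale), multiplication has been reduced to addition.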
The analog devices of Gunter and Oughtred had various advantages and disadvantages compared with digital devices such as the abacus. What is important is that the consequences of these design decisions were being tested in the real world.

Schickard's calculator (1623)

Wilhelm Schickard was credited with inventing the first adding machine after Dr. Franz Hammer, a biographer of Johannes Kepler, claimed that drawings of a calculating clock had been discovered in two letters written by Schickard to Kepler in 1623 and 1624. Schickard's "Calculating Clock" is composed of a multiplying device, a mechanism for recording intermediate results, and a 6-digit decimal adding device.

Pascaline (1642)

The Pascaline, also called the Arithmetic Machine, was the first calculator or adding machine to be produced in any quantity and actually used. It was designed and built by the French mathematician-philosopher Blaise Pascal between 1642 and 1644. It could only do addition and subtraction, with numbers being entered by manipulating its dials. Pascal invented the machine for his father, a tax collector, so it was the first business machine too (if one does not count the abacus). He built 50 of them over the next 10 years.

Leibniz calculator (1673)

In 1671 the scientist Gottfried Leibniz set out to improve on Pascal's calculator, designing a machine of his own that could perform multiplication and division as well. Known as the Stepped Reckoner, the Leibniz wheel, or the stepped drum, its stepped-drum mechanism became the engine of a whole class of later mechanical calculators. Leibniz's machine is often regarded as the first true four-function calculator.
De Colmar's Arithmometer (1820)

In 1820 Charles Xavier Thomas of Alsace, an entrepreneur in the insurance industry, invented the arithmometer, the first commercially produced adding machine, presumably to speed up and make more accurate the enormous amount of daily computation insurance companies required. Remarkably, Thomas received almost immediate acknowledgment for this invention, as he was made Chevalier of the Legion of Honor only one year later, in 1821. At this time he changed his name to Charles Xavier Thomas, de Colmar, later abbreviated to Thomas de Colmar.

Difference Engine (1822)

The Difference Engine was an early calculating machine, verging on being the first computer, designed and partially built during the 1820s and '30s by Charles Babbage. Babbage was an English mathematician and inventor; he invented the cowcatcher, reformed the British postal system, and was a pioneer in the fields of operations research and actuarial science. It was Babbage who first suggested that the weather of years past could be read from tree rings. He also had a lifelong fascination with keys, ciphers, and mechanical dolls. As a founding member of the Royal Astronomical Society, Babbage had seen a clear need to design and build a mechanical device that could automate long, tedious astronomical calculations. He began by writing a letter in 1822 to Sir Humphry Davy, president of the Royal Society, about the possibility of automating the construction of mathematical tables—specifically, logarithm tables for use in navigation. He then wrote a paper, "On the Theoretical Principles of the Machinery for Calculating Tables," which he read to the society later that year. (It won the Royal Society's first Gold Medal in 1823.) Tables then in use often contained errors, which could be a life-and-death matter for sailors at sea, and Babbage argued that, by automating the production of the tables, he could assure their accuracy.
Having gained support in the society for his Difference Engine, as he called it, Babbage next turned to the British government to fund development, obtaining one of the world's first government grants for research and technological development.

Analytical engine (1834)

The Analytical Engine is generally considered the first computer; it was designed and partly built by the English inventor Charles Babbage in the 19th century (he worked on it until his death in 1871). While working on the Difference Engine, a simpler calculating machine commissioned by the British government, Babbage began to imagine ways to improve it. Chiefly he thought about generalizing its operation so that it could perform other kinds of calculations. By the time funding ran out for his Difference Engine in 1833, he had conceived of something far more revolutionary: a general-purpose computing machine called the Analytical Engine. The Analytical Engine was to be a general-purpose, fully program-controlled, automatic mechanical digital computer. It would be able to perform any calculation set before it. There is no evidence that anyone before Babbage had ever conceived of such a device, let alone attempted to build one. The machine was designed to consist of four components: the mill, the store, the reader, and the printer. These components are the essential components of every computer today. The mill was the calculating unit, analogous to the central processing unit (CPU) in a modern computer; the store was where data were held prior to processing, exactly analogous to memory and storage in today's computers; and the reader and printer were the input and output devices.

First Generation Computers

The period of first generation was from 1946-1959. The computers of first generation used vacuum tubes as the basic components for memory and circuitry for the CPU (Central Processing Unit). These tubes, like electric bulbs, produced a lot of heat, and the installations fused frequently.
Therefore, they were very expensive, and only large organizations were able to afford them. In this generation, mainly batch processing operating systems were used. Punch cards, paper tape, and magnetic tape were used as input and output devices. The computers in this generation used machine code as the programming language. The main features of the first generation are: vacuum tube technology, unreliability, support for machine language only, very high cost, a lot of heat generated, slow input and output devices, huge size, the need for AC, non-portability, and high electricity consumption. Some computers of this generation were the ENIAC, EDVAC, UNIVAC, IBM-701, and IBM-750.

Second Generation Computers

The period of second generation was from 1959-1965. In this generation, transistors were used; they were cheaper, consumed less power, and were more compact, more reliable, and faster than the first-generation machines made of vacuum tubes. In this generation, magnetic cores were used as the primary memory, with magnetic tape and magnetic disks as secondary storage devices. Assembly language and high-level programming languages like FORTRAN and COBOL were used. The computers used batch processing and multiprogramming operating systems. The main features of the second generation are: use of transistors, greater reliability, smaller size, less heat generated, lower electricity consumption, and greater speed as compared to first generation computers, though they were still very costly, required AC, and supported only machine and assembly languages. Some computers of this generation were the IBM 1620, IBM 7094, CDC 1604, CDC 3600, and UNIVAC 1108.

Third Generation Computers

The period of third generation was from 1965-1971. The computers of third generation used Integrated Circuits (ICs) in place of transistors. A single IC has many transistors, resistors, and capacitors along with the associated circuitry. The IC was invented by Jack Kilby.
This development made computers smaller in size, reliable, and efficient. In this generation, remote processing, time-sharing, and multiprogramming operating systems were used. High-level languages (FORTRAN II to IV, COBOL, PASCAL, PL/1, BASIC, ALGOL 68, etc.) were used during this generation. The main features of the third generation are: use of ICs, greater reliability in comparison to the previous two generations, smaller size, less heat generated, greater speed, less maintenance, high cost, the need for AC, lower electricity consumption, and support for high-level languages. Some computers of this generation were the IBM-360 series, Honeywell-6000 series, PDP (Programmed Data Processor) series, IBM-370/168, and TDC-316.

Fourth Generation Computers

The period of fourth generation was from 1971-1980. Computers of fourth generation used Very Large Scale Integrated (VLSI) circuits. VLSI circuits, having about 5000 transistors and other circuit elements with their associated circuits on a single chip, made it possible to build the microcomputers of the fourth generation. Fourth generation computers became more powerful, compact, reliable, and affordable. As a result, they gave rise to the Personal Computer (PC) revolution. In this generation, time sharing, real-time networks, and distributed operating systems were used. High-level languages like C, C++, DBASE, etc., were used in this generation. The main features of the fourth generation are: use of VLSI technology, very low cost, portability and reliability, the use of PCs, very small size, pipeline processing, no need for AC, the introduction of the concept of the internet, great developments in the field of networks, and easy availability of computers. Some computers of this generation were the DEC 10, STAR 1000, PDP 11, CRAY-1 (supercomputer), and CRAY X-MP (supercomputer).

A computer is an electrical appliance that performs predetermined tasks. A computer system is a group of distinct objects that work together to perform a task.
It can equally be defined as an IPO (Input-Process-Output) system as follows: a set of electronic appliances that accepts (input), processes (processing), stores (storage), outputs (output), and communicates information based on a predetermined program. This definition distinguishes four basic functions of a standalone computer (Input, Processing, Storage, and Output) and five functions of a networked computer (Input, Output, Storage, Processing, and Communication). For starters, computers are electronic devices that handle information and data. From digital calculators to cellphones to supercomputers, they all share the same elements of "Input, Process, Output, and Storage." A little fun fact: our brains are technically computers too! Well, maybe not in the electronic sense, but they share similar elements with their electronic counterparts; think of the brain as the equivalent of the computer's CPU (Central Processing Unit). Compared to computers in the past, technology has drastically changed over the course of many decades. You might be asking yourselves why computers were so large back then. One answer is that the technology was still emerging; manufacturers could only make large components compared to today, because that was the state of the art at the time. Can we blame them? Of course not! Just as with concepts or prototypes, changes will keep being made in the future, from design to hardware. There's always room for improvement. So what are hardware and software? Hardware is the set of physical components that a computer system requires to function. A simple computer contains the following hardware: a case, CPU (central processing unit), monitor, mouse, keyboard, computer data storage, graphics card, sound card, speakers, and motherboard. Software, in turn, is a collection of instructions and data that tell a computer how to work. This is in contrast to physical hardware, from which the system is built and which actually performs the work.
What counts as software? An operating system (OS), settings, an internet browser, and a movie player, just to name a few. Just like a pair of shoes, if one is missing then the other is practically useless: hardware and software must work together to serve their purpose. Originally all computers and other information processing devices stood alone, and their information was available only to those who had direct connections to them. Since the value of data increases dramatically when it is easier to collect and distribute, computers and other information processing devices are now connected to one another in networks. Digital convergence is the fusion of computer and communications technologies. Today's new information environment came about gradually from the merger of two streams of technological development—computers and communications. Information flows through networks almost everywhere. In some places, the signs of its flow are evident. Cables run every which way on telephone poles, dishes are mounted on the sides of buildings, and large antennas occupy the high ground. But in most places the signs are hidden. Cables run underground, and invisible broadcast signals fill the air. You walk through a thick soup of data that flows from point to point. Some of the information being sent is familiar, for example, telephone conversations and radio and television broadcasts. Much of it is not public: for example, the signals from an ATM while someone makes a withdrawal. Whatever the information is, and wherever and however it is flowing, it is being communicated with modern information technology, and computers are involved at many points in the process.

Digital Signal

Computers are based on on/off electrical states because they use the binary number system, which consists of only two digits, 0 and 1. At their most basic level, computers can distinguish between just these two values, 0 and 1, or off and on.
There is no simple way to represent all the values in between, such as 0.50. All data that a computer processes must be encoded digitally, as a series of 0s and 1s. In general, digital means "computer-based". Specifically, digital describes any system based on discontinuous data or events; in the case of computers, it refers to communications signals or information represented in a two-state (binary) way using electronic or electromagnetic signals. Each 0 and 1 signal represents a bit.

Advantages of Digital Signal

The advantages of digital signal are given below: 1. Digital Data: Digital transmission has a clear advantage where binary computer data is being transmitted. The equipment required to convert digital data to analog format and transmit the bit streams over an analog network can be expensive, susceptible to failure, and can introduce errors into the information. 2. Compression: Digital data can be compressed relatively easily, thereby increasing the efficiency of transmission. As a result, substantial volumes of voice, data, video, and image information can be transmitted using relatively little raw bandwidth. 3. Security: Digital systems offer better security. While analog systems offer some measure of security through the scrambling of several frequencies, scrambling is fairly simple to defeat. Digital information, on the other hand, can be encrypted to create the appearance of a single, pseudorandom bit stream. Thereby, the true meaning of individual bits, sets of bits, or the total bit stream cannot be determined without the key to the encryption algorithm employed. 4. Quality: Digital transmission offers improved error performance (quality) as compared to analog. This is due to the devices that boost the signal at periodic intervals in the transmission system in order to overcome the effects of attenuation.
Additionally, digital networks deal more effectively with noise, which is always present in transmission networks. 5. Cost: The cost of the computer components required in digital conversion and transmission has dropped considerably, while the ruggedness and reliability of those components have increased over the years. 6. Upgradability: Since digital networks are composed of computer (digital) components, they are relatively easy to upgrade. Such upgrading can increase bandwidth, reduce the incidence of error, and enhance functional value. Some upgrading can be effected remotely over a network, eliminating the need to dispatch expensive technicians for that purpose. 7. Management: Generally speaking, digital networks can be managed much more easily and effectively because such networks consist of computerised components. Such components can sense their own level of performance, isolate and diagnose failures, initiate alarms, respond to queries, and respond to commands to correct any failure. Further, the cost of these components continues to drop.

Analog Signal

Most of the phenomena of the world are analog, continuously varying in strength and/or quality: fluctuating, evolving, or continually changing. Sound, light, temperature, and pressure values, for instance, can be anywhere on a continuum or range. The highs, lows, and in-between states have historically been represented with analog devices rather than in digital form. Examples of analog devices are a speedometer, a thermometer, and a tyre-pressure gauge, all of which can measure continuous fluctuations.

Advantages of Analog Signal

The advantages of analog signal are given below: 1. Cost Effective: Analog has an inherent advantage, as voice, image, and video are analog in nature. Therefore, the process of transmitting such information is relatively straightforward in an analog format, whereas conversion to a digital bit stream requires conversion equipment.
Such equipment increases cost, makes the system susceptible to failure, and can negatively affect the quality of the signal through the conversion process itself. 2. Bandwidth: A raw information stream consumes less bandwidth in analog form than in digital form. This is particularly evident in CATV transmission, where 50 or more analog channels routinely are provided over a single coaxial cable system. Without the application of compression techniques, only a few digital channels could be supported on the same cable system. 3. Presence: Analog transmission systems are already in place, worldwide interconnection of those systems is very common, and the standards are well established. As the majority of network traffic is voice, and the vast majority of voice terminals are analog devices, voice communications largely depend on analog networks. Conversion to digital networks would require expensive, wholesale conversion of such terminal equipment. 4. Analog transmission offers advantages in the transmission of analog information. Additionally, it is more bandwidth-conservative and is widely available. Humans experience most of the world in analog form—our vision, for instance, perceives shapes and colours as smooth gradations. But most analog events can be simulated digitally. Traditionally, electronic transmission of telephone, radio, television, and cable-TV signals has been analog. The electrical signals on a telephone line, for instance, have been analog data representations of the original voices, transmitted in the shape of a wave (called a carrier wave). Why bother to change analog signals into digital ones, especially since the digital representations are only approximations of analog events? The reason is that digital signals are easier to store and manipulate electronically. Let us discuss some common data communication terminologies.

Channel

A communication channel is a path along which data is transmitted.
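Before turning to the terminology, the central idea of the Digital Signal section, that all data must be encoded as a stream of 0s and 1s, can be made concrete with a minimal Python sketch (an illustration, not from the source text):

```python
def text_to_bits(text):
    # Each character becomes the 8 bits (one byte) of its UTF-8 code.
    return ''.join(format(byte, '08b') for byte in text.encode('utf-8'))

def bits_to_text(bits):
    # Regroup the 0s and 1s into bytes, then decode them back to text.
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode('utf-8')

bits = text_to_bits('Hi')
print(bits)                # 0100100001101001
print(bits_to_text(bits))  # Hi
```

Every bit here would travel over a real channel as one of two signal states, on or off.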
Baud

The communication data transfer rate is measured in a unit known as baud. Technically, baud refers to the number of signal (state) changes per second. In the simplest case, 1 baud represents 1 signal change per second and is equivalent to 1 bit per second; with more complex signaling schemes, each signal change can carry more than one bit, so baud and bits per second are not always equal.

Bandwidth

The bandwidth is the range, or band, of frequencies that a transmission medium can carry in a given period of time. For analog signals, bandwidth is expressed in hertz (Hz), or cycles per second. For example, certain cellphones operate within the range 824-849 Megahertz—that is, their bandwidth is 25 Megahertz. The wider a medium's bandwidth, the more frequencies it can use to transmit data and thus the faster the transmission. Broadband connections are characterized by very high speed. For digital signals, bandwidth can be expressed in hertz but also in bits per second (bps). For instance, the connections that carry the newest types of digital cellphone signals range from 144 Kilobits (144,000 bits) per second to 2 Megabits (2 million bits) per second. Digital cellphones may use the same radio spectrum frequencies as analog cellphones, but they transmit data faster because they use compressed digital signals, which can carry much more information than can analog signals.

Data Transfer Rate

The data transfer rate represents the amount of data transferred per second by a communications channel or a computing or storage device. We measure data transfer rate in bits per second (bps). The following terms are used: bits per second (bps); Kilobits per second (Kbps)—thousands of bits per second; Megabits per second (Mbps)—millions of bits per second; Gigabits per second (Gbps)—billions of bits per second; Terabits per second (Tbps)—trillions of bits per second.

Transmission Media

It used to be that two-way individual communications were accomplished mainly in two ways. They were carried by the medium of (1) a telephone wire or (2) a wireless method such as shortwave radio.
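Before surveying the media themselves, the bandwidth and data-rate arithmetic above is simple enough to sketch in Python; the 824-849 MHz band is the example from the text, and the prefixes are the standard decimal ones:

```python
# Bandwidth of an analog band is simply the width of its frequency range.
low_mhz, high_mhz = 824, 849          # the cellphone band from the text
bandwidth_mhz = high_mhz - low_mhz
print(bandwidth_mhz)                  # 25 (MHz)

# Decimal prefixes used for data transfer rates.
KBPS, MBPS, GBPS, TBPS = 10**3, 10**6, 10**9, 10**12
print(144 * KBPS)                     # 144000 bits per second, i.e. 144 Kbps
print(2 * MBPS)                       # 2000000 bits per second, i.e. 2 Mbps
```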
Today there are many kinds of communications media, although they are still either wired or wireless. Communications media carry signals over a communications path, the route between two or more communications media devices. The speed, or data transfer rate, at which transmission occurs—and how much data can be carried by a signal—depends on the media and the type of signal.

Wired Communications Media

Three types of wired communications media are twisted-pair wire (conventional telephone lines), coaxial cable, and fiber-optic cable.

Twisted-pair wire

The telephone line that runs from your house to the pole outside, or underground, is probably twisted-pair wire. Twisted-pair wire consists of two strands of insulated copper wire, twisted around each other. This twisted-pair configuration (compared to straight wire) somewhat reduces interference (called "crosstalk") from electrical fields. Twisted-pair is relatively slow, carrying data at the rate of 1-128 Megabits Per Second (Mbps). Moreover, it does not protect well against electrical interference. However, because so much of the world is already served by twisted-pair wire, it will no doubt be used for years to come, both for voice messages and for modem-transmitted computer data (dial-up connections). Figure 5 shows a twisted-pair wire.

Advantages

The main advantages of twisted-pair wire are given below: 1. It is simple. 2. It is relatively easy for telecommunications companies to upgrade the physical connections between cities and even between neighbourhoods. 3. It is physically flexible. 4. It has a low weight. 5. It is very inexpensive.

Disadvantages

The disadvantages of twisted-pair wire are given below: 1. It is expensive for telecommunications companies to replace the "last mile" (the distance from your home to your telephone's switching office, the local loop, is often called the "last mile") of twisted-pair wire that connects to individual houses. 2.
Due to high attenuation, it is not suitable for carrying a signal over long distances without using repeaters. 3. It is unsuitable for broadband applications, as it has low bandwidth capabilities.

Coaxial cable

Coaxial cable, commonly called "co-ax," consists of insulated copper wire wrapped in a solid or braided metal shield and then in an external plastic cover. Co-ax is widely used for cable television and cable internet connections. Thanks to the extra insulation, coaxial cable is much better than twisted-pair wiring at resisting noise. Moreover, it can carry voice and data at a faster rate (up to 200 Megabits Per Second). Often many coaxial cables are bundled together.

Advantages

The main advantages of coaxial cable are given below: 1. Its data transmission characteristics are far better than those of twisted-pair wiring. 2. It is widely used for cable television and cable internet connections. 3. It can be used for broadband transmission, i.e., several channels can be transmitted simultaneously.

Disadvantages

The disadvantages of coaxial cable are given below: 1. It is expensive as compared to twisted-pair wiring. 2. It is not compatible with twisted-pair wiring.

Fiber-optic Cable

A fiber-optic cable consists of dozens or hundreds of thin strands of glass or plastic that transmit pulsating beams of light rather than electricity. These strands, each as thin as a human hair, can transmit up to about 2 billion pulses per second (2 Gigabits); each "on" pulse represents 1 bit. When bundled together, fiber-optic strands in a cable 0.12 inch thick can support a quarter- to a half-million voice conversations at the same time. Moreover, unlike electrical signals, light pulses are not affected by random electromagnetic interference in the environment.

Advantages

The main advantages of fiber-optic cable are given below: 1. It has a much lower error rate than normal telephone wire and cable. 2. It is lighter and more durable than twisted-pair wire and co-ax cable. 3.
It cannot easily be wiretapped, so transmissions are more secure. 4. It can be used for broadband transmission.

Disadvantages

The disadvantages of fiber-optic cable are given below: 1. Installation is difficult. 2. It is the most expensive of all the cables and wires. 3. Connection losses are a common problem.

Wireless Communications Media

Four types of wireless media are infrared transmission, broadcast radio, microwave radio, and communications satellite.

Infrared Transmission

Infrared wireless transmission sends data signals using infrared-light waves, whose frequency is too low for human eyes to receive and interpret, at rates of 1-7 megabits per second. Infrared ports can be found on some laptop computers, digital cameras, and printers, as well as wireless mice.

Advantages

The main advantages of infrared transmission are given below: 1. It is used by TV remote-control units, automatic garage-door openers, wireless speakers, etc. 2. It is very secure.

Disadvantages

The disadvantages of infrared transmission are given below: 1. Line-of-sight communication is required—there must be an unobstructed view between transmitter and receiver. 2. Transmission is confined to short range.

Broadcast Radio

When you tune into an AM or FM radio station, you are using broadcast radio, a wireless transmission medium that sends data over long distances at up to 2 Megabits Per Second—between regions, states, or countries. A transmitter is required to send messages and a receiver to receive them; sometimes both sending and receiving functions are combined in a transceiver. In the lower frequencies of the radio spectrum, several broadcast radio bands are reserved not only for conventional AM/FM radio but also for broadcast television, cellphones, and private radio-band mobile services (such as police, fire, and taxi dispatch). Some organizations use specific radio frequencies and networks to support wireless communications.
For example, UPC (Universal Product Code) bar-code readers are used by grocery-store clerks restocking store shelves to communicate with a main computer so that the store can control inventory levels.

Advantages

The main advantages of broadcast radio are given below: 1. It provides mobility. 2. It is cheaper than digging trenches for laying cables and maintaining repeaters and cables if cables are damaged for various reasons. 3. It offers freedom from the landowner rights that are required for laying and repairing cables and wires. 4. It offers ease of communication.

Disadvantages

The disadvantages of broadcast radio are given below: 1. It is an insecure form of communication. 2. It is susceptible to weather effects like rain, thunderstorms, etc.

Microwave Radio

Microwave radio transmits voice and data at 75 Megabits Per Second through the atmosphere as superhigh-frequency radio waves called microwaves, which vibrate at 1 gigahertz (1 billion cycles per second) or higher. These frequencies are used not only to operate microwave ovens but also to transmit messages between ground-based stations and satellite communications systems. Nowadays dish- or horn-shaped microwave reflective dishes, which contain transceivers and antennas, are nearly everywhere—on towers, buildings, and hilltops. Why, you might wonder, do we have to interfere with nature by putting a microwave dish on top of a mountain? As with infrared waves, microwaves are line-of-sight; they cannot bend around corners or around the earth's curvature, so there must be an unobstructed view between transmitter and receiver. Thus, microwave stations need to be placed within 25-30 miles of each other, with no obstructions in between. The size of the dish varies with the distance (perhaps 2-4 feet in diameter for short distances, 10 feet or more for long distances). In a string of microwave relay stations, each station will receive incoming messages, boost the signal strength, and relay the signal to the next station.
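The 25-30 mile spacing follows from the earth's curvature. A common radio-engineering rule of thumb (an assumption brought in here, not stated in the text) puts the radio horizon at roughly 1.41 times the square root of the antenna height in feet, giving the distance in miles:

```python
import math

def radio_horizon_miles(antenna_height_ft):
    # Standard rule of thumb using the 4/3-earth-radius model:
    # radio horizon (miles) ~= 1.41 * sqrt(antenna height in feet).
    return 1.41 * math.sqrt(antenna_height_ft)

# Two towers can see each other at roughly the sum of their horizons,
# so a pair of 100-foot towers can be spaced about:
spacing = 2 * radio_horizon_miles(100)
print(round(spacing))  # 28 (miles), consistent with the 25-30 mile figure
```

This is why microwave dishes end up on towers, buildings, and hilltops: extra height buys extra distance.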
More than half of today's telephone systems use dish microwave transmission. However, the airwaves are becoming so saturated with microwave signals that future needs will have to be satisfied by other channels, such as satellite systems. Advantages The main advantages of microwave radio are given below: 1. It is cheaper than digging trenches to lay cables and maintaining repeaters and cables if the cables are damaged. 2. It offers freedom from the land-owner rights required for laying and repairing cables and wires. 3. It offers ease of communication. 4. Microwave radio can communicate over oceans. Disadvantages The disadvantages of microwave radio are given below: 1. Communication is insecure. 2. The signal strength may be reduced by improper antenna alignment. 3. It is susceptible to weather effects such as rain and thunderstorms. 4. It has extremely limited bandwidth allocation. 5. It has high costs of design, implementation, and maintenance. Communications Satellites To avoid some of the limitations of microwave earth stations, communications companies have added microwave "sky stations"—communications satellites. Communications satellites are microwave relay stations in orbit around the earth. Transmitting a signal from a ground station to a satellite is called uplinking; the reverse is called downlinking. The delivery process will be slowed if, as is often the case, more than one satellite is required to get the message delivered. Satellite systems may occupy one of three zones in space: GEO, MEO, and LEO. The highest level, known as geostationary earth orbit (GEO), is 22,300 miles and up and is always directly above the equator. Because the satellites in this orbit travel at the same speed as the earth rotates, they appear to an observer on the ground to be stationary in space—that is, they are geostationary. Consequently, microwave earth stations are always able to beam signals to a fixed location above. 
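The GEO altitude just quoted also explains the transmission delay these satellites introduce. A quick sketch, assuming straight-line propagation at the speed of light between ground stations directly below the satellite:

```python
SPEED_OF_LIGHT_MPS = 186_282   # speed of light, in miles per second
GEO_ALTITUDE_MILES = 22_300    # geostationary earth orbit

def relay_delay_seconds(altitude_miles):
    """Uplink plus downlink propagation time through one satellite."""
    return 2 * altitude_miles / SPEED_OF_LIGHT_MPS

# One GEO hop costs roughly a quarter of a second:
print(round(relay_delay_seconds(GEO_ALTITUDE_MILES), 2))  # about 0.24
```

The same function shows why low orbits have no noticeable delay: at 500 miles up, one hop costs only about 5 milliseconds.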
The orbiting satellite has solar-powered transceivers to receive the signals, amplify them, and retransmit them to another earth station. At this high orbit, fewer satellites are required for global coverage; however, their quarter-second delay makes two-way conversations difficult. The medium-earth orbit (MEO) is 5,000-10,000 miles up. It requires more satellites for global coverage than does GEO. The low-earth orbit (LEO) is 200-1,000 miles up and has no appreciable signal delay. LEO satellites may be smaller and are cheaper to launch. Even so, satellites are very expensive to build, and launching one is also costly. India recently made history by launching many satellites together. Advantages The main advantages of communications satellites are given below: 1. A satellite covers quite a large area. 2. It is the best alternative where laying and maintaining an intercontinental cable is difficult and expensive. 3. It is commercially attractive. 4. Because of the large coverage area, it is very useful for sparsely populated areas (i.e., areas having long distances between houses). Disadvantages The disadvantages of communications satellites are given below: 1. Technological limitations prevent the deployment of large, high-gain antennas on the satellite platform. 2. Available bandwidths become overcrowded because of low antenna gains. 3. Investment and insurance costs are very high because of the chance of failure. 4. Atmospheric losses are high above 30 GHz carrier frequencies. Networks A network, or communications network, is a system of interconnected computers, telephones, or other communications devices that can communicate with one another and share applications and data. The tying together of so many communications devices in so many ways is changing the world we live in. Network Users To use a network, you first log on with an ID and a password. The ID is assigned by the network administrator. The password is selected by you and can (and should) be changed frequently to improve security. 
Because network software can be customized, what you see when you log on depends on the system's design. Normally you will find printers, hard disks, and other shared assets listed in your system's dialog boxes even though they are located elsewhere on the network. Advantages of Networks People and organizations use networks for the following reasons, the most important of which is the sharing of resources. Sharing of peripheral devices Peripheral devices such as laser printers, disk drives, and scanners can be expensive. Consequently, to justify their purchase, management wants to maximize their use. Usually the best way to do this is to connect the peripheral to a network serving several computer users. Sharing of programs and data In most organizations, people use the same software and need access to the same information. It is less expensive for a company to buy one word processing program that serves many employees than to buy a separate word processing program for each employee. Moreover, if all employees have access to the same data on a shared storage device, the organization can save money and avoid serious problems. If each employee has a separate machine, some employees may update customer addresses while others remain ignorant of the changes. Updating information on a shared server is much easier than updating every user's individual system. Finally, network-linked employees can more easily work together online on shared projects. Better communications One of the greatest features of networks is electronic mail. With e-mail, everyone on a network can easily keep others posted about important information. Security of information Before networks became commonplace, an individual employee might be the only one with a particular piece of information, which was stored in his or her desktop computer. If the employee was dismissed—or if a fire or flood demolished the office—the company would lose that information. 
Today such data would be backed up or duplicated on a networked storage device shared by others. Access to databases Networks enable users to tap into numerous databases, whether private company databases or public databases available online through the Internet. Disadvantages of Networks The disadvantages of networking are given below: Crashes The main disadvantage appears on a server-based network: when the server crashes, work is disrupted because all network resources and their benefits are lost. Proper precautions, particularly regular backups, are therefore needed, since a crash may result in the loss of important data and a waste of time. Data security problems In a network, generally, all the data resources are pooled together. So it is quite possible for unauthorized personnel to access classified information if network security is weak or poorly implemented. Lack of privacy A network may also result in loss of privacy, as anyone with the right network privileges, especially your senior, may read or even destroy your private e-mail messages. Networks, which consist of various combinations of computers, storage devices, and communications devices, may be divided into three main categories, differing primarily in their geographic range. Local Area Network A Local Area Network (LAN) connects computers and devices in a limited geographic area, such as one office, one building, or a group of buildings close together (for instance, a college campus). A small LAN in a modest office, or even in a home, might link a file server with a few terminals or PCs and a printer or two. Such small LANs are sometimes called PANs, for "personal area networks." Metropolitan Area Network A Metropolitan Area Network (MAN) is a communications network covering a city or a suburb. The purpose of a MAN is often to bypass local telephone companies when accessing long-distance services. Many cellphone systems are MANs. 
Wide Area Network A Wide Area Network (WAN) is a communications network that covers a wide geographic area, such as a country or the world. Most long-distance and some regional telephone companies are WANs. A WAN may use a combination of satellite, fiber-optic cable, microwave, and copper-wire connections and link a variety of computers, from mainframes to terminals. WANs are used to connect local area networks together, so that users and computers in one location can communicate with users and computers in other locations. A wide area network may be privately owned or rented, but the term usually connotes the inclusion of public (shared-user) networks. The best example of a WAN is the Internet. Most large computer networks have at least one host computer, a mainframe or midsize central computer that controls the network. The other devices within the network are called nodes. A node is any device that is attached to a network—for example, a microcomputer, terminal, storage device, or printer. Networks may be connected together—LANs to MANs and MANs to WANs. Backbones are high-speed networks that connect LANs and MANs to the Internet. Networks can be laid out in different ways. The logical layout, or shape, of a network is called a topology. The various topologies are given below: Bus Topology The bus topology works like a bus system at rush hour, with various buses pausing in different bus zones to pick up passengers. In a bus topology, all communications devices are connected to a common channel. (See Figure 22.) Bus Topology (A single channel connects all communications devices.) In a bus topology, all nodes are connected to a single wire or cable, the bus, which has two endpoints. Each communications device on the network transmits electronic messages to other devices. If some of those messages collide, the sending device waits and tries to transmit again. Advantages of Bus Topology The advantages of bus topology are given below: 1. 
It may be organized as a client/server or peer-to-peer network. 2. It is very simple to set up and needs a shorter cable length than ring topology. 3. It is simple to install because of its simple architecture. It is an older topology, so technicians for bus topology are easily available. 4. It is very easy to expand. All you have to do is decide the point of installation of the new node and connect it with a T connector. Disadvantages of Bus Topology The disadvantages of bus topology are given below: 1. Extra circuitry and software are needed to avoid collisions between data. 2. If a connection in the bus is broken—as when someone moves a desk and knocks the connection out—the entire network may stop working. 3. When the cable grows beyond a certain length, the network becomes slow, as the signals lose their power over the long run. 4. Complex protocols are used to decide who will be the next sender when the current sender finishes its transmission. Ring Topology A ring topology is one in which all microcomputers and other communications devices are connected in a continuous loop. (See Figure 23.) There are no endpoints. Ring Topology (This arrangement connects the network's devices in a closed loop.) Electronic messages are passed around the ring until they reach the right destination. There is no central server. An example of a ring network is IBM's Token Ring Network, in which a bit pattern (called a "token") determines which user on the network can send information. Advantages of Ring Topology The advantages of ring topology are given below: 1. A shorter cable length is needed than in star topology. 2. Each node is connected to the next by a single connection. 3. Messages flow in only one direction, so there is no danger of collisions. 4. Ring topology delivers fast and efficient performance. 5. It is suitable for setting up a high-speed network using optical fibres. 
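The token mechanism mentioned above can be illustrated with a toy sketch. The node names and messages here are invented for illustration; real Token Ring frames, priorities, and timers are far more involved:

```python
# A toy model of token passing on a ring: only the node currently
# holding the token may transmit, so messages never collide.

nodes = ["A", "B", "C", "D"]            # connected in a closed loop
pending = {"B": "hello from B", "D": "hello from D"}

token_at = 0                            # index of the node holding the token
sent = []
for _ in range(len(nodes)):             # pass the token once around the ring
    holder = nodes[token_at]
    if holder in pending:               # only the token holder transmits
        sent.append(pending.pop(holder))
    token_at = (token_at + 1) % len(nodes)  # hand the token to the next node

print(sent)  # each message was sent in turn, with no collisions
```

Contrast this with the bus topology above, where devices transmit whenever they like and must detect collisions and retry.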
Disadvantages of Ring Topology The disadvantages of ring topology are given below: 1. If a connection is broken, the entire network stops working. The network cannot be operational until the complete ring is restored. 2. As a single channel is shared by the various nodes, it is difficult to diagnose a fault. 3. It is difficult to add, remove, or reconfigure nodes in the network. 4. In case of node failure, bypassing the traffic requires costly devices. Star Topology A star topology is one in which all microcomputers and other communications devices are directly connected to a central server. (See Figure 24.) Star Topology (This arrangement connects all the network's devices to a central host computer, through which all communications must pass.) Electronic messages are routed through the central hub to their destinations. The central hub monitors the flow of traffic. A PBX system is an example of a star network. Traditional star networks are designed to be easily expandable because hubs can be connected to additional hubs of other networks. Advantages of Star Topology The advantages of star topology are given below: 1. It is easy to install, as each node has a single connection to the central device, called the hub. The hub prevents collisions between messages. 2. Only one connection per node is needed to install a node in the network. 3. If a connection is broken between any communications device and the hub, the rest of the devices on the network will continue operating. 4. It allows easy management of the network. 5. It is very easy to detect faults in the network. 6. It uses simple communication protocols. Disadvantages of Star Topology The disadvantages of star topology are given below: 1. In star topology, each node is connected individually to the hub. This requires a large quantity of cable, which in turn increases the cost of the network. 2. The hub offers a limited number of connections. 
Therefore, the network can be expanded only up to a certain limit, after which a new hub is needed and we have to go for tree topology. 3. The working of the network depends on the working of the hub. If the hub goes down, the entire network will stop. Despite these drawbacks, star topology is generally the most popular topology for small LANs. Tree Topology Tree topology is another popular topology, which is suitable for networks having a hierarchical flow of data. By the term "hierarchical flow", we mean that the data travels level by level: it starts from one level, travels one level down, and then continues to subsequent levels. (See Figure 25.) In the tree topology, computers are connected like an inverted tree. The server or host computer is connected at the top. To the server, the most important terminals are attached at the next level, and to these terminals, clients are attached. Data can flow from top to bottom and bottom to top in a level-by-level manner. It is an extension of a star topology network. Mesh Topology In mesh topology, each node is connected to more than one node in the system. In this way, there exist multiple paths between two nodes of the network. In case of failure of one path, another can be used. The mesh topology is used in networks spread over a wide geographical area of several kilometres. These kinds of networks have storage capabilities at intermediate nodes. Special devices called "routers" are used to decide a route for an incoming packet and send it toward its destination. Mesh topology is generally used in internetworks that connect multiple LANs. (See Figure 26.) Fully Connected This is a topology in which each node is connected directly to every other node of the network through an individual connection. This kind of topology is extremely costly to implement and maintain. It is used only rarely, in environments where crucial data is required with lightning speed. (See Figure 27.) 
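The cabling costs of the topologies above can be compared with a small sketch. The counts are the standard ones implied by the descriptions, not figures from this text; in particular, a fully connected network of n nodes needs a link for every pair, which is n(n-1)/2:

```python
def links_required(topology, n):
    """Number of connections needed to wire n nodes,
    for the topologies discussed above."""
    if topology == "bus":
        return n                      # one tap per node onto the shared cable
    if topology == "ring":
        return n                      # each node links to the next, closing the loop
    if topology == "star":
        return n                      # one cable from each node to the hub
    if topology == "fully_connected":
        return n * (n - 1) // 2       # every pair of nodes gets its own link
    raise ValueError(topology)

for t in ("bus", "ring", "star", "fully_connected"):
    print(t, links_required(t, 10))
```

For 10 nodes, bus, ring, and star each need 10 connections, while a fully connected layout needs 45; the quadratic growth is why that topology is described above as extremely costly.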
Graph Topology Graph topology is also very rarely used in networks. In this topology, each node may or may not be connected to the other nodes. There is no rule about the structure and working of a graph topology. If there exists a path between each pair of nodes of the graph, then we say that it is a connected graph. (See Figure 28.) Let us now discuss private internet networks. Intranets—for internal use only An intranet is an organization's internal private network that uses the infrastructure and standards of the Internet and the web. When a corporation develops a public website, it is making selected information available to consumers and other interested parties. When it creates an intranet, it enables employees to have quicker access to internal information and to share knowledge so that they can do their jobs better. Information exchanged on intranets may include employee e-mail addresses and telephone numbers, product information, sales data, employee benefit information, and lists of jobs available within the organization. Extranets—for certain outsiders Taking intranet technology a few steps further, extranets offer security and controlled access. As we have seen, intranets are internal systems, designed to connect the members of a specific group or a single company. By contrast, extranets are private intranets that connect not only internal personnel but also selected suppliers and other strategic parties. Extranets have become popular for standard transactions such as purchasing. Ford Motor Company, for instance, has an extranet that connects more than 15,000 Ford dealers worldwide. Called FocalPt, the extranet supports sales and servicing of cars, with the aim of improving service to Ford customers. Firewalls—to keep out unauthorized users Security is essential to an intranet (or even an extranet). Sensitive company data, such as payroll information, must be kept private, by means of a firewall. 
A firewall is a system of hardware and software that blocks unauthorized users inside and outside the organization from entering the intranet. The firewall software monitors all Internet and other network activity, looking for suspicious data and preventing unauthorized access. Always-on Internet connections such as cable modem and DSL, as well as WiFi devices, are particularly susceptible to unauthorized intrusion, so users are advised to install a firewall. A firewall consists of two parts, a choke and a gate. The choke forces all data packets flowing between the Internet and the intranet to pass through a gate. The gate regulates the flow between the two networks; it identifies authorized users, searches for viruses, and implements other security measures. Thus, intranet users can gain access to the Internet (including key sites connected by hyperlinks), but outside Internet users cannot enter the intranet. Virtual Private Network Wide-area networks use leased lines of various bandwidths. Maintaining a WAN can be expensive, especially as distances between offices increase. To decrease communications costs, some companies have established Virtual Private Networks (VPNs), private networks that use a public network (usually the Internet) to connect remote sites. (See Figure 30.) Company intranets, extranets, and LANs can all be parts of a VPN. The ISP's local access number for your area is its point of presence (POP).
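The choke-and-gate model described above can be sketched as a toy packet filter. The packet format, user list, and virus signatures here are all invented for illustration; real firewalls inspect packet headers and payloads at far greater depth:

```python
# A toy choke-and-gate firewall: the choke forces every packet
# through the gate, and the gate decides what may cross.

AUTHORIZED_USERS = {"alice", "bob"}       # hypothetical intranet accounts
VIRUS_SIGNATURES = {b"EVIL_PAYLOAD"}      # hypothetical signature database

def gate(packet):
    """The gate: identify authorized users and search for viruses."""
    if packet["user"] not in AUTHORIZED_USERS:
        return False                      # unknown user: reject
    if any(sig in packet["data"] for sig in VIRUS_SIGNATURES):
        return False                      # known virus signature: reject
    return True

def choke(packets):
    """The choke: force all traffic through the gate; only
    approved packets reach the other side."""
    return [p for p in packets if gate(p)]

traffic = [
    {"user": "alice",    "data": b"quarterly report"},
    {"user": "intruder", "data": b"hello"},
    {"user": "bob",      "data": b"EVIL_PAYLOAD attached"},
]
print([p["user"] for p in choke(traffic)])  # only 'alice' gets through
```

The intruder is dropped for lacking an authorized account, and bob's packet is dropped because it carries a known signature, even though bob himself is authorized.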
