CC 10.1 - MODULE 1

Lesson 1.1: History of Computers

History of Computing

A computer might be described with deceptive simplicity as "an apparatus that performs routine calculations automatically." Such a definition would owe its deceptiveness to a naive and narrow view of calculation as a strictly mathematical process. In fact, calculation underlies many activities that are not normally thought of as mathematical. Walking across a room, for instance, requires many complex, albeit subconscious, calculations. Computers, too, have proved capable of solving a vast array of problems, from balancing a checkbook to even—in the form of guidance systems for robots—walking across a room.

Before the true power of computing could be realized, therefore, the naive view of calculation had to be overcome. The inventors who labored to bring the computer into the world had to learn that the thing they were inventing was not just a number cruncher, not merely a calculator. For example, they had to learn that it was not necessary to invent a new computer for every new calculation and that a computer could be designed to solve numerous problems, even problems not yet imagined when the computer was built. They also had to learn how to tell such a general problem-solving computer what problem to solve. In other words, they had to invent programming. They had to solve all the heady problems of developing such a device, of implementing the design, of actually building the thing. The history of the solving of these problems is the history of the computer.

The Abacus

The earliest known calculating device is probably the abacus. It dates back at least to 1100 BCE and is still in use today, particularly in Asia. Now, as then, it typically consists of a rectangular frame with thin parallel rods strung with beads. Long before any systematic positional notation was adopted for the writing of numbers, the abacus assigned different units, or weights, to each rod. This scheme allowed a wide range of numbers to be represented by just a few beads and, together with the invention of zero in India, may have inspired the invention of the Hindu-Arabic number system. In any case, abacus beads can be readily manipulated to perform the common arithmetical operations—addition, subtraction, multiplication, and division—that are useful for commercial transactions and in bookkeeping.

From Napier's Logarithms to the Slide Rule

Calculating devices took a different turn when John Napier, a Scottish mathematician, published his discovery of logarithms in 1614. As any person can attest, adding two 10-digit numbers is much simpler than multiplying them together, and the transformation of a multiplication problem into an addition problem is exactly what logarithms enable. This simplification is possible because of the following logarithmic property: the logarithm of the product of two numbers is equal to the sum of the logarithms of the numbers. By 1624, tables with 14 significant digits were available for the logarithms of numbers from 1 to 20,000, and scientists quickly adopted the new labour-saving tool for tedious astronomical calculations. Most significant for the development of computing, the transformation of multiplication into addition greatly simplified the possibility of mechanization. Analog calculating devices based on Napier's logarithms—representing digital values with analogous physical lengths—soon appeared.
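As a quick illustration of the logarithmic property the lesson relies on, here is a minimal Python sketch (the two 10-digit numbers are arbitrary examples):

```python
import math

# log10(a * b) == log10(a) + log10(b): multiplication becomes addition.
a, b = 6_543_219_870, 1_234_567_890        # two arbitrary 10-digit numbers

log_sum = math.log10(a) + math.log10(b)    # the only "hard" step is an addition
approx_product = 10 ** log_sum             # the antilog recovers the product

print(approx_product)   # about 8.078e18, accurate to floating-point precision
print(a * b)            # exact product, for comparison
```

This is exactly the labour log tables and slide rules saved: look up two logarithms, add them, and look up the antilog.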
In 1620 Edmund Gunter, the English mathematician who coined the terms cosine and cotangent, built a device for performing navigational calculations: the Gunter scale, or, as navigators simply called it, the gunter. About 1632 an English clergyman and mathematician named William Oughtred built the first slide rule, drawing on Napier's ideas. That first slide rule was circular, but Oughtred also built the first rectangular one in 1633. The analog devices of Gunter and Oughtred had various advantages and disadvantages compared with digital devices such as the abacus. What is important is that the consequences of these design decisions were being tested in the real world.

Schickard's Calculator (1623)

Wilhelm Schickard was credited with inventing the first adding machine after Dr. Franz Hammer, a biographer of Johannes Kepler, claimed that drawings of a calculating clock had been discovered in two letters written by Schickard to Kepler in 1623 and 1624. Schickard's "Calculating Clock" is composed of a multiplying device, a mechanism for recording intermediate results, and a 6-digit decimal adding device.

Pascaline (1642)

The Pascaline, also called the Arithmetic Machine, was the first calculator or adding machine to be produced in any quantity and actually used. It was designed and built by the French mathematician-philosopher Blaise Pascal between 1642 and 1644. It could only do addition and subtraction, with numbers being entered by manipulating its dials. Pascal invented the machine for his father, a tax collector, so it was the first business machine too (if one does not count the abacus). He built 50 of them over the next 10 years.

Leibniz Calculator (1673)

In 1671 Gottfried Leibniz set out to improve on Pascal's calculator, designing his own machine that could perform multiplication and division as well as addition and subtraction. The machine, known as the Leibniz wheel, the stepped drum, or the Stepped Reckoner, became the basis of a whole class of mechanical calculators. Leibniz's calculator is also regarded as the first true four-function calculator.

De Colmar's Arithmometer (1820)

In 1820 Charles Xavier Thomas of Alsace, an entrepreneur in the insurance industry, invented the arithmometer, the first commercially produced adding machine, presumably to speed up, and make more accurate, the enormous amount of daily computation insurance companies required. Remarkably, Thomas received almost immediate acknowledgment for this invention, as he was made Chevalier of the Legion of Honor only one year later, in 1821. At this time he changed his name to Charles Xavier Thomas de Colmar, later abbreviated to Thomas de Colmar.

Difference Engine (1822)

The Difference Engine, an early calculating machine verging on being the first computer, was designed and partially built during the 1820s and '30s by Charles Babbage. Babbage was an English mathematician and inventor; he invented the cowcatcher, reformed the British postal system, and was a pioneer in the fields of operations research and actuarial science. It was Babbage who first suggested that the weather of years past could be read from tree rings. He also had a lifelong fascination with keys, ciphers, and mechanical dolls.
As a founding member of the Royal Astronomical Society, Babbage had seen a clear need to design and build a mechanical device that could automate long, tedious astronomical calculations. He began by writing a letter in 1822 to Sir Humphry Davy, president of the Royal Society, about the possibility of automating the construction of mathematical tables—specifically, logarithm tables for use in navigation. He then wrote a paper, "On the Theoretical Principles of the Machinery for Calculating Tables," which he read to the society later that year. (It won the Royal Society's first Gold Medal in 1823.) Tables then in use often contained errors, which could be a life-and-death matter for sailors at sea, and Babbage argued that, by automating the production of the tables, he could assure their accuracy. Having gained support in the society for his Difference Engine, as he called it, Babbage next turned to the British government to fund development, obtaining one of the world's first government grants for research and technological development.

Analytical Engine (1834)

The Analytical Engine, generally considered the first computer, was designed and partly built by the English inventor Charles Babbage in the 19th century (he worked on it until his death in 1871). While working on the Difference Engine, a simpler calculating machine commissioned by the British government, Babbage began to imagine ways to improve it. Chiefly he thought about generalizing its operation so that it could perform other kinds of calculations. By the time funding ran out for his Difference Engine in 1833, he had conceived of something far more revolutionary: a general-purpose computing machine called the Analytical Engine.

The Analytical Engine was to be a general-purpose, fully program-controlled, automatic mechanical digital computer. It would be able to perform any calculation set before it. There is no evidence that anyone before Babbage had ever conceived of such a device, let alone attempted to build one. The machine was designed to consist of four components: the mill, the store, the reader, and the printer. These components are the essential components of every computer today. The mill was the calculating unit, analogous to the central processing unit (CPU) in a modern computer; the store was where data were held prior to processing, exactly analogous to memory and storage in today's computers; and the reader and printer were the input and output devices.

First Generation Computers

The period of the first generation was 1946-1959. The computers of the first generation used vacuum tubes as the basic components for memory and for the circuitry of the CPU (Central Processing Unit). These tubes, like electric bulbs, produced a lot of heat, and the installations fused frequently. Therefore, they were very expensive, and only large organizations were able to afford them. In this generation, mainly batch processing operating systems were used. Punch cards, paper tape, and magnetic tape were used as input and output devices. The computers in this generation used machine code as the programming language.

The main features of the first generation are:
- Vacuum tube technology
- Unreliable
- Supported machine language only
- Very costly
- Generated a lot of heat
- Slow input and output devices
- Huge size
- Need of AC
- Non-portable
- Consumed a lot of electricity

Some computers of this generation were:
- ENIAC
- EDVAC
- UNIVAC
- IBM-701
- IBM-750

Second Generation Computers

The period of the second generation was 1959-1965.
In this generation, transistors were used; they were cheaper, consumed less power, were more compact in size, and were more reliable and faster than the first-generation machines made of vacuum tubes. In this generation, magnetic cores were used as the primary memory, and magnetic tape and magnetic disks as secondary storage devices. Assembly language and high-level programming languages like FORTRAN and COBOL were used. The computers used batch processing and multiprogramming operating systems.

The main features of the second generation are:
- Use of transistors
- Reliable in comparison to first generation computers
- Smaller size as compared to first generation computers
- Generated less heat as compared to first generation computers
- Consumed less electricity as compared to first generation computers
- Faster than first generation computers
- Still very costly
- AC required
- Supported machine and assembly languages

Some computers of this generation were:
- IBM 1620
- IBM 7094
- CDC 1604
- CDC 3600
- UNIVAC 1108

Third Generation Computers

The period of the third generation was 1965-1971. The computers of the third generation used Integrated Circuits (ICs) in place of transistors. A single IC has many transistors, resistors, and capacitors along with the associated circuitry. The IC was invented by Jack Kilby. This development made computers smaller in size, more reliable, and more efficient. In this generation, remote processing, time-sharing, and multiprogramming operating systems were used. High-level languages (FORTRAN II to IV, COBOL, PASCAL, PL/1, BASIC, ALGOL-68, etc.) were used during this generation.

The main features of the third generation are:
- ICs used
- More reliable in comparison to previous two generations
- Smaller size
- Generated less heat
- Faster
- Less maintenance
- Costly
- AC required
- Consumed less electricity
- Supported high-level languages

Some computers of this generation were:
- IBM-360 series
- Honeywell-6000 series
- PDP (Programmed Data Processor) series
- IBM-370/168
- TDC-316

Fourth Generation Computers

The period of the fourth generation was 1971-1980. Computers of the fourth generation used Very Large Scale Integrated (VLSI) circuits. VLSI circuits, having about 5,000 transistors and other circuit elements with their associated circuits on a single chip, made it possible to build the microcomputers of the fourth generation. Fourth generation computers became more powerful, compact, reliable, and affordable. As a result, they gave rise to the Personal Computer (PC) revolution. In this generation, time-sharing, real-time networks, and distributed operating systems were used. High-level languages like C, C++, dBASE, etc., were used in this generation.

The main features of the fourth generation are:
- VLSI technology used
- Very cheap
- Portable and reliable
- Use of PCs
- Very small size
- Pipeline processing
- No AC required
- Concept of the Internet was introduced
- Great developments in the field of networks
- Computers became easily available

Some computers of this generation were:
- DEC 10
- STAR 1000
- PDP 11
- CRAY-1 (supercomputer)
- CRAY-X-MP (supercomputer)

Lesson 1.2: Digital Devices, Hardware and Software, Operating Systems

Fundamental Concepts of Computer Systems

A computer is an electrical appliance that performs predetermined tasks. A computer system is a group of distinct objects that work together to perform a task. It can equally be defined as an IPO (Input-Process-Output) system as follows: a set of electronic appliances that accepts (input), processes (processing), stores (storage), outputs (output) and communicates information based on a predetermined program. This definition differentiates four basic functions of a standalone computer (Input, Processing, Storage and Output) and five functions of a networked computer (Input, Output, Storage, Processing, and Communication).
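To make the IPO model concrete, here is a minimal Python sketch; the summing task and the file name "total.txt" are illustrative assumptions, not part of the lesson:

```python
# Each stage of the Input-Process-Output (plus Storage) model as a step.
def main() -> None:
    raw = input("Enter amounts separated by spaces: ")   # Input
    amounts = [float(token) for token in raw.split()]    # Process: parse
    total = sum(amounts)                                 # Process: compute
    with open("total.txt", "w") as f:                    # Storage (hypothetical file)
        f.write(str(total))
    print(f"Total = {total}")                            # Output

if __name__ == "__main__":
    main()
```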
What makes a computer a computer? For starters, computers are electronic devices that handle information and data. From digital calculators to cellphones to supercomputers, they all share the same elements of input, process, output and storage. A fun fact: our brains are technically considered computers too! Maybe not in the electronic sense, but they share similar elements with their electronic counterparts; think of the brain as the equivalent of the computer's CPU (Central Processing Unit).

Compared to computers in the past, technology has drastically changed over the course of many decades. You might be asking yourselves why computers were so large back then. One answer is that the technology was still emerging; manufacturers could only make large components compared to today, because that was all they knew at the time. Can we blame them? Of course not! Just as with concepts or prototypes, changes will keep being made, from design to hardware. There's always room for improvement.

Hardware vs Software

So what are hardware and software? Hardware is the set of physical components that a computer system requires to function. A simple computer contains the following hardware: a case, CPU (central processing unit), monitor, mouse, keyboard, computer data storage, graphics card, sound card, speakers and motherboard. Software is a collection of instructions and data that tell a computer how to work. This is in contrast to the physical hardware, from which the system is built and which actually performs the work. What examples are considered software? An Operating System (OS), settings, an Internet browser, and a movie player, just to name a few. Just like a pair of shoes, if one is missing then the other is practically useless. Hardware and software must work together to fulfil their purpose.

Lesson 1.3: Data Communications and Networking

Introduction

Originally all computers and other information processing devices stood alone, and their information was available only to those who had direct connections to them. Since the value of data increases dramatically when it is easier to collect and distribute, computers and other information processing devices are now connected to one another in networks. Digital convergence is the fusion of computer and communications technologies. Today's new information environment came about gradually from the merger of two streams of technological development—computers and communications.

Information flows through networks almost everywhere. In some places, the signs of its flow are evident. Cables run every which way on telephone poles, dishes are mounted on the sides of buildings, and large antennas occupy the high ground. But in most places the signs are hidden. Cables run underground and invisible broadcast signals fill the air. You walk through a thick soup of data that flows from point to point. Some of the information being sent is familiar, for example, telephone conversations and radio and television broadcasts. Much of it is not public, for example, the signals from an ATM while someone makes a withdrawal. Whatever the information is and wherever and however it is flowing, it is being communicated with modern information technology, and computers are involved at many points in the process.
Data Transmission

Digital Signal

Computers are based on on/off electrical states because they use the binary number system, which consists of only two digits, 0 and 1. At their most basic level computers can distinguish between just these two values, 0 and 1, or off and on. There is no simple way to represent all the values in between, such as 0.50. All data that a computer processes must be encoded digitally, as a series of 0s and 1s. In general, digital means "computer-based". Specifically, digital describes any system based on discontinuous data or events; in the case of computers, it refers to communications signals or information represented in a two-state (binary) way using electronic or electromagnetic signals. Each 0 and 1 signal represents a bit.

Advantages of Digital Signal

The advantages of digital signal are given below:

1. Digital data: Digital transmission certainly has the advantage where binary computer data is being transmitted. The equipment required to convert digital data to analog format and transmit the digital bit streams over an analog network can be expensive, susceptible to failure, and can create errors in the information.

2. Compression: Digital data can be compressed relatively easily, thereby increasing the efficiency of transmission. As a result, substantial volumes of voice, data, video, and image information can be transmitted using relatively little raw bandwidth.

3. Security: Digital systems offer better security. While analog systems offer some measure of security through the scrambling of several frequencies, scrambling is fairly simple to defeat. Digital information, on the other hand, can be encrypted to create the appearance of a single, pseudorandom bit stream. Thereby, the true meaning of individual bits, sets of bits, or the total bit stream cannot be determined without the key to unlock the encryption algorithm employed.

4. Quality: Digital transmission offers improved error performance (quality) as compared to analog. This is due to the devices that boost the signal at periodic intervals in the transmission system in order to overcome the effects of attenuation. Additionally, digital networks deal more effectively with noise, which is always present in transmission networks.

5. Cost: The cost of the computer components required in digital conversion and transmission has dropped considerably, while the ruggedness and reliability of those components have increased over the years.

6. Upgradability: Since digital networks are composed of computer (digital) components, they are relatively easy to upgrade. Such upgrading can increase bandwidth, reduce the incidence of errors and enhance functional value. Some upgrading can be effected remotely over a network, eliminating the need to dispatch expensive technicians for that purpose.

7. Management: Generally speaking, digital networks can be managed much more easily and effectively because such networks consist of computerised components. Such components can sense their own level of performance, isolate and diagnose failures, initiate alarms, respond to queries and respond to commands to correct any failure. Further, the cost of these components continues to drop.
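Before turning to analog signals, here is a minimal Python sketch of the two-state encoding idea described above; the message text is an arbitrary example:

```python
# Encode a short message as the series of 0s and 1s a digital channel
# actually carries: 8 bits per ASCII character.
message = "Hi"
bits = "".join(f"{byte:08b}" for byte in message.encode("ascii"))
print(bits)                  # 0100100001101001

# Decode the bit stream back into text to show the round trip.
chars = [chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8)]
print("".join(chars))        # Hi
```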
Analog Signal

Most of the phenomena of the world are analog, continuously varying in strength and/or quality: fluctuating, evolving, or continually changing. Sound, light, temperature, and pressure values, for instance, can be anywhere on a continuum or range. The highs, lows, and in-between states have historically been represented with analog devices rather than in digital form. Examples of analog devices are a speedometer, a thermometer, and a tyre-pressure gauge, all of which can measure continuous fluctuations.

Advantages of Analog Signal

The advantages of analog signal are given below:

1. Cost effective: Analog has an inherent advantage, as voice, image and video are analog in nature. Therefore, the transmission of such information is relatively straightforward in an analog format, whereas conversion to a digital bit stream requires conversion equipment. Such equipment increases cost, makes the system susceptible to failure, and can negatively affect the quality of the signal through the conversion process itself.

2. Bandwidth: A raw information stream consumes less bandwidth in analog form than in digital form. This is particularly evident in CATV transmission, where 50 or more analog channels routinely are provided over a single coaxial cable system. Without the application of compression techniques, only a few digital channels could be supported on the same cable system.

3. Presence: Analog transmission systems are already in place, worldwide interconnection of those systems is very common, and all standards are well established. As the majority of network traffic is voice, and as the vast majority of voice terminals are analog devices, voice communications largely depend on analog networks. Conversion to digital networks would require expensive, wholesale replacement of such terminal equipment.

4. In short, analog transmission offers advantages in the transmission of analog information. Additionally, it is more bandwidth-conservative and is widely available.

Humans experience most of the world in analog form—our vision, for instance, perceives shapes and colours as smooth gradations. But most analog events can be simulated digitally. Traditionally, electronic transmission of telephone, radio, television, and cable-TV signals has been analog. The electrical signals on a telephone line, for instance, have been analog data representations of the original voices, transmitted in the shape of a wave (called a carrier wave). Why bother to change analog signals into digital ones, especially since the digital representations are only approximations of analog events? The reason is that digital signals are easier to store and manipulate electronically.

Data Communication Terminologies

Let us discuss some common data communication terminologies.

Channel: A communication channel is a path which helps in data transmission.

Baud: The communication data transfer rate is measured in a unit known as baud. Technically, baud refers to the number of signal (state) changes per second. In general, 1 baud represents 1 signal change per second and, when each change carries one bit, is equivalent to 1 bit per second.

Bandwidth: The bandwidth is the range, or band, of frequencies that a transmission medium can carry in a given period of time. For analog signals, bandwidth is expressed in hertz (Hz), or cycles per second. For example, certain cellphones operate within the range 824-849 megahertz—that is, their bandwidth is 25 megahertz. The wider a medium's bandwidth, the more frequencies it can use to transmit data and thus the faster the transmission. Broadband connections are characterized by very high speed. For digital signals, bandwidth can be expressed in hertz but also in bits per second (bps). For instance, the connections that carry the newest types of digital cellphone signals range from 144 kilobits (144,000 bits) per second to 2 megabits (2 million bits) per second. Digital cellphones may use the same radio spectrum frequencies as analog cellphones, but they transmit data faster because they use compressed digital signals, which can carry much more information than analog signals can.

Data Transfer Rate: It represents the amount of data transferred per second by a communications channel or a computing or storage device. We measure data transfer rate in bits per second (bps). The following terms are used:
- Bits per second (bps)
- Kilobits per second (Kbps): thousands of bits per second
- Megabits per second (Mbps): millions of bits per second
- Gigabits per second (Gbps): billions of bits per second
- Terabits per second (Tbps): trillions of bits per second
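As a worked example of these units, the sketch below estimates how long a file transfer takes at the nominal top rates this lesson quotes for various media in the subsections that follow; the 1 GB file size is an arbitrary assumption:

```python
# Transfer time = size in bits / rate in bits per second.
# Network rates use decimal prefixes: 1 Mbps = 1_000_000 bits per second.
def transfer_seconds(size_bytes: int, rate_bps: float) -> float:
    return size_bytes * 8 / rate_bps           # 8 bits per byte

ONE_GB = 1_000_000_000                         # a hypothetical 1 GB file

print(transfer_seconds(ONE_GB, 128e6))    # twisted pair at 128 Mbps: ~62.5 s
print(transfer_seconds(ONE_GB, 200e6))    # coaxial cable at 200 Mbps: ~40.0 s
print(transfer_seconds(ONE_GB, 2e9))      # fiber optic at 2 Gbps:    ~4.0 s
```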
Transmission Media

It used to be that two-way individual communications were accomplished mainly in two ways. They were carried by the medium of (1) a telephone wire or (2) a wireless method such as shortwave radio. Today there are many kinds of communications media, although they are still either wired or wireless. Communications media carry signals over a communications path, the route between two or more communications devices. The speed, or data transfer rate, at which transmission occurs, and how much data can be carried by a signal, depends on the media and the type of signal.

Wired Communications Media

Three types of wired communications media are twisted-pair wire (conventional telephone lines), coaxial cable, and fiber-optic cable.

Twisted-pair wire

The telephone line that runs from your house to the pole outside, or underground, is probably twisted-pair wire. Twisted-pair wire consists of two strands of insulated copper wire, twisted around each other. This twisted-pair configuration (compared to straight wire) somewhat reduces interference (called "crosstalk") from electrical fields. Twisted-pair is relatively slow, carrying data at rates of 1-128 megabits per second (Mbps). Moreover, it does not protect well against electrical interference. However, because so much of the world is already served by twisted-pair wire, it will no doubt be used for years to come, both for voice messages and for modem-transmitted computer data (dial-up connections). Figure 5 shows a twisted-pair wire.

Advantages

The main advantages of twisted-pair wire are given below:
1. It is simple.
2. It is relatively easy for telecommunications companies to upgrade the physical connections between cities and even between neighbourhoods.
3. It is physically flexible.
4. It has a low weight.
5. It is very inexpensive.

Disadvantages

The disadvantages of twisted-pair wire are given below:
1. It is expensive for telecommunications companies to replace the "last mile" of twisted-pair wire that connects to individual houses. (The distance from your home to your telephone company's switching office, the local loop, is often called the "last mile".)
2. Due to high attenuation, it is not suitable for carrying a signal over long distances without the use of repeaters.
3. It is unsuitable for broadband applications, as it has low bandwidth capabilities.

Coaxial cable

Coaxial cable, commonly called "co-ax," consists of insulated copper wire wrapped in a solid or braided metal shield and then in an external plastic cover. Co-ax is widely used for cable television and cable internet connections.
Thanks to the extra insulation, coaxial cable is much better than twisted-pair wiring at resisting noise. Moreover, it can carry voice and data at a faster rate (up to 200 megabits per second). Often many coaxial cables are bundled together.

Advantages

The main advantages of coaxial cable are given below:
1. Its data transmission characteristics are far better than those of twisted-pair wiring.
2. It is widely used for cable television and cable internet connections.
3. It can be used for broadband transmission, i.e., several channels can be transmitted simultaneously.

Disadvantages

The disadvantages of coaxial cable are given below:
1. It is expensive compared to twisted-pair wiring.
2. It is not compatible with twisted-pair wiring.

Fiber-optic cable

A fiber-optic cable consists of dozens or hundreds of thin strands of glass or plastic that transmit pulsating beams of light rather than electricity. These strands, each as thin as a human hair, can transmit up to about 2 billion pulses per second (2 gigabits); each "on" pulse represents 1 bit. When bundled together, fiber-optic strands in a cable 0.12 inch thick can support a quarter- to a half-million voice conversations at the same time. Moreover, unlike electrical signals, light pulses are not affected by random electromagnetic interference in the environment.

Advantages

The main advantages of fiber-optic cable are given below:
1. It has a much lower error rate than normal telephone wire and cable.
2. It is lighter and more durable than twisted-pair wire and co-ax cable.
3. It cannot easily be wiretapped, so transmissions are more secure.
4. It can be used for broadband transmission.

Disadvantages

The disadvantages of fiber-optic cable are given below:
1. Installation can be difficult.
2. It is the most expensive of all the cables and wires.
3. Connection losses are a common problem.

Wireless Communications Media

Four types of wireless media are infrared transmission, broadcast radio, microwave radio, and communications satellite.

Infrared Transmission

Infrared wireless transmission sends data signals using infrared-light waves, at frequencies too low for human eyes to receive and interpret, at rates of 1-7 megabits per second. Infrared ports can be found on some laptop computers, digital cameras, and printers, as well as wireless mice.

Advantages

The main advantages of infrared transmission are given below:
1. It is used by TV remote-control units, automotive garage door openers, wireless speakers, etc.
2. It is very secure.

Disadvantages

The disadvantages of infrared transmission are given below:
1. Line-of-sight communication is required: there must be an unobstructed view between transmitter and receiver.
2. Transmission is confined to short ranges.

Broadcast Radio

When you tune into an AM or FM radio station, you are using broadcast radio, a wireless transmission medium that sends data over long distances at up to 2 megabits per second—between regions, states, or countries. A transmitter is required to send messages and a receiver to receive them; sometimes both sending and receiving functions are combined in a transceiver. In the lower frequencies of the radio spectrum, several broadcast radio bands are reserved not only for conventional AM/FM radio but also for broadcast television, cellphones, and private radio-band mobile services (such as police, fire, and taxi dispatch). Some organizations use specific radio frequencies and networks to support wireless communications.
For example, UPC (Universal Product Code) bar-code readers are used by grocery-store clerks restocking store shelves to communicate with a main computer so that the store can control inventory levels.

Advantages

The main advantages of broadcast radio are given below:
1. It provides mobility.
2. It is cheaper than digging trenches for laying cables, and than maintaining repeaters and cables if cables are damaged for various reasons.
3. It offers freedom from the landowner rights that are required for laying and repairing cables and wires.
4. It offers ease of communication.

Disadvantages

The disadvantages of broadcast radio are given below:
1. It is an insecure form of communication.
2. It is susceptible to weather effects like rain, thunderstorms, etc.

Microwave Radio

Microwave radio transmits voice and data at up to 75 megabits per second through the atmosphere as superhigh-frequency radio waves called microwaves, which vibrate at 1 gigahertz (1 billion hertz) or higher. These frequencies are used not only to operate microwave ovens but also to transmit messages between ground-based stations and satellite communications systems. Nowadays dish- or horn-shaped microwave reflective dishes, which contain transceivers and antennas, are nearly everywhere—on towers, buildings, and hilltops.

Why, you might wonder, do we have to interfere with nature by putting a microwave dish on top of a mountain? As with infrared waves, microwaves are line-of-sight; they cannot bend around corners or around the earth's curvature, so there must be an unobstructed view between transmitter and receiver. Thus, microwave stations need to be placed within 25-30 miles of each other, with no obstructions in between. The size of the dish varies with the distance (perhaps 2-4 feet in diameter for short distances, 10 feet or more for long distances). In a string of microwave relay stations, each station receives incoming messages, boosts the signal strength, and relays the signal to the next station. More than half of today's telephone systems use dish microwave transmission. However, the airwaves are becoming so saturated with microwave signals that future needs will have to be satisfied by other channels, such as satellite systems.

Advantages

The main advantages of microwave radio are given below:
1. It is cheaper than digging trenches for laying cables, and than maintaining repeaters and cables if cables are damaged for various reasons.
2. It offers freedom from the landowner rights that are required for laying and repairing cables and wires.
3. It offers ease of communication.
4. Microwave radio can communicate over oceans.

Disadvantages

The disadvantages of microwave radio are given below:
1. It is an insecure form of communication.
2. The signal strength may be reduced by improper alignment of the antenna.
3. It is susceptible to weather effects like rain, thunderstorms, etc.
4. It has extremely limited bandwidth allocation.
5. It has a high cost of design, implementation, and maintenance.

Communications Satellites

To avoid some of the limitations of microwave earth stations, communications companies have added microwave "sky stations": communications satellites. Communications satellites are microwave relay stations in orbit around the earth. Transmitting a signal from a ground station to a satellite is called uplinking; the reverse is called downlinking. The delivery process will be slowed if, as is often the case, more than one satellite is required to get the message delivered.

Satellite systems may occupy one of three zones in space: GEO, MEO, and LEO. The highest level, known as geostationary earth orbit (GEO), is 22,300 miles and up and is always directly above the equator. Because the satellites in this orbit travel at the same speed as the earth, they appear to an observer on the ground to be stationary in space—that is, they are geostationary. Consequently, microwave earth stations are always able to beam signals to a fixed location above. The orbiting satellite has solar-powered transceivers to receive the signals, amplify them, and retransmit them to another earth station. At this high orbit, fewer satellites are required for global coverage; however, their quarter-second delay makes two-way conversations difficult. The medium-earth orbit (MEO) is 5,000-10,000 miles up. It requires more satellites for global coverage than does GEO. The low-earth orbit (LEO) is 200-1,000 miles up and has no appreciable signal delay. LEO satellites may be smaller and are cheaper to launch. Satellites are very expensive to build, and launching them is costly as well. Recently India made history by launching many satellites together.
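The "quarter-second delay" is simple arithmetic: radio waves travel at the speed of light, so a hop up to GEO and back down takes roughly a quarter of a second. A small sketch:

```python
# Round trip ground -> GEO satellite -> ground at the speed of light.
SPEED_OF_LIGHT_KM_S = 299_792     # roughly 186,000 miles per second
GEO_ALTITUDE_KM = 35_786          # roughly 22,300 miles above the equator

geo_hop = 2 * GEO_ALTITUDE_KM / SPEED_OF_LIGHT_KM_S
print(f"{geo_hop:.2f} s")         # ~0.24 s: the "quarter-second delay"

leo_hop = 2 * 1_600 / SPEED_OF_LIGHT_KM_S   # LEO at ~1,000 miles (1,600 km)
print(f"{leo_hop:.3f} s")         # ~0.011 s: effectively no delay
```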
Advantages

The main advantages of communications satellites are given below:
1. A satellite covers quite a large area.
2. It is the best alternative where laying and maintaining an intercontinental cable is difficult and expensive.
3. It is commercially attractive.
4. Due to the large coverage area, it is very useful for sparsely populated areas (i.e., areas having long distances between houses).

Disadvantages

The disadvantages of communications satellites are given below:
1. Technological limitations prevent the deployment of large, high-gain antennas on the satellite platform.
2. Available bandwidths become overcrowded due to low antenna gains.
3. Very high investment cost, and high insurance cost due to the chance of failure.
4. High atmospheric losses above 30 GHz carrier frequencies.

Network

A network, or communications network, is a system of interconnected computers, telephones, or other communications devices that can communicate with one another and share applications and data. The tying together of so many communications devices in so many ways is changing the world we live in.

Network Users

To use a network, you first log on with an ID and a password. The ID is assigned by the network administrator. The password is selected by you and can (and should) be changed frequently to improve security. Because network software can be customized, what you see when you log on depends on the system's design. Normally you will find printers, hard disks, and other shared assets listed in your system's dialog boxes even though they are located elsewhere on the network.

Advantages of Networks

People and organizations use networks for the following reasons, the most important of which is the sharing of resources.

Sharing of peripheral devices: Peripheral devices such as laser printers, disk drives, and scanners can be expensive. Consequently, to justify their purchase, management wants to maximize their use. Usually the best way to do this is to connect the peripheral to a network serving several computer users.

Sharing of programs and data: In most organizations, people use the same software and need access to the same information. It is less expensive for a company to buy one word processing program that serves many employees than to buy a separate word processing program for each employee.
Moreover, if all employees have access to the same data on a shared storage device, the organization can save money and avoid serious problems. If each employee has a separate machine, some employees may update customer addresses, while others remain ignorant of the changes. Updating information on a shared server is much easier than updating every user's individual system. Finally, network-linked employees can more easily work together online on shared projects.

Better communications: One of the greatest features of networks is electronic mail. With e-mail, everyone on a network can easily keep others posted about important information.

Security of information: Before networks became commonplace, an individual employee might be the only one with a particular piece of information, which was stored in his or her desktop computer. If the employee was dismissed—or if a fire or flood demolished the office—the company would lose that information. Today such data would be backed up or duplicated on a networked storage device shared by others.

Access to databases: Networks enable users to tap into numerous databases, whether private company databases or public databases available online through the Internet.

Disadvantages of Networks

The disadvantages of networking are given below:

Crashes: The main disadvantage appears on a server-based network. When the server crashes, work gets disrupted, as all network resources and their benefits are lost. Thus, proper precautions are needed to ensure regular backups, as a crash may result in the loss of important data and wasted time.

Data security problems: In a network, generally, all the data resources are pooled together. So, it is very much possible for unauthorised personnel to access classified information if network security is weak or poorly implemented.

Lack of privacy: A network may also result in loss of privacy, as anyone, especially your senior, with the right network privileges may read or even destroy your private e-mail messages.

Network Topologies

Networks can be laid out in different ways. The logical layout, or shape, of a network is called a topology. The various topologies are given below.

Bus Topology

The bus topology works like a bus system at rush hour, with various buses pausing in different bus zones to pick up passengers. In a bus topology, all communications devices are connected to a common channel. (See Figure 22: Bus Topology, in which a single channel connects all communications devices.) In a bus topology, all nodes are connected to a single wire or cable, the bus, which has two endpoints. Each communications device on the network transmits electronic messages to other devices. If some of those messages collide, the sending device waits and tries to transmit again.

Advantages of Bus Topology

The advantages of bus topology are given below:
1. It may be organized as a client/server or peer-to-peer network.
2. It is very simple to set up and needs a shorter cable length as compared to ring topology.
3. It is simple to install due to its simple architecture. It is an older topology, so technicians for bus topology are easily available.
4. It is very easy to expand. All you have to do is decide the point of installation of the new node and connect the new node with a T connector.

Disadvantages of Bus Topology

The disadvantages of bus topology are given below:
1. Extra circuitry and software are needed to avoid collisions between data.
2. If a connection in the bus is broken—as when someone moves a desk and knocks the connection out—the entire network may stop working.
3. When the length of the cable grows beyond a certain limit, the network becomes slow, as the signals lose their power over the long length.
4. Complex protocols are used to decide who will be the next sender when the current sender finishes its transmission.

Ring Topology

A ring topology is one in which all microcomputers and other communications devices are connected in a continuous loop. (See Figure 23: Ring Topology, which connects the network's devices in a closed loop.) There are no endpoints. Electronic messages are passed around the ring until they reach the right destination. There is no central server. An example of a ring network is IBM's Token Ring Network, in which a bit pattern (called a "token") determines which user on the network can send information.

Advantages of Ring Topology

The advantages of ring topology are given below:
1. In ring topology, a shorter cable length is needed as compared to star topology.
2. Each node is connected to the next by a single connection.
3. Messages flow in only one direction. Thus, there is no danger of collisions.
4. Ring topology delivers fast and efficient performance.
5. It is suitable for setting up high speed networks using optical fibres.

Disadvantages of Ring Topology

The disadvantages of ring topology are given below:
1. If a connection is broken, the entire network stops working. The network cannot be operational until the complete ring is working.
2. As the single channel is shared by various nodes, it is difficult to diagnose a fault.
3. It is difficult to add, remove or reconfigure nodes in the network.
4. In case of node failure, bypassing the traffic requires costly devices.

Star Topology

A star topology is one in which all microcomputers and other communications devices are directly connected to a central server. (See Figure 24: Star Topology, which connects all the network's devices to a central host computer, through which all communications must pass.) Electronic messages are routed through the central hub to their destinations. The central hub monitors the flow of traffic. A PBX system is an example of a star network. Traditional star networks are designed to be easily expandable, because hubs can be connected to additional hubs of other networks.

Advantages of Star Topology

The advantages of star topology are given below:
1. It is easy to install, as each node has a single connection to the central device called the hub. The hub prevents collisions between messages.
2. Only one connection per node is needed to install a node in the network.
3. If a connection is broken between any communications device and the hub, the rest of the devices on the network will continue operating.
4. It allows easy management of the network.
5. It is very easy to detect faults in the network.
6. It uses simple communication protocols.

Disadvantages of Star Topology

The disadvantages of star topology are given below:
1. In star topology, each node is connected individually to the hub. This requires a large quantity of cable, which in turn increases the cost of the network.
2. The hub offers a limited number of connections. Therefore, the network can be expanded only up to a certain limit, after which a new hub is needed and we have to go for a tree topology.
3. The working of the network depends on the working of the hub. If the hub goes down, the entire network will stop.
Despite these drawbacks, the star is generally the most popular topology for small LANs.

Tree Topology

Tree topology is another popular topology, which is suitable for networks having a hierarchical flow of data. By the term "hierarchical flow", we mean that the data travels level by level: it starts from one level, travels one level down, and then continues to subsequent levels. (See Figure 25.) In the tree topology, computers are connected like an inverted tree. The server, or host computer, is connected at the top. To the server, the most important terminals are attached at the next level, and to these terminals, clients are attached. Data can flow from top to bottom and from bottom to top, level by level. It is an extension of the star topology.

Mesh Topology

In mesh topology, each node is connected to more than one node in the system. In this way, there exist multiple paths between two nodes of the network; in case of failure of one path, another can be used. The mesh topology is used in networks spread over wide geographical areas of several kilometres. Such networks have storage capabilities at intermediate nodes. Special devices called "routers" are used to decide a route for each incoming packet and send it towards its destination. Mesh is generally used in interconnected networks that connect multiple LANs. (See Figure 26.)

Fully Connected

A fully connected topology is one in which each node is connected directly to every other node of the network through an individual connection. This kind of topology is extremely costly to implement and maintain. It is used only rarely, in environments where crucial data is required with lightning speed. (See Figure 27.)

Graph Topology

Graph topology is also a very rarely used topology in networks. In this topology, each node may or may not be connected to other nodes. There is no rule about the structure and working of a graph topology. If there exists a path between every pair of nodes of the graph, then we say that it is a connected graph. (See Figure 28.)
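One way to see how these layouts differ is to model each as an adjacency list mapping every device to its direct connections. A minimal Python sketch, with hypothetical node names:

```python
# Each function returns {node: list of directly connected neighbours}.
def bus(nodes):                  # every node taps one shared backbone cable
    return {n: ["backbone"] for n in nodes}

def ring(nodes):                 # each node links to the next, closing the loop
    return {n: [nodes[(i + 1) % len(nodes)]] for i, n in enumerate(nodes)}

def star(nodes, hub="hub"):      # every node links only to the central hub
    return {n: [hub] for n in nodes}

def fully_connected(nodes):      # each node links directly to every other node
    return {n: [m for m in nodes if m != n] for n in nodes}

pcs = ["A", "B", "C", "D"]
print(star(pcs))                 # {'A': ['hub'], 'B': ['hub'], ...}
print(fully_connected(pcs))      # each of the 4 nodes lists the other 3
```

The cable-cost trade-offs in the text fall out directly: star needs one link per node, while a fully connected network of n nodes needs n(n-1)/2 links.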
Intranets, Extranets and Firewalls

Let us discuss private internet networks.

Intranets—for internal use only: An intranet is an organization's internal private network that uses the infrastructure and standards of the Internet and the web. When a corporation develops a public website, it is making selected information available to consumers and other interested parties. When it creates an intranet, it enables employees to have quicker access to internal information and to share knowledge so that they can do their jobs better. Information exchanged on intranets may include employee e-mail addresses and telephone numbers, product information, sales data, employee benefit information, and lists of jobs available within the organization.

Extranets—for certain outsiders: Taking intranet technology a few steps further, extranets offer security and controlled access. As we have seen, intranets are internal systems, designed to connect the members of a specific group or a single company. By contrast, extranets are private intranets that connect not only internal personnel but also selected suppliers and other strategic parties. Extranets have become popular for standard transactions such as purchasing. Ford Motor Company, for instance, has an extranet that connects more than 15,000 Ford dealers worldwide. Called FocalPt, the extranet supports sales and servicing of cars, with the aim of improving service to Ford customers.

Firewalls—to keep out unauthorized users: Security is essential to an intranet (or even an extranet). Sensitive company data, such as payroll information, must be kept private, by means of a firewall. A firewall is a system of hardware and software that blocks unauthorized users inside and outside the organization from entering the intranet. The firewall software monitors all internet and other network activity, looking for suspicious data and preventing unauthorized access. Always-on Internet connections such as cable modem and DSL, as well as WiFi devices, are particularly susceptible to unauthorized intrusion, so users are advised to install a firewall. A firewall consists of two parts, a choke and a gate. The choke forces all data packets flowing between the Internet and the intranet to pass through a gate. The gate regulates the flow between the two networks: it identifies authorized users, searches for viruses, and implements other security measures. Thus, intranet users can gain access to the Internet (including key sites connected by hyperlinks), but outside Internet users cannot enter the intranet. (See Figure 29.)

Virtual Private Network

Wide-area networks use leased lines of various bandwidths. Maintaining a WAN can be expensive, especially as distances between offices increase. To decrease communications costs, some companies have established Virtual Private Networks (VPNs), private networks that use a public network (usually the Internet) to connect remote sites. (See Figure 30.) Company intranets, extranets, and LANs can all be parts of a VPN. The ISP's local access number for your area is its point of presence (POP).

Lesson 1.4: Wireless and Mobile Applications

A Brief History of Wireless Communications

WIRELESS

When we think of wireless, the first thing that comes to our minds is simply "no wires needed", which is exactly what it means. With modern technology utilizing wireless more and more, we tend to forget that wireless has already existed for around 150 years. Wireless communication, or wireless for short, is the term used for the transfer of information between two or more points without wires; the concept behind wireless communication is that it utilizes radio waves. In modern technology, Bluetooth is used to communicate with devices such as mice, keyboards, earphones and much more.

FIRST INSTANCES OF WIRELESS COMMUNICATION

As mentioned above, wireless communication has existed for almost 150 years, and the first instance of wireless communication was actually a telephone. The Photophone, invented by Alexander Graham Bell and Charles Sumner Tainter, was a kind of telephone that sent audio over a beam of light. It required sunlight and a clear line of sight between the transmitter and receiver, and these two factors greatly limited the viability of the photophone, although its principles were later put to use by the military and, several decades on, by fiber-optic communications.

1894

In 1894 Guglielmo Marconi, an Italian inventor and electrical engineer, began developing a wireless telegraph system that used radio waves. Although the idea had already existed in 1888, it did not seem practical due to the short range the devices could achieve; but that was not the end of it, as over time Marconi developed a system that could span distances no one would have predicted, eventually reaching across the Atlantic.
Together with Karl Ferdinand Braun, he was awarded the Nobel Prize in Physics in 1909 for their contributions to wireless telegraphy.

MODERN DAY WIRELESS COMMUNICATIONS AND TECHNOLOGY

Today we can thank the inventors and theorists behind wireless communications. Looking back, the 1990s were when the wireless communication revolution began. Although wired communications still exist today, at least we no longer see the burden of wires all over the place. Will there be a time when wires no longer exist? Maybe; technology is evolving every day. In the past, devices and systems were always absurdly large, but as time progresses they get smaller and smaller.

Wireless Applications

We have already discussed what wireless is and given a brief history of its origin. Now what are more examples?
- CCTV security systems
- Mobile phones
- Radios
- Remote controllers
- Wireless routers
- Speakers
- Smart TVs

Technically speaking, any device could become wireless, but of course that does not always mean a change for the better, since not every wireless device improves on its wired counterpart. An example of this is wireless vs wired earbuds: each has its own pros and cons, but it all boils down to the user's preferences.

What is Wi-Fi?

Did you know that Wi-Fi was first introduced on September 21, 1998? Well, now you know! But did you also know that Wi-Fi does not stand for anything at all? According to the Wi-Fi Alliance, they simply found that "IEEE 802.11b Direct Sequence" was too long and needed a catchier name, so they hired Interbrand to create one, and out of all 10 potential names Wi-Fi was the one chosen. Over time there have been instances where Wi-Fi was said to stand for "Wireless Fidelity", but there has been no official claim that it is called as such, much like ATM, which could mean two things: Automated Teller Machine or "at the moment". Wi-Fi is really just a generic term for saying that there is a Local Area Network (LAN) of devices and an Internet connection, over which those devices can exchange information and data with the use of radio waves. Devices such as cellphones, computers or laptops, printers and even smart TVs can utilize Wi-Fi.

What is Bluetooth?

BLUETOOTH was introduced on May 7, 1998, the same year Wi-Fi was introduced. The two are like brothers, both utilizing the same concepts and principles of wireless connection. Bluetooth is a short-range wireless technology standard that is used for exchanging data between fixed and mobile devices over short distances, using UHF radio waves in the ISM bands from 2.402 GHz to 2.48 GHz, and for building personal area networks (PANs).

Why was it called Bluetooth? Although it has absolutely no correlation to the color of a tooth, the name was originally proposed back in 1997 by Jim Kardach of Intel, who was developing a system through which mobile phones and computers would communicate. At the time he was reading a historical novel by Frans G. Bengtsson titled "The Long Ships", a book about Vikings and the 10th-century Danish king Harald Bluetooth. In the novel, King Harald Bluetooth united the Danish tribes into a single kingdom, from which Jim Kardach got the idea of anglicising Blåtand or Blåtann; thus the name Bluetooth came to stand for a standard uniting communicating devices.

Mobile Applications

What are mobile applications? A Mobile Application, Mobile App, or just App is a computer program or software designed to run on mobile devices such as smartphones, tablets and smartwatches.
All mobile phones have built-in apps the moment you purchase them, while other applications such as Facebook, Twitter or Instagram can be downloaded from an app store. It should be noted that some apps only work on specific platforms; running those apps elsewhere may require you to use an emulator.

What is an Emulator?

An emulator is hardware or software that lets one device act as another device. An example of this is BlueStacks 5, currently the most popular Android emulator for computer systems: not only can you play games on your own computer or laptop, but you can also use social media and develop apps. Simply put, an emulator is like having a virtual mobile phone.

More examples of Android emulators:
- Bluestacks
- Gameloop
- LDPlayer
- MEmu Play
- Nox Player
- PrimeOS
- Genymotion

More examples of iOS emulators:
- Appetize.io
- Corellium
- iOS Simulator in Xcode
- TestFlight
- Electric Mobile Studio
- Remote iOS Simulator for Windows
- iPadian

Different Types of Mobile Applications

Native app: All apps targeted toward a particular mobile platform are known as native apps. Therefore, an app intended for Apple devices does not run on Android devices. As a result, most businesses develop apps for multiple platforms. While developing native apps, professionals incorporate best-in-class user interface modules. This accounts for better performance, consistency and a good user experience. Users also benefit from wider access to application programming interfaces and can make unrestricted use of all apps on the particular device. Further, they can switch from one app to another effortlessly. The main purpose of creating such apps is to ensure the best performance for a specific mobile operating system.

Web-based app: A web-based app is implemented with the standard web technologies of HTML, CSS, and JavaScript. Internet access is typically required for proper behavior or to be able to use all features; little works offline. Most, if not all, user data is stored in the cloud. The performance of these apps is similar to a web application running in a browser, which can be noticeably slower than the equivalent native app. It also may not have the same level of features as the native app.

Hybrid app: The concept of the hybrid app is a mix of native and web-based apps. Apps developed using Apache Cordova, Xamarin, React Native, Sencha Touch, and other frameworks fall into this category. These are made to support web and native technologies across multiple platforms. Moreover, these apps are easier and faster to develop. They use a single codebase which works on multiple mobile operating systems. Despite such advantages, hybrid apps exhibit lower performance, and often they fail to present the same look-and-feel on different mobile operating systems.

Source: https://clevertap.com/blog/types-of-mobile-apps/

Lesson 1.5: Internet and Web Technologies

Introduction

The Internet (the "net") is the worldwide computer network that connects hundreds of thousands of smaller networks, linking computers at academic, scientific, and commercial institutions, as well as individuals. Using it, users around the world can share all types of information and services. Put another way, the Internet is the network that connects other networks of computers around the globe into one seamless network.

In 1999, there were about 131 million active Internet users worldwide. By the end of 2003, there were close to 360 million. In 2001, there were about 100 million Internet devices of various kinds.
By 2010, it was estimated there would be 14 billion. The first step towards the construction of the Internet was taken by the U.S. Department of Defense in 1969, when it approved a project named ARPANET (Advanced Research Projects Agency Network). In 1995, a new name was given to the collection of networks, and it is now called THE INTERNET. The number of computers connected to the Internet doubles in less than a year. Today the world of the Internet permits activities hardly imaginable 10 years ago.

Activity – Purpose
Auctions – Sell old stuff, acquire more stuff, with online auctions.
Career advancement – Search job listings, post resumes, interview online.
Distance learning – Attend online lectures, have discussions, research papers.
Download files – Get software, music, and documents such as e-books.
E-mail and discussion groups – Stay in touch worldwide through electronic mail and online chat rooms.
Entertainment – Amuse yourself with Internet games, music, and videos.
E-business – Connect with coworkers, buy supplies, support customers.
E-shopping – Price anything from plane tickets to cars; order anything from books to sofas.
Financial matters – Do investing, banking, and bill paying online.
News – Stay current on politics, weather, entertainment, sports, and financial news.
Research and information – Find information on any subject, using browsers and search tools.
Telephony and conferencing – Make inexpensive phone calls; have online meetings.

Because of its standard interfaces and low rates, the Internet has been the great leveler for communications, just as the personal computer was for computing. Starting in 1969 with four computers linked together by communications lines, the Internet expanded to 62 computers in 1974, 500 computers in 1983, and 28,000 in 1987; but it still remained the domain of researchers and academics. Not until the development of the World Wide Web in the early 1990s, which made multimedia available on the Internet, and the first browser, which opened the Web to commercial uses, did the global network really take off, reaching 3 million servers or host computers in 1994. By the end of 2006 the number of Internet users was over 1 billion. And no one nation, company, or entity really owns it.

Year – Development
Early 1960s – ARPA (Advanced Research Projects Agency): The U.S. Defense Department's research organization studies advanced technology that could be used to defend the United States; develops many large databases.
Early 1970s – ARPANET: ARPA developed a networked communications system that couldn't be knocked out by eliminating computers or links in the system, along with the rules by which data was transmitted. By the early 1970s, ARPANET had grown from 4 networked research locations to 20 military sites and universities.
1975 – ARPANET was transferred to the U.S. Defense Communications Agency, thus restricting network access to only a few groups.
1980 – The National Science Foundation (NSF) started CSnet to provide a network opportunity for computer science researchers at all U.S. universities. By 1986, almost all the country's computer science departments, as well as some private companies, were connected to CSnet.
Late 1980s – After 5 supercomputer centers were built across the United States, the NSF built a very fast connection, called a backbone, among them. Regional companies, schools, and other organizations built their own regional networks and connected them to the backbone.
By 1989 – ARPANET had become too expensive and had outlived its usefulness; it was closed down, and many of its sites were connected to the NSF backbone. This vast inter-network became known as the Internet.
By 1995 – In its early stages, the Internet was used mainly for research and scientific purposes. Soon, however, it was recognized as a revolutionary information resource, and in 1995 it became known as the Information Superhighway. In 1992, multimedia information had become available via the World Wide Web.
1999 – Internet access had become virtually universal.

Accessing the Internet
To connect to the Internet, we have to connect our computer to the computer server of an Internet Service Provider (ISP). ISPs are companies that provide Internet-related services to their users. ISPs have special computers called Internet servers, which are connected to the Internet on one end and to several users on the other. These servers work 24 hours a day, 365 days a year to provide services to their customers.

Types of Connections
The connection to the service provider can be temporary or permanent, depending on the customer's choice. You can have either a dial-up connection to the Internet or a leased line connection.

Dial-up Connection
In this type of connection, the user creates a temporary connection with the ISP, uses the Internet services, and then disconnects. The user is charged a fixed amount depending on the time period for which the connection was active. This is the cheaper method of Internet access, and most users connect to the Internet using a dial-up connection. To establish a dial-up connection, you only need a computer equipped with a device called a "modem" and a telephone connection to link your computer to the ISP. In this type of connection, users are billed according to the duration of their connection, not the amount of data transferred.

Leased Line Connection
In a leased line connection, a permanent connection is dedicated to a particular user and is open 24 hours a day. The user is charged a fixed amount depending on the data transfer rate provided by the connection. The data transfer rate is measured in Kbps (kilobits per second); the higher the data transfer rate of the connection, the higher the charges. Leased line connections generally range from 64 Kbps to 512 Kbps. In this type of connection, users are billed on the amount of data sent or received. The amounts charged for these connections are generally large, and they are used by big corporate houses or educational institutions such as universities. These connections also require very high speed modems, which are costly, and skilled people to manage them. Due to heavy competition among service providers, you can get a leased line connection for a few rupees per month in India.
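To get a feel for these data rates, the short sketch below estimates how long a file transfer takes at the leased-line speeds quoted above; the 1 MB file size is an assumption chosen purely for illustration.

# Rough transfer-time estimate: time = size in bits / rate in bits per second.
# The 1 MB file size is an assumed example value.
FILE_SIZE_BYTES = 1_000_000            # 1 MB
FILE_SIZE_BITS = FILE_SIZE_BYTES * 8   # 8 bits per byte

for rate_kbps in (64, 128, 512):       # rates from the leased-line range above
    seconds = FILE_SIZE_BITS / (rate_kbps * 1000)
    print(rate_kbps, "Kbps:", round(seconds, 1), "seconds")

At 64 Kbps the transfer takes about 125 seconds, while at 512 Kbps it drops to roughly 15.6 seconds, which is why higher-rate lines commanded higher charges.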
WWW (World Wide Web)
The World Wide Web, WWW, or simply the Web changed the picture of the Internet after its creation in 1989 by Tim Berners-Lee. Earlier, the Internet was used to share textual information only: there were no graphics, no animations, and no links as you see on today's Internet. All credit goes to the WWW, which provides an easy and effective way of storing and accessing information on the Internet. The World Wide Web is a set of programs, standards, and protocols that allows text, images, animations, sounds, and videos to be stored, accessed, and linked together in the form of websites.

Basically, the WWW is a collection of millions of web pages stored in thousands of computers all over the world. It is a vast storehouse of information on the Internet.

Elements of WWW
The World Wide Web uses several technologies, programming languages, interfaces, and devices to bring the ocean of information to your desk. The WWW relies mainly on the following to make information rapidly accessible:
1. Web Server
2. Web Browser
3. Website
4. Hypertext and Hypermedia
5. Hyperlinks
6. HyperText Transfer Protocol (HTTP)
7. HyperText Markup Language (HTML)
8. Search Engines
9. Addressing schemes
Web inventor: Tim Berners-Lee.
Remember that the Internet and the World Wide Web are not the same thing. The Internet is a massive network of networks, connecting millions of computers via protocols, hardware, and communications channels. The Web is a means of accessing information available on the Internet using software called a browser.
Let us discuss some of the important terms associated with the WWW:

Website – the domain on the computer: The top-level domains are .com, .edu, .org, and .net. A computer with a domain name is called a website (site). When you decide to buy books at the online site of a bookseller, you visit its website. The website is the location of a web domain name on a computer somewhere on the Internet.

Web pages – the documents on a website: A website is composed of a web page or a collection of related web pages. A web page is a document on the World Wide Web that can include text, pictures, sound, and video. The first page you see on a website is like the title page of a book. This is the home page, or welcome page, which identifies the website and contains links to other pages at the site. If you have your own personal website, it might consist of just one page, the home page. Large websites have scores or even hundreds of pages.

Browsers
To use the Web, the software you need is a browser. A web browser is a software application that enables a user to display and interact with text, images, videos, music, games, and other information typically located on a web page at a website on the World Wide Web or a local area network. This is an intensely competitive field, and browsers are undergoing very rapid change. Some are independent software programs, such as those developed by Microsoft, Mozilla, and Netscape; others are integrated into application programs such as word processors, spreadsheets, and databases. Some of the web browsers currently available for personal computers include Internet Explorer, Mozilla Firefox, Safari, Opera, Avant Browser, Google Chrome, and AOL Explorer. Figure 3 shows the Mozilla Firefox browser.
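Under the hood, a browser's first job is simply to request a page over HTTP and read back the HTML before rendering it. As a minimal sketch of that step, the following Python standard-library snippet fetches a page; example.com is a placeholder domain reserved for documentation.

# Minimal sketch: fetch a web page over HTTPS, as a browser does
# before rendering it. Uses only the Python standard library.
from urllib.request import urlopen

with urlopen("https://example.com/") as response:
    print(response.status)               # 200 means the request succeeded
    html = response.read().decode("utf-8")

print(html[:80])                         # the start of the page's HTML

Everything a browser then does, such as laying out text, loading images, and following hyperlinks, builds on requests like this one.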
Firewall
A firewall, in computer terms, protects your network from untrusted networks. The reason is simple: it is a matter of survival. Companies rely more and more on the Internet to advertise their products and services, so it has become necessary to protect data, transmissions, and transactions from incidents, whether accidental or caused by malicious acts. The firewall mechanism is used to protect your corporate network and/or web servers against unauthorised access coming from the Internet, or even from inside a protected network. Basically, a firewall separates a protected network from an unprotected one, the Internet.

A firewall is a system of hardware and software that blocks unauthorized users inside and outside the organization from entering the intranet (an organization's internal private network that uses the infrastructure and standards of the Internet and the World Wide Web). The firewall monitors all Internet and other network activity, looking for suspicious data and preventing unauthorized access. A firewall consists of two parts, a choke and a gate. The choke forces all data packets flowing between the Internet and the intranet to pass through a gate. The gate regulates the flow between the two networks: it identifies authorized users, searches for viruses, and implements other security measures. Thus, intranet users can gain access to the Internet (including key sites connected by hyperlinks), but outside Internet users cannot enter the intranet. If an organization wants to provide interactive services, it puts a file server outside the firewall. Since outsiders can't get past the firewall, any files that you want them to access are put on that server. E-mail, mailing lists, and news services are store-and-forward services, where the outsider does not have interactive access to computers inside the firewall. One of the basic purposes of a firewall is to protect your site against hackers; however, it cannot protect you against connections that bypass it.

Cookies
Cookies are little pieces of data, such as your login name, password, and preferences, left on your hard disk by some websites you visit; the websites retrieve the data when you visit again. Thus, unknown to you, a website operator or the companies advertising on the site can log your movements within the site. These records provide information that marketers can use to target customers for their products. Other websites can also get access to the cookies and acquire information about you. You can set your web browser to disable all cookies, or to ask whether you want to accept or reject a cookie every time a site wants to create one on your hard disk. Use your browser's help function to learn how to set cookie functions.
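To make the idea concrete, the sketch below builds and parses a cookie with Python's standard library; the names and values are invented for illustration.

# Minimal sketch of what a cookie looks like on the wire, using the
# standard library. The name/value pairs here are invented examples.
from http.cookies import SimpleCookie

# A server "sets" a cookie by sending a Set-Cookie header like this:
cookie = SimpleCookie()
cookie["session_id"] = "abc123"
cookie["session_id"]["path"] = "/"
print(cookie.output())                 # Set-Cookie: session_id=abc123; Path=/

# On a later visit, the browser sends the stored data back, and the
# site parses it to recognize the returning visitor:
incoming = SimpleCookie("session_id=abc123")
print(incoming["session_id"].value)    # abc123

This round trip is exactly how a site "remembers" you between visits, and why a tracker that can read the cookie can log your movements.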
Hackers and Crackers
These are people who violate computer security. A hacker is a person who has enough knowledge to break into a computer system or facility, although he or she does not cause any harm to the computer system or the organization. A cracker, on the other hand, is a computer thief who breaks into a computer system with wrong intentions, that is, for stealing passwords (sets of characters that allow one to log on to a system or access a program), mail messages, files, programs, etc., for fun or for benefit. A hacker can help an organization by pointing out security lapses in its systems; for example, a hacker might break the banking password of a financially sound company without transferring any money. Crackers, by contrast, can cause financial damage and injure the competitiveness of a firm. For example, in 1991 a major U.S. automobile manufacturer lost $500 million worth of designs for future cars due to a security breach at its research facility, and suffered in the market because its designs fell into the hands of its competitors. Hackers (as opposed to crackers) are basically thrill-seekers who use information technology rather than fast cars. They spend their time learning how systems work at a deep level and exploit this information to roam the information highways seeking out adventure. They have bulletin boards for sharing information and hold regular meetings.

Denning has observed: "A diffuse group of people often called 'hackers' has been characterised as unethical, irresponsible, and a serious danger to society for actions related to breaking into computer systems... Hackers are learners and explorers who want to help rather than cause damage, and who often have very high standards of behaviour." Even though there are many definitions of what a (true) hacker is, among the people who believe there is a fundamental difference between hackers and crackers, there is one universal belief about what hackers are not: criminals.

Lesson 1.6: Cloud Computing
Cloud Computing
THE HISTORY OF CLOUD COMPUTING
Before we begin, we need to understand what cloud computing is and when it all began. It started in the late 1960s, when the concept of time-sharing was popularized by Remote Job Entry (RJE): the ability to share computing resources remotely, through techniques such as multiprogramming and multitasking. Why is it called cloud computing when it has nothing to do with clouds in the sky? There are several suggested reasons:
A group of networked computers looks like a cloud when outlined in a diagram
It can be reached globally, like clouds in the sky
It represents something vast and hard to measure, like a cloud
It is a metaphor for the Internet
Whatever the reason, the name does make sense.

ARPANET was the first wide-area packet-switched network with distributed control and one of the first networks to implement the TCP/IP protocol suite. Both technologies became the technical foundation of the Internet. The ARPANET was established by the Advanced Research Projects Agency (ARPA) of the United States Department of Defense.

2000s
Fast forward to the 2000s: in July 2002, Amazon created the subsidiary Amazon Web Services, with the goal to "enable developers to build innovative and entrepreneurial applications on their own." In March 2006 Amazon introduced its Simple Storage Service (S3), followed by Elastic Compute Cloud (EC2) in August of the same year. These products pioneered the use of server virtualization to deliver IaaS at cheaper, on-demand pricing. In April 2008, Google released the beta version of Google App Engine. App Engine was a PaaS (one of the first of its kind) which provided a fully maintained infrastructure and a deployment platform for users to create web applications using common languages/technologies such as Python, Node.js, and PHP. The goal was to eliminate the need for some administrative tasks typical of an IaaS model, while creating a platform where users could easily deploy such applications and scale them to demand. In early 2008, NASA's Nebula, enhanced in the RESERVOIR European Commission-funded project, became the first open-source software for deploying private and hybrid clouds, and for the federation of clouds. By mid-2008, Gartner saw an opportunity for cloud computing "to shape the relationship among consumers of IT services, those who use IT services and those who sell them" and observed that "organizations are switching from company-owned hardware and software assets to per-use service-based models" so that the "projected shift to computing... will result in dramatic growth in IT products in some areas and significant reductions in other areas." In 2008, the U.S. National Science Foundation began the Cluster Exploratory program to fund academic research using Google-IBM cluster technology to analyze massive amounts of data.
In 2009, the government of France announced Project Andromède to create a "sovereign cloud", or national cloud computing, with the government to spend €285 million. The initiative failed badly, and Cloudwatt was shut down on 1 February 2020.

2010s
In February 2010, Microsoft released Microsoft Azure, which had been announced in October 2008. In July 2010, Rackspace Hosting and NASA jointly launched an open-source cloud-software initiative known as OpenStack. The OpenStack project was intended to help organizations offer cloud-computing services running on standard hardware. The early code came from NASA's Nebula platform as well as from Rackspace's Cloud Files platform. As an open-source offering, alongside other open-source solutions such as CloudStack, Ganeti, and OpenNebula, it has attracted attention from several key communities, and several studies have compared these open-source offerings against a common set of criteria. On March 1, 2011, IBM announced the IBM SmartCloud framework to support Smarter Planet; among the various components of the Smarter Computing foundation, cloud computing is a critical part. On June 7, 2012, Oracle announced the Oracle Cloud, an offering poised to be the first to provide users with access to an integrated set of IT solutions, including the Applications (SaaS), Platform (PaaS), and Infrastructure (IaaS) layers. In May 2012, Google Compute Engine was released in preview, before being rolled out to general availability in December 2013. In 2019, Linux was the most common OS used on Microsoft Azure. In December 2019, Amazon announced AWS Outposts, a fully managed service that extends AWS infrastructure, AWS services, APIs, and tools to virtually any customer data center, co-location space, or on-premises facility, for a truly consistent hybrid experience.

Cloud Computing Architecture
What does a cloud computing architecture comprise? First, let us discuss the basics: the front end and the back end.

FRONT END
The front end of the cloud architecture refers to the client side of the cloud computing system. It contains all the user interfaces and applications the client uses to access the cloud computing services and resources, for example a web browser used to reach the cloud platform.
Client Infrastructure – refers to the front-end components: the applications and user interfaces required to access the cloud platform.

BACK END
The back end refers to the cloud itself, as used by the service provider. It contains the resources, manages them, and provides security mechanisms. It also includes large-scale storage, virtual applications, virtual machines, traffic control mechanisms, deployment models, and so on.
Application – the software or platform that the client accesses; it provides the service on the back end according to the client's requirements.
Service – the three major types of cloud-based services: SaaS, PaaS, and IaaS. This layer also manages which type of service the user accesses.
Cloud Runtime – provides the execution and runtime platform/environment for the virtual machines.
Storage – provides a flexible and scalable storage service and manages the stored data.
Infrastructure – the hardware and software components of the cloud, including servers, storage, network devices, virtualization software, etc.
Management – manages the back-end components: application, service, cloud runtime, storage, infrastructure, and the other security mechanisms.
Security – implements the various security mechanisms in the back end that secure cloud resources, systems, files, and infrastructure for end users.
Internet – the Internet connection acts as the medium, or bridge, between the front end and the back end, establishing their interaction and communication.
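As a concrete taste of how a front-end client talks to a cloud back end, the sketch below stores a file in Amazon S3 (the storage service mentioned earlier) using the boto3 library. This is a minimal sketch, assuming boto3 is installed (pip install boto3) and AWS credentials are already configured; the bucket and file names are made-up placeholders.

# Minimal sketch: a front-end client storing a file in a cloud back
# end (Amazon S3). Assumes boto3 is installed and AWS credentials are
# configured (e.g., via environment variables or ~/.aws/credentials).
import boto3

s3 = boto3.client("s3")

BUCKET = "my-example-bucket"   # placeholder name, not a real bucket

# Upload a local file; S3 stores it under the given object key.
s3.upload_file("report.txt", BUCKET, "backups/report.txt")

# List what the bucket now contains.
response = s3.list_objects_v2(Bucket=BUCKET)
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])

Notice that the client never touches a physical server: the back end's storage, security, and management layers described above all sit behind these two API calls.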
What are the PROs and CONs of Cloud Computing?
PROs:
1. Reduce Infrastructure Costs
○ Cloud computing reduces a lot of manual labor, since another team handles tasks such as maintenance and troubleshooting for you at no extra cost; it is already included in the subscription plan.
2. Impact on Personnel
○ Since the job is handled by qualified professionals, you do not have to spend on additional training or risk hiring underperforming employees who may later leave.
3. Consolidation of Data
○ Since all your data is stored and synced in cloud storage, there is no need to worry about recovering it if a local copy is lost.
4. Defend Against Disaster
○ With data stored on-premises, you run the risk of losing everything if you do not have a backup plan or proper security. The advantage of cloud storage is that your data is stored not in one location but in many, so if one fails the others are still accessible while the failed one is repaired as soon as possible.
5. Maximize Uptime
○ Every second of downtime in a company's systems or data can cost it a lot of money, and its reputation too. Moving to the cloud minimizes this, since, as stated above, your data and servers can be accessed even if one location fails.
6. Enhance Collaboration
○ Productivity increases when employees work from home, as many did during the pandemic: they can continue to work and access files in real time without leaving the comfort of their own homes.
7. Stay Scalable
○ Cloud service expenses differ depending on the size of your company. If the company expands in the future, it simply pays additional fees.
8. Increase Automation
○ Maintenance and daily checkups take up IT personnel's time; after switching to the cloud, the service handles these automatically while your IT personnel focus on other tasks.
9. Save On Space
○ Servers and databases always take up space, and they are not as small as you might think. Cloud computing saves you lots of space, not only physically but digitally too!
10. Enhance Compliance
○ Regulations require data to be maintained properly, which takes a lot of time; a good service provider can handle this for you and spare you worries about violations.
CONs:
1. Understanding the Costs
○ Although cloud computing saves money in certain areas, analysis is still needed to decide which systems or data can live in the cloud and which should stay on-premises.
2. Moving from Cloud to On-Premise
○ Moving from on-premises to the cloud is easy, but doing the opposite is not: it is very expensive, and subject to terms and conditions set by the provider.
3. Limited Control
○ Since service providers manage your data and systems, there are things they can and cannot do, and the same applies to the company.
4. Vendor Lock-In
○ Each service provider has its own system, and transferring your data to another provider is not an easy task, since migration can expose vulnerabilities. Good service providers make sure that data transfers are safe and secure.
5. Slower Backups and Restores
○ Compared to on-premises systems, backups and restores can take much longer: the time depends on the distance to the service provider's location and the connection speed.
6. Internet Reliance
○ Simply put, you cannot access your data without a proper Internet connection. Your data is safe even when you have no Internet connection, but the service provider needs to keep its own connectivity up too.
7. Internet Use
○ Performing backups can drastically affect your Internet connection, although this mainly happens to companies or businesses that have not invested in higher bandwidth and speed. Good service providers avoid this issue through scheduling or automation.

Cloud Computing Services Regularly Used
What are some examples?
GOOGLE DRIVE
YOUTUBE
SPOTIFY
PAYPAL
ELEARN
ZOOM

Lesson 1.7: Information Security
Security Software
Security software
○ Designed to protect computers from various forms of destructive software and unauthorized intrusions
○ Antivirus, antispyware, anti-spam, firewall
Malware threats – "malicious software"
○ Any program designed to surreptitiously enter a computer, gain unauthorized access to data, or disrupt normal processing operations
○ Viruses, worms, Trojans, bots, spyware
○ Released by hackers, crackers, black hats, or cyber criminals
○ Motives: monetary gain, identity theft, pranks, political messages, disrupting operations, extortion
Virus
○ a set of program instructions that attaches itself to a file, reproduces itself, and spreads to other files
○ can replicate itself on the host computer alone for days or months
○ usually delivers a payload, e.g.
displaying messages or corrupting files
○ spreads when infected files are exchanged
Worm
○ a self-replicating program designed to carry out some unauthorized activity on a victim's computer
○ enters through security holes in browsers and operating systems, as an email attachment, or from infected ads or links
○ may also spread over file-sharing networks, instant messaging links, and mobile phones
Trojan horse
○ a computer program that seems to perform one function while actually doing something else
○ usually does not replicate or spread itself
○ notorious for stealing passwords using a keylogger
Bot
○ software that can automate a task or autonomously execute a task when commanded to do so
○ can be spread by worms or Trojans
○ controlled by hackers or a central server from which it receives instructions
○ used to carry out denial-of-service attacks
Spyware
○ a program that secretly gathers personal information without the victim's knowledge
○ tracks web browsing activities and purchasing behavior
○ can monitor keystrokes and relay passwords
Malware Activities
Symptoms of Infection
Avoiding Threats

Wireless Security
Network threats
○ viruses
○ theft
○ equipment failure
Wireless connections are more vulnerable than wired ones
○ signals are broadcast through the air and can be picked up by any capable device
When network discovery is turned on, any Wi-Fi enabled device within range of your network can see its SSID.

Encryption
Encryption transforms a message in such a way that its contents are hidden from unauthorized readers:
○ scrambling data to prevent intrusions
○ securing personal information sent to e-commerce sites
○ encrypting stored data so it is unusable if compromised
○ maintaining privacy
Symmetric key encryption
○ the same key is used to encrypt and decrypt
Public key encryption
○ different keys are used to encrypt and decrypt

Internet Security
Intrusion
○ any access to data or programs by hackers, criminals, or other unauthorized persons
○ data can be stolen or altered, and system configurations can be changed to allow even more intrusions
Communications port
○ a doorway for exchanging data
○ hackers exploit open ports like an unlocked door
Port probe (or port scan)
○ automated software used to locate computers with vulnerable open ports
Routers
○ monitor and direct packets being transported from one device to another
○ connect to the Internet through a DSL, cable, or satellite modem

Web Security
Spam
○ unwanted electronic junk mail
○ may contain web bugs, viruses, worms, or keyloggers
○ can be blocked or filtered automatically
Phishing
○ an e-mail-based scam designed to persuade you to reveal confidential information, such as your bank account number
○ requires you to interact with the email
Fake Sites
○ look like legitimate sites but are replicas only
○ collect credit card numbers and other confidential data
○ some fake sites contain sexually explicit material
○ some present totally fabricated information or stories designed to fool users
○ fake sites are used in pharming attacks
Pharming
○ redirects users to fake sites by poisoning a DNS server with false IP addresses
○ harder to detect than phishing because the IP address is changed at the DNS level
○ some browsers contain antipharming tools that compare IP addresses to a list of known fake sites
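Pharming works by corrupting name resolution, so one simple defensive idea, in the spirit of the antipharming tools just mentioned, is to check which IP address a hostname actually resolves to. The sketch below does a DNS lookup with Python's standard library; the hostname and the "expected" address are placeholders chosen purely for illustration.

# Minimal sketch of an antipharming-style check: resolve a hostname
# and compare the result against an address we expect. The hostname
# and expected IP below are illustrative placeholders only.
import socket

HOSTNAME = "example.com"
EXPECTED_IPS = {"93.184.216.34"}   # assumed known-good address(es)

resolved = socket.gethostbyname(HOSTNAME)
print(HOSTNAME, "resolved to", resolved)

if resolved in EXPECTED_IPS:
    print("Matches the expected address.")
else:
    print("Unexpected address: possible DNS tampering (or a legitimate change).")

Real antipharming tools are more sophisticated than this, since legitimate sites change addresses and use content delivery networks, but the underlying comparison is the same.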
Lesson 1.8: Google and Office 365 Services
Google Services
Google and its services
Most of us use Google in some way or another, whether for a YouTube account or online storage; the list of services it provides goes on and on. But Google is more than just a catchy name: the company specializes in Internet-related services and products.

As mentioned above, Google has many services available for any interested user, but did you know that Google also provides hardware? Well, now you know!

Google Nest
Google Nest is Google's line of smart home products, including smart speakers, thermostats, streaming devices, and security systems. With the help of Google Assistant, it provides hands-free help, such as asking for updates, results, and so much more. All you have to do is say "Hey Google".

Google Pixel
The Google Pixel line is a brand of smartphones and consumer electronic devices developed by, well, Google. Its first release was the Chromebook Pixel on February 21, 2013. Three years later, on October 4, 2016, during the #MadeByGoogle event, Google announced the first generation of its Pixel smartphones, the Pixel and the Pixel XL.

Google Docs, Sheets, Slides, Forms and Drive
Google also provides services for collaborating with others. You may know them as Google Workspace for students or companies, but for simplicity's sake we can just call them individually Docs, Sheets, Slides, and Forms. Google lets you use Docs, Sheets, Slides, and Forms both online and offline; before you can use these services, however, you need access to your Google Drive. Google Drive is basically an online cloud storage service for keeping all your files and accessing them on any device, anywhere, at any time. In Google Drive you can edit existing documents as well as share them with others for a real-time collaboration experience; if you choose to share a file, you can send a link and others can download it to their own account.

Office 365
Office 365 Services
Much like Google Workspace, Microsoft offers Office 365. Most Windows-based devices come with the basic tools such as Microsoft Word, Excel, PowerPoint, and Publisher preinstalled, but in some cases an active subscription tied to a Microsoft account is required to fully utilize the tools.

Windows Phone
The Windows Phone was Microsoft's attempt at providing a mobile operating system to consumers, and it featured some basic Office 365 tools preinstalled. Although the initial releases drew attention, consumer and developer interest in the platform faded, and it did not take long for the Windows Phone to receive its last update; Microsoft discontinued development in 2017.

Cortana
Cortana is a virtual assistant developed by Microsoft to perform tasks such as giving weather updates, answering questions, managing schedules, and much more. Much like the Google Nest product line, it offers similar functionality. Cortana is preinstalled on most Windows-based devices and is fully integrated with Office 365. Now some of you might be thinking, "Why Cortana?" Interestingly enough, Microsoft took the name from a video game franchise: Microsoft's own Halo.

Cortana from Halo
Cortana in the video game is a synthetic intelligence character. She has appeared in many Halo games, and the voice actress behind the character is also the voice actress for Cortana, the Windows virtual assistant.

Did Microsoft provide any smart home products? Unfortunately, it did not offer smart home hardware of its own beyond Cortana; Cortana could control third-party smart home devices, but even that integration appears to be ending.

So which service is better?
Google Workspace VS Office 365
Let us discuss which service is better.
Honestly, it all boils down to the user's own preference and what they already have. Both services can be used offline and online, and both offer optional subscriptions, at different price points, to unlock additional services. While both offer practically the same assistance, some factors worth considering include device capability, company requirements, user access, and more. As with other competing services, it really depends on the user as a whole and which one they find more useful for themselves. Some companies or establishments provide their employees with a benefit such as a free subscription to these services through a company account; to do so, the company of course first has to negotiate with Microsoft. In the end, whatever the user finds suits them the most is the right choice.
