


History of IT

The history of information technology includes the history of computer hardware and software. Information technologies are powerful because people crave information, and IT makes it easy for people to get it. The world is teeming with information about who we are, where we live, and what we do to sustain ourselves. Until recently, before computers became widespread around 1990, this information was written on paper, published in newspapers or magazines, and stored in libraries and in stacks of files in offices. In your own personal life, you and your parents probably cherish family photo albums with printed photographs. Before computers, there was no easy way for people to share information with each other, and businesses did not have easy access to data about their own customers to understand larger patterns and take corrective actions.

Beginning in the 1990s, computers became popular, and starting in the 21st century, computers were connected to each other through the Internet, and the field of Information Technology (IT) was born. IT is the use of computers and networking technologies to store, process, and retrieve information. IT is now so important to organizations that IT investments, which include computers, networks, software, and employees, are among the largest expenses for most organizations. Regardless of what type of work you plan to do when you complete school, you will be expected to be as comfortable using IT effectively as you are using your first language. Doctors are expected to read and write electronic medical records as they interact with patients; CEOs and business owners are expected to interpret reports generated by IT; and government staff are expected to work with citizen records as they respond to user queries. In turn, some of the most desirable jobs in the economy are emerging from this infusion of technology.

Even within organizations, IT systems evolve constantly based on how we want to gather information, what bits of information we want to gather, and what information has lost meaning over time. Some IT systems need to be highly secure, some are spread across the world, and some may be small enough to need just one employee and a desktop. Until around 2015, there was a distinction between personal electronic devices such as cell phones and office IT systems like PCs. Now, with powerful smartphones and cheap mobile Internet, this distinction is disappearing. Employees expect business applications to be as user-friendly as consumer applications and consumer services to be as secure as business services. Employees can be responsive round-the-clock from work and home. Workers in the gig economy (e.g., Uber drivers, DoorDash delivery people, Amazon and FedEx drivers) use personal phones as their primary work devices to process orders; in this case, their personal mobile devices are part of the company's IT system.

This section provides a quick tour through the history of how we have reached the current state of "information technology everywhere." We show how innovative teams have responded to human needs and commercial incentives to create the technologies we take for granted today. If you find this interesting, we hope you will read more about these individuals and technologies on the Internet.

Charles Babbage is credited with designing the first mechanical computer in the 1820s.
Over a hundred years later, in 1946, a team at the University of Pennsylvania publicly reported the first programmable, general-purpose electronic computer, called the Electronic Numerical Integrator and Computer (ENIAC). The ENIAC weighed 30 tons and took up 1,800 square feet of space. The ENIAC was a computer like any modern computer, but it did not use software as we understand it today. Every instruction for every task was hard-coded by experts, and if a task was to be repeated, the instructions had to be written again on punch cards, which could take days. Many of these instructions involved tasks such as reading data and writing outputs, which are common to all computer programs. These shared tasks were aggregated into computer programs called operating systems.

The Operating System (OS) is the brain of the computer and controls all of its parts. A mouse, keyboard, display monitor, motherboard, and storage drives are all components of a computer, and they act together as one computer only when the operating system recognizes them and orchestrates a symphony of coordinated actions to complete the task you tell the computer to perform. When you move your mouse, tap your screen, type on your keyboard, or make a phone call, it is the operating system that recognizes the action and tells the components how to act to bring about the desired outcome.

Operating systems evolved rapidly in the 1960s and 70s. In 1971, AT&T, the dominant phone company at the time, built an operating system called Unix. Unix and its variants were freely distributed across the world in the 1970s and 1980s, and some of the world's best computer scientists and engineers volunteered their contributions to make Unix extremely capable. These experts were guided by the principle of using the "minimum number of keystrokes to achieve the maximum effect." Because of their powerful capabilities and low to no cost, Unix and its variants, including the popular Linux operating system, now power most computers around the world, including all major smartphones. Windows is another popular operating system, used extensively on desktops and in data centers.

A powerful economic force also contributed to Unix's widespread adoption. While AT&T funded the development of Unix, it had reached an agreement with the federal government 15 years earlier, in 1956, that gave it monopoly status on long-distance telephony. In exchange, AT&T agreed not to sell any product that was not directly related to long-distance telephony. Eventually, AT&T shared the source code of Unix with multiple organizations, and they released their adaptations to the world. One of the most popular of these adaptations was developed at UC Berkeley and was called the Berkeley Software Distribution (BSD) Unix. The BSD Unix license allowed adopters to make their own modifications without releasing them back to the community, which was very useful to commercial vendors. Among the most popular current commercial releases tracing their lineage to BSD are the operating systems on all Apple products, including macOS on laptops and iOS on smartphones. The popularity of Unix is the result of technical excellence and economic incentives.

Until the early 1980s, computers were too expensive for personal use. As the cost to manufacture computer components came down, IBM saw an opportunity to make small, self-contained personal computers (PCs) that had their own Central Processing Units (CPUs).
Since Unix was designed for giant centralized machines and dumb terminals, there was a need for an operating system that could run on these personal devices. IBM partnered with Microsoft to create an operating system for personal computers, called the Disk Operating System (DOS). Although DOS wasn't easy to use (users still needed to type commands manually on a command line), the idea of owning a computer caught on, and the IBM PC started the PC revolution by becoming the world's first popular personal computer.

Communication is a fundamental human activity, and information exchange has been one of the most popular uses of computers. Computer engineers recognized this need for information exchange early on and developed technologies for computers to talk to each other. The initial networks were limited in scope, connecting computers located within an office and allowing users to send emails to each other and share expensive resources such as printers. As networks grew, network effects emerged. To understand the network effect, imagine a village with just two telephones connected to each other by a wire. The telephones will not be very useful, since they only connect two people in the village; conversations with all other villagers happen outside this network. However, as more people in the village connect to the network, every telephone in the network becomes increasingly useful, because the same telephone now lets its user reach more people. The free increase in benefit to the community as more members join a network is called the network effect (a short numerical illustration appears at the end of this section).

The network effect generated powerful incentives within the industry to network computers. By 1981, the core computer networking technology we use today was specified. Since that time, the development of computers has been closely associated with the development of computer networks, the Internet, and the World Wide Web, which extends the network effect from people to information itself.

The Internet is built by connecting two types of networks: small networks within buildings and large networks that connect these small networks. The small networks connecting workers inside an office building or a school are called Local Area Networks (LANs). The network at your school or home is an example of a LAN. LANs help the computers in an office share files, emails, printers, and the Internet connection with each other. The networks that connect these small networks to each other are called Wide Area Networks (WANs). WANs are typically large networks spread across a wide geographic area such as a state or country and are used to connect the LANs within corporate and satellite offices. WANs are typically operated by Internet providers such as Verizon, Frontier, and Spectrum, and users pay subscription fees to access them.
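The notes describe the network effect qualitatively. A common way to quantify it, which is an assumption added here rather than something stated in the notes, is that a network of n members supports n*(n-1)/2 possible pairwise connections, an observation often associated with Metcalfe's law. The short Python sketch below shows how quickly the number of possible conversations grows as a village adds telephones.

# Illustrative assumption: count the distinct pairs that can talk in a network of n phones.
def possible_connections(n):
    return n * (n - 1) // 2

for phones in (2, 5, 10, 50, 100):
    print(phones, "phones ->", possible_connections(phones), "possible connections")
# 2 phones support only 1 conversation pair; 100 phones support 4,950.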
Terms and Definitions

Android: Mobile OS developed by Google and available as open-source software
Computer: A programmable computing device capable of receiving input, manipulating data, and outputting information
Computer Engineer: An individual who focuses on the research, design, and development of computer hardware and systems
Data: Representation of facts in a formalized manner suitable for communication, interpretation, or processing by humans or by automatic means
Data Center: Centralized location of servers and networking equipment that facilitates processing and storage of data
Disk Operating System (DOS): Early user-oriented operating system created through a partnership between Microsoft and IBM
Dumb Terminal: A simple device consisting of a monitor and keyboard meant to facilitate communication with a separate computing device
Gig Economy: A labor market characterized by short-term employment, typically involving an intermediary platform
Graphical User Interface (GUI): Visual medium of interacting with computers
Hardware: The physical and often modular components of a computer system
Information Technology: Any equipment or system responsible for data manipulation; also refers to the disciplines of science and engineering that interact with these systems and data
iOS: Mobile OS developed by Apple, used in iPhones
IT Personnel: Technically proficient individuals who use technological resources and soft skills to help organizations meet their IT needs
Keyboard: A physical or digital device capable of communicating with a connected computer system through assigned key inputs
Linux: Highly versatile operating system created by Linus Torvalds
Microchip: Electronic component composed of transistors and circuits
Microsoft Windows: A GUI-based operating system developed by Microsoft, one of the most popular computer operating systems globally
Mobile App: Application specifically developed to run on smartphones and other mobile computing devices
Moore's Law: A "law" proposed by Intel co-founder Gordon Moore stating that the number of transistors in microchips doubles roughly every two years, driving advances in computing performance
Motherboard: Computer component that connects various other pieces of computer hardware
Mouse: An input device capable of detecting user manipulation to facilitate interactions with computer systems
Network Effect: Increase in benefit to a community as membership grows
Networks: Computers that are connected through either wired or wireless means with the purpose of sharing data
Operating System: Software that facilitates sharing, allocation, and effective utilization of computer resources
Smartphone: A small form factor computer combined with a mobile phone, based around touch-screen input
Software: Instructions that computer hardware can interpret and execute in order to achieve desired tasks
Storage Drives: Physical devices used to store data
System: Separate components working together to fulfill a function
Unix: An early and powerful operating system built at AT&T's Bell Laboratories

Legal and Ethical Issues in Technology

Privacy
Because of the pervasiveness of the Internet in people's everyday lives, privacy is an especially contentious topic in technology. Websites often gather user data without consent, from usernames and passwords to personal details such as email addresses and phone numbers.
While selling this information is widely regarded as immoral, it often falls into a legal gray area because the data is provided by the user in the first place.

Copyright
With the advancement of technology, copyright and intellectual property protection have become significant concerns. As the Internet grew in popularity as a publishing tool, copyright infringement became increasingly easy to commit and nearly impossible for many authors to police. The fight between copyright holders and software pirates over intellectual property is fought regularly online and in the courts.

Netiquette
Netiquette refers to the online code of conduct for Internet users, in terms of what is acceptable and in good taste. The term combines "net" (from the Internet) and "etiquette." Bad decisions made online can have lasting negative effects.

Cyberbullying
Bullying that occurs on a social networking site, but may also occur through other technology such as text messages. Cyberbullying differs from traditional bullying in that it is more intrusive, can target a larger group, and can occur at any moment and in any place. It is one of the fastest-growing forms of bullying among teenagers.

Gender
Women and men use the Internet differently. Men are more driven by IT, whereas women are more passive in their role as technology users: men tend to favor video games, while women tend to favor chatting and email.

Fair Use
Fair use is a doctrine of copyright law that permits limited use of copyrighted content without authorization from the rights holders. Four factors are considered: the purpose of the use, the nature of the work, the amount used, and the effect on the market for the original.

Socioeconomic Factors
Many people living in poverty do not have the same access to technological resources as others, leaving them farther behind in the digital divide. Much technology is expensive, and people living in poverty often cannot afford it because it is not a necessity.

What is generative AI?
Generative artificial intelligence (AI) describes algorithms (such as ChatGPT) that can be used to create new content, including audio, code, images, text, simulations, and videos. Recent breakthroughs in the field have the potential to drastically change the way we approach content creation.

What's the difference between machine learning and artificial intelligence?
Artificial intelligence is pretty much just what it sounds like: the practice of getting machines to mimic human intelligence to perform tasks. You've probably interacted with AI even if you don't realize it; voice assistants like Siri and Alexa are founded on AI technology, as are customer service chatbots that pop up to help you navigate websites. Machine learning is a type of artificial intelligence. Through machine learning, practitioners develop artificial intelligence through models that can "learn" from data patterns without human direction. The unmanageably huge volume and complexity of data (unmanageable by humans, anyway) that is now being generated has increased machine learning's potential, as well as the need for it.

What are the main types of machine learning models?
Machine learning is founded on a number of building blocks, starting with classical statistical techniques developed between the 18th and 20th centuries for small data sets. In the 1930s and 1940s, the pioneers of computing, including theoretical mathematician Alan Turing, began working on the basic techniques for machine learning. But these techniques were limited to laboratories until the late 1970s, when scientists first developed computers powerful enough to run them.
Until recently, machine learning was largely limited to predictive models, used to observe and classify patterns in content. For example, a classic machine learning problem is to start with an image or several images of, say, adorable cats. The program would then identify patterns among the images and scrutinize random images for ones that match the adorable-cat pattern. Generative AI was a breakthrough: rather than simply perceiving and classifying a photo of a cat, machine learning is now able to create an image or text description of a cat on demand.

How do text-based machine learning models work? How are they trained?
The first machine learning models to work with text were trained by humans to classify various inputs according to labels set by researchers. One example would be a model trained to label social media posts as either positive or negative. This type of training is known as supervised learning, because a human is in charge of "teaching" the model what to do. The next generation of text-based machine learning models relies on what's known as self-supervised learning. This type of training involves feeding a model a massive amount of text so it becomes able to generate predictions; for example, some models can predict, based on a few words, how a sentence will end. With the right amount of sample text, say, a broad swath of the internet, these text models become quite accurate. We're seeing just how accurate with the success of tools like ChatGPT. (A short illustrative sketch of both training styles appears at the end of this Q&A.)

What kinds of output can a generative AI model produce?
Generative AI outputs are carefully calibrated combinations of the data used to train the algorithms. Because the amount of data used to train these algorithms is so incredibly massive (GPT-3, for example, was trained on 45 terabytes of text data), the models can appear to be "creative" when producing outputs. What's more, the models usually have random elements, which means they can produce a variety of outputs from one input request, making them seem even more lifelike.

What are the limitations of AI models? How can these potentially be overcome?
The risks that come with these limitations can be mitigated in a few ways. For one, it's crucial to carefully select the initial data used to train these models to avoid including toxic or biased content. Next, rather than employing an off-the-shelf generative AI model, organizations could consider using smaller, specialized models. Organizations with more resources could also customize a general model based on their own data to fit their needs and minimize biases. Organizations should also keep a human in the loop (that is, make sure a real human checks the output of a generative AI model before it is published or used) and avoid using generative AI models for critical decisions, such as those involving significant resources or human welfare.

It can't be emphasized enough that this is a new field. The landscape of risks and opportunities is likely to change rapidly in the coming weeks, months, and years. New use cases are being tested monthly, and new models are likely to be developed in the coming years. As generative AI becomes increasingly, and seamlessly, incorporated into business, society, and our personal lives, we can also expect a new regulatory climate to take shape. As organizations begin experimenting, and creating value, with these tools, leaders will do well to keep a finger on the pulse of regulation and risk.
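The supervised and self-supervised training styles described above can be illustrated with a minimal sketch. The Python code below is not from the original notes: the example posts, labels, and sample sentence are made up, and simple word counting is a deliberately crude stand-in for real machine learning. The first part mimics supervised learning (humans supply the positive/negative labels); the second mimics self-supervised learning (the text itself supplies the next-word "labels").

# Illustrative sketch only: toy "supervised" and "self-supervised" text models.
# All data below is made up for demonstration purposes.
from collections import Counter, defaultdict

# --- Supervised learning: humans provide the labels ---
labeled_posts = [
    ("i love this phone", "positive"),
    ("great battery life", "positive"),
    ("terrible screen", "negative"),
    ("i hate the camera", "negative"),
]
word_label_counts = defaultdict(Counter)
for text, label in labeled_posts:
    for word in text.split():
        word_label_counts[word][label] += 1

def classify(text):
    # Vote with the labels seen alongside each word during training.
    votes = Counter()
    for word in text.split():
        votes.update(word_label_counts.get(word, Counter()))
    return votes.most_common(1)[0][0] if votes else "unknown"

print(classify("love the battery"))  # -> positive (based on the toy labels)

# --- Self-supervised learning: the text itself supplies the "labels" ---
sample_text = "the cat sat on the mat the cat ate"
next_word = defaultdict(Counter)
words = sample_text.split()
for current, following in zip(words, words[1:]):
    next_word[current][following] += 1

def predict_next(word):
    # Predict the word that most often followed this one in the sample text.
    counts = next_word.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> "cat", the most frequent follower in the sample

Real text models use neural networks with billions of parameters rather than word counts, but the division of labor is the same: labeled examples in supervised learning, raw text in self-supervised learning.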
What is a model?
A model is a large interconnected neural network with weights and biases, created by training the network on a massive dataset. GPT-3 has 175 billion parameters; Llama 2 comes in 7B, 13B, and 70B parameter versions.

Model size
GPT-3 has 175 billion parameters and was trained on 45 TB of text data. If each parameter is stored as a float32 (4 bytes), then 175B x 4 bytes = 700 GB. A 7B-parameter Llama 2 model stored the same way takes 28 GB; by comparison, my Nvidia 3070 has 6 GB of memory.

Making models accessible
- Cloud compute (GPU, TPU)
- Pruning
- Quantization (GPT-3 uses float16)

1. Procedural programming languages
Procedural programming languages are computer languages that use precise steps to compose programs. In a way, all programming languages are procedural, but the term typically refers to languages with a limited set of data types such as numbers and strings. C, Fortran, and Pascal are examples of procedural languages; they allow programmers to create procedures or subroutines to perform specific tasks. Today, procedural languages are primarily used for introductory programming classes; historically, they were used to write the earliest scientific and engineering applications.

2. Object-oriented programming languages
Object-oriented programming languages are computer languages that allow developers to create their own data types by organizing data and related functions into objects. Object-oriented (OO) languages greatly simplify representing the real world in computer programs and are widely used in software development. Examples of object-oriented languages include Java, C#, and C++.

3. Scripting languages
Scripting languages are computer languages used to automate tasks using the capabilities of existing applications. Scripting languages are typically aimed at end users and are considered easier to learn than procedural or object-oriented languages. AppleScript is an example of a scripting language for macOS, and AutoHotkey is an example of a scripting language for Windows. JavaScript and Python are scripting languages that have evolved into powerful languages for creating computer applications. Scripting languages are typically not used to create software for commercial distribution, since scripts are not compiled and the programs can be read by all users.

4. Markup languages
Markup languages are computer languages used to specify how information should be displayed or interpreted. HTML, Markdown, and XML are well-known markup languages. Markup languages define markup tags, which are used to create web pages and other content that can be displayed on a variety of devices.

5. Domain-specific languages
Domain-specific languages are computer languages optimized for specific application domains. SQL is an example of a domain-specific language; others include R for statistical applications and MATLAB for engineering applications. Domain-specific languages greatly simplify application development for complex domains such as data retrieval (SQL) and statistical data analysis (R).

6. Low-level languages
Low-level languages are programming languages that are close to the processor's native instruction set; they are sometimes called assembly languages. Programs in all other languages (e.g., procedural, object-oriented, and domain-specific languages) are converted by compilers into low-level language programs for each type of processor.
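As a small illustration of the idea in item 6 (not from the original notes), Python's built-in dis module shows the low-level instructions that a short high-level function is translated into. Python compiles to bytecode for its own virtual machine rather than to a processor's native instruction set, so treat this as an analogy for what compilers do, not an example of actual machine code.

# Peek at the low-level instructions behind a one-line high-level function.
import dis

def area(width, height):
    return width * height

# Prints instructions such as LOAD_FAST and a multiply opcode,
# showing how readable code becomes a sequence of simple operations.
dis.dis(area)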
Binary Code
Eventually, all computer programs are stored as instructions in binary code. Computers can only read binary code, which is a collection of 1s and 0s. Binary code is the native language of computers and is necessary for communication and storage of data; for example, files and data are stored in binary format on hard drives and other storage media. Specialized software programs called compilers convert software code, written in any of the above languages, into binary code that can then be executed directly by computers. Compilers thus allow developers to write computer programs in languages that resemble plain English (called high-level languages) and convert these programs into binary code customized for each processor. (A short example of binary representation appears just before the Programming Basics section below.)

Computer Programming
Each type of programming language has its own strengths and weaknesses, and the choice of language often depends on the specific needs of the application being developed. It is common to use a combination of languages to build a software application: you may use HTML and JavaScript to manage the look and feel (frontend) of the application, Java or C++ to create the middleware that handles the business logic and responds to requests from the frontend, and SQL in your backend code to help the middleware interact with the database to store and retrieve information.

Programming languages and frameworks are evolving rapidly to handle emerging business needs. These days, it is becoming increasingly common to use JavaScript to build the frontend as well as the backend, so you just need to learn one language to build entire applications, which greatly improves developer productivity. Frameworks like React Native let you use JavaScript to build mobile applications; these frameworks also do the heavy lifting of converting your JavaScript code into the low-level, platform-specific components needed to work with Apple iOS and Google Android phones.

We begin our introduction to programming languages with block-based programming. Block-based programming is a way to use graphical interfaces to write simple programs. If you have never tried computer programming before, or if programming languages appear complicated, you could try block-based coding until you get comfortable enough to use regular programming languages.

Block-Based Coding
Block coding is a visual programming language that uses blocks or graphical elements to represent programming concepts instead of traditional text-based code. These blocks can be dragged and dropped to create a sequence of commands or instructions. Blockly from Google is an example of a block-based computer language; Scratch from the MIT Media Lab is another, and it allows developers to create animations and stories. Blockly lets you use simple graphical interfaces to specify instructions and converts these instructions into well-formed programs in different languages. Block coding is a fun and interactive way to learn programming, ideal for beginners and even children. It allows you to focus on the logical structures of programming without worrying about the syntax and details of text-based coding. Block coding can give you a taste of the power of programming, and it can also help you build foundational skills to assist in the move to text-based programming.
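As promised above, here is a tiny illustrative example (not from the original notes) of the 1s and 0s behind familiar values, using Python's built-in formatting functions.

# Show the binary representation of a number and of a character's code.
print(format(50, "b"))          # 110010: the number 50 in binary
print(format(ord("A"), "08b"))  # 01000001: the character 'A' as an 8-bit pattern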
Programming Basics
While there are many popular programming languages (e.g., Java, C#, C, C++, JavaScript, Python), they all share most of the same underlying concepts. Once you learn the basic programming concepts and use them in a few languages, learning new programming languages will be easy and fun. Here are a few concepts you will need to learn no matter which language you choose. If you would like to practice the examples in this chapter and create your own programs, you can use the JDoodle online code editor.

Variables
A variable is a named storage location in a computer's memory that holds a value. Variables are the basic mechanism used to store and manipulate data in code, and they are one of the first things you will learn when you begin to write software programs. Let's say you are creating a program that calculates the area of rectangles. Since the area is computed from the length and width of the rectangle, you would need to store the width and height of the rectangle as variables: one variable for each dimension, perhaps one called width and the other called height. For simplicity in this example, let's assume all numbers are integers. Every language has its own way of declaring a variable. Once you declare the variables to hold the dimensions of the rectangle, you would assign values to the variables when a user inputs the width and height of the rectangle.

Variable declaration of the type int (integer):
int width;
int height;

Variable assignment:
width = 10;
height = 5;

In programming, the equals operator (=) is typically used to assign values to variables. Once we have the values assigned to variables, we can perform calculations to get the area of the rectangle. If we wish to save this value for future use, we need a third variable to store the value of the area:
int area;

We can now compute the area as the product of the width and height:
area = width * height;

In the above statement, we ask the computer to fetch the values stored in width and height and multiply the two. The result is stored in the variable area. As you can see, computer programs written in modern programming languages read much like the same commands written in plain English.

To be useful to end users, just doing the calculations is often not enough; users likely also want to see the results. You can print the output to the display using the print function available in most programming languages:
print("The area of the rectangle is:", area);

When users run your program and enter the height and width values, they will see the following message:
The area of the rectangle is: 50

Functions/Methods
A function (called a method in some programming languages) is a block of code that performs a specific task. A function is defined with a name and can be called, or invoked, repeatedly from other parts of a program. Functions provide a way to modularize code and make it easier to read, understand, and reuse. Instead of writing the same code multiple times in different parts of a program, a function can be defined once and called whenever it is needed. Functions also improve program correctness, since errors only need to be fixed in one place (the function) instead of in all the places where it is used. Functions typically have inputs and outputs. The inputs are called parameters or arguments; they represent the data that the function will receive and work on.
The outputs are the result of the function's computation and can be returned to the calling code. Functions are an important part of programming languages and are used extensively in both frontend and backend development. They allow us to write reusable code that can be called from anywhere in our program, making our code more modular and easier to maintain.

// History of IT, Legal Issues in Technology, Generative AI, Lecture 3 through functions and methods //
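To make the function concept concrete, here is a minimal illustrative sketch in Python (the chapter's earlier snippets use a C-style pseudocode, and the name rectangle_area is chosen just for this example). It turns the earlier area calculation into a reusable function with two parameters and a return value.

# The rectangle-area example rewritten as a reusable function.
def rectangle_area(width, height):
    # width and height are the parameters (inputs); the product is returned (output).
    return width * height

# The function can now be called (invoked) from anywhere in the program.
area = rectangle_area(10, 5)
print("The area of the rectangle is:", area)  # prints: The area of the rectangle is: 50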
