CPE413-EMERGING-TECHNOLOGIES
============================

**AI AND BLOCKCHAIN MODULE**

Submitted by: Hicalde, Elji T.; Santos, Martri R.; Bonavente, Jherome P.; Manalo, Jayson H.; Salivia, Albert D.; Divina, John Carlo D.

Submitted to: Dr. Teresa A. Yema

February 2025

TABLE OF CONTENTS
=================

[Cover Page](#cpe413-emerging-technologies)
[Table of Contents](#table-of-contents)
[Blockchain](#blockchain)
[History of Blockchain](#history-of-blockchain)
[Blockchain Explained](#blockchain-explained)
[How Does a Blockchain Work?](#how-does-a-blockchain-work)
[Why Do People Use Peer-to-Peer Networks](#why-do-people-use-peer-to-peer-network)
[The Three Main Pillars of Blockchain Technology](#the-three-main-pillars-of-blockchain-technology)
[Artificial Intelligence](#artificial-intelligence)
[What Is AI?](#what-is-ai)
[Need for AI](#need-for-ai)
[What Are the Major Goals of Artificial Intelligence?](#what-are-the-major-goals-of-artificial-intelligence)
[What Comprises Artificial Intelligence](#what-comprises-to-artificial-intelligence)
[Advantages of Artificial Intelligence](#advantages-of-artificial-intelligence)
[Disadvantages of Artificial Intelligence](#disadvantages-of-artificial-intelligence)
[History of Artificial Intelligence](#history-of-artificial-intelligence)
[Levels of AI](#levels-of-ai)
[Types of AI](#types-of-ai)
[References](#references)

**BLOCKCHAIN**
--------------

### **HISTORY OF BLOCKCHAIN**

Blockchain technology has a history that spans several decades, beginning with the early concepts of cryptography in the **1980s**. In 1982, cryptographer **David Chaum** introduced the idea of **blind signatures**, a method that would eventually play a key role in digital currency and privacy technologies. This concept laid the groundwork for blockchain by showing how transactions could be secured without revealing personal information.

In **1991**, researchers **Stuart Haber** and **W. Scott Stornetta** proposed a system for securely timestamping digital documents, ensuring their integrity and preventing tampering. Their work laid the foundation for blockchain's future applications by demonstrating the potential of a decentralized ledger.

The real breakthrough for blockchain came in **2008**, when an anonymous figure known as **Satoshi Nakamoto** released the Bitcoin white paper, titled "Bitcoin: A Peer-to-Peer Electronic Cash System." Nakamoto outlined a revolutionary idea: a decentralized digital currency based on a distributed ledger that would enable secure, peer-to-peer transactions without the need for intermediaries like banks. This concept became reality in January 2009 when Nakamoto mined the first Bitcoin block, called the "genesis block," and launched the Bitcoin network. This marked the creation of the first true blockchain, which relied on a proof-of-work consensus mechanism to validate transactions and secure the network.

As Bitcoin gained traction, other blockchain-based projects emerged. In **2011**, **Namecoin** was launched as the first alternative cryptocurrency (altcoin), aiming to decentralize domain name registration. Around the same time, **Litecoin**, created by **Charlie Lee**, introduced improvements to Bitcoin's code, offering faster transaction speeds and a different hashing algorithm.
However, the most significant development in blockchain came in **2013**, when **Vitalik Buterin** proposed **Ethereum**. Unlike Bitcoin, which was designed as a digital currency, Ethereum was envisioned as a decentralized platform for running smart contracts and decentralized applications (dApps). Ethereum introduced a new kind of blockchain, one that was programmable and capable of supporting a wide range of applications beyond simple currency transactions. Ethereum officially launched in 2015, with its first block mined in July of that year. This ushered in a new era for blockchain technology, allowing developers to build decentralized applications that ran on a secure, decentralized network.

However, Ethereum's journey was not without challenges. In **2016**, the **DAO (Decentralized Autonomous Organization)**, a project built on Ethereum, was hacked, and a large portion of its funds was stolen. This event led to a controversial decision to hard fork the Ethereum blockchain to recover the stolen funds, creating two separate chains: **Ethereum (ETH)** and **Ethereum Classic (ETC)**.

The years following Ethereum's launch saw blockchain technology expand beyond cryptocurrency. In **2017**, the rise of **Initial Coin Offerings (ICOs)** saw blockchain become a popular tool for fundraising and launching new projects. During this time, **decentralized finance (DeFi)** platforms began to emerge, utilizing Ethereum's smart contracts to offer services like lending, borrowing, and trading without relying on traditional financial institutions. Blockchain also gained attention from major corporations and institutions. **IBM**, for example, began experimenting with blockchain for supply chain management, while **JPMorgan** created its own digital currency, **JPM Coin**, signaling a shift toward institutional interest in blockchain.

By **2020**, blockchain had firmly established itself as more than just a technology for cryptocurrency. The rise of **non-fungible tokens (NFTs)** captured the public's imagination by offering a way to represent ownership of digital assets such as art, music, and collectibles on the blockchain. Meanwhile, DeFi platforms continued to grow in popularity, offering decentralized alternatives to traditional financial products. In response to growing demand for blockchain scalability and energy efficiency, Ethereum began its transition to **Ethereum 2.0**, a major upgrade designed to replace the energy-intensive proof-of-work consensus mechanism with a more efficient proof-of-stake system.

By **2022** and beyond, blockchain technology continues to evolve, with new projects focusing on scalability solutions such as Layer 2 networks and blockchain interoperability. **Central Bank Digital Currencies (CBDCs)** have also become a hot topic, with countries like China, the U.S., and the EU exploring the potential of digital currencies issued by central banks based on blockchain. As blockchain technology matures, it is likely to have an even greater impact on industries ranging from finance and supply chain to healthcare and governance, with the promise of decentralized, secure, and transparent solutions for many of the world's challenges.

### **BLOCKCHAIN EXPLAINED**

Blockchain is a distributed database solution that maintains a continuously growing list of data records that are confirmed by the nodes participating in it.
The data is recorded in a public ledger, including information about every transaction ever completed. Blockchain is a decentralized solution which does not require any third-party organization in the middle. The information about every transaction ever completed in Blockchain is shared and available to all nodes. This attribute makes the system more transparent than centralized transactions involving a third party. In addition, the nodes in Blockchain are all anonymous, which makes it more secure for other nodes to confirm the transactions.

Bitcoin was the first application that introduced Blockchain technology. Bitcoin created a decentralized environment for cryptocurrency, where the participants can buy and exchange goods with digital money. Blockchain is the decentralized managing technique of Bitcoin, designed for issuing and transferring money for the users of the Bitcoin currency. This technique can support the public ledger of all Bitcoin transactions that have ever been executed, without any control of a third-party organization. The advantage of Blockchain is that the public ledger cannot be modified or deleted after the data has been approved by all nodes. This is why Blockchain is well known for its data integrity and security characteristics.

Blockchain technology can also be applied to other types of uses. It can, for example, create an environment for digital contracts and peer-to-peer data sharing in a cloud service. The strong point of the Blockchain technique, data integrity, is the reason why its use also extends to other services and applications.

Blockchain is a type of shared database that differs from a typical database in the way it stores information; blockchains store data in blocks linked together via cryptography. Different types of information can be stored on a blockchain, but the most common use has been as a transaction ledger. In Bitcoin's case, the blockchain is decentralized, so no single person or group has control---instead, all users collectively retain control. Decentralized blockchains are immutable, which means that the data entered is irreversible. For Bitcoin, transactions are permanently recorded and viewable to anyone.

### **HOW DOES A BLOCKCHAIN WORK?**

You might be familiar with spreadsheets or databases. A **blockchain** is somewhat similar because it is a database where information is entered and stored. The key difference between a traditional database or spreadsheet and a blockchain is how the data is structured and accessed.

A blockchain consists of programs called scripts that conduct the tasks you usually would in a database: entering and accessing information, and saving and storing it somewhere. A blockchain is distributed, which means multiple copies are saved on many machines, and they must all match for it to be valid.

The Bitcoin blockchain collects transaction information and enters it into a 4MB file called a block (different blockchains have different size blocks). Once the block is full, the block data is run through a cryptographic hash function, which creates a hexadecimal number called the block header hash. The hash is then entered into the following block header and encrypted with the other information in that block's header, creating a chain of blocks, hence the name "blockchain."
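The chaining described above can be made concrete with a short, hypothetical Python sketch: each block stores the hash of the previous block's contents, and a toy nonce search stands in for the mining step discussed in the next section. The block fields, the difficulty of four leading zeros, and the sample transactions are illustrative assumptions, not Bitcoin's actual formats.

```python
import hashlib
import json

def hash_block(block: dict) -> str:
    """Hash a block's contents with SHA-256 (hexadecimal digest)."""
    encoded = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(encoded).hexdigest()

def mine_block(transactions, previous_hash, difficulty=4):
    """Toy proof-of-work: increment the nonce until the hash starts
    with `difficulty` zeros (real networks use a numeric target)."""
    nonce = 0
    while True:
        block = {
            "transactions": transactions,
            "previous_hash": previous_hash,
            "nonce": nonce,
        }
        block_hash = hash_block(block)
        if block_hash.startswith("0" * difficulty):
            return block, block_hash
        nonce += 1

# Build a tiny chain: each block stores the hash of the block before it.
genesis, genesis_hash = mine_block(["genesis"], previous_hash="0" * 64)
block1, block1_hash = mine_block(["Alice pays Bob 1 BTC"], previous_hash=genesis_hash)

print(block1["previous_hash"] == genesis_hash)  # True: the blocks are linked
```

Changing any field of an earlier block changes its hash, which breaks the link stored in every later block; this is the property the module later describes as immutability.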
**Transaction Process**

Transactions follow a specific process, depending on the blockchain. For example, on Bitcoin's blockchain, if you initiate a transaction using your cryptocurrency wallet---the application that provides an interface for the blockchain---it starts a sequence of events.

In Bitcoin, your transaction is sent to a memory pool, where it is stored and queued until a miner picks it up. Once it is entered into a block and the block fills up with transactions, it is closed, and the mining begins. Every node in the network proposes its own blocks in this way because they all choose different transactions. Each works on their own blocks, trying to find a solution to the difficulty target, using the "nonce," short for number used once.

The nonce value is a field in the block header that is changeable, and its value incrementally increases with every mining attempt. If the resulting hash isn't equal to or less than the target hash, a value of one is added to the nonce, a new hash is generated, and so on. The nonce rolls over about every 4.5 billion attempts (which takes less than one second) and uses another value called the extra nonce as an additional counter. This continues until a miner generates a valid hash, winning the race and receiving the reward.

Once a block is closed, a transaction is complete. However, the block is not considered confirmed until five other blocks have been validated. Confirmation takes the network about one hour to complete because it averages just under 10 minutes per block (the first block with your transaction plus five following blocks, multiplied by 10 minutes, equals 60 minutes).

### **WHY DO PEOPLE USE PEER-TO-PEER NETWORKS**

A peer-to-peer network is an information technology (IT) infrastructure that allows two or more computer systems to connect and share resources without requiring a separate server or server software. Workplaces may set up a P2P network by physically connecting computers into a linked system or creating a virtual network. You can also set up computers to be the clients and servers of their network so they can share resources with other networked computers.

**Some major features of the P2P network include:**

- Each computer in a P2P network provides resources to the network and consumes resources that the network provides. Resources such as files, printers, storage, bandwidth and processing power can be shared between various computers in the network.
- A P2P network is easy to configure. Once it's set up, access is controlled by setting sharing permissions on each computer. Stricter access can be controlled by assigning passwords to specific resources.
- Some P2P networks are formed by overlaying a virtual network on a physical network. The network uses the physical connection to transfer data while the virtual overlay allows the computers on the network to communicate with each other.

**Advantages of using a peer-to-peer network**

- **Easy file sharing -** A P2P network allows for quick file sharing over large distances, allowing users to access files at any time. This ease of access makes it particularly useful for sharing data across different locations with very little delay.
- **Reduced costs -** Setting up a P2P network is cost-effective, as it eliminates the need for a dedicated server or a network operating system. Additionally, businesses don't need to hire a full-time system administrator, saving on operating expenses.
- **Adaptability -** P2P networks are highly adaptable, allowing for easy expansion by adding new clients. This scalability makes them more flexible than traditional client-server networks, allowing for growth with minimal disruptions.
- **Reliability -** In contrast to client-server networks, which depend on a central server, P2P networks continue to function even if one computer goes down. Traffic is spread across multiple devices, preventing bottlenecks and reducing the risk of complete network failure.
- **High performance -** As more clients join a P2P network, the network's performance often improves. Each client acts as a server, contributing resources and ensuring that the system scales efficiently as usage increases.
- **Efficiency -** Modern P2P networks are designed for efficient collaboration between devices with varying resources. This cooperative structure enhances the overall performance and resource utilization of the network, benefiting all users.

**Tips for using a P2P network**

- **Create a decentralized policy**
- **Invest in cybersecurity**
- **Monitor for vulnerabilities**
- **Establish remote access controls**
- **Regularly update software and systems**

**Examples of P2P networks**

P2P networking operates on three basic levels, ranging from simple USB connections between two computers to advanced software-based protocols that manage multiple devices over the internet. Intermediate networks use copper wires to link larger groups of computers, while advanced systems offer more sophisticated, internet-based connections. These levels form the foundation for building various types of P2P networks such as:

- **Unstructured P2P networks**
- **Structured P2P networks**
- **Hybrid networks**

### **THE THREE MAIN PILLARS OF BLOCKCHAIN TECHNOLOGY**

- **Decentralization** - Blockchain removes the need for a centralized authority by distributing data across a network of nodes, ensuring no single entity controls the system.
- **Transparency** - All transactions are visible on a public ledger, promoting trust and accountability among participants, while maintaining privacy through encrypted transaction details.
- **Immutability** - Once data is recorded, it cannot be altered or deleted, ensuring the integrity and permanence of the transaction history.

**ARTIFICIAL INTELLIGENCE**
---------------------------

### **WHAT IS AI?**

**Artificial intelligence (AI)** is the ability of a digital [computer](https://www.britannica.com/technology/computer) or computer-controlled [robot](https://www.britannica.com/technology/robot-technology) to perform tasks commonly associated with intelligent beings. Artificial intelligence is the simulation of human intelligence processes by machines, especially computer systems.
Examples of AI applications include [expert systems](https://www.techtarget.com/searchenterpriseai/definition/expert-system), natural language processing ([NLP](https://www.techtarget.com/searchenterpriseai/definition/natural-language-processing-NLP)), [speech recognition](https://www.techtarget.com/searchcustomerexperience/definition/speech-recognition) and [machine vision](https://www.techtarget.com/searchenterpriseai/definition/machine-vision-computer-vision). AI requires specialized hardware and software for writing and training machine learning algorithms. No single programming language is used exclusively in AI, but Python, R, Java, C++ and Julia are all popular languages among AI developers.

### **NEED FOR AI**

Machines with human-like intelligence are influencing and, at the same time, helping our lives at every stage. This is why it has become important to understand more about artificial intelligence and why we need AI in our lives. Artificial intelligence (AI) has already started playing a major role in our lives. Today, more than before, it has become easy to spot the portions of our modern life where artificial intelligence has penetrated. Understanding the role of AI in our lives can throw light on its need in society, businesses, and regular day-to-day life. Human efficiency, activity, and capabilities are highly improved and augmented when coupled with intelligent machines.

Earth has already witnessed three industrial revolutions. The fourth one is presumed to be driven by artificial intelligence and its capabilities. It is likely to change our concept of what being a human feels like. The fourth [AI-driven Industrial revolution](https://www.queppelin.com/7-ways-in-which-ai-can-change-the-future/) will bring an impact that no other revolution has brought to date.

Artificial intelligence has long been a subject of anticipation among both popular and scientific culture, with the potential to transform businesses as well as the relationship between people and technology at large. So, why is AI usage reaching critical mass today? Because of the proliferation of data and the maturity of other innovations in [cloud processing and computing power](https://www.accenture.com/ph-en/cloud/insights/cloud-computing-index), AI adoption is growing faster than ever. Companies now have access to an unprecedented amount of data, including dark data they didn't even realize they had until now. These treasure troves are a boon to the growth of AI.

**A critical source of business value---when done right**

AI has long been regarded as a potential source of business innovation. With the enablers now in place, organizations are starting to see how AI can multiply value for them. Automation cuts costs and brings new levels of consistency, speed and scalability to business processes; in fact, some Accenture clients are seeing time savings of 70 percent. Even more compelling, however, is the ability of AI to drive growth. Companies that scale successfully see 3X the return on their AI investments compared to those who are stuck in the pilot stage. No wonder 84 percent of C-suite executives believe they must leverage AI to achieve their growth objectives.

### **WHAT ARE THE MAJOR GOALS OF ARTIFICIAL INTELLIGENCE?**

**1. Problem-Solving and Decision Making**

One of the central aims of AI is to develop systems that can analyze large datasets, identify patterns, and make data-driven decisions.
This ability to solve problems and make decisions efficiently is invaluable across various industries, from healthcare and finance to transportation and manufacturing.

**2. Natural Language Processing (NLP)**

AI-driven [NLP](https://www.simplilearn.com/tutorials/artificial-intelligence-tutorial/what-is-natural-language-processing-nlp) is a critical aspect of creating machines that can understand and communicate with humans in natural language. NLP enables virtual assistants like Siri and Alexa to comprehend user queries and respond appropriately, making machine interactions more intuitive and user-friendly.

**3. Machine Learning and Deep Learning**

[Machine learning](https://www.simplilearn.com/tutorials/machine-learning-tutorial/what-is-machine-learning) and [deep learning](https://www.simplilearn.com/tutorials/deep-learning-tutorial) are subsets of AI that focus on enabling machines to learn from data without explicit programming. These techniques have led to significant advancements in computer vision, speech recognition, and recommendation systems, among others.

**4. Robotics and Automation**

Integrating AI with robotics has given rise to intelligent machines that can perform physical tasks with precision and accuracy. From assembly line robots in manufacturing plants to autonomous vehicles, AI-powered automation is reshaping industries worldwide.

**5. Enhancing Healthcare and Medicine**

AI's goal in healthcare is to improve diagnostics, treatment planning, and patient care. Medical professionals can leverage AI algorithms to analyze medical images, predict disease outcomes, and develop personalized treatment plans for patients.

**6. Fostering Creativity and Innovation**

AI is not limited to practical applications alone; it has the potential to spur creativity and innovation. AI-powered tools can assist artists, writers, and designers in generating creative new ideas and push the boundaries of human imagination.

### **WHAT COMPRISES ARTIFICIAL INTELLIGENCE**

1. **Learning** - Learning is a crucial component of AI as it enables AI systems to learn from data and improve performance without being explicitly programmed by a human. AI technology learns by labeling data, discovering patterns within the data, and reinforcing this learning via feedback, often in the form of rewards or punishments. Punishments are negative values associated with undesirable outcomes or actions. Example: Voice recognition systems like Siri or Alexa learn correct grammar and the skeleton of a language. (A minimal sketch of reward-based learning appears after the lists below.)
2. **Reasoning and Decision Making**
3. **Problem Solving**

**The Five Branches of AI**

Below are the five most significant branches or subfields of AI.

1. **Machine Learning**
2. **Deep Learning**
3. **Natural Language Processing**
4. **Robotics**
5. **Fuzzy Logic**

### **ADVANTAGES OF ARTIFICIAL INTELLIGENCE**

1. **Reduction in Human Error**
2. **Decision-Making**
3. **Zero Risks**
4. **Availability**
5. **Digital Assistance**
6. **New Inventions**
7. **Unbiased Decisions**
8. **Automation**
9. **Daily Applications**
10. **Medical Applications**

### **DISADVANTAGES OF ARTIFICIAL INTELLIGENCE**

1. **Creativity**
2. **Emotional Intelligence**
3. **Encouraging Human Laziness**
4. **Privacy Concerns**
5. **Job Displacement**
6. **Over-dependence on Technology**
7. **Algorithm Development Concerns**
8. **Environmental Issues**
9. **Lack of Common Sense**
10. **Interpretability and Transparency**
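As referenced under the Learning component above, the following is a minimal, hypothetical Python sketch of feedback-driven learning: an agent repeatedly chooses between two actions and nudges its value estimates up after a reward (+1) and down after a punishment (-1). The reward probabilities, exploration rate, and learning rate are invented for illustration and do not come from the module.

```python
import random

# Hypothetical example: two actions whose (unknown) chance of a reward differs.
REWARD_PROBABILITY = {"action_a": 0.7, "action_b": 0.3}
values = {"action_a": 0.0, "action_b": 0.0}  # the agent's learned estimates
LEARNING_RATE = 0.1

for step in range(1000):
    # Mostly pick the action currently believed to be best, sometimes explore.
    if random.random() < 0.1:
        action = random.choice(list(values))
    else:
        action = max(values, key=values.get)

    # Feedback: +1 reward for a desirable outcome, -1 punishment otherwise.
    feedback = 1.0 if random.random() < REWARD_PROBABILITY[action] else -1.0

    # Reinforce: move the estimate toward the feedback just received.
    values[action] += LEARNING_RATE * (feedback - values[action])

print(values)  # action_a should end up with the higher estimated value
```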
### **HISTORY OF ARTIFICIAL INTELLIGENCE**

**George Boole** was the first to describe a formal language for logical reasoning in 1847. The next milestone in artificial intelligence history was in 1936, when Alan M. Turing described the Turing machine. Warren McCulloch and Walter Pitts created the model of artificial neurons in 1943, and it was in 1944 when John von Neumann and Oskar Morgenstern formulated decision theory, which provided a complete and formal framework for specifying the preferences of agents. In **1949**, Donald Hebb presented a value-changing rule for the connections of artificial neurons that made learning possible, and Marvin Minsky and Dean Edmonds created the first neural computer in 1951.

Artificial intelligence (AI) was born in the summer of 1956, when John McCarthy first defined the term. It was the first time the subject caught the attention of researchers, and it was discussed at a conference at Dartmouth. The next year, the first general problem solver was tested, and one year later, McCarthy, regarded as the father of AI, announced the LISP language for creating AI software. Lisp, which stands for list processing, is still used regularly today.

Herbert Simon in 1965 stated: "Machines will be capable, within twenty years, of doing any work a man can do." However, years later scientists realized that creating an algorithm that can do anything a human can do is nearly impossible. Nowadays, AI has a new meaning: creating intelligent agents to help us do our work faster and easier (Russell & Norvig, 2005; McDaniel, 1994; Shirai & Tsujii, 1982; Mitchell, 1996; Schreiber, 1999).

*Perceptrons*, published by Marvin Minsky and Seymour Papert in 1969, demonstrated the limits of simple neural networks. In 1969, the first International Joint Conference on Artificial Intelligence was held in Washington, DC. PROLOG, a new language for generating AI systems, was created by Alain Colmerauer in 1972. In 1983, John Laird and Paul Rosenbloom, working with Allen Newell, completed CMU dissertations on SOAR.

### **LEVELS OF AI**

**LEVEL 1: RULE-BASED LOGIC**

Rule-based logic (alternatively, "symbolic rule-based reasoning") was the first form of "AI" that emerged in the 1960s and 1970s, and produced the initial surge of interest in what computers could do. As computers became more powerful in the 1980s, this logic was captured by "expert systems." Rule-based logic involves rules specified by a human, coded into logic. Simple examples include "if you are eating red meat, then drink red wine," but the number of cases quickly exploded, as occurred in health: "a male patient, over 60, with high blood sugar, not a smoker, no parents with diabetes, ..., should follow this [multidimensional] diet." Rule-based logic did not come close to meeting the early expectations of AI, but far from being a complete failure (as it was once viewed), rule-based logic is used throughout modern machine intelligence. There are a number of commercial "rules engines" available today which are widely used.

**LEVEL 2: BASIC MACHINE LEARNING**

This covers well-known statistical models using lookup tables, parametric models (including linear or nonlinear models and neural networks) or nonparametric models.
These methods first emerged in the early 1900s under the umbrella of statistics, but grew dramatically as computers became more powerful (computer scientists entered the field using the name "machine learning"). The most popular models were linear (in the parameters), capturing basic relationships between input variables (also known as explanatory or independent variables, or "covariates") and a response. For example, we might use price, with variations such as price-squared and log of price, as inputs to predict the response of demand. These models matured with the introduction of nonlinear models (such as logistic regression and early neural networks), and nonparametric models (using local approximations). Neural networks have been popular since the 1970s, and widely used in many nonlinear (deterministic) estimation problems.

**LEVEL 3: PATTERN RECOGNITION USING DEEP NEURAL NETWORKS**

The research community has studied neural networks since the 1960s, but it was the use of deep neural networks, along with the availability of large datasets, starting around 2010, that produced the first true breakthroughs in pattern recognition. This was the foundation of the modern use of "AI" when it emerged in the 2010s for voice recognition, facial recognition, and image identification. Pattern recognition using deep neural networks is just another form of nonlinear modeling, but it took machine learning into an entirely new class of applications, bringing a level of visibility to the field of machine learning far beyond what had been achieved with the work described under level 2.

**LEVEL 4: LARGE LANGUAGE MODELS**

Learning speech patterns using ultra-deep neural networks burst into the public imagination in 2023 with the introduction of ChatGPT, building on research that has been evolving since the 2000s (but especially post 2010). Although large language models can be viewed as merely an extension of deep neural networks for pattern recognition, I have reserved an entire level for this application given the dramatic increase in the complexity of language problems (LLMs require substantially more data preparation), and the sheer growth in the size of neural networks (and the training datasets to train them) used to support this problem setting.

Unlike pattern recognition, where the "answer" to "what is this pattern" is deterministic, LLMs deal with much richer inputs and can produce a range of outputs to a single query (which means the response is stochastic). Some people equate its ability to create its own sequences of words and phrases with creativity, but it is nothing more than randomization among patterns found in the training dataset. This means that LLMs will always produce words and phrases that have been used in the training datasets. It is for this reason that companies supporting this technology are investing heavily not just in the training using massive datasets, but also in the active use of people to guide the behavior of the neural network. LLMs (such as ChatGPT) are often associated with a much broader category known as "artificial general intelligence," which includes more general forms of learning for unstructured problems. I put these capabilities in Level 7.

There has been a lot of hype about the potential danger of LLMs. The only danger of an LLM is misinformation, which means any damage is still coming from people.
There is, of course, no shortage of misinformation on the internet today, so we can hope that modern society has developed defenses to misinformation, but it is something that we have to be aware of.

**LEVEL 5: DETERMINISTIC OPTIMIZATION**

Unlike machine learning models, deterministic optimization depends on an explicit model of a problem that includes both the physics of the problem as well as a performance metric. Inputs to the model include controllable parameters, also known as actions or decisions. Sophisticated algorithms are used to search over feasible regions to optimize the performance metrics.

Unlike machine learning-based technologies (levels 2-4), which have to be trained using a training dataset, deterministic optimization does not involve any training. Instead, it uses a model of a problem that captures the physics of the underlying application, along with a performance metric that allows us to evaluate decisions. Then, algorithms are used to search for the best decision that is implementable, and which optimizes the specified performance metric. For example, in the 1990s tools emerged for scheduling airlines that were able to produce more efficient schedules. There are countless examples of achievements like this from the operations research community.

One class of deterministic optimization problems arises in machine learning. The problem is to find the best set of parameters that minimizes the difference between a parameterized function (often called a model) and a set of observations (the training dataset). Using a neural network to solve a problem (which requires solving the optimization problem of fitting the neural network to the training dataset), versus using an optimization model to optimize some problem (model-based optimization), are often confused, but they are fundamentally different.

Model-based optimization, unlike training-based machine learning, is able to produce solutions that are better than what any human can achieve. The price of this performance is that we have to invest the time to create a model of the physical problem inside the computer. Note that a neural network does not directly capture the physics of a problem; instead, it tries to learn appropriate behaviors through the training dataset.

A major difference between a statistical model (such as a neural network) and an optimization model is the objective function. The objective when fitting a statistical model to a training dataset is always one of minimizing some distance metric (such as the sum of the squares of the difference between the model and the training dataset). By contrast, optimization models require that an objective function be specified by the analyst, in addition to the constraints that capture the properties of a problem.

**LEVEL 6: SEQUENTIAL DECISION PROBLEMS**

Here we are solving a problem that consists of the sequence *decision, information, decision, information, ...*, where decisions have to be made given the uncertainty of information that has not yet arrived. The optimization problem is to find the best method for making decisions (known as a policy). Special cases of sequential decision problems include: *decision, information* (this describes classical stochastic search problems) and *decision, information, decision* (this describes two-stage stochastic programming problems). More general problems have sequences of "*decision, information*" that extend over either a finite or infinite horizon.
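To make the idea of optimizing a policy (rather than a single decision) concrete before it is formalized below, here is a minimal, hypothetical Python sketch: an order-up-to inventory policy maps the current state (inventory on hand) to a decision (how much to order) and is evaluated on average by simulating the sequence decision, information, decision, information, ... under random demand. The demand range, prices, and candidate policy parameters are invented for illustration and are not from the module.

```python
import random

random.seed(0)

def order_up_to_policy(inventory: int, target: int) -> int:
    """A simple policy: map the state (current inventory) to a decision
    (how much to order) by ordering up to a target level."""
    return max(target - inventory, 0)

def simulate(target: int, horizon: int = 1000) -> float:
    """Evaluate the policy on average over random demand
    (decision, information, decision, information, ...)."""
    inventory, total_profit = 0, 0.0
    for _ in range(horizon):
        order = order_up_to_policy(inventory, target)  # decision
        inventory += order
        demand = random.randint(0, 10)                 # information
        sales = min(demand, inventory)
        inventory -= sales
        total_profit += 5.0 * sales - 2.0 * order - 0.1 * inventory
    return total_profit / horizon

# Search over the policy parameter, not over a single decision.
best_target = max(range(0, 16), key=simulate)
print(best_target, round(simulate(best_target), 2))
```

The outer search over the order-up-to level plays the role of the optimization in Level 5, while the inner simulation evaluates a Level 6 policy on average, in the spirit of the expectation-based metric described next.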
While deterministic optimization problems search for the best decision *x* (which is typically a vector), sequential decision problems require searching for a function *X^π(S_t)*, known as a *policy* (designated "*π*"), which maps the information in the state variable *S_t* to a decision. For sequential decision problems, a decision might be binary, discrete, scalar continuous, or any type of vector. The challenge is that the policy has to work well over time, according to some metric (typically an expectation, which means we are optimizing the performance on average).

Levels 5 and 6, which are both a form of optimization, can produce tools that outperform humans. Such problems may be quite complex, either due to size (such as planning an airline schedule), or because of the inherent complexity of sequential decision problems. However, these are for well-structured problems with properties that can be fully specified by a mathematical model.

**LEVEL 7: SCIENCE FICTION**

We reserve level 7 for unstructured problems which require the highest levels of intelligence. I view this as pure science fiction -- it is easy to speculate, but I do not see the economic justification to develop such an advanced technology (consider the cost of training an LLM, which does not possess any of these capabilities). This does not mean computer scientists will not try to claim these behaviors (as some have tried to attribute these capabilities to LLMs). Technology (especially expensive technology) does not happen unless there is a significant economic benefit to people. Some examples of these highest levels of intelligence include:

- **Knowledge representation** reflects the ability to characterize a poorly defined problem. For example, we may have a robot that needs to find its way around an urban area, or a desert, or a room full of furniture. The robot needs to characterize the area that it is in and identify the questions that need to be answered that are suitable for the context. Then this information needs to be organized in a way that the robot can accomplish some objective.
- **Creativity** -- Optimization involves finding the best of a well-defined set of decisions, where the set of decisions is clearly articulated, with a performance metric that can be used to evaluate different decisions. Creativity is required when we have a performance metric but may not have specified the decisions that are available. For example, we may wish to reduce the number of deaths from COVID, but do not have a well-defined set of actions (or decisions). In English, a new type of decision is called an "idea."
- **Judgment** -- Decisions for simpler, well-structured problems have clear metrics (winning a game, maximizing profits, treating a medical condition), but there are problems that involve complex judgments, such as choosing whether to swerve to avoid hitting a cyclist but sending the car into the path of another car, or whether to bomb a military target that will also kill civilians.
- **Reasoning** -- This requires the ability to think through different steps to achieve a goal. While this is done in levels 5 and 6 in the context of well-structured problems, reasoning requires being able to think through steps that are less structured or completely unstructured.
This cannot be done using large language models, since these are fitted with the singular goal of minimizing the difference between "predicted words" and "actual words," while reasoning requires the ability to incorporate other goals.

### **TYPES OF AI**

Artificial Intelligence can be broadly classified into several types based on capabilities, functionalities, and technologies. Here's an overview of the different types of AI:

1. **Based on Capabilities**

- Narrow AI (Weak AI) - This type of AI is designed to perform a narrow task (e.g., facial recognition, internet searches, or driving a car). Most current AI systems, including those that can play complex games like chess and Go, fall under this category. They operate under a limited pre-defined range or set of contexts.
- General AI (Strong AI) - A type of AI endowed with broad human-like cognitive capabilities, enabling it to tackle new and unfamiliar tasks autonomously. Such a robust [AI framework](https://www.simplilearn.com/open-source-ai-frameworks-article) possesses the capacity to discern, assimilate, and utilize its intelligence to resolve any challenge without needing human guidance.
- Superintelligent AI - This represents a future form of AI where machines could surpass human intelligence across all fields, including creativity, general wisdom, and problem-solving. Superintelligence is speculative and not yet realized.

2. **Based on Functionalities**

- Reactive Machines - These AI systems do not store memories or past experiences for future actions. They analyze and respond to different situations. IBM's Deep Blue, which beat Garry Kasparov at chess, is an example.
- Limited Memory - These AI systems can make informed and improved decisions by studying the past data they have collected. Most present-day AI applications, from chatbots and virtual assistants to self-driving cars, fall into this category.
- Theory of Mind - This is a more advanced type of AI that researchers are still working on. It would entail understanding and remembering emotions, beliefs, and needs, and, depending on those, making decisions. This type requires the machine to truly understand humans.
- Self-aware AI - This represents the future of AI, where machines will have their own consciousness, sentience, and self-awareness. This type of AI is still theoretical and would be capable of understanding and possessing emotions, which could lead them to form beliefs and desires.

3. **Based on Technologies**

- Machine Learning (ML) - AI systems capable of self-improvement through experience, without direct programming. They concentrate on creating software that can independently learn by accessing and utilizing data.
- Deep Learning - A subset of ML involving many layers of neural networks. It is used for learning from large amounts of data and is the technology behind voice control in consumer devices, image recognition, and many other applications.
- Natural Language Processing (NLP) - This AI technology enables machines to understand and interpret human language. It's used in [chatbots](https://www.simplilearn.com/creating-chatbots-guide-pdf), translation services, and sentiment analysis applications.
- Robotics - This field involves designing, constructing, operating, and using robots and computer systems for controlling them, sensory feedback, and information processing.
- Computer Vision - This technology allows machines to interpret the world visually, and it's used in various applications such as medical image analysis, surveillance, and manufacturing.
- Expert Systems - These AI systems answer questions and solve problems in a specific domain of expertise using rule-based systems.

### **REFERENCES**

Nakamoto, S. (2008). Bitcoin: A Peer-to-Peer Electronic Cash System.

Buterin, V. (2013). A Next-Generation Smart Contract and Decentralized Application Platform. Ethereum White Paper.

Buterin, V. (2016). Ethereum White Paper: A Next-Generation Smart Contract Platform.

Haber, S., & Stornetta, W. S. (1991). How to Time-Stamp a Digital Document.

Lee, C. (2011). Litecoin: A Peer-to-Peer Cryptocurrency.

The 7 Levels of AI. https://castle.princeton.edu/the-7-levels-of-ai/

https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0163477