AI Notes PDF
Uploaded by InnovativeSteelDrums
Chitkara University
Summary
Lecture notes on Artificial Intelligence, covering foundational concepts like machine learning, neural networks, and deep learning. The notes detail applications in entertainment, healthcare, and finance, and highlight the growing role of AI in various industries. The document includes links to practice papers.
Full Transcript
You can attempt these practice papers at any time (but only once) until Dec 29, 2024. A copy of your responses (i.e., the questions and the answers you marked) will be sent to your mailbox once you submit the form.

Link to ST-01 Practice Paper: https://forms.gle/uFFpXNhTyCGhFdmS8
Link to practice paper on search algorithms: https://forms.gle/G9hAC9AQxvfjz5co7
Link to ST-02 Practice Paper: https://forms.gle/oZfGXRPE8GfRrDhB7

Unit-01: Introduction to Artificial Intelligence (Lecture 01 to 05)

Lecture-01 Introduction to Artificial Intelligence

Artificial intelligence (AI) is a branch of computer science focused on creating intelligent machines capable of mimicking human cognitive functions such as learning and problem-solving. AI operates on a core set of concepts and technologies that enable machines to perform tasks that typically require human intelligence. With AI you do not need to pre-program a machine for every task; instead, you can build a machine with algorithms that let it act with intelligence of its own, and that is the awesomeness of AI. AI is not an entirely new idea: according to Greek myth, there were mechanical men in antiquity that could work and behave like humans.

Why Artificial Intelligence?
- With AI, you can create software and devices that solve real-world problems easily and accurately, in areas such as healthcare, marketing, and traffic management.
- With AI, you can create personal virtual assistants such as Cortana, Google Assistant, and Siri.
- With AI, you can build robots that can work in environments where human survival would be at risk.
- AI opens a path to new technologies, new devices, and new opportunities.

Here are some foundational concepts required to develop an AI system:

1. Machine Learning (ML): This is the backbone of AI, in which algorithms learn from data without being explicitly programmed.
It involves training an algorithm on a data set, allowing it to improve over time and make predictions or decisions based on new data.

2. Neural Networks: A neuron, in the context of artificial intelligence, is a mathematical function that mimics the behavior of a biological neuron in the human brain. It receives input signals, processes them through weights and biases, applies an activation function, and produces an output. Neurons are the fundamental units of a neural network, responsible for transforming input data into meaningful outputs by learning from examples. A neural network is a series of interconnected neurons organized into layers: an input layer, one or more hidden layers, and an output layer. Each node connects to others and has its own associated weight and threshold. If the output of an individual node is above the specified threshold, that node is activated and sends data to the next layer of the network; otherwise, no data is passed along. A neural network with only two or three layers is just a basic neural network.

3. Deep Learning: Deep learning is a subset of machine learning that involves neural networks with many layers (hence "deep"). A neural network with more than three layers (counting the input and output layers) can be considered a deep learning algorithm. These deep neural networks can automatically learn to represent data at multiple levels of abstraction, making them powerful for complex tasks like image and speech recognition, natural language processing, and even game playing. Unlike traditional machine learning methods, deep learning models can learn features directly from raw data, eliminating the need for manual feature extraction.

4. Natural Language Processing (NLP): NLP involves programming computers to process and analyze large amounts of natural language data, enabling interaction between computers and humans in natural language.
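The neuron described under concept 2 can be sketched in a few lines of Python. This is a minimal illustration, not part of the original notes; the weights, bias, and step-activation threshold are arbitrary example values.

```python
# Minimal sketch of a single artificial neuron (illustrative values only).
def neuron(inputs, weights, bias, threshold=0.0):
    # Weighted sum of the inputs, plus the bias.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Step activation: "fire" (output 1) only above the threshold.
    return 1 if total > threshold else 0

# Example: two inputs with hand-picked weights and bias.
print(neuron([1.0, 0.0], weights=[0.6, 0.4], bias=-0.5))  # weighted sum = 0.1, fires -> 1
print(neuron([0.0, 0.0], weights=[0.6, 0.4], bias=-0.5))  # weighted sum = -0.5, silent -> 0
```

In a real network, many such neurons are stacked into layers and the weights are adjusted during training rather than chosen by hand.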
5. Robotics: While often associated with AI, robotics merges AI concepts with physical components to create machines capable of performing a variety of tasks, from assembly-line work to complex surgeries.

6. Cognitive Computing: This AI approach mimics human brain processes to solve complex problems, often using pattern recognition, NLP, and data mining.

7. Expert Systems: These are AI systems that emulate the decision-making ability of a human expert, applying reasoning capabilities to reach conclusions.

Each of these concepts helps to build systems that can automate, enhance, and sometimes outperform human capabilities in specific tasks.

Core Goal of Artificial Intelligence (AI)

The core goal is to emulate human intelligence in machines. This can involve tasks like:
Reasoning: Analyze information and draw logical conclusions.
Learning: Acquire new knowledge and skills from data.
Problem-solving: Identify and solve problems in a goal-oriented way.
Decision-making: Evaluate options and make choices based on available information.

The functional goals of Artificial Intelligence are as follows:

1. To create expert systems − systems that exhibit intelligent behavior, learn, demonstrate, explain, and advise their users.

2. To implement human intelligence in machines − creating systems that understand, think, learn, and behave like humans.

3. To improve problem-solving skills − AI's potential for solving problems will make our lives easier, as complex duties can be delegated to dependable AI systems that simplify vital jobs.

4. To include knowledge representation − this is concerned with representing "what is known" to machines using a set of objects, relations, and concepts. The representation captures real-world data that a computer can use to solve complicated real-world problems, such as detecting a medical condition or conversing with humans in natural language.
5. To facilitate planning − through predictive analytics, data analysis, forecasting, and optimization models, AI-driven planning creates a procedural course of action for a system to reach its goals and optimizes overall performance. One of the principal goals of artificial intelligence is to use prediction to anticipate the future and determine the implications of our actions.

6. To allow continuous learning − conceptually, learning means the ability of computer algorithms to improve the knowledge of an AI program through observations and past experiences. Technically, AI programs process a collection of input-output pairs for a defined function and use the results to predict outcomes for new inputs.

7. AI primarily uses two learning models, supervised and unsupervised, where the main distinction lies in the use of labeled datasets. As AI systems learn independently, they require minimal or no human intervention. For example, ML defines an automated learning process.

8. To promote creativity − AI promotes creativity and artificial thinking that can help humans accomplish tasks better. It also offers a platform to augment and strengthen creativity, as AI can generate many novel ideas and concepts that can inspire and boost the overall creative process. For example, an AI system can provide multiple interior design options for a 3D-rendered apartment layout.

Lecture-02 Application Areas of Artificial Intelligence

Entertainment and Media - AI is increasingly used to generate and curate content, from writing articles to creating music and videos. Machine learning models can analyze vast amounts of data to identify trends, generate scripts, or even compose music, helping content creators and producers deliver engaging material more efficiently. OpenAI's GPT-3 has been used to write articles, generate creative writing prompts, and even assist in screenwriting.
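Goals 6 and 7 above describe learning from input-output pairs using labeled data. A toy supervised learner makes this concrete; the data points and the nearest-neighbour rule below are illustrative choices, not part of the original notes.

```python
# Toy supervised learning: predict a label for a new input from labeled
# input-output pairs, using the single nearest neighbour (invented data).
training = [
    ((1.0, 1.0), "cat"),
    ((1.2, 0.9), "cat"),
    ((8.0, 8.5), "dog"),
    ((7.5, 9.0), "dog"),
]

def predict(x):
    # Choose the label of the training input closest to x (squared distance).
    dist = lambda pair: (pair[0][0] - x[0]) ** 2 + (pair[0][1] - x[1]) ** 2
    return min(training, key=dist)[1]

print(predict((1.1, 1.0)))  # -> cat
print(predict((8.2, 8.8)))  # -> dog
```

An unsupervised learner, by contrast, would receive only the inputs, without the "cat"/"dog" labels, and would have to discover the two groups on its own.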
Similarly, AI tools like Amper Music allow musicians to compose original pieces by generating music based on user inputs and preferences.

- AI-driven recommendation engines are crucial in delivering personalized content to users. These systems analyze user behavior, preferences, and historical data to suggest movies, music, articles, or games most likely to interest the individual, enhancing user engagement and satisfaction. Netflix uses AI algorithms to recommend movies and TV shows to its users. The recommendation system analyzes viewing history, ratings, and user interactions to suggest content tailored to each viewer's tastes, significantly contributing to the platform's popularity.

Health Care - AI algorithms, particularly those based on deep learning, are being used to analyze medical images such as X-rays, MRIs, and CT scans. These systems can detect abnormalities like tumors, fractures, and infections with high accuracy, often surpassing human experts in certain cases. For example, Google's DeepMind developed an AI system that can detect over 50 different eye diseases from retinal scans with an accuracy comparable to that of leading ophthalmologists. This technology can assist doctors in making quicker and more accurate diagnoses.

- AI is being used to accelerate the drug discovery process by analyzing large datasets to identify potential drug candidates. AI can predict how different compounds will interact with the human body, significantly reducing the time and cost associated with bringing new drugs to market. IBM Watson has been employed in drug discovery to analyze millions of data points, helping to identify new drug compounds and possible uses for existing drugs in treating diseases like cancer and Alzheimer's.

Finance - AI-powered trading algorithms analyze vast amounts of market data to execute trades at optimal times, often within milliseconds.
These systems can predict market trends, assess risks, and make split-second decisions that humans cannot. Quantitative hedge funds, like Renaissance Technologies, rely on AI-driven models to automate trading decisions, optimizing portfolio performance and managing risks more effectively.

- AI enables banks to offer personalized financial services by analyzing customer data such as spending habits, income, and savings goals. This allows for tailored product recommendations, customized financial advice, and more targeted marketing. Banks like Capital One use AI to offer personalized credit card recommendations, budgeting advice, and alerts for unusual spending, enhancing customer satisfaction and engagement.

Manufacturing and Industries - AI is used to predict when machinery and equipment are likely to fail, allowing maintenance to be performed before a breakdown occurs; that is, AI can be used for predictive maintenance. This minimizes downtime, extends the lifespan of machines, and reduces maintenance costs. General Electric (GE) uses AI-driven predictive maintenance in its industrial equipment, such as jet engines and gas turbines. By analyzing data from sensors, AI models can predict potential failures and suggest timely maintenance, preventing costly unplanned outages.

- AI systems are employed in manufacturing to monitor and control the quality of products. Machine learning algorithms can detect defects in products by analyzing images, sensor data, or other indicators, ensuring high-quality output. Companies like Siemens use AI-powered vision systems to inspect products on the assembly line. These systems can identify defects or deviations from quality standards with greater accuracy and speed than human inspectors, reducing the likelihood of defective products reaching customers.

Agriculture - AI-driven precision farming involves the use of sensors, drones, and data analytics to monitor and manage crops at a granular level.
AI can analyze data on soil conditions, weather patterns, and crop health to optimize planting, irrigation, and fertilization. John Deere's AI-powered tractors use computer vision and machine learning to monitor crop health and soil conditions in real time. Farmers can adjust their practices based on AI recommendations, leading to higher yields and reduced resource use.

- AI systems can detect and diagnose crop diseases early by analyzing images of plants. Machine learning models trained on large datasets of plant images can identify diseases with high accuracy, enabling timely intervention and preventing large-scale crop loss. PlantVillage, an AI-based app, helps farmers in developing countries by using machine learning to diagnose plant diseases from photos taken with smartphones. The app provides instant recommendations for treatment, helping to protect crops and livelihoods.

Safety and Security - AI-powered surveillance systems are used to monitor public spaces, detect threats, and enhance security. These systems can analyze video footage, recognize faces, detect unusual behavior, and alert authorities to potential security incidents. AI is used in smart-city surveillance systems to monitor public areas for suspicious activities. For instance, systems like BriefCam can analyze video footage to identify objects, track movements, and detect anomalies, helping law enforcement respond quickly to potential threats.

- AI is used to predict, monitor, and respond to natural disasters such as floods, earthquakes, and wildfires. AI models analyze data from sensors, satellites, and social media to provide early warnings, assess damage, and coordinate emergency response efforts. The AI system developed by Google for flood forecasting uses machine learning to predict the likelihood of floods in certain regions.
By analyzing weather patterns, river levels, and terrain data, the system can provide early warnings to affected communities, helping to reduce casualties and property damage.

Astronomy - Artificial intelligence can be very useful for solving complex problems about the universe. AI technology can help us understand the universe: how it works, its origin, and so on.

Gaming - AI can be used for gaming. AI machines can play strategic games such as chess, where the machine needs to consider a large number of possible positions.

Data Security - The security of data is crucial for every company, and cyber-attacks are growing rapidly in the digital world. AI can be used to make your data safer and more secure. Examples such as the AEG bot and the AI2 Platform are used to detect software bugs and cyber-attacks more effectively.

Social Media - Social media sites such as Facebook, Twitter, and Snapchat contain billions of user profiles, which need to be stored and managed very efficiently. AI can organize and manage massive amounts of data, and it can analyze that data to identify the latest trends, hashtags, and the requirements of different users.

Travel & Transport - AI is in high demand in the travel industry. AI is capable of various travel-related tasks, from making travel arrangements to suggesting hotels, flights, and the best routes to customers. Travel companies are using AI-powered chatbots that can interact with customers in a human-like way for better and faster responses.

Automotive Industry - Some automotive companies are using AI to provide virtual assistants to their users for better performance; for example, Tesla has introduced TeslaBot, an intelligent virtual assistant. Various companies are currently working on self-driving cars, which could make your journey safer and more secure.
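The Gaming paragraph above notes that a chess-playing machine must consider a large number of possible positions. A toy game-tree (minimax) search sketches that idea; the two-level tree and its leaf values below are invented purely for illustration.

```python
# Toy game-tree search, of the kind chess programs use to look ahead
# through many possible positions (invented leaf values for illustration).
def minimax(node, maximizing):
    if isinstance(node, int):   # a leaf: the value of that final position
        return node
    # Recurse: the other player moves next, so the roles alternate.
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# A tiny tree: each of our two moves allows one of two opponent replies.
tree = [[3, 5], [2, 9]]
print(minimax(tree, maximizing=True))  # the best guaranteed outcome is 3
```

Real chess engines apply the same principle to trees that are many levels deep, which is why the number of positions to examine grows so quickly.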
Robotics - Usually, general robots are programmed to perform repetitive tasks, but with the help of AI we can create intelligent robots that perform tasks using their own experience, without being pre-programmed. Humanoid robots are the best examples of AI in robotics; recently, the intelligent humanoid robots Erica and Sophia were developed, and they can talk and behave like humans.

E-commerce - AI is providing a competitive edge to the e-commerce industry and is increasingly in demand in e-commerce business. AI helps shoppers discover associated products in their recommended size, color, or even brand.

Education - AI can automate grading so that tutors have more time to teach. An AI chatbot can communicate with students as a teaching assistant. In the future, AI could act as a personal virtual tutor for students, easily accessible at any time and any place.

Advantages of AI

Reduction in Human Error - The reduction in human error stands out as a significant benefit of artificial intelligence, marking a pivotal shift in how tasks are executed across various fields. By leveraging AI, tasks that were traditionally prone to inaccuracies due to human oversight can now be performed with a higher degree of precision. AI systems are programmed to follow specific instructions and algorithms, which allows them to process information and make decisions based on factual data and predefined rules. This minimizes the chance of errors that can occur due to subjective judgment, fatigue, or other human limitations.

Zero Risks - The concept of zero risks is another compelling advantage of artificial intelligence, particularly in contexts where human safety and well-being are paramount. AI systems can be deployed in hazardous environments or situations where human involvement would pose significant risks, such as space exploration, deep-sea exploration, and the handling of toxic substances.
By utilizing robots or AI-driven machinery, tasks can be performed without jeopardizing human lives, ensuring that operations are conducted safely and effectively.

24x7 Availability - One of the standout advantages of artificial intelligence is its ability to remain operational 24x7, without the limitations faced by human workers, such as the need for rest, breaks, or shifts. AI systems, unlike humans, do not suffer from fatigue, which allows them to provide continuous and consistent service around the clock. This aspect of AI is particularly beneficial in industries that require constant monitoring and operations, such as customer service, cybersecurity, and manufacturing.

Digital Assistance/Daily Applications - Digital assistance, facilitated by artificial intelligence, represents a significant leap in how individuals and businesses interact with technology, offering personalized and efficient support. AI-powered digital assistants, such as virtual assistants on smartphones or voice-activated devices in homes, have become integral in simplifying user interactions with technology. They process natural language, understand user requests, and provide relevant information or perform tasks ranging from setting reminders to controlling smart home devices, all through simple voice commands or text inputs.

Perform Repetitive Jobs - The ability of artificial intelligence to efficiently perform repetitive jobs is a substantial advantage, particularly in sectors where consistency and precision are crucial. AI excels in automating tasks that are monotonous for humans, such as data entry, transaction processing, and even intricate manufacturing operations. This automation not only accelerates processes but also allows human employees to engage in more complex and creative tasks, thereby boosting overall productivity and innovation.
Apart from those listed above, AI has further advantages:

Useful as a public utility: AI can be very useful for public utilities, such as self-driving cars that make our journeys safer and hassle-free, facial recognition for security purposes, and natural language processing for communicating with humans in human language.

New inventions: AI is powering inventions in almost every domain, helping humans solve the majority of complex problems.

High accuracy with fewer errors: AI machines and systems make fewer errors and achieve high accuracy, because they take decisions based on prior experience and information.

High speed: AI systems can be very fast at decision-making; because of this, an AI system can beat a chess champion at chess.

High reliability: AI machines are highly reliable and can perform the same action many times with high accuracy.

Useful in risky areas: AI machines can be helpful in situations such as defusing a bomb or exploring the ocean floor, where employing a human would be risky.

Disadvantages of AI

High Costs - The high costs associated with artificial intelligence systems represent a significant disadvantage, impacting their accessibility and adoption across various sectors. Developing, deploying, and maintaining AI technologies require substantial investments in specialized hardware, sophisticated algorithms, and vast datasets for training. Additionally, the expertise needed to create and manage AI systems is highly specialized, contributing to the overall expense due to the high demand for skilled professionals in AI and data science.

Makes Humans Lazy - The convenience and efficiency brought by artificial intelligence can inadvertently lead to a dependency that may diminish human initiative and problem-solving skills.
As AI systems take over more tasks, from personal assistants managing our schedules to smart home devices controlling our living environments, there is a growing concern that this reliance could make individuals less proactive or less inclined to engage in critical thinking. The ease with which information and services are delivered can reduce the need for individuals to develop or maintain certain skills, such as navigation skills in the age of GPS apps, or basic research skills when search engines provide instant answers.

Job Displacement & the Next Digital Divide - One of the most significant concerns associated with the rise of artificial intelligence (AI) is job displacement. As AI systems become increasingly capable of performing tasks that were once the exclusive domain of human workers, there is growing anxiety about the potential for widespread unemployment. Automation, powered by AI, is already transforming industries such as manufacturing, logistics, and customer service, where repetitive tasks can be efficiently handled by machines. While AI can lead to increased productivity and cost savings for businesses, it also raises the possibility that many jobs may become obsolete. This displacement disproportionately affects low-skill and routine jobs, potentially leading to economic inequality and social unrest as workers struggle to adapt to new roles or retrain for positions that require more advanced skills.

- As AI becomes more integrated into various sectors, the gap between those who have access to AI technology and those who do not could widen. Developed countries, large corporations, and affluent individuals may have the resources to invest in AI-driven tools, leading to significant advancements in productivity, education, healthcare, and more. In contrast, developing countries, small businesses, and economically disadvantaged populations might struggle to access or afford these technologies.
This disparity could result in a new form of digital divide, where the benefits of AI are unevenly distributed, leaving some regions and communities at a significant disadvantage in terms of economic growth, opportunities, and quality of life.

Bias and Fairness - AI systems, particularly those that rely on machine learning, are prone to issues of bias and fairness. These systems learn from historical data, which may contain biases related to race, gender, age, or other characteristics. If the data used to train AI models reflects societal prejudices, the AI may inadvertently perpetuate or even exacerbate these biases. For instance, AI-driven hiring algorithms may favor certain demographics over others, or facial recognition technology may have higher error rates for people of color. The lack of transparency in AI decision-making processes, often referred to as the "black box" problem, makes it difficult to identify and correct these biases. Ensuring fairness in AI requires careful consideration of the data used to train models, as well as ongoing monitoring and adjustment of AI systems to prevent discriminatory outcomes. Without these measures, the widespread deployment of AI could reinforce existing inequalities rather than mitigate them.

Privacy Issues - The proliferation of AI technologies raises significant concerns about privacy. AI systems often rely on vast amounts of data to function effectively, including personal data such as location information, browsing history, and even biometric details. As AI becomes more integrated into everyday life, the collection and analysis of personal data can lead to intrusive surveillance practices and the erosion of individual privacy rights. For example, AI-driven advertising platforms track user behavior to deliver personalized ads, while smart home devices monitor activities within the home.
In more extreme cases, AI-powered surveillance systems used by governments and corporations can infringe on civil liberties by monitoring public spaces or tracking individuals without their consent.

Safety and Security - While AI has the potential to enhance safety and security in many areas, it also introduces new risks that need to be carefully managed. One of the primary concerns is the safety of AI systems themselves. As AI systems become more autonomous and are deployed in critical areas such as transportation, healthcare, and national defense, ensuring their reliability and robustness becomes paramount. Failures or malfunctions in AI systems can have serious, even catastrophic, consequences. For instance, autonomous vehicles that rely on AI for navigation and decision-making could cause accidents if their systems fail. Additionally, AI technologies can be exploited for malicious purposes, such as in the development of autonomous weapons or sophisticated cyber-attacks. The dual-use nature of AI, where the same technology can serve both beneficial and harmful purposes, poses a significant challenge for regulators and policymakers. Ensuring the safe and secure deployment of AI requires stringent testing, oversight, and the establishment of ethical guidelines to prevent misuse and mitigate risks.

Apart from these, here are a few more disadvantages:

Can't think outside the box: Even though we are making machines smarter with AI, they still cannot think outside the box; a robot will only do the work for which it is trained or programmed.

No feelings and emotions: An AI machine can be an outstanding performer, but it has no feelings, so it cannot form any kind of emotional attachment with humans and may sometimes be harmful to users if proper care is not taken.

No original creativity: Humans are creative and can imagine new ideas, but AI machines cannot match this power of human intelligence; they cannot be truly creative and imaginative.
Lecture-03 History

Maturation of Artificial Intelligence (1943-1952)

Year 1943: The first work now recognized as AI was done by Warren McCulloch and Walter Pitts in 1943. They proposed a model of artificial neurons.

Year 1949: Donald Hebb demonstrated an updating rule for modifying the connection strength between neurons. His rule is now called Hebbian learning. Hebbian learning is a theory in neuroscience that describes how neurons in the brain adapt during learning. It is often summarized by the phrase "cells that fire together, wire together": if a neuron repeatedly helps activate another neuron, the connection between them (called a synapse) is strengthened. Over time, this strengthens the association between the neurons, making it easier for them to fire together in the future. Hebbian learning is fundamental to understanding how learning and memory formation occur in the brain.

Year 1950: Alan Turing, an English mathematician, pioneered machine learning in 1950. Turing published "Computing Machinery and Intelligence", in which he proposed a test that checks a machine's ability to exhibit intelligent behavior equivalent to human intelligence, now called the Turing test. It is a game with three players:

1. Player A: A human.
2. Player B: Another human.
3. Player C: A computer (the AI).

All three players communicate with each other through text, so they cannot see or hear each other. Player A asks questions of Player B and Player C, trying to figure out which one is the human and which one is the computer. The goal of the computer (Player C) is to convince Player A that it is a human. If Player A cannot reliably tell which one is the computer, the computer is said to have passed the Turing test, meaning it has shown human-like intelligence.

Example of a Turing Test

Let's say Player A types a question: "What's your favorite movie and why?"
Player B (the human) might respond: "I love 'Inception' because it's a mind-bending movie with amazing visuals and a complex plot that keeps you thinking."

Player C (the AI) might respond: "My favorite movie is 'Inception' because it's very interesting and has great action scenes."

Player A then asks more questions, like "What's your favorite food?" or "How do you feel about rainy days?" The goal is for Player A to decide which responses sound more human. If the computer (Player C) can make Player A unsure, or make Player A believe that it is the human, then the computer has passed the Turing test. For more reliable results, a number of different people play the roles of Player A and Player B, and if a sufficient proportion of the Player-A participants are unable to distinguish the computer from the human being, then (according to proponents of Turing's test) the computer is considered an intelligent, thinking entity.

The Birth of Artificial Intelligence (1952-1956)

Year 1955: Allen Newell and Herbert A. Simon created the first artificial intelligence program, named "Logic Theorist". This program proved 38 of 52 mathematical theorems and found new, more elegant proofs for some of them.

Year 1956: The term "Artificial Intelligence" was first adopted by the American computer scientist John McCarthy at the Dartmouth Conference. For the first time, AI was coined as an academic field. Around that time, high-level computer languages such as FORTRAN, LISP, and COBOL were invented, and enthusiasm for AI was very high.

The Golden Years - Early Enthusiasm (1956-1974)

Year 1966: Researchers emphasized developing algorithms that could solve mathematical problems. Joseph Weizenbaum created the first chatbot in 1966, named ELIZA.

Year 1972: The first intelligent humanoid robot, named WABOT-1, was built in Japan. It was estimated that WABOT-1 had the mental faculty of a one-and-a-half-year-old child.
It consisted of a limb-control system, a vision system, and a conversation system. WABOT-1 was able to communicate with a person in Japanese and to measure distances and directions to objects using external receptors: artificial ears and eyes and an artificial mouth. It walked on its lower limbs and was able to grip and transport objects with hands that used tactile sensors.

The First AI Winter (1974-1980)

The period between 1974 and 1980 was the first AI winter. "AI winter" refers to a period in which computer scientists faced a severe shortage of government funding for AI research. During AI winters, public interest in artificial intelligence decreased.

A Boom of AI (1980-1987)

Year 1980: After the AI winter, AI came back with "expert systems". Expert systems were programs that emulate the decision-making ability of a human expert. In 1980, the first national conference of the American Association of Artificial Intelligence was held at Stanford University.

The Second AI Winter (1987-1993)

The period between 1987 and 1993 was the second AI winter. Investors and governments again stopped funding AI research because of high costs without efficient results, even though expert systems such as XCON had initially been very cost-effective.

The Emergence of Intelligent Agents (1993-2011)

Year 1997: In 1997, IBM's Deep Blue beat the world chess champion, Garry Kasparov, becoming the first computer to beat a world chess champion.

Year 2002: For the first time, AI entered the home, in the form of Roomba, a vacuum cleaner. The Roomba, developed by iRobot, is a robotic vacuum that uses AI to autonomously clean floors. It is equipped with sensors to navigate around furniture, avoid obstacles, and detect dirtier areas that need more attention.
Roomba's AI allows it to learn the layout of a room over time, optimize its cleaning patterns, and return to its charging station when the battery is low. Some models even use machine learning to improve their cleaning efficiency based on previous runs and user preferences.
Year 2006: By 2006, AI had entered the business world. Companies like Facebook, Twitter, and Netflix started using AI.

Deep Learning, Big Data, and Artificial Intelligence (2011-present)
Year 2011: In 2011, IBM's Watson won Jeopardy!, a quiz show in which it had to answer complex questions as well as riddles. Watson proved that it could understand natural language and solve tricky questions quickly.
Year 2012: Google launched an Android app feature, "Google Now", which was able to provide information to the user as a prediction.
Year 2014: In 2014, the chatbot "Eugene Goostman" won a competition based on the famous "Turing test."
Year 2018: IBM's "Project Debater" debated complex topics with two master debaters and performed extremely well. Google demonstrated an AI program, "Duplex", a virtual assistant that booked a hairdresser appointment over the phone, and the person on the other end didn't notice that she was talking with a machine.
Now AI has developed to a remarkable level. The concepts of deep learning, big data, and data science are now booming. Today, companies like Google, Facebook, IBM, and Amazon are working with AI and creating amazing devices. The future of Artificial Intelligence is inspiring and will bring even higher intelligence.

Lecture-04
First Criteria: Based on Scope of Capabilities and Cognitive Abilities
1. Artificial Narrow Intelligence (ANI)/Narrow AI
It doesn't possess understanding or consciousness; rather, it follows pre-programmed rules or learns patterns from data.
Narrow AI is designed to perform tasks that normally require human intelligence, but it operates under a limited set of constraints and is task-specific. It's the most common type of AI that we encounter in our daily lives. For example: image recognition, voice assistants on our phones like Siri and Google Assistant, recommendation algorithms used by Netflix and Amazon, and spam filters used by Gmail.

2. Artificial General Intelligence (AGI)/Strong AI
Artificial general intelligence is AI that can learn, think, and act the way humans do. AGI has yet to be created; in theory, it could complete new tasks it never received training for and perform creative actions that previously only humans could. It is a theoretical concept, so no true example exists yet, but systems such as ChatGPT-4o, IBM Watson, and self-driving cars suggest that AGI may not be that far off.
Key characteristics of AGI include: 1. Generalization 2. Understanding and Reasoning 3. Learning 4. Adaptability 5. Consciousness and Self-Awareness

3. Artificial Superintelligence (ASI)/Superintelligent AI
Represents an AI that not only mimics but significantly surpasses human intelligence across all fields, such as science, general wisdom, social skills, and more. ASI would be capable of extraordinary problem-solving and creative abilities, far beyond what current human minds can achieve.

Second Criteria: Based on Functionality
1. Reactive Machines
Criteria: This classification is based on the AI's lack of memory and its ability to only react to immediate situations without considering past experiences. This level of A.I. is the simplest, i.e. the first stage, and performs basic operations. The model stores no inputs and performs no learning, but it can still perform tasks that require some degree of problem-solving, albeit in a very constrained and limited way. These machines can execute complex algorithms that solve specific problems effectively.
This involves some form of "intelligence," as these machines can outperform humans in narrow tasks due to their speed and accuracy. Reactive machines can make decisions in real-time based on current inputs. This ability to react to the present environment, even without learning or memory, can be very powerful in certain applications like video games, autonomous robots for specific tasks, or simple automated customer service systems. Static machine learning models are reactive machines. Their architecture is the simplest and they can be found on GitHub repos across the web. These models can be downloaded, traded, passed around and loaded into a developer’s toolkit with ease. An Example: IBM's Deep Blue, which defeated world chess champion Garry Kasparov (Chess GrandMaster) in 1997, is a reactive machine. It didn't learn from past games but was programmed to evaluate a vast number of possible chess moves quickly and choose the best one according to predefined algorithms. This ability to "think" many moves ahead in chess is a form of intelligence, albeit very narrow and specialized. 2. Limited Memory Criteria: This classification is based on the AI's ability to learn and make decisions based on historical data or past experiences. Limited memory types refer to an A.I.’s ability to store previous data and/or predictions, using that data to make better predictions. With Limited Memory, machine learning architecture becomes a little more complex. There are three major kinds of machine learning models that achieve this Limited Memory type: 2.a Reinforcement learning: These models learn to make better predictions through many cycles of trial and error. This kind of model is used to teach computers how to play games like Chess, Go, and DOTA2. In Reinforcement learning, "prediction" is more about estimating the potential future rewards of actions and choosing actions to maximize those rewards. 
It doesn't predict the next data point in a sequence; instead, it learns to make decisions that lead to the best outcomes. e.g. AlphaGo, developed by DeepMind, is a famous example of reinforcement learning. It learned to play the complex board game Go by playing millions of games against itself, gradually improving its strategy through the rewards of winning or losing. Eventually, AlphaGo defeated the world's top human Go players.

2.b Long Short Term Memory (LSTMs)
Researchers intuited that past data would help predict the next items in sequences, particularly in language, so they developed a model that used what was called Long Short Term Memory. For predicting the next elements in a sequence, the LSTM tags more recent information as more important and items further in the past as less important.
e.g. Speech Recognition: LSTMs are widely used in speech recognition systems. For instance, in Google's voice search, LSTMs help model the sequential nature of speech, remembering the context of earlier words in a sentence to predict and recognize the subsequent words accurately.

2.c Evolutionary Generative Adversarial Networks (E-GAN)
Evolutionary Generative Adversarial Networks (E-GANs) combine the concepts of evolutionary algorithms and generative adversarial networks (GANs). E-GANs use a population-based approach to evolve a set of generators over time, where the generators compete and evolve (much as organisms do in natural evolution) to produce increasingly realistic data. E-GANs are still a relatively new and experimental area, so there is no well-known real-world example yet.

3. Theory of Mind
In this type of A.I., the A.I. begins to interact with the thoughts and emotions of humans. We have yet to reach Theory of Mind artificial intelligence types. These are only in their beginning phases, and early hints can be seen in things like self-driving cars.
e.g. If you angrily yell at Google Maps to take you another direction, it does not offer emotional support and say, "This is the fastest direction.
Who may I call and inform you will be late?" Google Maps, instead, continues to return the same traffic reports and ETAs that it had already shown, with no concern for your distress.

4. Self-Aware
Finally, in some distant future, perhaps A.I. achieves nirvana: it becomes self-aware. This kind of A.I. exists only in stories and, as stories often do, instills both immense hope and fear in audiences. A self-aware intelligence beyond the human would have an independent intelligence, and likely, people would have to negotiate terms with the entity they created. What happens, good or bad, is anyone's guess.

Lecture-05
Terminology
Sensor
❖ A sensor is a device used to measure a property such as force, pressure, position, temperature, or acceleration and respond with feedback.
❖ A sensor is a device that detects changes in the environment and sends the information to other electronic devices, i.e. it is used to take input.
Actuators
❖ Actuators are the components of machines that convert energy into motion. The actuators are responsible for moving and controlling a system. Actuators typically include electric motors, hydraulic systems, etc.
❖ Actuators are the internal components responsible for converting a decision into action.
Effectors
❖ Effectors are the external parts that directly interact with the environment. They are the physical components that carry out the movement generated by the actuators, i.e. an actuator is the actual mechanism that enables the effectors to execute an action.
❖ Effectors can be legs, wheels, arms, fingers, wings, fins, and display screens.
Let's compare sensors, actuators, and effectors in a human body:
▪ Sensors: These are like the eyes, ears, and skin. They gather information from the environment (e.g., light, sound, touch) and send it to the brain.
▪ Actuators: These are like the nerves that carry signals from the brain to the muscles, telling them to contract or relax.
▪ Effectors: These are the muscles themselves.
They carry out the action, like moving a hand to pick up an object or walking.

Agent
An agent can be anything that perceives its environment through sensors and acts upon that environment through actuators. An agent runs in a cycle of perceiving, thinking, and acting. An agent can be:
1. Human-Agent: A human agent has eyes, ears, and other organs which work as sensors, and hands, legs, and the vocal tract which work as actuators.
2. Robotic Agent: A robotic agent can have cameras, an infrared range finder, and NLP for sensors, and various motors for actuators.
3. Software Agent: Software agents can have keystrokes and file contents as sensory input, act on those inputs, and display output on the screen.
Hence the world around us is full of agents such as thermostats, cellphones, and cameras, and even we ourselves are agents.

Specific types of agents are-
1. Intelligent Agent
An intelligent agent (IA) in the context of artificial intelligence is an autonomous entity that perceives its environment through sensors and interacts with that environment using actuators to achieve specific goals. These agents can range from simple systems like thermostats to complex ones such as autonomous vehicles. Intelligent agents are designed to operate without human intervention and are capable of learning or acquiring knowledge to improve their performance over time. This learning aspect is crucial, as it allows the agent to adapt to new situations and to incrementally accommodate new problem-solving rules.
Following are the main four rules for an AI agent:
Rule 1: An AI agent must have the ability to perceive the environment.
Rule 2: The observations must be used to make decisions.
Rule 3: Decisions should result in an action.
Rule 4: The action taken by an AI agent must be a rational action.
An Example: A self-driving car exhibits the four essential properties of an intelligent agent: 1.
The car perceives its environment using cameras, LIDAR (Light Detection and Ranging), GPS, and sensors to detect objects like pedestrians, other vehicles, and road signs. 2. It processes this sensory data to make decisions about speed, lane changes, and traffic signals. 3. Based on its decisions, it takes actions like accelerating, steering, or braking. 4. These actions are rational as they aim to avoid collisions, follow traffic rules, and reach the destination efficiently. This agent learns over time through experience, improving its ability to handle complex driving environments.

2. Rational Agent/Problem-Solving Agent
In the context of artificial intelligence (AI), rationality refers to the ability of an agent to make decisions and take actions that are in its best interest or that maximize its expected performance in a given environment.
A rational agent is expected to act in a manner that maximizes its expected performance, which can be expressed as follows:
a) Performance Measure: A rational agent is designed with a performance measure or utility function that quantifies how well it is doing in the given task or environment. This measure represents the agent's objectives, goals, or preferences.
b) Information and Perception: The agent perceives its environment through sensors, collecting data and information relevant to its task.
c) Decision-Making: The agent uses its internal function, typically based on decision theory, to analyze the available information and select actions that are expected to lead to optimal or near-optimal outcomes.
d) Learning and Adaptation: Rational agents may also incorporate learning mechanisms to improve their performance over time by adjusting their actions based on feedback and experience.
For an AI agent, rational action is most important because in AI reinforcement learning algorithms, for each best possible action the agent gets a positive reward, and for each wrong action the agent gets a negative reward.
Example: Recommendation systems employed by streaming services, e-commerce platforms, and social media networks are rational agents. They gather user data and preferences, apply decision theory to assess the expected utility of various content or product recommendations, and choose the most relevant items to present to users. These systems aim to maximize user satisfaction and engagement.

Structure of an AI Agent
The task of AI is to design an agent program that implements the agent function. The structure of an intelligent agent is a combination of architecture and agent program.
Agent = Architecture + Agent Program
1. Architecture: Architecture is the machinery that an AI agent executes on.
2. Agent Function: Here is the expression for the agent function f: P → A. It is a mapping between the percept sequence (P) and the actions (A) that the agent should take.
P (Percept): This is the input, representing the observations or data the agent receives from the environment via its sensors.
A (Action): This is the output, or the action the agent takes in response to the input.
3. Agent Program: The agent program is an implementation of the agent function. The agent program executes on the physical architecture to produce the function f.
An Example: Here's how a vacuum cleaner agent program works in terms of f: P → A:
P (Percept): The vacuum perceives dirt through dirt sensors, obstacles through bump sensors, and battery status through power sensors.
f (Agent Function): Based on the percept, the agent decides on the appropriate action using predefined rules (e.g., if dirt is detected, clean; if an obstacle is detected, turn).
A (Action): The vacuum moves forward, turns, vacuums, or returns to the charging dock. This maps percepts to actions through the agent function.

Roomba Vacuum cleaner
Many AI agents use the PEAS model in their structure.
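The vacuum cleaner's agent function f: P → A described above can be sketched as a small condition-action program. This is a minimal illustrative sketch, not a real Roomba program: the percept fields (location, is_dirty) and action names (Suck, Left, Right) are assumptions chosen for a two-room world.

```python
from collections import namedtuple

# Hypothetical percept: which room the vacuum is in, and whether it senses dirt.
Percept = namedtuple("Percept", ["location", "is_dirty"])

def vacuum_agent(percept):
    """Agent function f: P -> A implemented as condition-action rules."""
    if percept.is_dirty:
        return "Suck"        # rule: dirt detected -> clean
    if percept.location == "A":
        return "Right"       # rule: room A is clean -> move to room B
    return "Left"            # rule: room B is clean -> move back to room A

print(vacuum_agent(Percept("A", True)))   # Suck
print(vacuum_agent(Percept("B", False)))  # Left
```

Note that this agent maps each percept directly to an action with no memory of the percept history, which also makes it an example of the simple reflex agents discussed later.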
PEAS is an acronym for Performance Measure, Environment, Actuators, and Sensors Performance Measure: Performance measure is a quantitative measure that evaluates the outcomes of an agent’s actions against a predefined goal. The performance measure is crucial because it guides the agent’s decision-making process, ensuring that it acts in a way that maximizes its success. Environment: The environment includes all external factors and conditions that the agent must consider when making decisions. The environment can vary significantly depending on the type of agent and its task. For instance, in the case of a robotic vacuum cleaner, the environment includes the layout of the room, the presence of obstacles, and the type of floor surface. The robot must adapt to these conditions to clean effectively. Actuators: They are responsible for executing the actions decided by the agent based on its perceptions and decisions. In essence, actuators are the "hands and feet" of the agent, enabling it to carry out tasks. In a robotic arm, actuators would be the motors and joints that move the arm to pick up objects. In a software-based AI, actuators might be commands sent to other software components or systems to perform specific tasks. Sensors: Sensors collect data from the environment, which is then processed by the agent to make informed decisions. Sensors are the "eyes and ears" of the agent, providing it with the necessary information to act intelligently. In the context of an autonomous drone, sensors might include cameras, GPS, and altimeters (used to measure altitude), which provide the drone with information about its location, altitude, and surroundings. This data is essential for the drone to navigate and perform tasks like aerial photography or package delivery. 
Here are a few examples:

Agent: Medical Diagnosis
- Performance measure: Healthy patient; minimized cost
- Environment: Patient, hospital, staff
- Actuators: Tests, treatments
- Sensors: Keyboard (entry of symptoms)

Agent: Vacuum Cleaner
- Performance measure: Cleanness, efficiency, battery life, security
- Environment: Room, table, wood floor, carpet, various obstacles
- Actuators: Wheels, brushes, vacuum extractor
- Sensors: Camera, dirt detection sensor, cliff sensor, bump sensor, infrared wall sensor

Agent: Part-picking Robot
- Performance measure: Percentage of parts in correct bins
- Environment: Conveyor belt with parts, bins
- Actuators: Jointed arms, hand
- Sensors: Camera, joint angle sensors

Types of agent
1. Simple Reflex Agent
These agents function on a set of so-called reflexes or rules. This means that the agent is preprogrammed to perform actions that correspond to certain conditions being met. These agents take decisions on the basis of the current percepts and ignore the rest of the percept history. This agent does not hold any memory, nor does it interact with other agents if it is missing information. These agents are only effective in environments that are fully observable, granting access to all necessary information. If the agent encounters a situation that it is not prepared for, it cannot respond appropriately. These agents are easy to implement, fast, and efficient, yet they have very limited capabilities.
Example: A thermostat that turns on the heating system at a set time every night. The condition-action rule here is, for instance: if it is 8 PM, then the heating is activated.

2. Model-based reflex agents
The agent's actions depend on its model, reflexes, previous percepts, and current state. Model-based reflex agents use both their current perception and memory to maintain an internal model of the world. As the agent continues to receive new information, the model is updated. These agents can store information in memory and can operate in environments that are partially observable and changing.
However, they are still limited by their set of rules. When the agent encounters a new situation, it can consult its internal model to understand the context. The agent can then use its pre-programmed rules along with the model to decide on the best course of action.
Example: A robot vacuum cleaner. As it cleans a dirty room, it senses obstacles such as furniture and adjusts around them. The robot also stores a model of the areas it has already cleaned so as not to get stuck in a loop of repeated cleaning.

3. Goal-based Agents
To reach their goals, these AI agents employ planning algorithms. The planning process often involves examining a tree of possibilities, with each branch representing a different action the agent can take. Goal-based AI agents consider the potential consequences of each action and choose the one that leads closer to their goal. They rely on knowledge representation to perform adequate planning. This knowledge base stores information about the environment, the AI agent's capabilities, and the relationships between actions and outcomes. They usually require search and planning. The goal-based agent's behavior can easily be changed.
Example: A GPS navigation system is a classic example of a goal-based agent. Its primary goal is to take you from point A to point B. It considers different routes, evaluates which route will help achieve the goal (reaching the destination), and acts accordingly. It doesn't account for other factors like fuel efficiency or scenic routes unless those impact reaching the goal.

4. Utility-based agents
Utility-based agents evaluate different courses of action based on a utility function. This function assigns a specific numerical value to each possible outcome, representing how desirable that outcome is for the agent. The agent strives to maximize its overall score by choosing actions that lead to outcomes with higher utility values. Next, they consider the different possible actions to take.
For each action, the agent predicts the potential outcomes that can occur. The utility function assigns a score to each predicted outcome based on how desirable it is for the agent. Then, the agent selects the action predicted to lead to the outcome with the highest utility value. Utility-based agents can consider factors like risk, time, and effort when evaluating different options, yet designing the utility function is complex and there is a degree of uncertainty about the outcomes.
Example: A self-driving car can be a utility-based agent. It not only aims to reach the destination (goal-based) but also tries to optimize fuel efficiency, minimize travel time, and ensure passenger comfort by calculating a utility value for different actions. It balances multiple objectives for an optimal outcome.

5. Learning Agents
Learning agents hold the same capabilities as the other agent types but are unique in their ability to learn. New experiences are added to their initial knowledge base autonomously. This learning enhances the agent's ability to operate in unfamiliar environments. Learning agents may be utility- or goal-based in their reasoning and are comprised of four main elements:
a) Learning: This improves the agent's knowledge by learning from the environment through its percepts and sensors.
b) Critic: This provides feedback to the agent on whether the quality of its responses meets the performance standard.
c) Performance: This element is responsible for selecting actions upon learning.
d) Problem generator: This creates various proposals for actions to be taken.
Learning agents require a significant amount of data and time. The agent needs to balance exploring new options for learning with exploiting its current knowledge for good performance. Understanding how a learning agent arrived at a particular decision can be challenging.
Example: Personalized recommendations on e-commerce sites.
These agents track user activity and preferences in their memory. This information is used to recommend certain products and services to the user. The cycle repeats each time new recommendations are made. The user's activity is continuously stored for learning purposes. In doing so, the agent improves its accuracy over time.

Reference:
https://eluminoustechnologies.com/blog/ai-agents/
https://www.ibm.com/think/topics/ai-agents

Unit-02: Searching (Lecture 06 to 13)
Lecture-06-07
Consider the image of the following maze-

Search Algorithms
Search algorithms in AI are algorithms that help in solving search problems. A search problem comprises the search space, a start state, and a goal test (all three terms are discussed below). By evaluating scenarios and alternatives, search algorithms in artificial intelligence assist AI agents in reaching the goal state. Search algorithms provide solutions by transforming the initial state into the desired state. Search algorithms in artificial intelligence are used to find the best possible solutions for AI agents.

Search Algorithm Terminologies
1. Search Space: A search space is the collection of potential solutions a system may have.
2. Start State: The state from which the agent begins the search.
3. Goal State: The desired resulting condition in a given problem, and what search algorithms are looking for. Some problems may have a unique goal state (arrive at a given location, for example) whereas others can have multiple goal states, all satisfying a set of conditions (checkmate in chess, for instance).
4. Goal Test: A function that examines the current state and returns whether or not the goal state has been attained.
5. Search Tree: A search tree is a tree representation of a search problem. The node at the root of the search tree corresponds to the initial state.
6. Actions: Describes all the steps, activities, or operations accessible to the agent.
7.
Transition Model: It conveys a description of what each action does. The transition model defines how actions change the current state, moving from one state to another.
8. Path Cost: A function that assigns a cost to each path. It is the number of steps or the effort needed to reach a state.
9. Solution: An action sequence that connects the start node to the goal node.
10. Optimal Solution: If a solution has the lowest cost among all solutions, it is said to be the optimal solution.

Lecture-08
Properties for Search Algorithms
Following are the four essential properties of search algorithms, used to compare their efficiency:
1. Completeness: A search algorithm is said to be complete if it guarantees to return a solution whenever at least one solution exists for any random input.
2. Optimality: If the solution found by an algorithm is guaranteed to be the best solution (lowest path cost) among all other solutions, then such a solution is said to be an optimal solution.
3. Time Complexity: Time complexity is a measure of the time an algorithm takes to complete its task.
4. Space Complexity: The maximum storage space required at any point during the search, expressed as a function of the complexity of the problem.

Types of Search Algorithms
An analogy to understand heuristic/informed and non-heuristic/uninformed search:
Heuristic Search (Informed Search): Imagine you're in a big library trying to find a specific book. You know the book is in the "History" section, so you go straight to that section. You don't check every aisle because your prior knowledge helps you search faster. Heuristic search works similarly by using shortcuts or educated guesses to reach the goal quickly.
Non-Heuristic Search (Uninformed Search): In contrast, if you had no idea where the book was, you'd have to check every aisle one by one until you found it. This is like non-heuristic search, where no prior knowledge is used, and all possible options are explored methodically.
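The search terminology above (start state, goal test, transition model, path cost) can be collected into one object. The following is a minimal sketch, not from the notes: the class name, method names, and the tiny state space are illustrative assumptions.

```python
class SearchProblem:
    """Bundles the search-problem terminology into one object (names illustrative)."""

    def __init__(self, start, goal, transitions, step_cost=1):
        self.start = start                  # start state
        self.goal = goal                    # goal state
        self.transitions = transitions      # transition model: state -> successor states
        self.step_cost = step_cost          # uniform cost of each action

    def goal_test(self, state):
        """Goal test: has the goal state been attained?"""
        return state == self.goal

    def actions(self, state):
        """Successor states reachable from `state` in one action."""
        return self.transitions.get(state, [])

    def path_cost(self, path):
        """Path cost: number of steps times the per-step cost."""
        return (len(path) - 1) * self.step_cost

# A tiny made-up state space: A is the start state, D is the goal state.
maze = SearchProblem("A", "D", {"A": ["B", "C"], "B": ["D"], "C": []})
print(maze.goal_test("D"))              # True
print(maze.path_cost(["A", "B", "D"]))  # 2
```

The search algorithms that follow (BFS, DFS, depth-limited search) all operate on exactly this kind of object: they repeatedly expand states via the transition model until the goal test succeeds.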
Comparison of uninformed & informed search algorithms

Knowledge Used:
- Uninformed Search (Blind Search): Operates solely on the problem structure without external knowledge.
- Informed Search: Utilizes additional knowledge about the problem domain (heuristics).

Search Efficiency:
- Uninformed Search: Less efficient, as it explores the search space systematically without guidance.
- Informed Search: Generally more efficient due to heuristic guidance.

Heuristic Function:
- Uninformed Search: No heuristic function; treats each node uniformly.
- Informed Search: Employs a heuristic function to estimate the cost to the goal.

Optimality:
- Uninformed Search: Typically optimal in finite, unweighted graphs (e.g., BFS), but not always (e.g., DFS).
- Informed Search: Can be optimal, depending on the heuristic used.

Completeness:
- Uninformed Search: Complete in finite search spaces.
- Informed Search: Complete in finite search spaces if the heuristic is admissible.

Computational requirements:
- Uninformed Search: Comparatively higher computational requirements.
- Informed Search: Computational requirements are lessened.

Efficiency:
- Uninformed Search: Comparatively less efficient, as the incurred cost is higher and the speed of finding the solution is slow.
- Informed Search: More efficient, as it takes both cost and performance into account; the incurred cost is less and solutions are found quickly.

Lecture-09-10
The queue data structure
A Queue Data Structure is a fundamental concept in computer science used for storing and managing data in a specific order. It follows the principle of "First In, First Out" (FIFO), where the first element added to the queue is the first one to be removed.
Basic Operations of Queue Data Structure
Enqueue (Insert): Adds an element to the rear of the queue.
Dequeue (Delete): Removes and returns the element from the front of the queue.
The stack data structure
A Stack is a linear data structure that follows a particular order in which the operations are performed. The order is LIFO (Last In, First Out). LIFO implies that the element that is inserted last comes out first.
Basic Operations on Stack Data Structures
Push: Adds an element to the top of the stack.
Pop: Removes the top element from the stack.

(Diagrams: a queue and a stack.)

Breadth-first search
BFS is a traversing algorithm where you start from a selected node (the source or starting node) and traverse the graph layerwise, first exploring the neighbour nodes (nodes directly connected to the source node) and then moving to the next-level neighbour nodes. Nodes represent states of the problem, and edges represent the possible actions or transitions between states. BFS uses a queue (i.e. First In, First Out). As the name BFS suggests, you are required to traverse the graph breadthwise as follows:
1. First move horizontally and visit all the nodes of the current layer.
2. Move to the next layer.
The algorithm works as follows:
1. Start by putting any one of the graph's vertices at the back of a queue.
2. Take the front item of the queue and add it to the visited list (if not already visited).
3. Create a list of that vertex's adjacent nodes. Add the ones which aren't in the visited list to the back of the queue.
4. Keep repeating steps 2 and 3 until the queue is empty.
5. Print the visited list for the BFS traversal.
In the original diagram, the queue contents at each step are shown, with visited elements highlighted in red (removed from the front of the queue) and newly discovered elements highlighted in green (added to the rear), growing level by level: A; then B, C; then D, E; then F, G, H; and so on until all nodes A through P have been visited.
Advantage: As breadth-first search exhaustively examines every node at a particular depth before progressing to the next level, it is guaranteed to find the solution, if one exists, with the shortest path from the initial state.
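The BFS steps above can be sketched in Python using `collections.deque` as the FIFO queue. This is a minimal illustrative sketch: the graph, its node names, and the choice to store whole paths in the queue (so the shortest path can be returned directly) are assumptions, not part of the original notes.

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search; returns the path with the fewest edges, or None."""
    queue = deque([[start]])       # FIFO queue of paths to extend
    visited = {start}
    while queue:
        path = queue.popleft()     # dequeue the front path (step 2)
        node = path[-1]
        if node == goal:
            return path
        for neighbour in graph.get(node, []):   # adjacent nodes (step 3)
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(path + [neighbour])  # enqueue at the rear
    return None                    # queue empty, goal unreachable

# A small made-up graph for illustration.
graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"], "E": ["G"]}
print(bfs(graph, "A", "G"))  # ['A', 'B', 'E', 'G']
```

Because nodes are dequeued level by level, the first time the goal is dequeued its path is guaranteed to use the fewest edges, which is exactly the optimality property discussed below for uniform edge costs.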
Disadvantage: A disadvantage of breadth-first search is that it can have a high memory requirement, as a record needs to be maintained of every expanded node.
Completeness: BFS is complete because it searches the tree level by level, so it will not go down an infinite branch.
Optimality: BFS is optimal only if all edges have the same cost, because it finds the path with the least number of edges; therefore, it will find the optimal path if all edges have the same cost. Also, the cost of edges should be non-negative.
Time complexity: If our search tree has a branching factor (b) and our solution is at depth (m), the worst-case scenario is that the goal state is the rightmost leaf node in the search tree, so we search the whole tree. The time complexity is then the number of nodes in our search tree.
Number of expanded nodes = 1 + b + b² + b³ + … + bᵐ = O(bᵐ)
Time complexity = Number of expanded nodes = O(bᵐ)
Space complexity: We store at least all the nodes in a level when exploring that level. The worst-case scenario is that our goal is a leaf node, so our queue will contain all the leaves. The number of leaves at level m is O(bᵐ).
Space complexity = O(bᵐ)
Real-World Problems Solved using BFS
Snake and Ladder Problem
Chess Knight Problem
Shortest path in a maze
Flood Fill Algorithm
Count number of islands
https://medium.com/techie-delight/top-20-breadth-first-search-bfs-practice-problems-ac2812283ab1

Depth-first search
The DFS algorithm is a recursive algorithm that uses the idea of backtracking. It involves an exhaustive search of all the nodes by going ahead, if possible, else by backtracking. Here, the word backtrack means that when you are moving forward and there are no more nodes along the current path, you move backwards on the same path to find nodes to traverse. All the nodes on the current path will be visited until all the unvisited nodes have been traversed, after which the next path will be selected.
This recursive nature of DFS can be implemented using a stack (Last In, First Out). As the name suggests, DFS traverses the graph depthwise. The DFS algorithm works as follows: 1. Start by putting any one of the graph's vertices on top of a stack. 2. Take the top item of the stack and add it to the visited list (if not already visited). 3. Create a list of that vertex's adjacent nodes. Add the ones which aren't in the visited list to the top of the stack. 4. Keep repeating steps 2 and 3 until the stack is empty. 5. Print the visited list for the DFS traversal. (For the example graph, the original figure shows the stack contents column by column as nodes are pushed and popped.) Advantage: DFS requires very little memory, as it only needs to store a stack of the nodes on the path from the root node to the current node. It can take less time to reach the goal node than BFS (if it happens to traverse the right path). Disadvantage: Unlike breadth-first search, depth-first search is not guaranteed to find the solution with the shortest path. Since depth-first search can proceed down an infinitely long branch without ever returning to explore other branches, there is no guarantee that it will ever find a solution, even when one exists. Completeness: Depth-first search (DFS) is not considered complete because it can get stuck in an infinite loop if the search space is infinite or contains cycles. Since DFS explores as deeply as possible along one path before backtracking, it may continue down an endless path without ever finding a solution, even if one exists at a shallower depth. As a result, DFS might miss the goal entirely, making it incomplete for certain problems, especially when there is no mechanism to limit depth or avoid revisiting nodes. 
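The stack-based steps above can be sketched as follows; a minimal iterative sketch, with a hypothetical example graph. Neighbours are pushed in reverse so that the first-listed neighbour is explored first.

```python
def dfs(graph, start):
    """Iterative DFS over `graph` (an adjacency dict) using an explicit
    LIFO stack, mirroring the five steps above.
    """
    visited = []
    stack = [start]
    while stack:
        node = stack.pop()            # take the top item
        if node in visited:
            continue
        visited.append(node)
        # reversed() so the first-listed neighbour ends up on top
        for neighbour in reversed(graph.get(node, [])):
            if neighbour not in visited:
                stack.append(neighbour)
    return visited

# Hypothetical example graph: DFS goes deep along A -> B -> D before
# backtracking to explore C.
demo_graph = {'A': ['B', 'C'], 'B': ['D'], 'C': [], 'D': []}
order = dfs(demo_graph, 'A')
```

Note the contrast with BFS: here D (depth 2) is visited before C (depth 1), because DFS follows one path to its end before backtracking.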
Optimality: DFS is not optimal because it returns the leftmost solution branch it finds; other branches may lead to better solutions. Time complexity: If the search tree has branching factor (b) and the solution is at depth (m), the worst case is that the goal state is the rightmost leaf node, so the whole tree is searched. The time complexity is then the number of nodes in the search tree. Number of expanded nodes = 1 + b + b² + b³ + … + bᵐ Time complexity = Number of expanded nodes = O(bᵐ) Space complexity: The frontier stores the successors along the branch from the initial state to the goal state: (b) nodes for each of (m) depths. Space complexity = O(b · m) Depth-Limited Search Depth-limited search is another uninformed search algorithm. The unbounded-tree problem that appears in the depth-first search algorithm can be fixed by imposing a boundary, or limit, on the depth of the search domain. This depth limit makes the DFS search strategy more refined and keeps it in a finite loop. We denote the limit by l, and it provides the solution to the infinite-path problem of plain DFS. Depth-limited search terminates with one of two outcomes: 1. Standard failure value: all branches were explored without finding the goal and without hitting the depth limit (no solution exists). 2. Cutoff failure value: all branches were explored without finding the goal, but the depth limit was hit, i.e. a solution may exist at a deeper level. An example: for the graph above, suppose the maximum level of searching (the limit l) is 2. (The original figure shows the stack contents column by column as the bounded search proceeds.) Advantages: Depth-limited search is memory efficient. 
Disadvantages: Depth-limited search is incomplete (a solution lying below the limit is missed). It may not be optimal if the problem has more than one solution. Time complexity: The time complexity of Depth-Limited Search (DLS) is O(bˡ), where b is the branching factor (the number of successors of each node) and l is the depth limit (the maximum depth the search will go). Space complexity: The space complexity of DLS is O(b · l), because the search only needs to store the nodes along the current branch: the recursive call stack holds at most l levels, with up to b siblings per level. Iterative deepening depth-first search The iterative deepening algorithm is a combination of the DFS and BFS algorithms. It finds the best depth limit by gradually increasing the limit until a goal is found: it performs a depth-first search up to a certain depth limit and increases that limit after each iteration until the goal node is found. This search algorithm combines the benefits of breadth-first search's shallow-first search and depth-first search's memory efficiency. The iterative deepening algorithm is useful for uninformed search when the search space is large and the depth of the goal node is unknown. An example: In the first iteration only level 0 is checked; the goal state is not reached and the order of traversal is [A]. In the second iteration levels 0 and 1 are checked; the goal state is not reached and the order of traversal is [A -> B -> C]. In the third iteration levels 0, 1 and 2 are checked; the goal state is reached and the order of traversal is [A -> B -> D -> E -> C -> F -> G]. Advantages: It combines the benefits of the BFS and DFS search algorithms in terms of fast search and memory efficiency. Disadvantages: The main drawback of IDDFS is that it repeats all the work of the previous phase. Completeness: IDDFS is complete; it effectively runs level by level until the goal state is found. 
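The two algorithms above fit together naturally in code: IDDFS simply calls DLS with limits 0, 1, 2, … until the goal is found. A minimal sketch, with a hypothetical example tree; this simplified version returns None for both standard failure and cutoff failure rather than distinguishing them.

```python
def dls(graph, node, goal, limit, path=None):
    """Depth-limited DFS: return a path from `node` to `goal`, or None.

    None covers both standard failure (subtree exhausted) and cutoff
    (limit hit, so a solution may still exist deeper).
    """
    if path is None:
        path = [node]
    if node == goal:
        return path
    if limit == 0:
        return None                   # cutoff: cannot go deeper
    for neighbour in graph.get(node, []):
        result = dls(graph, neighbour, goal, limit - 1, path + [neighbour])
        if result is not None:
            return result
    return None

def iddfs(graph, start, goal, max_depth=20):
    """Iterative deepening: run DLS with limits 0, 1, 2, ... until found."""
    for limit in range(max_depth + 1):
        result = dls(graph, start, goal, limit)
        if result is not None:
            return result
    return None

# Hypothetical tree: goal G sits at depth 2, found in the third iteration.
tree = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G']}
found = iddfs(tree, 'A', 'G')
```

Each DLS call uses only O(b · l) memory, while the level-by-level deepening gives BFS-like completeness, which is exactly the trade-off the section describes.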
In other words, it uses DFS and BFS together. Optimality: IDDFS is optimal in the sense that, by deepening level by level, it finds the path with the least number of edges. Like BFS, it is optimal only if all edges have the same cost, and the edge costs should be non-negative. Time complexity: If the goal node lies at depth (m), DLS is run (m) times, once per depth limit, so the time complexity is the sum of the DLS time complexities for each depth: O(bᵐ). Space complexity: The space complexity is that of DLS at the maximum depth. Space complexity = O(b · m) Bidirectional Search The bidirectional search algorithm runs two simultaneous searches: one from the initial state, called the forward search, and one from the goal node, called the backward search. It replaces a single search graph with two small subgraphs, one starting the search from the initial vertex and the other from the goal vertex. The search stops when these two graphs intersect. Bidirectional search can use underlying search techniques such as BFS, DFS, etc. Advantages: Bidirectional search is fast. Bidirectional search requires less memory. Disadvantages: Implementation of the bidirectional search tree is difficult. In bidirectional search, the goal state must be known in advance. Completeness: yes if BFS is used on both sides; may or may not be complete with DFS. Optimality: yes if BFS is used on both sides; may or may not be optimal with DFS. Time complexity: O(b^(m/2)) when BFS is used; O(bᵐ) when DFS is used. Space complexity: O(b^(m/2)) when BFS is used; O(m) when DFS is used. In both the time and space complexities, b is the branching factor and m is the depth of the tree. 
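A minimal sketch of bidirectional search with BFS on both sides, on a hypothetical undirected graph; this version only reports whether the two frontiers meet (reconstructing the full path would additionally require parent pointers on each side).

```python
from collections import deque

def bidirectional_bfs(graph, start, goal):
    """Run two BFS frontiers, forward from `start` and backward from
    `goal`, until they intersect. Returns True if a path exists.
    """
    if start == goal:
        return True
    seen_fwd, seen_bwd = {start}, {goal}
    fwd, bwd = deque([start]), deque([goal])
    while fwd and bwd:
        # Expand the smaller frontier to keep the two sides balanced.
        if len(fwd) > len(bwd):
            fwd, bwd, seen_fwd, seen_bwd = bwd, fwd, seen_bwd, seen_fwd
        for _ in range(len(fwd)):       # one full layer of the frontier
            node = fwd.popleft()
            for neighbour in graph.get(node, []):
                if neighbour in seen_bwd:   # the two searches intersect
                    return True
                if neighbour not in seen_fwd:
                    seen_fwd.add(neighbour)
                    fwd.append(neighbour)
    return False                            # one side exhausted: no path

# Hypothetical chain A - B - C - D: the frontiers meet in the middle.
chain = {'A': ['B'], 'B': ['A', 'C'], 'C': ['B', 'D'], 'D': ['C']}
reachable = bidirectional_bfs(chain, 'A', 'D')
```

Each side only explores to roughly half the solution depth, which is where the O(b^(m/2)) bound quoted above comes from.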
The Priority Queue A priority queue is an extended version of the normal queue, with the difference that each of its elements has a priority value associated with it. A priority queue has the following properties: Every element of the queue has a priority assigned to it. The element with the highest priority is served first, i.e. dequeued before any element with lower priority. If two elements with the same priority appear, they are served in order of occurrence: the one that occurred first is dequeued first. In an ascending-order priority queue, the element with the least value has the highest priority. Uniform cost search Uniform cost search is used to find the minimum-cost path from the source node to the destination node in a directed weighted graph. This searching algorithm uses a brute-force approach: it visits the nodes in order of their current path weight and finds the minimum-cost path by repeatedly checking all the possible paths. We use a boolean visited array and a priority queue to find the minimum cost; the node with the minimum cost has the highest priority. The algorithm is as follows: 1. Create a priority queue, a boolean visited array of the size of the number of nodes, and a min_cost variable initialised with the maximum value. Add the source node to the queue and mark it visited. 2. Pop the element with the highest priority from the queue. If the removed node is the destination node, check the min_cost variable; if the value of min_cost is greater than the current cost, update the variable. 3. If the popped node is not the destination node, add all the unvisited nodes adjacent to it to the priority queue. 
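The priority-queue properties above can be demonstrated with Python's standard `heapq` module, which implements a min-heap, i.e. an ascending-order priority queue. A running counter is included as a tie-breaker so that equal-priority elements are served in order of occurrence, as the text requires; the item labels are arbitrary.

```python
import heapq
import itertools

# A min-heap dequeues the element with the lowest priority value first.
# The counter breaks ties, so equal-priority items leave in arrival order.
arrival = itertools.count()
pq = []
for priority, item in [(4, 'B'), (1, 'A'), (4, 'C'), (2, 'D')]:
    heapq.heappush(pq, (priority, next(arrival), item))

served = []
while pq:
    served.append(heapq.heappop(pq)[2])
# 'A' (priority 1) is served first, then 'D' (2); 'B' precedes 'C'
# because both have priority 4 and 'B' arrived first.
```

The `(priority, tie_breaker, item)` tuple layout is the standard idiom: tuples compare field by field, so the counter is only consulted when priorities are equal.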
For the example graph, the priority queue and visited array evolve as follows:
Priority Queue | Visited Array
S->0 |
S->0, A->1, G->12 | [S:T]
S->0, A->1, C->2, B->4, G->12 | [S:T, A:T]
S->0, A->1, C->2, D->3, B->4, G->4, G->12 | [S:T, A:T, C:T]
S->0, A->1, C->2, D->3, B->4, G->4, G->6, G->12 | [S:T, A:T, C:T, D:T]
S->0, A->1, C->2, D->3, B->4, G->4, G->6, D->7, G->12 | [S:T, A:T, C:T, D:T, B:T]
S->0, A->1, C->2, D->3, B->4, G->4, G->6, D->7, G->12 | [S:T, A:T, C:T, D:T, B:T] (min_cost = 4)
S->0, A->1, C->2, D->3, B->4, G->4, G->6, D->7, G->10, G->12 | [S:T, A:T, C:T, D:T, B:T]
Advantage: Uniform cost search is an optimal search method because at every step the path with the least cost is chosen. Disadvantage: It does not care about the number of steps involved in the search problem, only about path cost; with zero-cost edges the algorithm may get stuck in an infinite loop. Completeness: UCS is complete because it explores nodes with the least cost first, so it will not go down an infinite branch (provided every step cost is positive). Optimality: UCS is optimal because it always expands the cheapest path first, even if the edges have different costs. Note that this guarantee requires all edge costs to be non-negative; with negative edge costs UCS is not guaranteed to find the optimal path. Time complexity: The time complexity of Uniform Cost Search (UCS) depends on the size of the search space and the branching factor. In the worst case, UCS explores every possible node before reaching the goal. Time complexity: O(bᵈ), where b is the branching factor (the maximum number of child nodes for each node) and d is the depth of the goal node. Space complexity: The space complexity is the same as for BFS: O(bᵈ). Reference: https://www.educba.com/uniform-cost-search/ Lecture-11-12 Informed Search Algorithms Informed search algorithms are more useful for large search spaces. 
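The UCS trace above can be reproduced in code. A minimal sketch using a heap-based priority queue; the edge weights below are inferred from the worked trace (S→A costs 1, S→G costs 12, and so on) and are an assumption, not given explicitly in the notes. This version returns as soon as the goal is popped, which for non-negative costs yields the same min_cost the tabular version converges to.

```python
import heapq

def ucs(graph, start, goal):
    """Uniform cost search over {node: [(step_cost, neighbour), ...]}.

    Returns (cost, path) of the cheapest path, or (None, []) if the
    goal is unreachable. Assumes non-negative edge costs.
    """
    frontier = [(0, start, [start])]   # priority = path cost so far
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path          # first pop of the goal is cheapest
        if node in visited:
            continue
        visited.add(node)
        for step_cost, nbr in graph.get(node, []):
            if nbr not in visited:
                heapq.heappush(frontier, (cost + step_cost, nbr, path + [nbr]))
    return None, []

# Edge weights inferred from the trace above (hypothetical reconstruction).
graph = {'S': [(1, 'A'), (12, 'G')],
         'A': [(1, 'C'), (3, 'B')],
         'C': [(1, 'D'), (2, 'G')],
         'D': [(3, 'G')],
         'B': [(3, 'D')]}
cost, path = ucs(graph, 'S', 'G')
```

With these weights the search settles on the cost-4 route S → A → C → G, matching the min_cost = 4 row of the trace.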
An informed search algorithm uses the idea of a heuristic, so it is also called heuristic search. Heuristic function: A heuristic is a function used in informed search to find the most promising path. It takes the current state of the agent as input and produces an estimate of how close the agent is to the goal. The heuristic function might not always give the best solution, but it guarantees finding a good solution in reasonable time. A heuristic function helps the search algorithm choose a branch from the ones that are available; it aids the decision process by using extra knowledge about the search space. The heuristic function is represented by h(n); it estimates the cost of an optimal path between a state and the goal, and its value is always non-negative. Greedy best-first search example: the priority queue (ordered by h(n)) and the expanded nodes evolve as follows:
Priority Queue | Expanded
S->13, B->4, A->12 | S
S->13, B->4, F->2, E->8, A->12 | S, B
S->13, B->4, F->2, G->0, E->8, I->9, A->12 | S, B, F
S->13, B->4, F->2, G->0, E->8, I->9, A->12 | S, B, F, G
Advantages: Much better than the uninformed methods. Best-first search can switch between BFS and DFS behaviour, gaining the advantages of both algorithms. This algorithm is more efficient than the BFS and DFS algorithms. Disadvantages: It can behave like an unguided depth-first search in the worst case. Like DFS, it can get stuck in a loop. Completeness: If it gets stuck in an infinite loop it will not be able to find any solution, i.e. it is an incomplete algorithm. Optimality: A good solution is guaranteed; an optimal one is not. Time complexity: O(bᵐ), as it depends on the quality of the heuristic function; with a bad heuristic function the complexity may be the same as for uninformed search. Space complexity: O(bᵐ), because in the worst case it may have to keep all generated nodes in memory to find the result. 
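Greedy best-first search expands whichever frontier node has the smallest h(n), ignoring path cost entirely. A minimal sketch; the h-values below come from the worked trace (S=13, B=4, F=2, G=0, …), while the graph edges are an assumption reconstructed to be consistent with that trace.

```python
import heapq

def greedy_best_first(graph, h, start, goal):
    """Expand the frontier node with the lowest heuristic value h(n).

    `graph` is an adjacency dict; `h` maps each node to its heuristic.
    Returns a path to `goal`, or None.
    """
    frontier = [(h[start], start, [start])]
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)   # lowest h(n) first
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nbr in graph.get(node, []):
            if nbr not in visited:
                heapq.heappush(frontier, (h[nbr], nbr, path + [nbr]))
    return None

# Edges reconstructed from the trace (hypothetical); h-values from the notes.
graph = {'S': ['A', 'B'], 'B': ['E', 'F'], 'F': ['G', 'I']}
h = {'S': 13, 'A': 12, 'B': 4, 'E': 8, 'F': 2, 'G': 0, 'I': 9}
path = greedy_best_first(graph, h, 'S', 'G')
```

The expansion order S, B, F, G matches the trace: at each step the node with the smallest h is chosen, and the path cost so far plays no role.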
A* Search Algorithm A* is a best-first search algorithm that relies on an open list and a closed list to find a path that is both optimal and complete towards the goal. It works by combining the benefits of the uniform-cost search and greedy search algorithms: its algorithm includes two separate pathfinding functions that take into account the cost from the root node to the current node and the estimated cost from the current node to the goal node. The first function is g(n), which calculates the path cost between the start node and the current node. The second function is h(n), a heuristic that estimates the path cost from the current node to the goal node. Their sum, f(n) = g(n) + h(n), represents the cost of the most efficient estimated path towards the goal. A* continues to re-evaluate both g(n) and h(n) throughout the search for all of the nodes it encounters in order to arrive at the minimal-cost path to the goal. This algorithm is extremely popular for pathfinding in strategy computer games. It also uses a priority queue. Algorithm: 1. Create an open list and a closed list that are both empty. Put the start node in the open list. 2. Loop until the goal is found or the open list is empty: a. Find the node with the lowest F cost in the open list and place it in the closed list. b. Expand this node; for each node adjacent to it: i. If it is on the closed list, ignore it. ii. If it is not on the open list, add it to the open list, store the current node as its parent, and calculate its F, G and H costs. iii. If it is on the open list, compare the G cost of this path to the node with that of the old path. If the G cost via the current node is lower, change the node's parent to the current node and recalculate its F, G and H costs. 3. If the open list is empty, fail. 
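The open-list/closed-list procedure above can be sketched compactly in Python. A minimal sketch: instead of updating parent pointers in place (step 2.b.iii), this version records the best-known g for each node and simply re-pushes improved entries, which is equivalent for non-negative costs. The example graph and heuristic values are hypothetical, chosen so that the heuristic is admissible.

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search with f(n) = g(n) + h(n).

    `graph` maps node -> [(step_cost, neighbour), ...]; `h` maps node
    to its heuristic estimate. Returns (cost, path) or (None, []).
    """
    open_list = [(h[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}                           # cheapest g found so far
    while open_list:
        f, g, node, path = heapq.heappop(open_list)  # lowest f first
        if node == goal:
            return g, path
        for step, nbr in graph.get(node, []):
            new_g = g + step
            if new_g < best_g.get(nbr, float('inf')):  # better route to nbr
                best_g[nbr] = new_g
                heapq.heappush(open_list,
                               (new_g + h[nbr], new_g, nbr, path + [nbr]))
    return None, []                               # open list empty: fail

# Hypothetical weighted graph and admissible heuristic values.
graph = {'S': [(1, 'A'), (10, 'G')],
         'A': [(1, 'C'), (5, 'B')],
         'C': [(3, 'G'), (9, 'D')]}
h = {'S': 5, 'A': 3, 'B': 4, 'C': 2, 'D': 6, 'G': 0}
cost, path = a_star(graph, h, 'S', 'G')
```

Here the direct edge S → G (cost 10, f = 10) loses to the route via A and C (cost 5, f = 5): g steers the search away from expensive paths while h steers it towards the goal.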
An Example
Open List (Priority Queue) | Closed List
S->5 |
S->5, A->4, G->10 | S
S->5, A->4, C->4, B->7, G->10 | S, A
S->5, A->4, C->4, G->6, B->7, G->10, D->11 | S, A, C
S->5, A->4, C->4, G->6, B->7, G->10, D->11 | S, A, C, G
Advantages: The A* search algorithm performs better than the other search algorithms discussed here. It can solve very complex problems. Disadvantages: It is not guaranteed to produce the shortest path when its heuristic is inadmissible, since it relies on heuristics and approximation. A* has some complexity issues. Its main drawback is the memory requirement: it keeps all generated nodes in memory, so it is not practical for many large-scale problems. Completeness: Yes, if the search space is finite, step costs are finite and the branching factor is also finite. No, if the search space is infinite or contains zero-cost loops. Optimality: Yes, with an admissible heuristic, a consistent (monotonic) heuristic and finite path costs. No, with an inadmissible heuristic or non-finite path costs. Time complexity: O(bᵐ) Space complexity: O(bᵐ) An example of an admissible heuristic is the straight-line distance (Euclidean distance) in a map-based pathfinding problem. If you are using A* to find the shortest path between two cities, and your heuristic is the straight-line distance between the current city and the goal city, this heuristic is admissible. This is because the straight-line distance will never overestimate the actual road distance: roads often have curves, making the actual travel distance longer than the straight-line distance. An example of an inadmissible heuristic is using the Manhattan distance (which calculates the distance by summing horizontal and vertical movements) in a scenario where diagonal movements are allowed. In this case, the Manhattan distance can overestimate the true cost because it does not account for the more direct diagonal movement that might result in a shorter path. 
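The two heuristics just discussed are easy to write down and compare on grid coordinates; a small sketch with hypothetical (x, y) points.

```python
import math

def euclidean(a, b):
    """Straight-line distance between points a and b.

    Admissible for map pathfinding: no real route can be shorter
    than the straight line.
    """
    return math.hypot(a[0] - b[0], a[1] - b[1])

def manhattan(a, b):
    """Sum of horizontal and vertical offsets.

    Admissible on a 4-connected grid, but can overestimate (become
    inadmissible) when diagonal moves are allowed.
    """
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

# With diagonal moves, (0, 0) -> (3, 3) takes 3 diagonal steps of
# length sqrt(2) each (total ~4.24), yet manhattan reports 6:
# an overestimate, so it is inadmissible in that setting.
overestimate = manhattan((0, 0), (3, 3)) > 3 * math.sqrt(2)
```

This is exactly the distinction the paragraphs above draw: the same formula can be admissible or inadmissible depending on which moves the problem allows.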
Since the heuristic overestimates the actual minimum cost, it can lead the A* algorithm to a suboptimal solution, violating the condition for optimality. A consistent (monotonic) heuristic is a heuristic that satisfies the following condition: for any node n and its successor node n′, the estimated cost of reaching the goal from n is less than or equal to the cost of reaching n′ plus the estimated cost from n′ to the goal. Mathematically: h(n) ≤ c(n, n′) + h(n′). This ensures that the total estimated cost along any path is non-decreasing, and as a result A* never needs to revisit a node, improving efficiency. Solving the 8-puzzle problem Solution: h(n) = number of misplaced tiles, g(n) = depth of the node, f(n) = g(n) + h(n). Do it yourself: Solve this problem using the A* search algorithm. The start state is A and the goal state is J. The value written on each edge represents the distance between nodes, and the number written on each node represents its heuristic value. Solution
Open List (Priority Queue) | Closed List
A->10 |
A->10, F->9, B->14 | A
A->10, F->9, G->9, H->13, B->14 | A, F
A->10, F->9, G->9, I->8, H->13, B->14 | A, F, G
A->10, F->9, G->9, I->8, J->10, H->10, B->14, E->15 | A, F, G, I
So the path is A -> F -> G -> I -> J. Lecture-13 Hill Climbing Algorithm The hill climbing algorithm is a local search algorithm that continuously moves in the direction of increasing elevation/value to find the peak of the mountain, i.e. the best solution to the problem. It terminates when it reaches a peak where no neighbour has a higher value. It is also called greedy local search because it only looks at its good immediate neighbour states and not beyond them. Hill climbing is mostly used when a good heuristic is available. In this algorithm, we do not need to maintain and handle a search tree or graph, as it keeps only a single current state. A node of the hill climbing algorithm has two components: state and value. 
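The misplaced-tiles heuristic h(n) for the 8-puzzle is a one-liner; a small sketch in which states are hypothetical 9-tuples read row by row, with 0 standing for the blank (the blank is excluded from the count, since "moving" it is not a tile move).

```python
def misplaced_tiles(state, goal):
    """h(n) for the 8-puzzle: the number of tiles out of place.

    `state` and `goal` are 9-tuples read row by row; 0 is the blank
    and is not counted. Admissible: every misplaced tile needs at
    least one move to reach its home square.
    """
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

goal = (1, 2, 3,
        4, 5, 6,
        7, 8, 0)
# Hypothetical start state: tiles 7 and 8 are out of place.
state = (1, 2, 3,
         4, 5, 6,
         0, 7, 8)
h_value = misplaced_tiles(state, goal)
```

Combined with g(n) = depth of the node, f(n) = g(n) + h(n) gives exactly the evaluation function the solution above prescribes.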
Hill climbing is a technique used for optimising mathematical problems. One of the widely discussed examples of the hill climbing algorithm is the travelling salesman problem, in which we need to minimise the distance travelled by the salesman. Features of Hill Climbing Generate and Test variant: Hill climbing is a variant of the generate-and-test method. The generate-and-test method produces feedback which helps to decide which direction to move in the search space. Greedy approach: The hill climbing search moves in the direction which optimises the cost. No backtracking: It does not backtrack through the search space, as it does not remember previous states. State-Space Diagram for Hill Climbing ❖ The state-space landscape is a graphical representation of the hill climbing algorithm, showing a graph between the various states of the algorithm and the objective function/cost. On the Y-axis we take the function, which can be an objective function or a cost function, and on the X-axis the state space. If the Y-axis function is an objective function, the goal of the search is to find the global maximum (and local maxima). If the Y-axis function is a cost function, the goal of the search is to find the global minimum (and local minima). ❖ Local maximum: a state which is better than its neighbour states, but there is also another state which is higher than it. ❖ Global maximum: the best possible state of the state-space landscape; it has the highest value of the objective function. ❖ Current state: the state in the landscape diagram where the agent is currently present. ❖ Flat local maximum: a flat region of the landscape where the neighbouring states all have the same value.
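The core loop described above, "move to a better neighbour until no neighbour is better", can be sketched in a few lines. A minimal sketch with a hypothetical toy objective; note how it returns the first peak it reaches, which may be only a local maximum.

```python
def hill_climb(objective, neighbours, current):
    """Steepest-ascent hill climbing: repeatedly move to the best
    neighbour until no neighbour improves on the current state.

    Keeps only a single current state (no search tree, no backtracking).
    """
    while True:
        best = max(neighbours(current), key=objective)
        if objective(best) <= objective(current):
            return current            # peak reached (possibly only local)
        current = best

# Hypothetical toy landscape: a single peak at x = 3;
# neighbours of an integer x are x - 1 and x + 1.
objective = lambda x: -(x - 3) ** 2
peak = hill_climb(objective, lambda x: [x - 1, x + 1], 0)
```

Starting at 0, the state climbs 1, 2, 3 and stops at 3, where both neighbours have lower objective values; on a landscape with several hills the same loop would stop at whichever local maximum it reaches first.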