Week 2 - History of AI
Southville International School and Colleges
2024
WEEK 2: History of AI
First Semester | A.Y. 2024 - 2025

1940s - 1950s: Birth of AI Concepts
1943: Warren McCulloch and Walter Pitts developed a model of artificial neurons, forming the first theoretical foundation of neural networks.
1950: Alan Turing introduced the Turing Test, a criterion for determining whether a machine can think or possess intelligence.
1956: The term "Artificial Intelligence" was officially coined at the Dartmouth Conference.
1958: John McCarthy developed the LISP programming language, which became the standard for AI research.

1960s: Early AI Programs
1961: The first industrial robot, Unimate, was deployed on a General Motors assembly line.
1966: Joseph Weizenbaum created ELIZA, an early natural language processing program that simulated conversation with a psychotherapist.
Late 1960s: Stanford Research Institute (SRI) developed the Shakey robot, which could perceive its environment and make decisions, showcasing the potential of AI in robotics.

1970s: AI Winter
1970s: Expectations outpaced actual progress. Funding for AI projects began to decline, leading to the first AI Winter, a period of reduced interest and investment.
1972: The Prolog programming language was developed, which would become central to AI logic programming.
1974-1980: Progress slowed due to limited computational power and the inability of AI programs to handle real-world complexity.

1980s: Revival Through Expert Systems
1980: AI experienced a resurgence due to the rise of expert systems, which used if-then rules to emulate human decision-making. Companies and governments began investing again.
1982: John Hopfield popularized the concept of neural networks, which allowed for more adaptive learning by machines.
1986: David Rumelhart, Geoffrey Hinton, and Ronald Williams published work on backpropagation, revolutionizing neural networks and enabling more complex learning algorithms.
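The 1943 McCulloch-Pitts model mentioned above can be illustrated in a few lines: a neuron sums binary inputs against fixed weights and "fires" only when a threshold is reached. This is a minimal sketch, not the authors' original formalism; the function name and parameters are illustrative.

```python
def mp_neuron(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of binary inputs meets the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Logical AND: both inputs must be active to reach a threshold of 2.
print(mp_neuron([1, 1], [1, 1], 2))  # 1
print(mp_neuron([1, 0], [1, 1], 2))  # 0

# Logical OR: a single active input suffices when the threshold is 1.
print(mp_neuron([0, 1], [1, 1], 1))  # 1
```

By choosing weights and thresholds, such units can compute basic logic gates, which is why McCulloch and Pitts saw them as a theoretical foundation for computation in networks of neurons.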
1990s: Machine Learning & Applications
1990s: AI moved beyond rule-based systems into machine learning, where algorithms learn from data and improve over time.
1997: IBM's Deep Blue defeated world chess champion Garry Kasparov, demonstrating the potential of AI in strategic decision-making.
1999: AI began to be integrated into everyday technology, such as voice recognition systems used in call centers.

2000s: AI Becomes Ubiquitous
2000s: AI technology, such as machine learning and NLP, became mainstream with applications in search engines (Google), spam filtering, and online recommendation systems.
2002: The first commercially successful robot vacuum cleaner, the Roomba, was introduced, showcasing AI in consumer products.
2006: AI entered a new era with Geoffrey Hinton's work on deep learning and the rise of big data.

Early 2010s: AI Breakthroughs & Deep Learning
2011: IBM's Watson won Jeopardy!, beating human champions and showcasing AI's ability to process natural language and retrieve vast amounts of information.
2012: AlexNet, a deep neural network, achieved a breakthrough in image recognition, leading to great advances in computer vision.
2014: Amazon's Alexa-powered Echo was released, followed by Google Home in 2016, integrating AI-powered virtual assistants into households.
2015: OpenAI, an organization dedicated to developing safe and broadly beneficial artificial intelligence, was founded, underscoring the long-term implications of AI.

Late 2010s: AI in Everyday Life and Industry
2016: Google DeepMind's AlphaGo defeated world Go champion Lee Sedol, a major achievement in AI strategy and learning.
2017: AlphaZero, a more advanced successor to AlphaGo, mastered chess, shogi, and Go with no prior knowledge of the games, learning entirely by playing against itself.
2019: GPT-2, developed by OpenAI, demonstrated the ability to generate coherent, human-like text.
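The shift from rule-based systems to "algorithms that learn from data" can be made concrete with a toy example: instead of hand-coding an if-then rule for the AND function, a simple perceptron-style update rule learns it from labeled examples. This is an illustrative sketch (the data, learning rate, and epoch count are invented for the example), not a production training loop.

```python
def train_perceptron(data, epochs=10, lr=0.1):
    """Learn two weights and a bias from (inputs, label) pairs."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), label in data:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred          # error drives the weight update
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Learn the AND function from examples rather than hand-written rules.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

print([predict(x1, x2) for (x1, x2), _ in data])  # [0, 0, 0, 1]
```

The behavior here comes entirely from the training data; changing the labels changes the learned function, which is the essential contrast with the expert systems of the 1980s.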
2020s: AI's Dominance & Ethical Challenges
2020: OpenAI's GPT-3 emerged as the largest and most powerful language model of its time, capable of generating complex text, writing essays, answering questions, and even creating code.
2020: During the COVID-19 pandemic, AI helped researchers predict virus spread, speed up drug discovery, and develop treatments.
2021: AI advancements in self-driving cars, such as Tesla's Autopilot and Waymo, demonstrated the capability to navigate complex real-world environments.
2022: The rapid development of AI art generators, such as DALL-E and Midjourney, brought AI into the creative arts.

What about the Future of AI? What do you think will be the next big advancement?