HS1501

§1 Why care

We start the course with a definition of artificial intelligence, a showcase of its capabilities and problems, and a number of natural questions about AI that we will look into in this course.

1.1 What is Artificial Intelligence (AI)?

Depending on the context, the term AI may refer to a system or a scientific discipline. The following definition, given in Apr. 2019 by the High-Level Expert Group on Artificial Intelligence set up by the European Commission, covers both senses of the term. Some of the technical terms used in it will be explained in §6.1.

Artificial intelligence (AI) systems are software (and possibly also hardware) systems designed by humans that, given a complex goal, act in the physical or digital dimension by perceiving their environment through data acquisition, interpreting the collected structured or unstructured data, reasoning on the knowledge, or processing the information, derived from this data and deciding the best action(s) to take to achieve the given goal. AI systems can either use symbolic rules or learn a numeric model, and they can also adapt their behaviour by analysing how the environment is affected by their previous actions.

As a scientific discipline, AI includes several approaches and techniques, such as machine learning (of which deep learning and reinforcement learning are specific examples), machine reasoning (which includes planning, scheduling, knowledge representation and reasoning, search, and optimization), and robotics (which includes control, perception, sensors and actuators, as well as the integration of all other techniques into cyber-physical systems).

Source: High-Level Expert Group on Artificial Intelligence. “A definition of AI: Main capabilities and disciplines”. 8 Apr. 2019. https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=60651. Last accessed: 9 Aug. 2024.

We will also use the term AI to refer to AI systems collectively, in addition to using it for a particular AI system and for the scientific discipline of AI. Programs that drive AI systems are often called AI models, or models for short.
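The definition above essentially describes a loop: perceive the environment, interpret the data, decide on an action, act, and possibly adapt. The Python sketch below is only an illustration of that loop on a toy problem; the environment, the decision rule, and the goal are hypothetical placeholders, not part of the EC definition or of any real system.

```python
# Illustrative sketch of the perceive-interpret-decide-act loop described above.
# Everything here (Environment, choose_action, the goal) is a hypothetical toy,
# not taken from the EC definition or from any real AI system.

class Environment:
    """A toy digital environment: a counter that the agent should move to a target."""

    def __init__(self, target: int = 5):
        self.state = 0
        self.target = target

    def observe(self) -> int:
        # Perception: the agent acquires data about the environment.
        return self.state

    def apply(self, action: int) -> None:
        # The agent's action changes the environment.
        self.state += action


def choose_action(observation: int, target: int) -> int:
    # Interpret the observation and decide the action that moves toward the goal.
    if observation < target:
        return 1
    if observation > target:
        return -1
    return 0


env = Environment(target=5)
for step in range(20):
    obs = env.observe()                      # perceive
    action = choose_action(obs, env.target)  # interpret and decide
    if action == 0:
        print(f"Goal reached after {step} steps.")
        break
    env.apply(action)                        # act on the environment
```

A real AI system would replace the hand-written choose_action rule with symbolic rules or a learned numeric model, as the definition notes.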
1.2 What can AI do nowadays?

1.2.1 Beating humans in Go

An AI model called AlphaGo, developed by Google DeepMind, beat the 9-dan professional Go player Lee Sedol in 2016. It was the first time a computer Go program had beaten a 9-dan professional player without a handicap. This had been considered practically unachievable in view of the vast number of possible moves in the game of Go.

Image source: Axd, CC BY-SA 4.0, via Wikimedia Commons. https://commons.wikimedia.org/wiki/File:Lee-sedol-alphago-divine-move.jpg.

1.2.2 Checking parking payment

Prof. Yu explains how AI can help automate the checking of parking payment in the video below. [Video: 26 sec]

1.2.3 Writing job applications

Here is a demonstration from Aug. 2024 of how Google’s Gemini AI can help write cover letters for job applications, using a job advertisement taken from the NUS Student Work Scheme (NSWS) system and the advice from the NUS Centre for Future-ready Graduates on crafting cover letters. [Screenshots of the demonstration]
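The demonstration above uses Gemini’s web interface. Purely as an illustration of what the same kind of request looks like in code, here is a minimal sketch assuming Google’s google-generativeai Python SDK; the model name, the environment variable holding the API key, and the placeholder inputs are assumptions, and this is not the workflow used in the demonstration above.

```python
# A minimal sketch of asking an LLM to draft a cover letter, assuming the
# google-generativeai Python SDK (pip install google-generativeai).
# The model name, API key variable, and inputs below are illustrative placeholders.

import os
import google.generativeai as genai

# Assumes an API key has been placed in this environment variable.
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

model = genai.GenerativeModel("gemini-1.5-flash")  # model name is an assumption

job_advertisement = "..."   # paste the job advertisement text here
my_background = "..."       # a short summary of your skills and experience

prompt = (
    "Write a one-page cover letter for the job advertisement below. "
    "Use a standard structure: a brief introduction, a paragraph matching "
    "my background to the role, and a polite closing.\n\n"
    f"Job advertisement:\n{job_advertisement}\n\n"
    f"My background:\n{my_background}"
)

response = model.generate_content(prompt)
print(response.text)
```

The code merely packages the job advertisement and a summary of the applicant’s background into one prompt; the drafting itself is done by the model.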
1.2.4 And many other applications

Andrew Ng: AI is the new electricity. It will transform every industry and create huge economic value. Technology like supervised learning is automation on steroids. It is very good at automating tasks and will have an impact on every sector – from healthcare to manufacturing, logistics and retail.

Source: Catherine Jewell. “Artificial intelligence: the new electricity”. WIPO Magazine, Jun. 2019. https://www.wipo.int/wipo_magazine/en/2019/03/article_0001.html.

Andrew Ng is an active proponent of AI education, Chairman and Co-Founder of Coursera, and Adjunct Professor at Stanford University. He co-founded Google Brain in 2012 and was Chief Scientist at Baidu from 2014 to 2017.

1.3 Does AI bring any problems?

1.3.1 AI won an art competition

Image source: Jason M. Allen / Midjourney. “Théâtre d’Opéra Spatial”. https://commons.wikimedia.org/wiki/File:Th%C3%A9%C3%A2tre_d’Op%C3%A9ra_Spatial.webp.

The image above won first place in the digital art competition at the 2022 Colorado State Fair. Jason M. Allen made it using an AI program called Midjourney, which generates custom images from users’ text inputs in seconds. This sparked controversy over the role of AI in art.

Sources: Colorado State Fair. “2022 Fine Arts First, Second & Third”. 29 Aug. 2022. https://coloradostatefair.com/wp-content/uploads/2022/08/2022-Fine-Arts-First-Second-Third.pdf. Rachel Metz. “AI won an art contest, and artists are furious”. CNN Business, 3 Sep. 2022. https://edition.cnn.com/2022/09/03/tech/ai-art-fair-winner-controversy/index.html.

1.3.2 Autonomous cars crashed

In May 2016, Joshua Brown’s Tesla Model S collided with a truck in Florida, USA, while it was engaged in “Autopilot” mode, and he was killed. Prof. Yu discusses this and similar accidents in the following video. [Video: 2 min 26 sec]

Reference: Danny Yadron and Dan Tynan. “Tesla driver dies in first fatal crash while using autopilot mode”. The Guardian, Jul. 2016. https://www.theguardian.com/technology/2016/jun/30/tesla-autopilot-death-self-driving-car-elon-musk.

1.3.3 Fake videos circulated

In Mar. 2022, during the war between Russia and Ukraine, a deepfake video of Ukrainian President Volodymyr Zelenskyy urging Ukrainians to put down their weapons circulated on social media and was placed on a Ukrainian news website by hackers. [Video: 1 min 12 sec]

Source: The Telegraph (@telegraph). “Deepfake video of Volodymyr Zelensky surrendering surfaces on social media”. YouTube, 17 Mar. 2022. https://youtu.be/X17yrEV5sl4.

1.3.4 AI Risk

In May 2023, many AI experts (and other notable figures) signed the following Statement on AI Risk.

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

The signatories include: Geoffrey Hinton (Emeritus Professor of Computer Science at the University of Toronto, awarded the Turing Award in 2018 for his work in AI); Yoshua Bengio (Professor of Computer Science at Université de Montréal, awarded the Turing Award in 2018 for his work in AI); Demis Hassabis (CEO of Google DeepMind, which developed AlphaGo mentioned in §1.2.1 above); and Sam Altman (CEO of OpenAI, which developed the GPT family of large language models and the text-to-image AI model DALL-E).

Reference: Center for AI Safety. “Statement on AI Risk”. https://www.safe.ai/statement-on-ai-risk. Last accessed: 9 Aug. 2024.

1.4 Reflection

How good is current AI? How fast is it developing?

How much does it cost to incorporate AI in our work nowadays?
– Can only the tech giants do this, or can individuals also afford to take advantage?

How much technical knowledge is required of us to deploy AI to perform tasks that are specific to our needs?
– Do we need to know coding for it?

How much of the work we are now doing ourselves can already be automated by AI?

What can we exploit AI for, as individuals and as organizations?
– Learning? Brainstorming? Making better decisions? Increasing productivity? Saving time and money? Starting and running businesses? Earning money? Improving life? Saving lives? Saving the earth?
– What are some limitations of current AI? How creative can it be?

What should we look out for to increase our chances of success when deploying AI?

Is AI really going to be everywhere?

Will AI take away my job in the future?
– Which jobs are more easily replaced by AI?
– What will the role of AI be in the future job market?

How will AI transform jobs, societies, businesses, economies, politics, etc.?

How do AI-made products compare to traditional machine-made products and hand-made products?

In which directions is AI development heading currently?

Will AI really be dangerous to us (humans)?

Answers to many of these questions may change as the technology evolves. Therefore, instead of presenting one fixed set of answers, this course will guide us towards forming our own set of answers.

1.5 Try this out!

In a paper published at the International Conference on Computer Vision 2023, researchers from the University of Maryland, Adobe Research, and Carnegie Mellon University introduced the use of rich text, i.e., text augmented with font, style, size, colour, and even Internet information, to give the user of a text-to-image AI model more fine-grained control over the output.

Here are some demonstrations from their paper, where the images on the left are generated using only textual information, and the images on the right also take the augmented information into account.

Image source: Songwei Ge, Taesung Park, Jun-Yan Zhu, and Jia-Bin Huang. “Expressive Text-to-Image Generation with Rich Text”. IEEE International Conference on Computer Vision (ICCV) 2023.

Try this out yourself on Hugging Face to see how helpful the use of rich text is in specifying the image to be generated.

1. Open the “Expressive Text-to-Image Generation with Rich Text” page on Hugging Face at https://huggingface.co/spaces/songweig/rich-text-to-image.
2. Type into the text box a textual description of an image you would like to generate.
3. Include more specific information using the buttons at the top of the text box.
4. Click the “Generate” button at the bottom, and wait for the process to end.
5. The model generates the required outputs in the box labelled “Rich-text” and in the box labelled “Plain-text”. Compare the two.
6. Repeat the steps above with different inputs.
7. Evaluate the efficiency of the model and the quality of the outputs.

Here is a demonstration that is not from the authors of the AI. [Video]

Moral: trying out helps one understand an AI better.
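For readers who would rather script the §1.5 experiment than click through the web interface, a public Hugging Face Space can in general also be called from Python. The sketch below is only a starting point and assumes the gradio_client package; the exact endpoints and parameters of this particular Space are not documented here, so the sketch connects and inspects the API rather than guessing the arguments of a predict call.

```python
# A minimal sketch for exploring the "rich-text-to-image" Hugging Face Space from
# Python, assuming the gradio_client package (pip install gradio_client).
# The Space's endpoints and parameters are not reproduced here; view_api() is used
# to discover them, so nothing below is taken from the authors' own code.

from gradio_client import Client

# Connect to the public Space used in the steps above.
client = Client("songweig/rich-text-to-image")

# Print the Space's callable endpoints and their expected parameters, so that a
# concrete client.predict(...) call can be written against the real API.
client.view_api()

# Once the endpoint and its parameters are known, a call would look roughly like:
# result = client.predict(..., api_name="/generate")  # arguments depend on the Space
# print(result)
```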