
03_Generative AI & Prompt Engineering.pdf


Full Transcript


Generative AI & Prompt Engineering
MGT001354 AI for Innovation and Entrepreneurship
Dr. Peter Hofmann | October 30th, 2023 | Munich

"GPT-4 is out and it crushes every other LLM, and many humans" (stateof.ai 2023)
SOURCE: Benaich and Air Street Capital (2023) State of AI Report.

"60-70% of employees' time can be automated with generative AI" (McKinsey 2023)
SOURCE: McKinsey and Company (2023) The economic potential of generative AI: The next productivity frontier.

ChatGPT is one of the fastest-growing internet products (days to 100 million users)
SOURCE: https://aiimpacts.org/how-popular-is-chatgpt-part-2-slower-growth-than-pokemon-go/

Timeline of existing LLMs (size > 10bn)
SOURCE: Zhao et al. (2023) A Survey of Large Language Models. arXiv.

[Picture: Memeroid]

[Figure: 2023 Gartner Hype Cycle]
SOURCE: https://www.gartner.com/en/articles/what-s-new-in-artificial-intelligence-from-the-2023-gartner-hype-cycle

Escape the hype and create sustainable value
Plotting (expected) potential over time: exaggerated expectations lead to fast user growth but also to rapid disinterest, whereas realistic expectations and the right use case lead to sustainable value creation.

Learning Objectives
1. Leverage generative AI's application potential: get to know the application potential of generative AI (especially large language models) and its transformative impact on the economy.
2. Be aware of generative AI's limitations: recognize the limitations of state-of-the-art generative AI approaches and understand the need for governance mechanisms.
3. Incorporate generative AI into an organization: understand both the business and technical aspects of incorporating LLMs into an organization.
4. Make rational, informed business decisions: learn how to navigate this new era of LLMs, enabling you to thoroughly evaluate available open-source and closed-source LLM options, make strategic make-or-buy decisions, and consider process (re-)design.
5. Master the art and science of writing prompts: learn various techniques, strategies, and tips to craft effective prompts.

Agenda
1. Conceptual and technological basics
2. LLM application potentials and limitations
3. Business decisions in the LLM context
4. The Art and Science of Prompt Engineering (by Bastian Burger from TUM Venture Labs)
5. How SUMM AI simplifies complicated text (by Nicholas Wolf from SUMM AI)

Conceptual and technological basics

What is generative AI?
A specific class of computational techniques, or a subfield of AI: machine agency that exhibits creativity and autonomy in producing content that is new/original, diverse, meaningful, and more. It is the counterpart of discriminative modeling (generative modeling aims to infer the actual data distribution).
SOURCES: Feuerriegel et al. (2023) Generative AI. Bus Inf Syst Eng.; Ng and Jordan (2002) On discriminative vs. generative classifiers. Advances in Neural Information Processing Systems; appliedAI Initiative GmbH (2023) A Guide for Large Language Model Make-or-Buy Strategies.

Three key lessons from the history of generative AI
1. ChatGPT is not the start of the generative AI story. Don't forget generative adversarial networks (GANs) or applications such as DALL-E or AlphaFold.
2. Breakthroughs go back to seminal research papers of the last 10 years from universities (e.g., Stanford University) and companies (e.g., Google).
3. Recognize the technological breakthroughs that are beyond the mainstream buzz (e.g., transformers or diffusion models).

Large language models as the most recent instantiation of generative AI
- Perception: recognize structure and patterns in sensory data (Is the input raw data? E.g., images, sensors, video)
- Analysis: find trends, associations and make predictions (Is the input structured data? E.g., tabular data, graphs)
- Language: understand language and computer code (Is input or output text or code?)
- Planning: make tactical or strategic decisions (Is the output a sequence of steps?)
- Generation: synthesize new data samples (Is the output similar to raw sample inputs? E.g., images, sounds)

Why you should care about the technical particularities of LLMs
1. Data requirements are difficult but crucial to understand (e.g., due to use-case and technical dependencies).
2. The model architecture and new learning paradigms have implications for business decisions and potentials (e.g., ROI, competitive advantage, ...).
3. It is difficult to validate LLMs (consider the multidimensionality of business-effective benchmarking and the difficulties of detecting hallucinations).
4. A basic technical understanding helps you master prompt engineering (e.g., chain prompting).

Being successful with LLMs requires answering techno-economic questions on four levels
- LLM application: the application is key to business value generation (e.g., consider the hype around ChatGPT vs. GPT-3.5).
- Large language models (LLM): finding a suitable LLM requires making difficult choices, which we discuss in this lecture.
- Data: as in cooking, it all depends on the (data) ingredients. Data volume requirements strongly depend on the use case; data quality and fit are always crucial for success.
- Infrastructure: infrastructure is no commodity anymore. Access to specific high-performance hardware is costly and has become a strategic resource.
SOURCE: appliedAI Initiative GmbH (2023) A Guide for Large Language Model Make-or-Buy Strategies.
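Chain prompting, mentioned above as an example of why technical basics matter, decomposes a task into sequential prompts in which each model response feeds the next prompt. A minimal sketch, assuming a hypothetical `call_llm` function as a stand-in for any real LLM API client (not an actual library call):

```python
def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM API call (e.g., a chat completion).
    # It merely echoes the prompt, so the chaining logic can be followed end to end.
    return f"<answer to: {prompt}>"


def chain_prompt(document: str) -> str:
    # Step 1: extract key facts from the document.
    facts = call_llm(f"List the key facts in this text:\n{document}")
    # Step 2: the intermediate output becomes part of the next prompt.
    summary = call_llm(f"Write a one-sentence summary based on these facts:\n{facts}")
    return summary
```

Splitting the task this way trades one complex prompt for several simple ones, which tends to be easier to debug because each intermediate output can be inspected.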
Three characteristics describe current large language models in a nutshell
1. A large-scale, sequential neural network (e.g., a transformer with an attention mechanism).
2. Pre-training through self-supervision, in which auxiliary tasks are designed to learn a representation of natural language from large-scale datasets of text (e.g., via next-word prediction).
3. The language model may be fine-tuned with custom datasets, and specific tasks can be predicted with in-context learning.
SOURCES: Feuerriegel et al. (2023) Generative AI. Bus Inf Syst Eng.; appliedAI Initiative GmbH (2023) A Guide for Large Language Model Make-or-Buy Strategies.

Characteristic 1: A large-scale, sequential neural network

Language data is more than text
Language data is any sequence of tokens with a known set of vocabulary. Hence LaTeX formulas, musical notes, and programming languages like Python, Java, and C++ may all be adopted as training data.
SOURCE: appliedAI Initiative GmbH (2023) A Guide for Large Language Model Make-or-Buy Strategies.

Considering the sequence/context of tokens is a huge advantage compared to previous bag-of-words approaches
- I use an Apple MacBook for work
- We baked a delicious apple pie
- The latest Apple iPhone has a good camera
- Apples come in different colors, like green and red
- They say, "An apple a day keeps the doctor away"
→ I have an apple in my hand, its color is _____
NOTE: Attention mechanisms are strong in making use of contextual information.

Characteristic 2: Pre-training through self-supervision

Self-supervised learning
Predict any part of the input from any other part by pretending there is a part of the input that you do not know. Then predict the unknown part.

Self-supervised learning / next-word prediction
"I use an Apple MacBook for work" → "I use an Apple MacBook for ___" → predict (next word) = "work"

Characteristic 3: Fine-tuning and in-context learning
The language model may be fine-tuned with custom datasets for specific tasks, and specific tasks can be predicted with in-context learning.

Foundation models can make use of the multimodality of input and output data
Text, image, code, audio, video, 3D, genomics, chemical structures. A foundation model is a large neural network model that captures and generalizes knowledge from massive data* and serves as a strong starting point for further customization as well as a fundamental building block for various specific downstream tasks.
*GPT-4 was trained on ~13 trillion tokens.
SOURCE: appliedAI Initiative GmbH (2023) A Guide for Large Language Model Make-or-Buy Strategies.

A simple explanation of the new learning paradigms
Starting from a pre-trained LLM, the task spectrum is covered by task instructions and few-shot learning, while the ontological spectrum is covered by use-case knowledge and fine-tuning (accounting for industry, domain, and application specifics such as specific terms or ontological relationships).

Examples for few-shot learning and zero-shot learning (adapted from Brown et al. 2020)
Zero-shot prompt (task description only): "Translate English to German: Apple → …"
Few-shot prompt (task description plus examples): "Translate English to German: Cat → Katze, Dog → Hund, Car → Auto, Apple → …"
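The few-shot pattern above is, mechanically, just prompt assembly: the task description, example pairs, and final query are concatenated into one text sent to the model. A minimal sketch (the function name and format are illustrative, not any library's API):

```python
def build_few_shot_prompt(task, examples, query):
    """Assemble a few-shot prompt: task description, then example pairs, then the query.
    With an empty examples list this degenerates to a zero-shot prompt."""
    lines = [task]
    for source, target in examples:
        lines.append(f"{source} → {target}")
    lines.append(f"{query} →")  # the model is expected to complete this line
    return "\n".join(lines)


prompt = build_few_shot_prompt(
    "Translate English to German:",
    [("Cat", "Katze"), ("Dog", "Hund"), ("Car", "Auto")],
    "Apple",
)
print(prompt)
# Translate English to German:
# Cat → Katze
# Dog → Hund
# Car → Auto
# Apple →
```

Note that no model weights change here: few-shot "learning" happens entirely in the context window, which is exactly what the slides mean by in-context learning.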
Fine-tuning bears great potential but also challenges
Benefits of fine-tuning:
- Improved performance at a specific task
- Vertical models provide tailored and relevant outputs*
- Small models with good data can rival big models
Challenges of fine-tuning:
- Requires carefully curated datasets that accurately represent the target domain, language, or business context
- Requires expertise as well as sufficient computational resources
*Vertical models are foundation models tailored to specific industries, domains, and applications.

Own reading: Glossary of learning paradigms in the LLM context
- Pre-training: the initial phase of training a neural network model, in which it learns from a large dataset, allowing the model to capture general knowledge and patterns from the data to enhance its performance and adaptability.
- Fine-tuning: the process of adapting a pre-trained neural network model to perform specific tasks by training it on task-specific data, allowing the model to specialize its knowledge and improving its performance on specific applications.
- Few-shot learning: a technique whereby an AI model learns to perform a new task with a small number of examples, making it possible to teach the model something new without needing much training data.
- Zero-shot learning: a technique whereby an AI model can understand and perform a task with no specific examples or training on that task, relying instead on general knowledge it has learned from related tasks.
SOURCE: appliedAI Initiative GmbH (2023) A Guide for Large Language Model Make-or-Buy Strategies.

It would be embarrassing in interviews not to know what GPT and BERT stand for
GPT = generative pre-trained transformer
BERT = bidirectional encoder representations from transformers

LLM application potentials and limitations

[Figure]
SOURCE: McKinsey and Company (2023) The economic potential of generative AI: The next productivity frontier.
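The next-word-prediction objective described in the pre-training slides can be illustrated with a toy bigram model: each word in the text serves as the "label" for the word before it, so the text supervises itself and no manual annotation is needed. This is a deliberately crude sketch of the idea, not how transformers are actually trained:

```python
from collections import Counter, defaultdict


def train_bigram(corpus):
    # Self-supervision in miniature: count which word follows which,
    # using only the raw text itself as training signal.
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts


def predict_next(counts, word):
    # Greedy decoding: return the most frequent (most probable) next word.
    return counts[word].most_common(1)[0][0]


model = train_bigram([
    "I use an Apple MacBook for work",
    "We baked a delicious apple pie",
])
print(predict_next(model, "for"))  # → work
```

Real LLMs replace the count table with a transformer conditioning on the whole context, but the objective, predicting the most probable next token, is the same, which is also why the most probable response need not be the correct one.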
SOURCE: https://www.youtube.com/watch?v=S7xT

SOURCE: https://www.youtube.com/watch?v=4RfD5JiXt3A

Multimodality + zero-shot learning + magic dust = general AI?
LLMs can solve various but specific downstream tasks. They are a milestone on the road to general AI, and there is likely to be more to come.
VIDEO RECOMMENDATION: Sparks of AGI: early experiments with GPT-4 by Sébastien Bubeck. https://www.youtube.com/watch?v=qbIk7-JPB2c

Limitations: Hallucination and constraints in factuality & robustness
SOURCE: Zhang et al. (2023) How Language Model Hallucinations Can Snowball. arXiv.

Limitations: LLMs fail with confidence (or in other words, they sometimes hallucinate)
Probabilistic algorithms for making inferences may produce output with errors (most probable response vs. correct response), and the output is typically not easily verifiable. There is also a risk of adversarial attacks via prompt injection.
SOURCE: Feuerriegel et al. (2023) Generative AI. Bus Inf Syst Eng.

Take a look at the collection above: do you notice anything or sense any bias here?
Input: "Happy family". Output: [images].
SOURCE: https://www.statworx.com/en/content-hub/blog/dalle-2-open-ai/
NOTE: The picture was created with DALL-E 2.

Limitations: Bias, fairness, and toxicity
Societal biases, stereotypes, and toxicity permeate everyday human-generated content, so training data can amplify human biases, replicate toxic language, or perpetuate stereotypes. Machine bias: machines can perpetuate this bias and toxicity under the guise of objectivity (also consider automation bias). Instructions are an additional source of bias.
SOURCE: Feuerriegel et al. (2023) Generative AI. Bus Inf Syst Eng.

Limitations: Violation of copyright laws
Illegal copies of a work violate the reproduction right of creators; derivative works violate the transformation right of creators.
SOURCE: Feuerriegel et al. (2023) Generative AI. Bus Inf Syst Eng.
Limitations: Social and environmental concerns
https://theconversation.com/the-hidden-cost-of-the-ai-boom-social-and-environmental-exploitation-208669

Limitations: Costs / business case
LLMs might not be compatible with the traditional software monetization paradigm. The cost of inference is significant and requires a business case broken down to the call/prompt level, which may lead to the decision to use less expensive models.
Adapted from an idea of Vin Vashishta.

Business decisions in the LLM context

"Strategically, this has changed the way we work and what our focus areas are. The output quality and ease of use will shape both our professional and our private lives."
— Dr. Andreas Liebl, Managing Director and Founder, appliedAI Initiative GmbH

"In the future, I envision employees seamlessly collaborating with specialized AI assistants to efficiently address daily internal tasks or inquiries by customers."
— Bernhard Pflugfelder, Head of Use Cases and Applications, appliedAI Initiative GmbH

Process (re-)design matters when adopting LLMs
The spectrum from human augmentation and machine augmentation to full automation:
- Decision support: machine agency supports the humans.
- AI in the loop: machine agency observes and intervenes.
- Human-AI collaboration and teams: dynamic interaction of human and machine agency.
- Human in the loop: the human audits and alters the machine (machine agency delegates edge cases).
- Full delegation to AI: the human mandates tasks to machine agency without further intervention.
SOURCE: Möllers et al. (forthcoming) Contrasting Human-AI Workplace Relationship Configurations.
NOTE: Algorithmic management was excluded from the spectrum.
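Breaking the business case down to the call/prompt level, as the costs slide suggests, amounts to simple token arithmetic. A sketch with illustrative placeholder prices (not actual vendor rates; real per-token pricing varies by model and changes over time):

```python
def cost_per_call(input_tokens, output_tokens,
                  price_in_per_1k=0.01, price_out_per_1k=0.03):
    # Illustrative placeholder prices in USD per 1,000 tokens.
    # Output tokens are priced higher than input tokens, as is common.
    return (input_tokens / 1000) * price_in_per_1k \
         + (output_tokens / 1000) * price_out_per_1k


# A support workload of 10,000 calls/day, ~500 input and ~200 output tokens each:
per_call = cost_per_call(500, 200)  # 0.005 + 0.006 = 0.011 USD per call
per_day = 10_000 * per_call         # 110 USD per day
```

Even at fractions of a cent per call, volume makes inference a recurring cost line, which is why the slide notes that such a calculation may push teams toward less expensive models.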
Six possible approaches to consider when making LLM make-or-buy decisions
Positioned by strategic value (low to high) and degree of customization (low to high):
(1) Buy an application with a limitedly controllable LLM
(2) Buy an end-to-end application without LLM controllability
(3) Make the application, buy a controllable LLM
(4) Make the application, fine-tune the LLM
(5) Make the application, pre-train the LLM
(6) Stop
SOURCE: appliedAI Initiative GmbH (2023) A Guide for Large Language Model Make-or-Buy Strategies.

Own reading: Nine factors relevant for make-or-buy decisions
- Strategic value: establishing and maintaining proprietary knowledge and in-house expertise creates an intellectual asset and competitive advantage.
- Customization: developing LLMs in-house typically allows for greater customization, meaning that LLMs can be tailored to requirements and firm-specific use cases.
- Costs: the development process requires highly skilled experts and is time-consuming and resource-intensive.
- Intellectual property (IP): there may be concerns regarding ownership and usage rights of generated content.
- Security: firms should conduct a thorough risk assessment for each use case to identify and address potential security issues ex ante.
- Talent: the scarcity of experienced professionals often makes it difficult to establish a skilled in-house team.
- Legal advice: developing LLMs in-house requires firms to seek legal expertise to navigate an increasingly complex regulatory landscape.
- Data: LLMs rely on vast amounts of diverse data to understand language patterns, enhance accuracy, and generate coherent and appropriate responses.
- Trustworthiness: companies need to be able to build or apply LLMs in line with their values and ethical considerations.
SOURCE: appliedAI Initiative GmbH (2023) A Guide for Large Language Model Make-or-Buy Strategies.
Hugging Face and the open-source movement

The Art and Science of Prompt Engineering

Bastian Burger, TUM Venture Labs
Renowned in the world of AI and entrepreneurship, Bastian serves as the Director of the Venture Labs Software & AI. In this role, he blends the roles of educator, entrepreneur, and AI enthusiast. With a profound commitment to fostering knowledge and curiosity, Bastian is committed to making founders successful. His entrepreneurial journey, coupled with a deep academic fascination for artificial intelligence, is a testament to his drive in pursuit of innovation.

How SUMM AI simplifies complicated text

Nicholas Wolf, SUMM AI
Nicholas is a tech enthusiast who loves to try out new technology. This is why he started experimenting with NLP models in 2019, long before the ChatGPT hype. He studied Finance and Information Management here at TUM, where he met his two co-founders, Vanessa and Flora. After graduation, they founded SUMM AI to transform any complicated text into so-called "easy language" (Leichte Sprache). He is now the CTO at SUMM AI, responsible for the infrastructure as well as for further developing the AI and the full stack.
