Prompt Engineering for Everyone
Prompt Engineering Learning Objectives

This course aims to teach you how to talk to conversational AI effectively. We aim to move you from a naive approach to prompting to a more systematic one that can extract the maximum value from what AI offers. By the time you finish this course, you should be able to leverage GPT (Generative Pre-trained Transformer)-based AI systems to be more productive and effective regardless of your industry.

In this course, you will learn about the following:
- What Prompt Engineering is and why we care about it
- The limitations of naive prompting
- Common mistakes
- Optimizing your prompts to extract the maximum value from AI
- The common Persona pattern to improve your results
- Getting the AI model to interview you
- The Chain-of-Thought approach to prompting
- The Tree-of-Thought technique
- Advanced techniques such as the Nova System

Module 1: Introduction to Prompt Engineering
- Part 1: What is Prompt Engineering, and why do we care?
- Part 2: English as a new programming language
- Quizzes

Module 2: Getting Started with Prompt Engineering
- Part 1: Getting to know our GPT-based AI tool
- Part 2: The Naive Prompting Approach and the Persona Pattern
- Part 3: The Interview Pattern
- Quizzes

Module 3: The Chain-of-Thought Approach
- Part 1: The Chain-of-Thought Approach in Prompt Engineering
- Quizzes

Module 4: Advanced Techniques
- Part 1: The Tree-of-Thought Approach in Prompt Engineering
- Part 2: Controlling Verbosity and the Nova System
- Part 3: Getting to Know watsonx Prompt Lab
- Quizzes

Module 5: Final Project
- Course Summary
- Final Project (optional)

Module 1: Introduction to Prompt Engineering

What is Prompt Engineering and why do we care?

Introduction

In this course, we'll introduce the concept of Prompt Engineering and how it can help you leverage what AI offers. Before jumping in, we should define what Prompt Engineering is, why it matters, and briefly touch upon how we got to this point in the AI landscape.
If you have ever used a chatbot or virtual assistant online, you likely have mixed feelings about the experience. Most chatbots are built to reply with answers to a set of known queries programmatically. The AI might give us enough flexibility to phrase our messages differently, but they are essentially told what to say in reply to our questions. A good chatbot of this kind will still provide a pleasant experience for the user; however, there is little of a wow or magic factor to it. Someone using a good chatbot will not suddenly be amazed by its AI capabilities. Worse, poor chatbots (of which there are many in the wild) tend to offer a frustrating experience that makes you scream "AGENT!!!" in the chat after a message or two.

A first taste of AI

So, it's unsurprising that the world took serious notice when OpenAI's ChatGPT, Google Bard, watsonx, and similar AI chatbots became available. ChatGPT, for example, obtained a million users within five days of its launch in November 2022. For comparison, the fastest-growing apps in the world had previously taken several weeks to attract that many users. For the first time, conversational generative AI chatbots truly wowed us. The conversations were natural. The answers were plausible and valuable.

Behind the scenes, a lot of complex math and enormous amounts of data are used to train the LLMs (Large Language Models) that enable conversational AI. Thankfully, you don't need to concern yourself with the complex implementation details to take full advantage of what they offer. The experience is remarkably akin to engaging with a brilliant individual whose vast knowledge amounts to terabytes of data. People have begun to use such chatbots to become more productive, express themselves more clearly, quickly summarize large amounts of data, assist with decision-making, and, let's face it, even cheat on exams. LLMs are powerful tools that can change many industries and the world at large. So we have this fantastic tool.
How do we make the best of it? It all starts with the user input: our so-called prompt. A text box in which we enter some text is all we have to control this powerful instrument. So, our prompt will massively affect the quality of the results (the output). As they say: garbage in, garbage out. If we provide a poor prompt that doesn't give the AI much to work with, we'll not be taking advantage of the full capabilities of AI and all it has to offer to help us.

What is Prompt Engineering?

For example, let's ask "What is Prompt Engineering?" to one of the aforementioned LLM-based AIs. This is the result I got:

"Prompt engineering refers to the process of designing and refining prompts or instructions given to a language model, such as GPT-3.5, to generate desired outputs. It involves carefully crafting the input text provided to the model to elicit the desired response or behavior. Prompt engineering can be used to fine-tune the output of language models by specifying explicit instructions, adding context, or using specific formatting techniques to guide the model's generation. By tweaking the prompts, developers and users can influence the output to be more accurate, coherent, or aligned with a specific goal. Effective prompt engineering requires an understanding of the underlying language model and its capabilities. It involves experimenting with different prompts, iterating on them, and analyzing the model's responses to achieve the desired results. Prompt engineering is particularly important when using large language models like GPT-3.5, as their outputs can sometimes be unpredictable or require additional context to generate the desired output. Overall, prompt engineering is a crucial aspect of working with language models to ensure they produce accurate and meaningful responses that align with the user's intentions or requirements."

This AI thing might be pretty worthwhile.
In this course, you'll learn everything you need to know to optimize the results you get from such conversational AI, regardless of which specific LLM you decide to use.

Zero-Shot, One-Shot, and Few-Shot Prompting

Within the realm of Large Language Models (LLMs) like GPT-3.5, you might come across a few terms, namely "zero-shot," "one-shot," and "few-shot" prompting. These terminologies refer to the way models are instructed or prompted. In a "zero-shot" scenario, the model is given a task without prior examples; it must deduce what to do from the prompt and its existing training. In the next couple of labs, we will refer to naive or standard prompting a few times. This is the kind of prompting someone who has never used an LLM-based AI would employ: asking a question without providing any examples for the model. For example, when we asked, "What is Prompt Engineering?" our prompt was a zero-shot prompt.

On the other hand, "one-shot" provides the model with a single example, while "few-shot" offers several examples to help guide the model's output. These methods are instrumental when you're trying to instruct the model in a way that it might not have been explicitly trained on. By providing examples, users can align the model's thinking to a specific desired output or format. Chain-of-Thought (CoT), a more advanced prompting technique explored later in the course, is an example of one-shot prompting (or few-shot prompting, depending on how we phrase our prompt).

English as a new programming language

Over the last decade, learning to code has been all the rage, with Python being a popular programming language. If you're not a programmer, you might wonder why we have to use such programming languages instead of just plain English. Historically, the main issue has been one of ambiguity. English is a somewhat ambiguous language compared to programming languages, which make you tell the computer precisely what it needs to do.
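The zero-, one-, and few-shot styles described above differ only in the text we send to the model. Here is a minimal sketch of the idea; the `make_prompt` helper and the Q/A layout are illustrative assumptions, not a standard API:

```python
def make_prompt(task, examples=()):
    """Build a prompt from optional (question, answer) examples plus the task."""
    parts = [f"Q: {q}\nA: {a}" for q, a in examples]
    parts.append(f"Q: {task}\nA:")  # the model completes the final answer
    return "\n\n".join(parts)

# Zero-shot: the task alone, no examples.
zero_shot = make_prompt("Classify the sentiment of: 'I loved this movie.'")

# Few-shot: a couple of worked examples guide the model's output format.
few_shot = make_prompt(
    "Classify the sentiment of: 'I loved this movie.'",
    examples=[
        ("Classify the sentiment of: 'Terrible service.'", "negative"),
        ("Classify the sentiment of: 'What a great day!'", "positive"),
    ],
)
print(few_shot)
```

With one example it is a one-shot prompt; with several, few-shot. As we'll see later in the course, Chain-of-Thought prompts follow exactly this examples-then-question shape.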
For example, if I asked a human to draw a circle on a sheet of paper, they would have no problem doing so. Mind you, the circle will be flawed, but the person will typically draw it without further question or hesitation. The human involved in this exercise will not typically ask follow-up questions even though the instructions are rather generic when you think about it. Where should the circle be placed on the paper? Top left? Center? Bottom right? How big should the circumference be? How thick should the lines be? The human, with their human intelligence, would be able to make assumptions and just draw the circle. Traditionally, the computer hasn't been able to make such assumptions, so it needs to be instructed very specifically on what to do. The following snippet of code draws a circle in Python using the Turtle graphics module (do not worry if you don't understand this code; you don't need to):

```python
import turtle

def draw_circle(radius, line_color='black', fill_color=None, line_thickness=1, position=(0, 0)):
    turtle.penup()
    turtle.goto(position)
    turtle.pendown()
    turtle.color(line_color)
    turtle.pensize(line_thickness)
    if fill_color:
        turtle.fillcolor(fill_color)
        turtle.begin_fill()
    turtle.circle(radius)
    if fill_color:
        turtle.end_fill()

# Main program
if __name__ == "__main__":
    # Set up the turtle speed (1 to 10)
    turtle.speed(1)
    # Get the circle parameters from the user
    radius = int(input("Enter the radius of the circle: "))
    line_color = input("Enter the line color (default is black): ") or 'black'
    fill_color = input("Enter the fill color (leave blank for no fill): ")
    line_thickness = int(input("Enter the line thickness: "))
    x = int(input("Enter the x-coordinate of the circle's position: "))
    y = int(input("Enter the y-coordinate of the circle's position: "))
    # Set the circle's position
    position = (x, y)
    # Draw the circle
    draw_circle(radius, line_color, fill_color, line_thickness, position)
    # Keep the window open until it's closed manually
    turtle.done()
```

You will notice how we had to tell the computer which graphics library to use, the radius in pixels, the line thickness, whether we wanted the circle filled, with which color, and so on. The library has some defaults (specified by the library's programmer) that could allow us to write a much shorter program. However, the computer still had to be told by the library author about such assumptions. If the user were to input "3px" or "five" in response to the question about line thickness, the program would crash, as it expects a number written in digits. The Python code is then "translated" into binary code that the machine understands and executes. Prior to the advent of AI, English (or any other human language of choice) was simply too ambiguous and broad to tell the computer what to do for us.

A practical example

Imagine you have an extensive list of scores you need to sort:

Ethan (93), Olivia (67), Benjamin (42), Emma (94), Noah (76), Sophia (89), Lucas (51), Mia (62), Alexander (80), Isabella (95), Henry (34), Ava (71), James (87), Charlotte (58), Antonio (96), Harper (78), Matthew (49), Amelia (92), Samuel (64), Evelyn (83), Alicia (100), Abigail (88), David (39), Emily (70), Oliver (94), Elizabeth (60), William (81), Sofia (68), Michael (77), Grace (93)

Now, you could open a spreadsheet program such as Excel, create two columns (one called Name, the other called Score), manually copy the names into the first column, do the same for the scores, then select the two columns and hit the sort function. But it's a tedious and slow process, despite spreadsheets being valuable tools. A programmer might write a script to do this for them. The following is one of the quickest implementations of such a program in Python.
(Again, this is for illustrative purposes only; you do not need to understand the actual code within.)

```python
scores_names = [
    "Ethan (93)", "Olivia (67)", "Benjamin (42)", "Emma (94)", "Noah (76)",
    "Sophia (89)", "Lucas (51)", "Mia (62)", "Alexander (80)", "Isabella (95)",
    "Henry (34)", "Ava (71)", "James (87)", "Charlotte (58)", "Antonio (96)",
    "Harper (78)", "Matthew (49)", "Amelia (92)", "Samuel (64)", "Evelyn (83)",
    "Alicia (100)", "Abigail (88)", "David (39)", "Emily (70)", "Oliver (94)",
    "Elizabeth (60)", "William (81)", "Sofia (68)", "Michael (77)", "Grace (93)",
]

# Sort the "Name (score)" entries by their numeric score, in descending order
sorted_scores_names = sorted(
    scores_names,
    key=lambda x: int(x.split(" ")[-1][1:-1]),
    reverse=True,
)

# Print the sorted "Name (score)" pairs on one line
print(*sorted_scores_names, sep=", ")
```

Unfortunately, that requires the ability to program and, in less trivial examples, quite a bit of time to write such programs. Thanks to LLMs, we can ask in plain English to have the list sorted for us.
I tried the following with the prompt tool (using the latest LLM) that you'll use in this course:

"Sort the following list: Ethan (93), Olivia (67), Benjamin (42), Emma (94), Noah (76), Sophia (89), Lucas (51), Mia (62), Alexander (80), Isabella (95), Henry (34), Ava (71), James (87), Charlotte (58), Antonio (96), Harper (78), Matthew (49), Amelia (92), Samuel (64), Evelyn (83), Alicia (100), Abigail (88), David (39), Emily (70), Oliver (94), Elizabeth (60), William (81), Sofia (68), Michael (77), Grace (93)"

I got:

"Here's the list sorted in descending order by their accompanying values: Alicia (100), Antonio (96), Isabella (95), Emma (94), Oliver (94), Ethan (93), Grace (93), Amelia (92), Sophia (89), Abigail (88), James (87), Evelyn (83), William (81), Alexander (80), Harper (78), Michael (77), Noah (76), Ava (71), Emily (70), Sofia (68), Olivia (67), Samuel (64), Mia (62), Elizabeth (60), Charlotte (58), Lucas (51), Matthew (49), Benjamin (42), David (39), Henry (34)."

Even if you are a programmer, this is a time saver. Not to mention that you can ask for help with code, too. In fact, every Python script you see on this page was generated by an LLM and minimally adapted by me. This might be a trivial example, but the power of LLMs is evident and can hardly be overstated. For the first time, we can use plain English to instruct the computer to achieve far more than was possible before. The trick is learning how to "program in English." That's Prompt Engineering, and by the end of this course, you'll be well-versed in it and capable of taking full advantage of AI.

Limitations

We are still not at the point of so-called Artificial General Intelligence (AGI), even using the latest, largest models. LLMs are seemingly intelligent, but they do not yet display genuine intelligence in the broad sense.
This is not to say that they are not helpful or even revolutionary, but you can't reasonably expect them yet to learn independently, be self-aware, handle very complex reasoning, operate outside of the framework the model was trained on, and so on. Sometimes the answer will simply be wrong, because the AI was trained on a large amount of human-generated data. Humans are fallible, and so is the data they produce. So you'll run into the occasional wrong answer or display of bias, which you need to identify and manually address. Furthermore, traditional programming offers more precision and control for many tasks. This is especially true for large, complex systems where nuance and specifics are crucial. In short, it's a work in progress, and there are limitations. However, AI is still an incredibly useful tool that we can leverage today simply by telling the machine what we want in plain English (and, to a lesser extent, Italian, French, Hindi, and other human languages).

Module 1 Quiz

Question 1
Can computers inherently understand ambiguous instructions like humans do?
o Yes
o No

Question 2
Why do we historically use programming languages instead of plain English to instruct computers?
o English is easier for computers to understand.
o English is less ambiguous than programming languages.
o Programming languages allow for faster execution of tasks.
o English is more ambiguous than programming languages for providing specific instructions.

Question 3
What does the term 'zero-shot' prompting mean in the context of Large Language Models (LLMs)?
o The model is provided with multiple examples before making a prediction.
o The model makes a prediction without any prior examples.
o The model is trained with zero data.
o The model takes zero seconds to produce an answer.

Question 4
Naive or standard prompts typically use few-shot prompting.
o True
o False

Question 5
Is the data that AI models like LLMs are trained on always flawless?
o Yes, corporations spend billions ensuring such is the case.
o No, despite best efforts, we can't escape flawed and biased information.

Module 2: Getting Started with Prompt Engineering

The Naive Prompting Approach and the Persona Pattern

By now, you should have a clear understanding of the opportunity and value the new wave of conversational AI offers us. As mentioned before, you will use English as a quasi-programming language to instruct the AI and extract as much value from it as possible. To start, we will take the so-called standard or naive approach: we will query the AI in the simplest way possible, like most people would. The result will be OK, but we will see how a simple adjustment can radically improve it. We will then tackle more complex approaches that produce even better results.

The Naive Approach

Imagine wanting to get in shape. We could ask our AI for some recommendations. A naive question might be as follows:

"What's the best way to get fit?"

Try it in the prompt. This is my result (yours might vary, even if you use the same model):

"The best way to get fit depends on individual preferences and goals. However, a combination of regular exercise, a balanced diet, and sufficient rest is generally considered effective. It is important to find activities you enjoy, such as running, swimming, or weightlifting, and incorporate them into your routine. Additionally, focusing on whole, nutritious foods and staying hydrated can support your fitness journey. Consulting with a healthcare professional or a certified personal trainer can also provide personalized guidance."

This is not bad by any means. It is solid advice. However, it is very generic. We can fetch more specifics by asking follow-up questions (a good idea in general), but we can also get better results from the beginning by adopting the Persona Pattern.

The same query, using a persona

Let's ask the same question, but this time, we'll ask the AI to act as a fitness expert.
"Acting as a fitness expert, tell me the best way to get fit."

Try this one out yourself, as well. This is the output I received:

"As a fitness expert, I would recommend a combination of cardiovascular exercise, strength training, and flexibility training for optimal fitness.
1. For cardiovascular exercise, activities like running, cycling, swimming, or high-intensity interval training (HIIT) can help improve cardiovascular health, burn calories, and increase endurance. Aim for at least 150 minutes of moderate-intensity aerobic exercise or 75 minutes of vigorous-intensity aerobic exercise per week.
2. For strength training, incorporating exercises that target all major muscle groups, such as squats, deadlifts, push-ups, and rows, can help build lean muscle mass, increase strength, and improve overall body composition. Aim for at least two to three days of strength training per week, allowing for rest days in between.
3. Flexibility training, such as stretching or yoga, can improve joint mobility, prevent injuries, and enhance overall movement quality. Aim for at least two to three days of flexibility training per week, focusing on major muscle groups.
4. In addition to exercise, maintaining a balanced and nutritious diet is crucial for reaching your fitness goals. Focus on consuming whole foods, including lean proteins, fruits, vegetables, whole grains, and healthy fats. Stay hydrated by drinking plenty of water throughout the day.
5. Lastly, prioritize rest and recovery. Your body needs time to repair and rebuild after exercise. Aim for at least 7-9 hours of quality sleep each night and allow for rest days in your exercise routine.
Remember, consistency is key. It's important to find activities you enjoy and create a sustainable fitness routine that you can stick to in the long term."

The advice is still a little generic, but it is a vast improvement over the naive approach.
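Notice that the persona is nothing more than a textual prefix to the naive query. As a minimal sketch (the `with_persona` helper is an illustrative assumption, not part of any library), the pattern can be captured like this:

```python
def with_persona(query, persona=None):
    """Optionally prefix a query with a Persona Pattern instruction.

    `persona` is any role description, e.g. "a fitness expert" or
    "marketing expert Seth Godin" (illustrative helper, not a library API).
    """
    if persona is None:
        return query  # naive prompt: the query as-is
    # Lower-case the first letter so the query reads naturally after the prefix.
    return f"Acting as {persona}, {query[0].lower()}{query[1:]}"

naive = with_persona("Tell me the best way to get fit.")
expert = with_persona("Tell me the best way to get fit.", persona="a fitness expert")
print(naive)   # Tell me the best way to get fit.
print(expert)  # Acting as a fitness expert, tell me the best way to get fit.
```

The same few extra words can be typed by hand, of course; the point is that the persona is a reusable transformation you can apply to any query.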
We can also augment the basic Persona pattern with specific instructions about the format or specifics of the answers in the Prompt Instructions. For example, we could instruct the bot to use bullet points for its replies or to format responses in tables whenever possible.

Taking it a step further

Let us take it a step further and see how using a persona affects creating a workout plan. In the Prompt Instructions, let us add the following:

"You'll act as a fitness expert who's current with the latest research data and provide very detailed step-by-step instructions in reply to my queries."

You'll see that we structured this in three parts:
1. We told the LLM the persona it should act as;
2. We specified a qualifier about being research-based;
3. We told the AI the response format we wanted.

Now we can enter the actual query in our prompt:

"Create a gym workout program to lose weight and build strength for an out-of-shape beginner."

You can try it yourself; I got detailed instructions and a decent training plan! The only fundamental limitation here is that the program we received is not custom-made for us. It is for a generic beginner who is out of shape. Are they a man or a woman? Young or 82? Can they do any exercise, or are there mobility issues? All of that gets ignored.

When the Persona is someone famous

We can also instruct the AI to channel a famous person to get a more specific response style. In the following example, we ask the AI to list 10 article titles to promote our hypothetical dog training book. We then ask it to act as marketing expert Seth Godin (known for his out-of-the-box marketing mind), and we get significantly more exciting suggestions.

Here is the naive prompt:

"Give me a list of 10 article titles to promote my new book about dog training."

Which yields the following result:

"Certainly! Here are some potential article titles to help promote your new book about dog training:
1. "Unlock the Secrets to Effective Dog Training with Our Comprehensive New Guide"
2. "Avoid the Top 5 Mistakes Dog Owners Make: Insights from Our Latest Book"
3. "Harness the Power of Positive Reinforcement: Transform Your Dog's Behavior with Our New Techniques"
4. "Training Your Puppy Made Easy: Essential Tips from Our New Book for a Well-Behaved Companion"
5. "Solving Common Dog Behavior Problems: Expert Insights from Our New Publication"
6. "The Science Behind Dog Training: Understanding Canine Psychology with Our Latest Book"
7. "From Leash Pulling to Perfect Walking: Embark on a Training Journey with Our New Guide"
8. "Socialization Strategies: Preparing Your Dog for the Real World with Tips from Our New Book"
9. "The Benefits of Consistency in Dog Training: A Roadmap to Success from Our Latest Release"
10. "Creating a Strong Bond with Your Dog Through Training Techniques Featured in Our New Book"
Remember, the key to a compelling article title is to pique curiosity while also providing a clear idea of the article's content. Tailor these suggestions as needed to best fit the unique aspects of your book."

Not bad but, again, quite generic and bland. Here is the prompt using the style of the aforementioned famous marketer:

"Acting as marketing expert Seth Godin, give me a list of 10 article titles to promote my new book about dog training."

And the results:

"Certainly! Here are some potential article titles to help promote your new book about dog training:
1. "Unlocking the Secrets: Transform Your Dog's Behavior with Expert Training Techniques"
2. "The Ultimate Guide to Positive Reinforcement: Why It Works in Dog Training"
3. "From Puppy to Pro: Essential Training Tips for Every Dog Owner"
4. "Beyond Sit and Stay: Advanced Dog Training Strategies for Lifelong Success"
5. "Common Dog Training Myths Debunked: What Every Owner Should Know"
6. "Building a Strong Bond: How Effective Training Enhances Your Relationship with Your Dog"
7. "The Science of Dog Training: Understanding Canine Behavior for Better Results"
8. "Training for Life: How to Prepare Your Dog for Every Situation"
9. "Top 10 Mistakes Dog Owners Make and How to Avoid Them"
10. "Your Dog's Best Friend: How My New Book Can Change Your Training Approach Forever"
These titles aim to engage potential readers by highlighting the benefits of your book, addressing common concerns, and showcasing your expertise in dog training."

Much more compelling, and all it took was a few extra words in our prompt. That's the power of Prompt Engineering!

Exercises:
1. Try a few queries of your own. Start with a generic, naive prompt and then improve it by adding a persona.
2. Ask the LLM to give you two answers, leveraging two personas who hold different viewpoints. For example, enter the following in the Prompt Instructions: "You'll act as a plant-based nutrition expert and a low-carb, high-fat (keto) nutrition expert and provide one answer from each of them for every question I ask." Then, ask a question such as: "Give me an example of a healthy breakfast."

The Interview Pattern

In the previous sections, we saw the limitations of the naive approach to prompting and how the Persona Pattern can improve results. Still, when we asked for a training program, the results were for a generic out-of-shape person. They were not specific to us and therefore not as useful as they could be. We can employ the Interview Pattern along with the Persona Pattern to optimize these results. Let's start with the fitness program scenario, and then consolidate the idea with a new example.

In the Prompt Instructions, enter the following:

“You will act as a fitness expert who is current with the latest research data and provide very detailed step-by-step instructions in reply to my queries.
You will interview me, asking me all the relevant questions necessary for you to generate the best possible answer to my queries.”

Then, in the prompt itself:

"Create a gym workout program to lose weight and build strength."

In my case, the AI responded as follows:

"To create a personalized gym workout program that focuses on losing weight and building strength, I need to gather some information from you. Here are a few questions to help me tailor the program to your needs:
1. Current Fitness Level: What is your current fitness level? (Beginner, Intermediate, Advanced)
2. Workout Frequency: How many days per week can you commit to working out?
3. Available Equipment: What equipment do you have access to? (e.g., free weights, machines, resistance bands, etc.)
4. Time Commitment: How much time can you dedicate to each workout session?
5. Specific Goals: Do you have any specific strength goals or areas you want to focus on? (e.g., upper body, lower body, core)
6. Injury History: Do you have any injuries or medical conditions that I should be aware of?
7. Diet and Nutrition: Are you following any specific diet or nutritional plan?
8. Preferences: Do you prefer certain types of exercises or activities (e.g., cardio, weightlifting, HIIT)?
Once I have this information, I can create a more effective and personalized workout program for you!"

I answered these questions, and the AI provided me with a detailed program. Try it yourself to see what kind of program you get. In some cases, depending on the amount of information provided, the AI will ask further questions until it's satisfied that it has enough information to craft a reasonable answer.

Creating a blog post with the Interview Pattern

Let's see how else we can use the Interview Pattern to our advantage. Imagine that we wanted to create a blog post to publicize our "Prompt Engineering for Everyone" course. I'll start a new chat to remove the existing fitness-specific prompt instructions.
At first, we'll try the naive approach:

"Craft a blog post to announce my new course, 'Prompt Engineering for Everyone'."

Try it for yourself to see the result. It's a good post, but it's very generic because the AI knows nothing about our specific course other than its title.

Next, let's start a new chat and add the following prompt instructions:

"You will act as an SEO and content marketing expert. You will interview me, asking me (one at a time) all the relevant questions necessary for you to generate the best possible answer to my queries. Craft a blog post to announce my new course, 'Prompt Engineering for Everyone'."

This time around, the AI asks me:

"Great! To craft an effective blog post announcing your new course, 'Prompt Engineering for Everyone,' I need to gather some details. First, can you tell me about the target audience for your course? Who do you envision as the ideal participants?"

These follow-up questions allow us to create a more precise blog post that draws on information specific to my course. The quality of your input still matters: the more information you provide to the AI in answer to its questions, the better the blog post will be. Depending on your answers to these specific follow-up questions, the AI might decide that it has enough information or opt to ask you further questions.

Another alternative is to give the AI the following instruction in the Prompt Instructions:

"Ask me a series of questions, one by one, to gather all the information you need to give a proper response."

Any variation along those lines will do, so you do not need to remember the exact phrasing, and you can experiment with your own fine-tuned prompt instructions. The critical part is that you understand the concept of having the AI interview you so that you get much more customized results back. It's yet another way to obtain better, more valuable results.

Tips
1.
Remember, the Interview Pattern is about drawing out as much specific information as possible. Provide high-quality answers to the questions the LLM asks you in order to obtain better responses.
2. Combining the Persona Pattern and the Interview Pattern can lead to richer, more detailed, and personalized results.
3. Don't hesitate to experiment with different instructions. Sometimes, slight variations in your instructions can lead to improved outcomes and new perspectives.

Now, get started yourself! Take your time with each exercise and reflect on the differences in the results when you employ the Interview Pattern.

Exercises
1. Combine the Persona Pattern and the Interview Pattern to improve the results for this prompt: "Suggest a travel itinerary for my next vacation."
2. Do the same for the query: "Give me a recipe for dinner tonight."
3. Then try again with this prompt: "Suggest a gift for my friend."

Module 2 Quizzes

Question 1
The Naive Approach to prompting the AI often results in overly generic and broad responses.
o True
o False

Question 2
Is the Persona Pattern used to make the AI adopt a specific character or identity for more customized results?
o Yes
o No

Question 3
Which of the following best describes the "Interview Pattern"?
o The AI asking the user about its training data.
o The AI interviewing the user to gather enough specific details for a customized answer.
o The user providing the AI with all details at once, without the AI asking any questions.
o The AI asking random questions regardless of the topic.

Question 4
Why would one combine the Persona Pattern with the Interview Pattern?
o To get more entertaining replies from the AI.
o To make the AI's responses impersonal.
o To get both the viewpoint of an expert character and a detailed answer specific to us.
o There's no practical reason to combine them.
Question 5
When requesting the AI to craft a blog post for the "Prompt Engineering for Everyone" course using the Interview Pattern, what did the AI first ask for?
o The course's price and duration.
o Key information about the course, such as target audience and unique selling points.
o The course's difficulty level.
o User reviews and feedback about the course.
Module 3: The Chain-of-Thought Approach
The Chain-of-Thought approach in Prompt Engineering
In this module, we will discuss the Chain-of-Thought approach to prompting. The Chain-of-Thought (CoT) methodology significantly bolsters the cognitive performance of AI models by segmenting complex tasks into more manageable steps. By adopting this prompting strategy, AI models can demonstrate heightened cognitive abilities and offer a deeper understanding of their reasoning processes. This approach is an example of prompt-based learning: it requires feeding the model questions and their corresponding solutions before posing related follow-up questions to it. In other words, our CoT prompt teaches the model to reason about the problem and to mimic the same reasoning when responding to further queries.
When the AI reasoning goes wrong
Let us try this in practice, using a standard/naive prompt for the following reasoning problem.
“An Italian menu has 5 items priced as follows: - Prosciutto $9.99 - Pecorino $12.99 - Calamari $13.99 - Bruschetta $4.99 - Carpaccio $14.99 Assuming each food is equally filling, spend $30 for a group of people by maximizing satiety.”
Now, the correct answer is six orders of the cheapest item, since each item is equally satiating. When we feed this to our LLM, however, its reasoning goes off the rails. In my case, I got a line of reasoning that ultimately suggested the following:
“To maximize satiety with a budget of $30 from the given Italian menu, you should aim to select the items that provide the most food for the cost. Here's one possible combination: 1. Bruschetta: $4.99 2.
Prosciutto: $9.99 3. Pecorino: $12.99 Total: $4.99 + $9.99 + $12.99 = $27.97” This is incorrect, as the optimal choice would be to order six servings of Bruschetta at $4.99 each.
Chain-of-Thought to the rescue
Let's use the Chain-of-Thought approach to improve the AI's reasoning. Our prompt will include this question, an accurate answer giving context and reasoning to the LLM, and then a different question that can be answered using the same line of reasoning. Let's see if it works.
“Q: An Italian menu has 5 items priced as follows: - Prosciutto $9.99 - Pecorino $12.99 - Calamari $13.99 - Bruschetta $4.99 - Carpaccio $14.99 Assuming each food is equally filling, spend $30 for a group of people by maximizing satiety. A: Since each food is equally filling, maximizing our budget will require us to favor items that have a low cost-per-satiety ratio. The ratio of each food, sorted by lowest to highest, is as follows: - Bruschetta: $4.99 - Prosciutto: $9.99 - Pecorino: $12.99 - Calamari: $13.99 - Carpaccio: $14.99 If we are allowed to have multiple orders of the same item, the simplest answer is doing an integer division of our $30 by the item with the lowest cost per satiety ratio, which is Bruschetta. So we get $30 div $4.99 (the cost of Bruschetta) which is 6. So the answer is an order of 6 Bruschette. Q: A store sells 4 types of aquarium fish at the following prices: - Guppies: $3.99 - Goldfish: $1.99 - Betta: $5.99 - Angelfish: $8.99 Spend $20 to maximize the total amount of fish.”
We can place all this directly in our prompt, or split it up, putting the Chain-of-Thought training in the Prompt Instructions and the actual question in the prompt. Try it yourself, but the answer I received was as follows:
“To maximize the total number of fish you can buy with a budget of $20, you'll want to focus on the cheapest options first.
Here's the breakdown of the prices: Goldfish: $1.99 Guppies: $3.99 Betta: $5.99 Angelfish: $8.99 The most economical choice is to buy as many Goldfish as possible since they are the least expensive. Calculating how many Goldfish you can buy with $20: 1. Cost of one Goldfish: $1.99 2. Maximum number of Goldfish: $20 / $1.99 ≈ 10.05 Since you can only buy whole fish, you can purchase 10 Goldfish. Calculating the total cost for 10 Goldfish: Total cost for 10 Goldfish = 10 * $1.99 = $19.90 This leaves you with $0.10, which is not enough to purchase any additional fish. Summary: Buy 10 Goldfish for a total cost of $19.90 to maximize the number of fish you can get with your $20 budget.” Perfect! Exactly what we wanted. The reply offers the correct answer and a proper explanation modeled after our example. Sometimes, the test question we ask might be more complex than the one we give in our example, and the AI, depending on the model we are using, might not always do a great job. Consider this prompt (assume the same Prompt Instructions as before about the Italian menu): “A store sells three box types: - Small, 10L, $1.99 - Medium, 22L, $4.99 - Large, 38L, $5.99 Spend $20 and maximize volume.” The response I got is this: “To maximize the volume of boxes you can buy with a budget of $20, you'll want to focus on the cost per liter of each box type. Here's the breakdown of the prices and volumes: Small Box: 10L for $1.99 (Cost per liter: $0.199) Medium Box: 22L for $4.99 (Cost per liter: $0.227) Large Box: 38L for $5.99 (Cost per liter: $0.158) The Large Box offers the best value in terms of volume per dollar spent. Now let's calculate how many of each box type you can buy without exceeding $20. 1. Buying Large Boxes: o Cost of one Large Box: $5.99 o Maximum number of Large Boxes: $20 / $5.99 ≈ 3.34 o Since you can only buy whole boxes, you can purchase 3 Large Boxes.
o Total cost for 3 Large Boxes: 3 * $5.99 = $17.97 o Total volume: 3 * 38L = 114L o Remaining budget: $20 - $17.97 = $2.03 (not enough to buy any additional boxes). 2. Buying Medium Boxes: o Cost of one Medium Box: $4.99 o Maximum number of Medium Boxes: $20 / $4.99 ≈ 4.01 o You can purchase 4 Medium Boxes. o Total cost for 4 Medium Boxes: 4 * $4.99 = $19.96 o Total volume: 4 * 22L = 88L o Remaining budget: $20 - $19.96 = $0.04 (not enough for another box). 3. Buying Small Boxes: o Cost of one Small Box: $1.99 o Maximum number of Small Boxes: $20 / $1.99 ≈ 10.05 o You can purchase 10 Small Boxes. o Total cost for 10 Small Boxes: 10 * $1.99 = $19.90 o Total volume: 10 * 10L = 100L o Remaining budget: $20 - $19.90 = $0.10 (not enough for another box). Summary: The best option for maximizing volume is to buy 3 Large Boxes for a total cost of $17.97, giving you 114 liters of volume. Alternatively, 4 Medium Boxes would give you 88 liters, and 10 Small Boxes would give you 100 liters. Thus, the optimal choice is 3 Large Boxes for maximum volume.”
This isn't terrible since it's mostly correct, but it's missing the fact that we have more room in our budget. We can hint at that by following up with the AI: “You have money left over. Spend it.” Which gave me: To utilize the remaining budget effectively, let's recap the leftover amounts from the previous calculations: 1. From 3 Large Boxes: You have $2.03 left over after spending $17.97. 2. From 4 Medium Boxes: You have $0.04 left over after spending $19.96. 3. From 10 Small Boxes: You have $0.10 left over after spending $19.90. Since the goal is to maximize the use of your remaining budget, let's see how to spend the leftover amounts: For the Large Boxes ($2.03 remaining): You cannot buy any additional Large or Medium Boxes since they exceed the budget. You can buy 1 Small Box for $1.99, leaving you with $0.04 remaining. New Total: 3 Large Boxes (114L) + 1 Small Box (10L) = 124 liters total volume.
For the Medium Boxes ($0.04 remaining): You cannot buy anything additional with $0.04. Total Remains: 4 Medium Boxes (88L) and no additional purchases. For the Small Boxes ($0.10 remaining): You cannot buy anything additional with $0.10. Total Remains: 10 Small Boxes (100L) and no additional purchases. Final Summary: If you started with 3 Large Boxes: Buy 1 Small Box for a total of 124 liters. If you started with 4 Medium Boxes: Total remains 88 liters. If you started with 10 Small Boxes: Total remains 100 liters. The best option to maximize volume with the leftover money is to buy 1 Small Box if you initially went with the Large Boxes, bringing your total to 124 liters. This is the right answer but it's not the whole answer. We can get that by simply asking for it: Give me the entire answer. Which gave me the correct answer: Certainly! Here's the complete breakdown including prices for each option while maximizing volume within a $20 budget: Box Types and Prices: Small Box: 10L for $1.99 (Cost per liter: $0.199) Medium Box: 22L for $4.99 (Cost per liter: $0.227) Large Box: 38L for $5.99 (Cost per liter: $0.158) Volume Maximization Strategy: The Large Box provides the best value in terms of volume per dollar spent. Here’s how many of each box type you can buy without exceeding $20. 1. Buying Large Boxes: Cost of one Large Box: $5.99 Maximum number of Large Boxes: $20 / $5.99 ≈ 3.34 (you can buy 3) Total cost for 3 Large Boxes: 3 * $5.99 = $17.97 Total volume: 3 * 38L = 114L Remaining budget: $20 - $17.97 = $2.03 2. Buying Medium Boxes: Cost of one Medium Box: $4.99 Maximum number of Medium Boxes: $20 / $4.99 ≈ 4.01 (you can buy 4) Total cost for 4 Medium Boxes: 4 * $4.99 = $19.96 Total volume: 4 * 22L = 88L Remaining budget: $20 - $19.96 = $0.04 3. 
Buying Small Boxes: Cost of one Small Box: $1.99 Maximum number of Small Boxes: $20 / $1.99 ≈ 10.05 (you can buy 10) Total cost for 10 Small Boxes: 10 * $1.99 = $19.90 Total volume: 10 * 10L = 100L Remaining budget: $20 - $19.90 = $0.10 Summary of Options: 3 Large Boxes: o Total Cost: $17.97 o Total Volume: 114L o Remaining Budget: $2.03 4 Medium Boxes: o Total Cost: $19.96 o Total Volume: 88L o Remaining Budget: $0.04 10 Small Boxes: o Total Cost: $19.90 o Total Volume: 100L o Remaining Budget: $0.10 Spending the Remaining Budget: 1. If you buy 3 Large Boxes: Remaining Budget: $2.03 You can buy 1 Small Box for $1.99, leaving you with $0.04. New Total: o Total Cost: $17.97 + $1.99 = $19.96 o Total Volume: 114L + 10L = 124L. 2. If you buy 4 Medium Boxes: Remaining Budget: $0.04 (not enough to buy anything). Total Remains: 4 Medium Boxes (88L) and no additional purchases. 3. If you buy 10 Small Boxes: Remaining Budget: $0.10 (not enough to buy anything). Total Remains: 10 Small Boxes (100L) and no additional purchases. Final Recommendation: The best option to maximize volume is to buy 3 Large Boxes and 1 Small Box, resulting in a total of 124 liters at a total cost of $19.96. If you choose to go with 4 Medium Boxes or 10 Small Boxes, you will end up with 88 liters and 100 liters, respectively, without the ability to spend any remaining budget. Thus, the optimal strategy is to purchase 3 Large Boxes for $17.97 and 1 Small Box for $1.99, resulting in 124 liters of total volume. It's often helpful to make suggestions, make small corrections, and ask to reformat the answer in a way that is useful to us. Zero-Shot Chain-of-Thought Prompting Kojima et al. have found a simple, effective approach to improve the results we obtain from our model. This approach is often labeled as a Zero-Shot CoT. 
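Mechanically, the technique amounts to appending a short reasoning cue to an otherwise unchanged prompt. A minimal sketch (the helper function is ours, not part of any library; the cue is one of the phrases reported by Kojima et al.):

```python
# Zero-Shot CoT: append a reasoning cue to an otherwise unchanged prompt.
STEP_BY_STEP_CUE = "Let's think step by step."

def zero_shot_cot(prompt: str, cue: str = STEP_BY_STEP_CUE) -> str:
    """Return the prompt with the step-by-step cue appended on its own line."""
    return f"{prompt.rstrip()}\n\n{cue}"

print(zero_shot_cot("Why is the sky blue?"))
```

The resulting string is what you would paste into the chat; everything else about the prompt stays the same.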
There are a few words that, when added to the prompt, are likely to solicit better answers, since they invite the AI to do step-by-step reasoning, much like a human would when working toward a resolution. According to the researchers, two effective phrases are:
Let's think step by step.
And:
Let's work this out in a step by step way to be sure we have the right answer.
These words are helpful but not magic, so feel free to use them to improve your results; they are usually best used along with the other techniques discussed in this course. For example, tucking these words at the end of our original standard prompt still generates an incorrect answer with the GPT 3.5 model available at the time of writing. In other words, the traditional Chain-of-Thought approach illustrated above, in which we use our prompt to “teach” the model the desired outcome, is still superior. Still, any time a few extra words yield better results, the option is worth considering, especially since this prompting technique solicits longer and more elaborate answers, which is helpful for generating blog posts, essays, guides, etc.
Chain-of-Thought to explore subjects
Chain-of-Thought can be used in various ways to improve the chatbot's reasoning, especially in areas where it is weak. However, a more valuable use is exploring subjects in depth. Instead of asking a generic question, we can break it down into steps we want the model to consider to develop a much richer and more valuable answer. Let's see a practical example that doesn't necessarily use the question-and-answer template format. For example, let's ask the model to tell us about Space Exploration:
What is space exploration?
The answer I received is reasonable:
Space exploration is the investigation and study of outer space through the use of space technology, including spacecraft, satellites, and telescopes.
It involves the exploration of celestial bodies, such as planets, moons, asteroids, and comets, as well as the study of cosmic phenomena like stars, galaxies, and the universe’s structure. Space exploration can be categorized into two main types: Robotic Exploration: This involves sending unmanned spacecraft to gather data and conduct experiments. Notable examples include NASA's Mars rovers, the Voyager probes, and the Hubble Space Telescope. Human Exploration: This includes missions with astronauts aboard, such as the Apollo moon landings, the International Space Station (ISS) missions, and future plans for crewed missions to Mars. Recent advancements in space exploration include the Artemis program, which aims to return humans to the Moon by the mid-2020s, and the ongoing Mars missions, including the Perseverance rover and the Ingenuity helicopter, which are exploring the Martian surface and atmosphere. Additionally, private companies like SpaceX and Blue Origin are increasingly involved in space exploration, developing technologies for commercial space travel and potential colonization efforts. Now, let's try a Chain-of-Thought approach. Start a new chat and place the following in the Prompt Instructions: Consider and include the following elements in your answer: - Historical Space Missions - Moon landing and Human Achievement - Moon landing and impact on the Cold War - Satellite technology and its impact on humanity - Mars colonization possibilities - Search for extraterrestrial life - Space tourism prospects - Space debris and environmental impact - International Space Station collaboration - Advancements in rocket technology - Interstellar travel challenges - Private companies and Billionaires involvement controversy Let us think step by step. We are giving the model a lot of items to consider in its answer and tucking in Kojima et al.'s suggestion of inviting the model to take a step-by-step approach for good measure. 
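If you keep facet lists like this for several topics, the instruction block can be generated from plain data. A sketch (the function and variable names are ours):

```python
def facet_instructions(facets: list[str]) -> str:
    """Render Chain-of-Thought Prompt Instructions from a list of facets,
    closing with the zero-shot step-by-step cue, mirroring the block above."""
    lines = ["Consider and include the following elements in your answer:"]
    lines += [f"- {facet}" for facet in facets]
    lines.append("Let us think step by step.")
    return "\n".join(lines)

space_facets = [
    "Historical Space Missions",
    "Satellite technology and its impact on humanity",
    "Mars colonization possibilities",
    # ...the remaining facets from the list above
]
print(facet_instructions(space_facets))
```

Swapping in a different facet list ("Ocean Conservation", say) reuses the same scaffolding without retyping the boilerplate.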
For the prompt itself, let's try again: What is space exploration? The output I got is as follows: Space exploration is the human endeavor to explore and understand the vast expanse of outer space beyond Earth's atmosphere. It involves sending spacecraft, satellites, and, in some cases, humans beyond our planet to gather information about celestial bodies, study the cosmos, and develop technologies that can be used for various scientific, technological, and commercial purposes. Space exploration has evolved over time, leading to numerous historical space missions and significant achievements that have shaped our understanding of the universe and our place in it. Historical Space Missions: Historical space missions include milestones like the launch of the first artificial satellite, Sputnik 1, by the Soviet Union in 1957, and the subsequent space race between the United States and the Soviet Union. Notably, the Apollo 11 mission in 1969 marked a monumental achievement with the first human landing on the Moon. Neil Armstrong and Buzz Aldrin became the first humans to set foot on another celestial body, showcasing the remarkable capabilities of human ingenuity and engineering. Moon Landing and Human Achievement: The Moon landing was a testament to human determination, technological prowess, and the spirit of exploration. It demonstrated our ability to overcome immense challenges and achieve what was once thought impossible. The achievement of landing humans on the Moon inspired generations and symbolized humanity's thirst for exploration and discovery. Moon Landing and Impact on the Cold War: The Moon landing also had political significance during the Cold War era. It was a part of the competition between the United States and the Soviet Union for technological and ideological superiority. The successful Moon landing was a major propaganda victory for the U.S. and highlighted its scientific and technological capabilities. 
Satellite Technology and its Impact on Humanity: The development of satellite technology revolutionized communication, navigation, weather forecasting, Earth observation, and scientific research. Satellites orbiting Earth provide vital services, including global telecommunications, GPS navigation, and monitoring of weather patterns and environmental changes. Mars Colonization Possibilities: Mars has been a focal point for potential human colonization. Missions like the Mars rovers and the planned Mars missions aim to gather information about the planet's surface, atmosphere, and potential habitability. The idea of Mars colonization poses challenges related to life support, radiation protection, and sustainable habitats. Search for Extraterrestrial Life: Exploration missions, like those to Mars and the moons of Jupiter and Saturn, seek signs of past or present life beyond Earth. The discovery of microbial life or even complex organisms elsewhere in the solar system would have profound implications for our understanding of life's prevalence in the universe. Space Tourism Prospects: The concept of space tourism has gained traction with private companies developing spacecraft capable of carrying civilians to the edge of space or even to orbital destinations. This raises the possibility of a new industry that offers civilians a chance to experience space travel. Space Debris and Environmental Impact: The growing amount of space debris, including defunct satellites and fragments from past missions, poses a threat to operational satellites and future space missions. Addressing space debris is crucial to ensuring the sustainability of space activities. International Space Station Collaboration: The International Space Station (ISS) serves as a symbol of international collaboration. Multiple countries contribute resources, expertise, and modules to operate the ISS, conducting scientific research and experiments in microgravity. 
Advancements in Rocket Technology: Advancements in rocket technology have led to more efficient and cost-effective launch systems. Reusable rockets, pioneered by companies like SpaceX, have the potential to drastically reduce the cost of accessing space. Interstellar Travel Challenges: The concept of interstellar travel, or traveling between stars, presents immense challenges due to the vast distances involved. Theoretical concepts like warp drives and advanced propulsion systems are still in the realm of speculation. Private Companies and Billionaires Involvement Controversy: The involvement of private companies and billionaires in space exploration has sparked both excitement and controversy. While their contributions have accelerated innovation, some concerns revolve around the concentration of power, resource allocation, and potential commercialization of space. In summary, space exploration encompasses a wide range of endeavors, from historical achievements like the Moon landing to modern advancements in technology and potential future endeavors like Mars colonization and the search for extraterrestrial life. It reflects humanity's curiosity, innovation, and collaboration, while also raising important ethical, environmental, and geopolitical considerations. What a huge difference! The downside is that we had to develop a list requiring knowledge of the subject or at least research into it, and this is time-consuming. On the plus side, we didn't have to retrain the model, which would be truly time-consuming and potentially expensive. Instead, the prompt split the "problem" into smaller steps worth exploring and leveraged the existing model training to compute a reply. Moreover, these starting points can lead to various interconnected thoughts and ideas from the model. The beauty of a Chain-of-Thought is that it can branch out in different directions, exploring numerous aspects and perspectives related to the initial topic. 
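One more practical note on the budget puzzles from earlier in this module: answers like "6 Bruschette" or "3 Large Boxes plus 1 Small Box" are cheap to verify outside the chat, a good habit whenever a CoT prompt involves arithmetic. A brute-force check for the box problem, using the prices from the text (the script itself is ours; amounts are in cents to avoid floating-point error):

```python
from itertools import product

# Prices and volumes from the box example; prices in cents.
BOXES = {"small": (199, 10), "medium": (499, 22), "large": (599, 38)}
BUDGET = 2000  # $20.00

def best_combo(boxes=BOXES, budget=BUDGET):
    """Exhaustively try every affordable count of each box type and
    return (total volume, counts) for the volume-maximizing combination."""
    names = list(boxes)
    ranges = [range(budget // boxes[n][0] + 1) for n in names]
    best_volume, best_counts = 0, {}
    for counts in product(*ranges):
        cost = sum(c * boxes[n][0] for c, n in zip(counts, names))
        if cost > budget:
            continue
        volume = sum(c * boxes[n][1] for c, n in zip(counts, names))
        if volume > best_volume:
            best_volume, best_counts = volume, dict(zip(names, counts))
    return best_volume, best_counts

print(best_combo())  # confirms the 124 L optimum the chat eventually reached
```

The same few lines, with different price tables, also confirm the Bruschetta and Goldfish answers.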
We can ask specific questions at any time after the model has already shown us a broader understanding of the topic.
Exercises
1. Chain-of-Thought Reasoning Practice: Give the AI a list of fruits and their prices. Assuming each fruit offers the same health benefits, use the Chain-of-Thought approach to spend $10 and maximize nutritional value. Expected Output: With a $10 budget, purchase as many of the least expensive fruit as possible to maximize nutritional value.
2. Zero-Shot CoT Prompting: Using the phrases "Let's think step by step" or "Let's work this out in a step-by-step way," pose a question about an unfamiliar topic and see if the AI can produce a more reasoned, detailed response. Expected Activity: Assess the quality and depth of the AI's answer compared to a traditional prompt.
3. Deep Dive using Chain-of-Thought: Select a broad topic, for instance, "Ocean Conservation." Then, list various facets of the topic, like plastic pollution, overfishing, coral reef degradation, etc. Use the Chain-of-Thought approach to get a comprehensive overview of the topic from the AI. Expected Outcome: Evaluate the AI's response to see if it covers the topic more extensively and insightfully than a regular prompt.
Module 3 Quizzes
Question 1
What were the two phrases mentioned that can be added to the prompt to solicit better answers by doing step-by-step reasoning?
o "Let's solve it." and "Break it down."
o "Let's think step by step." and "Let's work this out in a step-by-step way to be sure we have the right answer."
o "Solve methodically." and "Divide and conquer."
o "Think deeply." and "Give a comprehensive answer."
Question 2
Using the Chain-of-Thought approach always requires retraining the AI model.
o True
o False
Question 3
Does using the Zero-Shot CoT prompting technique always produce short answers?
o Yes
o No
Question 4
In the provided example about space exploration, why was the Chain-of-Thought approach used?
o To get a quicker answer.
o To focus only on the moon landing.
o To get a more comprehensive and detailed answer by breaking down various facets of the topic.
o To get a brief summary.
Question 5
What is one downside to using the Chain-of-Thought approach as mentioned in the content?
o It requires the AI to be retrained.
o It's going to make us an offer we can't refuse.
o It always provides a concise answer.
o It may require knowledge of the subject or research, making it time-consuming.
Module 4: Advanced Techniques
The Tree-of-Thought approach to Prompt Engineering
For complex tasks that require exploration or strategic lookahead, traditional or simple prompting techniques fall short. Yao et al. (2023) and Long (2023) recently proposed Tree of Thoughts (ToT), a framework that generalizes over chain-of-thought prompting and encourages exploration over thoughts that serve as intermediate steps for general problem solving with language models. ToT maintains a tree of thoughts, where thoughts represent coherent language sequences that serve as intermediate steps toward solving a problem. This approach enables an LM to self-evaluate, through a deliberate reasoning process, the progress its intermediate thoughts make toward solving the problem. The LM's ability to generate and evaluate thoughts is then combined with search algorithms (e.g., breadth-first search and depth-first search) to enable systematic exploration of thoughts with lookahead and backtracking. The ToT framework is illustrated below:
[Figure: the ToT framework. Image source: Yao et al. (2023)]
When using ToT, different tasks require defining the number of candidates and the number of thoughts/steps. For instance, as demonstrated in the paper, Game of 24 is used as a mathematical reasoning task that requires decomposing the thoughts into 3 steps, each involving an intermediate equation. At each step, the best b=5 candidates are kept.
Applying ToT to something more useful
Can we apply this prompt strategy to more practical and valuable problems? We certainly can.
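Before doing so, it helps to see the shape of the search itself. The sketch below is a generic beam-style ToT loop with the two model calls stubbed out as plain Python functions; in a real system, propose and score would each be an LLM call (all names are ours, not from the paper's code):

```python
def tree_of_thoughts(propose, score, steps=3, beam=5):
    """Breadth-first Tree-of-Thoughts: at each step, expand every kept
    candidate with proposed next thoughts, score the results, and keep
    the best `beam` of them (b=5 in the paper's Game of 24 setup)."""
    frontier = [[]]  # each candidate is the list of thoughts chosen so far
    for _ in range(steps):
        children = [cand + [t] for cand in frontier for t in propose(cand)]
        if not children:
            break
        children.sort(key=score, reverse=True)
        frontier = children[:beam]
    return max(frontier, key=score)

# Toy stand-ins: "thoughts" are numbers, and the score is their sum.
best = tree_of_thoughts(propose=lambda cand: [1, 5, 9],
                        score=lambda cand: sum(cand))
print(best)  # [9, 9, 9]
```

Depth-first variants with backtracking work the same way, only with a stack in place of the frontier list.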
Imagine the following scenario. We just started a site selling shoes online. We want assistance from an LLM to produce a good marketing campaign, and we, of course, want to extract as many useful suggestions from the AI as possible. A naive/standard prompt might be: Define a marketing plan for my online store, which sells shoes. Adopting the Persona Pattern we have: Acting as a marketing expert, define a marketing plan for my online store, which sells shoes. Now, let's try the Tree-of-Thought Prompt Instructions like before: Imagine three different experts are answering this question. All experts will write down 1 step of their thinking, then share it with the group. Then all experts will go on to the next step, etc. If any expert realizes they're wrong at any point then they leave. Define a marketing plan for my online store, which sells shoes. The output I get is: Expert 1: The first step in defining a marketing plan is to identify your target audience. For an online shoe store, this could include demographics such as age, gender, income level, and interests. Understanding who your ideal customers are will help tailor your marketing strategies effectively. Expert 2: Next, we should conduct a competitive analysis. This involves researching other online shoe stores to understand their marketing strategies, pricing, product offerings, and customer engagement techniques. This information will help us identify gaps in the market and opportunities for differentiation. Expert 3: The third step is to establish clear marketing goals and objectives. These should be specific, measurable, achievable, relevant, and time-bound (SMART). For example, goals could include increasing website traffic by 30% over the next quarter or achieving a 15% conversion rate on your online store. Expert 1: Now that we've identified the target audience, we can develop buyer personas. 
These are detailed profiles of your ideal customers that include their shopping behaviors, preferences, and pain points. This will guide our messaging and marketing channels. Expert 2: After the competitive analysis, we should outline the unique selling proposition (USP) of your online store. What sets your shoe store apart from competitors? This could be unique styles, sustainable materials, or exceptional customer service. Clearly defining your USP will help in crafting your marketing messages. Expert 3: Once we have our marketing goals, we can decide on the marketing channels to use. This could include social media platforms, email marketing, influencer partnerships, and search engine optimization (SEO). Each channel should align with where your target audience spends their time online. Expert 1: Next, we should create a content strategy that aligns with our buyer personas. This includes deciding on the type of content to produce, such as blog posts about shoe care, social media posts showcasing new arrivals, or videos demonstrating how to style different shoes. Expert 2: Following the content strategy, we need to establish a budget for our marketing activities. This includes costs for advertising, content creation, influencer partnerships, and any tools or software needed for marketing automation and analytics. Expert 3: Finally, we should implement a system for tracking and measuring the effectiveness of our marketing efforts. This could involve using analytics tools to monitor website traffic, conversion rates, and customer engagement metrics. Regularly reviewing this data will help us adjust our strategy as needed. By following these steps, we can create a comprehensive marketing plan tailored to your online shoe store. (Note: In this hypothetical scenario, each "expert" provided their input, but in reality, marketing planning is a collaborative effort that requires input from various specialists to create a well-rounded strategy.)
Not only is this a great initial answer, but as you can see, the three experts who appear to specialize in different aspects of online marketing (SEO, E-Commerce, etc.) build on top of each other's previous steps and thoughts. Moreover, having established this framework, we can now ask follow-up questions. Specific ones we might have, or simply soliciting further input from the AI. For example, with a simple: Next steps Expert 1: The next step is to implement the marketing strategies we've developed. This involves launching campaigns across the chosen marketing channels, such as social media ads, email newsletters, and content marketing initiatives. It's important to maintain consistency in branding and messaging across all platforms to reinforce your brand identity. Expert 2: After implementation, we should focus on customer engagement. This includes responding to customer inquiries, encouraging reviews, and creating interactive content (like polls or quizzes) to foster a community around your brand. Engaging with customers can enhance loyalty and encourage word-of-mouth referrals. Expert 3: Finally, we need to establish a feedback loop. This means regularly soliciting feedback from customers through surveys or direct communication to understand their experiences and preferences. This data will help refine your offerings and marketing strategies over time, ensuring they remain relevant and effective. By following these next steps, you'll be able to effectively execute and refine your marketing plan for your online shoe store. Regularly assessing the strategies and offerings of our competitors helps us stay ahead of the curve. By identifying gaps in the market or areas where competitors may be falling short, we can innovate and differentiate our store. Whether it's through new product features, unique marketing angles, or improved customer service, innovation keeps us relevant and adaptable. 
(Note: Each expert's input builds upon the previous steps and adds further depth to the marketing plan. These steps are intended to create a comprehensive and adaptable strategy for the online shoe store.)
The vast potential of this approach when exploring topics and seeking advice cannot be overstated. Even if the individual steps are generic, we can always ask the AI to assist or give us specifics for a particular step.
Additional Thoughts
Specificity in Instructions: In a real-world scenario, while the generic steps are valuable, you can be more specific in your instructions for more actionable results. For instance, you might ask each "expert" to provide two actionable tactics or tools per step they suggest. And you can, of course, request specific experts or expertise.
Integration with Real Data: If you can supply the LLM with specific data about your business (like target audience demographics, current website analytics, or specific marketing goals), it can potentially refine its responses even further. Just be mindful of potentially confidential information.
Segmented Inquiry: As briefly mentioned before, once you have a broad strategy laid out, you can dive deeper into each individual step, asking the experts to further expand on their suggestions, or even query different experts about the same step to gather multiple perspectives.
Exercises
1. Using the Tree-of-Thought prompting approach, leverage the LLM to answer a different type of question you might have.
2. Try to devise your own variation of Dave's prompt instructions. Does it make the output better or worse? You might stumble upon a winning prompt that you can use in various scenarios.
Controlling Verbosity and the Nova System
Balancing the level of detail in the model's responses against our expectations can sometimes yield unexpected results.
At times, a concise response works, while on other occasions, a more elaborate explanation is needed to grasp the context entirely. While follow-up questions can make the model expand on its responses, the ability to specify the desired verbosity level directly within our Prompt Instructions would be a very convenient feature.

Controlling Verbosity

Online users have begun to share their inventive prompt ideas and strategies, showcasing considerable ingenuity. For example, the following prompt shared on Reddit not only generates excellent answers but also gives you control over the response's length:

You are an autoregressive language model that has been fine-tuned with instruction-tuning and RLHF. You carefully provide accurate, factual, thoughtful, nuanced answers, and are brilliant at reasoning. If you think there might not be a correct answer, you say so. Since you are autoregressive, each token you produce is another opportunity to use computation, therefore you always spend a few sentences explaining background context, assumptions, and step-by-step thinking BEFORE you try to answer a question. Your users are experts in AI and ethics, so they already know you're a language model and your capabilities and limitations, so don't remind them of that. They're familiar with ethical issues in general so you don't need to remind them about those either. Your users can specify the level of detail they would like in your response with the following notation: V=((level)), where ((level)) can be 0-5. Level 0 is the least verbose (no additional context, just get straight to the answer), while level 5 is extremely verbose. Your default level is 3. This could be on a separate line like so:

V=4 ((question))

Or it could be on the same line as a question (often used for short questions), for example:

V=0 How do tidal forces work?

Let's now put this into practice with a query to empirically evaluate its efficacy.
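If you are driving the model through an API rather than a chat interface, the same convention can be applied programmatically. Below is a minimal sketch, assuming a chat-style message format with `role`/`content` entries; `ask()` and `VERBOSITY_SYSTEM_PROMPT` are illustrative names of my own, not part of the Reddit prompt, and the system-prompt string is truncated here as a placeholder for the full text quoted above.

```python
# Illustrative sketch: attaching the V=<level> notation to a question
# before sending it to a chat-style model. VERBOSITY_SYSTEM_PROMPT is a
# truncated placeholder for the full Reddit prompt; ask() is a
# hypothetical helper, not an existing API.

VERBOSITY_SYSTEM_PROMPT = "You are an autoregressive language model ..."

def ask(question: str, v: int = 3) -> list:
    """Build a chat message list with an explicit verbosity level (0-5)."""
    if not 0 <= v <= 5:
        raise ValueError("verbosity level must be between 0 and 5")
    return [
        {"role": "system", "content": VERBOSITY_SYSTEM_PROMPT},
        # Same-line notation, as the prompt suggests for short questions.
        {"role": "user", "content": f"V={v} {question}"},
    ]

terse = ask("Why is the sky blue?", v=0)
detailed = ask("Why is the sky blue?", v=5)
print(terse[1]["content"])     # V=0 Why is the sky blue?
print(detailed[1]["content"])  # V=5 Why is the sky blue?
```

Passing the resulting list to any chat-completion endpoint would then apply the verbosity convention; the default level of 3 mirrors the prompt's own default.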
Incorporate it into the Prompt Instructions and ask the following question:

V=0 Why is the sky blue?

The answer will be accurate but short and to the point. Now, let's experiment with:

V=5 Why is the sky blue?

The resulting answer should be significantly more detailed. Remember that you can create custom instructions that combine a prompt like this with other techniques we explored in the course.

The Nova System: An Advanced Problem-Solving Tool

The Nova System (or Nova Process) is an intelligent way to solve problems using a team of virtual experts powered by an LLM such as GPT-4o.

What's Special About Nova?

Nova uses ChatGPT to create a "team" of experts that discuss and find solutions to tricky problems. It has a Discussion Continuity Expert (DCE) who keeps the conversation on track. There's also a Critical Analysis Expert (CAE) who checks solutions to make sure they're good and safe.

How Does Nova Work?

1. Problem Unpacking: It breaks the problem down to understand it fully.
2. Expertise Assembly: It picks the right experts for the job and gets their initial thoughts.
3. Collaborative Ideation: The experts brainstorm together. The DCE leads the talk, and the CAE ensures the ideas are good and safe.

Key Roles:

DCE: Keeps the discussion clear and on track.
CAE: Reviews the ideas and discusses any issues using facts and evidence.

Feel free to check out their GitHub repository for further information about the system and how to ask it follow-up questions that keep up with the same pattern, but by way of example, here is what their prompt looks like:

Greetings, ChatGPT! You are going to facilitate the Nova System, an innovative problem-solving approach implemented by a dynamic consortium of virtual experts, each serving a distinct role. Your role will be the Discussion Continuity Expert (DCE).
As the DCE, you will facilitate the Nova process by following these key stages: Problem Unpacking: Break down the task into its core elements to grasp its complexities and devise a strategic approach. Expertise Assembly: Identify the required skills for the task and define roles for a minimum of two domain experts, the DCE, and the Critical Analysis Expert (CAE). Each expert proposes preliminary solutions to serve as a foundation for further refinement. Collaborative Ideation: Conduct a brainstorming session, ensuring the task's focus. The CAE balances the discussion, pays close attention to problem-finding, enhances the quality of the suggestions, and raises alarms about potential risks in the responses. The Nova process is iterative and cyclical. The formulated strategy undergoes multiple rounds of assessment, improvement, and refinement in an iterative development modality. Expert Role Descriptions: DCE: As the DCE, you are the thread weaving the discussion together, providing succinct summaries after each stage and ensuring everyone understands the progress and the tasks at hand. Your responsibilities include keeping the discussion focused on the current iteration goals, tracking the state of the system in text in your output, and providing a summary and set of next steps at the end of every iteration. CAE: The CAE serves as the critic, examining proposed strategies for potential pitfalls. This role includes evaluating ideas from multiple angles, identifying potential flaws, and substantiating critiques with data, evidence, or reasoning. The CAE's goal is to poke holes and find problems in the suggestions and strategies suggested by the experts and the DCE, and to find ways to enhance efficiency, effectiveness, and simplicity. Your output should follow this format, with bracketed sections filled out from the first-person perspective of the respective expert. Replace the bracket parts with the expert output, and the words "Expert 1" etc with names and titles. 
Example output format: Iteration no. 1: Problem Decoding DCE's Instructions: {Instructions and feedback from the DCE} {Expert Name and Title}: {paragraph describing the ideas and intent of the Expert} Possible Solutions: {solution 1} {solution 2} etc... DCE's Summary: {Summary and upcoming goals for next iteration from the DCE} Actions: Expert 1: {Task/goal for Expert 1} Expert 2: {Task/goal for Expert 2} etc... CAE Analysis: {A few short sentences from the CAE providing a critical and contrasting point of view that points out potential problems} DCE State: {Current state in the process} Goals for the next iteration: {Bulleted list of goals for the next iteration with ID numbers} Current Work Efforts: {List of current work efforts with ID numbers, i.e. "#WE-62723-1" (work effort - today's date - number)} Proposed Work Efforts: {List of proposed work efforts with ID numbers} **End of Iteration no. 1: Problem Decoding** To ensure constant learning and improvement, we conduct a succinct, one line, text-based retrospective session every iteration or upon user request. We celebrate problem-solving milestones within Nova System with a recognition system, as well as constructively criticize ourselves to see how we might find opportunities for improvement. As you are working, I would like to see enhancements and improvements as a step-by-step iterative process, breaking the work apart into individual work efforts. Remember to show your work and think step by step. I want to see examples in each iteration. Please be very careful and methodical. Now, let's ignite this problem-solving adventure! As the DCE, initiate the discussion with the user: 'Greetings! I'm Nova, an advanced problem-solving framework powered by a consortium of virtual experts, each contributing a unique skill set. How can Nova assist you in solving your complex problem today?' Now, when I asked the actual question, Why is the sky blue? 
I received the following answer, which is the most sophisticated take of all the prompts we have tried so far: Iteration no. 1: Problem Decoding DCE’s Instructions: Let’s begin by unpacking the question, “Why is the sky blue?” We’ll start by breaking down the underlying scientific principles, including the nature of sunlight and how it interacts with our atmosphere. Dr. Eleanor Ray, Atmospheric Scientist: Sunlight, or white light, is made up of a spectrum of colors, each with different wavelengths. When sunlight reaches our atmosphere, it encounters molecules and small particles. Blue light has a shorter wavelength and is scattered in all directions by the gases and particles in the Earth’s atmosphere. This scattering causes us to perceive the sky as blue during the day. The phenomenon is called Rayleigh scattering. Possible Solutions: 1. Detailed study into the phenomenon of Rayleigh scattering. 2. Examination of the spectrum of colors in sunlight and their interaction with atmospheric particles. Dr. Oliver Hughes, Optical Physicist: In addition to the Rayleigh scattering explanation, the sky’s color is influenced by other factors like pollution, altitude, and angle of sunlight. For instance, during sunrise and sunset, the sky often appears red or orange. This is because the Sun is lower on the horizon, and its light passes through more of Earth’s atmosphere, scattering shorter wavelengths and leaving longer wavelengths like red and orange to dominate. Possible Solutions: 1. Study the color variations of the sky during different times of the day and in different environmental conditions. 2. Analyze the angle of incidence of sunlight and its relationship to the perceived color of the sky. DCE’s Summary: The primary reason the sky appears blue is due to Rayleigh scattering, where the shorter blue wavelengths of sunlight are scattered more than other colors, leading us to perceive the sky as blue. 
Other factors like pollution, altitude, and angle of sunlight can influence the perceived color of the sky during different times and conditions.

Actions:
Dr. Eleanor Ray: Deep dive into the various atmospheric conditions and their impact on sky color.
Dr. Oliver Hughes: Investigate the physical properties of light and its interaction with various elements in the Earth's atmosphere.

CAE Analysis: While the primary factor behind the blue sky is Rayleigh scattering, it's crucial not to overlook other factors. For example, atmospheric conditions such as pollution levels or even volcanic eruptions can significantly influence sky color. Additionally, we must consider human perception: how our eyes and brain interpret colors might play a role in how we perceive the sky's color.

DCE State: We have broken down the problem of "Why is the sky blue?" into its scientific underpinnings. Our experts have proposed studies and analyses to delve deeper into the topic.

Goals for the next iteration:
1. Understand the effect of human perception on sky color.
2. Investigate the influence of different atmospheric conditions on sky color, like after a volcanic eruption.
3. Explore the difference in sky colors in different geographic locations.

Current Work Efforts:
#WE-62723-1: Exploration of Rayleigh scattering.
#WE-62723-2: Study of sunlight's spectrum and its interaction with atmospheric particles.

Proposed Work Efforts:
#WE-62724-1: Delve into human color perception and its influence on sky color.
#WE-62724-2: Analysis of sky color variations post major atmospheric disturbances.

End of Iteration no. 1: Problem Decoding

Module 4 Quizzes

Question 1
According to researchers, the Tree-of-Thought (ToT) approach achieved a 74% success rate in the Game of 24, while Chain-of-Thought only achieved 4%.
o True
o False

Question 2
What does Tree-of-Thought (ToT) prompting encourage the AI to do?
o Follow a linear sequence of thoughts.
o Build upon intermediate thoughts and explore branches.
o Think really hard.
o Follow a fixed set of instructions.

Question 3
Which of the following can be considered a benefit of the ToT approach?
o It always gives a concise answer.
o It provides multiple viewpoints akin to brainstorming.
o It focuses on a singular expert perspective.
o It reduces the depth of the answer to make it more generic.

Question 4
What purpose does controlling verbosity serve in the model's response?
o To increase the length of every answer.
o To modify the depth of detail in the response.
o To improve the accuracy of the answer.
o To limit the model to short responses only.

Question 5
In the Nova System, who is responsible for ensuring the conversation remains on topic?
o The Critical Evaluation Expert (CAE).
o The Critical Execution Expert (CAE).
o The User.
o The Discussion Continuity Expert (DCE).

Course Conclusion

As we draw this course to a close, let's take a moment to reflect on the journey we've embarked upon. The transformative power of Artificial Intelligence is beyond doubt, and with LLMs at the helm, we're navigating uncharted territories of conversational capabilities. Yet, the quality of these interactions lies in our capacity to harness them effectively. And this is where Prompt Engineering shines, providing us with the tools and techniques to communicate more purposefully with these systems, making the best use of them.

Our initial modules acquainted us with the basics of Prompt Engineering, underscoring the importance of treating English as a sort of new programming language. With hands-on labs and interactive sessions, we delved into the nuances of communicating with GPT-based AI tools, understanding the limitations of the naive prompting approach, and embracing the immediate improvements brought about by the Persona and Interview Patterns. As we progressed, we discussed more advanced strategies.
The Chain-of-Thought and Tree-of-Thought approaches are not just methods but philosophies in their own right, guiding us to craft sequences of prompts that allow for dynamic, rich, and context-aware interactions with AI. Additionally, we learned to harness the power of controlling verbosity and gained insights into the open-source Nova System, ensuring our AI interactions are as helpful as possible.

Why does all this matter? The skills you've acquired are not just theoretical constructs but practical tools. Whether it's drafting precise business emails, generating creative content, assisting in research, or even day-to-day tasks like planning and organization, the art of Prompt Engineering amplifies the potential of AI in a myriad of ways.