Artificial Intelligence Fundamentals
Summary
This document explores the ethical considerations of artificial intelligence, focusing on the topic of bias and fairness in AI systems. It discusses the importance of considering diverse perspectives and the potential impact of AI on society. It also offers insights into building an ethical culture within organizations.
Full Transcript
Responsible Creation of Artificial Intelligence

Understand the Ethical Use of Technology

Responsible Technology Design and Development

We are just beginning to understand how emerging technology impacts society. Diverse issues have arisen, from questions about automation replacing jobs to speculation about the developmental effects of social media. Many industries are governed by standards, protocols, and regulations meant to ensure that their products have a positive impact on society. Medical doctors, for example, follow the Hippocratic Oath and have established research review boards to ensure ethical practices. The automobile industry is subject to driving laws and to the safety standards surrounding seatbelts and airbags. More generally, in 2011 the United Nations endorsed the Guiding Principles on Business and Human Rights, which define the responsibilities that businesses and states have to protect the rights and liberties afforded to all individuals.

At Salesforce, we understand that we have a broader responsibility to society, and we aspire to create technology that not only drives the success of our customers, but also drives positive social change and benefits humanity. Salesforce established the Office of Ethical and Humane Use to blaze a trail in the Fourth Industrial Revolution by helping our customers use our technology to make a positive impact. This effort is anchored in Salesforce's core values (trust, customer success, innovation, equality, and sustainability). When it comes to technology ethics, the questions have never been more urgent—and it's up to all of us to find the solutions.

What Does It Mean to Be Biased or Fair?

When you create or use technology, especially technology involving artificial intelligence or automation, it's important to ask yourself questions of bias and fairness. At Salesforce, we see bias as "systematic and repeatable errors in a computer system that create unfair outcomes, in ways different from the intended function of the system, due to inaccurate assumptions in the machine learning process." In the context of statistics, bias is a systematic deviation from the truth, that is, systematic error. From a social and legal perspective, researcher and professor Kate Crawford defines bias as "judgement based on preconceived notions or prejudices, as opposed to the impartial evaluation of facts."

Fairness is defined as a decision made free of self-interest, prejudice, or favoritism. In reality, it's nearly impossible for a decision to be perfectly fair. A panel at the Association for Computing Machinery's Conference on Fairness, Accountability, and Transparency in 2018 developed a list of 21 definitions of fairness. If there are so many ways to think about fairness, how can you tell whether humans or machines are making fair decisions? To make a more informed decision, it's fundamental to understand the impact of that decision. A decision that benefits the largest number of people can still exclude a minority, and that exclusion is especially unfair when the minority is already often overlooked. You need to ask yourself: Are some individuals or groups disproportionately impacted by a decision? Does systemic bias in past decisions, or inaccurate data, make some groups less likely to receive a fair or impartial assessment? If the answer is yes, then you must decide whether, and how, to optimize to protect those individuals, even if it won't benefit the majority.

Is There Such a Thing as "Good" Bias?

Some may argue that not all biases are bad.
For example, let's say a pharmaceutical company manufactures a prostate cancer drug and targets only men in its marketing campaigns. The company believes that this targeting is an example of a good bias because it provides a service by not bothering female audiences with irrelevant ads. But if the company's dataset included only cisgender people, or failed to acknowledge additional identities (for example, non-binary, transgender women, transgender men, and agender individuals), then the company was likely excluding other people who would benefit from viewing the ad. By incorporating a more complex and accurate understanding of gender and gender identity, the company would be better equipped to reach everyone who could benefit from the drug.

Create an Ethical Culture

Most companies don't actively set out to offend or harm people. But they can do so unintentionally if they don't define their core values and put processes in place to ensure that everyone at the company is working in line with them. By defining values, processes, and incentives, leaders can influence the culture at their companies. Leaders can and should teach students and employees how to apply ethics in their work, but if the company culture isn't healthy, it's like planting a healthy tree in a blighted orchard. Eventually, even the healthy tree produces bad apples. Leaders must reward ethical behavior while catching and stopping unethical behavior. It's important to remember that we are all leaders in this domain. You can make a difference by introducing and maintaining an ethical culture with an end-to-end approach:

- Build diverse teams.
- Translate values into processes.
- Understand your customers.

Build Diverse Teams

Research shows that diverse teams (across the spectrum of experience, race, gender, and ability) are more creative, diligent, and hardworking. An organization that includes more women at all levels, especially top management, typically has higher profits. To learn more, check out the Resources section at the end of this unit.

Everything we create represents our values, experiences, and biases. For example, facial recognition systems often have more difficulty identifying black or brown faces than white faces. If the teams creating such technology had been more diverse, they would have been more likely to recognize and address this bias. Development teams should strive toward diversity in every area, from age and race to culture, education, and ability. Lack of diversity can create an echo chamber that results in biased products and feature gaps. If you are unable to hire diverse team members, consider seeking feedback from underrepresented groups across your company and user base.

A sense of community is also part of the ethical groundwork at a company. No one person should be solely responsible for acting ethically or promoting ethics. Instead, the company as a whole should be mindful and conscious of ethics. Employees should feel comfortable challenging the status quo and speaking up, which can identify risks for your business. Team members should ask ethical questions specific to their domains, such as:

1. Product managers: What is the business impact of a false positive or false negative in our algorithm?
2. Researchers: Who is impacted by our system, and how? How can it be abused? How can people try to break the product or use it in unintended ways? What is the social context in which it is used?
3. Designers: What defaults or assumptions am I building into the product? Am I designing this for transparency and equality?
4. Data scientists: What are the implications for users when I optimize my model this way?
5. Content writers: Can I explain why the system made a prediction, recommendation, or decision in terms the user can understand?
6. Engineers: What notifications, processes, checks, or failsafes can we build into the system to mitigate harm?

Notice that these questions involve the perspectives of multiple roles. Involving stakeholders and team members at every stage of the product development lifecycle helps correct the impact of systemic social inequalities in your system. If you find yourself on a team that's missing any of these roles, or where you play multiple roles, you may need to wear multiple hats to ensure each of these perspectives is included—and that may involve seeking out external expertise or advice. When employees are dissatisfied with the answers they receive, there should be a clear process for resolving the problem areas, like a review board. We go into more detail on that later.

Translate Values into Processes

Nearly every organization has a set of values designed to guide its employees' decision making. There are three ways to put those values into practice:

1. Incentive structures.
2. Employee support.
3. Documentation and communication.

Incentive Structures

Incentive structures reward individuals for specific behaviors or for achieving specific goals, and they should be informed by organizational values. More often than not, employees are rewarded based on sales, customer acquisition, and user engagement, metrics that can sometimes run counter to ethical decision making. If an organization wishes to reward behaviors in line with its values, possible incentive structures include ethical bounties. Similar to bug bounties, ethical bounties reward employees who identify decisions, processes, or features that run counter to the company's values or cause harm to others. Organizations can likewise reward sales reps who share concerns about customer deals, avoiding potential legal or public relations risk. You may also include questions about ethical technology development in your hiring process. This sets the expectation for new employees that the ethical culture you're building is important to the company and that ethical thinking and behavior are rewarded.

Employee Support

Responsible organizations provide resources to support employees and empower them to make decisions in line with the company's values. This can include employee education (Trailhead is a great resource for this) and review boards that resolve difficult issues and ensure employees are following guidelines. At Salesforce, we have a data science review board, which provides feedback on the quality and ethical considerations of our AI models, training data, and the applications that use them. While building an ethical culture empowers employees to speak up when they have ethical concerns, you may also want to create a clear, anonymous process for employees to submit concerns. Finally, checklists are great resources to provoke discussion and ensure that you don't overlook important concerns. Checklists, although consistent and easy to implement, are rarely exhaustive and must be clear and actionable to be useful. Because they enable employees to have difficult conversations, checklists help your company build an ethical culture from the ground up.

Documentation and Communication

Document decision making for transparency and consistency.
If a team is at an ethical crossroads, documenting what is decided and why enables future teams to learn from that experience and act consistently rather than arbitrarily. Documentation and communication also give your employees and stakeholders confidence in your process and the resulting decisions.

Understand Your Customers

It should go without saying that you need to understand all of your customers. If you don't, you could be designing products that ignore a portion of your user base, or that harm some users without your realizing it. Ask yourself: Whose needs and values have you assumed rather than consulted? Who is at the greatest risk of harm, and why? Are there bad actors who could intentionally use your product to cause harm, or individuals who might use it ignorantly and accidentally cause harm? Once you know the answers to these questions, you can work toward solving these problems. We recommend reaching out to a user researcher to learn more about your customers.

Recognize Bias in Artificial Intelligence

Focus on Artificial Intelligence

Artificial intelligence can augment human intelligence, amplify human capabilities, and provide actionable insights that drive better outcomes for our employees, customers, partners, and communities. We believe that the benefits of AI should be accessible to everyone, not just the creators. It's not enough to deliver just the technological capability of AI. We also have an important responsibility to ensure that our customers can use our AI in a safe and inclusive manner for all. We take that responsibility seriously and are committed to providing our employees, customers, partners, and community with the tools they need to develop and use AI safely, accurately, and ethically.

As you learn in the Artificial Intelligence Fundamentals badge, AI is an umbrella term that refers to efforts to teach computers to perform complex tasks and behave in ways that give the appearance of human agency. Training for such a task often requires large amounts of data, allowing the computer to learn patterns in the data. These patterns form a model that represents a complex system, much like you can create a model of our solar system. And with a good model, you can make good predictions (like predicting the next solar eclipse) or generate content (like responding to the prompt "write me a poem written by a pirate"). We don't always know why a model is making a specific prediction or generating content a certain way. Frank Pasquale, author of The Black Box Society, describes this lack of transparency as the black box phenomenon. While companies that create AI can explain the processes behind their systems, it's harder for them to tell what's happening in real time and in what order, including where bias can be present in the model. AI poses unique challenges when it comes to bias and making fair decisions.

What Is Ethical vs. Legal?

Every society has laws its citizens need to abide by. Sometimes, however, you need to think beyond the law to develop ethical technology. For example, US federal law protects certain characteristics that you generally can't use in decisions involving hiring, promotion, housing, lending, or healthcare. These protected classes include sex, race, age, disability, color, national origin, religion or creed, and genetic information. If your AI models use these characteristics, you may be breaking the law. And even if your AI model is making a decision where it is legal to rely on these characteristics, it still may not be ethical to allow those kinds of biases. One lightweight precaution, sketched below, is to scan a model's input features for protected attributes (and obvious proxies for them) before training.
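As one concrete way to apply that precaution, here is a minimal, hypothetical sketch in Python. It assumes your training features live in a pandas DataFrame; the attribute list, the proxy list, and the column names are illustrative placeholders, not part of any Salesforce product or legal standard.

```python
# Minimal sketch: flag protected attributes (and likely proxies) in a feature set.
# The attribute and proxy lists below are illustrative, not exhaustive.
import pandas as pd

PROTECTED = {"sex", "race", "age", "disability", "color", "national_origin",
             "religion", "genetic_information"}
LIKELY_PROXIES = {"zip_code", "first_name"}  # stand-ins that can encode protected traits

def flag_risky_features(df: pd.DataFrame) -> dict:
    columns = {c.lower() for c in df.columns}
    return {
        "protected_in_use": sorted(columns & PROTECTED),
        "possible_proxies": sorted(columns & LIKELY_PROXIES),
    }

# Example: a hypothetical loan-applicant feature table.
applicants = pd.DataFrame(columns=["income", "zip_code", "age", "credit_history"])
print(flag_risky_features(applicants))
# {'protected_in_use': ['age'], 'possible_proxies': ['zip_code']}
```

A check like this only surfaces feature names to review with legal and ethics stakeholders; it can't decide whether a given factor is acceptable in your use case.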
Issues related to protected classes can also cross over into the realm of privacy and legality, so we recommend taking our GDPR trail to learn more. Finally, it is also important to be aware of the ways that Einstein products may and may not be used in accordance with our Acceptable Use Policy.

The good news is that AI presents an opportunity to systematically address bias. Historically, if you recognized that your company's decisions produced biased outcomes because of individual judgment calls, it was difficult to redesign the entire process and overcome that intrinsic bias. Now, with AI systems, we have the chance to bake fairness into the design and improve on existing practices. In addition to carefully examining the legal and ethical implications of your AI models, you should assess whether your model is aligned with your business's responsibility to respect and promote human rights. Factor in international human rights law and the responsibilities the UN has laid out for businesses to respect human rights, which include a due diligence process to assess human rights impacts, act on the assessment, and communicate how the impacts are addressed.

Types of Bias to Look Out For

Bias manifests in a variety of ways. Sometimes it's the result of systematic error. Other times it's the result of social prejudice. Sometimes the distinction is blurry. With these two sources of bias in mind, let's look at the ways in which bias can enter an AI system.

Measurement or Dataset Bias

When data is incorrectly labeled, categorized, or oversimplified, the result is measurement bias. Measurement bias can be introduced when a person makes a mistake labeling data, or through machine error. A characteristic, factor, or group can also be over- or underrepresented in your dataset. Let's consider a harmless example: an image-recognition system for cats and dogs. The training data seems straightforward enough—photos of cats and dogs. But the image set includes only photos of black dogs, and either white or brown cats. Confronted with a photo of a white dog, the AI categorizes it as a cat. Although real-world training data is rarely so cut-and-dried, the results can be just as staggeringly wrong—with major consequences.

Type 1 vs. Type 2 Error

Think of a bank using AI to predict whether an applicant will repay a loan. If the system predicts that the applicant will repay the loan but they don't, it's a false positive, or type 1 error. If the system predicts the applicant won't be able to repay the loan but they do, that's a false negative, or type 2 error. Banks want to grant loans to people they are confident can repay them. To minimize risk, their models are inclined toward type 2 errors. Even so, false negatives harm the applicants the system incorrectly judges as unable to repay.

Association Bias

Data labeled according to stereotypes is an example of association bias. Search most online retailers for "toys for girls" and you get an endless assortment of cooking toys, dolls, princesses, and pink. Search "toys for boys," and you see superhero action figures, construction sets, and video games.

Confirmation Bias

Confirmation bias labels data based on preconceived ideas. The recommendations you see when you shop online reflect your purchasing habits, but the data influencing those purchases already reflect what people see and choose to buy in the first place. You can see how recommendation systems reinforce stereotypes.
If superheroes don't appear in a website's "toys for girls" section, a shopper is unlikely to know they're elsewhere on the site, much less purchase them.

Automation Bias

Automation bias imposes a system's values on others. Take, for instance, a beauty contest judged by AI in 2016. The goal was to identify the most beautiful women with some notion of objectivity. But the AI in question was trained primarily on images of white women, and its learned definition of "beauty" didn't include features more common in people of color. As a result, the AI chose mostly white winners, translating a bias in training data into real-world outcomes.

Automation bias isn't limited to AI. Take the history of color photography. Starting in the mid-1950s, Kodak provided the photo labs that developed its film with an image of a fair-skinned employee named Shirley Page, used to calibrate skin tones, shadows, and light. While different models were used over time, the images became known as "Shirley cards." Shirley's skin tone, regardless of who she was (and she was initially always white), was considered the standard. As Lorna Roth, a media professor at Canada's Concordia University, told NPR, when the cards were first created, "the people who were buying cameras were mostly Caucasian people. And so I guess they didn't see the need for the market to expand to a broader range of skin tones." In the 1970s, Kodak started testing a wider variety of skin tones and made multiracial Shirley cards.

Societal Bias

Societal bias reproduces the results of past prejudice toward historically marginalized groups. Consider redlining. In the 1930s, a federal housing policy color-coded certain neighborhoods in terms of desirability, and the ones marked in red were considered hazardous. Banks often denied low-cost home loans to the minority-group residents of these red-marked neighborhoods. To this day, redlining has influenced the racial and economic makeup of certain zip codes, so much so that a zip code can be a proxy for race. If you include zip codes as a data point in your model, depending on the use case you could inadvertently be incorporating race as a factor in your algorithm's decision-making. Remember that it is also illegal in the US to use protected categories like age, race, or gender in making many financial decisions.

Survival or Survivorship Bias

Sometimes an algorithm focuses on the results of those who were selected, or who survived a certain process, at the expense of those who were excluded. Let's look at hiring practices. Imagine that you're the hiring director of a company, and you want to figure out whether you should recruit from a specific university. You look at the current employees who were hired from that university. But what about the candidates from that university who weren't hired, or who were hired and subsequently let go? You see the success of only those who "survived."

Interaction Bias

Humans create interaction bias when they interact with, or intentionally try to influence, AI systems and create biased results. An example is when people intentionally try to teach chatbots bad language.

How Does Bias Enter the System?

You know that bias can enter an AI system through a product's creators, through training data (or a lack of information about all the sources that contribute to a dataset), or from the social context in which an AI is deployed.

Assumptions
Before someone starts building a given system, they often make assumptions about what they should build, who they should build it for, and how it should work, including what kind of data to collect from whom. This doesn't mean that the creators of a system have bad intentions, but as humans, we can't always understand everyone else's experiences or predict how a given system will impact others. We can try to limit our own assumptions from entering a product by including diverse stakeholders and participants in our research and design processes from the very beginning. We should also strive to have diverse teams working on AI systems.

Training Data

AI models need training data, and it's easy to introduce bias with the dataset. If a company historically hires from the same universities, the same programs, or along the same gender lines, a hiring AI system will learn that those are the best candidates. The system will not recommend candidates that don't match those criteria.

Model

The factors you use to train an AI model, such as race, gender, or age, can result in recommendations or predictions that are biased against certain groups defined by those characteristics. You also need to be on the lookout for factors that function as proxies for these characteristics. Someone's first name, for example, can be a proxy for gender, race, or country of origin. For this reason, Einstein products don't use names as factors in their Lead and Opportunity Scoring models.

Human Intervention (or Lack Thereof)

Editing training data directly impacts how the model behaves, and can either add or remove bias. We might remove poor-quality data or overrepresented data points, add labels or edit categories, or exclude specific factors, such as age and race. We can also leave the model as is, which, depending on the circumstances, can leave room for bias. The stakeholders in an AI system should have the option to give feedback on its recommendations. This feedback can be implicit (say, the system recommends a book the customer might like and the customer does not purchase it) or explicit (say, the customer gives a thumbs up to a recommendation), and it trains the model to do more or less of what it just did. Under the GDPR, EU citizens must also be able to correct inaccurate information a company has about them and ask that company to delete their data. Even where it's not required by law, this is best practice: it keeps your AI's recommendations grounded in accurate data and maintains customer trust.

AI Can Magnify Bias

Training AI models on biased datasets often amplifies those biases. In one example, a photo dataset had 33 percent more women than men in photos involving cooking, but the algorithm amplified that bias to 68 percent. To learn more, see the blog post in the Resources section.

Remove Bias from Your Data and Algorithms

Manage Risks of Bias

We've discussed the different kinds of bias to consider while working with AI. Now for the hard part: how to prevent or manage the risks those biases create. You can't magically de-bias your training data. Removing exclusion is both a social and a technical problem: you can take precautions as a team in how you plan and execute your product, in addition to modifying your data.

Conduct Premortems

As we discussed in the first unit, creating a product responsibly starts with building an ethical culture. One way to do this is by incorporating premortems into your workflow.
A premortem is the opposite of a postmortem—it's an opportunity to catch the "what went wrong" before it happens. Often, team members can be hesitant to share reservations in the planning phase of a project. In a sensitive area like AI, it's paramount that you and your team are open about whatever misgivings you might have and are willing to get uncomfortable. By setting measured and realistic expectations, such a meeting can temper the desire to throw caution to the wind in the initial enthusiasm over a project.

Identify Excluded or Overrepresented Factors in Your Dataset

Consider the deep social and cultural factors that are reflected in your dataset. As we detailed in the previous unit, any bias at the level of your dataset can impact your AI's recommendations, and can result in the over- or underrepresentation of a group. From a technical perspective, here are a couple of ways you can address bias in your data. These techniques are by no means comprehensive.

- What: Statistical patterns that apply to the majority may be invalid within a minority group. How: Consider creating different algorithms for different groups, rather than one size fits all.
- What: People are excluded from your dataset, and that exclusion has an impact on your users. Context and culture matter, but it may be impossible to see the effects in the data. How: Look for what researchers call unknown unknowns, errors that happen when a model is highly confident about a prediction that is actually wrong. Unknown unknowns are in contrast to known unknowns, incorrect predictions that the model makes with low confidence. Similarly, when a model generates content, it can confidently produce information that has no basis in fact.

Regularly Evaluate Your Training Data

As we've said before, developing an AI system starts at the level of your training data. Be scrupulous about addressing data quality issues as early as possible in the process. Make sure to address extremes, duplicates, outliers, and redundancy in CRM Analytics or other data preparation tools (a minimal sketch of such an audit appears at the end of this section). Before you release your models, run prerelease trials and test them so that your system doesn't make biased predictions or judgments that harm people in the real world. You want to confirm that your product works across different communities so that you don't get any surprises upon release.

After you release a model, develop a system for periodically checking the data that your algorithms are learning from and the recommendations your system is making. Think of your data as having a half-life—it won't work for everyone indefinitely. On the technical side, the more data enters a system, the more an algorithm learns. This can lead the system to identify and match patterns that those developing the product didn't foresee or want. On the social side, cultural values change over time, and your algorithms' output may no longer suit the value systems of the communities they serve. Two ways to address these challenges are paid community review processes that correct oversights, and mechanisms in your product that let individuals and users opt out or correct data about themselves. Community review processes should include people from the communities that may be impacted by the algorithmic system you're developing. You should also hold sessions with the people who will implement, manage, and use the system to meet their organization's goals.
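To make that evaluation habit concrete, here is a minimal audit sketch in Python using pandas. The column names, the outlier threshold, and the toy values are illustrative assumptions, not a feature of CRM Analytics or any specific data preparation tool.

```python
# Minimal sketch of a pre-release training-data audit, assuming the data fits
# in a pandas DataFrame; column names and the z-score threshold are
# illustrative assumptions.
import pandas as pd

def audit_training_data(df: pd.DataFrame, group_col: str, numeric_col: str) -> dict:
    """Surface duplicates, numeric outliers, and group representation."""
    report = {}
    # Exact duplicate rows can silently overweight some examples.
    report["duplicate_rows"] = int(df.duplicated().sum())
    # Flag values far from the mean as candidate outliers to review by hand.
    z_scores = (df[numeric_col] - df[numeric_col].mean()) / df[numeric_col].std()
    report["outlier_rows"] = int((z_scores.abs() > 2).sum())
    # Show how each group is represented, to spot over- or underrepresentation.
    report["group_share"] = df[group_col].value_counts(normalize=True).round(2).to_dict()
    return report

# Example usage with toy, made-up values.
training_df = pd.DataFrame({
    "age": [25, 31, 29, 30, 95, 31, 31],
    "group": ["A", "A", "A", "A", "B", "A", "A"],
})
print(audit_training_data(training_df, group_col="group", numeric_col="age"))
```

An audit like this doesn't replace prerelease trials or community review, but it makes the routine checks (duplicates, outliers, representation) repeatable enough to run before every release.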
Head over to UX Research Basics to learn more about methods you can use to conduct community review processes, as well as how to conduct user research to understand the contexts your tool will be used in.

Conclusion

AI can be a force for good: it can potentially detect tumors that humans cannot, spot Alzheimer's before a patient's family notices the signs, and help preserve indigenous languages. Throughout this module, we've shown the power of AI systems, but also their opacity. If we want AI to benefit society more than it harms it, we have to acknowledge the risks and take action to ensure AI systems are designed, developed, and used responsibly. Even when we technologists are conscientious and deliberate in our approach, there will be surprises along the way. We can't always predict the interactions between datasets, models, and their cultural context. Datasets often contain biases that we're not aware of, and it's our responsibility to evaluate and assess training data and our models' predictions to ensure they don't yield damaging results.

Developing ethical AI systems is a sociotechnical process. Look at it not only in terms of its technical implementation, but also through the way it's developed across teams and the social contexts it will be used in. What's more, assess who's involved in the process—how are gender, race, ethnicity, and age represented? The people building AI products and the bias engendered by these systems are interconnected. To realize safe, socially beneficial AI, we need to remember that humans are at the heart of it. AI is a tool, and we choose how to use it. Regardless of someone's role, their minor decisions can have serious, lasting consequences. At Salesforce, we strongly believe that we can do well and do good. You can make a profit without harming others, and, in fact, make a positive impact in the process.

Create Responsible Generative AI

Generative AI, a New Type of Artificial Intelligence

Until recently, most people who discussed AI were talking about predictive AI. This type of artificial intelligence focuses on looking at an existing set of data and making limited predictions about what should be true given the information at hand. Now there's a new player on the field—an emerging type of AI that's generative, not predictive. The key difference? Where predictive AI analyzes trends, generative AI creates new content.

Generative AI (gen AI) boasts an impressive array of capabilities—from real-time conversations with bots that effectively simulate talking to a live support agent, to applications for marketers, programmers, and creative pioneers. In addition, gen AI's cultural moment has users flocking to see what it can do. That means that most of us will probably encounter these algorithms in our daily lives, where they may play an increasingly significant role. With all emergent technology comes unknowns. Whether it's intentional abuse or accidental bias, gen AI poses risks that must be understood and addressed in order to get the most out of this technology.

Know the Risks

At Salesforce, we focus on designing, developing, and distributing technologies in a responsible and trusted way. To do that, we anticipate the intended and unintended consequences of what we build. Let's review some potential risks of gen AI.

Accuracy

Gen AI models are great at making predictions: they make new content by drawing on tons of examples of things that fit the same categories.
But while a model might be able to create a new sentence in the style of a famous writer, there isn't any way to know whether that sentence is factually true. And that can be a problem when users assume that an AI's predictions are verified facts. This is both a feature and a bug. It gives the models the creative capabilities that captured imaginations in the technology's earliest days, but it makes it easy to mistake something that looks correct for something that's accurate to the real world.

Bias and Toxicity

Because human interactions can involve a degree of toxicity—that is, harmful behavior like using slurs or espousing bigotry—AI replicates that toxicity when it's not tuned to recognize and filter it. In fact, it can even amplify the bias it finds, because making predictions often involves dismissing outlying data. To an AI, that might include underrepresented communities.

Privacy and Safety

Gen AI's two most compelling features are its ability to replicate human behavior and its speed in doing so at massive scale. These features offer amazing possibilities, but there's a downside: it's easy to exploit the technology to do huge amounts of damage very quickly. The models have a tendency to "leak" their training data, exposing private information about the people represented in it. And gen AI can even be used to create believable phishing emails or replicate a voice to bypass security.

Disruption

Because of how much AI can do, it poses a risk to society even when working as intended. Economic disruption, job and responsibility changes, and sustainability concerns from the intense computing power required for the models to operate all have implications for the spaces we share.

Trust: The Bottom Line

Trust is the #1 value at Salesforce, and it's our North Star as we build and deploy gen AI applications. To guide this work, we've created a set of principles for developing generative AI responsibly and helping others leverage the technology's potential while guarding against its pitfalls.

- Accuracy: Gen AI, like other models, makes predictions based on the data it is trained on. That means it needs good data to deliver accurate results, and it means people need to be aware of the chance for inaccuracy or uncertainty in an AI's output.
- Safety: Bias, explainability, and robustness assessments, along with deliberate stress testing for negative outcomes, help us keep customers safe from dangers like toxicity and misleading data. We also protect the privacy of any personally identifying information (PII) present in the data used for training. And we create guardrails to prevent additional harm (such as publishing code to a sandbox rather than automatically pushing to production).
- Honesty: Your data is not our product. When collecting data to train and evaluate our models, we need to respect data provenance and ensure that we have consent to use the data (for example, open-source or user-provided data). It's also important to notify people when they're using or talking to an AI, with a watermark or disclaimer, so that they don't mistake a well-tuned chatbot for a human agent.
- Empowerment: There are some cases where it's best to fully automate processes, but there are other cases where AI should play a supporting role to the human—or where human judgment is required. We aim to supercharge what humans can do by developing AI that enhances or simplifies their work and gives customers tools and resources to understand the veracity of the content they create.
- Sustainability: When it comes to AI models, larger doesn't always mean better: in some instances, smaller, better-trained models outperform larger, more sparsely trained models. Finding the right balance between algorithmic power and long-term sustainability is a key part of bringing gen AI into our shared future.

Guidelines Govern AI Action

So, what does it look like to deliver on those commitments? Here are a few actions Salesforce is taking.

The Einstein Trust Layer: We integrated the Einstein Trust Layer into the Salesforce platform to help elevate the security of gen AI at Salesforce through data and privacy controls that are seamlessly integrated into the end-user experience. You can learn more by checking out Einstein Trust Layer in Help.

Product design decisions: Users should be able to trust that when they use AI, they get reliable insights and assistance that empower them to meet their needs without exposing them to the risk of sharing something inaccurate or misleading. We build responsibility into our products, examining everything from the color of buttons to limitations on the outputs themselves, to ensure that we're doing everything possible to protect customers from risk without sacrificing the capabilities they rely on to stay competitive.

Mindful friction: Users should always have the information they need to make the best decision for their use case. We help our users stay ahead of the curve with unobtrusive but mindfully applied friction. In this case, "friction" means interrupting the usual process of completing a task to encourage reflection. For example, in-app guidance popups can educate users about bias, or flag detected toxicity and ask customer service agents to review an answer carefully before sending it.

Red teaming: We employ red teaming, a process that involves intentionally trying to find vulnerabilities in a system by anticipating and testing how users might use and misuse it, to make sure that our gen AI products hold up under pressure. Learn more about how Salesforce builds trust into our products with the Einstein Trust Layer in Trailhead. One way we test our products is by performing precautionary "prompt injection attacks": crafting prompts specifically designed to make an AI model ignore previously established instructions or boundaries. Anticipating cybersecurity threats like these is essential to refining the model to resist real attacks.

Acceptable Use Policy: Because AI touches so many different applications, we have specific policies for our AI products. This allows us to transparently set acceptable use guidelines that ensure trust for our customers and end users. That approach is nothing new: Salesforce already had AI policies designed to protect users, including a prohibition on facial recognition and on bots masquerading as humans. We're currently refreshing our existing AI guidelines to account for gen AI, so that customers can keep trusting our tech. With our updated rules, anyone can see whether their use case is supported as we offer even more advanced AI products and features. You can learn more by checking out our Acceptable Use Policy.

Navigate the Journey Forward

Gen AI changes the game for how people and businesses can work together. While we don't have all the answers, we do suggest a couple of best practices.

Collaborate

Cross-functional partnerships, within companies and between public and private institutions, are essential to drive responsible progress.
Our teams are actively participating in external committees and initiatives like the National AI Advisory Committee (NAIAC) and the NIST AI Risk Management Framework to contribute to the industry-wide push to create more trusted gen AI.

Include Diverse Perspectives

Throughout the product lifecycle, diverse perspectives deliver the wide-ranging insights needed to effectively anticipate risks and develop solutions to them. Exercises like consequence scanning can help you ensure that your products incorporate essential voices in the conversation about where gen AI is today and where to take it tomorrow. Even the most advanced AI can't predict how this technology will shape the future of work, commerce, and seemingly everything else. But by working together, we can ensure that human-centric values create a bedrock of trust from which to build a more efficient, scalable future.

Generative AI for Images

Explore Image Generation Models

Moving from Words to Imagery

While generative artificial intelligence (gen AI) is a relatively new technology, it's already helping people and organizations work more efficiently. Maybe you've used it to summarize meeting notes, make a first-pass outline for a writing project, or create some code. These applications of generative AI tools all have something in common: they're only focused on creating text in one form or another. There's another world of gen AI tools that can create high-quality images, 3D objects, and animations, all using the power of large language models (LLMs). So if you've begun using gen AI to supercharge writing tasks, it's likely you can benefit from using gen AI to enhance your work with imagery and animations. In this badge you learn about some of the current, rapidly improving capabilities of generative AI in the multimedia space. You discover ways to effectively incorporate gen AI into your workflow. And you reflect on some of the challenging questions surrounding the responsible use of generative AI for imagery creation.

Advances in AI Models

Let's take a moment to appreciate how this world of imagery has been affected by large language models. Before LLMs really took off, researchers had for years been training AI to produce imagery. But those models were limited in some pretty significant ways. For example, one type of neural network architecture that showed promise was the generative adversarial network (GAN). In short, two networks were set up to play a cat-and-mouse game: one would try to create realistic images, and the other would try to distinguish between those generated images and real images. Over time, the first network became very good at tricking the second. This method is capable of generating very convincing images of all sorts of subjects, including people. However, GANs usually excel at creating images of just one kind of subject. So a GAN that's great at creating images of cats would be terrible at creating images of mice. There's also the possibility that a GAN will experience "mode collapse," where the first network creates the same image again and again, because that image is known to always trick the second. An AI that only creates one image isn't exactly useful. What would be really useful is an AI model that could create images of a variety of subjects, whether we ask for a cat, a mouse, or a cat in a mouse costume. Such models already exist!
They're known as diffusion models, because the underlying math relates to the physical phenomenon of something diffusing, like a drop of dye in a glass of water. As with most AI models, the technical details are the stuff of incredibly complex research papers. The important thing to know is that diffusion models are trained to make connections between images and text. It helps that there are a lot of captioned cat pictures on the internet. With enough samples, a model can extract the essence of "cat," "mouse," and "costume." Then it embeds that essence into a generated image using diffusion principles. It's complicated, but the results are often stunning. The number of available diffusion models is growing by the day, but four of the most well known are DALL-E, Imagen, Stable Diffusion, and Midjourney. Each differs in the data used for training, the way it embeds language details, and how users can interact with it to control output. So results differ significantly from tool to tool. And what one model does well today, another might do better tomorrow as research and development speeds forward.

Uses of Generative AI for Imagery

Generative AI can do more than just create cute cat cartoons. Often gen AI models are fine-tuned and combined with other algorithms and AI models. This allows artists and tinkerers alike to create, manipulate, and animate imagery in a variety of ways. Let's check out some examples.

Text-to-Image

You can achieve an incredible amount of artistic variety using text-to-image gen AI. In our example, we chose a hand-drawn style for the cat, but we could have gone for hyperrealistic, or represented the scene as a tiled mosaic. If you can imagine it, diffusion models can interpret your intention with some success. In the next unit you learn tips for how to get the best results, but for now understand that the first limit to what you can create is what you can imagine. Browse what others are creating with the different diffusion models. The ability to use image generation inline with text generation has also emerged recently. So, as you develop a story with some GPT tools, they can use the context to generate an image. Even better, if you need another image that includes the same subject, like our costumed cat, those models can use the first image as a reference to maintain character consistency.

Text-to-3D Model

Typically, the tools used to create 3D models are technical and require a high level of skill to master. Yet we're at a time when 3D models are appearing in more places than ever, from commerce to manufacturing to entertainment. Generative AI can help meet some of that demand. Models like the one used for DreamFusion can generate amazing 3D models, along with supporting resources that describe their coloring, lighting, and material properties.

Image-to-Image

If a picture is worth a thousand words, imagine how useful it is as part of the prompt for a generative AI model! Some models are trained to extract meaning from pictures, using training similar to what allows for text-to-image generation. This two-way translation is the basis for the following use cases.

Style transfer: Start with a simple sketch and a description of what's happening in the scene, and let gen AI fill in all of the details. The output can be in a specific kind of artistic style, like a Renaissance painting or an architectural drawing. Some artists do this iteratively to build an image.
Paint out details: Imagine you visit the Leaning Tower of Pisa and get a great photo of yourself pretending to hold up the tower with your own strength. Unfortunately, 20 other people are in the picture doing the same thing. No worries: now you can cut them out and let AI fill in the gaps with realistic grass and sky for a pristine photo.

Paint in details: What might it look like to put a party hat on a panther? There's a dangerous way of finding out, or the much safer way of using generative AI. Tools are used to identify specific locations for items in a scene, and, like magic, they appear as if they were always there.

Extend picture borders: Generative AI uses the context of the picture to continue what is likely to appear beyond the border of a scene.

Animation

Because there's a certain amount of randomness inherent to every generated image, creating a series of slightly different images is its own challenge for generative AI. When you play one image after the other, the variations jump out, with lines and shapes shifting and shimmering. But researchers have developed methods of reducing that effect, so generated animations have an acceptable level of consistency. All of the previous use cases for still imagery can be adapted to animation in some way. For example, style transfer can take a video of a skateboarder doing a trick and turn it into an anime-style video. Or a model trained on speech patterns can animate the lips of a generated 3D character. There are enormous possibilities to create stunning imagery with generative AI. In the next unit, you learn responsible ways to make use of generative AI's capabilities.

Use Generative AI for Art Effectively and Responsibly

Bring Generated Art Into Your Projects

Whether you want to illustrate a concept for a presentation or show how your product looks when used in the real world, generative AI gives you the power to beautify your work with imagery. Using AI to create images is an art form of its own. With the right approach, you can generate images appropriate for your next project. When generating imagery, remember that art is subjective. You might want the perfect picture to punctuate your point, but there is no perfect! What you find brilliant, others may not appreciate as much. So consider using the image you're 95% happy with; the last 5% probably falls into the subjective zone anyhow.

Remind yourself of your goal for including imagery in your project. Your goal might be to break up your text with interesting images, but once you begin generating images, it's tempting to let the goal shift to finding a perfect image. That narrower focus leads you to discard options that would meet your original goal of supplementing your content just fine, including images that have small imperfections. If the artwork isn't the focal point of the project, your viewer may not even notice anything amiss. For example, one picture used in the Generative AI Basics badge shows a table that, on close inspection, has five legs. The image isn't perfect, but it still does the job well. In general, being flexible gets you to an acceptable result faster. It usually saves money as well, since a lot of generative AI tools are paid services. That said, you can be flexible and smart about how you work with these tools.

The Art of Prompt Engineering

As you learn in the Prompt Fundamentals badge, prompts are how you interact with generative AI models.
You give a model directions through text (and maybe an image or two), and it returns its best prediction of what you want. Usually, better prompts mean better output. But what makes a good prompt? That's a seemingly simple question that has sparked debate among digital artists. Since we can never fully understand the connections forged when a model is trained, there will always be uncertainty in how it responds. So we make an educated guess and hope for the best. But some guesses are better than others. This is the foundation of prompt engineering. That term grew out of the subculture of artists who first adopted generative AI as a tool for creating art. Prompt engineering is about experimenting with prompts to see what happens. Through a lot of trial and error, early prompt engineers discovered techniques that work surprisingly well to influence generative AI output. Prompt engineering has evolved into a sophisticated craft, but there are some simpler, well-established techniques to get better results as you start using generative AI tools.

- Use style modifiers. From cave drawings to 3D renders, art has taken countless forms. Include a specific style of art, like Impressionism, or a specific artist, like Monet, in your prompt. Describe eras, geographic regions, or materials. Anything that's frequently associated with a specific art style will be part of the model.
- Use quality descriptors. Although AI models don't have opinions about what is beautiful, we humans sure do, and we're not afraid to write them down! Those subjective notions end up becoming part of the model. So asking for a picture of a "beautiful, high-definition, peaceful countryside village" will probably generate something nice to look at.
- Repeat what's important. It would be ridiculous to ask an artist to paint a "snowy snowy snowy snowy snowy snowy countryside village." But generative AI models respond well to repetition (and won't get annoyed by it). Anything repeated gets extra attention, even adjectives like "very" or "many."
- Weigh what's important. Some models allow you to directly control the importance of certain terms. For example, Stable Diffusion allows you to put a number value on a portion of the prompt. So "countryside village | snowy:10 | stars:5 | clouds:-10" would make for lots of fallen snow, but a clear and starry night. Not every model supports this kind of direct weighting, and those that do may use a different syntax, so investigate the nuances of the tool you're using.

Whether you call it an art, a craft, or a science, prompt engineering requires practice. Remember: there's no perfect prompt, and there is no perfect artwork. Be open to surprises as you create AI-generated images, and you'll soon find imagery that works well for your next project.

Ethics of Generated Artwork

Advances in AI technology have raised several ethical questions. Although it's hard to find answers that satisfy everyone, we can try to understand the concerns. For many artists, plagiarism is the primary concern. If their work is used to train a model, then the model can replicate their style. In some cases, the imagery is an obvious derivation of existing work. In others, the style is so similar that the counterfeit could pass as original. Many artists want their work removed from training data, and thankfully the curators of popular models are responding in good faith.

Impersonation is a less obvious, more insidious concern. You may be familiar with deepfakes, videos where AI is used to replace someone's face with that of another.
Sadly, deepfakes are often created without the consent of the person being imitated. At its most harmless, you get a funny video of a pop star saying something silly. But what if that star's image is made to sell a product? Or if a politician is made to spread lies about an issue? This is just the tip of the iceberg. We must strengthen our skills in detecting fraud now that "seeing is believing" no longer holds true.

Generative AI is only as good as the data it's trained on. If the data is biased, the generated output will also be biased. Historically, doctors have been depicted as men, so models could have a strong connection between "doctor" and "man"—even if that connection doesn't reflect reality today. So even if you aren't trying to perpetuate a stereotype, your model might do it for you. Consider using weighting to counteract biases.

Generative AI is always going to be derivative in some capacity, and this might actually stifle genuine creativity. Would we have Cubism if Picasso had access to DALL-E? And as tomorrow's AI is trained on today's generated images, the same styles will repeat themselves. We really do need humans to contribute their own artistic vision as a form of human-in-the-loop.

Finally, if you plan to use generated imagery, consider acknowledging where it came from with something as simple as a watermark that states "AI generated" (a minimal sketch of one way to do this follows at the end of this unit). Transparency builds trust. Models can also be programmed to skip works labeled this way, so they don't contribute to a feedback loop. There's no single right way to attribute works as AI generated, but the Modern Language Association (MLA) has some guidelines. Now that you know more about using generative AI effectively and responsibly, try adding generated imagery to your next project.
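As one simple illustration of that kind of disclosure, here is a minimal sketch using the Pillow imaging library in Python. The file names and label placement are placeholders, and this is just one possible approach, not an official attribution standard.

```python
# Minimal sketch: stamp an "AI generated" label onto an image with Pillow.
# File names and the label position are illustrative placeholders.
from PIL import Image, ImageDraw

def add_ai_label(path_in: str, path_out: str, label: str = "AI generated") -> None:
    image = Image.open(path_in).convert("RGB")
    draw = ImageDraw.Draw(image)
    # Draw the label near the bottom-left corner using the default font.
    draw.text((10, image.height - 20), label, fill=(255, 255, 255))
    image.save(path_out)

# Example usage (the file names are placeholders, not real assets).
# add_ai_label("generated_scene.png", "generated_scene_labeled.png")
```

A visible label like this is the simplest form of disclosure; for anything more formal, follow the attribution guidance of your publisher or the MLA guidelines mentioned above.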