Einstein Generative AI PDF

Summary

This document explores Einstein generative AI, emphasizing its importance in data security and innovation. It highlights Salesforce's commitment to trustworthy practices and outlines key principles like accuracy, safety, and transparency in generative AI development.

Full Transcript


Einstein Generative AI & Trust

It’s important that your data stays safe while you innovate with new technology. Einstein generative AI keeps trust as its #1 value, and we strive to make sure your data is secure while also creating experiences that are accurate and safe.

Einstein Generative AI and Your Data

At Salesforce, trust is our #1 value. To keep your data secure, Salesforce has agreements in place with large language model (LLM) providers, such as OpenAI. These agreements allow organizations to use generative AI capabilities without their private data being retained by the LLM providers.

Trusted Generative AI

Salesforce’s Einstein generative AI solutions are designed, developed, and delivered based on our five principles for trusted generative AI.

Accuracy: We back up model responses with explanations and sources whenever possible. For the majority of use cases, we recommend that a human checks model responses before sharing them with end users.
Safety: We work to detect and mitigate bias, toxicity, and harmful responses from models used in our products through industry-leading detection and mitigation techniques.
Transparency: We ensure that our models and features respect data provenance and are grounded in your data whenever possible.
Empowerment: We believe our products should augment people’s capabilities and make them more efficient and purposeful in their work.
Sustainability: We strive to build right-sized models that prioritize accuracy while reducing our carbon footprint.

Reviewing Generative AI Responses

Generative AI is a tool that helps you be more creative and productive and make smarter business decisions. This technology isn’t a replacement for human judgment. You’re ultimately responsible for any LLM-generated response you share with your customers. Whether text is human- or LLM-generated, your customers associate it with your organization’s brand and use it to make decisions.
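The human-review guidance above can be sketched as a simple gate: an LLM draft is only published after a person approves (and optionally edits) it. This is an illustrative sketch, not a Salesforce API; `generate_draft` and `publish_with_review` are hypothetical names standing in for any LLM call and review workflow.

```python
# Minimal human-in-the-loop review gate for LLM-generated replies.
# All names here are illustrative, not part of any Salesforce API.

def generate_draft(prompt):
    """Placeholder for an LLM call that returns a draft reply."""
    return f"Draft reply for: {prompt}"

def publish_with_review(prompt, review):
    """Generate a draft, then publish only if the human reviewer approves.

    `review` receives the draft and returns (approved, edited_text).
    Returns the published text, or None if the reviewer rejects it.
    """
    draft = generate_draft(prompt)
    approved, edited = review(draft)
    if not approved:
        return None          # start over and generate another response
    return edited or draft   # the reviewer may edit the response directly

# Example: a reviewer that approves and lightly edits the draft.
published = publish_with_review(
    "reset password",
    review=lambda d: (True, d + " (verified against knowledge article)"),
)
```

The key design point is that rejection produces no output at all, matching the guidance that if a response doesn’t meet your standards, you don’t have to use it.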
So it’s important to make sure that LLM-generated responses intended for external audiences are accurate and helpful, and that they align with your company’s values, voice, and tone. When your end users review generated responses for external audiences, focus on the accuracy and safety of the content.

Accuracy: Generative AI can sometimes “hallucinate,” or fabricate responses that aren’t grounded in fact or existing sources. Before you publish a response, check to make sure that key details are correct. For example, is the customer service suggestion based on an existing and up-to-date knowledge article?
Bias and Toxicity: Because AI is created by humans and trained on data created by humans, it can also contain bias against historically marginalized groups. Rarely, some responses can contain harmful language. Check your responses to make sure they’re appropriate for your customers.

If the response doesn’t meet your standards or your company’s business needs, you don’t have to use it. Some solutions allow end users to edit the response directly; if not, it’s best to start over and generate another response.

Einstein Generative AI Glossary of Terms

Artificial intelligence (AI): A branch of computer science in which computer systems use data to draw inferences, perform tasks, and solve problems with human-like reasoning.
Bias: Systematic and repeatable errors in a computer system that create unfair outcomes, in ways different from the intended function of the system, due to inaccurate assumptions in the machine learning process.
Corpus: A large collection of textual datasets used to train an LLM.
Domain adaptation: The process through which organization-specific knowledge is added to the prompt and the foundation model.
Fine-tuning: The process of adapting a pre-trained language model for a specific task by training it on a smaller, task-specific dataset.
Generative AI gateway:
The gateway exposes normalized APIs to interact with foundation models and services provided by different vendors, internally and from the partner ecosystem. Alternate terms: Einstein gateway, the gateway.
Generative Pre-Trained Transformer (GPT): A family of language models trained on a large body of text data so that they can generate human-like text.
Grounding: The process through which domain-specific knowledge and customer information is added to the prompt to give the model the context it needs to respond more accurately.
Hallucination: A type of output where the model generates semantically correct text that is factually incorrect or makes little to no sense, given the context.
Human in the loop (HITL): A model that requires human interaction.
Hyperparameter: A parameter used to control the training process. Hyperparameters sit outside the generated model.
Inference: The process of requesting a model to generate content.
Inference pipelines: A sequence of reusable generative steps stitched together to accomplish a generation task. This includes resolving prompt instructions, passing them to an LLM, moderating the results, and sending results back to the user.
Intent: An end user’s goal for interacting with an AI assistant.
Large language model (LLM): A language model consisting of a neural network with many parameters, trained on large quantities of text.
Machine learning: A subfield of AI specializing in computer systems that are designed to learn, adapt, and improve based on feedback and inferences from data, rather than explicit instruction.
Model cards: Documents that detail a model’s performance as well as its inputs, outputs, training method, conditions under which the model works best, and ethical considerations in use.
Natural Language Processing (NLP): A branch of AI that uses machine learning to understand language as written by people. Large language models are one of many approaches to NLP.
Parameter size:
The number of parameters the model uses to process and generate data.
Prompt: A natural language description of the task to be accomplished. An input to the LLM.
Prompt chaining: The method of breaking up complex tasks into several intermediate steps and then tying them back together so that the AI generates a more concrete, customized, and better result.
Prompt design: The process of creating prompts that improve the quality and accuracy of the model’s responses. Many models expect a certain prompt structure, so it’s important to test and iterate prompts on the model you’re using. After you understand what structure works best for a model, you can optimize your prompts for the given use case.
Prompt engineering: An emerging discipline within AI focused on maximizing the performance and reliability of models by crafting prompts in a systematic and scientifically rigorous way.
Prompt injection: A method used to control or manipulate the model’s output by giving it certain prompts. With this method, users and third parties attempt to get around restrictions and perform tasks that the model wasn’t designed for.
Prompt instructions: Natural language instructions entered into a prompt template. Your user just has to send an instruction to Einstein. Instructions have a verb-noun structure and a task for the LLM, such as “Write a description no longer than 500 characters.” Your user’s instructions are added to the app’s prompt template, and then relevant CRM data replaces the template’s placeholders. The prompt template is now a grounded prompt and is sent to the LLM.
Prompt management: The suite of tools used to effectively build, manage, package, and share prompts.
Prompt template: A string with placeholders that are replaced with business data values to generate a final text instruction that is sent to the LLM.
Retrieval-augmented generation (RAG): A form of grounding that uses an information retrieval system
like a knowledge base to enrich a prompt with relevant context, for inference or training.
Semantic retrieval: A scenario that allows an LLM to use similar and relevant historical business data that exists in a customer’s CRM data.
System cards: An expansion of the concepts within, and application of, model cards to address the complexity of an overall AI system, which may integrate multiple models. For LLM-based systems, this includes the core components of model cards (for example, performance, use cases, ethical considerations) plus how the system operates, what models it uses, and how content is chosen, generated, and delivered.
Temperature: A parameter that controls how predictable and varied the model’s outputs are. A model with a high temperature generates random and diverse responses. A model with a low temperature generates focused and more consistent responses.
Toxicity: A term describing many types of discourse, including but not limited to offensive, unreasonable, disrespectful, unpleasant, harmful, abusive, or hateful language.
Trusted AI: Guidelines created by Salesforce that are focused on the responsible development and implementation of AI.

Einstein Usage

Einstein Requests is a usage metric for generative AI. The use of generative AI capabilities, in either a production or sandbox environment, consumes Einstein Requests and possibly Data Cloud credits. API calls to the LLM gateway use Einstein Requests. For each API call to the LLM gateway, the number of Einstein Requests used depends on the API call size factor and the LLM usage type multiplier.

Rate Card for Einstein Requests

Many of Salesforce’s generative AI Services and features consume Einstein Requests. Some Salesforce Services include a quantity of Einstein Requests, with the specific entitlement indicated in the Usage Details table on the Order Form for such Services.
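The stated calculation (Einstein Requests consumed per gateway call = API call size factor multiplied by the LLM usage type multiplier) can be sketched as follows. The multiplier and size-factor values below are made-up placeholders; the real figures come from the Salesforce rate card.

```python
# Sketch of the Einstein Requests usage calculation:
# requests consumed = API call size factor * LLM usage type multiplier.
# The values in this table are placeholders for illustration only;
# actual multipliers are defined in the Salesforce rate card.

USAGE_TYPE_MULTIPLIER = {
    "standard-llm": 1.0,   # hypothetical usage type
    "premium-llm": 5.0,    # hypothetical usage type
}

def einstein_requests_used(size_factor, usage_type):
    """Einstein Requests consumed by one API call to the LLM gateway."""
    return size_factor * USAGE_TYPE_MULTIPLIER[usage_type]

# A large call (size factor 4) to a premium model with these placeholders:
einstein_requests_used(4, "premium-llm")  # 20.0
```

The point of the sketch is only that both factors scale consumption, which is why a large request to a costly model draws down the entitlement fastest.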
In other cases, generative AI features may be enabled within Services that don’t include an entitlement of Einstein Requests, in which case use of those generative AI features consumes entitlements included with other Services purchased by the customer. Use of generative AI features may also consume Data Cloud credits. Each API call made through the LLM gateway to a large language model (LLM) consumes Einstein Requests. Usage calculations include:
1. a usage type multiplier associated with the LLM used, and
2. an API call size factor associated with the size of the API call.

With Einstein Requests, you have access to Digital Wallet, a free account management tool that gives you near real-time consumption data and tools to monitor and manage your Digital Wallet-enabled, consumption-based Salesforce products. Access Digital Wallet to start tracking your org’s usage; the resources linked here can help you navigate it efficiently and gain valuable insights into usage trends for your consumption-based products.

Large Language Model Support

Understand supported large language models (LLMs) in Einstein. Identify Salesforce-managed models, available out of the box. Learn how you can bring your own model (BYOLLM) by using Einstein Studio and then use Prompt Builder to configure your prompts with the model.

Salesforce-Managed Models

Choose a Salesforce-managed model in Prompt Builder to get started with prompts fast and with no setup. Prompt Builder allows you to customize prompt templates with different models and introduce them to your apps. For instance, you can update your Sales Emails prompt template in Prompt Builder to use a different model and then bring that updated prompt template into Sales Emails.

Bring Your Own Large Language Model (BYOLLM)
The Einstein platform allows you to customize your AI experience by bringing your own models to Salesforce. You can bring in your own model by using Einstein Studio and write a prompt template in Prompt Builder, which you can then integrate into your own apps or copilot. Some common reasons companies want to use different models with Einstein include:
- Your company has an LLM fine-tuned to your data.
- You can use your Azure, Bedrock, OpenAI, or Vertex account.
BYOLLM supports all the Salesforce-managed models.

Develop LLM Solutions with the Models API

Developers can use the Models API to code custom solutions with even more models.

Deprecated Models

Model deprecation is the process by which a model provider gradually phases out a model, usually in favor of a new and improved model. The process starts with an announcement outlining when the model is no longer accessible or supported. The deprecation announcement usually contains a date when the model will no longer be available, sometimes called a shutdown date. Deprecated models are available until the shutdown date. Deprecation announcements are also available in the Einstein Platform release notes on a monthly basis. After the model is no longer available, Salesforce reroutes requests to the next closest replacement model on the Reroute Date. We recommend that you start migrating your applications as soon as the deprecation is announced. During migration, update and test each part of your application with the replacement model that we recommend.

Prepare for Model Deprecation and Rerouting

This document provides guidance on retesting Salesforce AI implementations during model deprecation and model rerouting. Both deprecation and rerouting are considered temporary. If your model is in one of these states, consider switching to a new model.

Deprecation: In the context of AI development, “deprecation” describes a model that is no longer recommended for use.
Model providers, such as OpenAI, typically deprecate old models when new, improved models are available. Deprecation is a formal process that signals to users to transition to a different model. During the deprecation phase, a deprecated model still works as designed.

Rerouting: Model rerouting is the process of redirecting requests from a retired model to an available alternative. Rerouting occurs when the model provider retires the model. Salesforce reroutes requests to ensure the continuity of service.

Deprecation and Rerouting Information

Model deprecation notices are published in the Einstein Platform monthly release notes. Rerouting notices are published in the monthly release notes 30 days prior to the model’s retirement date. When a model provider retires a model, Salesforce routes requests to the latest version of that model family. For example, OpenAI GPT 3.5 Turbo 16k requests route to OpenAI GPT 3.5 Turbo.

Define Success for Your Implementation

1. Define what success looks like for your end users. Different use cases require unique testing methods.
2. Save testing results from the initial implementation phase. If you have examples of expected responses, use them to compare with the result. Use the same tests for the new model to ensure there aren’t significant variations.

Necessary Steps for Rerouting

1. When configuring a model in Einstein Studio Model Playground, deprecated models can’t be selected. Instead, select a new foundation model to continue using your configurations.
2. In Einstein Studio, create a new configured model that matches the rerouted model. To ensure simple rerouting, set the configuration with the same hyperparameters as the retired model.
3. After saving the configuration for the rerouted model, open Prompt Builder.
4. Retest prompt templates using the rerouted model. You can test prompts and responses directly in the Model Playground in Einstein Studio.
Adjust the prompt template so that it aligns with your goals.
5. If your implementation includes agents, open the agent in Agent Builder. In Agent Builder, you can test topics, actions, and knowledge to configure an agent. Adjust the agent behavior so that it aligns with your predetermined definition of success.
6. After setting up the configured model, prompt template, and agent individually, test your entire AI implementation end to end. It’s important to continually gather user feedback for improvements.

Geo-Aware LLM Request Routing on the Einstein Generative AI Platform

The Einstein generative AI platform routes large language model (LLM) requests to servers that are closest to where your Einstein generative AI platform instance is located. Geo-aware routing includes LLM API requests made by Einstein generative AI features that use the Einstein generative AI platform.

Supported Models

Geo-aware routing supports OpenAI models (such as GPT-3.5 Turbo and GPT-4 Omni) and Anthropic models (hosted on Amazon Bedrock), provided that the models are available in the relevant regions.

Availability

Geo-aware routing is available to:
- Salesforce applications and features that use OpenAI and Anthropic models through the Einstein generative AI platform, and
- Customers who use OpenAI and Anthropic models through the Models API or Prompt Builder.
To find out whether geo-aware routing is enabled for any specific Salesforce AI feature, refer to its documentation or contact your Salesforce account executive.

Proximity and Routing

Proximity to the nearest LLM server is determined by the region in which your Einstein generative AI platform instance is located. If you enabled the Einstein generative AI platform on or after June 13, 2024, your Einstein generative AI platform region is the same as your Data Cloud region (Data Cloud: Data Center Locations). Otherwise, contact your Salesforce account executive to learn where it’s provisioned.
If a model isn’t available in a nearby region, the requests are routed to the US. You can’t disable geo-aware routing or the rerouting to the US when models aren’t available in a nearby region.

Routing in the Models API

If you use the Einstein generative AI platform directly through the Models API, we recommend that you use model API names for geo-aware models.

Feedback and Audit

To view where your LLM requests are being routed, refer to the Feedback and Audit features.

Feature-Specific Support

To check whether an Einstein feature supports geo-aware LLM request routing, see the feature’s documentation.

Retrieval Augmented Generation

Retrieval Augmented Generation (RAG) in Data Cloud is a framework for grounding large language model (LLM) prompts. By augmenting the prompt with accurate, current, and pertinent information, RAG improves the relevance and value of LLM responses for users. When you submit an LLM prompt, RAG in Data Cloud:
1. Retrieves relevant information from a knowledge store containing structured and unstructured content
2. Augments the prompt by combining this information with the original prompt
3. Generates a prompt response

RAG in Data Cloud

It’s helpful to think of RAG in two main parts: offline preparation and online usage.

Quick Start for Offline Preparation

The fastest way to set up your RAG solution is to Add a Data Library, either in Agent Builder or Setup. When you create an Einstein Data Library, you automatically create all the elements needed for a working RAG-powered solution. Salesforce uses default settings for all of the components: vector data store, search index, retriever, prompt template, and standard action. If you want, you can then customize these elements to fine-tune RAG solutions for your use cases.

Advanced Setup for Offline Preparation

To implement RAG in Data Cloud, start by connecting the structured and unstructured data that RAG uses to ground LLM prompts.
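The retrieve-then-augment flow that RAG performs can be shown with a toy sketch. Real implementations use a vectorized search index for semantic matching; here a simple word-overlap score stands in for that, and every name is illustrative rather than part of the Data Cloud API.

```python
# Toy sketch of the RAG retrieve-augment steps. A word-overlap score
# stands in for vector similarity; nothing here is a Salesforce API.

def retrieve(query, chunks, k=1):
    """Return the k chunks sharing the most words with the query."""
    q = set(query.lower().split())
    scored = sorted(
        chunks,
        key=lambda c: len(q & set(c.lower().split())),
        reverse=True,
    )
    return scored[:k]

def augment(query, chunks):
    """Combine retrieved context with the original prompt."""
    context = "\n".join(retrieve(query, chunks))
    return f"Context:\n{context}\n\nQuestion: {query}"

# A tiny "knowledge store" of two chunks.
knowledge = [
    "Refunds are issued within 5 business days.",
    "Password resets require email verification.",
]
prompt = augment("How long do refunds take?", knowledge)
# The augmented prompt now grounds the LLM with the refund-policy chunk.
```

The final step, submitting the augmented prompt to an LLM, is omitted; the sketch only shows how retrieval injects current, relevant facts into the prompt before generation.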
Data Cloud uses a search index to manage structured and unstructured content in a search-optimized way. Content can be ingested from a variety of sources and file types. Some examples of unstructured content used with RAG include service replies, cases, RFP responses, knowledge articles, FAQs, emails, and meeting notes. Offline preparation involves these steps:
1. Connect your unstructured data.
2. Create a search index configuration to chunk and vectorize the content. Chunking breaks the text into smaller units, reflecting passages of the original content, such as sentences or paragraphs. Vectorization converts chunks into numeric representations of the text that capture semantic similarities.
3. Store and manage the search index in Data Cloud. To learn more, see Search for AI, Automation, and Analytics.

Retrievers serve as the bridge between search indexes and prompt templates. When you create a search index, Data Cloud automatically creates a default retriever for it. To support a variety of use cases, you can create custom retrievers in Einstein Studio. Custom retrievers further refine the search criteria and retrieve the most relevant information used to augment prompts. To learn more, see Manage Retrievers.

Online Usage

The final piece of the RAG implementation puzzle is to add a call to a retriever in a prompt template. For a given prompt template, the prompt designer can customize retriever query and results settings to populate the prompt with the most relevant information. To learn more, see Ground with Retrieval Augmented Generation (RAG) in Data Cloud. Each time a prompt template with a retriever is run, this sequence occurs:
1. The retriever is invoked with a dynamic query initiated from the prompt template.
2. The query is vectorized (converted to numeric representations). Vectorization enables search to find semantic matches in the search index (which is already vectorized).
3.
The query retrieves the relevant context from the indexed data in the search index.
4. The original prompt is populated with the information retrieved from the search index.
5. The prompt is submitted to the LLM, which generates and returns the prompt response.

Many LLMs were trained broadly across the Internet on static, publicly available content. RAG adds information to a prompt that’s accurate, up to date, and not available as part of the LLM’s trained knowledge. It’s like supplementing the LLM’s capabilities by providing relevant information retrieved from a knowledge store that contains the latest, best version of the facts. With RAG, prompt template users can bring proprietary data to the LLM without retraining or fine-tuning the model, resulting in generated responses that are more pertinent to their context and use case.

Considerations for Einstein Generative AI

Review these considerations before you use Einstein generative AI.

Rate Limits

Customers have a default rate limit of 300 large language model (LLM) generation requests per minute at the Salesforce Organization ID level. These generations can be triggered from Prompt Builder, Einstein Studio, or the Models API (REST, Apex) as part of Einstein generative AI capabilities. To learn more, see Developer Documentation: Rate Limits. To request a rate limit increase, contact your Salesforce account executive.

Record Pages in Lightning App Builder

We don’t recommend putting record fields into a narrow region on the right-hand side of the page when designing a custom record page in the Lightning App Builder. On pages that support Einstein Copilot, the Einstein panel covers up content on the right-hand side of the page. If record fields are behind the Einstein panel, users who use Einstein for field generation can’t see the updated field information.

Generative AI-Enabled Fields
If a record field doesn’t show the generative AI-enabled field icon, then a prompt template isn’t supported for that field component.

Einstein Generative AI Features

Agentforce
- Agentforce Agent Types: Agents increase productivity and reduce your teams’ workload by automating routine tasks and assisting with complex ones. They’re more autonomous than other conversational AI solutions, so they can independently identify opportunities for action, anticipate next steps, and initiate tasks within the use cases and guardrails you specify. Agent types built for common use cases make it easy to create your first agent and get started.

Analytics
- Report Formula Generation: Helps with the technical work of creating row-level and summary formulas in Data Cloud reports. Describe a calculation in simple terms, and Einstein generative AI discovers the relevant data and suggests a formula.

Commerce
- Return Insights: Uses multiple streams of data to gather return information about frequently returned products, before analyzing and sorting the return reasons.
- Product Fields: Enhances product fields for multiple products in a single process using Einstein generative AI. Einstein uses your instructions and any linked reference fields to generate revised product descriptions for the selected products.
- Semantic Search: Improves your search results by accounting for synonyms, alternate spellings, abbreviations, typos, and product themes. Reduces “no results found” pages, lowers bounce rates, and improves conversion rates.
- Smart Promotions: Uses Einstein and trusted data from Commerce Cloud to draft promotions with a few clicks. Quickly create basic and advanced promotions using natural language instructions and generative AI.
- Commerce Concierge for Commerce Stores: Uses generative AI and large language models to inform, empower, and assist shoppers throughout their discovery, purchase, and support journeys.

Data Cloud
- Einstein Segments: Quickly create targeted audience segments using trusted customer data available in Data Cloud.

Education Cloud
- Intelligent Question Generation: Uses Einstein to draft assessment questions for support program intake assessments.
- Einstein Skills Generator: Uses Einstein in the Learning Wizard to suggest skills that fit your learning courses and learning programs based on data from the public domain and your Education Cloud org.

Field Service
- Pre-Work Brief: Shows Field Service mobile workers a brief that uses generative AI to tell them everything they need to know about their upcoming work order.
- Post-Work Summary (Beta): Helps your mobile workers save time by letting them ask Einstein Copilot for a comprehensive summary of their completed job.
- Field Service Dispatcher Actions: Uses Copilot to get a daily overview of service appointments that require immediate attention, such as appointments with rule violations, overlaps, SLA risks, or emergencies. You can also use Copilot to quickly find service appointments and convert your search criteria into filters that you can access from the appointment list.

Industries
- Contracts AI: Use generative AI to change how your business handles contracts, clauses, and documents. Contracts AI streamlines the labor-intensive process of drafting clauses for legal agreements and enables the efficient extraction of contract details and clauses from complex PDFs.

Loyalty Management
- Einstein for Loyalty Management: Use generative AI to summarize loyalty programs and loyalty promotions.

Marketing Cloud Account Engagement
- Einstein Assistant: Use Einstein Assistant to quickly generate content for forms and landing pages, and to draft email content.

Marketing Cloud
- Subject Line & Body Copy Generation: Quickly generate subject lines and body copy for messages. In Einstein Copy Insights, test, copy, or download generated content, and create subject lines and body copy directly in Content Builder.
- Einstein Segment Creation: Creates a target segment for your campaign by using Einstein.
- Campaign Brief and Campaign: Creates a campaign brief and campaign components by using Einstein Copilot. Einstein drafts your brief, creates the campaign, and drafts the subject line, preheader, and paragraph copy for your email campaign.
- Subject Line, Preheader, and Message Copy in the Content Builder: In the Content Builder, generates an email based on the key message and target audience in your campaign brief. Or, you can enter a target audience and key message for a standalone marketing email or SMS message. Einstein can create a subject line, preheader, and paragraph text for an email, or text for an SMS message.

Net Zero Cloud
- Einstein Generative AI for ESG Report Generation: Get recommended answers from uploaded disclosure reports using generative AI. Improve environment, social, and governance (ESG) report accuracy by using Einstein to compare the guidance for a question to your proposed answer.

Nonprofit and Education Clouds
- Copilot Actions: Fundraising Gift Proposals: Uses Einstein Copilot to streamline and expedite writing compelling and personalized gift proposals using data in your Salesforce org.
- Einstein Generative AI in Nonprofit Cloud: Uses the power of generative AI to produce a variety of summaries and a streamlined version of grant applications from data in your Salesforce org.

Platform
- Agentforce for Developers: An AI-powered developer tool that’s available as a Visual Studio Code extension in VS Code desktop and Code Builder. Agentforce for Developers is built using CodeGen and xGen-Code, secure, custom AI models from Salesforce. To learn how to enable Agentforce for Developers, see Agentforce for Developers Setup.
- Einstein for Formulas: Assists admins with formula explanations and fixing syntax errors for new or existing Salesforce formulas. This applies to formulas used in formula fields, default field values, and record validation rules.

Sales
- Automatic Contact Enhancements: Uses generative AI to add your contacts’ phone, address, title, seniority level, department group, and buyer attributes automatically from email interactions. Automatic Contact Enhancements builds content-rich contact profiles to show on buyer relationship maps to help close deals.
- Call Summaries: Einstein Conversation Insights users can create generative call summaries on voice and video calls. Sales reps can create and edit summaries that include next steps and customer feedback, and share summaries for easier team collaboration in the flow of work.
- Call Explorer: Using generative AI, Call Explorer answers questions about competitor mentions, coaching opportunities, and more directly from voice and video call records.
- Einstein Coach: Uses generative AI to provide sales reps with quick, personal, and actionable feedback on their sales pitch.
- Sales Signals: Use Sales Signals to see the topics that your customers are bringing up with your sales teams. Filter topics by category and keyword to see a dashboard of relevant conversations.
- Generative Conversation Insights: Use Generative Conversation Insights to answer any questions you have about your teams’ sales calls. Define prompts that query a large language model (LLM) with the transcript of the call, and show relevant insights on the call record.
- Einstein Sales Emails: Sales reps save time using Einstein generative AI to generate personalized emails to send to contacts and leads. Sales reps review and customize the emails before sending them.
- Sales Summaries: Sales Summaries give sales users AI-generated summaries of accounts, contacts, leads, and opportunities. When users ask Einstein to summarize an account, contact, lead, or opportunity record, the Summarize Record copilot standard action uses a Sales Summaries prompt template to generate the summary.
- Sales Emails for Experience Cloud Partners: Einstein generative AI is also available to help partner users draft emails to contacts and leads from Experience Cloud Aura sites.

Salesforce Flow
- Einstein for Flow: Save time by describing what you want to automate and letting Einstein generative AI build a draft flow to get you started with your automation.

Service
- Einstein AI-Generated Search Answers: Resolve and deflect issues with AI-Generated Search Answers. Enter a question or phrase in the global search bar, knowledge sidebar, or Experience site. Einstein AI-Generated Search Answers creates a summarized response tailored to your question based on knowledge articles and other sources.
- Generative AI Surveys: Uses generative AI to create and translate surveys tailored for diverse needs. Generative AI makes survey creation and translation faster and more global-friendly.
- Service AI Grounding: Use Einstein AI to draft responses using a defined data source. Grounding indexes the objects and fields so that Einstein knows which information to base recommendations on. With grounding, your unique knowledge articles and case history add context and personalization to customer communications.
- Knowledge Creation: Optimize agent productivity and efficiently grow your knowledge base with AI-generated article drafts. Based on a customer conversation, Einstein drafts fluent, relevant knowledge articles that agents review, edit, and save. As your knowledge base develops, agents can quickly find answers to issues and address emerging customer pain points.
- Service Replies: Optimizes agent productivity and response quality with AI-generated replies. Based on customer conversations, Einstein drafts and recommends fluent, courteous, and relevant responses to agents as chat conversations unfold.
- Work Summaries: Save agents time with AI-generated case summaries. Based on a conversation between an agent and customer, Einstein predicts and fills a summary, issue, and resolution. Agents can then review, edit, and save these summaries. With Conversation Catch-Up, Einstein also shows mid-conversation summaries to agents and supervisors when they accept or monitor an ongoing conversation.
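The per-minute limit described in the Rate Limits consideration above (300 LLM generation requests per minute per org by default) can be respected client-side with a simple sliding-window throttle. This is a sketch only; the class and method names are hypothetical, and Salesforce enforces the real limit server-side.

```python
# Sliding-window client-side throttle for a requests-per-minute budget,
# such as the default 300 LLM generation requests per minute per org.
# Illustrative only; not a Salesforce API.
import time
from collections import deque

class RequestThrottle:
    def __init__(self, max_requests=300, window_seconds=60.0):
        self.max_requests = max_requests
        self.window = window_seconds
        self.sent = deque()  # timestamps of requests still inside the window

    def try_acquire(self, now=None):
        """Return True if a request may be sent now, else False."""
        now = time.monotonic() if now is None else now
        # Drop timestamps that have fallen out of the window.
        while self.sent and now - self.sent[0] >= self.window:
            self.sent.popleft()
        if len(self.sent) < self.max_requests:
            self.sent.append(now)
            return True
        return False

# Demo with a tiny limit of 2 per 60-second window: the third call is
# throttled, and the fourth succeeds after the first timestamp expires.
throttle = RequestThrottle(max_requests=2, window_seconds=60.0)
results = [throttle.try_acquire(now=t) for t in (0.0, 1.0, 2.0, 61.0)]
```

Throttling before the gateway call avoids burning failed requests against the limit; for sustained overage, the document's own guidance applies: request a rate limit increase through your Salesforce account executive.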
