Data Cloud Identity Resolution & Data Stream Troubleshooting (PDF)
Summary
This document contains various Data Cloud questions and answers regarding identity resolution, data stream troubleshooting, and data modeling best practices. It provides guidance on unifying customer profiles, mapping data fields, resolving data stream issues, and calculating customer lifetime value (LTV).
Full Transcript
A retailer wants to unify profiles using a Loyalty ID, which is different from the unique ID of their customers. Which object should the consultant use in identity resolution to perform exact match rules on the Loyalty ID?
Individual object
Party Identification object
Contact Identification object
Loyalty Identification object
Overall explanation
The Party Identification object is the correct object to use in identity resolution to perform exact match rules on the Loyalty ID. The Party Identification object is a child object of the Individual object that stores different types of identifiers for an individual, such as email, phone, loyalty ID, or social media handle. Each identifier has a type, a value, and a source. The consultant can use the Party Identification object to create a match rule that compares the Loyalty ID type and value across different sources and links the corresponding individuals. The other options are not correct objects to use in identity resolution to perform exact match rules on the Loyalty ID. The Loyalty Identification object does not exist in Data Cloud. The Individual object is the parent object that represents a unified profile of an individual, but it does not store the Loyalty ID directly. The Contact Identification object is a child object of the Contact object that stores identifiers for a contact, such as email and phone, but it does not store the Loyalty ID.
Reference: Data Modeling Requirements for Identity Resolution, Identity Resolution in a Data Space, Configure Identity Resolution Rulesets, Map Required Objects, Data and Identity in Data Cloud
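To make the exact-match mechanism concrete, the following toy Python sketch (a conceptual illustration only, not Data Cloud's actual matching engine, with hypothetical field names) groups Party Identification-style records that share the same identifier name and value:

    from collections import defaultdict

    # Hypothetical Party Identification records: each has a type (id_name),
    # a value (id_value), and a source, as described above.
    party_identifications = [
        {"individual_id": "crm-001", "id_name": "Loyalty ID", "id_value": "L-9001", "source": "CRM"},
        {"individual_id": "web-123", "id_name": "Loyalty ID", "id_value": "L-9001", "source": "Web"},
        {"individual_id": "crm-002", "id_name": "Loyalty ID", "id_value": "L-9002", "source": "CRM"},
    ]

    # Exact match rule: link source individuals that share the same
    # identifier name and value across different sources.
    matches = defaultdict(list)
    for rec in party_identifications:
        matches[(rec["id_name"], rec["id_value"])].append(rec["individual_id"])

    for key, ids in matches.items():
        if len(ids) > 1:
            print(f"{key}: link {ids} into one unified profile")
    # ('Loyalty ID', 'L-9001'): link ['crm-001', 'web-123'] into one unified profile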
Question 2 (Correct)
A customer has a Master Customer table from their CRM to ingest into Data Cloud. The table contains a name and primary email address, along with other personally identifiable information (PII). How should the fields be mapped to support identity resolution?
Create a new custom object with fields that directly match the incoming table.
Map name to the Individual object and email address to the Contact Point Email object.
Map all fields to the Customer object.
Map all fields to the Individual object, adding a custom field for the email address.
Overall explanation
To support identity resolution in Data Cloud, the fields from the Master Customer table should be mapped to the standard data model objects that are designed for this purpose. The Individual object stores the name and other personally identifiable information (PII) of a customer, while the Contact Point Email object stores the primary email address and other contact information of a customer. These objects are linked by a relationship field that indicates the contact information belongs to the individual. By mapping the fields to these objects, Data Cloud can use the identity resolution rules to match and reconcile the profiles from different sources based on the name and email address fields. The other options are not recommended because they either create a new custom object that is not part of the standard data model, map all fields to the Customer object (which is not intended for identity resolution), or map all fields to the Individual object (which does not have a standard email address field).
Reference: Data Modeling Requirements for Identity Resolution, Create Unified Individual Profiles

Question 3 (Correct)
Which two steps should a consultant take if a successfully configured Amazon S3 data stream fails to refresh with a "NO FILE FOUND" error message?
Choose 2 answers
Check if the file exists in the specified bucket location.
Check if correct permissions are configured for the Data Cloud user.
Check if the Amazon S3 data source is enabled in Data Cloud Setup.
Check if correct permissions are configured for the S3 user.
Overall explanation
A "NO FILE FOUND" error message indicates that Data Cloud cannot access or locate the file from the Amazon S3 source. There are two likely reasons for this error and two corresponding steps that a consultant should take to troubleshoot it:
The Data Cloud user does not have the correct permissions to read the file from the Amazon S3 bucket. This could happen if the user's permission set or profile does not include the Data Cloud Data Stream Read permission, or if the user's Amazon S3 credentials are invalid or expired. To fix this issue, the consultant should check and update the user's permissions and credentials in Data Cloud and Amazon S3, respectively.
The file does not exist in the specified bucket location. This could happen if the file name or path has changed, or if the file has been deleted or moved from the Amazon S3 bucket. To fix this issue, the consultant should check and verify the file name and path in the Amazon S3 bucket, and update the data stream configuration accordingly.

Question 4 (Correct)
Which data model subject area defines the revenue or quantity for an opportunity by product family?
Product
Engagement
Sales Order
Party
Overall explanation
The Sales Order subject area defines the details of an order placed by a customer for one or more products or services. It includes information such as the order date, status, amount, quantity, currency, payment method, and delivery method. The Sales Order subject area also allows you to track the revenue or quantity for an opportunity by product family, which is a grouping of products that share common characteristics or features. For example, you can use the Sales Order Line Item DMO to associate each product in an order with its product family, and then use the Sales Order Revenue DMO to calculate the total revenue or quantity for each product family in an opportunity.
Reference: Sales Order Subject Area, Sales Order Revenue DMO Reference

Question 5 (Skipped)
A company created a segment targeting high-value customers that it activates through Marketing Cloud for email communication. The company notices that the activated count is smaller than the segment count. What is a reason for this?
Marketing Cloud activations automatically suppress individuals who are unengaged and have not opened or clicked on an email in the last six months.
Marketing Cloud activations apply a frequency cap and limit the number of records that can be sent in an activation.
Data Cloud enforces the presence of a Contact Point for Marketing Cloud activations. If the individual does not have a related Contact Point, it will not be activated.
Marketing Cloud activations only activate those individuals that already exist in Marketing Cloud. They do not allow activation of new records.
Overall explanation
Data Cloud requires a Contact Point for Marketing Cloud activations, which is a record that links an individual to an email address. This ensures that the individual has given consent to receive email communications and that the email address is valid. If the individual does not have a related Contact Point, they will not be activated in Marketing Cloud. This may result in a lower activated count than the segment count.
Reference: Data Cloud Activation, Contact Point for Marketing Cloud

Question 6 (Skipped)
A company wants to be able to calculate each customer's lifetime value (LTV) but also create breakdowns of the revenue sourced by website, mobile app, and retail channels. What should a consultant use to address this use case in Data Cloud?
Flow Orchestration
Streaming data transform
Metrics on metrics
Nested segments
Overall explanation
Metrics on metrics is a feature that allows creating new metrics based on existing metrics and applying mathematical operations to them. This can be useful for calculating complex business metrics such as LTV, ROI, or conversion rates. In this case, the consultant can use metrics on metrics to calculate the LTV of each customer by summing the revenue generated by them across different channels. The consultant can also create breakdowns of the revenue by channel by using the channel attribute as a dimension in the metric definition.
Reference: Metrics on Metrics, Create Metrics on Metrics

Question 7 (Skipped)
What should an organization use to stream inventory levels from an inventory management system into Data Cloud in a fast and scalable, near-real-time way?
Marketing Cloud Personalization Connector
Commerce Cloud Connector
Cloud Storage Connector
Ingestion API
Overall explanation
The Ingestion API is a RESTful API that allows you to stream data from any source into Data Cloud in a fast and scalable way. You can use the Ingestion API to send data from your inventory management system into Data Cloud as JSON objects, and then use Data Cloud to create data models, segments, and insights based on your inventory data. The Ingestion API supports both batch and streaming modes, and can handle up to 100,000 records per second. The Ingestion API also provides features such as data validation, encryption, compression, and retry mechanisms to ensure data quality and security.
Reference: Ingestion API Developer Guide, Ingest Data into Data Cloud
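As a rough illustration of streaming records to the Ingestion API, here is a minimal Python sketch; the tenant URL, token handling, connector name, object name, and payload fields are all placeholder assumptions, so consult the Ingestion API Developer Guide referenced above for the exact contract:

    import requests

    TENANT = "https://<your-tenant-endpoint>"  # placeholder Data Cloud tenant URL
    TOKEN = "<oauth-access-token>"             # obtained through your OAuth flow

    # Hypothetical inventory records to stream in near-real time.
    records = [
        {"sku": "TENT-001", "warehouse": "DFW", "on_hand": 42},
        {"sku": "STOVE-07", "warehouse": "DFW", "on_hand": 5},
    ]

    # Assumed streaming-ingest path with a hypothetical connector ("INVENTORY_API")
    # and object ("InventoryLevel"); records are wrapped in a "data" array.
    resp = requests.post(
        f"{TENANT}/api/v1/ingest/sources/INVENTORY_API/InventoryLevel",
        headers={"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"},
        json={"data": records},
        timeout=30,
    )
    resp.raise_for_status()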
Question 8 (Skipped)
A customer is concerned that the consolidation rate displayed in identity resolution is quite low compared to their initial estimations. Which configuration change should a consultant consider in order to increase the consolidation rate?
Change reconciliation rules to Most Occurring.
Increase the number of matching rules.
Reduce the number of matching rules.
Include additional attributes in the existing matching rules.
Overall explanation
The consolidation rate is the amount by which source profiles are combined to produce unified profiles, calculated as 1 - (number of unified individuals / number of source individuals). For example, if you ingest 100 source records and create 80 unified profiles, your consolidation rate is 20%. To increase the consolidation rate, you need to increase the number of matches between source profiles, which can be done by adding more match rules. Match rules define the criteria for matching source profiles based on their attributes. By increasing the number of match rules, you increase the chances of finding matches between source profiles and thus increase the consolidation rate. On the other hand, changing reconciliation rules, including additional attributes, or reducing the number of match rules can decrease the consolidation rate, as they can either reduce the number of matches or increase the number of unified profiles.
Reference: Identity Resolution Calculated Insight: Consolidation Rates for Unified Profiles, Identity Resolution Ruleset Processing Results, Configure Identity Resolution Rulesets
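The consolidation rate formula is easy to sanity-check; this small Python sketch reproduces the example above (100 source records combined into 80 unified profiles gives a 20% rate):

    def consolidation_rate(source_count: int, unified_count: int) -> float:
        # Consolidation rate = 1 - (unified individuals / source individuals)
        return 1 - (unified_count / source_count)

    print(consolidation_rate(100, 80))  # 0.2, i.e., a 20% consolidation rate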
Question 9 (Skipped)
A company recently started a new line of business. The new business specializes in gourmet camping food. For business reasons as well as security reasons, it's important to keep all Data Cloud data separated by brand. Which capability best supports the need to separate its data by brand?
Data streams for each brand
Data sources for each brand
Data spaces for each brand
Data model objects for each brand
Overall explanation
Data spaces are logical containers that allow you to separate and organize your data by different criteria, such as brand, region, product, or business unit. Data spaces can help you manage data access, security, and governance, as well as enable cross-cloud data integration and activation. For this company, data spaces can support the desire to separate its data by brand, so that it can have different data models, rules, and insights for its outdoor lifestyle clothing and gourmet camping food businesses. Data spaces can also help the company comply with any data privacy and security regulations that may apply to its different brands. The other options are incorrect because they do not provide the same level of data separation and organization as data spaces. Data streams are used to ingest data from different sources into Data Cloud, but they do not separate the data by brand. Data model objects are used to define the structure and attributes of the data, but they do not isolate the data by brand. Data sources are used to identify the origin and type of the data, but they do not partition the data by brand.
Reference: Data Spaces Overview, Create Data Spaces, Data Privacy and Security in Data Cloud, Data Streams Overview, Data Model Objects Overview, [Data Sources Overview]

Question 10 (Skipped)
An organization wants to enable users with the ability to identify and select text attributes from a picklist of options. Which Data Cloud feature should help with this use case?
Value suggestion
Data harmonization
Transformation formulas
Global picklists
Overall explanation
Value suggestion is a Data Cloud feature that allows users to see and select the possible values for a text field when creating segment filters. Value suggestion can be enabled or disabled for each data model object (DMO) field in the DMO record home. Value suggestion helps users identify and select text attributes from a picklist of options without having to type or remember the exact values. It can also reduce errors and improve data quality by ensuring consistent and valid values for the segment filters.
Reference: Use Value Suggestions in Segmentation, Considerations for Selecting Related Attributes

Question 11 (Skipped)
Data Cloud receives a nightly file of all ecommerce transactions from the previous day. Several segments and activations depend upon calculated insights from the updated data in order to maintain accuracy in the customer's scheduled campaign messages. What should the consultant do to ensure the ecommerce data is ready for use for each of the scheduled activations?
Use Flow to trigger a change data event on the ecommerce data to refresh calculated insights and segments before the activations are scheduled to run.
Set a refresh schedule for the calculated insights to occur every hour.
Ensure the segments are set to Rapid Publish and set to refresh every hour.
Ensure the activations are set to Incremental Activation and automatically publish every hour.
Overall explanation
The best option is to use Flow to trigger a change data event on the ecommerce data to refresh calculated insights and segments before the activations are scheduled to run. Flow enables automation and orchestration of data processing tasks based on events or schedules. Flow can be used to trigger a change data event on the ecommerce data, which is a type of event that indicates the data has been updated or changed. This event can then trigger the refresh of the calculated insights and segments that depend on the ecommerce data, ensuring that they reflect the latest data. The refresh of the calculated insights and segments can be completed before the activations are scheduled to run, ensuring that the customer's scheduled campaign messages are accurate and relevant.
The other options are not as good. Setting a refresh schedule for the calculated insights to occur every hour may not be sufficient or efficient: the refresh schedule may not align with the activation schedule, resulting in outdated or inconsistent data, and it may consume more resources and time than necessary, as the ecommerce data may not change every hour. Setting the activations to Incremental Activation with automatic hourly publishing may not solve the problem: Incremental Activation allows only the new or changed records in a segment to be activated, reducing the activation time and size, but it does not ensure that the segment data is refreshed from the ecommerce data, and the activation schedule may not match the ecommerce data update schedule, resulting in inaccurate or irrelevant campaign messages. Setting the segments to Rapid Publish with an hourly refresh may not be optimal or effective: Rapid Publish allows segments to be published faster by skipping some validation steps, such as checking for duplicate records or invalid values, which may compromise the quality or accuracy of the segment data and may not be suitable for all use cases; the hourly refresh also has the same scheduling-mismatch issues described above.
Reference: Salesforce Data Cloud Consultant Exam Guide, Flow, Change Data Events, Calculated Insights, Segments, [Activation]

Question 12 (Skipped)
To import campaign members into a campaign in Salesforce CRM, a user wants to export the segment to Amazon S3. The resulting file needs to include the Salesforce CRM Campaign ID in the name. What are two ways to achieve this outcome?
Choose 2 answers
Include campaign identifier in the filename specification.
Include campaign identifier in the segment name.
Include campaign identifier in the activation name.
Hard code the campaign identifier as a new attribute in the campaign activation.
Overall explanation
The two ways to achieve this outcome are to include the campaign identifier in the activation name and to include it in the filename specification. These two options allow the user to specify the Salesforce CRM Campaign ID in the name of the file that is exported to Amazon S3. The activation name and the filename specification are both configurable settings in the activation wizard, where the user can enter the campaign identifier as text or a variable. The activation name is used as the prefix of the filename, and the filename specification is used as the suffix of the filename. For example, if the activation name is "Campaign_123" and the filename specification is "{segmentName}_{date}", the resulting file name will be "Campaign_123_SegmentA_2023-12-18.csv". This way, the user can easily identify the file that corresponds to the campaign and import it into Salesforce CRM.
The other options are not correct. Hard coding the campaign identifier as a new attribute in the campaign activation is not possible: the campaign activation does not have attributes, only settings. Including the campaign identifier in the segment name is not sufficient: the segment name is not used in the filename of the exported file unless it is specified in the filename specification, so the user would not see the campaign identifier in the file name.
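To illustrate the naming mechanics from the example above, this small Python sketch assembles the export file name from the activation-name prefix and the filename-specification suffix; the template placeholders are assumptions based on the example, so verify them in the activation wizard:

    from datetime import date

    activation_name = "Campaign_123"        # prefix carrying the CRM Campaign ID
    filename_spec = "{segmentName}_{date}"  # suffix template from the example above

    suffix = filename_spec.format(segmentName="SegmentA", date=date(2023, 12, 18))
    print(f"{activation_name}_{suffix}.csv")  # Campaign_123_SegmentA_2023-12-18.csv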
Question 13 (Skipped)
A company is configuring an identity resolution ruleset based on Fuzzy Name and Normalized Email. What should the company do to ensure the best email address is activated?
Set the default reconciliation rule to Last Updated.
Include the Contact Point Email object's Is Active field as a match rule.
Ensure Marketing Cloud is prioritized as the first data source in the Source Priority reconciliation rule.
Use the source priority order in activations to make sure a contact point from the desired source is delivered to the activation target.

Question 14 (Skipped)
During a privacy law discussion with a customer, the customer indicates they need to honor requests for the right to be forgotten. The consultant determines that the Consent API will solve this business need. Which two considerations should the consultant inform the customer about?
Choose 2 answers
Data deletion requests are submitted for Individual profiles.
Data deletion requests are processed within 1 hour.
Data deletion requests submitted to Data Cloud are passed to all connected Salesforce clouds.
Data deletion requests are reprocessed at 30, 60, and 90 days.
Overall explanation
When advising a customer about using the Consent API in Salesforce to comply with requests for the right to be forgotten, the consultant should focus on two primary considerations:
Data deletion requests are submitted for Individual profiles: The Consent API in Salesforce is designed to handle data deletion requests specifically for individual profiles. This means that when a request is made to delete data, it is targeted at the personal data associated with an individual's profile in the Salesforce system. The consultant should inform the customer that the requests must be specific to individual profiles to ensure accurate processing and compliance with privacy laws.
Data deletion requests submitted to Data Cloud are passed to all connected Salesforce clouds: When a data deletion request is made through the Consent API in Salesforce Data Cloud, the request is not limited to Data Cloud alone. Instead, it propagates through all connected Salesforce clouds, such as Sales Cloud, Service Cloud, and Marketing Cloud.
This ensures comprehensive compliance with the right to be forgotten across the entire Salesforce ecosystem. The customer should be aware that the deletion request will affect all instances of the individual's data across the connected Salesforce environments.

Question 15 (Skipped)
A customer needs to integrate in real time with Salesforce CRM. Which feature accomplishes this requirement?
Sales and Service bundle
Data actions and Lightning web components
Streaming transforms
Data model triggers
Overall explanation
The correct answer is Streaming transforms. Streaming transforms are a feature of Data Cloud that allows real-time data integration with Salesforce CRM. Streaming transforms use the Data Cloud Streaming API to synchronize micro-batches of updates between the CRM data source and Data Cloud in near-real time, enabling Data Cloud to have the most current and accurate CRM data for segmentation and activation.
The other options are incorrect for the following reasons. Data model triggers are a feature of Data Cloud that allows custom logic to be executed when data model objects are created, updated, or deleted; they do not integrate data with Salesforce CRM, but rather manipulate data within Data Cloud. The Sales and Service bundle is a feature of Data Cloud that provides pre-built data streams, data model objects, segments, and activations for Sales Cloud and Service Cloud data sources; it does not integrate data in real time with Salesforce CRM, but rather ingests data at scheduled intervals. Data actions and Lightning web components are features of Data Cloud that allow custom user interfaces and workflows to be built and embedded in Salesforce applications; they do not integrate data with Salesforce CRM, but rather display and interact with data within Salesforce applications.

Question 16 (Skipped)
A customer notices that their consolidation rate has recently increased. They contact the consultant to ask why. What are two likely explanations for the increase?
Choose 2 answers
Duplicates have been removed from source system data streams.
New data sources have been added to Data Cloud that largely overlap with the existing profiles.
Identity resolution rules have been removed to reduce the number of matched profiles.
Identity resolution rules have been added to the ruleset to increase the number of matched profiles.
Overall explanation
The consolidation rate is a metric that measures the amount by which source profiles are combined to produce unified profiles in Data Cloud, calculated as 1 - (number of unified profiles / number of source profiles). A higher consolidation rate means that more source profiles are matched and merged into fewer unified profiles, while a lower consolidation rate means that fewer source profiles are matched and more unified profiles are created. There are two likely explanations for why the consolidation rate has recently increased:
New data sources have been added to Data Cloud that largely overlap with the existing profiles. This means the new data sources contain many profiles that are similar or identical to the profiles from the existing data sources. For example, if a customer adds a new CRM system that has the same customer records as their old CRM system, the new data source will overlap with the existing one.
When Data Cloud ingests the new data source, it will use the identity resolution ruleset to match and merge the overlapping profiles into unified profiles, resulting in a higher consolidation rate.
Identity resolution rules have been added to the ruleset to increase the number of matched profiles. This means the customer has modified their identity resolution ruleset to include more match rules or more match criteria that can identify more profiles as belonging to the same individual. For example, if a customer adds a match rule that matches profiles based on email address and phone number, instead of just email address, the ruleset will be able to match more profiles that have the same email address and phone number, resulting in a higher consolidation rate.
Reference: Identity Resolution Calculated Insight: Consolidation Rates for Unified Profiles, Configure Identity Resolution Rulesets

Question 17 (Skipped)
What is Data Cloud's primary value to customers?
To provide a unified view of a customer and their related data
To connect all systems with a golden record
To create a single source of truth for all anonymous data
To create personalized campaigns by listening, understanding, and acting on customer behavior
Overall explanation
Data Cloud is a platform that enables you to activate all your customer data across Salesforce applications and other systems. Data Cloud allows you to create a unified profile of each customer by ingesting, transforming, and linking data from various sources, such as CRM, marketing, commerce, service, and external data providers. Data Cloud also provides insights and analytics on customer behavior, preferences, and needs, as well as tools to segment, target, and personalize customer interactions. Data Cloud's primary value to customers is to provide a unified view of a customer and their related data, which can help you deliver better customer experiences, increase loyalty, and drive growth.
Reference: Salesforce Data Cloud, When Data Creates Competitive Advantage

Question 18 (Skipped)
Which configuration supports separate Amazon S3 buckets for data ingestion and activation?
Multiple S3 connectors in Data Cloud setup
Separate user credentials for data stream and activation target
Dedicated S3 data sources in activation setup
Dedicated S3 data sources in Data Cloud setup
Overall explanation
To support separate Amazon S3 buckets for data ingestion and activation, you need to configure dedicated S3 data sources in Data Cloud setup. Data sources are used to identify the origin and type of the data that you ingest into Data Cloud. You can create a different data source for each S3 bucket that you want to use for ingestion or activation, and specify the bucket name, region, and access credentials. This way, you can separate and organize your data by different criteria, such as brand, region, product, or business unit. The other options are incorrect because they do not support separate S3 buckets for data ingestion and activation. Multiple S3 connectors are not a valid configuration in Data Cloud setup, as there is only one S3 connector available. Dedicated S3 data sources in activation setup are not a valid configuration either, as activation setup does not require data sources, but activation targets. Separate user credentials for data stream and activation target are not sufficient to support separate S3 buckets, as you also need to specify the bucket name and region for each data source.
Reference: Data Sources Overview, Amazon S3 Storage Connector, Data Spaces Overview, Data Streams Overview, Data Activation Overview

Question 19 (Skipped)
How can a consultant modify attribute names to match a naming convention in Cloud File Storage targets?
Update field names in the data model object.
Set preferred attribute names when configuring activation.
Use a formula field to update the field name in an activation.
Update attribute names in the data stream configuration.
Overall explanation
A Cloud File Storage target is a type of data action target in Data Cloud that allows sending data to a cloud storage service such as Amazon S3 or Google Cloud Storage. When configuring an activation to a Cloud File Storage target, a consultant can modify the attribute names to match a naming convention by setting preferred attribute names in Data Cloud. Preferred attribute names are aliases that can be used to control the field names in the target file. They can be set for each attribute in the activation configuration, and they will override the default field names from the data model object. The other options are incorrect because they do not affect the field names in the target file. Using a formula field to update the field name in an activation will not change the field name, only the field value. Updating attribute names in the data stream configuration will not affect the existing data lake objects or data model objects. Updating field names in the data model object will change the field names for all data sources and activations that use the object, which may not be desirable or consistent.
Reference: Preferred Attribute Name, Create a Data Cloud Activation Target, Cloud File Storage Target

Question 20 (Skipped)
A company wants to use some of its Marketing Cloud data in Data Cloud. Which engagement channel data will require custom integration?
SMS
CloudPage
Email
Mobile push
Overall explanation
CloudPage is a web page that can be personalized and hosted by Marketing Cloud. It is not one of the standard engagement channels that Data Cloud ingests from Marketing Cloud, so CloudPage data requires a custom integration.

Question 21 (Skipped)
A customer is trying to activate data from Data Cloud to an Amazon S3 Cloud File Storage Bucket. Which authentication type should the consultant recommend to connect to the S3 bucket from Data Cloud?
Use an S3 Encrypted Username and Password.
Use an S3 Access Key and Secret Key.
Use a JWT Token generated on S3.
Use an S3 Private Key Certificate.
Overall explanation
To use the Amazon S3 Storage Connector in Data Cloud, the consultant needs to provide the S3 bucket name, region, and an access key and secret key for authentication. The access key and secret key are generated by AWS and can be managed in the IAM console. The other options are not supported by the S3 Storage Connector or by Data Cloud.
Reference: Amazon S3 Storage Connector - Salesforce, How to Use the Amazon S3 Storage Connector in Data Cloud | Salesforce Developers Blog

Question 22 (Skipped)
A customer has a requirement to receive a notification whenever an activation fails for a particular segment. Which feature should the consultant use to address this use case?
Flow
Activation alert
Report
Dashboard
Overall explanation
The feature the consultant should use is the activation alert. Activation alerts are notifications that are sent to users when an activation fails or succeeds for a segment.
Activation alerts can be configured on the Activation Settings page, where the consultant can specify the recipients, the frequency, and the conditions for sending the alerts. Activation alerts help the customer monitor the status of their activations and troubleshoot any issues that may arise.
Reference: Salesforce Data Cloud Consultant Exam Guide, Activation Alerts

Question 23 (Skipped)
A client wants to bring in loyalty data from a custom object in Salesforce CRM that contains a point balance for accrued hotel points and airline points within the same record. The client wants to split these point systems into two separate records for better tracking and processing. What should a consultant recommend in this scenario?
Create a data kit from the data lake object and deploy it to the same Data Cloud org.
Clone the data source object.
Create a junction object in Salesforce CRM and modify the ingestion strategy.
Use batch transforms to create a second data lake object.
Overall explanation
Batch transforms are a feature that allows creating new data lake objects based on existing data lake objects and applying transformations to them. This can be useful for splitting, merging, or reshaping data to fit the data model or business requirements. In this case, the consultant can use batch transforms to create a second data lake object that contains only the airline points from the original loyalty data object. The original object can be modified to contain only the hotel points. This way, the client can have two separate records for each point system and track and process them accordingly.
Reference: Batch Transforms, Create a Batch Transform

Question 24 (Skipped)
A customer wants to create segments of users based on their Customer Lifetime Value. However, the source data that will be brought into Data Cloud does not include that key performance indicator (KPI). Which sequence of steps should the consultant follow to achieve this requirement?
Ingest Data > Create Calculated Insight > Map Data to Data Model > Use in Segmentation
Ingest Data > Map Data to Data Model > Create Calculated Insight > Use in Segmentation
Create Calculated Insight > Ingest Data > Map Data to Data Model > Use in Segmentation
Create Calculated Insight > Map Data to Data Model > Ingest Data > Use in Segmentation
Overall explanation
To create segments of users based on their Customer Lifetime Value (CLV), the consultant should follow the sequence Ingest Data > Map Data to Data Model > Create Calculated Insight > Use in Segmentation. The first step is to ingest the source data into Data Cloud using data streams. The second step is to map the source data to the data model, which defines the structure and attributes of the data. The third step is to create a calculated insight, which is a derived attribute that is computed based on the source or unified data; in this case, the calculated insight is the CLV, which can be calculated using a formula or a query based on the sales order data. The fourth step is to use the calculated insight in segmentation, which is the process of creating groups of individuals or entities based on their attributes and behaviors. By using the CLV calculated insight, the consultant can segment the users by their predicted revenue over the lifespan of their relationship with the brand. The other options are incorrect because they do not follow the correct sequence: a calculated insight cannot be created before the data is ingested and mapped, as it depends on the data model objects, and creating it before mapping may mean it does not reflect the correct data model structure and attributes.
Reference: Data Streams Overview, Data Model Objects Overview, Calculated Insights
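To ground the "Create Calculated Insight" step, here is a small pandas sketch of a CLV-style metric computed per individual and then used as a segment filter; the column names and threshold are hypothetical, and real calculated insights are defined inside Data Cloud rather than in pandas:

    import pandas as pd

    # Sales order rows after the Ingest and Map steps (hypothetical columns).
    orders = pd.DataFrame({
        "individual_id": ["i-1", "i-1", "i-2"],
        "order_amount": [120.0, 80.0, 300.0],
    })

    # Create Calculated Insight: lifetime value per individual.
    clv = (orders.groupby("individual_id", as_index=False)["order_amount"]
                 .sum()
                 .rename(columns={"order_amount": "ltv"}))

    # Use in Segmentation: keep individuals whose LTV exceeds a threshold.
    high_value = clv[clv["ltv"] > 200]
    print(high_value)  # only i-2 qualifies (ltv = 300.0)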
Question 25 (Skipped)
A Data Cloud customer wants to adjust their identity resolution rules to increase the accuracy of matches. Rather than matching on email address, they want to review a rule that joins their CRM Contacts with their Marketing Contacts, where both use the CRM ID as their primary key. Which two steps should the consultant take to address this new use case?
Choose 2 answers
Create a custom matching rule for an exact match on the Individual ID attribute.
Map the primary key from the two systems to Party Identification, using CRM ID as the identification name for both.
Create a matching rule based on party identification that matches on CRM ID as the party identification name.
Map the primary key from the two systems to party identification, using CRM ID as the identification name for individuals.
Overall explanation
To address this new use case, the consultant should map the primary key from the two systems to Party Identification, using CRM ID as the identification name for both, and create a matching rule based on party identification that matches on CRM ID as the party identification name. This way, the consultant can ensure that the CRM Contacts and Marketing Contacts are matched based on their shared CRM ID.

Question 26 (Skipped)
A company wants to implement Data Cloud and has several use cases in mind. Which two use cases are considered a good fit for Data Cloud?
Choose 2 answers
To eliminate the need for separate business intelligence and IT data management tools
To use harmonized data to more accurately understand the customer and business impact
To create and orchestrate cross-channel marketing messages
To ingest and unify data from various sources to reconcile customer identity
Overall explanation
Data Cloud is a data platform that can help customers connect, prepare, harmonize, unify, query, analyze, and act on their data across various Salesforce and external sources. Use cases that are considered a good fit for Data Cloud include:
To ingest and unify data from various sources to reconcile customer identity. Data Cloud can help customers bring all their data, whether streaming or batch, into Salesforce and map it to a common data model. Data Cloud can also help customers resolve identities across different channels and sources and create unified profiles of their customers.
To use harmonized data to more accurately understand the customer and business impact. Data Cloud can help customers transform and cleanse their data before using it, and enrich it with calculated insights and related attributes. Data Cloud can also help customers create segments and audiences based on their data and activate them in any channel, and use AI to predict customer behavior and outcomes.
The other two options are not use cases that are considered a good fit for Data Cloud.
Data Cloud does not provide features to create and orchestrate cross-channel marketing messages, as this is typically handled by other Salesforce solutions such as Marketing Cloud. Data Cloud also does not eliminate the need for separate business intelligence and IT data management tools, as it is designed to work with them and complement their capabilities.
Reference: Learn How Data Cloud Works, About Salesforce Data Cloud, Discover Use Cases for the Platform, Understand Common Data Analysis Use Cases

Question 27 (Skipped)
A consultant is integrating an Amazon S3 activated campaign with the customer's destination system. In order for the destination system to find the metadata about the segment, which file on the S3 bucket will contain this information for processing?
The .csv file
The .json file
The .txt file
The .zip file
Overall explanation
The file on Amazon S3 that will contain the metadata about the segment for processing is the .json file. The .json file is a metadata file that is generated along with the .csv file when a segment is activated to Amazon S3. The .json file contains information such as the segment name, the segment ID, the segment size, the segment attributes, the segment filters, and the segment schedule. The destination system can use this file to identify the segment and its properties, and to match the segment data with the corresponding fields in the destination system.
Reference: Salesforce Data Cloud Consultant Exam Guide, Amazon S3 Activation
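A destination system's processing step might read that metadata file roughly as sketched below; the key names here are guesses based on the fields listed above, so inspect an actual activation export for the real schema:

    import json

    # Key names (segmentName, segmentId, segmentSize) are assumed from the
    # fields described above; verify them against a real export file.
    with open("segment_export.json") as f:
        meta = json.load(f)

    print(meta.get("segmentName"), meta.get("segmentId"), meta.get("segmentSize"))
    # The metadata identifies which .csv data file belongs to which segment.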
Question 28 (Skipped)
Which consideration related to the way Data Cloud ingests CRM data is true?
Formula fields are refreshed at regular sync intervals and are updated at the next full refresh.
CRM data cannot be manually refreshed and must wait for the next scheduled synchronization.
The CRM Connector allows standard fields to stream into Data Cloud in real time.
The CRM Connector's synchronization times can be customized to up to 15-minute intervals.
Overall explanation
The correct answer is that the CRM Connector allows standard fields to stream into Data Cloud in real time. This means that any changes to the standard fields in the CRM data source are reflected in Data Cloud almost instantly, without waiting for the next scheduled synchronization. This feature enables Data Cloud to have the most up-to-date and accurate CRM data for segmentation and activation.
The other options are false. CRM data can be manually refreshed at any time by clicking the Refresh button on the data stream detail page. The CRM Connector's synchronization times can be customized to up to 60-minute intervals, not 15-minute intervals. Formula fields are not refreshed at regular sync intervals, but only at the next full refresh; a full refresh is a complete data ingestion process that occurs once every 24 hours or when manually triggered.
Reference: 1: Connect and Ingest Data in Data Cloud article on Salesforce Help, 2: Data Sources in Data Cloud unit on Trailhead, 3: Data Cloud for Admins module on Trailhead, 4: [Formula Fields in Data Cloud] unit on Trailhead, 5: [Data Streams in Data Cloud] unit on Trailhead

Question 29 (Skipped)
A segment fails to refresh with the error "Segment references too many data lake objects (DLOs)". Which two troubleshooting tips should help remedy this issue?
Choose 2 answers
Refine segmentation criteria to limit up to five custom data model objects (DMOs).
Use calculated insights in order to reduce the complexity of the segmentation query.
Space out the segment schedules to reduce DLO load.
Split the segment into smaller segments.
Overall explanation
The error "Segment references too many data lake objects (DLOs)" occurs when a segment query exceeds the limit of 50 DLOs that can be referenced in a single query. This can happen when the segment has too many filters, nested segments, or exclusion criteria that involve different DLOs. To remedy this issue, the consultant can try the following troubleshooting tips:
Split the segment into smaller segments. The consultant can divide the segment into multiple segments that have fewer filters, nested segments, or exclusion criteria. This reduces the number of DLOs referenced in each segment query and avoids the error. The consultant can then use the smaller segments as nested segments in a larger segment, or activate them separately.
Use calculated insights in order to reduce the complexity of the segmentation query. The consultant can create calculated insights that are derived from existing data using formulas. Calculated insights can simplify the segmentation query by replacing multiple filters or nested segments with a single attribute. For example, instead of using multiple filters to segment individuals based on their purchase history, the consultant can create a calculated insight that calculates the lifetime value of each individual and use that as a filter.
The other options are not troubleshooting tips that can help remedy this issue. Refining segmentation criteria to limit up to five custom data model objects (DMOs) is not a valid option, as the limit of 50 DLOs applies to both standard and custom DMOs. Spacing out the segment schedules to reduce DLO load is not a valid option, as the error is not related to DLO load, but to segment query complexity.
Reference: Troubleshoot Segment Errors, Create a Calculated Insight

Question 30 (Skipped)
When performing segmentation or activation, which time zone is used to publish and refresh data?
Time zone set by the Salesforce Data Cloud org
Time zone of the user creating the activity
Time zone of the Data Cloud Admin user
Time zone specified on the activity at the time of creation
Overall explanation
The time zone used to publish and refresh data when performing segmentation or activation is the time zone set by the Salesforce Data Cloud org. This time zone is configured in the org settings when Data Cloud is provisioned, and it applies to all users and activities in Data Cloud. It determines when segments are scheduled to refresh and when activations are scheduled to publish. Therefore, it is important to consider the time zone difference between the Data Cloud org and the destination systems or channels when planning segmentation and activation strategies.
Reference: Salesforce Data Cloud Consultant Exam Guide, Segmentation, Activation

Question 31 (Skipped)
Which two common use cases can be addressed with Data Cloud?
Choose 2 answers
Understand and act upon customer data to drive more relevant experiences.
Harmonize data from multiple sources with a standardized and extendable data model.
Safeguard critical business data by serving as a centralized system for backup and disaster recovery.
Govern enterprise data lifecycle through a centralized set of policies and processes.
Overall explanation
Data Cloud is a data platform that can help customers connect, prepare, harmonize, unify, query, analyze, and act on their data across various Salesforce and external sources. Common use cases that can be addressed with Data Cloud include:
Understand and act upon customer data to drive more relevant experiences. Data Cloud can help customers gain a 360-degree view of their customers by unifying data from different sources and resolving identities across channels. Data Cloud can also help customers segment their audiences, create personalized experiences, and activate data in any channel using insights and AI.
Harmonize data from multiple sources with a standardized and extendable data model. Data Cloud can help customers transform and cleanse their data before using it, and map it to a common data model that can be extended and customized. Data Cloud can also help customers create calculated insights and related attributes to enrich their data and optimize identity resolution.
The other two options are not common use cases for Data Cloud. Data Cloud does not provide data governance or backup and disaster recovery features, as these are typically handled by other Salesforce or external solutions.
Reference: Learn How Data Cloud Works, About Salesforce Data Cloud, Discover Use Cases for the Platform, Understand Common Data Analysis Use Cases

Question 32 (Skipped)
Which permission setting should a consultant check if a custom Salesforce CRM object is not available in the New Data Stream configuration?
Confirm that the Modify Object permission is enabled in the Data Cloud org.
Confirm the View All object permission is enabled in the source Salesforce CRM org.
Confirm the Ingest Object permission is enabled in the Salesforce CRM org.
Confirm the Create object permission is enabled in the Data Cloud org.
Overall explanation
To create a new data stream from a custom Salesforce CRM object, the consultant needs to confirm that the View All object permission is enabled in the source Salesforce CRM org. This permission allows the user to view all records associated with the object, regardless of sharing settings. Without this permission, the custom object will not be available in the New Data Stream configuration.
Reference: Manage Access with Data Cloud Permission Sets, Object Permissions

Question 33 (Skipped)
Which two requirements must be met for a calculated insight to appear in the segmentation canvas?
Choose 2 answers
The calculated insight must contain a dimension including the Individual or Unified Individual Id.
The primary key of the segmented table must be a dimension in the calculated insight.
The metrics of the calculated insights must only contain numeric values.
The primary key of the segmented table must be a metric in the calculated insight.
Overall explanation
A calculated insight is a custom metric or measure that is derived from one or more data model objects or data lake objects in Data Cloud. A calculated insight can be used in segmentation to filter or group the data based on the calculated value. However, not all calculated insights can appear in the segmentation canvas. There are two requirements that must be met:
The calculated insight must contain a dimension including the Individual or Unified Individual Id. A dimension is a field that can be used to categorize or group the data, such as name, gender, or location.
The Individual or Unified Individual Id is a unique identifier for each individual profile in Data Cloud. The calculated insight must include this dimension to link the calculated value to the individual profile and to enable segmentation based on the individual profile attributes.
The primary key of the segmented table must be a dimension in the calculated insight. The primary key is a field that uniquely identifies each record in a table. The segmented table is the table that contains the data being segmented, such as the Customer or Order table. The calculated insight must include the primary key of the segmented table as a dimension to ensure that the calculated value is associated with the correct record in the segmented table and to avoid duplication or inconsistency in the segmentation results.
Reference: Create a Calculated Insight, Use Insights in Data Cloud, Segmentation

Question 34 (Skipped)
A consultant is working in a customer's Data Cloud org and is asked to delete the existing identity resolution ruleset. Which two impacts should the consultant communicate as a result of this action?
Choose 2 answers
All source profile data will be removed.
All individual data will be removed.
Dependencies on data model objects will be removed.
Unified customer data associated with this ruleset will be removed.
Overall explanation
Deleting an identity resolution ruleset has two major impacts that the consultant should communicate to the customer. First, it will permanently remove all unified customer data that was created by the ruleset, meaning that the unified profiles and their attributes will no longer be available in Data Cloud. Second, it will eliminate dependencies on data model objects that were used by the ruleset, meaning that the data model objects can be modified or deleted without affecting the ruleset. These impacts can have significant consequences for the customer's data quality, segmentation, activation, and analytics, so the consultant should advise the customer to carefully consider the implications of deleting a ruleset before proceeding. The other options are incorrect because they are not impacts of deleting a ruleset: deleting a ruleset will not remove all individual data, only the unified customer data, as the individual data from the source systems will still be available in Data Cloud; likewise, it will not remove the source profile data, as the source profile data from the data streams will still be available in Data Cloud.
Reference: Delete an Identity Resolution Ruleset

Question 35 (Skipped)
A company created a segment called Multiple Investments that contains individuals who have invested in two or more mutual funds. The company plans to send an email to this segment regarding a new mutual fund offering, and wants to personalize the email content with information about each customer's current mutual fund investments. How should the Data Cloud consultant configure this activation?
Choose the Multiple Investments segment, choose the Email contact point, and add related attribute Fund Type.
Include Fund Type equal to "Mutual Fund" as a related attribute.
Configure an activation based on the new segment with no additional attributes.
Include Fund Name and Fund Type by default for post processing in the target system.
Choose the Multiple Investments segment, choose the Email contact point, add related attribute Fund Name, and add related attribute filter for Fund Type equal to "Mutual Fund".
Overall explanation
To personalize the email content with information about each customer's current mutual fund investments, the Data Cloud consultant needs to add related attributes to the activation. Related attributes are additional data fields that can be sent along with the segment to the target system for personalization or analysis purposes. In this case, the consultant needs to add the Fund Name attribute, which contains the name of the mutual fund that the customer has invested in, and apply a filter for Fund Type equal to "Mutual Fund" to ensure that only relevant data is sent.
The other options are not correct. Including Fund Type equal to "Mutual Fund" as a related attribute is not enough to personalize the email content: the consultant also needs the Fund Name attribute, which contains the specific name of the mutual fund the customer has invested in. Adding only the related attribute Fund Type is insufficient for the same reason. Including Fund Name and Fund Type by default for post processing in the target system is not a valid option: the consultant needs to add the related attributes and filters during the activation configuration in Data Cloud, not after the data is sent to the target system.
Reference: Add Related Attributes to an Activation - Salesforce, Related Attributes in Activation - Salesforce, Prepare for Your Salesforce Data Cloud Consultant Credential

Question 36 (Skipped)
A consultant has an activation that is set to publish every 12 hours, but has discovered that updates to the data prior to activation are delayed by up to 24 hours. Which two areas should a consultant review to troubleshoot this issue?
Choose 2 answers
Review data transformations to ensure they're run after calculated insights.
Review segments to ensure they're refreshed after the data is ingested.
Review calculated insights to make sure they're run after the segments are refreshed.
Review calculated insights to make sure they're run before segments are refreshed.
Overall explanation
The correct areas to review are the segments, to ensure they are refreshed after the data is ingested, and the calculated insights, to make sure they are run before the segments are refreshed. Calculated insights and segments are both dependent on the data ingestion process: calculated insights are derived from the data model objects, and segments are subsets of data model objects that meet certain criteria, so both need to be updated after the data is ingested to reflect the latest changes. Data transformations are optional steps that can be applied to the data streams before they are mapped to the data model objects, so they are not relevant to this issue. Reviewing calculated insights to make sure they are run after the segments are refreshed is incorrect, because calculated insights are independent of segments and do not need to be refreshed after them.
Reference: Salesforce Data Cloud Consultant Exam Guide, Data Ingestion and Modeling, Calculated Insights, Segments

Question 37 (Skipped)
Which two dependencies prevent a data stream from being deleted?
Choose 2 answers
The underlying data lake object is used in activation.
The underlying data lake object is used in segmentation.
The underlying data lake object is used in a data transform.
The underlying data lake object is mapped to a data model object.
Overall explanation
To delete a data stream in Data Cloud, the underlying data lake object (DLO) must not have any dependencies or references to other objects or processes. The following two dependencies prevent a data stream from being deleted:
Data transform: a process that transforms the ingested data into a standardized format and structure for the data model. A data transform can use one or more DLOs as input or output. If a DLO is used in a data transform, it cannot be deleted until the data transform is removed or modified.
Data model object: an object that represents a type of entity or relationship in the data model. A data model object can be mapped to one or more DLOs to define its attributes and values. If a DLO is mapped to a data model object, it cannot be deleted until the mapping is removed or changed.

Question 38 (Skipped)
A user wants to be able to create a multi-dimensional metric to identify unified individual lifetime value (LTV). Which sequence of data model object (DMO) joins is necessary within the calculated insight to enable this calculation?
Sales Order > Unified Individual
Unified Individual > Individual > Sales Order
Sales Order > Individual > Unified Individual
Unified Individual > Unified Link Individual > Sales Order
Overall explanation
To create a multi-dimensional metric to identify unified individual lifetime value (LTV), the necessary sequence of DMO joins within the calculated insight is Unified Individual > Unified Link Individual > Sales Order. The Unified Individual DMO represents the unified profile of an individual or entity that is created by identity resolution. The Unified Link Individual DMO represents the link between a unified individual and an individual from a source system. The Sales Order DMO represents the sales order information from a source system. By joining these three DMOs, you can calculate the LTV of a unified individual based on the sales order data from different source systems. The other options are incorrect because they do not join the correct DMOs to enable the LTV calculation: the Individual DMO represents the source profile of an individual or entity from a source system, not the unified profile; a join that starts from Sales Order and works back to Unified Individual is reversed, as you need to start with the Unified Individual DMO to identify the unified profile; and a sequence that omits the Unified Link Individual DMO is missing the object needed to link the unified profile with the source profile.
Reference: Unified Individual Data Model Object, Unified Link Individual Data Model Object, Sales Order Data Model Object, Individual Data Model Object
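The join sequence can be pictured with a small pandas sketch: Unified Individual joins to Unified Link Individual on the unified ID, which in turn links to Sales Order through the source individual ID. The column names are hypothetical stand-ins for the DMO fields, and the real metric would be defined as a calculated insight inside Data Cloud:

    import pandas as pd

    unified = pd.DataFrame({"unified_id": ["u-1"]})

    # Unified Link Individual: maps a unified profile to its source profiles.
    links = pd.DataFrame({
        "unified_id": ["u-1", "u-1"],
        "source_individual_id": ["crm-001", "web-123"],
    })

    # Sales Order rows keyed by the source individual.
    orders = pd.DataFrame({
        "source_individual_id": ["crm-001", "web-123"],
        "amount": [150.0, 50.0],
    })

    # Unified Individual > Unified Link Individual > Sales Order
    ltv = (unified.merge(links, on="unified_id")
                  .merge(orders, on="source_individual_id")
                  .groupby("unified_id", as_index=False)["amount"].sum())
    print(ltv)  # u-1 has an LTV of 200.0 across both source systems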
Question 39Skipped
During discovery, which feature should a consultant highlight for a customer who has multiple data sources and needs to match and reconcile data about individuals into a single unified profile?
Identity Resolution
Harmonization
Data Consolidation
Data Cleansing

Overall explanation
Identity resolution is the feature that allows Data Cloud to match and reconcile data about individuals from multiple data sources into a single unified profile. Identity resolution uses rulesets to define how source profiles are matched and consolidated based on common attributes, such as name, email, phone, or party identifier. Identity resolution enables Data Cloud to create a 360-degree view of each customer across different data sources and systems12.
The other options are not the best features to highlight for this customer need because:
A. Data cleansing is the process of detecting and correcting errors or inconsistencies in data, such as duplicates, missing values, or invalid formats. Data cleansing can improve the quality and accuracy of data, but it does not match or reconcile data across different data sources3.
B. Harmonization is the process of standardizing and transforming data from different sources into a common format and structure. Harmonization can enable data integration and interoperability, but it does not match or reconcile data across different data sources4.
C. Data consolidation is the process of combining data from different sources into a single data set or system. Data consolidation can reduce data redundancy and complexity, but it does not match or reconcile data across different data sources5.
Reference: 1: Data and Identity in Data Cloud | Salesforce Trailhead, 2: Data Cloud Identity Resolution | Salesforce AI Research, 3: [Data Cleansing - Salesforce], 4: [Harmonization - Salesforce], 5: [Data Consolidation - Salesforce]

Question 40Skipped
What does the Ignore Empty Value option do in identity resolution?
Ignores Individual object records with empty fields when running identity resolution rules
Ignores empty fields when running reconciliation rules
Ignores empty fields when running the standard match rules
Ignores empty fields when running any custom match rules

Overall explanation
The Ignore Empty Value option in identity resolution allows customers to ignore empty fields when running reconciliation rules. Reconciliation rules are used to determine the final value of an attribute for a unified individual profile, based on the values from different sources. The Ignore Empty Value option can be set to true or false for each attribute in a reconciliation rule. If set to true, the reconciliation rule skips any source that has an empty value for that attribute and moves on to the next source in the priority order. If set to false, the reconciliation rule treats a source with an empty value for that attribute as valid and can use it to populate the attribute value for the unified individual profile.
The other options are not correct descriptions of what the Ignore Empty Value option does. It does not affect the standard or custom match rules, which are used to identify and link individuals across different sources based on their attributes. It also does not ignore Individual object records with empty fields when running identity resolution rules, because identity resolution rules operate at the attribute level, not the record level.
Reference: Data Cloud Identity Resolution Reconciliation Rule Input, Configure Identity Resolution Rulesets, Data and Identity in Data Cloud
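A minimal Python sketch of this reconciliation behavior, assuming a simple source-priority list; the data structures are illustrative, not a Data Cloud API:

def reconcile(sources, attribute, ignore_empty=True):
    """Pick an attribute value from sources ordered highest-priority first.

    With ignore_empty=True, sources with an empty value are skipped; with
    False, the highest-priority source wins even if its value is empty.
    """
    for source in sources:
        value = source.get(attribute, "")
        if value or not ignore_empty:
            return value
    return ""

crm = {"first_name": ""}            # highest priority, but empty
marketing = {"first_name": "Ada"}   # fallback source

print(reconcile([crm, marketing], "first_name", ignore_empty=True))   # "Ada"
print(reconcile([crm, marketing], "first_name", ignore_empty=False))  # ""

The two calls show the difference the flag makes: with it on, the empty CRM value is skipped and the fallback source fills the attribute; with it off, the empty value from the higher-priority source is taken as-is.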
Question 41Skipped
What should a user do to pause a segment activation with the intent of using that segment again?
Delete the segment.
Stop the publish schedule.
Deactivate the segment.
Skip the activation.

Overall explanation
The correct answer is A. Deactivate the segment. If a segment is no longer needed, it can be deactivated through Data Cloud, and the deactivation applies to all chosen targets. A deactivated segment no longer publishes, but it can be reactivated at any time1. This option allows the user to pause a segment activation with the intent of using that segment again.
The other options are incorrect for the following reasons:
B. Delete the segment. This option permanently removes the segment from Data Cloud and cannot be undone2. It does not allow the user to use the segment again.
C. Skip the activation. This option skips the current activation cycle for the segment but does not affect future activation cycles3. It does not pause the segment activation indefinitely.
D. Stop the publish schedule. This option stops the segment from publishing to the chosen targets but does not deactivate the segment4. It does not pause the segment activation completely.
Reference: 1: Deactivated Segment article on Salesforce Help, 2: Delete a Segment article on Salesforce Help, 3: Skip an Activation article on Salesforce Help, 4: Stop a Publish Schedule article on Salesforce Help

Question 42Skipped
A Company is currently using Data Cloud and ingesting transactional data from its backend system via an S3 Connector in upsert mode. During the initial setup six months ago, the company created a formula field in Data Cloud to create a custom classification. It now needs to update this formula to account for more classifications.
What should the consultant keep in mind with regard to formula field updates when using the S3 Connector?
Data Cloud will only update the formula on a go-forward basis for new records.
Data Cloud will update the formula for all records at the next incremental upsert refresh.
Data Cloud will initiate a full refresh of data from S3 and will update the formula on all records.
Data Cloud does not support formula field updates for data streams of type upsert.

Overall explanation
A formula field is a field that calculates a value based on other fields or constants. When using the S3 Connector to ingest data from an Amazon S3 bucket, Data Cloud supports creating and updating formula fields on the data lake objects (DLOs) that store the data from the S3 source. However, the formula field updates are not applied immediately, but rather at the next incremental upsert refresh of the data stream. An incremental upsert refresh is a process that adds new records and updates existing records from the S3 source to the DLO based on the primary key field. Therefore, the consultant should keep in mind that the formula field updates will affect both new and existing records, but only after the next incremental upsert refresh of the data stream. The other options are incorrect because Data Cloud does not initiate a full refresh of data from S3, does not update the formula only for new records, and does support formula field updates for data streams of type upsert.
Reference: Create a Formula Field, Amazon S3 Connection, Data Lake Object

Question 43Skipped
A Company wants to connect their B2C Commerce data with Data Cloud and bring two years of transactional history into Data Cloud.
What should the Company use to achieve this?
Direct Sales Product entity ingestion
Direct Sales Order entity ingestion
B2C Commerce Starter Bundles
B2C Commerce Starter Bundles plus a custom extract

Overall explanation
The B2C Commerce Starter Bundles are predefined data streams that ingest order and product data from B2C Commerce into Data Cloud.
However, the starter bundles only bring in the last 90 days of data by default. To bring in two years of transactional history, the company needs to use a custom extract from B2C Commerce that includes the historical data and configure the data stream to use the custom extract as the source.
The other options are not sufficient to achieve this because:
A. B2C Commerce Starter Bundles only ingest the last 90 days of data by default.
B. Direct Sales Order entity ingestion is not a supported method for connecting B2C Commerce data with Data Cloud. Data Cloud does not provide a direct-access connection for B2C Commerce data, only data ingestion.
C. Direct Sales Product entity ingestion is not a supported method for connecting B2C Commerce data with Data Cloud. Data Cloud does not provide a direct-access connection for B2C Commerce data, only data ingestion.
Reference: Create a B2C Commerce Data Bundle - Salesforce, B2C Commerce Connector - Salesforce, Salesforce B2C Commerce Pricing Plans & Costs

Question 44Skipped
A Company received a Request to be Forgotten by a customer.
In which two ways should a consultant use Data Cloud to honor this request? Choose 2 answers
Use Data Explorer to locate and manually remove the Individual.
Add the Individual ID to a headerless file and use the delete from file functionality.
Delete the data from the incoming data stream and perform a full refresh.
Use the Consent API to suppress processing and delete the Individual and related records from source data streams.

Overall explanation
To honor a Request to be Forgotten by a customer, a consultant should use Data Cloud in two ways:
Add the Individual ID to a headerless file and use the delete from file functionality. This option allows the consultant to delete multiple Individuals from Data Cloud by uploading a CSV file with their IDs1. The deletion process is asynchronous and can take up to 24 hours to complete1.
Use the Consent API to suppress processing and delete the Individual and related records from source data streams. This option allows the consultant to submit a Data Deletion request for an Individual profile in Data Cloud using the Consent API2. A Data Deletion request deletes the specified Individual entity and any entities where a relationship has been defined between that entity's identifying attribute and the Individual ID attribute2. The deletion process is reprocessed at 30, 60, and 90 days to ensure a full deletion2.
The other options are not correct because: Deleting the data from the incoming data stream and performing a full refresh will not delete the existing data in Data Cloud, only the new data from the source system3. Using Data Explorer to locate and manually remove the Individual will not delete the related records from the source data streams, only the Individual entity in Data Cloud.
Reference: Delete Individuals from Data Cloud, Requesting Data Deletion or Right to Be Forgotten, Data Refresh for Data Cloud, [Data Explorer]
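As a rough illustration of submitting such a deletion request with Python's requests library: the instance URL, API version, endpoint path, and parameters below are assumptions for illustration only and must be verified against the current Consent API documentation before use.

import requests

# Assumed values for illustration only.
INSTANCE = "https://example.my.salesforce.com"
ACCESS_TOKEN = "00D...session_token"   # obtained via OAuth beforehand
INDIVIDUAL_ID = "0PKxx0000004C92GAE"   # hypothetical Individual ID

# Hypothetical Consent API call; check the real path and payload shape
# in the Salesforce Consent API reference before relying on this.
response = requests.post(
    f"{INSTANCE}/services/data/v59.0/consent/action/shouldforget",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    params={"ids": INDIVIDUAL_ID},
)
response.raise_for_status()
print(response.json())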
Question 45Skipped
A customer wants to use the transactional data from their data warehouse in Data Cloud. They are only able to export the data via an SFTP site.
How should the file be brought into Data Cloud?
Ingest the file through the Cloud Storage Connector.
Manually import the file using the Data Import Wizard.
Use Salesforce's Data Loader application to perform a bulk upload from a desktop.
Ingest the file with the SFTP Connector.

Overall explanation
The SFTP Connector is a data source connector that allows Data Cloud to ingest data from an SFTP server. The customer can use the SFTP Connector to create a data stream from their exported file and bring it into Data Cloud as a data lake object.
The other options are not the best ways to bring the file into Data Cloud because:
B. The Cloud Storage Connector is a data source connector that allows Data Cloud to ingest data from cloud storage services such as Amazon S3, Azure Storage, or Google Cloud Storage. The customer does not have their data in any of these services, only on an SFTP site.
C. The Data Import Wizard is a tool that allows users to import data for many standard Salesforce objects, such as accounts, contacts, leads, solutions, and campaign members. It is not designed to import data from an SFTP site or for custom objects in Data Cloud.
D. The Data Loader is an application that allows users to insert, update, delete, or export Salesforce records. It is not designed to ingest data from an SFTP site or into Data Cloud.
Reference: SFTP Connector - Salesforce, Create Data Streams with the SFTP Connector in Data Cloud - Salesforce, Data Import Wizard - Salesforce, Salesforce Data Loader

Question 46Skipped
A Company uses Service Cloud as its CRM and stores mobile phone, home phone, and work phone as three separate fields for its customers on the Contact record. The company plans to use Data Cloud and ingest the Contact object via the CRM Connector.
What is the most efficient approach that a consultant should take when ingesting this data to ensure all the different phone numbers are properly mapped and available for use in activation?
Ingest the Contact object and then create a calculated insight to normalize the phone numbers, and then map to the Contact Point Phone data map object.
Ingest the Contact object and map the Work Phone, Mobile Phone, and Home Phone to the Contact Point Phone data map object from the Contact data stream.
Ingest the Contact object and create formula fields in the Contact data stream on the phone numbers, and then map to the Contact Point Phone data map object.
Ingest the Contact object and use streaming transforms to normalize the phone numbers from the Contact data stream into a separate Phone data lake object (DLO) that contains three rows, and then map this new DLO to the Contact Point Phone data map object.

Overall explanation
The most efficient approach is B: ingest the Contact object and use streaming transforms to normalize the phone numbers from the Contact data stream into a separate Phone data lake object (DLO) that contains three rows, and then map this new DLO to the Contact Point Phone data map object. This approach uses the streaming transforms feature of Data Cloud, which enables data manipulation and transformation at the time of ingestion, without requiring any additional processing or storage. Streaming transforms can be used to normalize the phone numbers from the Contact data stream, such as removing spaces, dashes, or parentheses, and adding country codes if needed. The normalized phone numbers can then be stored in a separate Phone DLO with one row for each phone number type (work, home, mobile). The Phone DLO can then be mapped to the Contact Point Phone data map object, which is a standard object that represents a phone number associated with a contact point.
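The normalization logic itself is simple. Streaming transforms are configured inside Data Cloud, so the following Python is only a sketch of the intended logic, with an assumed +1 default country code and illustrative field names:

import re

DEFAULT_COUNTRY_CODE = "+1"  # assumption for illustration

def normalize_phone(raw):
    """Strip spaces, dashes, dots, and parentheses; prefix a country code if missing."""
    digits = re.sub(r"[\s\-().]", "", raw)
    if not digits.startswith("+"):
        digits = DEFAULT_COUNTRY_CODE + digits
    return digits

contact = {"work": "(415) 555-0100", "home": "415.555.0101", "mobile": "+14155550102"}

# One output row per phone type, mirroring the three-row Phone DLO.
phone_rows = [
    {"phone_type": phone_type, "phone_number": normalize_phone(number)}
    for phone_type, number in contact.items()
]
print(phone_rows)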
This way, the consultant can ensure that all the phone numbers are available for activation, such as sending SMS messages or making calls to the customers.
The other options are not as efficient as option B. Option A is incorrect because it does not normalize the phone numbers, which may cause issues with activation or identity resolution. Option C is incorrect because it requires creating a calculated insight, which is an additional step that consumes more resources and time than streaming transforms. Option D is incorrect because it requires creating formula fields in the Contact data stream, which may not be supported by the CRM Connector or may cause conflicts with the existing fields in the Contact object.
Reference: Salesforce Data Cloud Consultant Exam Guide, Data Ingestion and Modeling, Streaming Transforms, Contact Point Phone

Question 47Skipped
A consultant is discussing the benefits of Data Cloud with a customer that has multiple disjointed data sources.
Which two functional areas should the consultant highlight in relation to managing customer data? Choose 2 answers
Data Marketplace
Unified Profiles
Master Data Management
Data Harmonization

Overall explanation
Data Cloud is an open and extensible data platform that enables smarter, more efficient AI with secure access to first-party and industry data1. Two functional areas that the consultant should highlight in relation to managing customer data are:
Data Harmonization: Data Cloud harmonizes data from multiple sources and formats into a common schema, enabling a single source of truth for customer data1. Data Cloud also applies data quality rules and transformations to ensure data accuracy and consistency.
Unified Profiles: Data Cloud creates unified profiles of customers and prospects by linking data across different identifiers, such as email, phone, cookie, and device ID1. Unified profiles provide a holistic view of customer behavior, preferences, and interactions across channels and touchpoints.
The other options are not correct because:
Master Data Management: Master Data Management (MDM) is a process of creating and maintaining a single, consistent, and trusted source of master data, such as product, customer, supplier, or location data. Data Cloud does not provide MDM functionality, but it can integrate with MDM solutions to enrich customer data.
Data Marketplace: Data Marketplace is a feature of Data Cloud that allows users to discover, access, and activate data from third-party providers, such as demographic, behavioral, and intent data. Data Marketplace is not a functional area related to managing customer data, but rather a source of external data that can enhance customer data.
Reference: Salesforce Data Cloud, [Data Harmonization for Data Cloud], [Unified Profiles for Data Cloud], [What is Master Data Management?], [Integrate Data Cloud with Master Data Management], [Data Marketplace for Data Cloud]

Question 48Skipped
Where is value suggestion for attributes in segmentation enabled when creating the DMO?
Data Stream Setup
Data Transformation
Data Mapping
Segment Setup

Overall explanation
Value suggestion for attributes in segmentation is a feature that allows you to see and select the possible values for a text field when creating segment filters. You can enable or disable this feature for each data model object (DMO) field in the DMO record home. Value suggestion can be enabled for up to 500 attributes for your entire org. It can take up to 24 hours for suggested values to appear.
To use value suggestion when creating segment filters, you drag the attribute onto the canvas and start typing in the Value field for the attribute. You can also select multiple values for some operators. Value suggestion is not available for attributes with more than 255 characters or for relationships that are one-to-many (1:N).
Reference: Use Value Suggestions in Segmentation, Considerations for Selecting Related Attributes

Question 49Skipped
What does the Source Sequence reconciliation rule do in identity resolution?
Sets the priority of specific data sources when building attributes in a unified profile, such as a first or last name
Includes data from sources where the data is most frequently occurring
Identifies which individual records should be merged into a unified profile by setting a priority for specific data sources
Identifies which data sources should be used in the process of reconciliation by prioritizing the most recently updated data source

Overall explanation
The Source Sequence reconciliation rule sets the priority of specific data sources when building attributes in a unified profile, such as a first or last name. This rule allows you to define which data source should be used as the primary source of truth for each attribute, and which data sources should be used as fallbacks in case the primary source is missing or invalid. For example, you can set the Source Sequence rule to use data from Salesforce CRM as the first priority, data from Marketing Cloud as the second priority, and data from Google Analytics as the third priority for the first name attribute. This way, the unified profile will use the first name value from Salesforce CRM if it exists; otherwise it will use the value from Marketing Cloud, and so on. This rule helps you ensure the accuracy and consistency of the unified profile attributes across different data sources.
Reference: Salesforce Data Cloud Consultant Exam Guide, Identity Resolution, Reconciliation Rules

Question 50Skipped
What is the result of a segmentation criteria filtering on City | Is Equal To | 'San José'?
Cities containing 'San José', 'San Jose', 'san jose', or 'san josé'
Cities only containing 'San Jose' or 'san jose'
Cities only containing 'San José' or 'san josé'
Cities only containing 'San Jose' or 'San José'

Overall explanation
The result of a segmentation criteria filtering on City | Is Equal To | 'San José' is cities only containing 'San José' or 'san josé'. This is because the segmentation criteria is case-insensitive but accent-sensitive, meaning it matches the entered characters regardless of letter case but does not ignore accents1. Therefore, cities containing 'San Jose' or 'san jose' will not be included in the result, as they do not match the accented filter value. To include cities with different variations of the name, you would need to use the OR operator and add multiple filter values, such as 'San José' OR 'San Jose'2.
Reference: Segmentation Criteria, Segmentation Operators
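A quick Python illustration of why only the accented variants qualify under a case-insensitive, accent-sensitive comparison; this is plain string handling shown only to make the matching behavior concrete, not Data Cloud's implementation:

filter_value = "San José"
candidates = ["San José", "san josé", "San Jose", "san jose"]

# Case-insensitive comparison that still distinguishes accents.
matches = [c for c in candidates if c.casefold() == filter_value.casefold()]
print(matches)  # ['San José', 'san josé'] - unaccented spellings do not match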
Question 51Skipped
What does it mean to build a trust-based, first-party data asset?
To ensure opt-in consents are collected for all email marketing as required by law
To obtain competitive data from reliable sources through interviews, surveys, and polls
To provide trusted, first-party data in the Data Cloud Marketplace that follows all compliance regulations
To provide transparency and security for data gathered from individuals who provide consent for its use and receive value in exchange

Overall explanation
Building a trust-based, first-party data asset means collecting, managing, and activating data from your own customers and prospects in a way that respects their privacy and preferences. It also means providing them with clear and honest information about how you use their data, what benefits they can expect from sharing their data, and how they can control their data. By doing so, you can create a mutually beneficial relationship with your customers, where they trust you to use their data responsibly and ethically, and you can deliver more relevant and personalized experiences to them. A trust-based, first-party data asset can help you improve customer loyalty, retention, and growth, as well as comply with data protection regulations and standards.
Reference: Use first-party data for a powerful digital experience, Why first-party data is the key to data privacy, Build a first-party data strategy

Question 52Skipped
A customer requests that their personal data be deleted.
Which action should the consultant take to accommodate this request in Data Cloud?
Use a streaming API call to delete the customer's information.
Use Profile Explorer to delete the customer data from Data Cloud.
Use the Data Rights Subject Request tool to request deletion of the customer's information.
Use Consent API to request deletion of the customer's information.

Overall explanation
The Data Rights Subject Request tool is a feature that allows Data Cloud users to manage customer requests for data access, deletion, or portability. The tool provides a user interface and an API to create, track, and fulfill data rights requests. The tool also generates a report that contains the customer's personal data and the actions taken to comply with the request. The consultant should use this tool to accommodate the customer's request for data deletion in Data Cloud.
Reference: Data Rights Subject Request Tool, Create a Data Rights Subject Request

Question 53Skipped
A Company wants to send a promotional campaign for customers that have purchased within the past 6 months. The consultant created a segment to meet this requirement. Now, the Company brings an additional requirement to suppress customers who have made purchases within the last week.
What should the consultant use to remove the recent customers?
Segmentation exclude rules
Related attributes
Batch transforms
Streaming insight

Overall explanation
The consultant should use segmentation exclude rules to remove the recent customers. Segmentation exclude rules are filters that can be applied to a segment to exclude records that meet certain criteria. The consultant can use segmentation exclude rules to exclude customers who have made purchases within the last week from the segment that contains customers who have purchased within the past 6 months. This way, the segment will only include customers who are eligible for the promotional campaign.
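In effect, an exclude rule is a set difference between the include criteria and the exclude criteria. A minimal Python illustration, with plain sets standing in for segment membership purely for intuition:

purchased_last_6_months = {"cust_01", "cust_02", "cust_03", "cust_04"}
purchased_last_week = {"cust_02", "cust_04"}

# Include rule minus exclude rule = final segment membership.
eligible_for_campaign = purchased_last_6_months - purchased_last_week
print(sorted(eligible_for_campaign))  # ['cust_01', 'cust_03']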
The other options are not correct. Option A is incorrect because batch transforms are data processing tasks that can be applied to data streams or data lake objects to modify or enrich the data. Batch transforms are not used for segmentation or activation. Option C is incorrect because related attributes are attributes derived from the relationships between data model objects. Related attributes are not used for excluding records from a segment. Option D is incorrect because streaming insights are derived attributes that are calculated at the time of data ingestion. Streaming insights are not used for excluding records from a segment.
Reference: Salesforce Data Cloud Consultant Exam Guide, Segmentation, Segmentation Exclude Rules

Question 54Skipped
A Company created a segment called High Investment Balance Customers. This is a foundational segment that includes several segmentation criteria the marketing team should consistently use.
Which feature should the consultant suggest the marketing team use to ensure this consistency when creating future, more refined segments?
Package High Investment Balance Customers in a data kit.
Create new segments by cloning High Investment Balance Customers.
Create a High Investment Balance calculated insight.
Create new segments using nested segments.

Overall explanation
Nested segments are segments that include or exclude one or more existing segments. They allow the marketing team to reuse filters and maintain consistency in their data by using an existing segment to build a new one. For example, the marketing team can create a nested segment that includes High Investment Balance Customers and excludes customers who have opted out of email marketing. This way, they can leverage the foundational segment and apply additional criteria without duplicating the rules.
The other options are not the best features to ensure consistency because:
B. A calculated insight is a data object that performs calculations on data lake objects or CRM data and returns a result. It is not a segment and cannot by itself enforce consistent segmentation criteria.
C. A data kit is a bundle of packageable metadata that can be exported and imported across Data Cloud orgs. It is not a feature for creating segments, but rather for sharing components.
D. Cloning a segment creates a copy of the segment with the same rules and filters. Changes to the original segment do not propagate to its clones, so criteria can drift and create confusion and redundancy.
Reference: Create a Nested Segment - Salesforce, Save Time with Nested Segments (Generally Available) - Salesforce, Calculated Insights - Salesforce, Create and Publish a Data Kit Unit | Salesforce Trailhead, Create a Segment in Data Cloud - Salesforce

Question 55Skipped
A Company creates a calculated insight to compute recency, frequency, monetary (RFM) scores on its unified individuals. It then creates a segment based on these scores that it activates to a Marketing Cloud activation target.
Which two actions are required when configuring the activation? Choose 2 answers
Add additional attributes.
Choose a segment.
Add the calculated insight in the activation.
Select contact points.

Overall explanation
To configure an activation to a Marketing Cloud activation target, you need to choose a segment and select contact points. Choosing a segment allows you to specify which unified individuals you want to activate. Selecting contact points allows you to map the attributes from the segment to the fields in the Marketing Cloud data extension. You do not need to add additional attributes or add the calculated insight in the activation, as these are already part of the segment definition.
Reference: Create a Marketing Cloud Activation Target; Types of Data Targets in Data Cloud

Question 56Skipped
When creating a segment on an individual, what is the result of using two separate containers linked by an AND as shown below?
GoodsProduct | Count | At Least | 1, Color | Is Equal To | 'red'
AND
GoodsProduct | Count | At Least | 1, PrimaryProductCategory | Is Equal To | 'shoes'
Individuals who purchased at least one of any 'red' product or purchased at least one pair of 'shoes'
Individuals who purchased at least one of any 'red' product and also purchased at least one pair of 'shoes'
Individuals who purchased at least one red shoes as a single line item in a purchase
Individuals who made a purchase of at least one red shoes and nothing else

Overall explanation
When creating a segment on an individual, using two separate containers linked by an AND means that the individual must satisfy the conditions in both containers. In this case, the individual must have purchased at least one product with the color attribute equal to 'red' and at least one product with the primary product category attribute equal to 'shoes'. The products do not have to be the same or purchased in the same transaction. Therefore, the correct answer is A.
The other options are incorrect because they imply different logical operators or conditions. Option B implies that the individual must have purchased a single product that has both the color attribute equal to 'red' and the primary product category attribute equal to 'shoes'. Option C implies that the individual must have purchased only one product that has both attributes and no other products. Option D implies that the individual must have purchased either one 'red' product or one 'shoes' product or both, which is equivalent to using an OR operator instead of an AND operator.
Reference: Create a Container for Segmentation, Create a Segment in Data Cloud, Navigate Data Cloud Segmentation
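A minimal Python illustration of the container semantics: each container is evaluated independently over the individual's purchases, and the results are combined with AND. The data shapes are illustrative only, not Data Cloud structures:

purchases = [
    {"color": "red", "category": "hats"},
    {"color": "blue", "category": "shoes"},
]

# Each container is an independent existence check over all purchases.
container_1 = sum(1 for p in purchases if p["color"] == "red") >= 1
container_2 = sum(1 for p in purchases if p["category"] == "shoes") >= 1

# AND between containers: both must hold, but not for the same line item.
in_segment = container_1 and container_2
print(in_segment)  # True, even though no single product is both red and shoes

The sample data makes the key point: the individual qualifies even though no single line item is both 'red' and 'shoes', which is exactly why option B is wrong.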
Question 57Skipped
A Company uses Data Cloud to segment banking customers and activate them for direct mail via a Cloud File Storage activation. The company also wants to analyze individuals who have been in the segment within the last 2 years.
Which Data Cloud component allows for this?
Segment exclusion
Nested segments
Calculated insights
Segment membership data model object

Overall explanation
Data Cloud allows customers to analyze the segment membership history of individuals using the Segment Membership data model object. This object stores information about when an individual joined or left a segment, and it can be used to create reports and dashboards that track segment performance over time. The company can use this object to filter individuals who have been in the segment within the last 2 years and compare them with other metrics.
The other options are not Data Cloud components that allow for this analysis. Segment exclusion is a feature that allows customers to remove individuals from a segment based on another segment. Nested segments are segments that are created from other segments using logical operators. Calculated insights are derived attributes that are created from existing data using formulas.
Reference: Segment Membership Data Model Object, Data Cloud Reports and Dashboards, Create a Segment in Data Cloud

Question 58Skipped
A Company uploads new customer data to an Amazon S3 Bucket on a daily basis to be ingested in Data Cloud.
In what order should each process be run to ensure that freshly imported data is ready and available to use for any segment?
Refresh Data Stream > Calculated Insight > Identity Resolution
Identity Resolution > Refresh Data Stream > Calculated Insight
Refresh Data Stream > Identity Resolution > Calculated Insight
Calculated Insight > Refresh Data Stream > Identity Resolution

Overall explanation
To ensure that freshly imported data from an Amazon S3 Bucket is ready and available to use for any segment, the processes should be run in this order:
Refresh Data Stream: This process updates the data lake objects in Data Cloud with the latest data from the source system. It can be configured to run automatically or manually, depending on the data stream settings1. Refreshing the data stream ensures that Data Cloud has the most recent and accurate data from the Amazon S3 Bucket.
Identity Resolution: This process creates unified individual profiles by matching and consolidating source profiles from different data streams based on the identity resolution ruleset. It runs daily by default, but can be triggered manually as well2. Identity resolution ensures that Data Cloud has a single view of each customer across different data sources.
Calculated Insight: This process performs calculations on data lake objects or CRM data and returns a result as a new data object. It can be used to create metrics or measures for segmentation or analysis purposes3. Calculated insights ensure that Data Cloud has the derived data that can be used for personalization or activation.
Reference: 1: Configure Data Stream Refresh and Frequency - Salesforce, 2: Identity Resolution Ruleset Processing Results - Salesforce, 3: Calculated Insights - Salesforce
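As a mnemonic, the required order can be expressed as a strictly sequential pipeline. The function names below are placeholders for the Data Cloud processes, not real APIs:

def refresh_data_stream():
    print("1. Pull the latest files from the S3 bucket into the DLOs")

def run_identity_resolution():
    print("2. Match and consolidate source profiles into unified profiles")

def run_calculated_insights():
    print("3. Recompute the derived metrics that segments rely on")

# Each step consumes the output of the one before it, so the order is fixed.
for step in (refresh_data_stream, run_identity_resolution, run_calculated_insights):
    step()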
Question 59Skipped
During an implementation project, a consultant completed ingestion of all data streams for their customer.
Prior to segmenting and acting on that data, which additional configuration is required?
Identity Resolution
Data Mapping
Calculated Insights
Data Activation

Overall explanation
After ingesting data from different sources into Data Cloud, the additional configuration that is required before segmenting and acting on that data is Identity Resolution. Identity Resolution is the process of matching and reconciling source profiles from different data sources and creating unified profiles that represent a single individual or entity1. Identity Resolution enables you to create a 360-degree view of your customers and prospects, and to segment and activate them based on their attributes and behaviors2. To configure Identity Resolution, you need to create and deploy a ruleset that defines the match rules and reconciliation rules for your data3.
The other options are incorrect because they are not required before segmenting and acting on the data. Data Activation is the process of sending data from Data Cloud to other Salesforce clouds or external destinations for marketing, sales, or service purposes4. Calculated Insights are derived attributes that are computed based on the source or unified data, such as lifetime value, churn risk, or product affinity5. Data Mapping is the process of mapping source attributes to unified attributes in the data model. These configurations can be done after segmenting and acting on the data, or in parallel with Identity Resolution, but they are not prerequisites for it.
Reference: Identity Resolution Overview, Segment and Activate Data in Data Cloud, Configure Identity Resolution Rulesets, Data Activation Overview, Calculated Insights Overview, [Data Mapping Overview]

Question 60Skipped
A consultant wants to ensure that every segment managed by multiple brand teams adheres to the same set of exclusion criteria that are updated on a monthly basis.
What is the most efficient option to allow for this capability?
Create, publish, and deploy a data kit.
Create a reusable container block with common criteria.
Create a nested segment.
Create a segment and copy it for each brand.

Overall explanation
The most efficient option to allow for this capability is to create a reusable container block with common criteria. A container block is a segment component that can be reused across multiple segments. A container block can contain any combination of filters, nested segments, and exclusion criteria. A consultant can create a container block with the exclusion criteria that apply to all the segments managed by multiple brand teams, and then add the container block to each segment. This way, the consultant can update the exclusion criteria in one place and have them reflected in all the segments that use the container block.
The other options are not the most efficient options to allow for this capability. Creating, publishing, and deploying a data kit is a way to share data and segments across different data spaces, but it does not allow for updating the exclusion criteria on a monthly basis. Creating a nested segment is a way to combine segments using logical operators, but it does not allow for excluding individuals based on specific criteria. Creating a segment and copying it for each brand creates multiple segments with the same exclusion criteria, but it does not allow for updating the exclusion criteria in one place.
Reference: Create a Container Block, Create a Segment in Data Cloud, Create and Publish a Data Kit, Create a Nested Segment

Question 1Correct
A Company wants to segregate Salesforce CRM Account data based on Country for its Data Cloud users.
What should the consultant do to accomplish this?
Use formula fields based on the account Country field to filter incoming records.
Use the data spaces feature and apply filtering on the Account data lake object based on Country.
Use Salesforce sharing rules on the Account object to filter and segregate records based on Country.
Use streaming transforms to filter out Account data based on Country and map to separate data model objects accordingly.

Overall explanation
Data spaces are a feature that allows Data Cloud users to create subsets of data based on filters and permissions. Data spaces can be used to segregate data based on different criteria, such as geography, business unit, or product line. In this case, the consultant can use the data spaces feature and apply filtering on the Account data lake object based on Country. This way, the Data Cloud users can access only the Account data that belongs to their respective countries.
Reference: Data Spaces, Create a Data Space

Question 2Correct
A user is not seeing suggested values from newly-modeled data when building a segment.
What is causing this issue?
Value suggestion is still processing and takes up to 24 hours to be available.
Value suggestion will only return results for the first 50 values of a specific attribute.
Value suggestion can only work on direct attributes and not related attributes.
Value suggestion requires Data Aware Specialist permissions at a minimum.

Overall explanation
The most likely cause of this issue is that value suggestion is still processing and takes up to 24 hours to be available. Value suggestion is a feature that enables you to see suggested values for data model object (DMO) fields when creating segment filters. However, this feature needs to be enabled for each DMO field, and it can take up to 24 hours for the suggested values to appear after enabling the feature1. Therefore, if a user is not seeing suggested values from newly-modeled data, it could be that the data has not yet been processed by the value suggestion feature.
Reference: Use Value Suggestions in Segmentation

Question 3Correct
A Company wants to be able to track the daily transaction volume of each of its customers in real time and send out a notification as soon as it detects volume outside a customer's normal range.
What should a consultant do to accommodate this request?
Use a calculated insight paired with a flow.
Use streaming data transform combined with a data action.
Use streaming data transform with a flow.
Use a streaming insight paired with a data action.

Overall explanation
A streaming insight is a type of insight that analyzes streaming data in real time and triggers actions based on predefined conditions. A data action is a type of action that executes a flow, a data action target, or a data action script when an insight is triggered. By using a streaming insight paired with a data action, a consultant can accommodate the company's request to track the daily transaction volume of each customer and send out a notification when the volume is outside the normal range. A calculated insight is a type of insight that performs calculations on data in a data space and stores the results in a data extension. A streaming data transform is a type of data transform that applies transformations to streaming data in real time and stores the results in a data extension. A flow is a type of automation that executes a series of actions when triggered by an event, a schedule, or another flow. None of these options can achieve the same functionality as a streaming insight paired with a data action.
Reference: Use Insights in Data Cloud Unit, Streaming Insights and Data Actions Use Cases, Streaming Insights and Data Actions Limits and Behaviors

Question 4Incorrect
A Data Cloud consultant recently discovered that their identity resolution process is matching individuals that share email addresses or phone numb