
SFDC Data Cloud Notes.pdf




In the cookieless future, most companies' primary source of digital information about customers will be first-party data.

Data Cloud Value and Use Cases at the Industry Level:
- Financial Services: It can spot important personal events like marriage or retirement to help banks offer relevant services to grow customers' savings and investments. It can quickly identify and alert customers about unusual card activity to prevent fraud.
- Healthcare and Life Sciences: It can pull together data from various health devices to create a comprehensive health score, helping doctors pinpoint when a patient might need extra care. It helps get doctors and healthcare providers up to speed quickly and keeps them engaged by giving a complete view of patient interactions across different channels.
- Retail and Consumer Goods: It can merge customer information to create targeted ads and find new potential customers who share traits with the best existing ones. It uses live data to catch important shopping behaviors, like when a customer leaves items in their online cart, to send timely reminders or updates about their orders.
- Media and Communications: It helps figure out the best time to suggest new products or upgrades by looking at past purchases and current customer behavior. It transforms customer support by using a full view of the customer's history, encouraging service agents to recommend new deals.

Benefits of Data Cloud:
- Reduce costs by combining Einstein's AI-powered predictions, recommendations, and insights with real-time unified profile data to personalize every customer experience.
- Increase productivity by connecting real-time data with Salesforce Flow to automate any business process and reduce manual tasks.
- Reduce time to market by powering low-code app development.
You can use Data Cloud to create unique audience segments to market across various channels, including Marketing Cloud.

Customer 360 Data Model, High Level:
Customer 360 Data Model (C360DM): Think of it as the "Rosetta Stone" for a company's data. Just like the Rosetta Stone helped us understand different languages, the C360DM helps different parts of a company's software understand each other's data. It does this by creating a common language for all the data from different places (like sales, marketing, customer service, etc.), no matter how it's structured. This common language makes it much easier for the company to get a complete view of their customers.
Why it's helpful:
- Saves time: Instead of spending hours manually linking different data types, C360DM does this automatically.
- Improves quality: It reduces errors that can happen when people try to mix and match data manually.
- Customization: Companies aren't stuck with one-size-fits-all; they can still tailor the system to their needs.

Key terms:
- Subject Area: Imagine you have a bookshelf organized by genres. Each "subject area" is like one genre on that shelf, helping you find the kind of book you want more easily. In data terms, it's a way to organize similar types of data to make modeling easier.
- Data Stream: Think of a data stream like a river carrying water to a lake. The "water" is data flowing into Data Cloud, and it can either flow continuously or just drop in occasionally, like a river that floods once a day.
- Data Model Object (DMO): This is like a container in a warehouse. Each container (DMO) holds items (data attributes) that belong together. For example, one container might hold all the information related to sales orders, like dates, amounts, and customer names.
- Attribute: An attribute is just a specific detail. In our warehouse, it would be a single item in a container, like a customer's first name.
- Foreign Key: This is like a passport number. Just as a passport number can help link a traveler to their travel records, a foreign key in a database helps connect one piece of data to related data in a different table.
- Data Bundles: These are pre-packaged sets of data, kind of like meal kits. Instead of gathering and preparing each ingredient (data point) yourself, you get everything you need in one convenient package, making it easier and faster to cook up what you need.
(A toy sketch at the end of this section shows how a DMO, its attributes, and a foreign key fit together.)

By using all these tools, Salesforce Data Cloud can provide companies with a clearer and more complete picture of their customers. This unified view can lead to better customer service, more effective marketing, and ultimately more sales, as companies are better able to understand and anticipate customer needs.

As a marketer, Data Cloud allows you to:
- Ingest data from a variety of sources, including second- and third-party data.
- Cleanse unusable or "dirty" data.
- Use suggested or custom models to facilitate data mapping that identifies relationships between your data.
- Narrow down data to identify each customer.
- Determine where you want to send your segmented data (Marketing Cloud, Google Cloud Analytics, etc.).
- Create flexible audience segments using easy drag-and-drop functionality.
- Analyze and visualize customer engagement across channels and touchpoints.

Roles in Data Cloud setup:
- Data Cloud for Marketing Admin: Responsible for the setup of the application, user provisioning, and assigning permission sets within the system. This role has access to Salesforce Sales and Service Clouds in addition to other integrated systems within the Core cloud platform. The admin executes day-to-day configuration, support, maintenance and improvement, and performs regular internal system audits.
- Data Cloud for Marketing Data Aware Specialist: Creates and manages the data model defined by the team. They also create identity resolution rules, create data streams, map data in the data model, and build calculated insights.
- Data Cloud for Marketing Manager: Manages the overall segmentation strategy and identifies target campaigns.
- Data Cloud for Marketing Specialist: Creates, manages, and publishes segments for the identified campaigns.

Each Data Cloud instance can be connected to one enterprise Marketing Cloud instance (a single EID), with all associated business units.

Audience suppression: Create a suppression audience for marketing and advertising campaigns based on business use cases. For example, Isabelle collaborates with the Customer Support team to suppress customers with open support tickets from receiving marketing communications, and Jason immediately notices this improves customer satisfaction scores.

Consent management: Deliver communications based on channel-specific consent and customer preference.

Data Cloud includes connectors to every Salesforce cloud; cloud storage platforms, including Amazon S3 and Google; streaming data from web and mobile sources; APIs integrated with MuleSoft; and custom API connections. Once ingested, the data is then mapped to a standardized data model based on Salesforce Customer 360. This ensures Data Cloud has a uniform way of identifying everything from individuals, to transactions, to channel engagement.
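As a toy illustration of how the glossary terms above fit together, here is a small sketch (these are invented names, not real Data Cloud objects or fields) of a sales-order DMO whose attributes include a foreign key that links each transaction back to an individual:

```python
# Toy illustration only - not the actual Data Cloud schema.
# A "SalesOrder" DMO is a container of related attributes, and its
# individual_id attribute acts as a foreign key back to the "Individual" DMO.
from dataclasses import dataclass

@dataclass
class Individual:           # a DMO: a container of related attributes
    individual_id: str      # primary identifier
    first_name: str         # an attribute: one specific detail
    email: str

@dataclass
class SalesOrder:           # another DMO
    order_id: str
    order_date: str
    total_amount: float
    individual_id: str      # foreign key: connects this order to its Individual

alex = Individual(individual_id="IND-001", first_name="Alex", email="alex@example.com")
order = SalesOrder(order_id="SO-100", order_date="2024-01-15",
                   total_amount=59.99, individual_id=alex.individual_id)
```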
Data Cloud basically ingests data from a bunch of different systems and then maps that data to a standardized data model. Data Cloud identity resolution allows customers to bring identities together and match them across common data points—such as an email address or first and last name—creating a unified view of each customer. Standardized and unified customer data can be used within Data Cloud to power insights like Customer Lifetime Value (a sketch of such an insight appears at the end of this section), or be connected to external artificial intelligence (AI) or business intelligence (BI) tools for predictions, recommendations, and data discovery. It can update Lightning pages in App Builder or trigger Flows; power decisions, real-time applications, analytics, and more, on one platform; and publish unified segments of customers to advertising or marketing platforms.

Data Cloud for Financial Services:
- Use Segments to identify major life events, such as graduation, first job, marriage, childbirth, divorce, retirement, or inheritance, to grow deposits and generate revenue.
- Use Streaming Insights to detect possible fraudulent transactions and launch a real-time journey to notify customers to review any suspicious behavior.

Data Cloud for Healthcare and Life Sciences:
- Integrate Data Cloud with multiple health telematics systems to calculate a Unified Health Score, or identify critical points for intervention in care.
- Onboard healthcare providers with greater efficiency and reduced risk of churn with aggregated insights across multiple channels.

Data Cloud for Retail and Consumer Goods:
- Unify known consumer profiles to identify key audiences for personalized ads and new audiences through look-alikes.
- Select audiences by top purchases, top-tier loyalty, and highly engaged customers.
- Use real-time engagement data to identify moments that matter in a customer journey, from an abandoned cart to delivery notifications.

Data Cloud for Media and Communications:
- Identify an optimal purchasing window for new devices and services by combining historical transactions and real-time behavioral data.
- Turn customer service into an upsell channel by having a shared, Customer 360 view across subscription management and sales.
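The notes later describe Calculated Insights as SQL-based formulas. Purely as a hedged sketch of what an insight such as Customer Lifetime Value might look like, with hypothetical object and field API names (the real data model object names in an org will differ):

```python
# Hedged sketch only: a Calculated Insight is defined by an ANSI-SQL-style
# expression over data model objects. Every object/field name below is a
# made-up placeholder, not guaranteed to exist in any given org.
CUSTOMER_LIFETIME_VALUE_SQL = """
SELECT
    SalesOrder__dlm.IndividualId__c      AS customer_id__c,    -- dimension
    SUM(SalesOrder__dlm.TotalAmount__c)  AS lifetime_value__c  -- measure
FROM
    SalesOrder__dlm
GROUP BY
    SalesOrder__dlm.IndividualId__c
"""
# In practice the expression is entered in the Data Cloud Calculated Insights
# builder (or packaged as a "Calculated Insight Object Definition", as the
# packaging notes below describe) rather than submitted from code.
```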
Sharing Rules and other data restrictions in Core CRM DO NOT apply to data stored in Data Cloud. Data Cloud data is not stored as sObjects, but in the Data Lake outside of Core CRM.

You can provision Data Cloud within an SFDC org currently used by a business if:
- The customer has a single line of business
- Customer data is housed in a single SFDC org
- Use cases require OOTB Data Cloud LWCs (Lightning Web Components, a way to build user interfaces for SFDC apps) and search capabilities for service agents (I don't know what these use cases would be)
However, keep in mind that:
- Data org migration and object model refactoring require Data Cloud rework or reimplementation
- Data Cloud still requires API access to sObjects from within this org because it replicates that data to the Data Lake
- A single Data Cloud instance can connect to multiple SFDC Core orgs

You can create a new SFDC org solely for housing Data Cloud if:
- Multiple Salesforce customer orgs exist
- A highly complex enterprise architecture exists
- Data Cloud administration users are different from SFDC admin users
- The existing data org is customized
Keep in mind that:
- Data Cloud is meant to be supported at a multi-tenant level
- You'll need to build custom Lightning Web Components to provide these users with Data Cloud data views
- Data Cloud can connect to multiple SFDC Core orgs, but only one Marketing Cloud account

Steps of the setup process for Data Cloud:
1) Set up the Data Cloud account
   a) The admin has to assign themselves the Customer Data Platform Admin permission set, then set up the Data Cloud instance
2) Configure additional users by creating profiles
   a) You have to assign permission sets, which are as follows:
      i) Customer Data Platform Admin
         (1) Sets up the app, provisions users, and has access to Sales and Service Clouds.
             Conducts day-to-day configuration, support, maintenance, improvement, and regular system audits
      ii) Customer Data Platform Data Aware Specialist
         (1) Creates data streams, maps data to the data model, creates identity resolution rulesets for unified profiles, and creates calculated insights
      iii) Customer Data Platform Marketing Manager
         (1) Responsible for segmentation strategy; creates activation targets and activations, and holds the "Customer Data Platform Marketing Specialist" permission
      iv) Customer Data Platform Marketing Specialist
         (1) Responsible for creating segments in Customer Data Platform
3) Set up connectors to connect data sources
   a) Relevance of clouds:
      i) A cloud is relevant only if a customer has a license for that specific cloud
      ii) If a customer only has a Sales Cloud license, that's the only relevant cloud when it comes to connecting with Data Cloud
   b) Before connecting a data source, an admin must set up the connectors to bring data in
      i) Data Cloud has pre-built connectors to:
         (1) All Salesforce clouds
         (2) External sources, like external file storage (GCP, AWS S3)
         (3) API connectors - Ingestion API, Web and Mobile SDK
      ii) Each connector varies in terms of how the data can be ingested:
         (1) The CRM connector and Marketing Cloud connector can batch upload data to Data Cloud
         (2) The Ingestion API is near real time (processes records every 15 minutes); a hedged sketch of calling it appears at the end of this section
         (3) The Web and Mobile connectors are real time (can process records every 2 minutes)

The Salesforce CRM connector supports connections to the following CRM org types:
- Home org: the org where Data Cloud is installed
- External orgs: customers can connect to any production external orgs
- Sandbox orgs: customers can connect to any sandbox external orgs

Use cases for the Google Cloud Storage connector:
- Connecting Google Analytics and Google BigQuery with Data Cloud
- Sample example: a Data Cloud segment of any customers who have browsed running shoes on the website in the past seven days but have not yet purchased anything. You may want to activate that audience for personalized omni-channel messaging to influence their behavior to complete the online purchase.
- Allows you to enrich Data Cloud profiles with Google Analytics data

Steps to ingest/activate Google Cloud data:
- Ingest data using Google buckets - ingest flat-file data using Google Cloud Storage
- Define Google buckets in Setup - register buckets in Setup to simplify stream definition and management
- Limits and refresh schedules in GCS:
  - Five GCS connections per org are supported
  - Data and files are kept in sync hourly with Data Cloud infrastructure
  - Data from GCS is synced with Data Cloud ONCE PER HOUR

Google Cloud connector implementation steps:
- Create a connection
- Create a data stream
- Monitor the stream

As an admin, you can package and install your Data Cloud Amazon S3 data streams for distribution. A package is a container for Salesforce metadata components that can be either individual configurations or an entire custom SFDC platform app. You'll also want to identify your account's functional domain, which refers to the SFDC public cloud infrastructure where the instance is located.
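As a hedged illustration of the Ingestion API mentioned above: the source connector name ("my_source"), object name ("runner_profile"), host, and token handling below are hypothetical placeholders; the real values come from the Ingestion API connector and schema configured in Setup. A streaming insert might look roughly like this:

```python
# Rough sketch only: pushing records to a Data Cloud Ingestion API connector.
# Connector and object names are made-up placeholders; authentication is
# simplified to a bearer token obtained separately via an OAuth flow.
import requests

INSTANCE = "https://<your-data-cloud-instance>"   # placeholder host
TOKEN = "<access-token-from-oauth-flow>"          # obtained beforehand

payload = {
    "data": [
        {"email": "alex@example.com", "first_name": "Alex", "last_shoe_browse": "2024-01-15"}
    ]
}

resp = requests.post(
    f"{INSTANCE}/api/v1/ingest/sources/my_source/runner_profile",
    headers={"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
print(resp.status_code)  # records are then processed on the connector's near-real-time cadence
```

The key point is that the connector and its object schema are defined ahead of time in Data Cloud; code like this only pushes records that match that schema.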
Auditing and ongoing maintenance of a Data Cloud account:
- Monitor usage entitlements - make sure you are aware of any activities that impact your contract (unified profiles, segment publishes, engagement events or records)
- View identity resolution processing history
- View record modification fields - these store the name of the user who created the record and who last modified it
- View and monitor setup changes with the Setup Audit Trail - logs when modifications are made to the organization's configuration
- View and monitor login history
- Monitor account status on Salesforce Trust

Salesforce packages are like boxes filled with various tools and features (known as components) that you can move from one Salesforce environment to another. Imagine you have a toolbox that you've customized for a specific job - this toolbox is your package, and the job sites are the different Salesforce environments (or orgs). There are two main types of packages you can use:

Unmanaged Package:
Think of an unmanaged package as a box of Lego blocks. You give it to someone, and they can build anything they like with those blocks, even paint or reshape them. If you come up with a new set of blocks or a new design, they can't just add it to the existing set. They must put away the old set and take your new one.
- You can create these Lego sets (unmanaged packages) with any kind of Salesforce account, whether it's a big business or a small developer account. If you've got your name or label on those blocks (using namespaces), don't mix up your sets in one box (or Developer Edition org) - it gets confusing!
- They can be created in any SFDC edition (Enterprise, Unlimited, Performance, and Developer)
- Unmanaged packages are usually used for a one-time migration of metadata
- All components are editable
- Not upgradeable or supported
- No control over components once installed
- Unmanaged packages support packaging AWS S3 data streams with relationships to both standard and custom data models

Managed Package:
A managed package is more like a high-tech gadget that you've made. It has your brand name (namespace) and you can control it. You can send out updates and improvements directly to the gadget after your friends have it - like updating software on a phone. If you think your gadget is really cool and you want to sell it in the Salesforce App Store (AppExchange), you'd go for this option.
- Typically used by software vendors to sell their apps
- Contains a distinguishing namespace
- Managed packages support packaging AWS S3 data streams with relationships to custom data models only

Summary of managed/unmanaged packages:
Unmanaged packages are for when you want to share something that can be changed by anyone, and there's no easy way to update it. Managed packages are for when you want to keep control and be able to send updates, and possibly sell your package in the Salesforce marketplace. Unmanaged packages are not upgradeable, but managed packages are. Packages are distributed containers and can include Data Streams, Data Models, and Calculated Insights. Unmanaged packages support packaging AWS S3 data streams with relationships to standard and custom data models, but managed packages support packaging AWS S3 data streams with relationships only to custom data models.

Packageable components:
When we talk about "packageable components," we're referring to different elements or features in Salesforce Data Cloud that you can bundle up and transfer or replicate across different areas of the Salesforce platform.
Let's look at what you can put into these bundles, which come in two types: Standard Salesforce Packages and Data Kit Packages.

Standard Salesforce Package:
- Calculated Insights: These are like special formulas and calculations that help you sort and understand your customers better. You can put these SQL-based formulas in a standard package.
- S3 Data Streams: Think of data streams as rivers of information. For S3 data streams, the package includes not just the flow of data but also the directions on where it should go (mapping). These can be used with the regular setup Salesforce offers or with custom setups you've created. You don't need an admin to set up this connection.
- Ingestion API Data Streams: Similar to S3 data streams, these are another kind of information flow that comes from different sources outside Salesforce. You can package the data and its mapping instructions for standard or customized structures in Salesforce.
- Data Models: These are the blueprints that define how your data is organized. You can include custom blueprints in your standard package.

Data Kit Package:
- Commerce and CRM Data Streams: These are specific types of data flows coming from Salesforce's Commerce Cloud and CRM. They can be neatly packed in a data kit, making it easier to handle sales and customer information.
- Data Model: When you add a data stream to a data kit, it automatically brings along the blueprint it's connected to. This blueprint is set up for you so that it's ready to use for sorting and understanding your customer base right away.

In essence, Standard Salesforce Packages are for general Data Cloud elements that can be used with the default or custom setups, while Data Kit Packages are more specialized, catering to commerce and customer relationship data, and they come with some automation features to make setting up easier.

Why use packaged components?
Packaged components are like pre-assembled kits that help Salesforce developers move configurations and features from one place to another easily. This is really handy when you're making apps because it lets you:
- Test Configurations: You can try out different settings and setups without affecting your main Salesforce environment. It's like having a rehearsal before the live performance.
- Move Faster: These packages allow you to work quickly and more reliably because you're reusing proven components rather than making new ones from scratch every time.
- Stay Organized: By using packages, you keep your work organized. You know exactly what's inside each package, which makes it easier to manage and update things.
- Ensure Quality: You can be more confident in the quality because you've tested everything thoroughly in different stages before it goes live.

Application Lifecycle Management (ALM) stages:
When you're building an application, you generally go through these stages:
- Plan: Deciding what your app should do and planning the features.
- Code: Writing the actual code for your app.
- Merge & Test: Combining different parts of the code and checking to make sure everything works together.
- Test & UAT (User Acceptance Testing): Making sure not only that it works, but that it meets the users' needs.
- Release: Making the app available for users in the live environment.

How developers use packages in ALM:
Here's a simple way to understand how packages fit into the ALM process: Developers build their app in a "development org," which is like their workshop.
They then put together a package from this workshop, which is like boxing up a DIY furniture set with all the necessary parts and instructions. Next, they bring this box over to a "test org," which is like a display room where they set up the furniture to make sure it looks and functions as expected. If everything checks out, the package, or DIY set, can be delivered to the "production org," which is like the customer's home where the furniture will actually be used. This is particularly important for Data Cloud because it ensures that all the data pieces fit together correctly and work as expected, which is crucial for features like identifying who your customers are and grouping them into segments.

Typical environment setup for packaging:
The usual setup for working with packages is like having different rooms for different stages:
- Developer: This is the workspace where the app is first built.
- Test: This is like a testing ground to try out the app and make sure it's working properly.
- Production: This is the final destination, like a store where the app is made available to users.
For standard packages, you need at least two rooms (environments): one to build the package (Developer) and one to open and use it (Production). While you could go without a test room (Test org), it's like skipping a dress rehearsal - it's always safer to test everything before the final show. By following these best practices and using a test environment, you can make any adjustments needed before you bring your app to the main stage (Production org). It's like double-checking that everything in your DIY furniture set fits perfectly before you deliver it to the customer.

Packaging and deploying Calculated Insights:
You package calculated insights by first creating a new package in Package Manager within your development org. In the package, you specifically add components of the type "Calculated Insight Object Definition" to include your insight. After adding the necessary insights to the package, you assign it a version name and number, with optional password protection, and upload it. You receive a unique installation URL post-upload, which can then be used in a different org for deployment.

Installing packaged Calculated Insights:
Installation cannot happen in the same org where the package was created. In the target test org, you navigate to the Calculated Insights tab, select "Create from Package," follow the prompts, and save, which allows you to view the new insight.

Packaging S3 and API data streams:
Similar to calculated insights, for AWS S3 or API data streams, the process starts in the development org with the creation of a new package in Package Manager. The component type to be added is either "Data Stream Definition" for S3 or an equivalent type for API data streams. After completing the package with all desired data streams and details, the package is uploaded and made available through a URL or AppExchange for installation in a test org.

Creating data streams from packaged components:
In the test org, you create new data streams from installed packages by selecting the relevant packaged data streams and configuring the necessary details like authentication and schedule frequency. If the original data stream was mapped to data models, that mapping is retained and applied automatically upon creation in the new org.
By understanding this flow and the specific components mentioned, you can anticipate and answer questions that explore:
- The process of creating and adding specific component types to packages.
- The restrictions on where a package can be deployed (i.e., not in the same org it was created).
- The steps involved in installing and configuring the components from a package in a new environment.

What is a Data Kit?
A Data Kit is like a toolkit in Data Cloud that helps users handle and organize related settings and information (metadata) more easily. It's designed to let users see how different pieces of metadata are connected and decide if they want to include them in their project. The Data Kit allows you to bundle together different pieces of metadata - these could be data streams, data models, or other configurations within Data Cloud. What makes Data Kits special is that they can intelligently suggest connections between pieces of metadata that might not usually be considered related in the standard Salesforce environment, thanks to the additional context and capabilities available in Data Cloud. Essentially, it offers a way to package data configurations in a more customized and potentially insightful manner by leveraging relationships that might not be immediately obvious.

Data Cloud admins can pick and choose which parts of the Data Cloud setup (like data streams, which are channels of incoming data, and data models, which are the structures that organize this data) to put into a Data Kit. They do this before they wrap everything up into a package, which is like a container used to move these setups from one place to another within Salesforce environments. When you're preparing your metadata to create a package, you simply include the Data Kit and its contents. This process is designed to be worry-free: you won't accidentally leave out important metadata or include too much that isn't needed. Data Kits provide administrators with a user-friendly way to select and organize the specific metadata from Data Cloud that they want to include in their package. This gives them precise control over what is included in the package, ensuring it contains exactly what is needed without any excess.

What problems do Data Kits solve?
When administrators are packaging data for complex systems like Data Cloud, they often face a tricky situation. If they choose to package all the available metadata at once, they can end up with a lot of unnecessary copies of the same data, which clutters the system and makes it hard to manage. On the other hand, if they try to be selective but miss out on important pieces that are connected, some parts of the system might not work because they don't have all the data they need.

How do Data Kits solve this problem?
- Selective Inclusion: They allow administrators to selectively pick the precise pieces of metadata needed for a specific purpose. This avoids the clutter of duplicates because you're not just packaging everything available.
- Relationship Mapping: They automatically detect and recommend metadata that is related, even if those relationships aren't immediately obvious. This means administrators don't have to worry about missing interconnected pieces that are essential for the system's functionality.
- Customization: Administrators can customize the contents of their package to fit the unique requirements of their project.
  This customization ensures that the package contains exactly what's needed for the system to operate properly, nothing more and nothing less.
- Simplified Packaging: The user-friendly interface of Data Kits helps administrators avoid the complexity of manually assembling a package. It guides them in putting together a package that's both comprehensive and coherent.

Data Kit use cases:
1) Create and test a custom object data stream in a Data Cloud org. Deploy the predefined CRM data stream bundles along with data model mappings in a single Data Kit.
Purpose: The main goal is to build and check how a custom object (a unique data structure that you define for your organization) sends information through a data stream within a Data Cloud environment.
Value: This Data Kit allows you to do two important things at once:
- Deploy predefined CRM data streams: You can quickly roll out a set of ready-made data connections (known as data stream bundles) for Customer Relationship Management (CRM) systems. These are like standard data pipelines that many organizations need.
- Include data model mappings: Along with these data streams, you can also package the "blueprints" (data model mappings) that show how different pieces of data relate to each other.
Combining both of these steps into a single Data Kit means you can move faster because you're deploying a complete data system all at once. Instead of setting up data connections and then figuring out how they match to your data models in separate, time-consuming steps, you do it together. This makes the whole process more efficient and ensures that your data systems are ready to go with everything they need to work correctly from the start.

What challenges does this use case solve for?
- Complex Setup: Without a Data Kit, setting up and testing a custom data stream could be a complex and error-prone process, requiring multiple steps and configurations.
- Time Consumption: Manually deploying CRM data streams and then separately mapping data models would consume a lot of time, slowing down the data integration process.
- Risk of Errors: Doing these tasks separately increases the chance of making mistakes, such as mismatched data models, which could lead to data inaccuracies or integration issues.
- Deployment Delays: If you're deploying to multiple environments or need to repeat the process, doing everything step-by-step for each instance can lead to significant delays.
By using a Data Kit to create and test custom object data streams along with deploying predefined CRM data stream bundles and their associated data model mappings, the administrator can:
- Streamline the deployment process, reducing the time and effort needed.
- Minimize the risk of errors by using predefined, tested components.
- Ensure consistency across different environments.
- Accelerate the time to production by simplifying the integration of complex data models and streams.

2) Optionally, data models are added to a Data Kit. Developers and admins don't need to manually recreate the data models between environments or select them individually in the Package Manager view.
In this use case, data models, which are essentially blueprints that define how data is organized and interconnected, can be optionally included in a Data Kit. This allows for a more streamlined and efficient process for developers and administrators. Here's a breakdown:
What's happening: Instead of having to manually build or select each data model one by one, developers and administrators can add them to a Data Kit.
This Data Kit can then be moved between different environments (like from testing to production) without the need to rebuild the models.
The value: Saves time and reduces the likelihood of human error since the models don't need to be recreated or manually selected each time. Ensures consistency because the same data models are used across all environments, which helps in maintaining data integrity and accuracy.
What problems does this solve?
- Labor Intensiveness: It eliminates the tedious and repetitive task of manually creating or choosing data models for each environment.
- Error Reduction: By automating the inclusion of data models, the risk of errors, such as omissions or incorrect selections, is significantly reduced.
- Consistency Maintenance: Ensures that the same data structure is maintained across various environments, which is crucial for the integrity of data and systems.
- Deployment Speed: Speeds up the process of moving from development to production environments since data models are readily included without additional steps.
In summary, by using Data Kits to include data models, developers and admins can more easily manage and deploy their data architecture across different stages of development, saving time and reducing errors while ensuring consistency throughout the system.

Creating and installing Data Kits, summary:
Data kits are used to transfer data streams and data models between Salesforce orgs and offer a more streamlined process within Data Cloud. Unlike standard packages, data kits allow for deployment within the same org they were created in, offering a simpler integration for CRM and Commerce Cloud data streams.
- Creating a Data Kit: The process is carried out within the Data Cloud interface. A video tutorial is typically available to demonstrate the creation and upload of a data stream package as a data kit.
- Installing a Data Kit: Installation is done in the test environment through the Data Cloud interface. The Package Install URL is used to initiate the process. After the data stream deployment, a new data stream is created from the Data Cloud UI. The process includes selecting the source as Salesforce CRM, choosing the specific data kit, and configuring data stream definitions. The resulting data stream retains the model and mappings from the dev environment.
- You can create data kits from data streams.
- Supported data streams: Only CRM and Commerce Cloud connector data streams are currently supported by data kits.

Tips and best practices:
- If installation issues occur, verify that you're not attempting to install a standard package in its org of origin. You also need to check the environment setup.
- Managed package versions are recommended for regular updates and upgrades.
- CRM and Commerce data kits can be repurposed within multiple orgs.
- S3 and API data streams packaged via standard packages are typically for one-time use in one org.
- Calculated insights require an established and mapped data model in the target org.
- When data model relationships change, they must be included in the updated package.
- Managed packages can only be created in Developer Edition orgs, and namespaces must be claimed in the Package Manager UI.

Using the Data Cloud configuration:
Knowing how to create and install data kits and packages is essential for efficient configuration and testing within the Data Cloud environment. By understanding these elements, you should be able to answer questions that focus on:
- The capabilities and limitations of data kits, particularly with CRM data streams.
- The correct approach for troubleshooting installation errors with standard packages.
- The types of data streams supported by data kits.
- The conditions required for deploying calculated insights and the significance of data models.

Entire process of creating and deploying a Data Kit:
Developer org:
1. Creates and maps CRM data streams
2. Creates a new Data Kit
3. Adds the CRM data stream and data model
4. Creates and uploads managed/unmanaged packages
Subscriber org:
5. Installs managed/unmanaged packages
6. Creates a CRM data stream from the Data Kit
7. Data models are mapped automatically
8. Test and validate data against a segment

Metadata in the SFDC context:
Business data includes the records directly corresponding to your company's business, such as an address, account, or product. Examples of business metadata include:
- Descriptions of Data: Information that explains what data is being stored, such as field labels and descriptions within a database or a Salesforce object that tell users what kind of information is held there.
- Data Lineage: Information about the source of data, how it flows through systems, and any transformations it undergoes. This is crucial for tracking the data back to its source for auditing and troubleshooting.
- Data Quality Rules: Rules or policies that define what constitutes good quality data within the business context.
- Business Glossary: A dictionary of business terms and definitions that provide a common language for stakeholders across the organization.
- Usage Patterns: Information on how often and in what context data is accessed or updated, which can be useful for understanding the importance of different data elements.
- Data Ownership: Information on who is responsible for managing and maintaining specific datasets.
- Regulatory Compliance Information: Details on how data is affected by various compliance and regulatory requirements.

Salesforce metadata refers to data about the configuration of the Salesforce environment. It includes the setup configurations that determine how Salesforce applications run, rather than the data within the applications themselves. Examples of Salesforce metadata include:
- Custom Object Definitions: Metadata about the custom objects you create, including fields, relationships, page layouts, and record types.
- Custom Field Definitions: The specifics of custom fields added to standard or custom objects, including field type, length, and validation rules.
- Page Layouts: Information about how fields, related lists, and other components are arranged on object record pages.
- Profiles and Permission Sets: Details about the security settings, which control user access to various functionalities within Salesforce.
- Workflow Rules: Configuration of automated workflows that define actions based on criteria within Salesforce.
- Process Builder Processes: The specifications for processes created using the Process Builder tool, which automates complex business processes.
- Apex Classes/Triggers: The code and metadata for custom Apex classes and triggers, which provide custom business logic.
- Visualforce Pages/Components: Markup and metadata for custom Visualforce pages and components that define user interfaces.
- Reports and Dashboards: Configuration information for custom reports and dashboard components.
- Lightning Components: Metadata for custom components created using the Lightning Component framework.
- Email Templates: Custom email templates, including their subject, body, and related merge fields.
- Roles and Role Hierarchy: Metadata that defines the roles within the organization and their hierarchy.
- Validation Rules: Rules that enforce data integrity based on specified criteria before records can be saved.
- Record Types: Metadata that allows the creation of different business processes, picklist values, and page layouts for different users.
- AppExchange Packages: Metadata components related to installed managed and unmanaged packages from Salesforce AppExchange.
- Custom Settings and Custom Metadata Types: Configuration data that applications can use, which are org-specific or can be packaged to be used in different orgs.
- Flow Definitions: Metadata for Flows that automate complex business processes without writing Apex code.
- Quick Actions: Configurations for quick actions that users can take on records.
- Translations: Metadata for language translations used within the org for different labels and messages.
Salesforce metadata is crucial for Salesforce org migration, version control, and deployment processes. It enables administrators and developers to maintain consistent configurations and customization across multiple Salesforce environments.

Where is the Metadata API used in SFDC Data Cloud?
- AWS Data Streams: The API could be used to define and manage the structure and configuration of the data streams from AWS, ensuring they are correctly set up to flow into Salesforce.
- Ingestion API Data Streams: For streams created through the Ingestion API, the Metadata API can help in defining the schema or structure of the data as it's ingested into Salesforce, automating the creation of fields and objects that correspond to the ingested data.
- Mobile and Web Data Streams: Mobile and web streams involve tracking and managing customer interactions across mobile and web channels. The Metadata API could be used to set up the Salesforce schema that will receive this data, such as event logs, user actions, etc.
- Data Lakes: When Salesforce interacts with data lakes, the Metadata API might be used to manage the configurations that define how Salesforce connects to the data lake and how that data is structured and stored in Salesforce.
- Data Models: Data models are essentially the blueprints for how data is organized within Salesforce. The Metadata API would be utilized to create, update, and deploy these models across different Salesforce environments.

Pieces of Data Cloud Analytics:
- Data Lake Objects (DLOs): Think of a data lake as a big, vast pool of water where every drop is a piece of data. Data Lake Objects are like containers or buckets in this pool that hold data in its raw form, which can be structured (like Excel files), semi-structured (like JSON files), or unstructured (like emails). These are important because they let businesses store massive amounts of data without having to organize it first, making it a flexible option for data storage.
- Data Model Objects (DMOs): These are more refined. If DLOs are like the water in the lake, DMOs are like water bottles that have been filled and labeled for easy use. They organize the raw data into a structured form that's easier to understand and work with, such as customer profiles, sales orders, or engagement metrics. This helps businesses make sense of their data and use it effectively.
- Calculated Insights (CIs): Now that we have our data nicely packaged, Calculated Insights are like the nutritional labels on the water bottles - they give us valuable information based on analyzing the data. CIs are the results you get after crunching numbers and running analyses, like finding out which customers are most likely to buy a new product or how many products were sold last month.
- Salesforce Data Cloud: This is the platform where all of this happens. It's like the water company that manages the lake, the bottling process, and the delivery of water bottles with labels. Data Cloud connects various data sources and structures them so that they can be used across different Salesforce tools.
- Analytical Tools (Tableau, CRM Analytics, Marketing Intelligence): These tools are like the water testing kits or filtration systems that help ensure the water (data) is clean, safe, and in the form you need. They take the organized data from DMOs and turn it into visual reports, dashboards, or actionable insights. Tableau, for instance, can create interactive graphs and charts that make it easier to understand trends and patterns.
- JDBC Driver: JDBC (Java Database Connectivity) is like a universal tap that lets you get water (data) from the Data Cloud and use it with different tools or applications. It's a way for other, non-Salesforce tools that a business might already be using to access and analyze the data stored in Salesforce Data Cloud.

Customer value: By using Salesforce Data Cloud and these tools, businesses can:
- Store large amounts of data affordably and flexibly (DLO).
- Organize their data in a way that makes it easier to use (DMO).
- Gain insights that help make informed decisions to grow their business (CI).
- Use their favorite tools to work with this data, whether it's Salesforce's own analytics tools or others they are accustomed to (via JDBC).

Reports and Configuration Objects:
Several configuration objects are supported by Data Cloud - you can use CRM Reports and Dashboards on top of these. Lightning Report Builder is a powerful and intuitive tool for analyzing your Salesforce data: group, filter, and summarize records to answer business questions. The following objects are currently supported in Lightning Report Builder:
- Data Stream
- Segment
- Activation Target
- Identity Resolution
Lightning charts are the actual data visualizations and charts that you make in Lightning Report Builder. This is basically the front end that shows you the insights that you need (from Data Cloud).

How does Lightning Report Builder tie into Data Cloud? What's the value here on an industry level?

Financial Services:
- Lightning Report Builder & Data Cloud: Financial advisors can use the Report Builder to visualize and monitor key events like marriages or retirements captured in Data Cloud. They can see trends in when these events typically occur and the associated financial products that customers inquire about, helping to tailor financial advice or offers.
- Data Stream: Real-time spending patterns flow into Data Cloud, allowing banks to quickly spot unusual card activity and alert customers for fraud prevention.
- Segment: Clients can be segmented based on their life stages, investment behavior, or saving patterns.
- Activation Target: Specific financial products can be marketed to segments likely to be interested, like retirement plans for customers nearing retirement age.
- Identity Resolution: Ensures the correct linking of a customer's accounts and activities, enabling a unified view for personalized service.

Healthcare and Life Sciences:
- Lightning Report Builder & Data Cloud: The Report Builder can help healthcare providers visualize a patient's comprehensive health score, which is a summary of data from various health devices stored in Data Cloud.
- Data Stream: Continuous data from health monitoring devices is integrated into patient profiles.
- Segment: Patients with similar health conditions or care requirements can be grouped.
- Activation Target: Educational campaigns or preventive care information can be sent to specific patient segments.
- Identity Resolution: With accurate patient profiles, doctors can get up to speed quickly on patient history.

Retail and Consumer Goods:
- Lightning Report Builder & Data Cloud: Retailers can use the Report Builder to identify patterns such as frequent cart abandonment and create strategies to convert these into sales.
- Data Stream: Live data about shopping behaviors and preferences feeds into customer profiles.
- Segment: Customers can be segmented into groups such as frequent buyers, seasonal shoppers, or those with high cart abandonment rates.
- Activation Target: Targeted ads or reminder emails can be sent to customers who left items in their cart.
- Identity Resolution: Helps create a single customer view, ensuring that communications are consistent and relevant.

Media and Communications:
- Lightning Report Builder & Data Cloud: Service agents can access reports showing a customer's history of interactions, purchases, and content preferences, helping them make informed recommendations for new products or upgrades.
- Data Stream: A stream of data including subscription details, viewing habits, and service interactions.
- Segment: Viewers can be segmented by genre preference, subscription tier, or viewing platform.
- Activation Target: Personalized content recommendations can be sent to viewers based on their segment.
- Identity Resolution: A unified customer view allows for tailored support and upsell opportunities.

Customer value in the Data Cloud context:
Across industries, the integration of Data Cloud with analytical tools like Lightning Report Builder enables organizations to:
- Gain deep insights from their unified customer data.
- Create detailed and personalized customer experiences.
- Make strategic decisions based on data-driven insights.
- Respond quickly to market and customer behavior changes.
- Enhance customer engagement and loyalty with targeted actions.
These capabilities transform raw data into a strategic asset, enabling businesses not only to understand their customers better but also to anticipate needs and optimize interactions at every touchpoint.

What is the journey of the data between data ingestion (via Data Cloud) and data visualization (and insights for the customer)? What is the value at each step?
1. Data Ingestion and Integration: Data Cloud is designed to pull in data from a multitude of sources, like Amazon S3, MuleSoft, Google Cloud Storage, and Salesforce-specific streams (Commerce and CRM). This data can come in various formats and structures, which makes it challenging to unify.
   Value for the customer: Businesses no longer need to struggle with connecting disparate data sources or dealing with incompatible data formats.
   Data Cloud handles the heavy lifting, saving time and resources.
2. Data Mapping and Standardization: Once the data is ingested, Data Cloud uses data model blueprints to map the diverse data to a standardized format. This helps in identifying individuals, their transactions, and how they engage across channels.
   Value for the customer: Standardization makes it possible to compare apples to apples. When data from various sources conforms to a single model, insights derived from that data are more accurate and actionable.
3. Calculated Insights and Identity Resolution: Data Cloud provides calculated insights by analyzing the standardized data, and with identity resolution, it ensures that customer profiles are not duplicated or fragmented across datasets.
   Value for the customer: This means that a company can trust the insights they're seeing because they're based on a holistic and accurate view of customer behavior. It's the difference between guessing customer needs based on incomplete data versus knowing what they need because all of their data tells a unified story.
4. Accessibility through Lightning Report Builder: The data is now ripe for analysis. With Lightning Report Builder, users can create insightful reports and charts that tap into this rich, standardized, and unified data.
   Value for the customer: The insights gained here drive strategic decision-making. For example, a retailer might use these insights to craft a targeted marketing campaign, or a healthcare provider might use them to predict patient needs.
In essence, Data Cloud acts as the data processor and unifier before Lightning Report Builder comes into play as the analysis tool. By the time the data hits the Report Builder, it's clean, organized, and ready for visualization and interpretation, which enables companies to extract valuable customer insights quickly and accurately. This capability to swiftly turn massive and varied data streams into actionable business intelligence is where Data Cloud proves its worth.

Anything else worth knowing about how Data Cloud ties into Lightning Report Builder?
To create a report on a Data Cloud object, you need to configure a custom report type. Once the custom report type is created, it becomes available in the Lightning Report Builder.

Workflow Orchestration: What is it and why is it useful?
Imagine you have a series of tasks that you need to get done in a specific order. In Data Cloud, "workflow orchestration" is like a smart assistant that helps you organize and run these tasks efficiently. Automated workflows can be built in Salesforce Flow Builder.
How it works:
- Automated Batch Uploads: You can set the system to automatically pull in large amounts of data at once (batch upload) from various sources like Salesforce CRM or external data like Amazon S3.
- Automated Follow-Up Actions: After the data is uploaded, the system can automatically carry out a series of follow-up actions. For instance, it can start analyzing the data to update customer profiles or segments without any manual intervention.
- Trigger-Based Tasks: The completion of one task can automatically trigger the start of the next task in the workflow. For example, once customer data is updated, it could automatically trigger a marketing campaign specifically targeted to customers based on their new data.

What specific OBJECTS in SFDC Flow Builder can you use to build a workflow?
(Keep in mind that these are referred to as OBJECTS, not actions, nor anything else.)
1) Data Ingestion for CRM data stream
   a) This takes data from Salesforce CRM (like customer contact details, interaction history, sales data, etc.) and brings it into the Data Cloud environment. The value for the customer is that all their CRM data is now ready to be analyzed and leveraged for deeper insights into their customer base.
2) Data Ingestion for S3 data stream
   a) Here, the system imports data from an Amazon S3 bucket, which could include a variety of datasets like website logs, customer feedback, or transaction records. The customer benefit is having a more comprehensive data set by combining external data with CRM data to inform better decision-making.
3) Publish Calculated Insight
   a) This processes the ingested data to compute valuable metrics or KPIs (Calculated Insights) such as customer lifetime value, churn rate, or conversion rates. The key advantage to customers is gaining actionable insights that can inform strategies to improve business performance.
4) Trigger Identity Resolution Job
   a) Identity resolution takes different pieces of customer data (which could be fragmented across multiple sources) and matches them to unique customer profiles. For customers, this means a cleaner, more accurate view of each customer, which is critical for personalized marketing and customer service.
5) Publish Segments, materialize segments, and activate

What's a real-world example of how this works?
Situation: A retail company wants to launch a targeted marketing campaign for a new line of winter clothing. They have a customer base spread across different regions, with varying preferences and purchasing history.
Data Cloud workflow orchestration example:
- Data Ingestion: The company schedules a workflow in Data Cloud to automatically ingest customer data nightly from Salesforce CRM and their online shopping portal hosted on Amazon S3.
- Chained Processes: As soon as the data ingestion process completes, the workflow automatically triggers the next step: updating customer segments. The system categorizes customers into segments based on recent purchases, browsing history, and geographical location.
- Trigger-Based Marketing: Upon successful segmentation, the workflow immediately triggers a personalized email campaign. Customers in colder regions receive promotions for heavy coats and winter gear, while those in warmer areas see lighter winter wear.
- Rewarding Loyalty: Suppose the system notices a delay in shipping for certain orders. It automatically triggers a process to award those affected customers with loyalty points and sends an apology email with a discount on their next purchase.
- Feedback Loop: Customer responses and engagement from the email campaign are fed back into the system. The workflow is set to re-segment customers based on their interaction with the campaign, refining the target audience for future promotions.
In this way, the entire cycle from data ingestion to customer engagement is automated. The workflow not only saves time but also ensures that marketing efforts are dynamically adjusted to customer behavior and external factors, such as delivery delays, enhancing the customer experience and potentially increasing sales with timely and relevant offers.

Customer value:
- Speed: By running tasks one after another without delays, everything happens faster. This means you can react to customer needs or market changes quickly.
- Accuracy: Each step is triggered only after the previous one is correctly finished. This reduces the chance of errors and ensures that any actions taken (like sending out marketing emails) are based on the most up-to-date information.
- Efficiency: It avoids unnecessary work. If something doesn't need to happen, the system won't do it. This saves resources and keeps the focus on what's important.

Use cases and benefits:
- Issue Recovery: If there's a technical issue and customers are affected, the system can automatically add loyalty points to their accounts and notify them by email. This helps maintain good customer relationships.
- Timely Marketing: As soon as new data comes in (like recent purchases), the system can update customer profiles and immediately start a marketing campaign for those customers. This means customers get offers that are relevant to them without delay.
- Streamlined Data Handling: Only after new data is fully brought into the system do the tasks to organize and use that data (like figuring out who to target for a campaign) begin. This ensures marketing or sales efforts are based on the latest customer information.
In simple terms, workflow orchestration in Data Cloud helps businesses be more responsive, efficient, and accurate by automating and connecting the flow of tasks based on when they need to happen, rather than just doing them at random or scheduled times. This means happier customers and less wasted effort for the business.

Configurability around Calculated Insights and Identity Resolution:
Can a customer define the KPIs that are part of Calculated Insights, or are they set in stone?
Customers have a significant level of control over how these Key Performance Indicators (KPIs) or Calculated Insights are defined and computed within systems like Salesforce Data Cloud. Here's how:
- Custom Definitions: Customers can typically define their own KPIs based on the specific metrics that matter to their business. They can set the parameters and formulas that will be used to calculate these values.
- Pre-built Options: Platforms often offer a range of pre-built, standard KPIs that are widely used across the industry. Customers can choose to use these or customize them according to their needs.
- Integration with Analytics Tools: By integrating with analytics tools like Tableau or Salesforce's own analytics, customers can further manipulate and analyze the data, creating even more nuanced KPIs.
- Adjustment and Iteration: Customers can adjust the definitions of their KPIs over time as their business needs change. They can iterate on their Calculated Insights, refining them for better accuracy or more relevance.
- Access to Raw Data: Since customers have access to the raw data, they can use it to build out their own insights if the existing calculated insights do not meet their needs.
In essence, while there may be some constraints based on the capabilities of the platform, customers generally have the flexibility to tailor KPIs to suit their unique business processes and objectives. It's a collaborative process between the capabilities of the tool and the strategic goals of the customer.

Can a customer customize identity resolution and data ingestion as well?
Yes, the principles of control and customization also apply to identity resolution and data ingestion within platforms like Salesforce Data Cloud.
Identity Resolution:
- Custom Matching Rules: Customers can often configure the rules that determine how different pieces of data are matched to identities.
  They decide which fields (like email, phone number, or customer ID) should be considered for matching and how much weight each should carry.
- Manual Review and Overrides: Some systems allow manual intervention to review and override the automatic matching if necessary, ensuring that the identity resolution aligns with the customer's understanding of their data.
- Integration Rules: Customers can set up how various data sources integrate and what data takes precedence in case of a conflict, maintaining the integrity of the customer profiles.
Data Ingestion:
- Data Sources and Formats: Customers have the choice to ingest data from a variety of sources, such as CRMs, ERPs, databases, and even flat files. They can often…
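To make the identity resolution part of this concrete: matching rules are configured in the Identity Resolution setup UI rather than written as code. Purely as a conceptual illustration (every key, value, and method name below is an invented placeholder, not a real Data Cloud API), a ruleset along the lines described above — matching on name and a normalized email, with a reconciliation preference — might be pictured like this:

```python
# Conceptual sketch only - identity resolution rulesets are configured in the
# Data Cloud UI, not via this structure. All names are invented placeholders
# meant to illustrate "which fields are matched and how".
example_ruleset = {
    "ruleset_name": "Unified Individual - Example",
    "match_rules": [
        {
            "rule": "Name + Email",
            "criteria": [
                {"field": "FirstName", "method": "fuzzy"},            # tolerates "Jon" vs "John"
                {"field": "LastName",  "method": "exact"},
                {"field": "Email",     "method": "exact_normalized"},  # case/format-insensitive
            ],
        }
    ],
    # reconciliation: which source value wins when unified profiles conflict
    "default_reconciliation_rule": "last_updated",
}
```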
