Salesforce Certified MuleSoft Integration Architect I Practice Exam PDF

Document Details

Tags

MuleSoft Integration Architect Salesforce certification

Summary

This is a practice exam for the Salesforce Certified MuleSoft Integration Architect I certification. It contains 60 questions with an answer key, covering topics that range from initiating integration solutions on Anypoint Platform and designing for the runtime plane, through persistence, reliability, performance, and security requirements, to applying DevOps practices and operating integration solutions.

Full Transcript

Salesforce Certified MuleSoft Integration Architect I Practice Exam

This practice exam helps you prepare for the Salesforce Certified MuleSoft Integration Architect certification exam. It has the same format, length, duration, and type of questions as the exam.

Number of questions: 60
Passing score: 70% (42 questions)
Duration: 120 minutes

Table of Contents

Salesforce Certified MuleSoft Integration Architect I Practice Exam
SECTION 1: Initiating integration solutions on the Anypoint Platform
SECTION 2: Designing for the runtime plane technology architecture
SECTION 3: Designing architecture using integration paradigms
SECTION 4: Designing and developing Mule applications
SECTION 5: Designing automated tests for Mule applications
SECTION 6: Designing integration solutions to meet persistence requirements
SECTION 7: Designing integration solutions to meet reliability requirements
SECTION 8: Designing integration solutions to meet performance requirements
SECTION 9: Designing integration solutions to meet security requirements
SECTION 10: Applying DevOps practices and operating integration solutions
Scoring Table

SECTION 1: Initiating integration solutions on the Anypoint Platform

1. An organization has used a Center for Enablement (C4E) to help teach its various business groups best practices for building a large and mature application network. What is a key performance indicator (KPI) to measure the success of the C4E in teaching the organization's various business groups how to build an application network?
A. The number of each business group's APIs that connect with C4E-documented APIs
B. The number of end user or consumer requests per day to C4E-deployed API instances
C. The number of each C4E-managed business group's Anypoint Platform user requests to the CloudHub Shared Load Balancer service
D. The number of C4E-documented code snippets used by Mule apps deployed by the C4E to each environment in each network region

2. An internet company is building a new search engine that indexes sites on the internet and ranks them according to various signals. The management team wants various features added to the site. There is a team of software developers eager to start on the functional requirements received from the management team. Which two traditional architectural requirements should the integration architect ensure are in place to support the new search engine? (Choose two.)
A. The system can handle increased load as more people utilize the engine
B. New features can be added to the system with ease
C. Relevant search results are returned for a query
D. Search results are returned in the language chosen by the user
E. Search result listings link to the correct website
3. An API is being implemented using the components of Anypoint Platform. The API implementation must be managed and governed (by applying API policies) on Anypoint Platform. What must be done before the API implementation can be governed by Anypoint Platform?
A. The API must be published to Anypoint Exchange, and a corresponding API Instance ID must be obtained from API Manager to be used in the API implementation
B. A RAML definition of the API must be created in API Designer so the API can then be published to Anypoint Exchange
C. The OAS definitions in the Design Center project of the API and the API implementation's corresponding Mule project in Anypoint Studio must be synchronized
D. The API must be published to the organization's public portal so potential developers and API consumers both inside and outside of the organization can interact with the API

4. Additional nodes are being added to an existing customer-hosted Mule runtime cluster to improve performance. Mule applications deployed to this cluster are invoked by API clients through a load balancer. What is also required to carry out this change?
A. External monitoring tools or log aggregators must be configured to recognize the new nodes
B. A new load balancer entry must be configured to allow traffic to the new nodes
C. New firewall rules must be configured to accommodate communication between API clients and the new nodes
D. API impleme...

5. A Mule application is deployed to an existing Runtime Fabric (RTF) cluster and must access the data saved in Object Store v2 by a CloudHub application. Which steps should be followed to achieve the requirement and enable shared Object Store access across these two applications?
A. Obtain the Client ID and Client Secret from the CloudHub App Object Store.
   Obtain the access token from the /oauth2/token endpoint.
   Invoke the Object Store API from the application deployed in RTF, including the Bearer token.
B. Obtain the Client ID and Client Secret from the Business Group.
   Obtain the access token from the /object-store/token endpoint.
   Invoke the Object Store API from the application deployed in RTF, including the Bearer token.
C. Obtain the Access Token from the CloudHub App Object Store.
   Obtain the Client ID and Client Secret from the /object-store/client credentials endpoint.
   Invoke the Object Store API, including the Bearer token.
D. Obtain the Access Token from the /oauth2/token endpoint.
   Invoke the Access Management API to approve the read access.
   Invoke the Object Store API from the application in CloudHub, including the Bearer token.
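Option A in question 5 boils down to a standard OAuth 2.0 client-credentials exchange followed by a REST call that carries the resulting Bearer token. The Java sketch below shows only that shape; the token URL, the Object Store v2 base URL, the store, and the key are assumed placeholders to be replaced with the values for your own Anypoint organization and environment, and a real client would parse the JSON with a proper library rather than a regular expression.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class ObjectStoreV2Client {
        public static void main(String[] args) throws Exception {
            HttpClient http = HttpClient.newHttpClient();

            // Step 1: exchange the application's Client ID and Client Secret for an
            // access token. The token URL is supplied here as a placeholder for the
            // /oauth2/token endpoint referenced in the question.
            String tokenUrl = System.getenv("OS_TOKEN_URL");
            String form = "grant_type=client_credentials"
                    + "&client_id=" + System.getenv("OS_CLIENT_ID")
                    + "&client_secret=" + System.getenv("OS_CLIENT_SECRET");
            HttpRequest tokenRequest = HttpRequest.newBuilder(URI.create(tokenUrl))
                    .header("Content-Type", "application/x-www-form-urlencoded")
                    .POST(HttpRequest.BodyPublishers.ofString(form))
                    .build();
            String tokenJson = http.send(tokenRequest, HttpResponse.BodyHandlers.ofString()).body();
            // Naive extraction kept short for the sketch; use a JSON library in real code.
            String accessToken = tokenJson.replaceAll(".*\"access_token\"\\s*:\\s*\"([^\"]+)\".*", "$1");

            // Step 2: call the Object Store v2 REST API with the Bearer token. The base
            // URL (regional host, organization, environment, store) comes from the
            // Object Store v2 API reference and is supplied here as a placeholder.
            String storeUrl = System.getenv("OS_V2_STORE_URL") + "/keys/customer-123";
            HttpRequest getKey = HttpRequest.newBuilder(URI.create(storeUrl))
                    .header("Authorization", "Bearer " + accessToken)
                    .GET()
                    .build();
            System.out.println(http.send(getKey, HttpResponse.BodyHandlers.ofString()).body());
        }
    }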
SECTION 2: Designing for the runtime plane technology architecture

6. A Mule application is deployed to a cluster of two customer-hosted Mule runtimes. Currently, the node named Alex is the primary node and the node named Lee is the secondary node. The Mule application has a flow that polls a directory on a file system for new files. The primary node Alex fails for an hour and then is restarted. After the Alex node completely restarts, from which node are the files polled, and which node is now the primary node for the cluster?
A. Files are polled from the Lee node. Lee is now the primary node.
B. Files are polled from the Lee node. Alex is now the primary node.
C. Files are polled from the Alex node. Alex is now the primary node.
D. Files are polled from the Alex node. Lee is now the primary node.

7. ... Runtime Manager as the App URL. Requests are sent by external web clients over the public internet to the Mule application's App URL. Each of these requests is routed to the HTTPS Listener event source of the running Mule application. Later, the DevOps team edits some properties of this running Mule application in Runtime Manager. Immediately after the new property values are applied in Runtime Manager, how is the current Mule application deployment affected, and how will future web client requests to the Mule application be handled?
A. CloudHub 1.0 will redeploy the Mule application to a new CloudHub 1.0 worker. New web client requests are routed to the old CloudHub 1.0 worker until the new CloudHub 1.0 worker is available.
B. CloudHub 1.0 will redeploy the Mule application to a new CloudHub 1.0 worker. New web client requests will return an error until the new CloudHub 1.0 worker is available.
C. CloudHub 1.0 will redeploy the Mule application to the old CloudHub 1.0 worker. New web client requests are routed to the old CloudHub 1.0 worker both before and after the Mule application is redeployed.
D. CloudHub 1.0 will redeploy the Mule application to the old CloudHub 1.0 worker. New web client requests will return an error until the Mule application is redeployed to the old CloudHub 1.0 worker.

8. Refer to the exhibits. A company has several applications deployed to a CloudHub VPC in the Asia-Pacific (Sydney) region. The VPC is connected to the corporate network with a transit gateway. The development team plans to create a solution that provides a REST API using a custom domain, available on the public internet. The solution will use a custom connector that requires Microsoft Windows Server 2022. Which deployment model minimizes disruption to existing integrations while meeting the new requirements?
A. Provision Microsoft Windows 2022 servers, and install customer-hosted standalone Mule runtimes.
   Build a Mule application with the custom connector, and deploy it to the customer-hosted runtimes.
   Deploy the public-facing API application to the CloudHub VPC, and configure a CloudHub Dedicated Load Balancer.
B. Provision six Windows Server 2022 virtual machines, and install Runtime Fabric for VMs.
   Build a Mule application with the custom connector, and deploy it to Runtime Fabric.
   Deploy a public-facing API application to the CloudHub VPC, and configure a CloudHub Dedicated Load Balancer.
C. Provision an Azure Kubernetes Service cluster, and install Runtime Fabric for Self-Managed Kubernetes.
   Build a Mule application with the custom connector, and deploy it to Runtime Fabric.
   Deploy a public-facing API application to the CloudHub VPC, and configure a CloudHub Dedicated Load Balancer.
D. Build a Mule application with the custom connector.
   Deploy the application to CloudHub using Microsoft Windows 2022 workers.
   Deploy the public-facing API application to the CloudHub VPC, and configure a CloudHub Dedicated Load Balancer.

9. An organization is sizing an Anypoint Virtual Private Cloud (VPC) to extend its internal network to CloudHub 1.0. For this sizing calculation, the organization assumes three production-type environments will each support up to 150 Mule application deployments. Each Mule application deployment is expected to be configured with two CloudHub 1.0 workers and will use the zero-downtime feature in CloudHub 1.0. This is expected to result in, at most, several Mule application deployments per hour. What is the minimum number of IP addresses that should be configured for this VPC, resulting in the smallest usable range of private IP addresses to support the deployment and zero-downtime of these 150 Mule applications (not accounting for any future Mule applications)?
A. 10.0.0.0/21 (2048 IPs)
B. 10.0.0.0/22 (1024 IPs)
C. 10.0.0.0/23 (512 IPs)
D. 10.0.0.0/24 (256 IPs)
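One hedged way to arrive at the /21 answer in question 9, assuming the CloudHub 1.0 guidance that a VPC should offer roughly twice as many private IP addresses as the maximum number of workers so that zero-downtime redeployments can briefly run old and new workers side by side:

    3 environments × 150 applications × 2 workers  = 900 worker IP addresses
    900 × 2 (zero-downtime redeployment headroom)  = 1,800 IP addresses
    1,800 exceeds a /22 (1,024 IPs), so the smallest range that fits is a /21 (2,048 IPs)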
10. An organization has previously provisioned its own AWS virtual private cloud (VPC) that contains several AWS instances. The organization now needs to use CloudHub 1.0 to host a Mule application that will implement a REST API. Once deployed to CloudHub 1.0, this Mule application must be able to communicate securely with the customer-provisioned AWS VPC resources within the same region, without being interceptable on the public internet. Which Anypoint Platform features should be used to meet these network communication requirements between CloudHub 1.0 and the existing customer-provisioned AWS VPC?
A. Add a MuleSoft-hosted (CloudHub 1.0) Anypoint VPC configured with VPC peering to the range of IP addresses located in the customer-provisioned AWS VPC
B. Add default API Allowlist policies to API Manager that automatically secure traffic from the range of IP addresses located in the customer-provisioned AWS VPC to access the Mule application
C. Configure a MuleSoft-hosted (CloudHub 1.0) Dedicated Load Balancer with mapping rules that allow secure traffic from the range of IP addresses located in the customer-provisioned AWS VPC to access the Mule application
D. Configure an external identity provider (IdP) in Anypoint Platform with certificates from an AWS Transit Gateway for the customer-hosted AWS VPC, where the certificates allow the range of IP addresses located in the customer-provisioned AWS VPC

11. A company uses CloudHub for API application deployment so that experience APIs and/or API proxies are publicly exposed using custom mTLS. The company's InfoSec team requires isolated, restricted access that is limited internally to system APIs deployed to CloudHub and the company's data center. What are the minimum infrastructure, component, connection, and software requirements to meet the company's goal and the InfoSec team's requirements?
A. Virtual Private Cloud.
   Two Dedicated Load Balancers for access to public APIs and internal APIs, using IP Allowlist rules.
   Two-way custom TLS.
   VPN IPSec tunneling to connect the VPC to the company's on-premises data center.
B. Virtual Private Cloud.
   One Shared Load Balancer and one Dedicated Load Balancer for access to public APIs and internal APIs, respectively, using IP Allowlist rules.
   Two-way custom TLS.
   VPN IPSec tunneling to connect the VPC to the company's on-premises data center.
C. Virtual Private Cloud.
   One Shared Load Balancer and one Dedicated Load Balancer for access to public APIs and internal APIs, respectively, using IP Allowlist rules.
   One-way custom TLS.
   VPN IPSec tunneling to connect the VPC to the company's on-premises data center.
D. Virtual Private Cloud.
   Two Shared Load Balancers for access to public APIs and internal APIs, using IP Allowlist rules.
   Two-way custom TLS.
   VPN IPSec tunneling to connect the VPC to the company's on-premises data center.

12. Refer to the exhibit. An organization deploys multiple Mule applications to the same customer-hosted Mule runtime. Many of these Mule applications must expose an HTTPS endpoint on the same port, using a server-side certificate that rotates often. When deploying these Mule applications, what is the most effective way to package the HTTP Listener configuration and package or store the server-side certificate in order to minimize the disruption caused by periodic certificate rotation?
A. Package the HTTP Listener configuration in a Mule domain project, referencing it from all Mule applications that must expose an HTTPS endpoint.
   Store the server-side certificate in a shared file system location in the Mule runtime's classpath, outside of the Mule domain project or any Mule application.
B. Package the HTTP Listener configuration in a Mule domain project, referencing it from all Mule applications that must expose an HTTPS endpoint.
   Package the server-side certificate in the same Mule domain project.
C. Package the HTTP Listener configuration in all Mule applications that must expose an HTTPS endpoint.
   Package the server-side certificate in a new Mule domain project.
D. Package the HTTP Listener configuration in a Mule domain project, referencing it from all Mule applications that must expose an HTTPS endpoint.
   Package the server-side certificate in all Mule applications that must expose an HTTPS endpoint.
13. A developer at an insurance company has developed a Mule application that has two modules as dependencies for two different operations. These two modules use the same library, Joda-Time, to return a DateTimeFormatter class. One of the modules uses Joda-Time version 2.9.5 and the other one uses Joda-Time version 2.1.1. The DateTimeFormatter class lives in the same package in both versions, but the different implementations of each version make the classes incompatible.

First module:

    public DateTimeFormatter getCreateTimestampDateTimeFormatter() {
        // Here DateTimeFormatter is from joda-time 2.9.5
        return DateTimeFormat.forPattern("yyyyMMdd");
    }

Second module:

    public DateTimeFormatter getUpdateTimestampDateTimeFormatter() {
        // Here DateTimeFormatter is from joda-time 2.1.1
        return DateTimeFormat.forPattern("yyyyMMddHH24mm");
    }

Given the details of these two modules, what will happen when the Mule application is deployed?
A. It will only load one of the versions; the module that needs the unloaded version of the package will behave differently and be prone to errors such as ClassCastException or NoSuchMethodException
B. It will load both module versions and, when each individual operation is executed, it will not run into any errors
C. The deployment will fail because the two modules try to return the same class
D. It will only load the latest version of Joda-Time; older versions of Joda-Time applications will throw a ClassLoaderException error

14. A Mule application is designed to periodically synchronize 1 million records from a source system to a SaaS target system using a Batch Job scope. The current application design includes using the default Batch Job scope to process records while managing high throughput requirements. However, what actually happens is the application takes too long to process records even with the application deployed to a customer-hosted cluster of two Mule runtime 4.3 instances. What must occur to achieve the required high throughput, considering the Mule runtimes' CPU and memory requirements are met with no expected contentions from other applications running under the same cluster?
A. Change the application design and increase the Batch Job scope concurrency and the records block size
B. Modify the cluster Mule runtimes' UBER thread pool strategy with a high concurrency in the conf/scheduler-pools.conf files
C. Modify the cluster Mule runtimes' concurrency by changing the memory allocation in the conf/wrapper.conf files
D. Scale the cluster Mule runtimes horizontally by adding a third instance needed to support the high rate of records processing
SECTION 3: Designing architecture using integration paradigms

15. A client system sends marketing-related data to a legacy system within the company data center. The Center for Enablement team has identified that this marketing data has no reuse by any other system. How should the APIs be designed most efficiently using API-led connectivity?
A. Create an Experience API, route the data to the System API, and insert the data in the legacy system
B. Create a System API, call the System API from the client application, and insert the data into the legacy system
C. Create a Process API, route the request to the System API, and insert the data in the legacy system
D. Create an Experience API to take the data from the client, forward the message to a Process API in the Common Data Model, and invoke a System API to insert the data into the legacy system

16. In an organization, there are multiple backend systems that contain customer-related data. There are multiple client systems that request the customer data from only one or more backend systems. How can the integration between the source and target systems be designed to maximize efficiency?
A. Create a single Experience API with one endpoint for all consumers.
   Receive the request, transform it into a Common Data Model, and then send it to the Process API.
   Have a single Process API that will route it to different System APIs using content-based routing.
B. Create a single Experience API and expose multiple endpoints.
   Have separate Process APIs to route the request to the different System APIs and send back the response.
C. Create multiple Experience APIs exposed to the different end users.
   Have separate Process APIs to route the request to the different System APIs and send back the response.
D. Create a single Experience API with one endpoint for all consumers.
   Receive the request and transform it into a Common Data Model.
   Have a single Process API that will route it to a single System API. The System API is designed to have multiple connections to multiple end systems.

17. Refer to the exhibit. A telecommunications company receives orders (bill payments) from customers who submit a simple HTML form (no JavaScript or WebAssembly). Currently the process is synchronous and the customer is notified after everything is complete. The requirement is that the customer is notified of payment (charging the customer's credit card) through the response to the browser, but the customer can also be notified when the order is applied to the customer's account at a later time. Due to an increase in customers, the system has been unable to handle the load and the solution has been experiencing performance and reliability issues. Which request point could be replaced with an event-driven API using a JMS queue to help mitigate the performance issues, increase the fault tolerance, and meet the requirements?
A. 5
B. 4
C. 1
D. 2

18. An external web UI application currently accepts occasional HTTP requests from client web browsers to change (insert, update, or delete) inventory pricing information in an inventory system's database. Each inventory pricing change must be transformed and then synchronized with multiple customer experience systems in near real-time (in under 10 seconds). New customer experience systems are expected to be added in the future. The database is used heavily and limits the number of SELECT queries that can be made to the database to 10 requests per hour per user. How can inventory pricing changes synchronize with the various customer experience systems in near real-time using an integration mechanism that is scalable, decoupled, reusable, and maintainable?
A. Add a trigger to the inventory-pricing database table so that for each change to the inventory pricing database, a stored procedure is called that makes a REST call to a Mule application.
   Write the Mule application to publish each Mule event as a message to an Anypoint MQ exchange.
   Write other Mule applications to subscribe to the Anypoint MQ exchange, transform each received message, and then update the Mule application's corresponding customer experience system(s).
B. Write a Mule application with a Database On Table Row event source configured for the inventory pricing database, with the watermark attribute set to an appropriate database column.
   In the same flow, use a Scatter-Gather to call each customer experience system's REST API with transformed inventory pricing records.
C. Write a Mule application with a Database On Table Row event source configured for the inventory pricing database, with the ID attribute set to an appropriate database column.
   In the same flow, use a Batch Job scope to publish transformed inventory pricing records to an Anypoint MQ queue.
   Write other Mule applications to subscribe to the Anypoint MQ queue, transform each received message, and then update the Mule application's corresponding customer experience system(s).
D. Replace the external web UI application with a Mule application to accept HTTP requests from client web browsers.
   In the same Mule application, use a Batch Job scope to test if the database request will succeed, aggregate pricing changes within a short time window, and then update both the inventory pricing database and each customer experience system using a Parallel For Each scope.
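Options B and C in question 18 rely on a watermark-driven polling source, and the 10-SELECT-per-hour limit in the scenario is exactly what works against polling. For readers unfamiliar with the watermark idea itself, here is a minimal plain-JDBC sketch of it; the JDBC URL, credentials, table, columns, and the publish() target are hypothetical placeholders, and this is not the Mule Database connector's On Table Row source.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class InventoryPricePoller {
        public static void main(String[] args) throws Exception {
            long watermark = loadWatermark(); // highest ID already processed, kept in durable storage

            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://db-host:5432/inventory", "app_user", "secret");
                 PreparedStatement ps = conn.prepareStatement(
                         "SELECT id, sku, price FROM inventory_pricing WHERE id > ? ORDER BY id")) {
                ps.setLong(1, watermark);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        // Transform the row and hand it to a broker (for example an Anypoint MQ
                        // exchange) so each consumer system can subscribe independently.
                        publish(rs.getString("sku"), rs.getBigDecimal("price"));
                        watermark = rs.getLong("id"); // advance only after a successful publish
                    }
                }
            }
            saveWatermark(watermark);
        }

        private static void publish(String sku, java.math.BigDecimal price) { /* placeholder */ }
        private static long loadWatermark() { return 0L; }
        private static void saveWatermark(long watermark) { /* placeholder */ }
    }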
19. A Mule application is running on a customer-hosted Mule runtime in an organization's network. The Mule application acts as a producer of asynchronous Mule events. Each Mule event must be broadcast to all interested external consumers outside the Mule application. The Mule events should be published in a way that is guaranteed in normal situations and also minimizes duplicate delivery in less-frequent failure scenarios. The organizational firewall is configured to only allow outbound traffic on ports 80 and 443. Some external event consumers are within the organizational network, while others are located outside the firewall. Which Anypoint Platform service facilitates publishing these Mule events to all external consumers while addressing the desired reliability goals?
A. Anypoint MQ
B. CloudHub VM queues
C. CloudHub Shared Load Balancer
D. Anypoint Exchange

20. Which statement is true about the network connections when a Mule application uses a JMS connector to interact with a JMS provider (message broker)?
A. The JMS connector supports both sending and receiving JMS messages over the protocol determined by the JMS provider
B. For the Mule application to receive JMS messages, the JMS provider initiates a network connection to the Mule application's JMS connector and then the JMS provider pushes messages along this connection
C. The Advanced Message Queuing Protocol (AMQP) can be used by the JMS connector to portably establish connections to various types of JMS providers
D. To complete sending a JMS message, the JMS connector must establish a network connection with the JMS message recipient
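As background for question 20, the sketch below shows a plain JMS client performing both a send and a receive. In both directions it is the client application that opens the network connection to the broker, over whatever wire protocol the provider's client library implements; ActiveMQ and its OpenWire URL are used here only as an example provider and are not part of the question.

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.Message;
    import javax.jms.MessageConsumer;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class JmsClientDemo {
        public static void main(String[] args) throws Exception {
            // The Mule JMS connector plays the same client role shown here: the
            // application, not the broker, initiates the connection.
            ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://broker-host:61616");
            Connection connection = factory.createConnection();
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("orders");

            // Sending: the client pushes the message to the broker over its own
            // connection; it never connects directly to the eventual recipient.
            MessageProducer producer = session.createProducer(queue);
            producer.send(session.createTextMessage("hello"));

            // Receiving: the client also initiates this connection and then polls or
            // registers a listener; the broker does not call back into the application.
            MessageConsumer consumer = session.createConsumer(queue);
            Message received = consumer.receive(5000);
            System.out.println("Received: " + received);

            connection.close();
        }
    }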
SECTION 4: Designing and developing Mule applications

21. An organization's release engineer wants to override secure properties in a CloudHub production environment. Properties can be updated in the Properties tab in Runtime Manager, but the password is not being hidden even after the application is restarted or redeployed. What could be the reason?
A. The secureProperties key in the mule-artifact.json file does not list the properties
B. Properties need to be prefixed with a secure keyword when entered in the Properties tab
C. Properties do not exist in the prod properties file
D. In a secure-prod.yaml file, properties are not marked secure

22. An external REST client periodically sends an array of records in a single POST request to a Mule application's API endpoint. The Mule application must validate each record of the request against a JSON schema before sending it to a downstream system in the same order that it was received in the array. Record processing will take place inside a router or scope that calls a child flow. The child flow has its own error handling defined. Any validation or communication failures should not prevent further processing of the remaining records. Which router or scope should be used in the parent flow, and which type of error handler should be used in the child flow in order to meet these requirements?
A. For Each scope in the parent flow; On Error Continue error handler in the child flow
B. Until Successful router in the parent flow; On Error Propagate error handler in the child flow
C. Parallel For Each scope in the parent flow; On Error Propagate error handler in the child flow
D. Choice router in the parent flow; On Error Continue error handler in the child flow

23. An organization has defined a common object model in Java to mediate the communication between different Mule applications in a consistent way. A Mule application is being built to use this common object model to process responses from a SOAP API and a REST API and then write the processed results to an order management system. The developers want Anypoint Studio to utilize these common objects to assist in creating mappings for various transformation steps in the Mule application. What is the most straightforward way to utilize these common objects to map between the inbound and outbound systems in the Mule application?
A. Use the Transform Message component
B. Use JAXB (XML) and Jackson (JSON) data bindings
C. Use Idempotent Message Validator components
D. Use the Java module

24. A Mule Process API is being designed to provide product usage details. The Mule application must join together the responses from an Inventory API and a Product Sales History API with the least latency. How should each API request be called in the Mule application to minimize overall latency?
A. In a separate route of a Scatter-Gather
B. Call each API request in a separate Mule flow
C. Call each API request in a Batch Step within a Batch Job
D. In a separate lookup call from a DataWeave reduce function
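Question 24 comes down to issuing the two backend calls concurrently and joining the results, which is what separate Scatter-Gather routes do inside a flow. The Java sketch below shows the same shape with CompletableFuture; the endpoint URLs are hypothetical placeholders for the Inventory API and the Product Sales History API.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.concurrent.CompletableFuture;

    public class ProductUsageAggregator {
        public static void main(String[] args) {
            HttpClient http = HttpClient.newHttpClient();
            CompletableFuture<String> inventory = get(http, "https://api.example.com/inventory/42");
            CompletableFuture<String> salesHistory = get(http, "https://api.example.com/sales-history/42");

            // Both requests are in flight at the same time, so overall latency is roughly
            // the slower of the two calls rather than their sum.
            String combined = inventory.thenCombine(salesHistory,
                    (inv, sales) -> "{\"inventory\":" + inv + ",\"salesHistory\":" + sales + "}").join();
            System.out.println(combined);
        }

        private static CompletableFuture<String> get(HttpClient http, String url) {
            HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
            return http.sendAsync(request, HttpResponse.BodyHandlers.ofString())
                    .thenApply(HttpResponse::body);
        }
    }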
25. An organization plans to use the Salesforce Connector as an intermediate layer for applications that need access to Salesforce events such as adding, changing, or deleting objects, topics, documents, and channels. What are two features to keep in mind when using the Salesforce Connector for this integration? (Choose two.)
A. REST API
B. gRPC
C. GraphQL
D. Streaming API
E. Chatter API

26. An organization is trying to invoke REST APIs as part of its integration with external systems, which requires OAuth 2.0 tokens for authorization. How should authorization tokens be acquired in a Mule application?
A. Use the HTTP Connector's authentication feature
B. Write custom Java code for handling authorization tokens
C. Implement a Scheduler-based flow for retrieving/saving OAuth 2.0 tokens in an Object Store
D. Configure OAuth 2.0 in Client Management in Anypoint Platform

27. A large enterprise is building APIs to connect to their 300 systems of record across all of their departments. These systems have a variety of data formats to exchange with the APIs, and the Solution Architect plans to use the application/dw format for data transformations. What are two facts that the Integration Architect must be aware of when using the application/dw format for transformations? (Choose two.)
A. The application/dw format can impact performance and is not recommended in a production environment
B. The application/dw configuration property must be set to "onlyData=true" when reading or writing data in the application/dw format
C. The application/dw format is the only native format that never runs into an Out Of Memory error
D. The application/dw format improves performance and is recommended for all production environments
E. The application/dw format stores input data from an entire file in memory if the file is 10 MB or less

28. What are two considerations when designing Mule APIs and integrations that leverage an enterprise-wide common data model (CDM)? (Choose two.)
A. All data types required by the APIs are not typically defined by the CDM
B. The CDM typically does not model experience-level APIs
C. Changes made to the data model do not impact the implementations of the APIs
D. The CDM typically does not model process-level APIs
E. The CDM models multiple definitions of a given data type based on separate domains

29. A Mule application receives a JSON request, and it uses the Validation module extensively to perform certain validations like isNotEmpty, isEmail, and isNotElapsed. It throws an error if any of these validations fails. A new requirement is added that says a validation error should be thrown only if all of the above individual validations fail, and then an aggregation of the individual errors should be returned. Which MuleSoft component supports this new requirement?
A. Use a VALIDATION:ANY scope wrapper enclosing all individual validations
B. Use a VALIDATION:ALL scope wrapper enclosing all individual validations
C. Add try-catch with an on-error-continue wrapper over each individual validation
D. Add try-catch with an on-error-propagate wrapper over each individual validation
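The behaviour asked for in question 29 (raise an error only when every check fails, and report the failures together) can be pictured with the small standard-Java sketch below; the three checks are stand-ins for isNotEmpty, isEmail, and isNotElapsed rather than the Validation module's real operations.

    import java.util.ArrayList;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.function.Predicate;

    public class AggregatedValidation {
        public static void main(String[] args) {
            String value = "architect@example.com";

            // Individual checks, analogous to isNotEmpty / isEmail / a freshness check.
            Map<String, Predicate<String>> checks = new LinkedHashMap<>();
            checks.put("value must not be empty", s -> s != null && !s.isBlank());
            checks.put("value must look like an email", s -> s != null && s.matches(".+@.+\\..+"));
            checks.put("value must be shorter than 64 characters", s -> s != null && s.length() < 64);

            List<String> failures = new ArrayList<>();
            for (Map.Entry<String, Predicate<String>> check : checks.entrySet()) {
                if (!check.getValue().test(value)) {
                    failures.add(check.getKey());
                }
            }

            // Raise one aggregated error only when every validation failed; if at least
            // one passed, processing continues.
            if (failures.size() == checks.size()) {
                throw new IllegalArgumentException("All validations failed: " + failures);
            }
            System.out.println("At least one validation passed; failures recorded: " + failures);
        }
    }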
SECTION 5: Designing automated tests for Mule applications

30. A developer is developing an MUnit test suite for a Mule application. This application must access third-party vendor SOAP services. In the CI/CD pipeline, access to third-party vendor services is restricted. Without MUnits, a successful run and coverage report score is less than the threshold, and builds will fail. Which solution can be implemented to execute MUnits successfully?
A. In MUnits, mock a SOAP service invocation and provide a mock response for those calls
B. For the CI/CD pipeline, add a skip clause in the flow for invoking SOAP services
C. In the CI/CD pipeline, create and deploy mock SOAP services
D. In MUnits, invoke a dummy SOAP service to send a mock response for those calls

31. An MUnit case is written for a Main Flow that consists of a Listener, a Set Payload, a Set Variable, a Transform Message, a Logger, and an error handler. The case passes but with a coverage of 80 percent. What could be the reason for not covering the remaining 20 percent, and how can coverage be achieved?
A. The error handler; use an error handler in the MUnit test suite
B. The Listener; use Mock When in the MUnit test suite
C. The error handler; use Mock When in the MUnit test suite
D. The Listener; send a dummy payload in the MUnit test suite

32. A company is tracking the number of patient COVID-19 tests given across a region, and the number of records handled by the system is in the millions. Test results must be accessible to doctors in offices, hospitals, and urgent-care facilities within three seconds of the request, particularly for patients at high risk. Given this information, which test supports the system for the risk assessment?
A. Performance test
B. Integration test
C. Unit test
D. User acceptance test

SECTION 6: Designing integration solutions to meet persistence requirements

33. Refer to the exhibit. In this Mule application, the retrieveFile flow's event source reads a CSV file from a remote SFTP server and then publishes each record in the CSV file to a VM queue. The processCustomerRecord flow's VM Listener receives messages from the same VM queue and then processes each message separately. This Mule application is deployed to multiple CloudHub workers with persistent queues enabled. How are messages routed to the CloudHub workers as messages are received by the VM Listener?
A. Each message is routed to one of the available CloudHub workers in a non-deterministic, non-round-robin fashion, thereby approximately balancing messages among the CloudHub workers
B. Each message is routed to the same CloudHub worker that retrieved the file, thereby binding all messages to only that one CloudHub worker
C. Each message is duplicated to all of the CloudHub workers, thereby sharing each message with all the CloudHub workers
D. Each message is routed to one of the CloudHub workers in a deterministic round-robin fashion, thereby exactly balancing messages among the CloudHub workers

34. Refer to the exhibit. An application is deployed in CloudHub and uses the VM Connector with a TRANSIENT queues configuration. Which action is also required to ensure zero messages are lost in case the CloudHub worker crashes?
A. Check the option for Persistent Queues and scale out to two workers on the application settings page in Runtime Manager; there is no need to change the VM queue configuration
B. Publish the message to a dead-letter queue in case of any system error
C. Change the VM queue configuration in the implementation from TRANSIENT to PERSISTENT; there is no need to change any settings in Runtime Manager for the application
D. Scale up the worker in the Runtime Manager settings for the application
35. A company is designing a Mule application named Inventory that uses a persistent Object Store. The Inventory Mule application is deployed to CloudHub and is configured to use Object Store v2. Another Mule application named Cleanup is being developed to delete values from the Inventory Mule application's persistent Object Store. The Cleanup Mule application will also be deployed to CloudHub. What is the most direct way for the Cleanup Mule application to delete values from the Inventory Mule application's persistent Object Store with the least latency?
A. Use the Object Store v2 REST API configured to access the Inventory Mule application's persistent Object Store
B. Use a VM connector configured to directly access the persistent queue of the Inventory Mule application's persistent Object Store
C. Use an Object Store connector configured to access the Inventory Mule application's persistent Object Store
D. Use an Anypoint MQ connector configured to directly access the Inventory Mule application's persistent Object Store

36. An organization is implementing a Quote of the Day API that caches today's quote. Which scenario can use the CloudHub Object Store v2 via the Object Store connector to persist the cache's state?
A. When there is one CloudHub deployment of the API implementation to three CloudHub workers/replicas, where all three CloudHub workers/replicas must share the cache state
B. When there is one deployment of the API implementation to CloudHub and another deployment to a customer-hosted Mule runtime, where both deployments must share the cache state
C. When there are two CloudHub deployments of the API implementation that must share the cache state, where the API implementations are deployed to two different CloudHub VPNs within the same business group
D. When there are two CloudHub deployments of the API implementation that must share the cache state, where each API implementation is deployed from a different Anypoint Platform business group to the same CloudHub region

37. Refer to the exhibit. A Mule application is deployed to a multi-node Mule runtime cluster. The Mule application uses the Competing Consumers pattern among its cluster replicas to receive JMS messages from a JMS queue. To process each received JMS message, the following steps are performed in a flow.
Step 1: The JMS Correlation ID header is read from the received JMS message.
Step 2: The Mule application invokes an idempotent SOAP web service over HTTPS, passing the JMS Correlation ID as one parameter in the SOAP request.
Step 3: The response from the SOAP web service also returns the same JMS Correlation ID.
Step 4: The JMS Correlation ID received from the SOAP web service is validated to be identical to the JMS Correlation ID received in Step 1.
Step 5: The Mule application creates a response JMS message, setting the JMS Correlation ID message header to the validated JMS Correlation ID, and publishes that message to a response JMS queue.
Where should the Mule application store the JMS Correlation ID values received in Step 1 and Step 3 so that the validation in Step 4 can be performed and the overall Mule application can be highly available, fault-tolerant, performant, and maintainable?
A. Both Correlation ID values should be stored as Mule event variables or attributes
B. Both Correlation ID values should be stored in a persistent Object Store
C. The Correlation ID value in Step 1 should be stored in a persistent Object Store; the Correlation ID value in Step 3 should be stored as Mule event variables or attributes
D. Both Correlation ID values should be stored in a nonpersistent Object Store
38. Refer to the exhibit. A company is tracking the number of patient COVID-19 tests across the city. Test results must be accessible to doctors in offices, hospitals, and urgent-care facilities. Due to the importance of the service, in particular for patients at high risk, the company is requested to improve the responsiveness of the Test Result API, shown in the image below, to retrieve the patient's result. How can these data and functional requirements be met?
A. Add a Cache scope in the Test Result API GET /testResult operation implementation
B. Apply an HTTP Caching policy to the entire Test Result API
C. Scale out the number of workers for the current application in Runtime Manager
D. Add a new request parameter for patientAtRisk to give high priority to this type of call in the GET /testResult operation

SECTION 7: Designing integration solutions to meet reliability requirements

39. An airline's passenger reservations center is designing an integration solution that combines invocations of three different System APIs (bookFlight, bookHotel, and bookCar) in a business transaction. Each System API makes calls to a single database. The entire business transaction must be rolled back when at least one of the APIs fails. What is the most direct way to integrate these APIs in near real-time that provides the best balance of consistency, performance, and reliability?
A. Implement local transactions in each API implementation.
   Coordinate between the API implementations using a Saga pattern.
   Apply various compensating actions depending on where a failure occurs.
B. Implement an eXtended Architecture (XA) transaction manager in a Mule application using a Saga pattern.
   Connect each API implementation with the Mule application using XA transactions.
   Apply various compensating actions depending on where a failure occurs.
C. Implement local transactions within each API implementation.
   Configure each API implementation to also participate in the same eXtended Architecture (XA) transaction.
   Implement caching in each API implementation to improve performance.
D. Implement eXtended Architecture (XA) transactions between the API implementations.
   Coordinate between the API implementations using a Saga pattern.
   Implement caching in each API implementation to improve performance.
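Option A in question 39 pairs local transactions with saga-style compensation. The sketch below shows the compensation mechanics in plain Java; bookFlight, bookHotel, bookCar and their cancel counterparts are hypothetical stand-ins for the three System APIs, not real endpoints.

    import java.util.ArrayDeque;
    import java.util.Deque;

    public class BookingSaga {
        public static void main(String[] args) {
            // Each completed step registers a compensating action; if a later step fails,
            // the compensations run in reverse order, undoing the completed work.
            Deque<Runnable> compensations = new ArrayDeque<>();
            try {
                String flightId = bookFlight();
                compensations.push(() -> cancelFlight(flightId));

                String hotelId = bookHotel();
                compensations.push(() -> cancelHotel(hotelId));

                String carId = bookCar();
                compensations.push(() -> cancelCar(carId));
            } catch (RuntimeException failure) {
                while (!compensations.isEmpty()) {
                    compensations.pop().run(); // newest completed step is undone first
                }
                throw failure;
            }
        }

        private static String bookFlight() { return "F-1"; }
        private static String bookHotel()  { return "H-1"; }
        private static String bookCar()    { throw new RuntimeException("no cars available"); }
        private static void cancelFlight(String id) { System.out.println("cancel flight " + id); }
        private static void cancelHotel(String id)  { System.out.println("cancel hotel " + id); }
        private static void cancelCar(String id)    { System.out.println("cancel car " + id); }
    }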
40. What are two valid considerations when implementing a reliability pattern? (Choose two.)
A. It requires using an XA transaction to bridge message sources when multiple managed resources need to be enlisted within the same transaction
B. It has performance implications
C. It provides high performance
D. It is not possible to have multiple message sources within the same transaction while implementing a reliability pattern
E. It does not support VM queues in an HA cluster

41. In a Mule application, a flow contains two JMS Consume operations that are used to connect to a JMS broker and consume messages from two JMS destinations. The Mule application then joins the two consumed JMS messages together. The JMS broker does not implement high availability and periodically experiences scheduled outages of up to 10 minutes for routine maintenance. How should the Mule flow be built so it can recover from the expected outages?
A. Configure a reconnection strategy for the JMS connector
B. Enclose the two JMS operations in a Try scope with an On Error Continue error handler
C. Enclose the two JMS operations in an Until Successful scope
D. Configure a transaction for the JMS connector

42. An organization has a mission-critical application that processes some of its valuable real-time transactions. The application needs to be highly available, and the organization does not have any cost constraints, but it expects minimal downtime. Which high-availability option supports the organization's requirements?
A. Active-Active
B. Warm Standby
C. Hot Standby (Active-Passive)
D. Cold Standby

43. An organization is designing a Mule application to support an all-or-nothing transaction between several database operations and some other connectors so that all operations automatically roll back if there is a problem with any of the connectors. Besides the Database connector, what other Anypoint connector can be used in the Mule application to participate in the all-or-nothing transaction?
A. JMS
B. Object Store
C. Anypoint MQ
D. SFTP

SECTION 8: Designing integration solutions to meet performance requirements

44. An organization plans to leverage the MuleSoft open-source Serialization API to serialize or deserialize objects into a byte array. Which two considerations must be kept in mind while using the Serialization API? (Choose two.)
A. The API allows an InputStream as an input source
B. The API passes an OutputStream when serializing and streaming
C. The API does not provide any flexibility to specify which classloader to use
D. The API is not thread-safe
E. The API does not support configuring a custom serializer

45. Refer to the exhibit. A connector uses a repeatable in-memory stream with these configurations:
maxBufferSize = "512"
initialBufferSize = "512"
bufferSizeIncrement = "512"
What happens if the output payload size is 1,000 KB?
A. A runtime error is thrown
B. The payload is split in chunks of 512 KB and each chunk is processed concurrently
C. The payload is read repeatedly with a watermark until the entire payload is processed
D. The Mule runtime stops with a java.lang.OutOfMemoryError

46. An organization is designing an integration solution to replicate financial transaction data from a legacy system into a data warehouse (DWH). The DWH must contain a daily snapshot of financial transactions, to be delivered as a CSV file. Daily transaction volume exceeds tens of millions of records, with significant spikes in volume during popular shopping periods. What is the most appropriate integration style for an integration solution that meets the organization's current requirements?
A. Batch-triggered data integration
B. Splitter-Aggregator integration pattern
C. Event-driven architecture
D. Microservice architecture

47. A Mule application is being designed to receive a CSV file nightly that contains millions of records from an external vendor over SFTP. The records from the file must be transformed and then written to a database. Records can be inserted into the database in any order. In this use case, which combination of Mule components provides the most effective way to write these records to the database?
A. Use a Batch Job scope to bulk-insert records into the database
B. Use a Scatter-Gather router to bulk-insert records into the database
C. Use a Parallel For Each scope to insert records in parallel into the database
D. Use the DataWeave map function and an Async scope to insert records in parallel into the database
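In question 47, much of a Batch Job's effectiveness for this use case comes from writing the records to the database in bulk rather than one row at a time. Below is a minimal plain-JDBC sketch of that bulk-style write; the JDBC URL, credentials, table, and record shape are hypothetical placeholders, and inside a Mule application the Batch Job scope with a bulk database operation takes care of this instead.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.util.List;

    public class BulkRecordWriter {
        public static void writeBatch(List<String[]> records) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://db-host:5432/orders", "app_user", "secret");
                 PreparedStatement ps = conn.prepareStatement(
                         "INSERT INTO staged_records (external_id, payload) VALUES (?, ?)")) {
                int inBatch = 0;
                for (String[] record : records) {
                    ps.setString(1, record[0]);
                    ps.setString(2, record[1]);
                    ps.addBatch();
                    if (++inBatch % 1000 == 0) {
                        ps.executeBatch(); // flush every 1,000 rows to keep memory bounded
                    }
                }
                ps.executeBatch(); // flush the remainder
            }
        }
    }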
SECTION 9: Designing integration solutions to meet security requirements

48. What limits whether a particular Anypoint Platform user can discover an asset in Anypoint Exchange?
A. The teams to which the user belongs
B. Accessibility of the asset in API Manager
C. The type of the asset in Anypoint Exchange
D. The existence of a public Anypoint Exchange portal to which the asset has been published

49. An organization plans to leverage the Anypoint Security policies for Edge to enforce security policies on nodes deployed to its Anypoint Runtime Fabric. Which two considerations must be kept in mind to configure and use the security policies? (Choose two.)
A. The Anypoint Security for Edge entitlement must be configured for the Anypoint Platform account
B. Runtime Fabric with inbound traffic must be configured
C. Runtime Fabric with outbound traffic must be configured
D. HTTP limits policies are designed to protect the network nodes against malicious clients such as DoS applications trying to flood the network to prevent legitimate traffic to APIs
E. Web application firewall policies allow configuring an explicit list of IP addresses that can access deployed endpoints

50. A manufacturing company has an HTTPS-enabled Mule application named Orders API that receives requests from another Mule application named Process Orders. The communication between these two Mule applications must be secured by TLS mutual authentication (two-way TLS). At a minimum, what must be stored in each truststore and keystore of these two Mule applications to properly support two-way TLS between the two Mule applications while properly protecting each Mule application's keys?
A. Orders API truststore: The Process Orders public key.
   Orders API keystore: The Orders API private key and public key.
   Process Orders truststore: The Orders API public key.
   Process Orders keystore: The Process Orders private key and public key.
B. Orders API truststore: The Process Orders public key.
   Orders API keystore: The Orders API private key.
   Process Orders truststore: The Orders API public key.
C. Orders API keystore: The Orders API private key.
   Process Orders truststore: The Orders API public key.
   Process Orders keystore: The Process Orders private key.
D. Orders API truststore: The Process Orders private key.
   Orders API keystore: The Orders API private key and public key.
   Process Orders truststore: The Orders API private key.
   Process Orders keystore: The Process Orders private key and public key.
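As background for question 50, the sketch below builds a client-side SSLContext the way two-way TLS expects: the keystore supplies this application's own private key, and the truststore holds only the other party's public certificate. File names and passwords are hypothetical placeholders; in a Mule application the same material is configured in the HTTP connector's TLS context rather than in Java code.

    import java.io.FileInputStream;
    import java.security.KeyStore;
    import javax.net.ssl.KeyManagerFactory;
    import javax.net.ssl.SSLContext;
    import javax.net.ssl.TrustManagerFactory;

    public class MutualTlsClientContext {
        public static SSLContext build() throws Exception {
            // Keystore: this application's identity (its private key and certificate).
            KeyStore keyStore = KeyStore.getInstance("PKCS12");
            try (FileInputStream in = new FileInputStream("process-orders-keystore.p12")) {
                keyStore.load(in, "keystore-password".toCharArray());
            }
            KeyManagerFactory kmf = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
            kmf.init(keyStore, "keystore-password".toCharArray());

            // Truststore: only the other side's public certificate, never a private key.
            KeyStore trustStore = KeyStore.getInstance("PKCS12");
            try (FileInputStream in = new FileInputStream("orders-api-truststore.p12")) {
                trustStore.load(in, "truststore-password".toCharArray());
            }
            TrustManagerFactory tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
            tmf.init(trustStore);

            SSLContext context = SSLContext.getInstance("TLS");
            context.init(kmf.getKeyManagers(), tmf.getTrustManagers(), null);
            return context;
        }
    }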
51. A software company is creating a Mule application that will be deployed to CloudHub. The Mule application has a property named dbPassword that stores a database user's password. The organization's security standards indicate that the dbPassword property must be hidden from every Anypoint Platform user after the value is set in the Runtime Manager Properties tab. Which configuration in the Mule application helps hide the dbPassword property value in Runtime Manager?
A. Add the dbPassword property to the secureProperties section of the mule-artifact.json file
B. Use secure::dbPassword as the property placeholder name and store the cleartext (unencrypted) value in a secure properties placeholder file
C. Store the encrypted dbPassword value in a secure properties placeholder file
D. Add the dbPassword property to the secureProperties section of the pom.xml file

52. An organization uses MuleSoft extensively and has about 2,000 employees. Many of them work on MuleSoft APIs. The organization has approximately 500 APIs in production. The organization's leadership strictly discourages direct API modification (for example, stop/start/delete in production); however, there have been a few instances where modifications in production occurred. Now leadership wants to know every instance when this occurred in the past year, including timestamps and user IDs. What is the easiest way to retrieve this information?
A. Invoke the Audit Log Query Platform API, using a combination of filters such as timeframe and actionType to extract a user list
B. Submit a support ticket to the MuleSoft product team to create a custom report
C. Invoke the Runtime Manager Platform API for each production API and check access_history one by one
D. Use MuleSoft audit logs; however, the audit logs only store data for six months

SECTION 10: Applying DevOps practices and operating integration solutions

53. A manufacturing organization has implemented a continuous integration (CI) lifecycle that promotes Mule applications through code, build, and test stages. To standardize the organization's CI journey, a new dependency control approach is being designed to store artifacts that include information such as dependencies, versioning, and build promotions. To implement these process improvements, the organization requires developers to maintain all dependencies related to Mule application code in a shared location. Which system should the organization use in a shared location to standardize all dependencies related to Mule application code?
A. A binary artifact repository
B. A MuleSoft-managed repository at repository.mulesoft.org
C. API Community Manager
D. The Anypoint Object Store service at cloudhub.io

54. An organization is automating its deployment process to increase the reliability of its builds and general development process by automating the running of tests during its builds. Which tool is responsible for automating its test execution?
A. MUnit Maven plugin
B. Mule Maven plugin
C. MUnit
D. Anypoint CLI

55. An automation engineer must write scripts to automate the steps of the API lifecycle, including steps to create, publish, deploy, and manage APIs and their implementations in Anypoint Platform. Which Anypoint Platform feature can be most easily used to automate the execution of all these actions in scripts without needing to directly invoke the Anypoint Platform REST APIs?
A. Anypoint CLI
B. Mule Maven plugin
C. Custom-developed Postman scripts
D. GitHub Actions

56. An organization is automating the deployment of several Mule applications to a single customer-hosted Mule runtime. There is also a corporate regulatory requirement to have all payload data and usage data reside in the organization's network. The automation will be invoked from one of the organization's internal systems and should not involve connecting to Runtime Manager in Anypoint Platform. Which Anypoint Platform component(s) and REST API(s) are required to configure the automated deployment of the Mule applications?
A. A Runtime Manager agent installed in the Mule runtime; the Runtime Manager agent REST API to deploy the Mule applications
B. An Anypoint Monitoring agent installed in the Mule runtime; the Anypoint Monitoring REST API to deploy the Mule applications
C. The Runtime Manager REST API (without any agents) to deploy the Mule applications directly to the Mule runtime
D. The Anypoint Monitoring REST API (without any agents) to deploy Mule applications to the Mule runtime using Anypoint Monitoring
57. Refer to the exhibit. A shopping cart checkout process consists of a web store back end that sends a sequence of HTTPS POST requests to an Experience API, which in turn invokes a Process API using HTTPS. The web store back end executes in a Java EE application server. All API implementations are Mule applications executing in a customer-hosted Mule runtime. End-to-end correlation of all HTTP requests and responses belonging to each checkout instance is required. This is to be done through a common correlation ID, so that all log entries written by the web store back end, the Experience API implementation, and the Process API implementation include the same correlation ID for all requests and responses belonging to the same checkout instance. What is the most efficient way (using the least amount of custom coding or configuration) for the web store back end and the implementations of the Experience API and Process API to participate in end-to-end correlation of the API invocations for each checkout instance?
A. The web store back end generates a new correlation ID value at the start of checkout and sets it on the X-Correlation-ID HTTP request header in each API invocation belonging to that checkout. No special code or configuration is included in the Experience API or Process API implementations to generate and manage the correlation ID.
B. The web store back end sends a correlation ID value in the HTTP request body in the way required by the Experience API. The Experience API and Process API implementations must be coded to receive the custom correlation ID in the HTTP requests and propagate it in suitable HTTP request headers.
C. The Experience API implementation generates a correlation ID for each incoming HTTP request and passes it to the web store back end in the HTTP response, which includes it in all subsequent API invocations to the Experience API. The Experience API implementation must also be coded to propagate the correlation ID to the Process API in a suitable HTTP request header.
D. The web store back end, being a Java EE application, automatically uses the thread-local correlation ID generated by the Java EE application server and automatically transmits that to the Experience API using standard HTTP headers. No special code or configuration is included in the web store back end, Experience API implementation, or Process API implementation to generate and manage the correlation ID.
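Option A in question 57 leans on the web store back end generating one correlation ID per checkout and sending it on every request, while the Mule API implementations stay free of correlation-specific code because the HTTP connectors can pick up and propagate an inbound X-Correlation-ID header. The sketch below shows the back end's side of that contract; the Experience API URLs are hypothetical placeholders.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.UUID;

    public class CheckoutClient {
        public static void main(String[] args) throws Exception {
            // One correlation ID for the whole checkout instance, reused on every request.
            String correlationId = UUID.randomUUID().toString();
            HttpClient http = HttpClient.newHttpClient();

            HttpRequest addItem = HttpRequest.newBuilder(
                            URI.create("https://experience-api.example.com/cart/items"))
                    .header("X-Correlation-ID", correlationId)
                    .POST(HttpRequest.BodyPublishers.ofString("{\"sku\":\"ABC-1\",\"qty\":1}"))
                    .build();
            System.out.println(http.send(addItem, HttpResponse.BodyHandlers.ofString()).statusCode());

            HttpRequest checkout = HttpRequest.newBuilder(
                            URI.create("https://experience-api.example.com/checkout"))
                    .header("X-Correlation-ID", correlationId)
                    .POST(HttpRequest.BodyPublishers.ofString("{}"))
                    .build();
            System.out.println(http.send(checkout, HttpResponse.BodyHandlers.ofString()).statusCode());
        }
    }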
58. An organization will deploy Mule applications to CloudHub. Business requirements mandate that all application logs be stored only in an external Splunk consolidated logging service and not in CloudHub. In order to most easily store Mule application logs only in Splunk, how must Mule application logging be configured in Runtime Manager, and where should the log4j2 Splunk appender be defined?
A. Disable CloudHub logging in Runtime Manager. Define the Splunk appender in each Mule application's log4j2.xml file.
B. Disable CloudHub logging in Runtime Manager. Submit a ticket to MuleSoft Support with the Splunk appender information so that CloudHub can automatically forward logs to the specified Splunk appender.
C. Disable CloudHub logging in Runtime Manager. Define the Splunk appender in one global log4j2.xml file that is uploaded once to Runtime Manager to support all Mule application deployments.
D. Keep the default logging configuration in Runtime Manager. Define the Splunk appender in the Logging section of Runtime Manager in each application so that it overwrites the default logging configuration.

59. An organization uses a set of customer-hosted Mule runtimes that are managed using the MuleSoft-hosted control plane. What is a condition that can be alerted on from Anypoint Runtime Manager without any custom components or custom coding?
A. When a Mule runtime on a given customer-hosted server is experiencing high memory consumption during certain periods
B. When a Mule runtime license installed on a Mule runtime is about to expire
C. When an SSL certificate used by one of the deployed Mule applications is about to expire
D. When a Mule runtime's customer-hosted server is about to run out of disk space

60. An organization has deployed both Mule and non-Mule API implementations to integrate its customer and order management systems. All the APIs are available to REST clients on the public internet. The organization wants to monitor these APIs by running health checks, for example, to determine if an API can properly accept and process requests. The organization does not have subscriptions to any external monitoring tools and also does not want to extend its IT footprint. Which Anypoint Platform feature monitors the availability of both the Mule and the non-Mule API implementations?
A. API Functional Monitoring
B. API Manager
C. Runtime Manager
D. Anypoint Visualizer

Scoring Table

To receive credit for a question, all of the answers listed for it must be selected.

SECTION 1: 1 A | 2 A, B | 3 A | 4 A | 5 A
Section 1 score: ___ / 5 ( ___ %)

SECTION 2: 6 A | 7 A | 8 A | 9 A | 10 A | 11 A | 12 A | 13 A | 14 A
Section 2 score: ___ / 9 ( ___ %)

SECTION 3: 15 A | 16 A | 17 A | 18 A | 19 A | 20 A
Section 3 score: ___ / 6 ( ___ %)

SECTION 4: 21 A | 22 A | 23 A | 24 A | 25 A, B | 26 A | 27 A, B | 28 A, B | 29
Section 4 score: ___ / 9 ( ___ %)

SECTION 5: 30 A | 31 A | 32 A
Section 5 score: ___ / 3 ( ___ %)

SECTION 6: 33 A | 34 A | 35 A | 36 A | 37 A | 38 A
Section 6 score: ___ / 6 ( ___ %)

SECTION 7: 39 A | 40 A, B | 41 A | 42 A | 43 A
Section 7 score: ___ / 5 ( ___ %)

SECTION 8: 44 A, B | 45 A | 46 A | 47 A
Section 8 score: ___ / 4 ( ___ %)

SECTION 9: 48 A | 49 A, B | 50 A | 51 A | 52 A
Section 9 score: ___ / 5 ( ___ %)

SECTION 10: 53 A | 54 A | 55 A | 56 A | 57 A | 58 A | 59 A | 60 A
Section 10 score: ___ / 8 ( ___ %)

TOTAL CORRECT ANSWERS: ___ / 60
TOTAL CORRECT ANSWERS %: ___ / 100%

To pass the practice exam, you need a score of at least 70% (42 correct answers). Before attempting the certification exam, review the training for any section where your score was below 70%.
