
CertyIQ Premium exam material. Get certification quickly with the CertyIQ Premium exam material. Everything you need to prepare, learn & pass your certification exam easily. Lifetime free updates. First attempt guaranteed success. https://www.CertyIQ.com

Microsoft (AI-102) Designing and Implementing a Microsoft Azure AI Solution
Total: 244 Questions
Link: https://certyiq.com/papers?provider=microsoft&exam=ai-102

Question: 1 CertyIQ
DRAG DROP
You have 100 chatbots that each has its own Language Understanding model. Frequently, you must add the same phrases to each model. You need to programmatically update the Language Understanding models to include the new phrases.
How should you complete the code? To answer, drag the appropriate values to the correct targets. Each value may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.
Select and Place:
Answer:
Explanation:
Box 1: AddPhraseListAsync
Example: add a phrase list feature:
var phraselistId = await client.Features.AddPhraseListAsync(appId, versionId, new PhraselistCreateObject
{
    EnabledForAllModels = false,
    IsExchangeable = true,
    Name = "QuantityPhraselist",
    Phrases = "few,more,extra"
});
Box 2: PhraselistCreateObject
Reference: https://docs.microsoft.com/en-us/azure/cognitive-services/luis/client-libraries-rest-api

Question: 2 CertyIQ
DRAG DROP
You plan to use a Language Understanding application named app1 that is deployed to a container. App1 was developed by using a Language Understanding authoring resource named lu1. App1 has the versions shown in the following table. You need to create a container that uses the latest deployable version of app1.
Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Select and Place:
Answer:
Explanation:
Step 1: Select v1.1 of app1.
A trained or published app packaged as a mounted input to the container with its associated App ID.
Step 2: Export the model using the Export for containers (GZIP) option.
Export a versioned app's package from the LUIS portal. The versioned app's package is available from the Versions list page.
1. Sign on to the LUIS portal.
2. Select the app in the list.
3. Select Manage in the app's navigation bar.
4. Select Versions in the left navigation bar.
5. Select the checkbox to the left of the version name in the list.
6. Select the Export item from the contextual toolbar above the list.
7. Select Export for container (GZIP).
8. The package is downloaded from the browser.
Step 3: Run the container and mount the model file. Run the container with the required input mount and billing settings.
Reference: https://docs.microsoft.com/en-us/azure/cognitive-services/luis/luis-container-howto

Question: 3 CertyIQ
You need to build a chatbot that meets the following requirements:
✑ Supports chit-chat, knowledge base, and multilingual models
✑ Performs sentiment analysis on user messages
✑ Selects the best language model automatically
What should you integrate into the chatbot?
A. QnA Maker, Language Understanding, and Dispatch
B. Translator, Speech, and Dispatch
C. Language Understanding, Text Analytics, and QnA Maker
D. Text Analytics, Translator, and Dispatch
Answer: C
Explanation:
Language Understanding: an AI service that allows users to interact with your applications, bots, and IoT devices by using natural language.
QnA Maker is a cloud-based natural language processing (NLP) service that allows you to create a natural conversational layer over your data. It is used to find the most appropriate answer for any input from your custom knowledge base (KB) of information.
Text Analytics: mine insights in unstructured text using natural language processing (NLP), no machine learning expertise required. Gain a deeper understanding of customer opinions with sentiment analysis.
The Language Detection feature of the Azure Text Analytics REST API evaluates text input.
Incorrect Answers:
A, B, D: Dispatch uses sample utterances for each of your bot's different tasks (LUIS, QnA Maker, or custom), and builds a model that can be used to properly route your user's request to the right task, even across multiple bots.
Reference: https://azure.microsoft.com/en-us/services/cognitive-services/text-analytics/ https://docs.microsoft.com/en-us/azure/cognitive-services/qnamaker/overview/overview

Question: 4 CertyIQ
Your company wants to reduce how long it takes for employees to log receipts in expense reports. All the receipts are in English. You need to extract top-level information from the receipts, such as the vendor and the transaction total. The solution must minimize development effort.
Which Azure service should you use?
A. Custom Vision
B. Personalizer
C. Form Recognizer
D. Computer Vision
Answer: C
Explanation:
Azure Form Recognizer is a cognitive service that lets you build automated data processing software using machine learning technology. Identify and extract text, key/value pairs, selection marks, tables, and structure from your documents; the service outputs structured data that includes the relationships in the original file, bounding boxes, confidence, and more. Form Recognizer is composed of custom document processing models, prebuilt models for invoices, receipts, IDs and business cards, and the layout model.
Reference: https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer

Question: 5 CertyIQ
HOTSPOT
You need to create a new resource that will be used to perform sentiment analysis and optical character recognition (OCR). The solution must meet the following requirements:
✑ Use a single key and endpoint to access multiple services.
✑ Consolidate billing for future services that you might use.
✑ Support the use of Computer Vision in the future.
How should you complete the HTTP request to create the new resource?
To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer:
Explanation:
Box 1: PUT
Sample Request:
PUT https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/test-rg/providers/Microsoft.DeviceUpdate/accounts/contoso?api-version=2020-03-01-preview
Incorrect Answers: PATCH is for updates.
Box 2: CognitiveServices
Microsoft Azure Cognitive Services lets you use pre-trained models for various business problems related to machine learning. The list of services includes:
✑ Decision
✑ Language (includes sentiment analysis)
✑ Speech
✑ Vision (includes OCR)
✑ Web Search
Reference: https://docs.microsoft.com/en-us/rest/api/deviceupdate/resourcemanager/accounts/create https://www.analyticsvidhya.com/blog/2020/12/microsoft-azure-cognitive-services-api-for-ai-development/

Question: 6 CertyIQ
You are developing a new sales system that will process the video and text from a public-facing website. You plan to monitor the sales system to ensure that it provides equitable results regardless of the user's location or background.
Which two responsible AI principles provide guidance to meet the monitoring requirements? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
A. transparency
B. fairness
C. inclusiveness
D. reliability and safety
E. privacy and security
Answer: BC
Explanation:
BC is the answer. https://learn.microsoft.com/en-us/azure/cloud-adoption-framework/innovate/best-practices/trusted-ai#fairness
Fairness is a core ethical principle that all humans aim to understand and apply. This principle is even more important when AI systems are being developed. Key checks and balances need to make sure that the system's decisions don't discriminate or run a gender, race, sexual orientation, or religion bias toward a group or individual.
https://learn.microsoft.com/en-us/azure/cloud-adoption-framework/innovate/best-practices/trusted-ai#inclusiveness
Inclusiveness mandates that AI should consider all human races and experiences, and inclusive design practices can help developers understand and address potential barriers that could unintentionally exclude people. Where possible, speech-to-text, text-to-speech, and visual recognition technology should be used to empower people with hearing, visual, and other impairments.

Question: 7 CertyIQ
DRAG DROP
You plan to use containerized versions of the Anomaly Detector API on local devices for testing and in on-premises datacenters. You need to ensure that the containerized deployments meet the following requirements:
✑ Prevent billing and API information from being stored in the command-line histories of the devices that run the container.
✑ Control access to the container images by using Azure role-based access control (Azure RBAC).
Which four actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
NOTE: More than one order of answer choices is correct. You will receive credit for any of the correct orders you select.
Select and Place:
Answer:
Explanation:
Step 1: Pull the Anomaly Detector container image.
Step 2: Create a custom Dockerfile.
Step 3: Build the image.
Step 4: Push the image to an Azure container registry.

Question: 8 CertyIQ
HOTSPOT
You plan to deploy a containerized version of an Azure Cognitive Services service that will be used for text analysis. You configure https://contoso.cognitiveservices.azure.com as the endpoint URI for the service, and you pull the latest version of the Text Analytics Sentiment Analysis container. You need to run the container on an Azure virtual machine by using Docker.
How should you complete the command? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer:
Explanation:
Box 1: mcr.microsoft.com/azure-cognitive-services/textanalytics/sentiment
To run the Sentiment Analysis v3 container, execute the following docker run command:
docker run --rm -it -p 5000:5000 --memory 8g --cpus 1 \
mcr.microsoft.com/azure-cognitive-services/textanalytics/sentiment \
Eula=accept \
Billing=ENDPOINT_URI \
ApiKey=API_KEY
Box 2: https://contoso.cognitiveservices.azure.com
ENDPOINT_URI is the endpoint for accessing the Text Analytics API, in the form https://<resource-name>.cognitiveservices.azure.com.
Reference: https://docs.microsoft.com/en-us/azure/cognitive-services/text-analytics/how-tos/text-analytics-how-to-install-containers?tabs=sentiment

Question: 9 CertyIQ
You have the following C# method for creating Azure Cognitive Services resources programmatically. You need to call the method to create a free Azure resource in the West US Azure region. The resource will be used to generate captions of images automatically.
Which code should you use?
A. create_resource(client, "res1", "ComputerVision", "F0", "westus")
B. create_resource(client, "res1", "CustomVision.Prediction", "F0", "westus")
C. create_resource(client, "res1", "ComputerVision", "S0", "westus")
D. create_resource(client, "res1", "CustomVision.Prediction", "S0", "westus")
Answer: A
Explanation:
A, as there is a free tier available for the Computer Vision service.
Free tier (Web/Container): 20 transactions per minute, 5,000 free transactions per month.
Computer Vision's free tier supports generating image captions. Custom Vision does not directly support generating captions for an image; it returns some information about the image (specifically in the object detection part), and only under the very specific condition that you have pretrained the model on your own images, which is not stated in the question.

Question: 10 CertyIQ
You successfully run the following HTTP request.
POST https://management.azure.com/subscriptions/18c51a87-3a69-47a8-aedc-a54745f708a1/resourceGroups/RG1/providers/Microsoft.CognitiveServices/accounts/contoso1/regenerateKey?api-version=2017-04-18
Body: "keyName": "Key2"
What is the result of the request?
A. A key for Azure Cognitive Services was generated in Azure Key Vault.
B. A new query key was generated.
C. The primary subscription key and the secondary subscription key were rotated.
D. The secondary subscription key was reset.
Answer: D
Explanation:
Regenerates the secondary account key for the specified Cognitive Services account. https://docs.microsoft.com/en-us/rest/api/cognitiveservices/accountmanagement/accounts/regenerate-key

Question: 11 CertyIQ
You build a custom Form Recognizer model. You receive sample files to use for training the model as shown in the following table.
Which three files can you use to train the model? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.
A. File1
B. File2
C. File3
D. File4
E. File5
F. File6
Answer: ACF
Explanation:
Input requirements. Form Recognizer works on input documents that meet these requirements:
Format must be JPG, PNG, PDF (text or scanned), or TIFF. Text-embedded PDFs are best because there's no possibility of error in character extraction and location.
File size must be less than 50 MB, so File 2 and File 5 are excluded. Note that newer service limits go up to 500 MB, so check the current limits.
File 1, 3, and 6 are correct for "training the model"; however, if Microsoft removes the word "training" from the question, be careful.
Reference: https://docs.microsoft.com/en-gb/learn/modules/work-form-recognizer/3-get-started https://docs.microsoft.com/en-us/azure/applied-ai-services/form-recognizer/service-limits?tabs=v21 https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/overview

Question: 12 CertyIQ
A customer uses Azure Cognitive Search. The customer plans to enable server-side encryption and use customer-managed keys (CMK) stored in Azure.
What are three implications of the planned change? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.
A. The index size will increase.
B. Query times will increase.
C. A self-signed X.509 certificate is required.
D. The index size will decrease.
E. Query times will decrease.
F. Azure Key Vault is required.
Answer: ABF
Explanation:
A. The index size will increase. B. Query times will increase. F. Azure Key Vault is required.
https://docs.microsoft.com/en-us/azure/search/search-security-overview#customer-managed-keys-cmk
Customer-managed keys (CMK) require an additional billable service, Azure Key Vault, which can be in a different region, but under the same subscription, as Azure Cognitive Search. Enabling CMK encryption will increase index size and degrade query performance. Based on observations to date, you can expect to see an increase of 30%-60% in query times, although actual performance will vary depending on the index definition and types of queries. Because of this performance impact, we recommend that you only enable this feature on indexes that really require it.

Question: 13 CertyIQ
You are developing a new sales system that will process the video and text from a public-facing website. You plan to notify users that their data has been processed by the sales system.
Which responsible AI principle does this help meet?
A.
transparency
B. fairness
C. inclusiveness
D. reliability and safety
Answer: A
Explanation:
The correct answer is A, transparency: "When an AI application relies on personal data, such as a facial recognition system that takes images of people to recognize them, you should make it clear to the user how their data is used and retained, and who has access to it." From: https://docs.microsoft.com/en-us/learn/paths/prepare-for-ai-engineering/

Question: 14 CertyIQ
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You create a web app named app1 that runs on an Azure virtual machine named vm1. Vm1 is on an Azure virtual network named vnet1. You plan to create a new Azure Cognitive Search service named service1. You need to ensure that app1 can connect directly to service1 without routing traffic over the public internet.
Solution: You deploy service1 and a public endpoint to a new virtual network, and you configure Azure Private Link.
Does this meet the goal?
A. Yes
B. No
Answer: B
Explanation:
The answer is no; you should create a Private Link connection with a private endpoint. Azure Private Link should use a private endpoint, not a public endpoint. A Private Link service can be accessed from approved private endpoints in any public region.
Reference: https://docs.microsoft.com/en-us/azure/private-link/private-link-overview

Question: 15 CertyIQ
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals.
Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You create a web app named app1 that runs on an Azure virtual machine named vm1. Vm1 is on an Azure virtual network named vnet1. You plan to create a new Azure Cognitive Search service named service1. You need to ensure that app1 can connect directly to service1 without routing traffic over the public internet.
Solution: You deploy service1 and a public endpoint, and you configure an IP firewall rule.
Does this meet the goal?
A. Yes
B. No
Answer: B
Explanation:
The correct answer is B, No. This scenario routes traffic over the public internet. To avoid the public internet, you would use a private endpoint on a VNet and Private Link to access it. Instead, deploy service1 and a private (not public) endpoint to a new virtual network, and configure Azure Private Link.
Reference: https://docs.microsoft.com/en-us/azure/private-link/private-link-overview

Question: 16 CertyIQ
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You create a web app named app1 that runs on an Azure virtual machine named vm1. Vm1 is on an Azure virtual network named vnet1. You plan to create a new Azure Cognitive Search service named service1. You need to ensure that app1 can connect directly to service1 without routing traffic over the public internet.
Solution: You deploy service1 and a public endpoint, and you configure a network security group (NSG) for vnet1.
Does this meet the goal?
A. Yes
B. No
Answer: B
Explanation:
Instead, deploy service1 and a private (not public) endpoint to a new virtual network, and configure Azure Private Link.
Reference: https://docs.microsoft.com/en-us/azure/private-link/private-link-overview

Question: 17 CertyIQ
You plan to perform predictive maintenance. You collect IoT sensor data from 100 industrial machines for a year. Each machine has 50 different sensors that generate data at one-minute intervals. In total, you have 5,000 time series datasets. You need to identify unusual values in each time series to help predict machinery failures.
Which Azure service should you use?
A. Anomaly Detector
B. Cognitive Search
C. Form Recognizer
D. Custom Vision
Answer: A
Explanation:
A is the answer. https://learn.microsoft.com/en-us/azure/cognitive-services/anomaly-detector/overview
Anomaly Detector is an AI service with a set of APIs that enables you to monitor and detect anomalies in your time series data with little machine learning (ML) knowledge, using either batch validation or real-time inference.

Question: 18 CertyIQ
HOTSPOT
You are developing a streaming Speech to Text solution that will use the Speech SDK and MP3 encoding. You need to develop a method to convert speech to text for streaming MP3 data.
How should you complete the code? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer:
Explanation:
Box 1: GetCompressedFormat
https://docs.microsoft.com/en-us/dotnet/api/microsoft.cognitiveservices.speech.audio.audiostreamformat.getcompressedformat?view=azure-dotnet
Box 2: SpeechRecognizer
Reference: https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/how-to-use-codec-compressed-audio-input-streams?tabs=debian&pivots=programming-language-csharp

Question: 19 CertyIQ
HOTSPOT
You are developing an internet-based training solution for remote learners. Your company identifies that during the training, some learners leave their desk for long periods or become distracted. You need to use a video and audio feed from each learner's computer to detect whether the learner is present and paying attention. The solution must minimize development effort and identify each learner.
Which Azure Cognitive Services service should you use for each requirement? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer:
Explanation:
From video feed: Face
Facial expression from video feed: Face
From audio feed: Speech
https://learn.microsoft.com/en-us/azure/cognitive-services/computer-vision/overview-identity#face-detection-and-analysis
Face detection is required as a first step in all the other scenarios. The Detect API detects human faces in an image and returns the rectangle coordinates of their locations. It also returns a unique ID that represents the stored face data. This is used in later operations to identify or verify faces. Optionally, face detection can extract a set of face-related attributes, such as head pose, age, emotion, facial hair, and glasses. These attributes are general predictions, not actual classifications. Some attributes are useful to ensure that your application is getting high-quality face data when users add themselves to a Face service.
For example, your application could advise users to take off their sunglasses if they're wearing sunglasses.
Reference: https://docs.microsoft.com/en-us/azure/cognitive-services/what-are-cognitive-services

Question: 20 CertyIQ
You plan to provision a QnA Maker service in a new resource group named RG1. In RG1, you create an App Service plan named AP1.
Which two Azure resources are automatically created in RG1 when you provision the QnA Maker service? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
A. Language Understanding
B. Azure SQL Database
C. Azure Storage
D. Azure Cognitive Search
E. Azure App Service
Answer: DE
Explanation:
D and E are the correct answers.
Note: The QnA Maker service is being retired on 31 March 2025. A newer version of this capability is now available as part of Azure Cognitive Service for Language, called question answering. To use this service, you need to provision a Language resource. For the question answering capability within the Language service, see question answering and its pricing page. Beginning 1 October 2022, you won't be able to create any new QnA Maker resources. For information on migrating your existing QnA Maker knowledge bases to question answering, consult the migration guide.
Reference: https://docs.microsoft.com/en-us/azure/cognitive-services/qnamaker/how-to/set-up-qnamaker-service-azure?tabs=v1#delete-azure-resources

Question: 21 CertyIQ
You are building a language model by using a Language Understanding (classic) service. You create a new Language Understanding (classic) resource. You need to add more contributors.
What should you use?
A. a conditional access policy in Azure Active Directory (Azure AD)
B. the Access control (IAM) page for the authoring resources in the Azure portal
C. the Access control (IAM) page for the prediction resources in the Azure portal
Answer: B
Explanation:
In the Azure portal, find your Language Understanding (LUIS) authoring resource.
It has the type LUIS.Authoring. In the resource's Access control (IAM) page, add the Contributor role for the user that you want to contribute. For detailed steps, see "Assign Azure roles using the Azure portal."
Reference: https://docs.microsoft.com/en-us/azure/cognitive-services/luis/luis-how-to-collaborate

Question: 22 CertyIQ
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have an Azure Cognitive Search service. During the past 12 months, query volume steadily increased. You discover that some search query requests to the Cognitive Search service are being throttled. You need to reduce the likelihood that search query requests are throttled.
Solution: You migrate to a Cognitive Search service that uses a higher tier.
Does this meet the goal?
A. Yes
B. No
Answer: A
Explanation:
Migrating to a higher tier in Azure Cognitive Search can provide more resources, such as increased storage, throughput, and replicas, which can help reduce the likelihood of search query requests being throttled. A simple fix to most throttling issues is to throw more resources at the search service (typically replicas for query-based throttling, or partitions for indexing-based throttling). However, increasing replicas or partitions adds cost, which is why it is important to know the reason why throttling is occurring at all.
Reference: https://docs.microsoft.com/en-us/azure/search/search-performance-analysis

Question: 23 CertyIQ
DRAG DROP
You need to develop an automated call handling system that can respond to callers in their own language. The system will support only French and English.
Which Azure Cognitive Services service should you use to meet each requirement? To answer, drag the appropriate services to the correct requirements. Each service may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.
Select and Place:
Answer:
Explanation:
1. Speech to Text with AutoDetectSourceLanguageConfig. It can't be Text Analytics because the input is the callers' voice.
2. Text to Speech: the output is voice.
Speech to Text: https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/speech-to-text
Text to Speech: https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/text-to-speech
Both support common languages, including French. https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/language-support?tabs=speechtotext

Question: 24 CertyIQ
You have receipts that are accessible from a URL. You need to extract data from the receipts by using Form Recognizer and the SDK. The solution must use a prebuilt model.
Which client and method should you use?
A. the FormRecognizerClient client and the StartRecognizeContentFromUri method
B. the FormTrainingClient client and the StartRecognizeContentFromUri method
C. the FormRecognizerClient client and the StartRecognizeReceiptsFromUri method
D. the FormTrainingClient client and the StartRecognizeReceiptsFromUri method
Answer: C
Explanation:
C is the answer. https://learn.microsoft.com/en-us/dotnet/api/azure.ai.formrecognizer.formrecognizerclient?view=azure-dotnet
The client to use to connect to the Form Recognizer Azure Cognitive Service to recognize information from forms and images and extract it into structured data. It provides the ability to analyze receipts, business cards, and invoices, to recognize form content, and to extract fields from custom forms with models trained on custom form types.
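As a rough illustration of what StartRecognizeReceiptsFromUri sends on the wire, the sketch below assembles the corresponding REST request. The resource endpoint, key, and receipt URL are placeholders, and the v2.1 prebuilt-receipt route is an assumption based on the Form Recognizer v2.1 REST API; no network call is made here.

```python
import json

# Placeholders; not real credentials or resources.
endpoint = "https://contoso.cognitiveservices.azure.com"
api_key = "<API_KEY>"
receipt_url = "https://example.com/receipts/receipt-001.png"  # hypothetical receipt image

# Assumed v2.1 prebuilt-receipt analyze route behind StartRecognizeReceiptsFromUri.
analyze_url = f"{endpoint}/formrecognizer/v2.1/prebuilt/receipt/analyze"

headers = {
    "Ocp-Apim-Subscription-Key": api_key,  # the resource's subscription key
    "Content-Type": "application/json",
}
# The request body points the service at the publicly accessible receipt URL.
body = json.dumps({"source": receipt_url})

print(analyze_url)
```

The SDK method wraps this POST plus polling of the returned operation-location; the sketch only shows the request shape.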
Question: 25 CertyIQ
You have a collection of 50,000 scanned documents that contain text. You plan to make the text available through Azure Cognitive Search. You need to configure an enrichment pipeline to perform optical character recognition (OCR) and text analytics. The solution must minimize costs.
What should you attach to the skillset?
A. a new Computer Vision resource
B. a free (Limited enrichments) Cognitive Services resource
C. an Azure Machine Learning Designer pipeline
D. a new Cognitive Services resource that uses the S0 pricing tier
Answer: D
Explanation:
D is the answer. https://learn.microsoft.com/en-us/azure/search/cognitive-search-attach-cognitive-services?tabs=portal
When configuring an optional AI enrichment pipeline in Azure Cognitive Search, you can enrich a limited number of documents free of charge. For larger and more frequent workloads, you should attach a billable multi-service Cognitive Services resource. A multi-service resource references "Cognitive Services" as the offering, rather than individual services, with access granted through a single API key. This key is specified in a skillset and allows Microsoft to charge you for using these APIs:
- Computer Vision for image analysis and optical character recognition (OCR)
- Language service for language detection, entity recognition, sentiment analysis, and key phrase extraction
- Translator for machine text translation

Question: 26 CertyIQ
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have an Azure Cognitive Search service. During the past 12 months, query volume steadily increased.
You discover that some search query requests to the Cognitive Search service are being throttled. You need to reduce the likelihood that search query requests are throttled.
Solution: You add indexes.
Does this meet the goal?
A. Yes
B. No
Answer: B
Explanation:
Instead, you could migrate to a Cognitive Search service that uses a higher tier.
Note: A simple fix to most throttling issues is to throw more resources at the search service (typically replicas for query-based throttling, or partitions for indexing-based throttling). However, increasing replicas or partitions adds cost, which is why it is important to know the reason why throttling is occurring at all.
Reference: https://docs.microsoft.com/en-us/azure/search/search-performance-analysis

Question: 27 CertyIQ
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have an Azure Cognitive Search service. During the past 12 months, query volume steadily increased. You discover that some search query requests to the Cognitive Search service are being throttled. You need to reduce the likelihood that search query requests are throttled.
Solution: You enable customer-managed key (CMK) encryption.
Does this meet the goal?
A. Yes
B. No
Answer: B
Explanation:
Customer-managed key (CMK) encryption does not affect throttling. Instead, you could migrate to a Cognitive Search service that uses a higher tier.
Note: A simple fix to most throttling issues is to throw more resources at the search service (typically replicas for query-based throttling, or partitions for indexing-based throttling).
However, increasing replicas or partitions adds cost, which is why it is important to know the reason why throttling is occurring at all. Reference: https://docs.microsoft.com/en-us/azure/search/search-performance-analysis Question: 28 CertyIQ Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You create a web app named app1 that runs on an Azure virtual machine named vm1. Vm1 is on an Azure virtual network named vnet1. You plan to create a new Azure Cognitive Search service named service1. You need to ensure that app1 can connect directly to service1 without routing traffic over the public internet. Solution: You deploy service1 and a private endpoint to vnet1. Does this meet the goal? A. Yes B. No Answer: A Explanation: A private endpoint is a network interface that uses a private IP address from your virtual network. This network interface connects you privately and securely to a service powered by Azure Private Link. By enabling a private endpoint, you're bringing the service into your virtual network. The service could be an Azure service such as: ✑ Azure Storage ✑ Azure Cosmos DB ✑ Azure SQL Database ✑ Your own service using a Private Link Service. Reference: https://docs.microsoft.com/en-us/azure/private-link/private-endpoint-overview Question: 29 You have a Language Understanding resource named lu1. You build and deploy an Azure bot named bot1 that uses lu1. You need to ensure that bot1 adheres to the Microsoft responsible AI principle of inclusiveness. How should you extend bot1? CertyIQ A. Implement authentication for bot1. B. Enable active learning for lu1. C. 
Host lu1 in a container. D. Add Direct Line Speech to bot1.
Answer: D
Explanation: Inclusiveness: AI systems should empower everyone and engage people. Direct Line Speech is a robust, end-to-end solution for creating a flexible, extensible voice assistant. It is powered by the Bot Framework and its Direct Line Speech channel, which is optimized for voice-in, voice-out interaction with bots. Incorrect: Not B: The Active learning suggestions feature allows you to improve the quality of your knowledge base by suggesting alternative questions, based on user submissions, to your question and answer pairs. You review those suggestions, either adding them to existing questions or rejecting them.
Reference: https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/direct-line-speech
Question: 30 CertyIQ HOTSPOT You are building an app that will process incoming email and direct messages to either French or English language support teams. Which Azure Cognitive Services API should you use? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area:
Answer:
Explanation: Box 1: https://eastus.api.cognitive.microsoft.com Box 2: /text/analytics/v3.1/languages
Reference: https://learn.microsoft.com/en-us/rest/api/cognitiveservices-textanalytics/3.0/languages/languages?tabs=HTTP
Question: 31 CertyIQ You have an Azure Cognitive Search instance that indexes purchase orders by using Form Recognizer. You need to analyze the extracted information by using Microsoft Power BI. The solution must minimize development effort. What should you add to the indexer? A. a projection group B. a table projection C. a file projection D. an object projection
Answer: B
Explanation: To analyze the extracted information from the Azure Cognitive Search index with Microsoft Power BI, you should add a table projection to the indexer. 
This will allow you to present the data in a tabular format that can be easily imported and analyzed by Power BI with minimal development effort. So, the correct answer is: B. a table projection
Question: 32 CertyIQ Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You have an Azure Cognitive Search service. During the past 12 months, query volume steadily increased. You discover that some search query requests to the Cognitive Search service are being throttled. You need to reduce the likelihood that search query requests are throttled. Solution: You add replicas. Does this meet the goal? A. Yes B. No
Answer: A
Explanation: A is the answer. Quote "In Cognitive Search, replicas are copies of your index." at https://learn.microsoft.com/en-us/azure/search/search-reliability https://learn.microsoft.com/en-us/azure/search/search-performance-analysis#throttling-behaviors Throttling occurs when the search service is at capacity. Throttling can occur during queries or indexing. From the client side, an API call results in a 503 HTTP response when it has been throttled. During indexing, there's also the possibility of receiving a 207 HTTP response, which indicates that one or more items failed to index. This error is an indicator that the search service is getting close to capacity. A simple fix to most throttling issues is to throw more resources at the search service (typically replicas for query-based throttling, or partitions for indexing-based throttling). However, increasing replicas or partitions adds cost, which is why it's important to know the reason why throttling is occurring at all. 
Investigating the conditions that cause throttling will be explained in the next several sections.
Reference: https://docs.microsoft.com/en-us/azure/search/search-performance-analysis
Question: 33 CertyIQ SIMULATION You need to create a Text Analytics service named Text12345678, and then enable logging for Text12345678. The solution must ensure that any changes to Text12345678 will be stored in a Log Analytics workspace. To complete this task, sign in to the Azure portal.
Answer: See explanation below.
Explanation: Step 1: Sign in to the Azure portal. Step 2: Create an Azure Cognitive multi-service resource: Step 3: On the Create page, provide the following information. Name: Text12345678 - Step 4: Configure additional settings for your resource as needed, read and accept the conditions (as applicable), and then select Review + create. Step 5: In the Azure portal, locate and select the Text Analytics service resource Text12345678 (which you created in Step 4). Step 6: Next, from the left-hand navigation menu, locate Monitoring and select Diagnostic settings. This screen contains all previously created diagnostic settings for this resource. Step 7: Select + Add diagnostic setting. Step 8: When prompted to configure, select the storage account and Log Analytics workspace that you'd like to use to store your diagnostic logs. Note: If you don't have a storage account or Log Analytics workspace, follow the prompts to create one. Step 9: Select Audit, RequestResponse, and AllMetrics. Then set the retention period for your diagnostic log data. If a retention policy is set to zero, events for that log category are stored indefinitely. Step 10: Click Save. It can take up to two hours before logging data is available to query and analyze. So don't worry if you don't see anything right away. 
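The diagnostic-setting steps above can also be expressed as the request body sent to the Azure Monitor diagnostic settings API. The sketch below is a minimal, hedged example that only builds that body locally; the workspace resource ID is a hypothetical placeholder, and no call to Azure is made.

```python
import json

# Hypothetical Log Analytics workspace resource ID (placeholder only).
WORKSPACE_ID = (
    "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/"
    "Microsoft.OperationalInsights/workspaces/<workspace>"
)

def build_diagnostic_setting(workspace_id: str, retention_days: int = 0) -> dict:
    """Build a diagnostic-setting body that sends the Audit and
    RequestResponse log categories plus AllMetrics to a Log Analytics
    workspace. retention_days=0 keeps events indefinitely, per the text."""
    retention = {"enabled": retention_days > 0, "days": retention_days}
    return {
        "properties": {
            "workspaceId": workspace_id,
            "logs": [
                {"category": "Audit", "enabled": True, "retentionPolicy": retention},
                {"category": "RequestResponse", "enabled": True, "retentionPolicy": retention},
            ],
            "metrics": [
                {"category": "AllMetrics", "enabled": True, "retentionPolicy": retention},
            ],
        }
    }

body = build_diagnostic_setting(WORKSPACE_ID)
print(json.dumps(body, indent=2))
```

In a real deployment this body would be PUT to the resource's diagnostic settings endpoint (or applied through the portal as in the steps above).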
Reference: https://docs.microsoft.com/en-us/azure/cognitive-services/cognitive-services-apis-create-account https://docs.microsoft.com/en-us/azure/cognitive-services/diagnostic-logging
Question: 34 CertyIQ SIMULATION You need to create a search service named search12345678 that will index a sample Azure Cosmos DB database named hotels-sample. The solution must ensure that only English language fields are retrievable. To complete this task, sign in to the Azure portal.
Answer: See explanation below.
Explanation: Part 1: Create a search service search12345678 Step 1: Sign in to the Azure portal. Step 2: Create an Azure Cognitive Search resource: Step 3: On the Create page, provide the following information. Name: search12345678 - Step 4: Click Review + create Part 2: Start the Import data wizard and create a data source Step 5: Click Import data on the command bar to create and populate a search index. Step 6: In the wizard, click Connect to your data > Samples > hotels-sample. This data source is built-in. If you were creating your own data source, you would need to specify a name, type, and connection information. Once created, it becomes an "existing data source" that can be reused in other import operations. Step 7: Continue to the next page. Step 8: Skip the "Enrich content" page Step 9: Configure the index. Make sure the Retrievable attribute is selected only for the English language fields. Step 10: Continue and finish the wizard.
Reference: https://docs.microsoft.com/en-us/azure/cognitive-services/cognitive-services-apis-create-account https://docs.microsoft.com/en-us/azure/search/search-get-started-portal
Question: 35 SIMULATION You plan to create a solution to generate captions for images that will be read from Azure Blob Storage. You need to create a service in Azure Cognitive Services for the solution. The service must be named captions12345678 and must use the Free pricing tier. To complete this task, sign in to the Azure portal.
Answer: See explanation below. 
Explanation: Part 1: Create a Cognitive Services resource named captions12345678 Step 1: Sign in to the Azure portal. Step 2: Create an Azure Cognitive multi-service resource: Step 3: On the Create page, provide the following information. Name: captions12345678 Pricing tier: Free - Step 4: Click Review + create (Step 5: Create a data source In Connect to your data, choose Azure Blob Storage. Choose an existing connection to the storage account and container you created. Give the data source a name, and use default values for the rest.)
Reference: https://docs.microsoft.com/en-us/azure/search/search-create-service-portal https://docs.microsoft.com/en-us/azure/search/cognitive-search-quickstart-ocr
Question: 36 CertyIQ SIMULATION You need to create a Form Recognizer resource named fr12345678. Use the Form Recognizer sample labeling tool at https://fott-2-1.azurewebsites.net/ to analyze the invoice located in the C:\Resources\Invoices folder. Save the results as C:\Resources\Invoices\Results.json. To complete this task, sign in to the Azure portal and open the Form Recognizer sample labeling tool.
Answer: See explanation below.
Explanation: Step 1: Sign in to the Azure Portal. Step 2: Navigate to the Form Recognizer Sample Tool (at https://fott-2-1.azurewebsites.net) Step 3: On the sample tool home page select Use prebuilt model to get data. Step 4: Select the Form Type you would like to analyze from the dropdown window. Step 5: In the Source: URL field, paste the selected URL and select the Fetch button. Step 6: In Choose file for analysis, use the file in the C:\Resources\Invoices folder and select the Fetch button. Step 7: Select Run analysis. The Form Recognizer Sample Labeling tool will call the Analyze Prebuilt API and analyze the document. Step 8: View the results - see the key-value pairs extracted, line items, highlighted text extracted and tables detected. Step 9: Save the results as C:\Resources\Invoices\Results.json. 
----------------------------------------------------------------
1. Create a Form Recognizer service as part of Azure AI services
2. Browse to https://fott-2-1.azurewebsites.net/
3. Select the prebuilt model for invoices
4. Choose local file, because the file is on the local disk C:, and insert the path
5. Go back to the Azure portal and copy the endpoint and key from the relevant page of the Form Recognizer service
6. Go back to https://fott-2-1.azurewebsites.net/prebuilts-analyze
7. Paste the endpoint and key
8. Run the analysis
9. Download the results
10. Choose the JSON format and the destination indicated
----------------------------------------------------------------
Reference: https://docs.microsoft.com/en-us/azure/applied-ai-services/form-recognizer/quickstarts/try-sample-label-tool
Question: 37 CertyIQ You have a factory that produces food products. You need to build a monitoring solution for staff compliance with personal protective equipment (PPE) requirements. The solution must meet the following requirements: * Identify staff who have removed masks or safety glasses. * Perform a compliance check every 15 minutes. * Minimize development effort. * Minimize costs. Which service should you use? A. Face B. Computer Vision C. Azure Video Analyzer for Media (formerly Video Indexer)
Answer: A
Explanation: Face API is an AI service that analyzes faces in images. Embed facial recognition into your apps for a seamless and highly secured user experience. No machine-learning expertise is required. Features include face detection that perceives facial features and attributes, such as a face mask, glasses, or face location, in an image, and identification of a person by a match to your private repository or via photo ID.
Reference: https://azure.microsoft.com/en-us/services/cognitive-services/face/
Question: 38 CertyIQ You have an Azure Cognitive Search solution and a collection of blog posts that include a category field. You need to index the posts. 
The solution must meet the following requirements: * Include the category field in the search results. * Ensure that users can search for words in the category field. * Ensure that users can perform drill down filtering based on category. Which index attributes should you configure for the category field? A. searchable, sortable, and retrievable B. searchable, facetable, and retrievable C. retrievable, filterable, and sortable D. retrievable, facetable, and key
Answer: B
Explanation: B Retrievable: Include the category field in the search results. Searchable: Ensure that users can search for words in the category field. Facetable: Ensure that users can perform drill down filtering based on category.
Reference: https://learn.microsoft.com/en-us/rest/api/searchservice/create-index#-field-definitions - retrievable Indicates whether the field can be returned in a search result. - searchable Indicates whether the field is full-text searchable and can be referenced in search queries. - facetable Indicates whether to enable the field to be referenced in facet queries.
Question: 39 CertyIQ SIMULATION Use the following login credentials as needed: To enter your username, place your cursor in the Sign in box and click on the username below. To enter your password, place your cursor in the Enter password box and click on the password below. Azure Username: [email protected] Azure Password: XXXXXXXXXXXX The following information is for technical support purposes only: Lab Instance: 12345678 Task You plan to build an API that will identify whether an image includes a Microsoft Surface Pro or Surface Studio. You need to deploy a service in Azure Cognitive Services for the API. The service must be named AAA12345678 and must be in the East US Azure region. The solution must use the Free pricing tier. To complete this task, sign in to the Azure portal.
Answer: See explanation below. 
Explanation: [email protected] = [email protected] Step 1: In the Azure dashboard, click Create a resource. Step 2: In the search bar, type "Cognitive Services." You'll get information about the cognitive services resource and a legal notice. Click Create. Step 3: You'll need to specify the following details about the cognitive service (refer to the image below for a completed example of this page): Subscription: choose your paid or trial subscription, depending on how you created your Azure account. Resource group: click create new to create a new resource group or choose an existing one. Region: choose the Azure region for your cognitive service. Choose: East US Azure region. Name: choose a name for your cognitive service. Enter: AAA12345678 Pricing Tier: Select: Free pricing tier Step 4: Review and create the resource, and wait for deployment to complete. Then go to the deployed resource. Note: The Computer Vision Image Analysis service can extract a wide variety of visual features from your images. For example, it can determine whether an image contains adult content, find specific brands or objects, or find human faces. Tag visual features Identify and tag visual features in an image, from a set of thousands of recognizable objects, living things, scenery, and actions. When the tags are ambiguous or not common knowledge, the API response provides hints to clarify the context of the tag. Tagging isn't limited to the main subject, such as a person in the foreground, but also includes the setting (indoor or outdoor), furniture, tools, plants, animals, accessories, gadgets, and so on. Try out the image tagging features quickly and easily in your browser using Vision Studio. 
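As a companion to the tagging note above, the sketch below shows how an Image Analysis call could be composed against the deployed resource. This is a hedged example: the endpoint host, subscription key, and image URL are hypothetical placeholders, and the code only assembles the request rather than sending it.

```python
# Compose an Image Analysis v3.2 request that asks for tags and a
# description (caption). Nothing is sent over the network here.
from urllib.parse import urlencode

ENDPOINT = "https://<resource-name>.cognitiveservices.azure.com"  # placeholder

def build_analyze_request(endpoint: str, features=("Tags", "Description")) -> dict:
    """Return the URL, headers, and JSON body for an image-analysis call
    that tags recognizable objects (e.g. to distinguish product photos)
    and generates a caption."""
    query = urlencode({"visualFeatures": ",".join(features)})
    return {
        "url": f"{endpoint}/vision/v3.2/analyze?{query}",
        "headers": {
            "Ocp-Apim-Subscription-Key": "<key-1>",  # placeholder key
            "Content-Type": "application/json",
        },
        "body": {"url": "https://example.com/sample.jpg"},  # image to analyze
    }

req = build_analyze_request(ENDPOINT)
print(req["url"])
```

Sending this request (with a real key and endpoint) returns JSON containing the detected tags and a generated caption for the image.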
Reference: https://docs.microsoft.com/en-us/learn/modules/analyze-images-computer-vision/3-analyze-images https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/overview-image-analysis Question: 40 CertyIQ SIMULATION Use the following login credentials as needed: To enter your username, place your cursor in the Sign in box and click on the username below. To enter your password, place your cursor in the Enter password box and click on the password below. Azure Username: [email protected] Azure Password: XXXXXXXXXXXX The following information is for technical support purposes only: Lab Instance: 12345678 Task You need to build an API that uses the service in Azure Cognitive Services named AAA12345678 to identify whether an image includes a Microsoft Surface Pro or Surface Studio. To achieve this goal, you must use the sample images in the C:\Resources\Images folder. To complete this task, sign in to the Azure portal. Answer: See explanation below. Explanation: [email protected] = [email protected] Step 1: In the Azure dashboard, click Create a resource. Step 2: In the search bar, type "Cognitive Services." You'll get information about the cognitive services resource and a legal notice. Click Create. Step 3: You'll need to specify the following details about the cognitive service (refer to the image below for a completed example of this page): Subscription: choose your paid or trial subscription, depending on how you created your Azure account. Resource group: click create new to create a new resource group or choose an existing one. Region: choose the Azure region for your cognitive service. Choose: East US Azure region. Name: choose a name for your cognitive service. Enter: AAA12345678 Pricing Tier: Select: Free pricing tier Step 4: Review and create the resource, and wait for deployment to complete. Then go to the deployed resource. Note: The Computer Vision Image Analysis service can extract a wide variety of visual features from your images. 
For example, it can determine whether an image contains adult content, find specific brands or objects, or find human faces. Tag visual features Identify and tag visual features in an image, from a set of thousands of recognizable objects, living things, scenery, and actions. When the tags are ambiguous or not common knowledge, the API response provides hints to clarify the context of the tag. Tagging isn't limited to the main subject, such as a person in the foreground, but also includes the setting (indoor or outdoor), furniture, tools, plants, animals, accessories, gadgets, and so on. Try out the image tagging features quickly and easily in your browser using Vision Studio. Reference: https://docs.microsoft.com/en-us/learn/modules/analyze-images-computer-vision/3-analyze-images https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/overview-image-analysis Question: 41 CertyIQ SIMULATION Use the following login credentials as needed: To enter your username, place your cursor in the Sign in box and click on the username below. To enter your password, place your cursor in the Enter password box and click on the password below. Azure Username: [email protected] Azure Password: XXXXXXXXXXXX The following information is for technical support purposes only: Lab Instance: 12345678 Task You need to get insights from a video file located in the C:\Resources\Video\Media.mp4 folder. Save the insights to the C:\Resources\Video\Insights.json folder. To complete this task, sign in to the Azure Video Analyzer for Media at https://www.videoindexer.ai/ by using [email protected] Answer: See explanation below. Explanation: [email protected] = [email protected] Step 1: Login Browse to the Azure Video Indexer website and sign in. URL: https://www.videoindexer.ai/ Login [email protected] Step 2: Create a project from your video You can create a new project directly from a video in your account. 1. Go to the Library tab of the Azure Video Indexer website. 2. 
Open the video that you want to use to create your project. On the insights and timeline page, select the Video editor button. Folder: C:\Resources\Video\Media.mp4 This takes you to the same page that you used to create a new project. Unlike the new project, you see the timestamped insights segments of the video that you had started editing previously. Step 3: Save the insights as C:\Resources\Video\Insights.json.
Reference: https://docs.microsoft.com/en-us/azure/azure-video-indexer/use-editor-create-project
Question: 42 CertyIQ SIMULATION Use the following login credentials as needed: To enter your username, place your cursor in the Sign in box and click on the username below. To enter your password, place your cursor in the Enter password box and click on the password below. Azure Username: [email protected] Azure Password: XXXXXXXXXXXX The following information is for technical support purposes only: Lab Instance: 12345678 Task You plan to analyze stock photography and automatically generate captions for the images. You need to create a service in Azure to analyze the images. The service must be named caption12345678 and must be in the East US Azure region. The solution must use the Free pricing tier. In the C:\Resources\Caption\Params.json folder, enter the value for Key 1 and the endpoint for the new service. To complete this task, sign in to the Azure portal.
Answer: See explanation below.
Explanation: [email protected] = [email protected] Step 1: Provision a Cognitive Services resource If you don't already have one in your subscription, you'll need to provision a Cognitive Services resource. 1. Open the Azure portal at https://portal.azure.com, and sign in using the Microsoft account associated with your Azure subscription. 2. 
Select the Create a resource button, search for cognitive services, and create a Cognitive Services resource with the following settings: Subscription: Your Azure subscription Resource group: Choose or create a resource group (if you are using a restricted subscription, you may not have permission to create a new resource group - use the one provided) Region: East US Azure region Name: caption12345678 Pricing tier: Free F0 3. Select the required checkboxes and create the resource. Wait for deployment to complete, and then view the deployment details. 4. When the resource has been deployed, go to it and view its Keys and Endpoint page. You will need the endpoint and one of the keys from this page in the next procedure. Step 2: Save Key and Endpoint values in Params.json Open the configuration file, C:\Resources\Caption\Params.json. and update the configuration values it contains to reflect the endpoint and an authentication key for your cognitive services resource. Save your changes. Reference: https://microsoftlearning.github.io/AI-102-AIEngineer/Instructions/15-computer-vision.html Question: 43 CertyIQ SIMULATION Use the following login credentials as needed: To enter your username, place your cursor in the Sign in box and click on the username below. To enter your password, place your cursor in the Enter password box and click on the password below. Azure Username: [email protected] Azure Password: XXXXXXXXXXXX The following information is for technical support purposes only: Lab Instance: 12345678 Task You plan to build an application that will use caption12345678. The application will be deployed to a virtual network named VNet1. You need to ensure that only virtual machines on VNet1 can access caption12345678. To complete this task, sign in to the Azure portal. Answer: See explanation below. Explanation: [email protected] = [email protected] Step 1: Create private endpoint for your web app 1. 
In the left-hand menu, select All Resources > caption12345678 - the name of your web app. 2. In the web app overview, select Settings > Networking. 3. In Networking, select Private endpoints. 4. Select + Add in the Private Endpoint connections page. 5. Enter or select the following information in the Add Private Endpoint page: Name: Enter caption12345678. Subscription Select your Azure subscription. Virtual network Select VNet1. Subnet: Integrate with private DNS zone: Select Yes. 6. Select OK. Reference: https://docs.microsoft.com/en-us/azure/private-link/tutorial-private-endpoint-webapp-portal Question: 44 SIMULATION Use the following login credentials as needed: To enter your username, place your cursor in the Sign in box and click on the username below. To enter your password, place your cursor in the Enter password box and click on the password below. CertyIQ Azure Username: [email protected] Azure Password: XXXXXXXXXXXX The following information is for technical support purposes only: Lab Instance: 12345678 Task You need to ensure that a user named [email protected] can regenerate the subscription keys of AAA12345678. The solution must use the principle of least privilege. To complete this task, sign in to the Azure portal. Answer: See explanation below. Explanation: [email protected] = [email protected] Cognitive Services Contributor Lets you create, read, update, delete and manage keys of Cognitive Services. 1. Sign in to the Azure portal (https://portal.azure.com/) using your account credentials. 2. In the left-hand navigation menu, click on "All services" and search for "Subscriptions." Click on the "Subscriptions" service to open the list of your Azure subscriptions. 3. Find the subscription with the ID "AAA12345678" and click on it to open the subscription details page. 4. In the left-hand navigation menu of the subscription details page, click on "Access control (IAM)." 5. Click on the "+ Add" button to add a new role assignment. 
This will open the "Add role assignment" pane. 6. In the "Role" dropdown menu, search for and select the "Cognitive Services Contributor" role. This role lets the user read, update, and regenerate the keys of the Cognitive Services resource while adhering to the principle of least privilege. 7. In the "Select" field, type "[email protected]" and select the user from the list of suggestions. 8. Click on the "Save" button to complete the role assignment process.
Question: 45 CertyIQ You have an Azure IoT hub that receives sensor data from machinery. You need to build an app that will perform the following actions: Perform anomaly detection across multiple correlated sensors. Identify the root cause of process stops. Send incident alerts. The solution must minimize development time. Which Azure service should you use? A. Azure Metrics Advisor B. Form Recognizer C. Azure Machine Learning D. Anomaly Detector
Answer: A
Explanation: A. Azure Metrics Advisor Azure Metrics Advisor is a service that provides an end-to-end anomaly detection platform, which includes data ingestion, anomaly detection, root cause analysis, and alerting. It is designed to monitor and detect anomalies in time-series data, diagnose incidents, and provide insights.
Question: 46 CertyIQ You have an app that analyzes images by using the Computer Vision API. You need to configure the app to provide an output for users who are vision impaired. The solution must provide the output in complete sentences. Which API call should you perform? A. readInStreamAsync B. analyzeImagesByDomainInStreamAsync C. tagImageInStreamAsync D. describeImageInStreamAsync
Answer: D
Explanation: The API call you should perform to provide an output in complete sentences for users who are vision impaired is describeImageInStreamAsync. The describe feature of the Computer Vision API generates a human-readable sentence to describe the contents of an image. 
This is particularly useful for accessibility purposes, as it allows visually impaired users to understand what is in an image without needing to see it. The describe feature can also be customized to provide additional details or context, if desired. Therefore, the correct answer is D. describeImageInStreamAsync.
Question: 47 CertyIQ DRAG DROP You have a Custom Vision service project that performs object detection. The project uses the General domain for classification and contains a trained model. You need to export the model for use on a network that is disconnected from the internet. Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Answer:
Explanation: 1. Change Domains to General (compact) 2. Retrain model 3. Export model https://learn.microsoft.com/en-us/azure/cognitive-services/custom-vision-service/export-your-model - In the Domains section, select one of the compact domains. Select Save Changes to save the changes. - From the top of the page, select Train to retrain using the new domain. - Go to the Performance tab and select Export. The model must be retrained after changing the domain to a compact domain.
Question: 48 CertyIQ You are building an AI solution that will use Sentiment Analysis results from surveys to calculate bonuses for customer service staff. You need to ensure that the solution meets the Microsoft responsible AI principles. What should you do? A. Add a human review and approval step before making decisions that affect the staff's financial situation. B. Include the Sentiment Analysis results when surveys return a low confidence score. C. Use all the surveys, including surveys by customers who requested that their account be deleted and their data be removed. 
D. Publish the raw survey data to a central location and provide the staff with access to the location.
Answer: A
Explanation: To ensure that the AI solution meets the Microsoft responsible AI principles, you should: A. Add a human review and approval step before making decisions that affect the staff's financial situation. This option aligns with the responsible AI principle of fairness and accountability. By adding a human review and approval step, you ensure that the decisions affecting staff bonuses are reviewed by humans who can consider factors beyond just the sentiment analysis results. It adds an element of transparency, accountability, and fairness to the process, reducing the risk of biased or unfair decisions.
Question: 49 CertyIQ You have an Azure subscription that contains a Language service resource named ta1 and a virtual network named vnet1. You need to ensure that only resources in vnet1 can access ta1. What should you configure? A. a network security group (NSG) for vnet1 B. Azure Firewall for vnet1 C. the virtual network settings for ta1 D. a Language service container for ta1
Answer: C
Explanation: C is the answer. https://learn.microsoft.com/en-us/azure/cognitive-services/cognitive-services-virtual-networks?tabs=portal#grant-access-from-a-virtual-network You can configure Cognitive Services resources to allow access only from specific subnets. The allowed subnets may belong to a VNet in the same subscription, or in a different subscription, including subscriptions belonging to a different Azure Active Directory tenant.
Question: 50 CertyIQ You are developing a monitoring system that will analyze engine sensor data, such as rotation speed, angle, temperature, and pressure. The system must generate an alert in response to atypical values. What should you include in the solution? 
A.Application Insights in Azure Monitor B.metric alerts in Azure Monitor C.Multivariate Anomaly Detection D.Univariate Anomaly Detection Answer: C Explanation: The Multivariate Anomaly Detection APIs further enable developers by easily integrating advanced AI for detecting anomalies from groups of metrics, without the need for machine learning knowledge or labeled data. https://learn.microsoft.com/en-us/azure/cognitive-services/anomaly-detector/overview#multivariate-anomaly-detection Question: 51 CertyIQ You have an app named App1 that uses an Azure Cognitive Services model to identify anomalies in a time series data stream. You need to run App1 in a location that has limited connectivity. The solution must minimize costs. What should you use to host the model? A.Azure Kubernetes Service (AKS) B.Azure Container Instances C.a Kubernetes cluster hosted in an Azure Stack Hub integrated system D.the Docker Engine Answer: D Explanation: We need not only to minimize costs but also to minimize network usage, because the app runs in a location with limited connectivity. It is therefore preferable to host the model in the same location and network as the app by containerizing it and running it locally on the Docker Engine. Azure Container Instances is still hosted in Azure, and you would need reasonable internet connectivity for that solution to work. Instead, run the container on-premises by using Docker and specify the API key and endpoint when launching the container, for billing purposes. https://learn.microsoft.com/en-us/azure/cognitive-services/cognitive-services-container-support Question: 52 HOTSPOT You have an Azure Cognitive Search resource named Search1 that is used by multiple apps. You need to secure Search1. The solution must meet the following requirements: Prevent access to Search1 from the internet. Limit the access of each app to specific queries. What should you do? 
To answer, select the appropriate options in the answer area. CertyIQ NOTE: Each correct selection is worth one point. Answer: Explanation: 1. Create a private endpoint 2. Use Azure roles https://learn.microsoft.com/en-us/azure/search/service-create-private-endpoint#why-use-a-private-endpoint-for-secure-access Private Endpoints for Azure Cognitive Search allow a client on a virtual network to securely access data in a search index over a Private Link. The private endpoint uses an IP address from the virtual network address space for your search service. Network traffic between the client and the search service traverses over the virtual network and a private link on the Microsoft backbone network, eliminating exposure from the public internet. https://learn.microsoft.com/en-us/azure/search/search-security-rbac?tabs=config-svc-portal%2Croles-portal%2Ctest-portal%2Ccustom-role-portal%2Cdisable-keys-portal#grant-access-to-a-single-index In some scenarios, you may want to limit an application's access to a single resource, such as an index. The portal doesn't currently support role assignments at this level of granularity, but it can be done with PowerShell or the Azure CLI. Question: 53 CertyIQ You are building a solution that will detect anomalies in sensor data from the previous 24 hours. You need to ensure that the solution scans the entire dataset, at the same time, for anomalies. Which type of detection should you use? A.batch B.streaming C.change points Answer: A Explanation: A is correct. https://learn.microsoft.com/en-us/azure/cognitive-services/anomaly-detector/overview#univariate-anomaly-detection Batch detection: Use your time series to detect any anomalies that might exist throughout your data. This operation generates a model using your entire time series data, with each point analyzed with the same model. 
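The batch-versus-streaming distinction can be sketched offline in Python: the snippet below only assembles (never sends) the JSON body that a batch detection call would post, using made-up hourly sensor readings. The route names mentioned in the comments follow the public Anomaly Detector REST docs; the data values are illustrative.

```python
# Offline sketch: assemble the request body for batch ("entire") detection.
# The whole 24-hour series goes in one payload, so every point is scored by
# a model built from the complete dataset.
series = [
    {"timestamp": f"2024-01-01T{hour:02d}:00:00Z", "value": 20.0 + hour % 3}
    for hour in range(24)
]
body = {"series": series, "granularity": "hourly"}

# Batch mode posts to .../anomalydetector/v1.0/timeseries/entire/detect,
# whereas streaming (latest-point) mode uses .../timeseries/last/detect.
print(len(body["series"]))  # all 24 points are analyzed together
```

Because the full series travels in a single payload, the service can consider the entire 24-hour window when scoring each point, which is exactly what the question asks for.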
https://learn.microsoft.com/en-us/azure/cognitive-services/anomaly-detector/overview Question: 54 CertyIQ DRAG DROP You are building an app that will scan confidential documents and use the Language service to analyze the contents. You provision an Azure Cognitive Services resource. You need to ensure that the app can make requests to the Language service endpoint. The solution must ensure that confidential documents remain on-premises. Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. Answer: Explanation: 1. Provision an on-premises Kubernetes cluster that is isolated from the internet. 2. Pull an image from the Microsoft Container Registry (MCR). 3. Run the container and specify an API key and the endpoint URL of the Cognitive Services resource. https://learn.microsoft.com/en-us/azure/cognitive-services/containers/disconnected-containers Containers enable you to run Cognitive Services APIs in your own environment, and are great for your specific security and data governance requirements. Disconnected containers enable you to use several of these APIs disconnected from the internet. https://learn.microsoft.com/en-us/azure/cognitive-services/containers/disconnected-container-faq#how-do-i-download-the-disconnected-containers These containers are hosted on the Microsoft Container Registry and available for download on Microsoft Artifact Registry and Docker Hub. You won't be able to run the container if your Azure subscription has not been approved after completion of the request form. Question: 55 CertyIQ HOTSPOT You have an Azure subscription that has the following configurations: Subscription ID: 8d3591aa-96b8-4737-ad09-00f9b1ed35ad Tenant ID: 3edfe572-cb54-3ced-ae12-c5c177f39a12 You plan to create a resource that will perform sentiment analysis and optical character recognition (OCR). You need to use an HTTP request to create the resource in the subscription. 
The solution must use a single key and endpoint. How should you complete the request? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Answer: Explanation: 1. subscriptions/8d3591aa-96b8-4737-ad09-00f9b1ed35ad 2. Microsoft.CognitiveServices https://learn.microsoft.com/en-us/azure/cognitive-services/cognitive-services-apis-create-account?tabs=multiservice%2Canomaly-detector%2Clanguage-service%2Ccomputer-vision%2Cwindows#types-of-cognitive-services-resources You can access Azure Cognitive Services through two different resources: a multi-service resource, or a single-service one. - Multi-service resource: Access multiple Azure Cognitive Services with a single key and endpoint. Consolidates billing from the services you use. Question: 56 CertyIQ You have an Azure subscription that contains an Anomaly Detector resource. You deploy a Docker host server named Server1 to the on-premises network. You need to host an instance of the Anomaly Detector service on Server1. Which parameter should you include in the docker run command? A.Fluentd B.Billing C.Http Proxy D.Mounts Answer: B Explanation: The Eula, Billing, and ApiKey options must be specified to run the container; otherwise, the container won't start. For more information, see Billing. The ApiKey value is the Key from the Keys and Endpoints page in the LUIS portal and is also available on the Azure Cognitive Services resource keys page. Example: $ docker run --rm -it -p 5000:5000 --memory 4g --cpus 2 --mount type=bind,src=c:\demo\container,target=/input --mount type=bind,src=c:\demo\container,target=/output mcr.microsoft.com/azure-cognitive-services/luis Eula=accept Billing=https://westus.api.cognitive.microsoft.com/luis/v2.0 ApiKey=___YOUR_API_KEY___ https://learn.microsoft.com/en-us/azure/cognitive-services/luis/luis-container-configuration#example-docker-run-commands 
You need to ensure that the app can authenticate to the service by using a Microsoft Azure Active Directory (Azure AD), part of Microsoft Entra, token. Which two actions should you perform? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point. A.Enable a virtual network service endpoint. B.Configure a custom subdomain. C.Request an X.509 certificate. D.Create a private endpoint. E.Create a Conditional Access policy. Answer: BE Explanation: https://learn.microsoft.com/en-us/azure/ai-services/speech-service/how-to-configure-azure-ad-auth? tabs=portal&pivots=programming-languagecsharp#:~:text=For%20Azure%20AD%20authentication%20with,the%20Azure%20portal%20or%20PowerShell. Question: 58 CertyIQ HOTSPOT You have an Azure OpenAI resource named AI1 that hosts three deployments of the GPT 3.5 model. Each deployment is optimized for a unique workload. You plan to deploy three apps. Each app will access AI1 by using the REST API and will use the deployment that was optimized for the app's intended workload. You need to provide each app with access to AI1 and the appropriate deployment. The solution must ensure that only the apps can access AI1. What should you use to provide access to AI1, and what should each app use to connect to its appropriate deployment? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Answer: Question: 59 CertyIQ You have a Microsoft OneDrive folder that contains a 20-GB video file named File1.avi. You need to index File1.avi by using the Azure Video Indexer website. What should you do? A.Upload File1.avi to the www.youtube.com webpage, and then copy the URL of the video to the Azure AI Video Indexer website. B.Download File1.avi to a local computer, and then upload the file to the Azure AI Video Indexer website. C.From OneDrive, create a download link, and then copy the link to the Azure AI Video Indexer website. 
D.From OneDrive, create a sharing link for File1.avi, and then copy the link to the Azure AI Video Indexer website. Answer: C Question: 60 CertyIQ HOTSPOT You are developing an application that will use the Computer Vision client library. The application has the following code. For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Hot Area: Answer: Explanation: Box 1: No Box 2: Yes The ComputerVision.analyzeImageInStreamAsync operation extracts a rich set of visual features based on the image content. Box 3: Yes https://learn.microsoft.com/en-us/java/api/com.microsoft.azure.cognitiveservices.vision.computervision.computervision?view=azure-java-legacy#com-microsoft-azure-cognitiveservices-vision-computervision-computervision-analyzeimageinstreamasync(byte-()-analyzeimageinstreamoptionalparameter) This operation extracts a rich set of visual features based on the image content. Two input methods are supported: (1) uploading an image or (2) specifying an image URL. Question: 61 CertyIQ You are developing a method that uses the Computer Vision client library. The method will perform optical character recognition (OCR) in images. The method has the following code. During testing, you discover that the call to the GetReadResultAsync method occurs before the read operation is complete. You need to prevent the GetReadResultAsync method from proceeding until the read operation is complete. Which two actions should you perform? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point. A. Remove the Guid.Parse(operationId) parameter. B. Add code to verify the results.Status value. C. Add code to verify the status of the txtHeaders.Status value. D. Wrap the call to GetReadResultAsync within a loop that contains a delay. 
Answer: BD Explanation: Example code: do { results = await client.GetReadResultAsync(Guid.Parse(operationId)); } while (results.Status == OperationStatusCodes.Running || results.Status == OperationStatusCodes.NotStarted); Reference: https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/dotnet/ComputerVision/ComputerVisionQuickstart.cs Question: 62 CertyIQ HOTSPOT You have a Computer Vision resource named contoso1 that is hosted in the West US Azure region. You need to use contoso1 to make a different size of a product photo by using the smart cropping feature. How should you complete the API URL? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area: Answer: Explanation: Selection 1: https://westus.api.cognitive.microsoft.com Selection 2: Generate Thumbnail Although thumbnail generation is available through both the Get Thumbnail and Get Area of Interest APIs, both of which leverage smart cropping, the task here is only to resize the entire image. https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/concept-generating-thumbnails Generate Thumbnail: This operation generates a thumbnail image with the user-specified width and height. POST https://westus.api.cognitive.microsoft.com/vision/v3.1/generateThumbnail?width=500&height=500&smartCropping=True Ocp-Apim-Subscription-Key: API key Reference: https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v32/operations/56f91f2e778daf14a499f21b https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/concept-generating-thumbnails#examples Question: 63 CertyIQ DRAG DROP You are developing a webpage that will use the Azure Video Analyzer for Media (previously Video Indexer) service to display videos of internal company meetings. You embed the Player widget and the Cognitive Insights widget into the page. 
You need to configure the widgets to meet the following requirements: ✑ Ensure that users can search for keywords. ✑ Display the names and faces of people in the video. ✑ Show captions in the video in English (United States). How should you complete the URL for each widget? To answer, drag the appropriate values to the correct targets. Each value may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point. Select and Place: Answer: Explanation: 1. people, keywords / search 2. true / en-US https://learn.microsoft.com/en-us/azure/azure-video-indexer/video-indexer-embed-widgets#cognitiveinsights-widget - widgets Allows you to control the insights that you want to render. - controls Allows you to control the controls that you want to render. https://learn.microsoft.com/en-us/azure/azure-video-indexer/video-indexer-embed-widgets#player-widget - showCaptions Makes the player load with the captions already enabled. - captions Fetches the caption in the specified language during the widget loading to be available on the Captions menu Question: 64 CertyIQ DRAG DROP You train a Custom Vision model to identify a company's products by using the Retail domain. You plan to deploy the model as part of an app for Android phones. You need to prepare the model for deployment. Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. Select and Place: Answer: Explanation: 1 Change the model domain 2 Retrain the model 3 Export the model "Convert to a compact domain" for action #1 and #2 "Export your model" for action #3 Reference: https://docs.microsoft.com/en-us/azure/cognitive-services/custom-vision-service/export-your-model Question: 65 CertyIQ HOTSPOT You are developing an application to recognize employees' faces by using the Face Recognition API. 
Images of the faces will be accessible from a URI endpoint. The application has the following code. For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Hot Area: Answer: Explanation: Yes Yes Yes "Free-tier subscription quota: 1,000 person groups. Each holds up to 1,000 persons. S0-tier subscription quota: 1,000,000 person groups. Each holds up to 10,000 persons." Reference: https://docs.microsoft.com/en-us/azure/cognitive-services/face/face-api-how-to-topics/use-persondirectory Question: 66 CertyIQ DRAG DROP You have a Custom Vision resource named acvdev in a development environment. You have a Custom Vision resource named acvprod in a production environment. In acvdev, you build an object detection model named obj1 in a project named proj1. You need to move obj1 to acvprod. Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. Select and Place: Answer: Explanation: 1. GetProjects on acvdev 2. ExportProject on acvdev 3. ImportProject on acvprod Reference: https://docs.microsoft.com/en-us/azure/cognitive-services/custom-vision-service/copy-move-projects Question: 67 CertyIQ DRAG DROP You are developing an application that will recognize faults in components produced on a factory production line. The components are specific to your business. You need to use the Custom Vision API to help detect common faults. Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. Select and Place: Answer: Explanation: Step 1: Create a project Create a new project. Step 2: Upload and tag the images Choose training images. Then upload and tag the images. Step 3: Train the classifier model. 
Train the classifier Reference: https://docs.microsoft.com/en-us/azure/cognitive-services/custom-vision-service/getting-started-build-a-classifier Question: 68 CertyIQ HOTSPOT You are building a model that will be used in an iOS app. You have images of cats and dogs. Each image contains either a cat or a dog. You need to use the Custom Vision service to detect whether the image is of a cat or a dog. How should you configure the project in the Custom Vision portal? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area: Answer: Explanation: Box 1: Classification Incorrect Answers: An object detection project is for detecting which objects, if any, from a set of candidates are present in an image. Box 2: Multiclass A multiclass classification project is for classifying images into a set of tags, or target labels. An image can be assigned to one tag only. Incorrect Answers: A multilabel classification project is similar, but each image can have multiple tags assigned to it. Box 3: General (compact) https://learn.microsoft.com/en-us/azure/cognitive-services/custom-vision-service/getting-started-build-a-classifier - Select Classification under Project Types. Then, under Classification Types, choose either Multilabel or Multiclass, depending on your use case. Multilabel classification applies any number of your tags to an image (zero or more), while multiclass classification sorts images into single categories (every image you submit will be sorted into the most likely tag). You'll be able to change the classification type later if you want to. Reference: https://cran.r-project.org/web/packages/AzureVision/vignettes/customvision.html Question: 69 CertyIQ You have an Azure Video Analyzer for Media (previously Video Indexer) service that is used to provide a search interface over company videos on your company's website. You need to be able to search for videos based on who is present in the video. 
What should you do? A. Create a person model and associate the model to the videos. B. Create person objects and provide face images for each object. C. Invite the entire staff of the company to Video Indexer. D. Edit the faces in the videos. E. Upload names to a language model. Answer: A Explanation: Video Indexer supports multiple Person models per account. Once a model is created, you can use it by providing the model ID of a specific Person model when uploading/indexing or reindexing a video. Training a new face for a video updates the specific custom model that the video was associated with. Note: Video Indexer supports face detection and celebrity recognition for video content. The celebrity recognition feature covers about one million faces based on commonly requested data source such as IMDB, Wikipedia, and top LinkedIn influencers. Faces that aren't recognized by the celebrity recognition feature are detected but left unnamed. Once you label a face with a name, the face and name get added to your account's Person model. Video Indexer will then recognize this face in your future videos and past videos. Reference: https://docs.microsoft.com/en-us/azure/media-services/video-indexer/customize-person-model-with-api Question: 70 You use the Custom Vision service to build a classifier. After training is complete, you need to evaluate the classifier. Which two metrics are available for review? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point. A. recall B. F-score C. weighted accuracy D. precision E. area under the curve (AUC) Answer: AD Explanation: CertyIQ Custom Vision provides three metrics regarding the performance of your model: precision, recall, and AP. https://learn.microsoft.com/en-us/azure/cognitive-services/custom-vision-service/getting-started-build-aclassifier#evaluate-the-classifier After training has completed, the model's performance is estimated and displayed. 
The Custom Vision Service uses the images that you submitted for training to calculate precision and recall. Precision and recall are two different measurements of the effectiveness of a classifier: - Precision indicates the fraction of identified classifications that were correct. For example, if the model identified 100 images as dogs, and 99 of them were actually of dogs, then the precision would be 99%. - Recall indicates the fraction of actual classifications that were correctly identified. For example, if there were actually 100 images of apples, and the model identified 80 as apples, the recall would be 80%. Reference: https://www.tallan.com/blog/2020/05/19/azure-custom-vision/ Question: 71 CertyIQ DRAG DROP You are developing a call to the Face API. The call must find similar faces from an existing list named employeefaces. The employeefaces list contains 60,000 images. How should you complete the body of the HTTP request? To answer, drag the appropriate values to the correct targets. Each value may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point. Select and Place: Answer: Explanation: Box 1: LargeFaceListID LargeFaceList: Add a face to a specified large face list, up to 1,000,000 faces. Note: Given query face's faceId, to search the similar-looking faces from a faceId array, a face list or a large face list. A "faceListId" is created by FaceList - Create containing persistedFaceIds that will not expire. And a "largeFaceListId" is created by LargeFaceList - Create containing persistedFaceIds that will also not expire. Incorrect Answers: Not "faceListId": Add a face to a specified face list, up to 1,000 faces. Box 2: matchFace Find similar has two working modes, "matchPerson" and "matchFace". "matchPerson" is the default mode that it tries to find faces of the same person as possible by using internal same-person thresholds. 
It is useful to find a known person's other photos. Note that an empty list will be returned if no faces pass the internal thresholds. "matchFace" mode ignores same-person thresholds and returns ranked similar faces anyway, even if the similarity is low. It can be used in cases like searching for celebrity-looking faces. Reference: https://docs.microsoft.com/en-us/rest/api/faceapi/face/findsimilar Question: 72 CertyIQ DRAG DROP You are developing a photo application that will find photos of a person based on a sample image by using the Face API. You need to create a POST request to find the photos. How should you complete the request? To answer, drag the appropriate values to the correct targets. Each value may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point. Select and Place: Answer: Explanation: Box 1: findsimilars The other values do not match the given request body. Box 2: matchPerson Find similar has two working modes, "matchPerson" and "matchFace". "matchPerson" is the default mode; it tries to find faces of the same person as far as possible by using internal same-person thresholds. It is useful to find a known person's other photos. Note that an empty list will be returned if no faces pass the internal thresholds. "matchFace" mode ignores same-person thresholds and returns ranked similar faces anyway, even if the similarity is low. It can be used in cases like searching for celebrity-looking faces. Reference: https://docs.microsoft.com/en-us/rest/api/faceapi/face/detectwithurl https://docs.microsoft.com/en-us/rest/api/faceapi/face/findsimilar Question: 73 CertyIQ HOTSPOT You develop a test method to verify the results retrieved from a call to the Computer Vision API. The call is used to analyze the existence of company logos in images. The call returns a collection of brands named brands. You have the following code segment. 
For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Hot Area: Answer: Explanation: Box 1: Yes - Box 2: Yes Coordinates of a rectangle in the API refer to the top left corner. Box 3: No - X Gets or sets the x-coordinate of the upper-left corner of this Rectangle structure. Y Gets or sets the y-coordinate of the upper-left corner of this Rectangle structure. see this link: https://docs.microsoft.com/en-us/dotnet/api/system.drawing.rectangle?view=net-5.0 Reference: https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/concept-brand-detection Question: 74 HOTSPOT You develop an application that uses the Face API. You need to add multiple images to a person group. How should you complete the code? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area: CertyIQ Answer: Explanation: Box 1: Stream The File.OpenRead(String) method opens an existing file for reading. Example: Open the stream and read it back. using (FileStream fs = File.OpenRead(path)) Box 2: add face from stream async Question: 75 CertyIQ Your company uses an Azure Cognitive Services solution to detect faces in uploaded images. The method to detect the faces uses the following code. You discover that the solution frequently fails to detect faces in blurred images and in images that contain sideways faces. You need to increase the likelihood that the solution can detect faces in blurred images and images that contain sideways faces. What should you do? A. Use a different version of the Face API. B. Use the Computer Vision service instead of the Face service. C. Use the Identify method instead of the Detect method. D. Change the detection model. Answer: D Explanation: Evaluate different models. The best way to compare the performances of the detection models is to use them on a sample dataset. 
We recommend calling the Face - Detect API on a variety of images, especially images of many faces or of faces that are difficult to see, using each detection model. Pay attention to the number of faces that each model returns. The different face detection models are optimized for different tasks. See the following table for an overview of the differences. Reference: https://docs.microsoft.com/en-us/azure/cognitive-services/face/face-api-how-to-topics/specify-detection-model Question: 76 CertyIQ You have the following Python function for creating Azure Cognitive Services resources programmatically. def create_resource(resource_name, kind, account_tier, location): parameters = CognitiveServicesAccount(sku=Sku(name=account_tier), kind=kind, location=location, properties={}) result = client.accounts.create(resource_group_name, resource_name, parameters) You need to call the function to create a free Azure resource in the West US Azure region. The resource will be used to generate captions of images automatically. Which code should you use? A. create_resource("res1", "ComputerVision", "F0", "westus") B. create_resource("res1", "CustomVision.Prediction", "F0", "westus") C. create_resource("res1", "ComputerVision", "S0", "westus") D. create_resource("res1", "CustomVision.Prediction", "S0", "westus") Answer: A Explanation: A is the answer. https://learn.microsoft.com/en-us/azure/cognitive-services/cognitive-services-apis-create-account-client-library?pivots=programming-language-python#create-a-cognitive-services-resource-python To create and subscribe to a new Cognitive Services resource, use the Create function. This function adds a new billable resource to the resource group you pass in. When you create your new resource, you'll need to know the "kind" of service you want to use, along with its pricing tier (or SKU) and an Azure location. The following function takes all of these arguments and creates a resource. 
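The reasoning behind option A can be checked offline with a minimal sketch that assembles the same parameters the function would pass, as a plain dict rather than SDK objects, so it runs without the Azure SDK or a subscription. Automatic image captioning is a Computer Vision capability, and F0 is the free pricing tier; everything else here mirrors the question's own call.

```python
# Offline mirror of the create_resource parameters from the question.
# No Azure call is made; this only shows which argument combination
# satisfies "free" + "image captions" + "West US".
def build_parameters(resource_name, kind, account_tier, location):
    return {
        "name": resource_name,
        "kind": kind,                   # "ComputerVision" provides image descriptions (captions)
        "sku": {"name": account_tier},  # "F0" = free tier, "S0" = standard (billable)
        "location": location,
    }

params = build_parameters("res1", "ComputerVision", "F0", "westus")
print(params["kind"], params["sku"]["name"], params["location"])
```

Swapping in "CustomVision.Prediction" (options B and D) would create a resource for serving a custom-trained model, not for generating captions, and "S0" (options C and D) is not the free tier.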
Question: 77 CertyIQ You are developing a method that uses the Computer Vision client library. The method will perform optical character recognition (OCR) in images. The method has the following code. During testing, you discover that the call to the GetReadResultAsync method occurs before the read operation is complete. You need to prevent the GetReadResultAsync method from proceeding until the read operation is complete. Which two actions should you perform? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point. A. Remove the operation_id parameter. B. Add code to verify the read_results.status value. C. Add code to verify the status of the read_operation_location value. D. Wrap the call to get_read_result within a loop that contains a delay. Answer: BD Explanation: B. Add code to verify the read_results.status value. D. Wrap the call to get_read_result within a loop that contains a delay. Explanation: In order to prevent the GetReadResultAsync method from proceeding until the read operation is complete, we need to check the status of the read operation and wait until it's completed. To do this, we can add code to verify the status of the read_results.status value. If the status is not "succeeded", we can add a delay and then retry the operation until it's complete. This can be achieved by wrapping the call to get_read_result within a loop that contains a delay. Removing the operation_id parameter or adding code to verify the status of the read_operation_location value will not solve the issue of waiting for the read operation to complete before proceeding with the GetReadResultAsync method. Question: 78 CertyIQ HOTSPOT You are building an app that will enable users to upload images. The solution must meet the following requirements: * Automatically suggest alt text for the images. * Detect inappropriate images and block them. * Minimize development effort. 
You need to recommend a computer vision endpoint for each requirement. What should you recommend? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area: Answer: Explanation: 1. https://westus.api.cognitive.microsoft.com/vision/v3.2/analyze/?visualFeatures=Adult,Description 2. https://westus.api.cognitive.microsoft.com/vision/v3.2/analyze/?visualFeatures=Adult,Description https://learn.microsoft.com/en-us/azure/cognitive-services/computer-vision/concept-describing-images Computer Vision can analyze an image and generate a human-readable phrase that describes its contents. The algorithm returns several descriptions based on different visual features, and each description is given a confidence score. The final output is a list of descriptions ordered from highest to lowest confidence. Question: 79 CertyIQ You need to build a solution that will use optical character recognition (OCR) to scan sensitive documents by using the Computer Vision API. The solution must NOT be deployed to the public cloud. What should you do? A. Build an on-premises web app to query the Computer Vision endpoint. B. Host the Computer Vision endpoint in a container on an on-premises server. C. Host an exported Open Neural Network Exchange (ONNX) model on an on-premises server. D. Build an Azure web app to query the Computer Vision endpoint. Answer: B Explanation: One option to manage your Computer Vision containers on-premises is to use Kubernetes and Helm. Three primary parameters for all Cognitive Services containers are required. The Microsoft Software License Terms must be present with a value of accept. An Endpoint URI and API key are also needed. Incorrect: Not D: This Computer Vision endpoint would be available for the public, unless it is secured. 
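As a rough sketch of what option B involves in practice, the snippet below assembles (in Python, to match the other examples in this dump) a hypothetical docker run invocation for an on-premises Computer Vision container. The Eula, Billing, and ApiKey parameters are the three required container options quoted under Question 56; the image tag and endpoint URL here are placeholders, not verified values.

```python
import shlex

# Hypothetical docker run arguments for hosting a Computer Vision Read (OCR)
# container on an on-premises server. Eula/Billing/ApiKey are required for
# the container to start; the image path and endpoint are illustrative only.
args = [
    "docker", "run", "--rm", "-p", "5000:5000", "--memory", "8g",
    "mcr.microsoft.com/azure-cognitive-services/vision/read:latest",
    "Eula=accept",
    "Billing=https://<your-resource>.cognitiveservices.azure.com/",
    "ApiKey=<your-api-key>",
]
print(shlex.join(args))
```

The documents themselves are then sent only to the local container (e.g., http://localhost:5000), while the Billing endpoint is contacted solely for usage metering, which is why this pattern keeps sensitive content off the public cloud.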
Reference: https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/deploy-computer-vision-on-premises Question: 80 CertyIQ You have an Azure Cognitive Search solution and a collection of handwritten letters stored as JPEG files. You plan to index the collection. The solution must ensure that queries can be performed on the contents of the letters. You need to create an indexer that has a skillset. Which skill should you include? A.image analysis B.optical character recognition (OCR) C.key phrase extraction D.document extraction Answer: B Explanation: To ensure that queries can be performed on the contents of the letters, the skill that should be included in the indexer is optical character recognition (OCR). Option B, optical character recognition (OCR), is a technology that can recognize text within an image and convert it into machine-readable text. This skill will enable the search engine to read the handwritten letters and convert them into searchable text that can be indexed by Azure Cognitive Search. Option A, image analysis, is a useful skill for analyzing images to extract metadata, but it does not directly enable text recognition. Option C, key phrase extraction, extracts important phrases and concepts from text, but it requires the text to be already recognized and extracted by OCR or other text extraction techniques. Option D, document extraction, is a skill that extracts specific pieces of information from documents, but it does not address the challenge of recognizing and extracting text from handwritten letters. Question: 81 CertyIQ HOTSPOT You have a library that contains thousands of images. You need to tag the images as photographs, drawings, or clipart. Which service endpoint and response property should you use? To answer, select th
