AWS Certified AI Practitioner AIF-C01 Exam B
Summary
This document contains practice questions and answers for the AWS Certified AI Practitioner AIF-C01 exam. It includes multiple-choice questions covering a range of AI, generative AI, and AWS machine learning topics.
**AWS Certified AI Practitioner AIF-C01 - Exam B**

**Question #46 (Topic 1)**
**A company wants to use large language models (LLMs) with Amazon Bedrock to develop a chat interface for the company's product manuals. The manuals are stored as PDF files.**
**Which solution meets these requirements MOST cost-effectively?**
A. Use prompt engineering to add one PDF file as context to the user prompt when the prompt is submitted to Amazon Bedrock.
B. Use prompt engineering to add all the PDF files as context to the user prompt when the prompt is submitted to Amazon Bedrock.
C. Use all the PDF documents to fine-tune a model with Amazon Bedrock. Use the fine-tuned model to process user prompts.
D. Upload PDF documents to an Amazon Bedrock knowledge base. Use the knowledge base to provide context when users submit prompts to Amazon Bedrock. (Correct)

**Question #47 (Topic 1)**
**A social media company wants to use a large language model (LLM) for content moderation. The company wants to evaluate the LLM outputs for bias and potential discrimination against specific groups or individuals.**
**Which data source should the company use to evaluate the LLM outputs with the LEAST administrative effort?**
A. User-generated content
B. Moderation logs
C. Content moderation guidelines
D. Benchmark datasets (Correct)

**Question #48 (Topic 1)**
**A company wants to use a pre-trained generative AI model to generate content for its marketing campaigns. The company needs to ensure that the generated content aligns with the company's brand voice and messaging requirements.**
**Which solution meets these requirements?**
A. Optimize the model's architecture and hyperparameters to improve the model's overall performance.
B. Increase the model's complexity by adding more layers to the model's architecture.
C. Create effective prompts that provide clear instructions and context to guide the model's generation. (Correct)
D. Select a large, diverse dataset to pre-train a new generative model.

**Question #49 (Topic 1)**
**A loan company is building a generative AI-based solution to offer new applicants discounts based on specific business criteria. The company wants to build and use an AI model responsibly to minimize bias that could negatively affect some customers.**
**Which actions should the company take to meet these requirements? (Choose two.)**
A. Detect imbalances or disparities in the data. (Correct)
B. Ensure that the model runs frequently.
C. Evaluate the model's behavior so that the company can provide transparency to stakeholders. (Correct)
D. Use the Recall-Oriented Understudy for Gisting Evaluation (ROUGE) technique to ensure that the model is 100% accurate.
E. Ensure that the model's inference time is within the accepted limits.

**Question #50 (Topic 1)**
**A company is using an Amazon Bedrock base model to summarize documents for an internal use case. The company trained a custom model to improve the summarization quality.**
**Which action must the company take to use the custom model through Amazon Bedrock?**
A. Purchase Provisioned Throughput for the custom model.
B. Deploy the custom model in an Amazon SageMaker endpoint for real-time inference.
C. Register the model with the Amazon SageMaker Model Registry.
D. Grant access to the custom model in Amazon Bedrock. (Correct)
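Question #46 points to an Amazon Bedrock knowledge base (retrieval-augmented generation) as the cost-effective option. The following is a minimal sketch of querying a knowledge base with boto3; the knowledge base ID, model ARN, region, and question text are hypothetical placeholders, not values from the exam scenario.

```python
import boto3

# Hypothetical identifiers -- replace with your own knowledge base and model.
KNOWLEDGE_BASE_ID = "EXAMPLEKBID"
MODEL_ARN = "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-haiku-20240307-v1:0"

client = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

# Ask a question; the service retrieves relevant chunks from the indexed PDF
# manuals and passes them to the model as context for the answer.
response = client.retrieve_and_generate(
    input={"text": "How do I reset the device to factory settings?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": KNOWLEDGE_BASE_ID,
            "modelArn": MODEL_ARN,
        },
    },
)

print(response["output"]["text"])
```

Because only the retrieved passages are sent as context on each request, this avoids both per-prompt stuffing of every PDF and the cost of fine-tuning.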
**Question #51 (Topic 1)**
**A company needs to choose a model from Amazon Bedrock to use internally. The company must identify a model that generates responses in a style that the company's employees prefer.**
**What should the company do to meet these requirements?**
A. Evaluate the models by using built-in prompt datasets.
B. Evaluate the models by using a human workforce and custom prompt datasets. (Correct)
C. Use public model leaderboards to identify the model.
D. Use the model InvocationLatency runtime metrics in Amazon CloudWatch when trying models.

**Question #52 (Topic 1)**
**A student at a university is copying content from generative AI to write essays.**
**Which challenge of responsible generative AI does this scenario represent?**
A. Toxicity
B. Hallucinations
C. Plagiarism (Correct)
D. Privacy

**Question #53 (Topic 1)**
**A company needs to build its own large language model (LLM) based on only the company's private data. The company is concerned about the environmental effect of the training process.**
**Which Amazon EC2 instance type has the LEAST environmental effect when training LLMs?**
A. Amazon EC2 C series
B. Amazon EC2 G series
C. Amazon EC2 P series
D. Amazon EC2 Trn series (Correct)

**Question #54 (Topic 1)**
**A company wants to build an interactive application for children that generates new stories based on classic stories. The company wants to use Amazon Bedrock and needs to ensure that the results and topics are appropriate for children.**
**Which AWS service or feature will meet these requirements?**
A. Amazon Rekognition
B. Amazon Bedrock playgrounds
C. Guardrails for Amazon Bedrock (Correct)
D. Agents for Amazon Bedrock

**Question #55 (Topic 1)**
**A company is building an application that needs to generate synthetic data that is based on existing data.**
**Which type of model can the company use to meet this requirement?**
A. Generative adversarial network (GAN) (Correct)
B. XGBoost
C. Residual neural network
D. WaveNet

**Question #56 (Topic 1)**
**A digital devices company wants to predict customer demand for memory hardware. The company does not have coding experience or knowledge of ML algorithms and needs to develop a data-driven predictive model. The company needs to perform analysis on internal data and external data.**
**Which solution will meet these requirements?**
A. Store the data in Amazon S3. Create ML models and demand forecast predictions by using Amazon SageMaker built-in algorithms that use the data from Amazon S3.
B. Import the data into Amazon SageMaker Data Wrangler. Create ML models and demand forecast predictions by using SageMaker built-in algorithms.
C. Import the data into Amazon SageMaker Data Wrangler. Build ML models and demand forecast predictions by using an Amazon Personalize Trending-Now recipe.
D. Import the data into Amazon SageMaker Canvas. Build ML models and demand forecast predictions by selecting the values in the data from SageMaker Canvas. (Correct)

**Question #57 (Topic 1)**
**A company has installed a security camera. The company uses an ML model to evaluate the security camera footage for potential thefts. The company has discovered that the model disproportionately flags people who are members of a specific ethnic group.**
**Which type of bias is affecting the model output?**
A. Measurement bias
B. Sampling bias (Correct)
C. Observer bias
D. Confirmation bias
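Question #49 and Question #57 both come down to detecting imbalances or disparities across groups. A simple, framework-agnostic check is to compare positive-prediction rates per group; the column names and data values below are hypothetical and only illustrate the idea.

```python
import pandas as pd

# Hypothetical evaluation results: one row per reviewed clip,
# with the group label and whether the model flagged it.
results = pd.DataFrame({
    "group":   ["A", "A", "A", "B", "B", "B", "B", "B"],
    "flagged": [1,   0,   0,   1,   1,   1,   0,   1],
})

# Flag rate per group; a large gap suggests a disparity worth investigating,
# for example sampling bias in the data the model was trained on.
rates = results.groupby("group")["flagged"].mean()
print(rates)
print("Disparity (max - min rate):", rates.max() - rates.min())
```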
**Question #58 (Topic 1)**
**A company is building a customer service chatbot. The company wants the chatbot to improve its responses by learning from past interactions and online resources.**
**Which AI learning strategy provides this self-improvement capability?**
A. Supervised learning with a manually curated dataset of good responses and bad responses
B. Reinforcement learning with rewards for positive customer feedback (Correct)
C. Unsupervised learning to find clusters of similar customer inquiries
D. Supervised learning with a continuously updated FAQ database

**Question #59 (Topic 1)**
**An AI practitioner has built a deep learning model to classify the types of materials in images. The AI practitioner now wants to measure the model performance.**
**Which metric will help the AI practitioner evaluate the performance of the model?**
A. Confusion matrix (Correct)
B. Correlation matrix
C. R2 score
D. Mean squared error (MSE)

**Question #60 (Topic 1)**
**A company has built a chatbot that can respond to natural language questions with images. The company wants to ensure that the chatbot does not return inappropriate or unwanted images.**
**Which solution will meet these requirements?**
A. Implement moderation APIs. (Correct)
B. Retrain the model with a general public dataset.
C. Perform model validation.
D. Automate user feedback integration.

**Question #61 (Topic 1)**
**An AI practitioner is using an Amazon Bedrock base model to summarize session chats from the customer service department. The AI practitioner wants to store invocation logs to monitor model input and output data.**
**Which strategy should the AI practitioner use?**
A. Configure AWS CloudTrail as the logs destination for the model.
B. Enable invocation logging in Amazon Bedrock. (Correct)
C. Configure AWS Audit Manager as the logs destination for the model.
D. Configure model invocation logging in Amazon EventBridge.

**Question #62 (Topic 1)**
**A company is building an ML model to analyze archived data. The company must perform inference on large datasets that are multiple GBs in size. The company does not need to access the model predictions immediately.**
**Which Amazon SageMaker inference option will meet these requirements?**
A. Batch transform (Correct)
B. Real-time inference
C. Serverless inference
D. Asynchronous inference

**Question #63 (Topic 1)**
**Which term describes the numerical representations of real-world objects and concepts that AI and natural language processing (NLP) models use to improve understanding of textual information?**
A. Embeddings (Correct)
B. Tokens
C. Models
D. Binaries

**Question #64 (Topic 1)**
**A research company implemented a chatbot by using a foundation model (FM) from Amazon Bedrock. The chatbot searches for answers to questions from a large database of research papers.**
**After multiple prompt engineering attempts, the company notices that the FM is performing poorly because of the complex scientific terms in the research papers.**
**How can the company improve the performance of the chatbot?**
A. Use few-shot prompting to define how the FM can answer the questions.
B. Use domain adaptation fine-tuning to adapt the FM to complex scientific terms. (Correct)
C. Change the FM inference parameters.
D. Clean the research paper data to remove complex scientific terms.
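Question #64 recommends domain adaptation fine-tuning, which on Amazon Bedrock is performed through a model customization job. The sketch below uses boto3; the S3 locations, IAM role ARN, base model ID, and hyperparameter values are hypothetical, and the supported hyperparameters vary by base model.

```python
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

# Hypothetical S3 locations, IAM role, and base model -- adjust for your account.
response = bedrock.create_model_customization_job(
    jobName="research-terms-finetune-001",
    customModelName="research-chatbot-custom",
    roleArn="arn:aws:iam::111122223333:role/BedrockCustomizationRole",
    baseModelIdentifier="amazon.titan-text-express-v1",
    customizationType="FINE_TUNING",
    trainingDataConfig={"s3Uri": "s3://example-bucket/train/scientific-terms.jsonl"},
    outputDataConfig={"s3Uri": "s3://example-bucket/output/"},
    # Hyperparameter names and ranges depend on the chosen base model.
    hyperParameters={"epochCount": "2", "batchSize": "1", "learningRate": "0.00001"},
)

print("Customization job ARN:", response["jobArn"])
```

Once the job completes, the resulting custom model is invoked through Amazon Bedrock rather than through a separate hosting service.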
**Question #65 (Topic 1)**
**A company wants to use a large language model (LLM) on Amazon Bedrock for sentiment analysis. The company needs the LLM to produce more consistent responses to the same input prompt.**
**Which adjustment to an inference parameter should the company make to meet these requirements?**
A. Decrease the temperature value. (Correct)
B. Increase the temperature value.
C. Decrease the length of output tokens.
D. Increase the maximum generation length.

**Question #66 (Topic 1)**
**A company wants to develop a large language model (LLM) application by using Amazon Bedrock and customer data that is uploaded to Amazon S3. The company's security policy states that each team can access data for only the team's own customers.**
**Which solution will meet these requirements?**
A. Create an Amazon Bedrock custom service role for each team that has access to only the team's customer data. (Correct)
B. Create a custom service role that has Amazon S3 access. Ask teams to specify the customer name on each Amazon Bedrock request.
C. Redact personal data in Amazon S3. Update the S3 bucket policy to allow team access to customer data.
D. Create one Amazon Bedrock role that has full Amazon S3 access. Create IAM roles for each team that have access to only each team's customer folders.

**Question #67 (Topic 1)**
**A medical company deployed a disease detection model on Amazon Bedrock. To comply with privacy policies, the company wants to prevent the model from including personal patient information in its responses. The company also wants to receive notification when policy violations occur.**
**Which solution meets these requirements?**
A. Use Amazon Macie to scan the model's output for sensitive data and set up alerts for potential violations.
B. Configure AWS CloudTrail to monitor the model's responses and create alerts for any detected personal information.
C. Use Guardrails for Amazon Bedrock to filter content. Set up Amazon CloudWatch alarms for notification of policy violations. (Correct)
D. Implement Amazon SageMaker Model Monitor to detect data drift and receive alerts when model quality degrades.

**Question #68 (Topic 1)**
**A company manually reviews all submitted resumes in PDF format. As the company grows, the company expects the volume of resumes to exceed the company's review capacity. The company needs an automated system to convert the PDF resumes into plain text format for additional processing.**
**Which AWS service meets this requirement?**
A. Amazon Textract (Correct)
B. Amazon Personalize
C. Amazon Lex
D. Amazon Transcribe

**Question #69 (Topic 1)**
**An education provider is building a question and answer application that uses a generative AI model to explain complex concepts. The education provider wants to automatically change the style of the model response depending on who is asking the question. The education provider will give the model the age range of the user who has asked the question.**
**Which solution meets these requirements with the LEAST implementation effort?**
A. Fine-tune the model by using additional training data that is representative of the various age ranges that the application will support.
B. Add a role description to the prompt context that instructs the model of the age range that the response should target. (Correct)
C. Use chain-of-thought reasoning to deduce the correct style and complexity for a response suitable for that user.
D. Summarize the response text depending on the age of the user so that younger users receive shorter responses.
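Question #65 and Question #69 both map to inference-time controls: lowering the temperature for more consistent output and adding a role description to the prompt context. Below is a minimal sketch using the Amazon Bedrock runtime and an Anthropic Claude model ID; the model ID and prompt are placeholders, and the request body format differs for other model families.

```python
import json
import boto3

runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 300,
    # Low temperature -> more deterministic, consistent responses to the same prompt.
    "temperature": 0.1,
    # Role/context instruction that steers the response style for the target age range.
    "system": "You are explaining concepts to a reader aged 8-10. Use short, simple sentences.",
    "messages": [
        {"role": "user", "content": "Why does the moon change shape during the month?"}
    ],
}

response = runtime.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    body=json.dumps(body),
)

result = json.loads(response["body"].read())
print(result["content"][0]["text"])
```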
**Question #70 (Topic 1)**
**Which strategy evaluates the accuracy of a foundation model (FM) that is used in image classification tasks?**
A. Calculate the total cost of resources used by the model.
B. Measure the model's accuracy against a predefined benchmark dataset. (Correct)
C. Count the number of layers in the neural network.
D. Assess the color accuracy of images processed by the model.

**Question #71 (Topic 1)**
**An accounting firm wants to implement a large language model (LLM) to automate document processing. The firm must proceed responsibly to avoid potential harms.**
**What should the firm do when developing and deploying the LLM? (Choose two.)**
A. Include fairness metrics for model evaluation. (Correct)
B. Adjust the temperature parameter of the model.
C. Modify the training data to mitigate bias. (Correct)
D. Avoid overfitting on the training data.
E. Apply prompt engineering techniques.

**Question #72 (Topic 1)**
**A company is building an ML model. The company collected new data and analyzed the data by creating a correlation matrix, calculating statistics, and visualizing the data.**
**Which stage of the ML pipeline is the company currently in?**
A. Data pre-processing
B. Feature engineering
C. Exploratory data analysis (Correct)
D. Hyperparameter tuning

**Question #73 (Topic 1)**
**A company has documents that are missing some words because of a database error. The company wants to build an ML model that can suggest potential words to fill in the missing text.**
**Which type of model meets this requirement?**
A. Topic modeling
B. Clustering models
C. Prescriptive ML models
D. BERT-based models (Correct)

**Question #74 (Topic 1)**
**A company wants to display the total sales for its top-selling products across various retail locations in the past 12 months.**
**Which AWS solution should the company use to automate the generation of graphs?**
A. Amazon Q in Amazon EC2
B. Amazon Q Developer
C. Amazon Q in Amazon QuickSight (Correct)
D. Amazon Q in AWS Chatbot

**Question #75 (Topic 1)**
**A company is building a chatbot to improve user experience. The company is using a large language model (LLM) from Amazon Bedrock for intent detection. The company wants to use few-shot learning to improve intent detection accuracy.**
**Which additional data does the company need to meet these requirements?**
A. Pairs of chatbot responses and correct user intents
B. Pairs of user messages and correct chatbot responses
C. Pairs of user messages and correct user intents (Correct)
D. Pairs of user intents and correct chatbot responses

**Question #76 (Topic 1)**
**A company is using few-shot prompting on a base model that is hosted on Amazon Bedrock. The model currently uses 10 examples in the prompt. The model is invoked once daily and is performing well. The company wants to lower the monthly cost.**
**Which solution will meet these requirements?**
A. Customize the model by using fine-tuning.
B. Decrease the number of tokens in the prompt. (Correct)
C. Increase the number of tokens in the prompt.
D. Use Provisioned Throughput.

**Question #77 (Topic 1)**
**An AI practitioner is using a large language model (LLM) to create content for marketing campaigns. The generated content sounds plausible and factual but is incorrect.**
**Which problem is the LLM having?**
A. Data leakage
B. Hallucination (Correct)
C. Overfitting
D. Underfitting
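Question #75 and Question #76 concern few-shot prompting: the additional data is pairs of user messages and correct intents, and every example adds input tokens that are billed on each invocation. The sketch below assembles such a prompt; the example pairs and intent labels are hypothetical.

```python
# Hypothetical labeled pairs of user message -> correct user intent.
few_shot_examples = [
    ("Where is my package?", "order_status"),
    ("I want my money back for this order.", "refund_request"),
    ("Do you ship to Canada?", "shipping_question"),
]

def build_intent_prompt(user_message: str) -> str:
    """Build a few-shot prompt for intent detection."""
    lines = ["Classify the intent of the last message."]
    for message, intent in few_shot_examples:
        lines.append(f"Message: {message}\nIntent: {intent}")
    lines.append(f"Message: {user_message}\nIntent:")
    return "\n\n".join(lines)

prompt = build_intent_prompt("My order arrived broken, what can I do?")
print(prompt)
# Fewer or shorter examples -> fewer input tokens -> lower per-invocation cost.
```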
**Question #78 (Topic 1)**
**An AI practitioner trained a custom model on Amazon Bedrock by using a training dataset that contains confidential data. The AI practitioner wants to ensure that the custom model does not generate inference responses based on confidential data.**
**How should the AI practitioner prevent responses based on confidential data?**
A. Delete the custom model. Remove the confidential data from the training dataset. Retrain the custom model. (Correct)
B. Mask the confidential data in the inference responses by using dynamic data masking.
C. Encrypt the confidential data in the inference responses by using Amazon SageMaker.
D. Encrypt the confidential data in the custom model by using AWS Key Management Service (AWS KMS).

**Question #79 (Topic 1)**
**A company has built a solution by using generative AI. The solution uses large language models (LLMs) to translate training manuals from English into other languages. The company wants to evaluate the accuracy of the solution by examining the text generated for the manuals.**
**Which model evaluation strategy meets these requirements?**
A. Bilingual Evaluation Understudy (BLEU) (Correct)
B. Root mean squared error (RMSE)
C. Recall-Oriented Understudy for Gisting Evaluation (ROUGE)
D. F1 score

**Question #80 (Topic 1)**
**A large retailer receives thousands of customer support inquiries about products every day. The customer support inquiries need to be processed and responded to quickly. The company wants to implement Agents for Amazon Bedrock.**
**What are the key benefits of using Amazon Bedrock agents that could help this retailer?**
A. Generation of custom foundation models (FMs) to predict customer needs
B. Automation of repetitive tasks and orchestration of complex workflows (Correct)
C. Automatically calling multiple foundation models (FMs) and consolidating the results
D. Selecting the foundation model (FM) based on predefined criteria and metrics

**Question #81 (Topic 1)**
**Which option is a benefit of ongoing pre-training when fine-tuning a foundation model (FM)?**
A. Helps decrease the model's complexity
B. Improves model performance over time (Correct)
C. Decreases the training time requirement
D. Optimizes model inference time

**Question #82 (Topic 1)**
**What are tokens in the context of generative AI models?**
A. Tokens are the basic units of input and output that a generative AI model operates on, representing words, subwords, or other linguistic units. (Correct)
B. Tokens are the mathematical representations of words or concepts used in generative AI models.
C. Tokens are the pre-trained weights of a generative AI model that are fine-tuned for specific tasks.
D. Tokens are the specific prompts or instructions given to a generative AI model to generate output.

**Question #83 (Topic 1)**
**A company wants to assess the costs that are associated with using a large language model (LLM) to generate inferences. The company wants to use Amazon Bedrock to build generative AI applications.**
**Which factor will drive the inference costs?**
A. Number of tokens consumed (Correct)
B. Temperature value
C. Amount of data used to train the LLM
D. Total training time
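Question #83 notes that on-demand inference cost is driven by the number of tokens consumed. Below is a back-of-the-envelope estimate under assumed, hypothetical per-1,000-token prices; consult current Amazon Bedrock pricing for real figures.

```python
# Hypothetical on-demand prices in USD per 1,000 tokens -- illustrative only.
PRICE_PER_1K_INPUT_TOKENS = 0.0005
PRICE_PER_1K_OUTPUT_TOKENS = 0.0015

def estimate_cost(input_tokens: int, output_tokens: int, invocations: int) -> float:
    """Estimate on-demand cost from token counts per invocation."""
    per_call = (
        input_tokens / 1000 * PRICE_PER_1K_INPUT_TOKENS
        + output_tokens / 1000 * PRICE_PER_1K_OUTPUT_TOKENS
    )
    return per_call * invocations

# Example: a 2,000-token few-shot prompt with a 200-token answer, 30 calls per month.
print(f"${estimate_cost(2000, 200, 30):.4f} per month")
```

This is also why trimming few-shot examples (Question #76) lowers cost: the input token count falls on every invocation.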
**Question #84 (Topic 1)**
**A company is using Amazon SageMaker Studio notebooks to build and train ML models. The company stores the data in an Amazon S3 bucket. The company needs to manage the flow of data from Amazon S3 to SageMaker Studio notebooks.**
**Which solution will meet this requirement?**
A. Use Amazon Inspector to monitor SageMaker Studio.
B. Use Amazon Macie to monitor SageMaker Studio.
C. Configure SageMaker to use a VPC with an S3 endpoint. (Correct)
D. Configure SageMaker to use S3 Glacier Deep Archive.

**Question #85 (Topic 1)**
**A company has a foundation model (FM) that was customized by using Amazon Bedrock to answer customer queries about products. The company wants to validate the model's responses to new types of queries. The company needs to upload a new dataset that Amazon Bedrock can use for validation.**
**Which AWS service meets these requirements?**
A. Amazon S3 (Correct)
B. Amazon Elastic Block Store (Amazon EBS)
C. Amazon Elastic File System (Amazon EFS)
D. AWS Snowcone
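Question #85 points to Amazon S3 as the place to stage a validation dataset for Amazon Bedrock. A minimal upload sketch with boto3; the bucket name, local file, and object key are hypothetical placeholders.

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket and key -- Bedrock validation/customization jobs reference
# datasets by S3 URI, e.g. s3://example-bucket/validation/queries.jsonl.
s3.upload_file(
    Filename="validation_queries.jsonl",
    Bucket="example-bucket",
    Key="validation/queries.jsonl",
)
print("Uploaded to s3://example-bucket/validation/queries.jsonl")
```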