Questions and Answers
A company is training a foundation model (FM). The company wants to increase the accuracy of the model up to a specific acceptance level. Which solution will meet these requirements?
A company is building a large language model (LLM) question answering chatbot. The company wants to decrease the number of actions call center employees need to take to respond to customer questions. Which business objective should the company use to evaluate the effect of the LLM chatbot?
Which functionality does Amazon SageMaker Clarify provide?
A company is developing a new model to predict the prices of specific items. The model performed well on the training dataset. When the company deployed the model to production, the model's performance decreased significantly. What should the company do to mitigate this problem?
An ecommerce company wants to build a solution to determine customer sentiments based on written customer reviews of products. Which AWS services meet these requirements? (Choose two.)
A company wants to use large language models (LLMs) with Amazon Bedrock to develop a chat interface for the company's product manuals. The manuals are stored as PDF files. Which solution meets these requirements MOST cost-effectively?
A social media company wants to use a large language model (LLM) for content moderation. The company wants to evaluate the LLM outputs for bias and potential discrimination against specific groups or individuals. Which data source should the company use to evaluate the LLM outputs with the LEAST administrative effort?
A company wants to use a pre-trained generative AI model to generate content for its marketing campaigns. The company needs to ensure that the generated content aligns with the company's brand voice and messaging requirements. Which solution meets these requirements?
A loan company is building a generative AI-based solution to offer new applicants discounts based on specific business criteria. The company wants to build and use an AI model responsibly to minimize bias that could negatively affect some customers. Which actions should the company take to meet these requirements? (Choose two.)
A company is using an Amazon Bedrock base model to summarize documents for an internal use case. The company trained a custom model to improve the summarization quality. Which action must the company take to use the custom model through Amazon Bedrock?
A company needs to choose a model from Amazon Bedrock to use internally. The company must identify a model that generates responses in a style that the company's employees prefer. What should the company do to meet these requirements?
A student at a university is copying content from generative AI to write essays. Which challenge of responsible generative AI does this scenario represent?
A company needs to build its own large language model (LLM) based on only the company's private data. The company is concerned about the environmental effect of the training process. Which Amazon EC2 instance type has the LEAST environmental effect when training LLMs?
A company wants to build an interactive application for children that generates new stories based on classic stories. The company wants to use Amazon Bedrock and needs to ensure that the results and topics are appropriate for children. Which AWS service or feature will meet these requirements?
A company is building an application that needs to generate synthetic data that is based on existing data. Which type of model can the company use to meet this requirement?
A digital devices company wants to predict customer demand for memory hardware. The company does not have coding experience or knowledge of ML algorithms and needs to develop a data-driven predictive model. The company needs to perform analysis on internal data and external data. Which solution will meet these requirements?
A company has installed a security camera. The company uses an ML model to evaluate the security camera footage for potential thefts. The company has discovered that the model disproportionately flags people who are members of a specific ethnic group. Which type of bias is affecting the model output?
A company is building a customer service chatbot. The company wants the chatbot to improve its responses by learning from past interactions and online resources. Which AI learning strategy provides this self-improvement capability?
An AI practitioner has built a deep learning model to classify the types of materials in images. The AI practitioner now wants to measure the model performance. Which metric will help the AI practitioner evaluate the performance of the model?
A company has built a chatbot that can respond to natural language questions with images. The company wants to ensure that the chatbot does not return inappropriate or unwanted images. Which solution will meet these requirements?
An AI practitioner is using an Amazon Bedrock base model to summarize session chats from the customer service department. The AI practitioner wants to store invocation logs to monitor model input and output data. Which strategy should the AI practitioner use?
A company is building an ML model to analyze archived data. The company must perform inference on large datasets that are multiple GBs in size. The company does not need to access the model predictions immediately. Which Amazon SageMaker inference option will meet these requirements?
Which term describes the numerical representations of real-world objects and concepts that AI and natural language processing (NLP) models use to improve understanding of textual information?
A research company implemented a chatbot by using a foundation model (FM) from Amazon Bedrock. The chatbot searches for answers to questions from a large database of research papers. After multiple prompt engineering attempts, the company notices that the FM is performing poorly because of the complex scientific terms in the research papers. How can the company improve the performance of the chatbot?
A company wants to use a large language model (LLM) on Amazon Bedrock for sentiment analysis. The company needs the LLM to produce more consistent responses to the same input prompt. Which adjustment to an inference parameter should the company make to meet these requirements?
A company wants to develop a large language model (LLM) application by using Amazon Bedrock and customer data that is uploaded to Amazon S3. The company's security policy states that each team can access data for only the team's own customers. Which solution will meet these requirements?
A medical company deployed a disease detection model on Amazon Bedrock. To comply with privacy policies, the company wants to prevent the model from including personal patient information in its responses. The company also wants to receive notification when policy violations occur. Which solution meets these requirements?
A company manually reviews all submitted resumes in PDF format. As the company grows, the company expects the volume of resumes to exceed the company's review capacity. The company needs an automated system to convert the PDF resumes into plain text format for additional processing. Which AWS service meets this requirement?
An education provider is building a question and answer application that uses a generative AI model to explain complex concepts. The education provider wants to automatically change the style of the model response depending on who is asking the question. The education provider will give the model the age range of the user who has asked the question. Which solution meets these requirements with the LEAST implementation effort?
Which strategy evaluates the accuracy of a foundation model (FM) that is used in image classification tasks?
An accounting firm wants to implement a large language model (LLM) to automate document processing. The firm must proceed responsibly to avoid potential harms. What should the firm do when developing and deploying the LLM? (Choose two.)
A company is building an ML model. The company collected new data and analyzed the data by creating a correlation matrix, calculating statistics, and visualizing the data. Which stage of the ML pipeline is the company currently in?
A company has documents that are missing some words because of a database error. The company wants to build an ML model that can suggest potential words to fill in the missing text. Which type of model meets this requirement?
A company wants to display the total sales for its top-selling products across various retail locations in the past 12 months. Which AWS solution should the company use to automate the generation of graphs?
A company is building a chatbot to improve user experience. The company is using a large language model (LLM) from Amazon Bedrock for intent detection. The company wants to use few-shot learning to improve intent detection accuracy. Which additional data does the company need to meet these requirements?
A company is using few-shot prompting on a base model that is hosted on Amazon Bedrock. The model currently uses 10 examples in the prompt. The model is invoked once daily and is performing well. The company wants to lower the monthly cost. Which solution will meet these requirements?
An AI practitioner is using a large language model (LLM) to create content for marketing campaigns. The generated content sounds plausible and factual but is incorrect. Which problem is the LLM having?
An AI practitioner trained a custom model on Amazon Bedrock by using a training dataset that contains confidential data. The AI practitioner wants to ensure that the custom model does not generate inference responses based on confidential data. How should the AI practitioner prevent responses based on confidential data?
A company has built a solution by using generative AI. The solution uses large language models (LLMs) to translate training manuals from English into other languages. The company wants to evaluate the accuracy of the solution by examining the text generated for the manuals. Which model evaluation strategy meets these requirements?
A large retailer receives thousands of customer support inquiries about products every day. The customer support inquiries need to be processed and responded to quickly. The company wants to implement Agents for Amazon Bedrock. What are the key benefits of using Amazon Bedrock agents that could help this retailer?
Which option is a benefit of ongoing pre-training when fine-tuning a foundation model (FM)?
What are tokens in the context of generative AI models?
A company wants to assess the costs that are associated with using a large language model (LLM) to generate inferences. The company wants to use Amazon Bedrock to build generative AI applications. Which factor will drive the inference costs?
A company is using Amazon SageMaker Studio notebooks to build and train ML models. The company stores the data in an Amazon S3 bucket. The company needs to manage the flow of data from Amazon S3 to SageMaker Studio notebooks. Which solution will meet this requirement?
A company has a foundation model (FM) that was customized by using Amazon Bedrock to answer customer queries about products. The company wants to validate the model's responses to new types of queries. The company needs to upload a new dataset that Amazon Bedrock can use for validation. Which AWS service meets these requirements?
Flashcards
Increasing model accuracy
To enhance a foundation model's accuracy, increasing the training epochs is recommended.
Chatbot objective
Evaluating the chatbot's effect on reducing employee actions involves assessing the average call duration.
Amazon SageMaker Clarify
This service identifies potential bias during data preparation for machine learning processes.
Model performance issue
Amazon services for sentiment analysis
Cost-effective PDF interaction
Evaluating LLM outputs
Generative AI content alignment
Addressing bias in AI models
Custom model for summarization
Model style preference
Plagiarism in AI use
Environmentally friendly training
Children's story application
Synthetic data generation
Analyzing data for ML
ML model predictions
Model performance evaluation
Content moderation solution
Few-shot learning data
Monitoring model input/output
Model evaluation strategy
Ongoing model training benefits
Tokens in AI models
Inference cost factors
Data management in SageMaker
Study Notes
Exam B Questions and Answers
- Question 1: To increase a foundation model's (FM) accuracy up to a specific acceptance level, increase the number of training epochs.
- Question 2: The business objective for evaluating the LLM chatbot's effect on reducing call center employee actions is the average call duration.
- Question 3: Amazon SageMaker Clarify identifies potential bias during data preparation for ML models.
- Question 4: A model that performs well on training data but poorly in production (overfitting) can be mitigated by increasing the volume of data used in training.
- Question 5: AWS services for customer sentiment analysis are Amazon Lex and Amazon Comprehend (see the Amazon Comprehend sketch after this list).
- Question 6: The most cost-effective solution for a chat interface over PDF manuals is to upload the PDF documents to an Amazon Bedrock knowledge base (see the retrieval sketch after this list).
- Question 7: Benchmark datasets offer the easiest way to evaluate LLM bias.
- Question 8: A pre-trained generative AI model's adherence to brand voice and messaging requires clear prompts with context to guide its generation.
- Question 9: Avoiding bias in AI-based solutions requires detecting imbalances in data and evaluating the model's behavior.
- Question 10: To use the custom model through Amazon Bedrock, the company must purchase Provisioned Throughput for the custom model.
- Question 11: Evaluating models for employee preferences involves evaluating models using a human workforce and custom prompt datasets.
- Question 12: Copying content from generative AI to write essays represents plagiarism.
- Question 13: The Amazon EC2 Trn series has the lowest environmental impact for training LLMs.
- Question 14: Using Guardrails for Amazon Bedrock will make sure the results and topics are appropriate for children.
- Question 15: Generative adversarial networks (GANs) can generate synthetic data.
- Question 16: The solution for a digital device company predicting customer demand uses Amazon SageMaker Canvas.
- Question 17: A model that disproportionately flags a specific ethnic group in security footage is affected by sampling bias.
- Question 18: Reinforcement learning with rewards for positive customer feedback improves chatbot responses.
- Question 19: A confusion matrix is a useful way to evaluate the performance of a deep learning classification model (see the confusion matrix sketch after this list).
- Question 20: Implementing moderation APIs will prevent an image-based chatbot from returning inappropriate content.
- Question 21: Enabling invocation logging in Amazon Bedrock stores model input and output data for monitoring (see the logging sketch after this list).
- Question 22: Inference on large datasets that do not require immediate results is handled by Batch Transform in Amazon SageMaker.
- Question 23: Real-world object and concept representations used by AI models are known as embeddings.
- Question 24: To improve a chatbot's performance on research papers with complex scientific terms, adapt the foundation model through domain adaptation fine-tuning.
- Question 25: To produce more consistent outputs for the same prompt in sentiment analysis with an LLM, decrease the temperature value (see the inference parameter sketch after this list).
- Question 26: To ensure that each team can access only its own customers' data, create a custom Amazon Bedrock service role for each team with access limited to that team's data.
- Question 27: Guardrails for Amazon Bedrock can block personal patient information in responses, and Amazon CloudWatch alarms can notify the company when policy violations occur (see the guardrail sketch after this list).
- Question 28: Amazon Textract converts PDF documents into plain text (see the Textract sketch after this list).
- Question 29: The easiest solution to change a generative AI's response style based on user age involves adding a role description to the prompt.
- Question 30: Measuring a foundation model's accuracy is done by comparing the model's results against a benchmark dataset.
- Question 31: When training an LLM, include fairness metrics and modify training data to mitigate bias.
- Question 32: Creating a correlation matrix, calculating statistics, and visualizing data corresponds to the exploratory data analysis stage of the ML pipeline.
- Question 33: A BERT-based model (a masked language model) can suggest potential words to fill in missing text in documents.
- Question 34: Amazon QuickSight is suitable for automated graph generation.
- Question 35: Few-shot learning for intent detection requires example pairs of messages and their correct user intents to include in the prompt (see the few-shot prompt sketch after this list).
- Question 36: Decreasing the number of tokens in the prompt (for example, by using fewer few-shot examples) lowers the monthly cost, as long as performance is not affected.
- Question 37: Content that sounds plausible and factual but is incorrect indicates that the LLM is experiencing hallucinations.
- Question 38: To prevent inference responses based on confidential data, delete the custom model, remove the confidential data from the training dataset, and retrain the model.
- Question 39: Bilingual Evaluation Understudy (BLEU) is a metric for evaluating translation accuracy (see the BLEU sketch after this list).
- Question 40: Amazon Bedrock agents automate repetitive tasks and workflows.
- Question 41: Ongoing pre-training improves model performance over time.
- Question 42: Tokens are basic units of input and output in generative AI.
- Question 43: The cost of generating inferences with large language models (LLMs) is driven by the number of tokens consumed.
- Question 44: Configuring a VPC with an S3 endpoint allows data to flow from an S3 bucket to a SageMaker Studio notebook.
- Question 45: Amazon S3 is suitable for uploading datasets to validate a customized foundation model.
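Code Sketches

The sketches below are minimal, hedged illustrations of several answers in the study notes; region names, file names, IDs, and model identifiers are placeholders rather than values from the quiz. For Question 5, Amazon Comprehend's DetectSentiment API returns an overall sentiment label plus per-class confidence scores for a written review:

```python
# Minimal sketch: sentiment detection for a product review with Amazon Comprehend.
# The review text is made up; the region is an assumption.
import boto3

comprehend = boto3.client("comprehend", region_name="us-east-1")

review = "The memory card was fast and easy to install, but the packaging was damaged."

response = comprehend.detect_sentiment(Text=review, LanguageCode="en")

print(response["Sentiment"])       # POSITIVE, NEGATIVE, NEUTRAL, or MIXED
print(response["SentimentScore"])  # confidence score for each class
```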
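For Question 6, once the PDF manuals have been ingested into an Amazon Bedrock knowledge base, the RetrieveAndGenerate API can answer chat questions grounded in the manuals. The knowledge base ID and model ARN below are placeholders:

```python
# Minimal sketch, assuming a knowledge base has already been created from the PDF manuals.
import boto3

agent_runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

response = agent_runtime.retrieve_and_generate(
    input={"text": "How do I reset the device to factory settings?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "EXAMPLEKBID",  # placeholder knowledge base ID
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-haiku-20240307-v1:0",
        },
    },
)

print(response["output"]["text"])  # answer generated from the retrieved manual passages
```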
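For Question 19, a confusion matrix compares actual and predicted classes for a classification model. A small scikit-learn example with made-up material labels:

```python
# Minimal sketch: a confusion matrix for a material-classification model.
from sklearn.metrics import confusion_matrix

y_true = ["metal", "wood", "metal", "plastic", "wood", "plastic"]
y_pred = ["metal", "wood", "plastic", "plastic", "wood", "metal"]

labels = ["metal", "plastic", "wood"]
cm = confusion_matrix(y_true, y_pred, labels=labels)

print(labels)
print(cm)  # rows are actual classes, columns are predicted classes
```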
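For Question 21, invocation logging is enabled at the Amazon Bedrock account level and can deliver model input and output data to CloudWatch Logs and/or Amazon S3. The log group and IAM role names below are placeholders:

```python
# Minimal sketch, assuming the CloudWatch log group and IAM role already exist.
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

bedrock.put_model_invocation_logging_configuration(
    loggingConfig={
        "cloudWatchConfig": {
            "logGroupName": "/bedrock/invocation-logs",  # placeholder log group
            "roleArn": "arn:aws:iam::111122223333:role/BedrockLoggingRole",  # placeholder role
        },
        "textDataDeliveryEnabled": True,
        "imageDataDeliveryEnabled": False,
        "embeddingDataDeliveryEnabled": False,
    }
)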
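For Question 25, a low temperature makes token selection less random, so repeated calls with the same prompt produce more consistent sentiment labels. A sketch using the Bedrock Converse API (the model ID is an assumption):

```python
# Minimal sketch: low-temperature inference for more consistent sentiment classification.
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock_runtime.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # placeholder model ID
    messages=[
        {
            "role": "user",
            "content": [{"text": "Classify this review as POSITIVE, NEGATIVE, or NEUTRAL: 'Great battery life, terrible screen.'"}],
        }
    ],
    inferenceConfig={"temperature": 0.1, "topP": 0.9, "maxTokens": 20},
)

print(response["output"]["message"]["content"][0]["text"])
```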
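For Question 27, a guardrail created in Amazon Bedrock can be attached to each model call so that personal patient information is filtered; CloudWatch alarms on the guardrail's intervention metrics can then provide notifications. The guardrail ID, version, and model ID below are placeholders:

```python
# Minimal sketch: attaching an existing guardrail to a Converse call.
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock_runtime.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # placeholder model ID
    messages=[{"role": "user", "content": [{"text": "Summarize the latest patient case."}]}],
    guardrailConfig={
        "guardrailIdentifier": "examplegr1234",  # placeholder guardrail ID
        "guardrailVersion": "1",
    },
)

print(response["output"]["message"]["content"][0]["text"])
```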
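For Question 28, Amazon Textract's synchronous DetectDocumentText call extracts text from a single-page document; multi-page PDFs stored in S3 would use the asynchronous StartDocumentTextDetection operation instead. The file name is a placeholder:

```python
# Minimal sketch: converting a single-page resume into plain text with Amazon Textract.
import boto3

textract = boto3.client("textract", region_name="us-east-1")

with open("resume.pdf", "rb") as f:
    response = textract.detect_document_text(Document={"Bytes": f.read()})

lines = [block["Text"] for block in response["Blocks"] if block["BlockType"] == "LINE"]
print("\n".join(lines))
```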
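For Question 35, few-shot prompting means including a handful of labeled examples in the prompt itself. The example messages and intent labels below are made up:

```python
# Minimal sketch: building a few-shot prompt for intent detection.
examples = [
    ("Where is my order?", "ORDER_STATUS"),
    ("I want to send this item back.", "RETURN_REQUEST"),
    ("Do you have this shirt in blue?", "PRODUCT_AVAILABILITY"),
]

user_message = "My package never arrived."

prompt = "Classify the intent of the final message.\n\n"
for text, intent in examples:
    prompt += f"Message: {text}\nIntent: {intent}\n\n"
prompt += f"Message: {user_message}\nIntent:"

print(prompt)  # this prompt would be sent to the LLM on Amazon Bedrock
```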
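For Question 39, BLEU compares a machine translation against one or more reference translations by counting overlapping n-grams. A sentence-level example with NLTK and made-up text:

```python
# Minimal sketch: a sentence-level BLEU score with NLTK.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = [["insert", "the", "battery", "before", "closing", "the", "cover"]]
candidate = ["insert", "the", "battery", "before", "you", "close", "the", "cover"]

score = sentence_bleu(reference, candidate, smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {score:.3f}")
```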
Description
Test your understanding of foundation model (FM) training and the evaluation of AI systems. This quiz covers topics such as data bias in machine learning, customer sentiment analysis using AWS services, and effective use of LLMs. Challenge yourself with questions that explore core concepts in AI and machine learning.