Questions and Answers
A company has a binary classification model in production. An ML engineer needs to develop a new version of the model. The new model version must maximize correct predictions of positive labels and negative labels. The ML engineer must use a metric to recalibrate the model to meet these requirements. Which metric should the ML engineer use for the model recalibration?
A company is using Amazon SageMaker to create ML models. The company's data scientists need fine-grained control of the ML workflows that they orchestrate. The data scientists also need the ability to visualize SageMaker jobs and workflows as a directed acyclic graph (DAG). The data scientists must keep a running history of model discovery experiments and must establish model governance for auditing and compliance verifications. Which solution will meet these requirements?
A company wants to reduce the cost of its containerized ML applications. The applications use ML models that run on Amazon EC2 instances, AWS Lambda functions, and an Amazon Elastic Container Service (Amazon ECS) cluster. The EC2 workloads and ECS workloads use Amazon Elastic Block Store (Amazon EBS) volumes to save predictions and artifacts. An ML engineer must identify resources that are being used inefficiently. The ML engineer also must generate recommendations to reduce the cost of these resources. Which solution will meet these requirements with the LEAST development effort?
A company needs to create a central catalog for all the company's ML models. The models are in AWS accounts where the company developed the models initially. The models are hosted in Amazon Elastic Container Registry (Amazon ECR) repositories. Which solution will meet these requirements?
A company has developed a new ML model. The company requires online model validation on 10% of the traffic before the company fully releases the model in production. The company uses an Amazon SageMaker endpoint behind an Application Load Balancer (ALB) to serve the model. Which solution will set up the required online validation with the LEAST operational overhead?
A company's ML engineer has deployed an ML model for sentiment analysis to an Amazon SageMaker endpoint. The ML engineer needs to explain to company stakeholders how the model makes its predictions. Which solution will provide an explanation for the model's predictions?
Study Notes
Question 81
- A company needs a new binary classification model version.
- The new model must improve accuracy for both positive and negative predictions.
- The ML engineer needs a metric to recalibrate the model.
- The correct metric is accuracy, which measures the proportion of correct predictions across both the positive and negative classes (see the sketch below).
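As a quick illustration of why accuracy fits this requirement, it rewards correct positives and correct negatives equally in a single number. The snippet below is a minimal sketch using scikit-learn; the label arrays are made up for illustration.

```python
# Minimal sketch: accuracy counts correct positives and correct negatives together.
# The label arrays below are illustrative, not from the question.
from sklearn.metrics import accuracy_score, confusion_matrix

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(accuracy_score(y_true, y_pred))    # 0.75
print((tp + tn) / (tp + tn + fp + fn))   # same value: (TP + TN) / total
```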
Question 82
- A company uses Amazon SageMaker for ML model creation.
- Data scientists need fine-grained control of workflows, visualization as a Directed Acyclic Graph (DAG), running experiment history, and model governance.
- The solution that meets these requirements is to use SageMaker Pipelines, integrated with SageMaker Studio, to manage the entire ML workflow, together with SageMaker ML Lineage Tracking for auditing and compliance verification (see the sketch below).
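A rough sketch of how such a pipeline might be defined with the SageMaker Python SDK is shown below. The role ARN, image, S3 path, and names are placeholders, and this assumes the default lineage tracking that SageMaker records for pipeline steps; each execution renders as a DAG in SageMaker Studio.

```python
# Minimal sketch of a SageMaker Pipelines definition (role ARN, S3 path, and names are placeholders).
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput
from sagemaker.workflow.pipeline import Pipeline
from sagemaker.workflow.steps import TrainingStep

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder execution role

estimator = Estimator(
    image_uri=sagemaker.image_uris.retrieve("xgboost", session.boto_region_name, version="1.7-1"),
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    sagemaker_session=session,
)

train_step = TrainingStep(
    name="TrainModel",
    estimator=estimator,
    inputs={"train": TrainingInput("s3://example-bucket/train/")},  # placeholder data location
)

pipeline = Pipeline(name="model-discovery-pipeline", steps=[train_step], sagemaker_session=session)
pipeline.upsert(role_arn=role)  # create or update the pipeline definition
pipeline.start()                # each execution is visualized as a DAG in SageMaker Studio
```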
Question 83
- A company wants to reduce costs for containerized ML applications.
- The applications utilize EC2 instances, Lambda functions, and an ECS cluster.
- Amazon EBS volumes are used for storing predictions and artifacts.
- The ML engineer must identify inefficient resources and propose cost-reduction recommendations.
- The solution requiring the least development effort is to run AWS Compute Optimizer (see the sketch below).
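As a sketch of how little custom work this involves: once Compute Optimizer is opted in for the account, its recommendations for the resource types in the question can be pulled with a few boto3 calls, with no instrumentation of the workloads. The API calls are standard Compute Optimizer operations; the printing is illustrative.

```python
# Minimal sketch: pull Compute Optimizer recommendations for EC2, EBS, Lambda, and ECS.
# Assumes Compute Optimizer has already been opted in for the account.
import boto3

co = boto3.client("compute-optimizer")

ec2 = co.get_ec2_instance_recommendations()
ebs = co.get_ebs_volume_recommendations()
lam = co.get_lambda_function_recommendations()
ecs = co.get_ecs_service_recommendations()

for rec in ec2["instanceRecommendations"]:
    print(rec["instanceArn"], rec["finding"])   # e.g. OVER_PROVISIONED / UNDER_PROVISIONED
for rec in ebs["volumeRecommendations"]:
    print(rec["volumeArn"], rec["finding"])     # e.g. NotOptimized
```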
Question 84
- A company needs a central catalog for its ML models in various AWS accounts.
- Models are hosted in Amazon ECR repositories.
- The correct solution is to use Amazon SageMaker Model Registry in a new AWS account as the central catalog and replicate the models from the existing repositories (see the sketch below).
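A rough sketch of registering an ECR-hosted model container into a model package group in the central catalog account follows. The group name, image URI, and content types are placeholders, and cross-account access to the source ECR repositories is assumed to be in place.

```python
# Minimal sketch: catalog an ECR-hosted model in SageMaker Model Registry (central account).
# Group name, image URI, and content types are placeholders.
import boto3

sm = boto3.client("sagemaker")

sm.create_model_package_group(
    ModelPackageGroupName="central-model-catalog",
    ModelPackageGroupDescription="Company-wide catalog of ML models",
)

sm.create_model_package(
    ModelPackageGroupName="central-model-catalog",
    ModelPackageDescription="Model image replicated from the development account ECR",
    InferenceSpecification={
        "Containers": [{"Image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/example-model:1"}],
        "SupportedContentTypes": ["text/csv"],
        "SupportedResponseMIMETypes": ["text/csv"],
    },
    ModelApprovalStatus="PendingManualApproval",
)
```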
Question 85
- A company needs to validate a new ML model on 10% of traffic before deployment.
- The model is served through an Amazon SageMaker endpoint behind an ALB.
- The solution with the lowest operational overhead is to use production variants to add the new model to the existing SageMaker endpoint, assigning a weight of 0.1 to the new model, and to monitor invocations with Amazon CloudWatch (see the sketch below).
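A minimal sketch of the endpoint configuration that splits traffic 90/10 between the current and the new model is shown below; the model names, instance type, and endpoint name are placeholders. CloudWatch publishes per-variant Invocations metrics, so the split can be verified without extra code.

```python
# Minimal sketch: two production variants on one endpoint, weighted 0.9 / 0.1.
# Model names, instance type, and endpoint name are placeholders.
import boto3

sm = boto3.client("sagemaker")

sm.create_endpoint_config(
    EndpointConfigName="model-endpoint-config-v2",
    ProductionVariants=[
        {
            "VariantName": "current-model",
            "ModelName": "model-v1",
            "InstanceType": "ml.m5.xlarge",
            "InitialInstanceCount": 1,
            "InitialVariantWeight": 0.9,   # ~90% of requests
        },
        {
            "VariantName": "new-model",
            "ModelName": "model-v2",
            "InstanceType": "ml.m5.xlarge",
            "InitialInstanceCount": 1,
            "InitialVariantWeight": 0.1,   # ~10% of requests for online validation
        },
    ],
)

sm.update_endpoint(EndpointName="prod-endpoint", EndpointConfigName="model-endpoint-config-v2")
```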
Description
Test your knowledge on binary classification metrics and management of machine learning workflows using Amazon SageMaker. This quiz covers model accuracy, SageMaker Pipelines, and strategies for cost reduction in containerized ML applications.