
MLOps

Quiz • Computers • Professional Development • Hard
Lennart Lehmann
5 questions
1.
MULTIPLE CHOICE QUESTION
3 mins • 1 pt
As an ML specialist for a credit card company, you build a model to extract information from scanned bank application documents. These forms generate datasets containing customer personally identifiable information (PII) such as credit card numbers, names, and IDs. The resulting dataset needs to be highly accurate; your company has a very low tolerance for inaccurate bank application data.
How can you ensure the PII data remains encrypted and the credit card information is secure while ensuring highest level of quality from scanned forms?
Build a custom encryption algorithm to encrypt the data. Then store the data in your SageMaker instance in your private subnet within your VPC. Use SageMaker's DeepAR algorithm to randomize the PII. Finally, use Ground Truth to provide a human review of the image scan data.
Use an IAM policy to encrypt the data on your S3 bucket. Then use Kinesis Data Analytics to obfuscate the PII. Finally use SageMaker AutoPilot to provide a human review of the image scan data.
Use a SageMaker custom image to encrypt the data when it's loaded into a SageMaker instance in a private subnet within your VPC. Use SageMaker principal component analysis to obfuscate the PII.
Use AWS KMS to encrypt the data on S3 and in your SageMaker environment. Then obfuscate the PII from the customer data using AWS Comprehend. Finally, use SageMaker Augmented AI to provide a human review of the image scan data.
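The Comprehend-based obfuscation step mentioned in the options can be sketched in a few lines. This is a minimal illustration that assumes the PII entities, with character offsets in the same shape Comprehend's `DetectPiiEntities` API returns them, have already been retrieved; the sample text and entity list are made up:

```python
def redact_pii(text, entities, mask="***"):
    """Replace each detected PII span with a mask.

    `entities` mimics the shape of Comprehend's DetectPiiEntities
    response: a list of dicts with BeginOffset/EndOffset.
    """
    # Replace from the end of the string so earlier offsets stay valid.
    for ent in sorted(entities, key=lambda e: e["BeginOffset"], reverse=True):
        text = text[:ent["BeginOffset"]] + mask + text[ent["EndOffset"]:]
    return text


sample = "Card 4111111111111111 issued to Jane Doe"
entities = [
    {"Type": "CREDIT_DEBIT_NUMBER", "BeginOffset": 5, "EndOffset": 21},
    {"Type": "NAME", "BeginOffset": 32, "EndOffset": 40},
]
print(redact_pii(sample, entities))  # Card *** issued to ***
```

In a real pipeline the offsets would come from a `comprehend.detect_pii_entities(...)` call on the KMS-encrypted data after decryption inside your VPC.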
2.
MULTIPLE CHOICE QUESTION
2 mins • 1 pt
You're an ML Specialist at a government agency that has gathered election data it plans to use to predict election outcomes. Using the agile methodology, a minimum viable product for the model idea has been built using a relatively small data sample. Now your agency is ready to create an ML model using SageMaker Studio. The election data you plan to use for training is stored in RDS SQL Server databases.
Which option gives you the highest performing and most efficient method of loading the data into your SageMaker Studio environment?
Load the election data into your SageMaker Studio Jupyter notebook using a direct connection to the SQL Server database from the code in your notebook cell.
Use AWS Data Pipeline to load the election data from your SQL Servers to one of your S3 buckets. Load the data into your SageMaker Studio Jupyter notebook from the S3 bucket.
Use AWS Data Pipeline to load the election data from your SQL Servers to a set of DynamoDB tables. Then connect the DynamoDB tables to your SageMaker Studio Jupyter notebook from the code cells.
Use AWS DMS to load the election data from your SQL Servers to an ElastiCache cluster. Then connect your ElastiCache cluster to your SageMaker Studio Jupyter notebook from the code cells.
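For the S3-based option, reading the staged data into a Studio notebook is a one-liner with pandas. A minimal sketch, where the bucket and key names are assumed placeholders:

```python
# Assumed bucket/key for illustration; substitute your own.
bucket = "election-data"
key = "exports/election_results.csv"
s3_uri = f"s3://{bucket}/{key}"

# In a SageMaker Studio notebook (with s3fs installed and an
# execution role that allows s3:GetObject on the bucket), pandas
# can read the object directly:
#   import pandas as pd
#   df = pd.read_csv(s3_uri)
print(s3_uri)
```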
3.
MULTIPLE CHOICE QUESTION
2 mins • 1 pt
You're a ML specialist at a medical research institute where you're using a deep learning technique to analyze brain scan images. Your team has built and trained your model using the pre-built SageMaker container images for TensorFlow. You are now attempting to deploy it for inferences in production.
You have TensorFlow 2.3.0 installed with Horovod on GPU servers using Python 3.7. Your training job runs well, but when you deploy your container image for serving inferences, the container fails. Why is your inference container failing?
You cannot use TensorFlow in a SageMaker inference container image
The TensorFlow SageMaker inference container image only runs on CPU servers
You need to remove the Horovod operation from your inference container
You should use the DeepAR SageMaker built-in algorithm
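The failure mode behind this question can be avoided by keeping Horovod confined to the training code path, so the serving container loads a plain SavedModel with no Horovod dependency. A minimal sketch of the guard pattern (the function name and flag are illustrative, not part of any SageMaker API):

```python
def build_optimizer(base_optimizer, distributed=False):
    """Return the optimizer to use for this run.

    Horovod is imported only when training in distributed mode, so
    the serving container never needs the horovod package or its
    MPI/GPU dependencies. The exported SavedModel stays Horovod-free.
    """
    if distributed:
        import horovod.tensorflow as hvd  # training-only dependency
        hvd.init()
        return hvd.DistributedOptimizer(base_optimizer)
    return base_optimizer  # serving/export path: no Horovod import
```

The same idea applies to model export: save the model from rank 0 without any Horovod-wrapped operations, and the pre-built TensorFlow inference image can serve it as-is.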
4.
MULTIPLE CHOICE QUESTION
2 mins • 1 pt
You work as an ML specialist for a bank in their fraud detection department. You and your team have been tasked with creating a fraud detection model that can be used in an automated way. When a transaction is attempted on one of your bank's issued credit cards, the endpoint of your model will be called, and it should return an inference result flagging the transaction as fraudulent or not. Which ML services should you use to build your solution?
Lambda to receive HTTPS REST API inference requests and make inference requests to model endpoint. Deploy an XGBoost fraud detection model using SageMaker batch transform.
API Gateway to receive HTTPS REST API inference requests, Lambda to process inference requests from API Gateway and make inference requests to model endpoint. Deploy an XGBoost fraud detection model using SageMaker hosting service.
API Gateway to receive HTTPS REST API inference requests and make inference requests to model endpoint. Deploy an XGBoost fraud detection model using SageMaker hosting service.
Lambda to receive HTTPS REST API inference requests and make inference requests to model endpoint. Deploy an XGBoost fraud detection model using SageMaker hosting service.
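The Lambda-to-endpoint step that several of these options share can be sketched as a handler that proxies the request body to SageMaker Runtime's `InvokeEndpoint`. The endpoint name and CSV payload format are assumptions, and the client is injectable so the handler can be exercised without AWS credentials:

```python
import json


def lambda_handler(event, context, sm_client=None):
    """Proxy an API Gateway request to a SageMaker endpoint.

    `sm_client` defaults to boto3's sagemaker-runtime client; it is
    injectable here so the handler can be tested without AWS.
    """
    if sm_client is None:
        import boto3
        sm_client = boto3.client("sagemaker-runtime")
    payload = event["body"]  # CSV feature row, e.g. "0.1,42.0,1"
    resp = sm_client.invoke_endpoint(
        EndpointName="fraud-detection-endpoint",  # assumed name
        ContentType="text/csv",
        Body=payload,
    )
    score = float(resp["Body"].read())
    return {
        "statusCode": 200,
        "body": json.dumps({"fraud": score > 0.5, "score": score}),
    }
```

API Gateway would invoke this handler via Lambda proxy integration, which is what makes the endpoint callable over HTTPS by the transaction system.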
5.
MULTIPLE CHOICE QUESTION
2 mins • 1 pt
You work as an ML engineer for Insider. Your solution currently serves thousands of customers across different industries, with one model trained and in production per customer. Now, given your Albert Einstein-like abilities, you have created a cutting-edge model that beats an existing model on its test dataset and should replace it. Both models are packaged in Docker containers. Since the new model has never been tested in production, you would like to send a small subset of the inference traffic to your new model.
How can you implement the most efficient solution that allows for this?
Deploy the new model to a new endpoint and use a Lambda function to send 5% of traffic to the new endpoint, while sending the remaining 95% to the existing model.
Deploy the model to the same endpoint using an Endpoint configuration, where you specify both models with 5% traffic to the new model and 95% traffic to the existing model. Update the endpoint to serve both models.
Deploy the new model to the same endpoint using Elastic Inference. Then specify the 5% traffic to the new model and the remaining 95% traffic to the existing model. Update the endpoint to serve both models.
Get rid of the existing endpoint and deploy the models' Docker containers on two Lambda functions, one for each model. Place them behind a load balancer where you specify 5% traffic to the new model and 95% to the existing model.
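The production-variant traffic split described in these options maps onto SageMaker's `ProductionVariants` list in an endpoint configuration. A sketch of the request body, where model names, the variant names, and the instance type are assumed placeholders:

```python
# Sketch of a 95/5 canary split via SageMaker production variants.
variants = [
    {
        "VariantName": "existing-model",
        "ModelName": "model-v1",          # assumed model name
        "InitialInstanceCount": 1,
        "InstanceType": "ml.m5.large",    # assumed instance type
        "InitialVariantWeight": 0.95,
    },
    {
        "VariantName": "new-model",
        "ModelName": "model-v2",          # assumed model name
        "InitialInstanceCount": 1,
        "InstanceType": "ml.m5.large",
        "InitialVariantWeight": 0.05,
    },
]
# With boto3 this list would be passed as ProductionVariants to
# sagemaker.create_endpoint_config(...), followed by
# sagemaker.update_endpoint(...) on the existing endpoint.
# Weights are relative, so 0.95/0.05 yields a 95/5 traffic split.
print(sum(v["InitialVariantWeight"] for v in variants))
```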