PMLE M4

Assessment • Quiz • Other • Professional Development • Hard

Created by Mateusz Utracki


9 questions


1.

MULTIPLE CHOICE QUESTION

1 min • 1 pt

You are developing a process for training and running your custom model in production. You need to be able to show lineage for your model and predictions. What should you do?

1. Create a Vertex AI managed dataset. 2. Use a Vertex AI training pipeline to train your model. 3. Generate batch predictions in Vertex AI.

1. Use a Vertex AI Pipelines custom training job component to train your model. 2. Generate predictions by using a Vertex AI Pipelines model batch predict component.

1. Upload your dataset to BigQuery. 2. Use a Vertex AI custom training job to train your model. 3. Generate predictions by using Vertex AI SDK custom prediction routines.

1. Use Vertex AI Experiments to train your model. 2. Register your model in Vertex AI Model Registry. 3. Generate batch predictions in Vertex AI.
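For reference, a minimal sketch of wiring a Vertex AI Pipelines custom training job component and a model batch predict component together, so that Vertex ML Metadata records lineage for the model and its predictions. This assumes the google_cloud_pipeline_components v1 CustomTrainingJobOp and ModelBatchPredictOp components; the project, bucket, image, and model names are placeholders, not part of the question.

# Sketch only: custom training step followed by a batch prediction step in a
# Vertex AI pipeline. All names below are illustrative placeholders.
from kfp import dsl
from google_cloud_pipeline_components.types import artifact_types
from google_cloud_pipeline_components.v1.custom_job import CustomTrainingJobOp
from google_cloud_pipeline_components.v1.batch_predict_job import ModelBatchPredictOp

PROJECT = "my-project"
REGION = "us-central1"
MODEL_NAME = f"projects/{PROJECT}/locations/{REGION}/models/1234567890"  # placeholder

@dsl.pipeline(name="train-and-batch-predict")
def pipeline():
    train_op = CustomTrainingJobOp(
        project=PROJECT,
        location=REGION,
        display_name="custom-train",
        worker_pool_specs=[{
            "machine_spec": {"machine_type": "n1-standard-4"},
            "replica_count": 1,
            "container_spec": {"image_uri": f"gcr.io/{PROJECT}/trainer:latest"},
        }],
    )

    # In a full pipeline the model artifact would come from a model-upload step
    # fed by the training output; an importer stands in for it here.
    model_importer = dsl.importer(
        artifact_uri=f"https://{REGION}-aiplatform.googleapis.com/v1/{MODEL_NAME}",
        artifact_class=artifact_types.VertexModel,
        metadata={"resourceName": MODEL_NAME},
    )

    batch_predict = ModelBatchPredictOp(
        project=PROJECT,
        location=REGION,
        job_display_name="batch-predict",
        model=model_importer.outputs["artifact"],
        instances_format="jsonl",
        gcs_source_uris=["gs://my-bucket/batch-input/*.jsonl"],
        gcs_destination_output_uri_prefix="gs://my-bucket/batch-output/",
    )
    batch_predict.after(train_op)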

2.

MULTIPLE CHOICE QUESTION

1 min • 1 pt

You want to migrate a scikit-learn classifier model to TensorFlow. You plan to train the TensorFlow classifier model using the same training set that was used to train the scikit-learn model, and then compare the performances using a common test set. You want to use the Vertex AI Python SDK to manually log the evaluation metrics of each model and compare them based on their F1 scores and confusion matrices. How should you log the metrics?

Use the aiplatform.log_classification_metrics function to log the F1 score, and use the aiplatform.log_metrics function to log the confusion matrix.

Use the aiplatform.log_classification_metrics function to log the F1 score and the confusion matrix.

Use the aiplatform.log_metrics function to log the F1 score and the confusion matrix.

Use the aiplatform.log_metrics function to log the F1 score, and use the aiplatform.log_classification_metrics function to log the confusion matrix.
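For reference, a minimal sketch of logging both kinds of metrics to a Vertex AI experiment run with the Python SDK; the project, experiment name, run name, and metric values are placeholders.

# Sketch: logging a scalar metric and a confusion matrix for one experiment run.
from google.cloud import aiplatform

aiplatform.init(
    project="my-project",
    location="us-central1",
    experiment="classifier-comparison",
)

aiplatform.start_run("sklearn-baseline")

# Scalar metrics such as the F1 score go through log_metrics.
aiplatform.log_metrics({"f1_score": 0.87})

# The confusion matrix goes through log_classification_metrics.
aiplatform.log_classification_metrics(
    labels=["negative", "positive"],
    matrix=[[50, 5], [4, 41]],
    display_name="confusion-matrix",
)

aiplatform.end_run()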

3.

MULTIPLE CHOICE QUESTION

1 min • 1 pt

You have created a Vertex AI pipeline that automates custom model training. You want to add a pipeline component that enables your team to most easily collaborate when running different executions and comparing metrics both visually and programmatically. What should you do?

Add a component to the Vertex AI pipeline that logs metrics to a BigQuery table. Query the table to compare different executions of the pipeline. Connect BigQuery to Looker Studio to visualize metrics.

Add a component to the Vertex AI pipeline that logs metrics to a BigQuery table. Load the table into a pandas DataFrame to compare different executions of the pipeline. Use Matplotlib to visualize metrics.

Add a component to the Vertex AI pipeline that logs metrics to Vertex ML Metadata. Use Vertex AI Experiments to compare different executions of the pipeline. Use Vertex AI TensorBoard to visualize metrics.

Add a component to the Vertex AI pipeline that logs metrics to Vertex ML Metadata. Load the Vertex ML Metadata into a pandas DataFrame to compare different executions of the pipeline. Use Matplotlib to visualize metrics.
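As an illustration, a minimal sketch of a KFP component that logs metrics through the Metrics output artifact; when the pipeline runs on Vertex AI, these values are written to Vertex ML Metadata so executions can be compared in Vertex AI Experiments. The metric names and values are made up.

# Sketch: a pipeline component whose metrics land in Vertex ML Metadata.
from kfp import dsl
from kfp.dsl import Metrics, Output

@dsl.component(base_image="python:3.10")
def evaluate_model(metrics: Output[Metrics]):
    # In a real component these values would be computed from the trained model.
    metrics.log_metric("accuracy", 0.92)
    metrics.log_metric("f1_score", 0.88)

Because the metrics are stored as pipeline artifacts, they can also be read back programmatically (for example with aiplatform.get_pipeline_df) or inspected visually in the Experiments and TensorBoard UIs.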

4.

MULTIPLE CHOICE QUESTION

2 mins • 1 pt

You are investigating the root cause of a misclassification error made by one of your models. You used Vertex AI Pipelines to train and deploy the model. The pipeline reads data from BigQuery, creates a copy of the data in Cloud Storage in TFRecord format, trains the model in Vertex AI Training on that copy, and deploys the model to a Vertex AI endpoint. You have identified the specific version of the model that made the misclassification, and you need to recover the data this model was trained on. How should you find that copy of the data?

Use Vertex AI Feature Store. Modify the pipeline to use the feature store, and ensure that all training data is stored in it. Search the feature store for the data used for the training.

Use the lineage feature of Vertex AI Metadata to find the model artifact. Determine the version of the model and identify the step that creates the data copy and search in the metadata for its location.

Use the logging features in the Vertex AI endpoint to determine the timestamp of the model’s deployment. Find the pipeline run at that timestamp. Identify the step that creates the data copy, and search in the logs for its location.

Find the job ID in Vertex AI Training corresponding to the training for the model. Search in the logs of that job for the data used for the training.
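For orientation, a hedged sketch of looking up a pipeline-produced data artifact in Vertex ML Metadata with the Vertex AI SDK; the artifact display name and the filter syntax below are assumptions for illustration, not taken from the question.

# Hedged sketch: list metadata artifacts recorded by the pipeline; each
# artifact's URI points at the data copy it represents.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

for artifact in aiplatform.Artifact.list(
    filter='display_name="training-data-tfrecords"'  # placeholder display name
):
    print(artifact.uri, artifact.create_time)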

5.

MULTIPLE CHOICE QUESTION

1 min • 1 pt

You are developing a recommendation engine for an online clothing store. The historical customer transaction data is stored in BigQuery and Cloud Storage. You need to perform exploratory data analysis (EDA), preprocessing and model training. You plan to rerun these EDA, preprocessing, and training steps as you experiment with different types of algorithms. You want to minimize the cost and development effort of running these steps as you experiment. How should you configure the environment?

Create a Vertex AI Workbench user-managed notebook using the default VM instance, and use the %%bigquery magic commands in Jupyter to query the tables.

Create a Vertex AI Workbench managed notebook to browse and query the tables directly from the JupyterLab interface.

Create a Vertex AI Workbench user-managed notebook on a Dataproc Hub, and use the %%bigquery magic commands in Jupyter to query the tables.

Create a Vertex AI Workbench managed notebook on a Dataproc cluster, and use the spark-bigquery-connector to access the tables.
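For reference, a rough sketch of querying a table from a Workbench notebook with the BigQuery cell magic. Each block below is its own notebook cell; the project, dataset, and table names are placeholders, and the google.cloud.bigquery extension may need to be loaded first.

%load_ext google.cloud.bigquery

%%bigquery transactions_df
SELECT customer_id, product_id, purchase_amount, purchase_date
FROM `my-project.retail.transactions`
WHERE purchase_date >= DATE "2023-01-01"

transactions_df.head()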

6.

MULTIPLE CHOICE QUESTION

1 min • 1 pt

You work for a rapidly growing social media company. Your team builds TensorFlow recommender models in an on-premises CPU cluster. The data contains billions of historical user events and 100,000 categorical features. You notice that as the data increases, the model training time increases. You plan to move the models to Google Cloud. You want to use the most scalable approach that also minimizes training time. What should you do?

Deploy the training jobs by using TPU VMs with TPUv3 Pod slices, and use the TPUEmbedding API

Deploy the training jobs in an autoscaling Google Kubernetes Engine cluster with CPUs

Deploy a matrix factorization model training job by using BigQuery ML

Deploy the training jobs by using Compute Engine instances with A100 GPUs, and use the tf.nn.embedding_lookup API
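For reference, a toy TensorFlow sketch of an embedding lookup for categorical features; the vocabulary size, embedding width, and feature ids are made up.

import tensorflow as tf

# Toy embedding lookup for a categorical feature; sizes and ids are placeholders.
vocab_size = 100_000       # number of distinct categorical values
embedding_dim = 64         # width of each embedding vector

embedding_table = tf.Variable(
    tf.random.normal([vocab_size, embedding_dim]), name="user_embeddings"
)

# A batch of categorical feature ids.
feature_ids = tf.constant([[3, 17, 42], [8, 99, 5]])

# Gather the embedding vectors for each id: result shape is (2, 3, 64).
embedded = tf.nn.embedding_lookup(embedding_table, feature_ids)
print(embedded.shape)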

7.

MULTIPLE CHOICE QUESTION

1 min • 1 pt

You need to train an XGBoost model on a small dataset. Your training code requires custom dependencies. You want to minimize the startup time of your training job. How should you set up your Vertex AI custom training job?

Store the data in a Cloud Storage bucket, and create a custom container with your training application. In your training application, read the data from Cloud Storage and train the model.

Use the XGBoost prebuilt custom container. Create a Python source distribution that includes the data and installs the dependencies at runtime. In your training application, load the data into a pandas DataFrame and train the model.

Create a custom container that includes the data. In your training application, load the data into a pandas DataFrame and train the model.

Store the data in a Cloud Storage bucket, and use the XGBoost prebuilt custom container to run your training application. Create a Python source distribution that installs the dependencies at runtime. In your training application, read the data from Cloud Storage and train the model.
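As a hedged sketch, submitting a custom training job that runs a Python source distribution on a prebuilt XGBoost training container might look roughly like this; the container image URI, bucket paths, and module name are placeholders to verify against the current documentation.

# Sketch: Vertex AI custom training job using a prebuilt container and a
# Python source distribution, with training data read from Cloud Storage.
from google.cloud import aiplatform

aiplatform.init(
    project="my-project",
    location="us-central1",
    staging_bucket="gs://my-staging-bucket",
)

job = aiplatform.CustomPythonPackageTrainingJob(
    display_name="xgboost-train",
    python_package_gcs_uri="gs://my-staging-bucket/dist/trainer-0.1.tar.gz",
    python_module_name="trainer.task",
    container_uri="us-docker.pkg.dev/vertex-ai/training/xgboost-cpu.1-1:latest",
)

job.run(
    args=["--data-path", "gs://my-data-bucket/train.csv"],
    replica_count=1,
    machine_type="n1-standard-4",
)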

8.

MULTIPLE CHOICE QUESTION

1 min • 1 pt

You are pre-training a large language model on Google Cloud. This model includes custom TensorFlow operations in the training loop. Model training will use a large batch size, and you expect training to take several weeks. You need to configure a training architecture that minimizes both training time and compute costs. What should you do?

Implement 8 workers of a2-megagpu-16g machines by using tf.distribute.MultiWorkerMirroredStrategy.

Implement a TPU Pod slice with --accelerator-type=v4-128 by using tf.distribute.TPUStrategy.

Implement 16 workers of c2d-highcpu-32 machines by using tf.distribute.MirroredStrategy.

Implement 16 workers of a2-highgpu-8g machines by using tf.distribute.MultiWorkerMirroredStrategy.
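For reference, a minimal sketch of the usual tf.distribute.TPUStrategy setup in TensorFlow; the TPU name and the model definition are placeholders.

import tensorflow as tf

# Standard TPU initialization; "local" works when running on a TPU VM,
# otherwise pass the TPU name or address.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="local")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)

strategy = tf.distribute.TPUStrategy(resolver)

with strategy.scope():
    # Variables created here are placed and replicated across the TPU slice.
    model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
    model.compile(optimizer="adam", loss="mse")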

9.

MULTIPLE CHOICE QUESTION

1 min • 1 pt

You are developing an ML model to identify your company’s products in images. You have access to over one million images in a Cloud Storage bucket. You plan to experiment with different TensorFlow models by using Vertex AI Training. You need to read images at scale during training while minimizing data I/O bottlenecks. What should you do?

Load the images directly into the Vertex AI compute nodes by using Cloud Storage FUSE. Read the images by using the tf.data.Dataset.from_tensor_slices function.

Create a Vertex AI managed dataset from your image data. Access the AIP_TRAINING_DATA_URI environment variable to read the images by using the tf.data.Dataset.list_files function.

Convert the images to TFRecords and store them in a Cloud Storage bucket. Read the TFRecords by using the tf.data.TFRecordDataset function.

Store the URLs of the images in a CSV file. Read the file by using the tf.data.experimental.CsvDataset function.
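For reference, a minimal sketch of streaming TFRecords from Cloud Storage with tf.data; the bucket path, feature schema, image size, and batch size are placeholders.

import tensorflow as tf

# Sketch: an input pipeline that streams TFRecords stored in Cloud Storage.
filenames = tf.data.Dataset.list_files("gs://my-bucket/images/train-*.tfrecord")

dataset = tf.data.TFRecordDataset(filenames, num_parallel_reads=tf.data.AUTOTUNE)

feature_spec = {
    "image": tf.io.FixedLenFeature([], tf.string),
    "label": tf.io.FixedLenFeature([], tf.int64),
}

def parse(example):
    # Decode one serialized example into an image tensor and its label.
    parsed = tf.io.parse_single_example(example, feature_spec)
    image = tf.io.decode_jpeg(parsed["image"], channels=3)
    image = tf.image.resize(image, [224, 224])
    return image, parsed["label"]

dataset = (
    dataset.map(parse, num_parallel_calls=tf.data.AUTOTUNE)
    .batch(64)
    .prefetch(tf.data.AUTOTUNE)
)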