PDE-6

Professional Development • 51 Qs

Similar activities

Microsoft Azure Fundamentals AZ-900 ENG #5 (University - Professional Development, 55 Qs)

Google ACE - Set 2 (Professional Development, 53 Qs)

PCA-3 (Professional Development, 50 Qs)

PCA-4 (Professional Development, 50 Qs)

(Part 2) Cloud Essentials Study Guide (Professional Development, 48 Qs)

021122 (Professional Development, 55 Qs)

AWS Practitioner 12 (Professional Development, 48 Qs)

AWS CP Exam 04 (Professional Development, 55 Qs)

PDE-6

Assessment • Quiz • Professional Development • Hard

Created by Balamurugan R • Used 26+ times

51 questions


1.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

You want to migrate your Teradata data warehouse to BigQuery. You need to move historical data to BigQuery by using the most efficient method with the least programming, but local storage space in your existing data warehouse is limited. What should you do?

Use BigQuery Data Transfer Service by using the Java Database Connectivity (JDBC) driver with FastExport connection.

Create a Teradata Parallel Transporter (TPT) export script to export the historical data, and import to BigQuery by using the bq command-line tool.

Use BigQuery Data Transfer Service with the Teradata Parallel Transporter (TPT) tbuild utility.

Create a script to export the historical data, and upload it in batches to Cloud Storage. Set up a BigQuery Data Transfer Service instance from Cloud Storage to BigQuery.
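For context on the Cloud Storage option: once exported files land in a bucket, the batch load into BigQuery can be scripted with the Python client. A minimal sketch, assuming hypothetical bucket and table names:

from google.cloud import bigquery

client = bigquery.Client()

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    autodetect=True,  # infer the schema from the exported files
)

# Bucket and table names below are placeholders for illustration.
load_job = client.load_table_from_uri(
    "gs://my-migration-bucket/teradata-export/*.csv",
    "my-project.warehouse.sales_history",
    job_config=job_config,
)
load_job.result()  # block until the batch load completes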

2.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

You're on the data governance team and are implementing security requirements. All of your data in BigQuery must be encrypted by using an encryption key controlled by your team. The mechanism for generating and storing the encryption material, a hardware security module (HSM), can be implemented only on your premises; otherwise, you need to rely on Google's managed solutions. What should you do?

Create the encryption key in the on-premises HSM, and import it into a Cloud Key Management Service (Cloud KMS) key. Associate the created Cloud KMS key while creating the BigQuery resources.

Create the encryption key in the on-premises HSM and link it to a Cloud External Key Manager (Cloud EKM) key. Associate the created Cloud KMS key while creating the BigQuery resources.

Create the encryption key in the on-premises HSM, and import it into a Cloud HSM key in Cloud Key Management Service. Associate the created Cloud HSM key while creating the BigQuery resources.

Create the encryption key in the on-premises HSM. Create the BigQuery resources and encrypt the data while ingesting it into BigQuery.
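For reference, associating a customer-managed key with a BigQuery table through the Python client looks roughly like this; the project, dataset, and key names are hypothetical:

from google.cloud import bigquery

client = bigquery.Client()

# Placeholder resource names for illustration.
kms_key = "projects/my-project/locations/us/keyRings/my-ring/cryptoKeys/my-key"

table = bigquery.Table("my-project.governed_dataset.customers")
table.encryption_configuration = bigquery.EncryptionConfiguration(
    kms_key_name=kms_key
)
table = client.create_table(table)  # data in this table is encrypted with the CMEK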

3.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

You maintain an ETL pipeline. You notice that the streaming pipeline running on Dataflow takes a long time to process incoming data, which is delaying output. You've also noticed that Dataflow has automatically optimized the pipeline graph by merging steps into a single step. You want to identify where the possible bottleneck is occurring. What should you do?

Insert a Reshuffle operation after each processing step, and monitor the execution details in the Dataflow console.

Insert output sinks after each key processing step, and observe the writing throughput of each block.

Log debug information in each ParDo function, and analyze the logs at execution time.

Verify that the Dataflow service accounts have appropriate permissions to write the processed data to the output sinks.
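For context on the Reshuffle option: inserting beam.Reshuffle() between steps prevents Dataflow from fusing them, so each step reports its own execution details. A minimal Python sketch with a placeholder transform:

import apache_beam as beam

class ParseEvent(beam.DoFn):
    # Placeholder parsing step for illustration.
    def process(self, element):
        yield element.strip()

with beam.Pipeline() as pipeline:
    (
        pipeline
        | "Read" >> beam.Create(["event-a ", " event-b"])
        | "Parse" >> beam.ParDo(ParseEvent())
        | "BreakFusion" >> beam.Reshuffle()  # forces a fusion boundary
        | "Print" >> beam.Map(print)
    )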

4.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

You run a BigQuery project in the on-demand billing model and use a change data capture (CDC) process to capture data. Every 10 minutes, the CDC process loads 1 GB of data into a temporary table, which is then merged into a 10 TB target table. This process is very scan-intensive, and you want to explore options for enabling a predictable cost model. You need to create a BigQuery reservation based on utilization information gathered from BigQuery monitoring and apply the reservation to the CDC process. What should you do?

Create a BigQuery reservation for the dataset.

Create a BigQuery reservation for the job.

Create a BigQuery reservation for the service account running the job.

Create a BigQuery reservation for the project.
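For reference, reservations and assignments are managed through the BigQuery Reservation API; in Python that is the google-cloud-bigquery-reservation client. A rough sketch with hypothetical project names (assignments attach to a project, folder, or organization):

from google.cloud import bigquery_reservation_v1 as reservation_api

client = reservation_api.ReservationServiceClient()
parent = "projects/my-admin-project/locations/US"  # hypothetical admin project

# Create a reservation with a fixed slot capacity.
res = client.create_reservation(
    parent=parent,
    reservation_id="cdc-reservation",
    reservation=reservation_api.Reservation(slot_capacity=100),
)

# Assign a project to the reservation for query jobs.
client.create_assignment(
    parent=res.name,
    assignment=reservation_api.Assignment(
        assignee="projects/my-cdc-project",  # hypothetical workload project
        job_type=reservation_api.Assignment.JobType.QUERY,
    ),
)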

5.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

You are developing a fault-tolerant architecture for data stored in a regional BigQuery dataset. You must ensure that your application can recover from a corruption event in your tables that occurred within the last 7 days. You want to adopt managed services with the lowest RPO and the most cost-effective solution. What should you do?

Access historical data by using time travel in BigQuery.

Export the data from BigQuery into a new table that excludes the corrupted data.

Create a BigQuery table snapshot on a daily basis.

Migrate your data to multi-region BigQuery buckets.
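For context on the time travel option: BigQuery lets you query a table as it existed at a past point in time with FOR SYSTEM_TIME AS OF. A minimal sketch using the Python client and a hypothetical table name:

from google.cloud import bigquery

client = bigquery.Client()

# Read the table as it existed one day ago.
query = """
    SELECT *
    FROM `my-project.my_dataset.orders`
      FOR SYSTEM_TIME AS OF TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 DAY)
"""
for row in client.query(query).result():
    print(row)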

6.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

You're building a Dataflow pipeline that will capture noise level data from hundreds of sensors installed near construction sites in the city. The sensors measure the noise level every ten seconds and transmit that reading to the pipeline when levels exceed 70 dBA. You need to detect the average noise level from a sensor when data is received for a duration of more than 30 minutes, but the window should end when no data has been received for 15 minutes. What should you do?

Use session windows with a 15-minute gap duration.

Use session windows with a 30-minute gap duration.

Use hopping windows with a 15-minute window, and a 30-minute period.

Use tumbling windows with a 15-minute window and a 15-minute .withAllowedLateness operator.
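For context on the session window options: in Apache Beam, a session window stays open while events keep arriving within the gap duration and closes after a gap with no data. A minimal Python sketch; the sensor readings and event times are made up for illustration:

import apache_beam as beam
from apache_beam.transforms import window

readings = [("sensor-1", 72.5, 0), ("sensor-1", 74.0, 600)]  # (id, dBA, seconds)

with beam.Pipeline() as pipeline:
    (
        pipeline
        | "Create" >> beam.Create(readings)
        | "Stamp" >> beam.Map(
            lambda r: window.TimestampedValue((r[0], r[1]), r[2])
        )
        | "SessionWindow" >> beam.WindowInto(
            window.Sessions(15 * 60)  # close after 15 idle minutes
        )
        | "MeanPerSensor" >> beam.combiners.Mean.PerKey()
        | "Print" >> beam.Map(print)
    )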

7.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

You're designing a data model in BigQuery that stores retail transaction data. There is a tightly coupled, immutable relationship between your two largest tables, sales_transaction_header and sales_transaction_line. These tables are rarely modified after loading and are frequently joined when queried. You need to model the sales_transaction_header and sales_transaction_line tables to improve the performance of data analysis queries. What should you do?

Create a sales_transaction table that holds the sales_transaction_header information as rows and the sales_transaction_line rows as nested and repeated fields.

Create a sales_transaction table that holds the sales_transaction_header and sales_transaction_line information as rows, duplicating the sales_transaction_header data for each line.

Create a sales_transaction table that stores the sales_transaction_header and sales_transaction_line data as a JSON data type.

Create separate sales_transaction_header and sales_transaction_line tables and, when querying, specify the sales_transaction_line first in the WHERE clause.
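For context on the nested and repeated option: in the BigQuery Python client, a REPEATED RECORD column embeds the line items inside each header row, so the join is pre-materialized. A sketch with hypothetical field names:

from google.cloud import bigquery

client = bigquery.Client()

# Hypothetical schema: header fields plus nested, repeated line items.
schema = [
    bigquery.SchemaField("transaction_id", "STRING", mode="REQUIRED"),
    bigquery.SchemaField("transaction_date", "DATE"),
    bigquery.SchemaField(
        "lines",
        "RECORD",
        mode="REPEATED",
        fields=[
            bigquery.SchemaField("sku", "STRING"),
            bigquery.SchemaField("quantity", "INT64"),
            bigquery.SchemaField("amount", "NUMERIC"),
        ],
    ),
]

table = bigquery.Table("my-project.retail.sales_transaction", schema=schema)
client.create_table(table)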
