Cloud and Data Integration Systems



Quiz • 11th Grade • Hard

Created by Nadia Charcap

62 questions

1.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

You work for an advertising company, and you've developed a Spark ML model to predict click-through rates on advertisement blocks. You've been developing everything in your on-premises data center, and now your company is migrating to Google Cloud. Your data center will be closing soon, so a rapid lift-and-shift migration is necessary. However, the data you've been using will be migrated to BigQuery. You periodically retrain your Spark ML models, so you need to migrate existing training pipelines to Google Cloud. What should you do?

A. Use Vertex AI for training existing Spark ML models

B. Rewrite your models on TensorFlow, and start using Vertex AI

C. Use Dataproc for training existing Spark ML models, but start reading data directly from BigQuery

D. Spin up a Spark cluster on Compute Engine, and train Spark ML models on the data exported from BigQuery
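For context on option C: Dataproc runs existing Spark ML code unchanged, and the spark-bigquery connector lets a job read its training data straight from BigQuery, avoiding both a rewrite and an export step. A minimal sketch, assuming the connector is available on the cluster; the table, column, and bucket names are hypothetical:

```python
# Sketch: a Spark ML training job on Dataproc reading directly from
# BigQuery via the spark-bigquery connector. Names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("ctr-training").getOrCreate()

# Read training data directly from BigQuery instead of HDFS.
df = (spark.read.format("bigquery")
      .option("table", "my-project.ads.click_events")  # hypothetical table
      .load())

features = VectorAssembler(
    inputCols=["impressions", "position", "hour_of_day"],  # hypothetical columns
    outputCol="features").transform(df)

model = LogisticRegression(labelCol="clicked", featuresCol="features").fit(features)
model.write().overwrite().save("gs://my-bucket/models/ctr")  # hypothetical bucket
```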

2.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

You work for a global shipping company. You want to train a model on 40 TB of data to predict which ships in each geographic region are likely to cause delivery delays on any given day. The model will be based on multiple attributes collected from multiple sources. Telemetry data, including location in GeoJSON format, will be pulled from each ship and loaded every hour. You want to have a dashboard that shows how many and which ships are likely to cause delays within a region. You want to use a storage solution that has native functionality for prediction and geospatial processing. Which storage solution should you use?

A. BigQuery

B. Cloud Bigtable

C. Cloud Datastore

D. Cloud SQL for PostgreSQL
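For context: BigQuery is the only option listed with both native prediction (BigQuery ML) and native geospatial processing (BigQuery GIS). A hedged sketch of the kind of query involved, using the google-cloud-bigquery Python client; the table, column, and model names are all hypothetical:

```python
# Sketch: parse GeoJSON telemetry with BigQuery GIS and score ships with
# a BigQuery ML model. Table, column, and model names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()

query = """
SELECT
  region,
  ship_id,
  predicted_delay                -- output column name depends on the model's label
FROM
  ML.PREDICT(MODEL `my-project.shipping.delay_model`,     -- hypothetical model
    (SELECT
       ship_id,
       region,
       ST_GEOGFROMGEOJSON(location_geojson) AS position   -- native GeoJSON parsing
     FROM `my-project.shipping.telemetry`))               -- hypothetical table
WHERE predicted_delay > 0.5
"""

for row in client.query(query).result():
    print(row.region, row.ship_id, row.predicted_delay)
```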

3.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

You operate an IoT pipeline built around Apache Kafka that normally receives around 5000 messages per second. You want to use Google Cloud Platform to create an alert as soon as the moving average over 1 hour drops below 4000 messages per second. What should you do?

A. Consume the stream of data in Dataflow using Kafka IO. Set a sliding time window of 1 hour every 5 minutes. Compute the average when the window closes, and send an alert if the average is less than 4000 messages.

B. Consume the stream of data in Dataflow using Kafka IO. Set a fixed time window of 1 hour. Compute the average when the window closes, and send an alert if the average is less than 4000 messages.

C. Use Kafka Connect to link your Kafka message queue to Pub/Sub. Use a Dataflow template to write your messages from Pub/Sub to Bigtable. Use Cloud Scheduler to run a script every hour that counts the number of rows created in Bigtable in the last hour. If that number falls below 4000, send an alert.

D. Use Kafka Connect to link your Kafka message queue to Pub/Sub. Use a Dataflow template to write your messages from Pub/Sub to BigQuery. Use Cloud Scheduler to run a script every five minutes that counts the number of rows created in BigQuery in the last hour. If that number falls below 4000, send an alert.
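For context on option A: in Apache Beam (the SDK behind Dataflow), a one-hour sliding window that advances every five minutes is exactly a moving average updated every five minutes. A minimal sketch; the broker address and topic name are hypothetical, and the alert function is a stand-in for a real integration:

```python
# Sketch: moving average of message rate over a 1-hour sliding window
# advancing every 5 minutes. Kafka settings are hypothetical.
import apache_beam as beam
from apache_beam.transforms.window import SlidingWindows
from apache_beam.io.kafka import ReadFromKafka

def send_alert(avg_per_second):
    # Stand-in for a real alerting integration (e.g., Cloud Monitoring).
    print(f"ALERT: moving average {avg_per_second:.0f} msg/s is below 4000")

def check_rate(count):
    avg_per_second = count / 3600.0  # messages per second over the 1-hour window
    if avg_per_second < 4000:
        send_alert(avg_per_second)
    return avg_per_second

with beam.Pipeline() as p:
    (p
     | ReadFromKafka(consumer_config={"bootstrap.servers": "broker:9092"},
                     topics=["iot-events"])                  # hypothetical topic
     | beam.WindowInto(SlidingWindows(size=3600, period=300))
     | beam.CombineGlobally(beam.combiners.CountCombineFn()).without_defaults()
     | beam.Map(check_rate))
```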

4.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

You plan to deploy Cloud SQL using MySQL. You need to ensure high availability in the event of a zone failure. What should you do?

A. Create a Cloud SQL instance in one zone, and create a failover replica in another zone within the same region.

B. Create a Cloud SQL instance in one zone, and create a read replica in another zone within the same region.

C. Create a Cloud SQL instance in one zone, and configure an external read replica in a zone in a different region.

D. Create a Cloud SQL instance in a region, and configure automatic backup to a Cloud Storage bucket in the same region.
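For context on option A: with the Cloud SQL Admin API, a failover replica is created as a replica of the primary with the failoverTarget flag set. A rough sketch using the googleapiclient discovery client against the v1beta4 API; the project, instance names, tier, and zones below are all hypothetical:

```python
# Rough sketch: create a failover replica for an existing Cloud SQL (MySQL)
# primary via the Cloud SQL Admin API (v1beta4). Names are hypothetical.
from googleapiclient import discovery

service = discovery.build("sqladmin", "v1beta4")

replica_body = {
    "name": "mydb-failover",               # hypothetical replica name
    "masterInstanceName": "mydb-primary",  # existing primary instance
    "region": "us-central1",               # same region as the primary
    "replicaConfiguration": {"failoverTarget": True},
    "settings": {
        "tier": "db-n1-standard-2",
        "locationPreference": {"zone": "us-central1-b"},  # a different zone
    },
}

request = service.instances().insert(project="my-project", body=replica_body)
response = request.execute()
print(response["name"])  # operation name to poll for completion
```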

5.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

Your company is selecting a system to centralize data ingestion and delivery. You are considering messaging and data integration systems to address the requirements. The key requirements are:

- The ability to seek to a particular offset in a topic, possibly back to the start of all data ever captured
- Support for publish/subscribe semantics on hundreds of topics
- Retention of per-key ordering

Which system should you choose?

A. Apache Kafka

B. Cloud Storage

C. Dataflow

D. Firebase Cloud Messaging
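For context: arbitrary offset seeking is what singles out Kafka here, and per-key ordering falls out of Kafka's partitioning, since messages with the same key always land in the same partition, in order. A small sketch with the kafka-python client; the broker address and topic name are hypothetical:

```python
# Sketch: rewind a Kafka consumer to the start of a partition, or to a
# specific offset. Broker address and topic name are hypothetical.
from kafka import KafkaConsumer, TopicPartition

consumer = KafkaConsumer(bootstrap_servers="broker:9092")
partition = TopicPartition("orders", 0)
consumer.assign([partition])

# Seek back to the start of all data ever captured in this partition...
consumer.seek_to_beginning(partition)

# ...or to one particular offset.
consumer.seek(partition, 42_000)

for message in consumer:
    print(message.offset, message.key, message.value)
```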

6.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

You are planning to migrate your current on-premises Apache Hadoop deployment to the cloud. You need to ensure that the deployment is as fault-tolerant and cost-effective as possible for long-running batch jobs. You want to use a managed service. What should you do?

A. Deploy a Dataproc cluster. Use a standard persistent disk and 50% preemptible workers. Store data in Cloud Storage, and change references in scripts from hdfs:// to gs://

B. Deploy a Dataproc cluster. Use an SSD persistent disk and 50% preemptible workers. Store data in Cloud Storage, and change references in scripts from hdfs:// to gs://

C. Install Hadoop and Spark on a 10-node Compute Engine instance group with standard instances. Install the Cloud Storage connector, and store the data in Cloud Storage. Change references in scripts from hdfs:// to gs://

D. Install Hadoop and Spark on a 10-node Compute Engine instance group with preemptible instances. Store data in HDFS. Change references in scripts from hdfs:// to gs://
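For context on the hdfs:// to gs:// change the options mention: once data lives in Cloud Storage, the only code change an existing Spark job typically needs is the scheme of its input and output paths, because the Cloud Storage connector is preinstalled on Dataproc. A before/after sketch with hypothetical paths:

```python
# Sketch: the hdfs:// -> gs:// change referenced in the options.
# Bucket and path names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("batch-job").getOrCreate()

# Before (on-premises HDFS):
# df = spark.read.parquet("hdfs://namenode/data/events/")

# After (Cloud Storage, read through Dataproc's GCS connector):
df = spark.read.parquet("gs://my-bucket/data/events/")

df.groupBy("event_type").count().write.parquet("gs://my-bucket/output/counts/")
```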

7.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

Your team is working on a binary classification problem. You have trained a support vector machine (SVM) classifier with default parameters and obtained an area under the curve (AUC) of 0.87 on the validation set. You want to increase the AUC of the model. What should you do?

A. Perform hyperparameter tuning

B. Train a classifier with deep neural networks, because neural networks would always beat SVMs

C. Deploy the model and measure the real-world AUC; it's always higher because of generalization

D. Scale predictions you get out of the model (tune a scaling factor as a hyperparameter) in order to get the highest AUC
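For context on option A: hyperparameter tuning is the standard next step after a default-parameter baseline. A minimal scikit-learn sketch that searches SVM parameters against AUC directly; the dataset here is synthetic stand-in data:

```python
# Sketch: tune SVM hyperparameters for AUC with cross-validated grid search.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, random_state=0)  # stand-in data

search = GridSearchCV(
    SVC(),
    param_grid={"C": [0.1, 1, 10, 100], "gamma": ["scale", 0.01, 0.1, 1]},
    scoring="roc_auc",  # optimize the metric the team actually cares about
    cv=5,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```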
