CertyIQ - Google - Prof Data Eng - pt 5

Quiz • Computers • University • Hard
Katheryne Pierce
30 questions

1.
MULTIPLE CHOICE QUESTION
15 mins • 1 pt
You currently have a single on-premises Kafka cluster in a data center in the us-east region that is responsible for ingesting messages from IoT devices globally. Because large parts of the globe have poor internet connectivity, messages sometimes batch at the edge, arrive all at once, and cause a spike in load on your Kafka cluster. This is becoming difficult to manage and prohibitively expensive. What is the Google-recommended cloud-native architecture for this scenario?
Edge TPUs as sensor devices for storing and transmitting the messages.
Cloud Dataflow connected to the Kafka cluster to scale the processing of incoming messages.
An IoT gateway connected to Cloud Pub/Sub, with Cloud Dataflow to read and process the messages from Cloud Pub/Sub.
A Kafka cluster virtualized on Compute Engine in us-east with Cloud Load Balancing to connect to the devices around the world.
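Not part of the quiz, but a minimal stdlib sketch (illustration only, not Cloud Pub/Sub client code) of why the broker-based option helps here: a managed broker buffers bursty edge publishes so the downstream consumer drains them at its own steady rate instead of absorbing the spike directly.

```python
from collections import deque

# Toy broker: edge devices flush whole batches at once; the subscriber
# (e.g. a Dataflow pipeline) pulls at a fixed, manageable rate.
class ToyBroker:
    def __init__(self):
        self.queue = deque()

    def publish(self, messages):
        # A reconnecting device dumps its backlog in one burst.
        self.queue.extend(messages)

    def pull(self, max_messages):
        # Consumers control their own batch size and pace.
        batch = []
        while self.queue and len(batch) < max_messages:
            batch.append(self.queue.popleft())
        return batch

broker = ToyBroker()
broker.publish([f"device-{i}" for i in range(1000)])  # burst of 1000 messages

processed = 0
while True:
    batch = broker.pull(max_messages=100)  # steady consumption rate
    if not batch:
        break
    processed += len(batch)

print(processed)  # 1000: the burst was absorbed, then drained in batches of 100
```

The burst never reaches the processing tier as a spike; that decoupling is what the Kafka-on-premises setup in the question lacks.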
2.
MULTIPLE SELECT QUESTION
45 sec • 1 pt
You decided to use Cloud Datastore to ingest vehicle telemetry data in real time. You want to build a storage system that will account for the long-term data growth, while keeping the costs low. You also want to create snapshots of the data periodically, so that you can make a point-in-time (PIT) recovery, or clone a copy of the data for Cloud Datastore in a different environment. You want to archive these snapshots for a long time. Which two methods can accomplish this? (Choose two.)
Use managed export, and store the data in a Cloud Storage bucket using Nearline or Coldline class.
Use managed export, and then import to Cloud Datastore in a separate project under a unique namespace reserved for that export.
Use managed export, and then import the data into a BigQuery table created just for that export, and delete temporary export files.
Write an application that uses Cloud Datastore client libraries to read all the entities. Format the exported data into a JSON file. Apply compression before storing the data in Cloud Source Repositories.
Write an application that uses Cloud Datastore client libraries to read all the entities. Treat each entity as a BigQuery table row via BigQuery streaming insert. Assign an export timestamp for each export, and attach it as an extra column for each row. Make sure that the BigQuery table is partitioned using the export timestamp column.
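As a side note, the pattern in the last option can be sketched without any cloud client libraries: each export appends rows tagged with an export timestamp, and a point-in-time (PIT) view is just a filter on that column. A toy stdlib simulation (the list stands in for a BigQuery table partitioned on the timestamp column):

```python
from datetime import datetime, timezone

table = []  # stands in for a BigQuery table partitioned on export_ts

def export_snapshot(entities, export_ts):
    # Append every entity as a row, stamped with the export time.
    for key, value in entities.items():
        table.append({"key": key, "value": value, "export_ts": export_ts})

t1 = datetime(2024, 1, 1, tzinfo=timezone.utc)
t2 = datetime(2024, 2, 1, tzinfo=timezone.utc)
export_snapshot({"vehicle-1": "odo=100"}, t1)
export_snapshot({"vehicle-1": "odo=250", "vehicle-2": "odo=10"}, t2)

def pit_view(ts):
    # Recover the dataset exactly as it looked at export time `ts`.
    return {row["key"]: row["value"] for row in table if row["export_ts"] == ts}

print(pit_view(t1))  # {'vehicle-1': 'odo=100'}
```

Because old snapshots are never overwritten, any past export remains recoverable, which is the PIT-recovery property the question asks for.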
3.
MULTIPLE SELECT QUESTION
15 mins • 1 pt
You need to create a data pipeline that copies time-series transaction data so that it can be queried from within BigQuery by your data science team for analysis. Every hour, thousands of transactions are updated with a new status. The size of the initial dataset is 1.5 PB, and it will grow by 3 TB per day. The data is heavily structured, and your data science team will build machine learning models based on this data. You want to maximize performance and usability for your data science team. Which two strategies should you adopt? (Choose two.)
Denormalize the data as much as possible.
Preserve the structure of the data as much as possible
Use BigQuery UPDATE to further reduce the size of the dataset.
Develop a data pipeline where status updates are appended to BigQuery instead of updated.
Copy a daily snapshot of transaction data to Cloud Storage and store it as an Avro file. Use BigQuery's support for external data sources to query.
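For context on the append-instead-of-update option: each status change becomes a new row, and the current state is derived at query time by taking the latest row per transaction (roughly what a BigQuery window function such as `ROW_NUMBER() OVER (PARTITION BY tx_id ORDER BY ts DESC)` computes). A stdlib sketch of that pattern:

```python
# Append-only log of status changes; nothing is ever updated in place.
rows = [
    {"tx_id": "tx-1", "ts": 1, "status": "pending"},
    {"tx_id": "tx-2", "ts": 1, "status": "pending"},
    {"tx_id": "tx-1", "ts": 2, "status": "settled"},   # appended, not updated
]

def latest_status(rows):
    # Replay rows in time order; the last write per key wins.
    latest = {}
    for row in sorted(rows, key=lambda r: r["ts"]):
        latest[row["tx_id"]] = row["status"]
    return latest

print(latest_status(rows))  # {'tx-1': 'settled', 'tx-2': 'pending'}
```

Appending avoids BigQuery DML churn on a petabyte-scale table while preserving the full status history for the data science team.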
4.
MULTIPLE CHOICE QUESTION
15 mins • 1 pt
Create a Dataproc cluster with high availability. Store the data in HDFS, and perform analysis as needed.
Store the data in BigQuery. Access the data using the BigQuery Connector on Dataproc and Compute Engine.
Store the data in a regional Cloud Storage bucket. Access the bucket directly using Dataproc, BigQuery, and Compute Engine.
Store the data in a multi-regional Cloud Storage bucket. Access the data directly using Dataproc, BigQuery, and Compute Engine.
5.
MULTIPLE CHOICE QUESTION
15 mins • 1 pt
You have a petabyte of analytics data and need to design a storage and processing platform for it. You must be able to perform data warehouse-style analytics on the data in Google Cloud and expose the dataset as files for batch analysis tools in other cloud providers. What should you do?
Store and process the entire dataset in BigQuery.
Store and process the entire dataset in Bigtable.
Store the full dataset in BigQuery, and store a compressed copy of the data in a Cloud Storage bucket.
Store the warm data as files in Cloud Storage, and store the active data in BigQuery. Keep this ratio as 80% warm and 20% active.
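The "expose the dataset as files" half of this question can be illustrated with nothing but the standard library: write compressed newline-delimited records that any batch tool in another cloud can stream back. (A real pipeline would more likely export Avro or Parquet; JSONL stands in here because Avro needs a third-party library.)

```python
import gzip
import json
import os
import tempfile

# A few analytic rows standing in for the exported dataset.
rows = [{"id": i, "value": i * i} for i in range(3)]

# Write one compressed, newline-delimited export file.
path = os.path.join(tempfile.mkdtemp(), "export-000.jsonl.gz")
with gzip.open(path, "wt", encoding="utf-8") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")

# Any consumer can stream the file back without special tooling.
with gzip.open(path, "rt", encoding="utf-8") as f:
    restored = [json.loads(line) for line in f]

print(restored == rows)  # True
```

Plain files in object storage are the interoperability layer; the warehouse-style analytics happen on the copy kept in BigQuery.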
6.
MULTIPLE CHOICE QUESTION
15 mins • 1 pt
You work for a manufacturing company that sources up to 750 different components, each from a different supplier. You've collected a labeled dataset that has on average 1000 examples for each unique component. Your team wants to implement an app to help warehouse workers recognize incoming components based on a photo of the component. You want to implement the first working version of this app (as Proof-Of-Concept) within a few working days. What should you do?
Use Cloud Vision AutoML with the existing dataset.
Use Cloud Vision AutoML, but reduce your dataset twice.
Use Cloud Vision API by providing custom labels as recognition hints.
Train your own image recognition model leveraging transfer learning techniques.
7.
MULTIPLE CHOICE QUESTION
15 mins • 1 pt
You are working on a niche product in the image recognition domain. Your team has developed a model that is dominated by custom C++ TensorFlow ops your team has implemented. These ops are used inside your main training loop and are performing bulky matrix multiplications. It currently takes up to several days to train a model. You want to decrease this time significantly and keep the cost low by using an accelerator on Google Cloud. What should you do?
Use Cloud TPUs without any additional adjustment to your code.
Use Cloud TPUs after implementing GPU kernel support for your custom ops.
Use Cloud GPUs after implementing GPU kernel support for your custom ops.
Stay on CPUs, and increase the size of the cluster you're training your model on.