Cloud Project and Data Management Questions

11th Grade

62 Qs

Similar activities

Apex Unit 2 Test Study Guide (9th - 12th Grade, 61 Qs)
Spanish 2 ¡A comer! Lista 1 (9th - 12th Grade, 66 Qs)
Famille de The et Portrait (9th - 12th Grade, 61 Qs)
3.1 Descriptive Adjectives (9th - 12th Grade, 62 Qs)
Elementary Quechua I | Achahala 1 to 3 (University, 57 Qs)
Zorro - Chapter 3 Vocab (11th Grade - University, 62 Qs)
Mandarin Review 2 (KG - 12th Grade, 58 Qs)
Quiz on CAT Tools and Translation (University, 62 Qs)

Cloud Project and Data Management Questions

Assessment

Quiz

World Languages

11th Grade

Hard

Created by Nadia Charcap

62 questions

1.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

Your organization has two Google Cloud projects, project A and project B. In project A, you have a Pub/Sub topic that receives data from confidential sources. Only the resources in project A should be able to access the data in that topic. You want to ensure that project B and any future project cannot access data in the project A topic. What should you do?

Add firewall rules in project A so only traffic from the VPC in project A is permitted.

Configure VPC Service Controls in the organization with a perimeter around project A.

Use Identity and Access Management conditions to ensure that only users and service accounts in project A can access resources in project A.

Configure VPC Service Controls in the organization with a perimeter around the VPC of project A.
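
For reference on the VPC Service Controls option, below is a minimal sketch of creating a service perimeter around project A by calling the gcloud CLI from Python. The access policy ID, project number, and perimeter name are placeholders, an Access Context Manager policy must already exist, and this is only an illustration rather than a complete setup.

```python
# Illustrative sketch: create a VPC Service Controls perimeter around project A
# so that Pub/Sub data cannot be accessed from outside the perimeter.
# Assumes the gcloud CLI is installed and an access policy (POLICY_ID) exists.
import subprocess

POLICY_ID = "123456789"         # placeholder access policy ID
PROJECT_A_NUMBER = "111111111"  # placeholder project number for project A

subprocess.run(
    [
        "gcloud", "access-context-manager", "perimeters", "create", "project_a_perimeter",
        "--title=project-a-perimeter",
        f"--resources=projects/{PROJECT_A_NUMBER}",
        "--restricted-services=pubsub.googleapis.com",
        f"--policy={POLICY_ID}",
    ],
    check=True,
)
```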

2.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

You stream order data by using a Dataflow pipeline, and write the aggregated result to Memorystore. You provisioned a Memorystore for Redis instance with Basic Tier, 4 GB capacity, which is used by 40 clients for read-only access. You are expecting the number of read-only clients to increase significantly to a few hundred and you need to be able to support the demand. You want to ensure that read and write access availability is not impacted, and any changes you make can be deployed quickly. What should you do?

Create a new Memorystore for Redis instance with Standard Tier. Set capacity to 4 GB and read replica to No read replicas (high availability only). Delete the old instance.

Create a new Memorystore for Redis instance with Standard Tier. Set capacity to 5 GB and create multiple read replicas. Delete the old instance.

Create a new Memorystore for Memcached instance. Set a minimum of three nodes, and memory per node to 4 GB. Modify the Dataflow pipeline and all clients to use the Memcached instance. Delete the old instance.

Create multiple new Memorystore for Redis instances with Basic Tier (4 GB capacity). Modify the Dataflow pipeline and new clients to use all instances.
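
As a rough illustration of the Standard Tier option, the sketch below provisions a Memorystore for Redis instance with read replicas using the google-cloud-redis client. The project, region, instance ID, and replica count are placeholder assumptions.

```python
# Illustrative sketch only: provision a Standard Tier Memorystore for Redis
# instance with read replicas enabled, using the google-cloud-redis client.
# Project ID, region, instance ID, and replica count are placeholders.
from google.cloud import redis_v1

client = redis_v1.CloudRedisClient()

parent = "projects/my-project/locations/us-central1"  # placeholder location
instance = redis_v1.Instance(
    tier=redis_v1.Instance.Tier.STANDARD_HA,           # Standard Tier (HA)
    memory_size_gb=5,                                   # 5 GB capacity
    read_replicas_mode=redis_v1.Instance.ReadReplicasMode.READ_REPLICAS_ENABLED,
    replica_count=2,                                    # multiple read replicas
)

operation = client.create_instance(
    parent=parent,
    instance_id="orders-cache-standard",                # placeholder name
    instance=instance,
)
print(operation.result())  # blocks until the instance is created
```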

3.

MULTIPLE SELECT QUESTION

30 sec • 1 pt

You have a streaming pipeline that ingests data from Pub/Sub in production. You need to update this streaming pipeline with improved business logic. You need to ensure that the updated pipeline reprocesses the previous two days of delivered Pub/Sub messages. What should you do? (Choose two.)

A. Use the Pub/Sub subscription clean-retry-policy flag.

B. Use Pub/Sub Snapshot capture two days before the deployment.

C. Create a new Pub/Sub subscription two days before the deployment.

D. Use the Pub/Sub subscription retain-acked-messages flag.

E. Use Pub/Sub Seek with a timestamp.
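
To make the Snapshot and Seek options concrete, here is a minimal sketch using the google-cloud-pubsub client: capture a snapshot of the subscription before deploying the updated pipeline, or seek the subscription back two days by timestamp (which assumes the subscription retains acknowledged messages). Project, subscription, and snapshot names are placeholders.

```python
# Illustrative sketch: snapshot a subscription before deployment, then seek
# back so already-delivered messages are redelivered to the updated pipeline.
# Project, subscription, and snapshot names are placeholders.
from datetime import datetime, timedelta, timezone

from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()

subscription_path = subscriber.subscription_path("my-project", "orders-sub")
snapshot_path = subscriber.snapshot_path("my-project", "pre-deploy-snapshot")

# Option 1: capture a snapshot before the deployment, then seek to it afterwards.
subscriber.create_snapshot(request={"name": snapshot_path, "subscription": subscription_path})
subscriber.seek(request={"subscription": subscription_path, "snapshot": snapshot_path})

# Option 2: seek to a timestamp two days in the past (requires the subscription
# to retain acknowledged messages, i.e. retain_acked_messages enabled).
two_days_ago = datetime.now(timezone.utc) - timedelta(days=2)
subscriber.seek(request={"subscription": subscription_path, "time": two_days_ago})
```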

4.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

You currently use a SQL-based tool to visualize your data stored in BigQuery. The data visualizations require the use of outer joins and analytic functions. Visualizations must be based on data that is no less than 4 hours old. Business users are complaining that the visualizations are too slow to generate. You want to improve the performance of the visualization queries while minimizing the maintenance overhead of the data preparation pipeline. What should you do?

Create materialized views with the allow_non_incremental_definition option set to true for the visualization queries. Specify the max_staleness parameter to 4 hours and the enable_refresh parameter to true. Reference the materialized views in the data visualization tool.

Create views for the visualization queries. Reference the views in the data visualization tool.

Create a Cloud Function instance to export the visualization query results as Parquet files to a Cloud Storage bucket. Use Cloud Scheduler to trigger the Cloud Function every 4 hours. Reference the Parquet files in the data visualization tool.

Create materialized views for the visualization queries. Use the incremental updates capability of BigQuery materialized views to handle changed data automatically. Reference the materialized views in the data visualization tool.
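
For the materialized view option, a hedged sketch follows: it submits CREATE MATERIALIZED VIEW DDL with max_staleness, enable_refresh, and allow_non_incremental_definition through the BigQuery Python client. The project, dataset, tables, and query are invented placeholders, and the exact OPTIONS syntax should be verified against current BigQuery documentation.

```python
# Illustrative sketch: create a non-incremental materialized view with a
# 4-hour max_staleness, submitted via the BigQuery Python client.
# Project, dataset, view, and base table names are placeholders.
from google.cloud import bigquery

client = bigquery.Client()

ddl = """
CREATE MATERIALIZED VIEW `my-project.reporting.orders_mv`
OPTIONS (
  enable_refresh = true,
  refresh_interval_minutes = 60,
  max_staleness = INTERVAL "4:0:0" HOUR TO SECOND,
  allow_non_incremental_definition = true
)
AS
SELECT o.region, COUNT(*) AS order_count
FROM `my-project.sales.orders` AS o
LEFT OUTER JOIN `my-project.sales.returns` AS r USING (order_id)
GROUP BY o.region
"""

client.query(ddl).result()  # waits for the DDL statement to finish
```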

5.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

You need to modernize your existing on-premises data strategy. Your organization currently uses:
- Apache Hadoop clusters for processing multiple large data sets, including on-premises Hadoop Distributed File System (HDFS) for data replication.
- Apache Airflow to orchestrate hundreds of ETL pipelines with thousands of job steps.

You need to set up a new architecture in Google Cloud that can handle your Hadoop workloads and requires minimal changes to your existing orchestration processes. What should you do?

Use Bigtable for your large workloads, with connections to Cloud Storage to handle any HDFS use cases. Orchestrate your pipelines with Cloud Composer.

Use Dataproc to migrate Hadoop clusters to Google Cloud, and Cloud Storage to handle any HDFS use cases. Orchestrate your pipelines with Cloud Composer.

Use Dataproc to migrate Hadoop clusters to Google Cloud, and Cloud Storage to handle any HDFS use cases. Convert your ETL pipelines to Dataflow.

Use Dataproc to migrate your Hadoop clusters to Google Cloud, and Cloud Storage to handle any HDFS use cases. Use Cloud Data Fusion to visually design and deploy your ETL pipelines.
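
To illustrate why Dataproc plus Cloud Composer keeps the existing Airflow orchestration largely intact, here is a sketch of an Airflow DAG that submits an existing Hadoop job to a Dataproc cluster. The project, region, cluster name, bucket paths, and schedule are placeholder assumptions.

```python
# Illustrative sketch: an Airflow DAG (runnable on Cloud Composer) that submits
# an existing Hadoop MapReduce job to a Dataproc cluster, so the current Airflow
# orchestration needs only minimal changes. All names and paths are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.dataproc import DataprocSubmitJobOperator

HADOOP_JOB = {
    "placement": {"cluster_name": "migrated-hadoop-cluster"},
    "hadoop_job": {
        "main_jar_file_uri": "gs://my-bucket/jobs/etl-step.jar",
        "args": ["gs://my-bucket/input/", "gs://my-bucket/output/"],
    },
}

with DAG(
    dag_id="hadoop_etl_on_dataproc",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    run_hadoop_step = DataprocSubmitJobOperator(
        task_id="run_hadoop_step",
        project_id="my-project",
        region="us-central1",
        job=HADOOP_JOB,
    )
```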

6.

MULTIPLE SELECT QUESTION

30 sec • 1 pt

You recently deployed several data processing jobs into your Cloud Composer 2 environment. You notice that some tasks are failing in Apache Airflow. On the monitoring dashboard, you see an increase in the total workers memory usage, and there were worker pod evictions. You need to resolve these errors. What should you do? (Choose two.)

A. Increase the directed acyclic graph (DAG) file parsing interval.

B. Increase the Cloud Composer 2 environment size from medium to large.

C. Increase the maximum number of workers and reduce worker concurrency.

D. Increase the memory available to the Airflow workers.

E. Increase the memory available to the Airflow trigger.
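
As a rough sketch of the two remediation ideas (more memory per worker, and more workers with lower per-worker concurrency), the commands below drive gcloud from Python against a Cloud Composer 2 environment. The environment name, region, memory size, worker count, and concurrency value are placeholders, and flag names and value formats should be checked against the current gcloud reference.

```python
# Illustrative sketch (placeholder names and values): increase Airflow worker
# memory, allow more workers, and reduce per-worker concurrency in a
# Cloud Composer 2 environment via the gcloud CLI.
import subprocess

ENV = "my-composer-env"   # placeholder environment name
LOCATION = "us-central1"  # placeholder region

# Give each worker more memory and allow more workers overall.
subprocess.run(
    [
        "gcloud", "composer", "environments", "update", ENV,
        f"--location={LOCATION}",
        "--worker-memory=8GB",
        "--max-workers=6",
    ],
    check=True,
)

# Reduce how many tasks each worker runs concurrently (Airflow config override,
# applied in a separate update because it is a different flag group).
subprocess.run(
    [
        "gcloud", "composer", "environments", "update", ENV,
        f"--location={LOCATION}",
        "--update-airflow-configs=celery-worker_concurrency=8",
    ],
    check=True,
)
```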

7.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

You are on the data governance team and are implementing security requirements to deploy resources. You need to ensure that resources are limited to only the europe-west3 region. You want to follow Google-recommended practices. What should you do?

Set the constraints/gcp.resourceLocations organization policy constraint to in:europe-west3-locations.

Deploy resources with Terraform and implement a variable validation rule to ensure that the region is set to the europe-west3 region for all resources.

Set the constraints/gcp.resourceLocations organization policy constraint to in:eu-locations.

Create a Cloud Function to monitor all resources created and automatically destroy the ones created outside the europe-west3 region.
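
For the organization policy option, here is a minimal sketch using the google-cloud-org-policy client to restrict resource locations to europe-west3. The organization ID is a placeholder, and if a policy for this constraint already exists, update_policy would be used instead of create_policy.

```python
# Illustrative sketch (placeholder org ID): set the gcp.resourceLocations
# organization policy so new resources may only be created in europe-west3,
# using the google-cloud-org-policy client.
from google.cloud import orgpolicy_v2

ORG_ID = "123456789012"  # placeholder organization ID

client = orgpolicy_v2.OrgPolicyClient()

policy = orgpolicy_v2.Policy(
    name=f"organizations/{ORG_ID}/policies/gcp.resourceLocations",
    spec=orgpolicy_v2.PolicySpec(
        rules=[
            orgpolicy_v2.PolicySpec.PolicyRule(
                values=orgpolicy_v2.PolicySpec.PolicyRule.StringValues(
                    allowed_values=["in:europe-west3-locations"]
                )
            )
        ]
    ),
)

client.create_policy(parent=f"organizations/{ORG_ID}", policy=policy)
```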

