Estudio data 2

2nd Grade

58 Qs

Assessment

Quiz

Computers

Hard

Created by Yuliana Lopez

1.

MULTIPLE SELECT QUESTION

45 sec • 1 pt

51. As your organization expands its usage of GCP, many teams have started to create their own projects. Projects are further multiplied to accommodate different stages of deployments and target audiences. Each project requires unique access control configurations. The central IT team needs to have access to all projects. Furthermore, data from Cloud Storage buckets and BigQuery datasets must be shared for use in other projects in an ad hoc way. You want to simplify access control management by minimizing the number of policies. Which two steps should you take? (Choose two.)

A Only use service accounts when sharing data for Cloud Storage buckets and BigQuery datasets.

B For each Cloud Storage bucket or BigQuery dataset, decide which projects need access. Find all the active members who have access to these projects, and create a Cloud IAM policy to grant access to all these users.

C Use Cloud Deployment Manager to automate access provision.

D Introduce resource hierarchy to leverage access control policy inheritance.

E Create distinct groups for various teams, and specify groups in Cloud IAM policies.
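
As background for option E, group-based bindings keep IAM policies small because the policy references one group rather than every member. A minimal sketch of that difference, using hypothetical group and user names (not from the quiz):

```python
# Sketch of why group-based IAM bindings reduce policy churn.
# All member and group names here are hypothetical examples.

def binding_for(role, members):
    """Build one IAM-policy-style binding: a role mapped to members."""
    return {"role": role, "members": members}

# Granting each user individually: one member entry per person,
# repeated in every project's policy.
per_user = binding_for(
    "roles/viewer",
    ["user:alice@example.com", "user:bob@example.com", "user:carol@example.com"],
)

# Granting via a group: membership changes happen in the group, and
# the IAM policy itself never needs to be edited.
per_group = binding_for("roles/viewer", ["group:data-team@example.com"])

print(len(per_user["members"]))   # 3 entries to maintain per policy
print(len(per_group["members"]))  # 1 stable entry as the team changes
```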

2.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

52. You want to automate execution of a multi-step data pipeline running on Google Cloud. The pipeline includes Cloud Dataproc and Cloud Dataflow jobs that have multiple dependencies on each other. You want to use managed services where possible, and the pipeline will run every day. Which tool should you use?

A Cloud Composer

B cron

C Cloud Scheduler

D Workflow Templates on Cloud Dataproc
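
The pipeline described here is a dependency graph of jobs, which is exactly what Cloud Composer (managed Airflow) schedules. A stdlib-only sketch of resolving run order for dependent steps; the job names are hypothetical:

```python
from graphlib import TopologicalSorter

# Hypothetical daily pipeline: one Dataproc extract job feeding two
# Dataflow jobs, which both feed a final BigQuery load step. A
# Composer/Airflow DAG encodes this same {task: upstream tasks} shape.
deps = {
    "dataflow_clean": {"dataproc_extract"},
    "dataflow_enrich": {"dataproc_extract"},
    "load_bq": {"dataflow_clean", "dataflow_enrich"},
}

order = list(TopologicalSorter(deps).static_order())
print(order[0])   # dataproc_extract runs first (no upstream deps)
print(order[-1])  # load_bq runs last (waits on both Dataflow jobs)
```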

3.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

53. You work for a shipping company that has distribution centers where packages move on delivery lines to route them properly. The company wants to add cameras to the delivery lines to detect and track any visual damage to the packages in transit. You need to create a way to automate the detection of damaged packages and flag them for human review in real time while the packages are in transit. Which solution should you choose?

A Use BigQuery machine learning to train the model at scale, so you can analyze the packages in batches.

B Use TensorFlow to create a model that is trained on your corpus of images. Create a Python notebook in Cloud Datalab that uses this model so you can analyze for damaged packages.

C Train an AutoML model on your corpus of images, and build an API around that model to integrate with the package tracking applications.

D Use the Cloud Vision API to detect damage, and raise an alert through Cloud Functions. Integrate the package tracking applications with this function.

4.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

1. You need to choose a database to store time series CPU and memory usage for millions of computers. You need to store this data in one-second interval samples. Analysts will be performing real-time, ad hoc analytics against the database. You want to avoid being charged for every query executed and ensure that the schema design will allow for future growth of the dataset. Which database and data model should you choose?

A Create a narrow table in Bigtable with a row key that combines the Compute Engine computer identifier with the sample time at each second.

C Create a wide table in BigQuery, create a column for the sample value at each second, and update the row with the interval for each second

D Create a wide table in Bigtable with a row key that combines the computer identifier with the sample time at each minute, and combine the values for each second as column data.
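
The row-key design in option A can be sketched in a few lines: combining the machine identifier with the per-second sample timestamp makes rows for one machine sort contiguously by time. The identifier format below is a hypothetical illustration, not a Bigtable API call:

```python
from datetime import datetime, timezone

def row_key(machine_id: str, ts: datetime) -> str:
    """Compose a Bigtable-style narrow-table row key: '<machine>#<epoch-second>'.

    Leading with the machine id groups each machine's samples together;
    appending the epoch second keeps them in time order within that group.
    """
    return f"{machine_id}#{int(ts.timestamp())}"

k = row_key("vm-001", datetime(2024, 1, 1, 0, 0, 5, tzinfo=timezone.utc))
print(k)  # vm-001#1704067205
```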

5.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

2. You work for a large bank that operates in locations throughout North America. You are setting up a data storage system that will handle bank account transactions. You require ACID compliance and the ability to access data with SQL. Which solution is appropriate?

A Store transaction data in Cloud SQL. Use federated queries in BigQuery for analysis.

B Store transaction data in Cloud Spanner. Enable stale reads to reduce latency.

C Store transaction data in BigQuery. Disable the query cache to ensure consistency.

D Store transaction data in Cloud Spanner. Use locking read-write transactions.

6.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

3. You want to archive data in Cloud Storage. Because some data is very sensitive, you want to use the Trust No One (TNO) approach to encrypt your data to prevent the cloud provider staff from decrypting your data. What should you do?

A Specify customer-supplied encryption key (CSEK) in the .boto configuration file. Use gsutil cp to upload each archival file to the Cloud Storage bucket. Save the CSEK in a different project that only the security team can access.

B Specify customer-supplied encryption key (CSEK) in the .boto configuration file. Use gsutil cp to upload each archival file to the Cloud Storage bucket. Save the CSEK in Cloud Memorystore as permanent storage of the secret.

C Use gcloud kms keys create to create a symmetric key. Then use gcloud kms encrypt to encrypt each archival file with the key and unique additional authenticated data (AAD). Use gsutil cp to upload each encrypted file to the Cloud Storage bucket, and keep the AAD outside of Google Cloud.

D Use gcloud kms keys create to create a symmetric key. Then use gcloud kms encrypt to encrypt each archival file with the key. Use gsutil cp to upload each encrypted file to the Cloud Storage bucket. Manually destroy the key previously used for encryption, and rotate the key once.
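
The additional authenticated data (AAD) in option C binds the ciphertext to a value that must be presented again at decryption time, so keeping the AAD outside Google Cloud means the provider alone cannot complete decryption. A stdlib-only illustration of that binding, using HMAC as a stand-in for KMS's authenticated encryption (purely conceptual, not the KMS wire format):

```python
import hmac
import hashlib

def seal(key: bytes, ciphertext: bytes, aad: bytes) -> bytes:
    """Tag ciphertext together with AAD; mimics how AEAD binds the two."""
    return hmac.new(key, ciphertext + aad, hashlib.sha256).digest()

key = b"demo-key"          # stand-in for the KMS symmetric key
ct = b"encrypted archive"  # stand-in for gcloud kms encrypt output
tag = seal(key, ct, b"file-001.tar")

# Verification succeeds only when the exact same AAD is supplied.
print(hmac.compare_digest(tag, seal(key, ct, b"file-001.tar")))  # True
print(hmac.compare_digest(tag, seal(key, ct, b"file-002.tar")))  # False
```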

7.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

4. You work for a global shipping company. You want to train a model on 40 TB of data to predict which ships in each geographic region are likely to cause delivery delays on any given day. The model will be based on multiple attributes collected from multiple sources. Telemetry data, including location in GeoJSON format, will be pulled from each ship and loaded every hour. You want to have a dashboard that shows how many and which ships are likely to cause delays within a region. You want to use a storage solution that has native functionality for prediction and geospatial processing. Which storage solution should you use?

A Cloud SQL for PostgreSQL

B Cloud Datastore

C Cloud Bigtable

D BigQuery
