Google Professional Data Engineer - All Questions



Assessment • Quiz

Computers • Professional Development

Difficulty: Hard

Created by Steven Wong

Used 4+ times

321 questions

1.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

Your company built a TensorFlow neural-network model with a large number of neurons and layers. The model fits the training data well. However, when tested against new data, it performs poorly. What method can you employ to address this?

Threading

Serialization

Dropout Methods

Dimensionality Reduction

Answer explanation

Dropout methods randomly deactivate a fraction of neurons during each training step, which prevents a large TensorFlow model from overfitting the training data.
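As an illustration of the technique, here is a minimal Keras sketch; the layer widths, input shape, and 0.5 dropout rate are arbitrary assumptions, not taken from the question:

```python
import tensorflow as tf

# Minimal sketch: dropout layers between dense layers to reduce overfitting.
# Layer sizes, input shape, and the 0.5 rate are illustrative assumptions.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(20,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.5),  # randomly zeroes 50% of activations while training
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
```

Dropout is active only during training; at inference time Keras disables it automatically, so predictions use the full network.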

2.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

You are building a model to make clothing recommendations. You know a user's fashion preference is likely to change over time, so you build a data pipeline to stream new data back to the model as it becomes available. How should you use this data to train the model?

Continuously retrain the model on just the new data.

Continuously retrain the model on a combination of existing data and the new data.

Train on the existing data while using the new data as your test set.

Train on the new data while using the existing data as your test set.

Answer explanation

Retraining on the new data alone would let the model drift away from still-valid older preferences. Retraining on a combination of the existing data and the new data, shuffled well, keeps the training set representative of data across all scenarios.
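A minimal tf.data sketch of combining and shuffling the two sources; the array shapes, sizes, and buffer size are made-up stand-ins:

```python
import numpy as np
import tensorflow as tf

# Stand-in arrays for the existing corpus and a newly streamed batch.
old_x = np.random.rand(1000, 8).astype("float32")
old_y = np.random.rand(1000).astype("float32")
new_x = np.random.rand(100, 8).astype("float32")
new_y = np.random.rand(100).astype("float32")

existing_ds = tf.data.Dataset.from_tensor_slices((old_x, old_y))
new_ds = tf.data.Dataset.from_tensor_slices((new_x, new_y))

# Concatenate both sources and shuffle so each batch mixes old and new examples.
combined_ds = existing_ds.concatenate(new_ds).shuffle(buffer_size=1100).batch(64)
```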

3.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

You designed a database for patient records as a pilot project to cover a few hundred patients in three clinics. Your design used a single database table to represent all patients and their visits, and you used self-joins to generate reports. The server resource utilization was at 50%. Since then, the scope of the project has expanded. The database must now store 100 times more patient records. You can no longer run the reports, because they either take too long or they encounter errors with insufficient compute resources. How should you adjust the database design?

Add capacity (memory and disk space) to the database server by the order of 200.

Shard the tables into smaller ones based on date ranges, and only generate reports with prespecified date ranges.

Normalize the master patient-record table into the patient table and the visits table, and create other necessary tables to avoid self-join.

Partition the table into smaller tables, with one for each clinic. Run queries against the smaller table pairs, and use unions for consolidated reports.

Answer explanation

Normalizing the master record into separate patient and visits tables removes the need for self-joins, which improves report performance, and it imposes fewer constraints on reporting than pre-specified date ranges or one table per clinic.
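A sketch of the normalized layout, using sqlite3 in place of the actual database; the table and column names are hypothetical:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Normalized design: one row per patient, one row per visit,
# linked by a foreign key instead of a single self-joined table.
con.executescript("""
CREATE TABLE patient (
    patient_id INTEGER PRIMARY KEY,
    name       TEXT,
    clinic     TEXT
);
CREATE TABLE visit (
    visit_id   INTEGER PRIMARY KEY,
    patient_id INTEGER REFERENCES patient(patient_id),
    visit_date TEXT,
    notes      TEXT
);
""")

# Reports become a plain join between the two tables, not a self-join.
rows = con.execute("""
    SELECT p.name, COUNT(v.visit_id) AS visit_count
    FROM patient p JOIN visit v ON v.patient_id = p.patient_id
    GROUP BY p.name
""").fetchall()
```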

4.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

You create an important report for your large team in Google Data Studio 360. The report uses Google BigQuery as its data source. You notice that visualizations are not showing data that is less than 1 hour old. What should you do?

Disable caching by editing the report settings.

Disable caching in BigQuery by editing table details.

Refresh your browser tab showing the visualizations.

Clear your browser history for the past hour then reload the tab showing the visualizations.

Answer explanation

By default, Google Data Studio 360 caches data to improve performance and reduce the number of queries made to the data source. However, this can cause visualizations to omit data that is less than 1 hour old, because the cached data is stale. To resolve this, disable caching by editing the report settings.

5.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

An external customer provides you with a daily dump of data from their database. The data flows into Google Cloud Storage (GCS) as comma-separated values (CSV) files. You want to analyze this data in Google BigQuery, but the data could have rows that are formatted incorrectly or corrupted. How should you build this pipeline?

Use federated data sources, and check data in the SQL query.

Enable BigQuery monitoring in Google Stackdriver and create an alert.

Import the data into BigQuery using the gcloud CLI and set max_bad_records to 0.

Run a Google Cloud Dataflow batch pipeline to import the data into BigQuery, and push errors to another dead-letter table for analysis.

Answer explanation

This scenario calls for an ETL pipeline: a Cloud Dataflow batch pipeline can validate each row as it imports the data into BigQuery and push malformed or corrupted rows to a dead-letter table for later analysis. See "Handling invalid inputs in Dataflow" in the Google Cloud documentation.
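A sketch of the dead-letter pattern with the Apache Beam Python SDK; the bucket path, table names, and three-column row format are hypothetical, and the BigQuery schema and write-disposition options are omitted for brevity:

```python
import apache_beam as beam

class ParseCsvWithDeadLetter(beam.DoFn):
    """Emit parsed rows on the main output and bad rows on a 'dead_letter' tag."""

    def process(self, line):
        fields = line.split(",")
        if len(fields) != 3:  # the expected column count is an assumption
            yield beam.pvalue.TaggedOutput("dead_letter", {"raw_line": line})
            return
        yield {"id": fields[0], "name": fields[1], "value": fields[2]}

with beam.Pipeline() as p:
    parsed = (
        p
        | "Read" >> beam.io.ReadFromText("gs://customer-dumps/daily/*.csv")
        | "Parse" >> beam.ParDo(ParseCsvWithDeadLetter()).with_outputs(
            "dead_letter", main="good")
    )
    # Good rows land in the main table; bad rows go to a dead-letter table.
    parsed.good | "LoadGood" >> beam.io.WriteToBigQuery("project:dataset.records")
    parsed.dead_letter | "LoadBad" >> beam.io.WriteToBigQuery("project:dataset.dead_letter")
```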

6.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

Your weather app queries a database every 15 minutes to get the current temperature. The frontend is powered by Google App Engine and serves millions of users. How should you design the frontend to respond to a database failure?

Issue a command to restart the database servers.

Retry the query with exponential backoff, up to a cap of 15 minutes.

Retry the query every second until it comes back online to minimize staleness of data.

Reduce the query frequency to once every hour until the database comes back online.

Answer explanation

App Engine applications should use Cloud SQL database connections carefully; the Google Cloud documentation recommends the following. If your application attempts to connect to the database and does not succeed, the database may be temporarily unavailable. In that case, sending too many simultaneous connection requests can waste additional database resources and increase the time needed to recover. Using exponential backoff prevents your application from flooding the database with connection requests when it cannot connect. Note that this retry applies when first connecting, or when first grabbing a connection from the pool; if an error happens in the middle of a transaction, the application must do the retrying, starting from the beginning of the transaction. So even if your pool is configured properly, the application might still see errors if connections are lost.
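A sketch of that retry policy; run_query is a hypothetical callable, and ConnectionError stands in for whatever transient error the actual database driver raises:

```python
import random
import time

def query_with_backoff(run_query, cap_seconds=15 * 60):
    """Retry a failing query with exponential backoff, capped at 15 minutes."""
    delay = 1.0
    while True:
        try:
            return run_query()
        except ConnectionError:
            # Wait for the current delay plus jitter, then double it up to the cap.
            time.sleep(delay + random.uniform(0, 1))
            delay = min(delay * 2, cap_seconds)
```

The jitter spreads retries from many frontend instances over time, so a recovering database is not hit by a synchronized wave of reconnects.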

7.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

You are creating a model to predict housing prices. Due to budget constraints, you must run it on a single resource-constrained virtual machine. Which learning algorithm should you use?

Linear regression

Logistic classification

Recurrent neural network

Feedforward neural network

Answer explanation

If you are forecasting, that is, the column you are predicting is numeric, the choice is linear regression. If you are classifying (buy or no buy, yes or no), you would use logistic regression. Linear regression is also computationally cheap, so it runs comfortably on a single resource-constrained virtual machine.
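To illustrate why linear regression suits a resource-constrained machine, here is an ordinary-least-squares fit using only NumPy; the housing numbers are made up:

```python
import numpy as np

# Made-up toy data: square footage vs. sale price.
sqft = np.array([800, 1200, 1500, 2000, 2400], dtype=float)
price = np.array([150_000, 210_000, 250_000, 320_000, 370_000], dtype=float)

# Ordinary least squares with an intercept column; this solves in closed form,
# needing far less memory and compute than training a neural network.
X = np.column_stack([np.ones_like(sqft), sqft])
(intercept, slope), *_ = np.linalg.lstsq(X, price, rcond=None)

print(f"Predicted price for 1800 sqft: ${intercept + slope * 1800:,.0f}")
```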
