Day 8 - GCP ML



Assessment • Quiz • Hard

Professional Development

Created by CloudThat Technologies

8 questions

1.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

Your team is building a neural network model using TensorFlow. The preliminary experiments running on your on-premises CPU-only infrastructure were encouraging but showed slow convergence. You have been asked to speed up model training to reduce time-to-market. You want to experiment with virtual machines (VMs) on Google Cloud to leverage more powerful hardware. Which environment should you train your model on?

A VM on Compute Engine and 1 TPU with all dependencies installed manually

A VM on Compute Engine and 8 GPUs with all dependencies installed manually

A Deep Learning VM with an n1-standard-2 machine and 1 GPU with all libraries pre-installed

A Deep Learning VM with more powerful CPU e2-highcpu-16 machines with all libraries pre-installed

2.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

You trained a deep neural network model on Google Cloud. Although the model has a low loss on the training data, it performs poorly on the validation data. The model needs to be resistant to overfitting. Which approach should you consider while retraining the model?

 Apply a dropout parameter of 0.3, and decrease the learning rate by a factor of 5.

 Apply a regularization parameter of 0.4, and decrease the learning rate by a factor of 10.

 Run a hyperparameter tuning job on AI Platform to optimize for the regularization and dropout parameters.

 Run a hyperparameter tuning job on AI Platform to optimize for the learning rate, and increase the number of neurons by a factor of 2.
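The options above revolve around dropout and regularization as defenses against overfitting. As a framework-neutral illustration of what a dropout rate of 0.3 actually does during training, here is a minimal NumPy sketch of inverted dropout (the helper name and shapes are illustrative, not from the quiz):

```python
import numpy as np

def dropout(activations, rate=0.3, rng=None):
    """Inverted dropout: zero out roughly `rate` of the units and rescale
    the survivors by 1/keep_prob, so the expected activation magnitude is
    unchanged and no rescaling is needed at inference time."""
    if rng is None:
        rng = np.random.default_rng(0)
    keep_prob = 1.0 - rate
    mask = rng.random(activations.shape) < keep_prob
    return activations * mask / keep_prob

a = np.ones((4, 5))
dropped = dropout(a, rate=0.3)
# Entries are either 0 (dropped) or 1/0.7 (kept and rescaled).
```

At inference time the function is simply not applied, which is why the rescaling during training matters.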

3.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

You need to build an input pipeline for an ML training model that processes images, but your input data does not fit in memory. Following Google's best practice, how should you create the dataset?

 Create a tf.data.Dataset.prefetch transformation

 Convert the images to tf.Tensor objects, and then run Dataset.from_tensor_slices().

 Convert the images to tf.Tensor objects, and then run tf.data.Dataset.from_tensors().

 Convert the images into TFRecords, store the images in Cloud Storage, and then use the tf.data API to read the images for training.
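The TFRecord option above can be sketched end to end. This is a minimal, hedged example of serializing image bytes into a TFRecord file and streaming them back with the tf.data API; the local file path and the `image_raw` feature name are illustrative, and in the exam scenario the file would live in Cloud Storage (a `gs://` path) rather than on local disk:

```python
import tensorflow as tf

path = "images.tfrecord"  # in practice: a gs://bucket/... path in Cloud Storage

# Write: serialize each image as a tf.train.Example record.
with tf.io.TFRecordWriter(path) as writer:
    for raw in [b"img0-bytes", b"img1-bytes"]:
        example = tf.train.Example(features=tf.train.Features(feature={
            "image_raw": tf.train.Feature(
                bytes_list=tf.train.BytesList(value=[raw])),
        }))
        writer.write(example.SerializeToString())

# Read: stream records from disk so the dataset never has to fit in memory.
def parse(record):
    return tf.io.parse_single_example(
        record, {"image_raw": tf.io.FixedLenFeature([], tf.string)})

dataset = (tf.data.TFRecordDataset(path)
           .map(parse, num_parallel_calls=tf.data.AUTOTUNE)
           .batch(2)
           .prefetch(tf.data.AUTOTUNE))
```

Because `TFRecordDataset` streams records lazily, only the batches currently in flight occupy memory, which is the point of this answer choice.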

4.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

You are using a structured dataset with 100 billion records spread across many CSV files to train a TensorFlow model. You need to improve input/output execution performance. What should you do?

Convert the CSV files into shards of TFRecords, and store the data in the DataProc Cluster.

Convert the CSV files into shards of TFRecords, and store the data in Cloud Storage.

Load the data into BigQuery, and read the data from BigQuery

Load the data into Cloud Bigtable, and read the data from Bigtable

5.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

All workers independently train over the input data and update variables asynchronously. Each worker only processes requests from the coordinator and communicates with the parameter servers, without direct interaction with other workers in the cluster. Which distribution strategy is best suited here?

MirroredStrategy

ParameterServerStrategy

MultiWorkerMirroredStrategy

TPUStrategy
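To make the asynchronous parameter-server idea concrete, here is a hedged, framework-free sketch: several workers compute gradients independently against a shared parameter store and push updates without coordinating with one another. In TensorFlow this pattern corresponds to `tf.distribute.ParameterServerStrategy`; the toy loss and learning rate below are illustrative only:

```python
import threading

# Shared parameter store (playing the role of the "parameter server").
params = {"w": 0.0}
lock = threading.Lock()

def worker(data_shard, lr=0.1):
    """Each worker reads the current parameters, computes a gradient on its
    own data shard, and pushes the update back -- it never talks to the
    other workers, only to the parameter store."""
    for x in data_shard:
        with lock:  # protects the read-modify-write; workers stay unsynchronized
            w = params["w"]
            grad = 2 * (w - x)          # gradient of (w - x)^2 w.r.t. w
            params["w"] = w - lr * grad

shards = [[1.0, 1.0], [3.0, 3.0]]       # two workers, two data shards
threads = [threading.Thread(target=worker, args=(s,)) for s in shards]
for t in threads:
    t.start()
for t in threads:
    t.join()
# params["w"] drifts toward a compromise between the shards' targets.
```

Contrast this with MirroredStrategy or MultiWorkerMirroredStrategy, where replicas apply the *same* synchronized update each step via an all-reduce.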

6.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

You work for a gaming company that develops and manages a popular massively multiplayer online (MMO) game. Your team has developed an ML model with TensorFlow that predicts the next move of each player. How should you configure the deployment to have low latency serving?

Use a Cloud TPU to optimize model training speed.

Use AI Platform Prediction with a high-memory machine type to get a batch prediction for the players.

Use AI Platform Prediction with a NVIDIA GPU to make real-time predictions.

Use AI Platform Prediction with a high-CPU machine type to get a batch prediction for the players.


7.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

Your team is using a TensorFlow CNN model pretrained on ImageNet for an image classification prediction challenge on 10,000 images. You will use AI Platform to perform the model training. What TensorFlow distribution strategy and AI Platform training job configuration should you use to train the model and optimize for wall-clock time?

Default Strategy; Custom tier with a single master node and four v100 GPUs.

One Device Strategy; Custom tier with a single master node and four v100 GPUs.

One Device Strategy; Custom tier with a single master node and eight v100 GPUs.

MirroredStrategy; Custom tier with a single master node and four v100 GPUs.

8.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

Mobile and embedded devices have limited computational resources, so it is important to keep your application resource-efficient. Which of the following is a best practice that you can use to improve your TensorFlow Lite model's performance?

If your task requires high accuracy, it is better to use a smaller model.

For tasks that require less precision, then you may need a large and complex model.

Model optimization, such as quantization, should be used.

All of the above
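To illustrate why quantization shrinks a model, here is a hedged NumPy sketch of post-training affine quantization: float32 weights are mapped to uint8 via a scale and zero-point, cutting storage 4x at the cost of a small rounding error. (TensorFlow Lite automates this through `tf.lite.TFLiteConverter` with `converter.optimizations = [tf.lite.Optimize.DEFAULT]`; the helper functions below are illustrative, not the TFLite implementation.)

```python
import numpy as np

def quantize(weights):
    """Map float32 weights to uint8 with an affine scale/zero-point.
    Assumes the weights are not all identical (scale would be zero)."""
    lo, hi = weights.min(), weights.max()
    scale = (hi - lo) / 255.0
    zero_point = int(np.round(-lo / scale))
    q = np.clip(np.round(weights / scale + zero_point), 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float weights from the uint8 representation."""
    return (q.astype(np.float32) - zero_point) * scale

w = np.random.default_rng(0).normal(size=1000).astype(np.float32)
q, scale, zp = quantize(w)
w_hat = dequantize(q, scale, zp)
# q needs 1 byte per weight instead of 4; |w - w_hat| stays within ~scale/2.
```

The reconstruction error is bounded by half the quantization step, which is why quantization usually costs little accuracy while making the model markedly smaller and faster on mobile hardware.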