PMLE 1-50

Assessment: Quiz

Specialty: Professional Development

Difficulty: Hard

Created by Antonio Marca

50 questions

1.

MULTIPLE CHOICE QUESTION

2 mins • 1 pt

You are building an ML model to detect anomalies in real-time sensor data. You will use Pub/Sub to handle incoming requests. You want to store the results for analytics and visualization. How should you configure the pipeline?
1 = Dataflow, 2 = AI Platform, 3 = BigQuery
1 = DataProc, 2 = AutoML, 3 = Cloud Bigtable
1 = BigQuery, 2 = AutoML, 3 = Cloud Functions
1 = BigQuery, 2 = AI Platform, 3 = Cloud Storage

Answer explanation

To process the incoming streaming data you use Dataflow, and then you can use AI Platform (now Vertex AI) for training and serving the anomaly detection model. Since the results need to be stored for analytics and visualization, BigQuery is the recommended data warehouse: it handles this storage at scale and integrates directly with visualization tools.
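
For reference, a minimal Apache Beam (Dataflow) sketch of this pipeline; the topic, table, schema, and scoring step are hypothetical placeholders, with the scoring step standing in for a call to the model served on AI Platform:

```python
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Hypothetical resource names, for illustration only.
TOPIC = "projects/my-project/topics/sensor-readings"
TABLE = "my-project:analytics.sensor_anomalies"


def score(record):
    # Placeholder for a call to the model deployed on AI Platform / Vertex AI.
    record["anomaly_score"] = 0.0
    return record


options = PipelineOptions(streaming=True)
with beam.Pipeline(options=options) as pipeline:
    (
        pipeline
        | "ReadFromPubSub" >> beam.io.ReadFromPubSub(topic=TOPIC)
        | "ParseJson" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
        | "ScoreWithModel" >> beam.Map(score)
        | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
            TABLE,
            schema="sensor_id:STRING,value:FLOAT,anomaly_score:FLOAT",
        )
    )
```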

2.

MULTIPLE CHOICE QUESTION

2 mins • 1 pt

Your team needs to build a model that predicts whether images contain a driver's license, passport, or credit card. The data engineering team already built the pipeline and generated a dataset composed of 10,000 images with driver's licenses, 1,000 images with passports, and 1,000 images with credit cards. You now have to train a model with the following label map: ['drivers_license', 'passport', 'credit_card']. Which loss function should you use?
Categorical hinge
Binary cross-entropy
Categorical cross-entropy
Sparse categorical cross-entropy

Answer explanation

Use sparse categorical cross-entropy when your classes are mutually exclusive (each sample belongs to exactly one class) and the labels are supplied as integer indices. Use categorical cross-entropy when the labels are one-hot encoded, which also covers soft-probability labels (like [0.5, 0.3, 0.2]) or samples with multiple classes.
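
As a quick illustration (a sketch, not part of the question itself): both losses agree when the integer labels match the one-hot labels; the difference is only in how the labels are encoded.

```python
import numpy as np
import tensorflow as tf

# Integer labels (0=drivers_license, 1=passport, 2=credit_card)
# pair with sparse categorical cross-entropy.
y_sparse = np.array([0, 1, 2])
# One-hot labels pair with (non-sparse) categorical cross-entropy.
y_onehot = tf.one_hot(y_sparse, depth=3)

logits = tf.constant([[2.0, 0.1, 0.1],
                      [0.1, 2.0, 0.1],
                      [0.1, 0.1, 2.0]])

sparse_loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
dense_loss = tf.keras.losses.CategoricalCrossentropy(from_logits=True)

# Both print the same value because the labels encode the same classes.
print(sparse_loss(y_sparse, logits).numpy())
print(dense_loss(y_onehot, logits).numpy())
```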

3.

MULTIPLE CHOICE QUESTION

2 mins • 1 pt

You are an ML engineer at a manufacturing company. You need to build a model that identifies defects in products based on images of the product taken at the end of the assembly line. You want your model to preprocess the images with lower computation to quickly extract features of defects in products. Which approach should you use to build the model?
Reinforcement Learning
Recommender system
Recurrent Neural Network (RNN)
Convolutional Neural Network (CNN)

Answer explanation

For image problems, a CNN is the standard choice: convolutional layers extract local features of defects with relatively low computation. The other options are not suited to image data; an RNN is sequential, so it is used for time series or, as an LSTM, for text classification.
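
A minimal Keras sketch of such a defect classifier; the input size and the binary defect/no-defect output are assumptions:

```python
import tensorflow as tf

# Assumed input: 128x128 RGB product images; output: defect probability.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 3)),
    # Convolution + pooling extract local features cheaply via weight sharing.
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```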

4.

MULTIPLE CHOICE QUESTION

2 mins • 1 pt

You are developing an ML model intended to classify whether X-ray images indicate bone fracture risk. You have trained a ResNet architecture on Vertex AI using a TPU as an accelerator, however you are unsatisfied with the training time and memory usage. You want to quickly iterate your training code but make minimal changes to the code. You also want to minimize impact on the model’s accuracy. What should you do?
Reduce the number of layers in the model architecture
Reduce the global batch size from 1024 to 256
Reduce the dimension of the images used in the model
Configure your model to use bfloat16 instead of float32

Answer explanation

This offers a good balance between speed, memory usage, and minimal code changes. bfloat16 uses 16 bits per value compared to 32 bits for float32, which can significantly reduce memory usage while maintaining similar accuracy in many machine learning models, especially for image recognition tasks. It is a quick change with minimal impact on the code and potentially large gains in training speed.
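
With the Keras mixed-precision API the switch can be a single line; a sketch, assuming a standard Keras training script:

```python
import tensorflow as tf

# One-line change: compute in bfloat16 on TPU, keep variables in float32.
tf.keras.mixed_precision.set_global_policy("mixed_bfloat16")

# The rest of the training code is unchanged; layers now run their
# computations in bfloat16 while weights stay in float32.
model = tf.keras.applications.ResNet50(weights=None, classes=2)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```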

5.

MULTIPLE CHOICE QUESTION

2 mins • 1 pt

You have successfully deployed to production a large and complex TensorFlow model trained on tabular data. You want to predict the lifetime value (LTV) field for each subscription stored in the BigQuery table named subscription.subscriptionPurchase in the project named my-fortune500-company-project. You have organized all your training code, from preprocessing data from the BigQuery table up to deploying the validated model to the Vertex AI endpoint, into a TensorFlow Extended (TFX) pipeline. You want to prevent prediction drift, i.e., a situation in which a feature's data distribution in production changes significantly over time. What should you do?
Implement continuous retraining of the model daily using Vertex AI Pipelines.
Add a model monitoring job where 10% of incoming predictions are sampled every 24 hours.
Add a model monitoring job where 90% of incoming predictions are sampled every 24 hours.
Add a model monitoring job where 10% of incoming predictions are sampled every hour.

Answer explanation

Subscription LTV data doesn't change rapidly, so hourly checks are unnecessary. Sampling 10% of incoming predictions every 24 hours is sufficient: it detects drift while minimizing cost. Hourly monitoring increases expenses without significant added value for slow-changing data, and sampling 90% adds cost without improving detection.
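
A sketch of such a monitoring job using the Vertex AI SDK's model_monitoring helpers; the endpoint ID, alert email, feature name, and drift threshold are hypothetical:

```python
from google.cloud import aiplatform
from google.cloud.aiplatform import model_monitoring

aiplatform.init(project="my-fortune500-company-project", location="us-central1")

# Hypothetical endpoint ID of the deployed LTV model.
endpoint = aiplatform.Endpoint("1234567890")

job = aiplatform.ModelDeploymentMonitoringJob.create(
    display_name="ltv-drift-monitoring",
    endpoint=endpoint,
    # Sample 10% of incoming predictions...
    logging_sampling_strategy=model_monitoring.RandomSampleConfig(sample_rate=0.1),
    # ...and run the drift analysis every 24 hours.
    schedule_config=model_monitoring.ScheduleConfig(monitor_interval=24),
    objective_configs=model_monitoring.ObjectiveConfig(
        drift_detection_config=model_monitoring.DriftDetectionConfig(
            drift_thresholds={"subscription_plan": 0.05}  # hypothetical feature
        )
    ),
    alert_config=model_monitoring.EmailAlertConfig(
        user_emails=["ml-team@example.com"]  # hypothetical address
    ),
)
```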

6.

MULTIPLE CHOICE QUESTION

2 mins • 1 pt

You recently developed a deep learning model using Keras, and now you are experimenting with different training strategies. First, you trained the model using a single GPU, but the training process was too slow. Next, you distributed the training across 4 GPUs using tf.distribute.MirroredStrategy (with no other changes), but you did not observe a decrease in training time. What should you do?
Distribute the dataset with tf.distribute.Strategy.experimental_distribute_dataset
Create a custom training loop.
Use a TPU with tf.distribute.TPUStrategy.
Increase the batch size.

Answer explanation

When using distributed training with tf.distribute.MirroredStrategy, each GPU processes a slice of the batch. If you keep the batch size constant, each GPU receives a smaller effective batch, which might not fully utilize the computational power of each device. Increasing the batch size allows each GPU to process more data in parallel, which can lead to improved training speed and better resource utilization without modifying your training loop or switching strategies. Moreover, tf.distribute.MirroredStrategy automatically distributes the dataset across the available devices.
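
A sketch of the fix, assuming a per-GPU batch size of 64; the model and data are placeholders:

```python
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()

# Scale the global batch size with the number of replicas so each
# GPU keeps the same per-device batch it had in single-GPU training.
PER_REPLICA_BATCH = 64  # assumed single-GPU batch size
global_batch = PER_REPLICA_BATCH * strategy.num_replicas_in_sync

# Keras distributes the dataset automatically under the strategy scope.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(32,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

dataset = tf.data.Dataset.from_tensor_slices(
    (tf.random.normal([1024, 32]), tf.random.normal([1024, 1]))
).batch(global_batch)

model.fit(dataset, epochs=1)
```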

7.

MULTIPLE CHOICE QUESTION

2 mins • 1 pt

You work for a gaming company that has millions of customers around the world. All games offer a chat feature that allows players to communicate with each other in real time. Messages can be typed in more than 20 languages and are translated in real time using the Cloud Translation API. You have been asked to build an ML system to moderate the chat in real time while assuring that the performance is uniform across the various languages and without changing the serving infrastructure. You trained your first model using an in-house word2vec model for embedding the chat messages translated by the Cloud Translation API. However, the model has significant differences in performance across the different languages. How should you improve it?
Add a regularization term such as the Min-Diff algorithm to the loss function.
Train a classifier using the chat messages in their original language.
Replace the in-house word2vec with GPT-3 or T5.
Remove moderation for languages for which the false positive rate is too high.

Answer explanation

Since the performance of the model varies significantly across languages, the translation step has likely introduced noise into the chat messages, making it difficult for the model to generalize. One way to address this issue is to train the classifier on the chat messages in their original language.
