ATI quiz

Authored by Tuấn Nguyễn

Computers

University

100 questions

1.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

Which of the following activation functions can lead to vanishing gradients?

ReLU

Tanh

Leaky ReLU

None of the above
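
Background for this question: tanh saturates for large |x|, so its derivative approaches zero, while ReLU's derivative stays at 1 for positive inputs. A minimal NumPy sketch of the contrast (the input value below is illustrative):

```python
# Compare activation gradients at a large input to see why saturating
# activations such as tanh can lead to vanishing gradients.
import numpy as np

def tanh_grad(x):
    # d/dx tanh(x) = 1 - tanh(x)^2, which shrinks toward 0 as |x| grows
    return 1.0 - np.tanh(x) ** 2

def relu_grad(x):
    # d/dx ReLU(x) = 1 for x > 0, so the gradient does not shrink
    return 1.0 if x > 0 else 0.0

x = 5.0
print(f"tanh gradient at x={x}: {tanh_grad(x):.6f}")  # ~0.000182
print(f"ReLU gradient at x={x}: {relu_grad(x):.1f}")  # 1.0
```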

2.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

Which of the following is true about Batchnorm?

Batchnorm is another way of performing dropout

Batchnorm makes training faster

In Batchnorm, the mean is computed over the features

Batchnorm is a non-linear transformation to center the dataset around the origin
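
Background for this question: batch normalization computes the mean and variance across the batch dimension for each feature. A minimal NumPy sketch of that normalization, omitting the learnable scale and shift parameters:

```python
# Manual batch normalization over a (batch, features) array: statistics are
# computed across the batch dimension (axis 0), separately for each feature.
import numpy as np

def batch_norm(x, eps=1e-5):
    # x has shape (batch_size, num_features)
    mean = x.mean(axis=0, keepdims=True)
    var = x.var(axis=0, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

x = np.random.randn(8, 4) * 3.0 + 10.0  # batch of 8 examples, 4 features
y = batch_norm(x)
print(y.mean(axis=0))  # ~0 for each feature
print(y.std(axis=0))   # ~1 for each feature
```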

3.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

When should multi-task learning be used?

When your problem involves more than one class label

When two tasks have the same dataset

When you have a small amount of data for a particular task that would benefit from the large dataset of another task

When the tasks have datasets of different formats (text and images).
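
Background for this question: the common multi-task setup is hard parameter sharing, where a shared encoder feeds task-specific heads, so the task with little data benefits from representations learned on the larger dataset. A minimal PyTorch sketch (layer sizes and class counts are illustrative assumptions):

```python
# A shared encoder with two task-specific heads (hard parameter sharing).
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self, in_dim=32, hidden=64, n_classes_a=10, n_classes_b=3):
        super().__init__()
        # Shared layers, updated by both tasks' data
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        # Task-specific output heads
        self.head_a = nn.Linear(hidden, n_classes_a)  # large-data task
        self.head_b = nn.Linear(hidden, n_classes_b)  # small-data task

    def forward(self, x):
        h = self.encoder(x)
        return self.head_a(h), self.head_b(h)

model = MultiTaskNet()
logits_a, logits_b = model(torch.randn(5, 32))
print(logits_a.shape, logits_b.shape)  # torch.Size([5, 10]) torch.Size([5, 3])
```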

4.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

What is Error Analysis?

The process of analyzing the performance of a model through metrics such as precision, recall or F1-score.

The process of scanning mis-classified examples to identify weaknesses of a model

The process of tuning hyperparameters to reduce the loss function during training

The process of identifying which parts of your model contributed to the error
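
Background for this question: in the "scanning misclassified examples" sense, error analysis starts by collecting the examples the model got wrong and inspecting them by hand. A minimal sketch (function and variable names are illustrative):

```python
# Gather a model's misclassified examples for manual inspection.
import numpy as np

def collect_errors(inputs, labels, predictions):
    # Return the examples the model got wrong, with true and predicted labels
    wrong = np.flatnonzero(predictions != labels)
    return [(inputs[i], labels[i], predictions[i]) for i in wrong]

inputs      = np.array(["a", "b", "c", "d", "e"])
labels      = np.array([0, 1, 1, 0, 1])
predictions = np.array([0, 0, 1, 1, 1])
for x, y, y_hat in collect_errors(inputs, labels, predictions):
    print(f"input={x!r}  true={y}  predicted={y_hat}")
```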

5.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

Which of the following is a non-iterative method to generate adversarial examples?

Non-Saturating Cost Method

Input Optimization Method

Adversarial Training

Logit Pairing

Fast Gradient Sign Method
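
Background for this question: the Fast Gradient Sign Method is non-iterative because it takes a single step along the sign of the gradient of the loss with respect to the input. A minimal PyTorch sketch, assuming a toy classifier (the model, shapes, and epsilon are illustrative):

```python
# Fast Gradient Sign Method: one perturbation step of size epsilon
# in the direction of the sign of the input gradient.
import torch
import torch.nn as nn

def fgsm(model, x, y, epsilon=0.03):
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # Single (non-iterative) step along the sign of the input gradient
    return (x + epsilon * x.grad.sign()).detach()

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier
x = torch.rand(1, 1, 28, 28)
y = torch.tensor([3])
x_adv = fgsm(model, x, y)
print((x_adv - x).abs().max())  # perturbation bounded by epsilon
```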

6.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

Which, if any, of the following propositions is true about fully-connected neural networks (FCNN)?

An FCNN with only linear activations is a linear network.

In an FCNN, there are connections between neurons of the same layer.

In a FCNN, the most common weight initialization scheme is the Zero initialization, because it leads to faster and more robust training

None of the above
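
Background for this question: stacking linear layers without a nonlinearity collapses to a single affine map, which a quick numerical check can confirm (the shapes and random values below are illustrative):

```python
# Two stacked linear layers with no nonlinearity equal one collapsed layer.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((4, 3)), rng.standard_normal(4)
W2, b2 = rng.standard_normal((2, 4)), rng.standard_normal(2)

x = rng.standard_normal(3)
two_layers = W2 @ (W1 @ x + b1) + b2      # "deep" network, linear activations only
W, b = W2 @ W1, W2 @ b1 + b2              # equivalent single layer
one_layer = W @ x + b
print(np.allclose(two_layers, one_layer))  # True
```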

7.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

How many layers are deep learning algorithms constructed from?

2

3

4

5
