Week2_S2

University

10 Qs


Assessment • Quiz

Information Technology (IT) • University

Practice Problem • Easy

Created by Samiratu Ntohsi

10 questions

1.

MULTIPLE CHOICE QUESTION

45 sec • 1 pt

Why is generalization more important than training accuracy in neural networks?

Generalization proves convergence.

Training accuracy ensures bias reduction.

It prevents vanishing gradients.

It reflects the ability to predict unseen data, the true goal.

2.

FILL IN THE BLANK QUESTION

1 min • 1 pt

A multilayer network without nonlinearities collapses into a ______ model.
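A quick numerical check of the idea behind this question (an illustrative sketch; the shapes and random weights are invented for the example): two stacked linear layers with no activation in between compute exactly the same function as a single linear layer whose weight matrix is the product of the two.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))   # batch of 4 inputs with 3 features
W1 = rng.normal(size=(3, 5))  # first linear layer
W2 = rng.normal(size=(5, 2))  # second linear layer

# Two linear layers with no nonlinearity between them...
two_layer = (x @ W1) @ W2

# ...collapse into one linear layer with weights W1 @ W2.
one_layer = x @ (W1 @ W2)

print(np.allclose(two_layer, one_layer))  # True
```

Because matrix multiplication is associative, no amount of extra linear layers adds expressive power; the nonlinearity is what breaks this collapse.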

3.

MULTIPLE CHOICE QUESTION

45 sec • 1 pt

When using ReLU in hidden layers instead of sigmoid, which benefit typically emerges?

Guarantees linear separability in the input space.

Prevents exploding gradients.

Ensures all neurons remain active.

Reduces vanishing gradient problems.
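The gradient comparison behind this question can be seen directly (a small sketch; the sample points are arbitrary): the sigmoid derivative is at most 0.25 and shrinks toward zero for large |z|, while the ReLU derivative is exactly 1 wherever the unit is active.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

z = np.array([-6.0, -2.0, 0.5, 2.0, 6.0])

# Sigmoid derivative: s(z) * (1 - s(z)); peaks at 0.25, vanishes for large |z|.
sig_grad = sigmoid(z) * (1.0 - sigmoid(z))

# ReLU derivative: 1 where the unit is active (z > 0), else 0.
relu_grad = (z > 0).astype(float)

print(sig_grad.round(4))  # tiny values at the extremes
print(relu_grad)          # [0. 0. 1. 1. 1.]
```

Multiplying many sigmoid derivatives (each ≤ 0.25) across deep layers shrinks gradients geometrically; ReLU's unit derivative avoids that shrinkage for active units.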

4.

MULTIPLE CHOICE QUESTION

45 sec • 1 pt

Which of the following statements about gradient descent is correct?

It always finds the global minimum.

It updates weights by moving against the gradient of the loss function.

It requires linear separability.

It is identical to the perceptron update rule.
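The correct update rule can be demonstrated on a one-dimensional toy loss (a minimal sketch; the function, learning rate, and step count are chosen only for illustration): moving against the gradient drives the parameter toward the minimiser.

```python
# Minimise f(w) = (w - 3)^2 with plain gradient descent.
# The gradient is f'(w) = 2 * (w - 3); the update moves AGAINST it.
w = 0.0
lr = 0.1
for _ in range(100):
    grad = 2.0 * (w - 3.0)
    w -= lr * grad        # w := w - lr * df/dw

print(round(w, 4))  # 3.0 — close to the true minimiser
```

Note this only guarantees descent toward a local minimum for suitable step sizes; it does not, in general, find the global minimum of a non-convex loss.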

5.

MULTIPLE CHOICE QUESTION

45 sec • 1 pt

Why is nonlinearity essential in deep neural networks?

It guarantees zero error.

It reduces training time.

It allows the composition of layers to model complex, non-linear boundaries.

It simplifies the optimization problem.

6.

FILL IN THE BLANK QUESTION

1 min • 1 pt

Backpropagation uses the ______ rule to propagate gradients backward through layers.
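The mechanism this question points at can be worked through on a tiny scalar network (an illustrative sketch; the values and the tanh activation are invented for the example): the derivative with respect to an early weight is the product of the local derivatives along the path, verified here against a finite difference.

```python
import numpy as np

# Tiny two-layer scalar network: y = w2 * tanh(w1 * x).
x, w1, w2 = 0.5, 1.2, -0.7

# Forward pass, caching intermediates.
a = w1 * x
h = np.tanh(a)
y = w2 * h

# Backward pass via the chain rule: dy/dw1 = dy/dh * dh/da * da/dw1.
dy_dh = w2
dh_da = 1.0 - np.tanh(a) ** 2
da_dw1 = x
dy_dw1 = dy_dh * dh_da * da_dw1

# Numerical check with a finite difference.
eps = 1e-6
y_plus = w2 * np.tanh((w1 + eps) * x)
print(abs(dy_dw1 - (y_plus - y) / eps) < 1e-4)  # True
```

Backpropagation is exactly this bookkeeping, applied layer by layer over vectors and matrices instead of scalars.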

7.

MULTIPLE CHOICE QUESTION

45 sec • 1 pt

In a neural network, forward propagation refers to:

Feeding inputs through the network to generate predictions

Updating weights using gradient descent

Reversing gradients to find errors

Adjusting biases to prevent saturation
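The first option describes what a forward pass actually does, which can be sketched in a few lines (all layer sizes and weights here are invented for the example; no training or weight updates occur):

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def forward(x, params):
    """Feed an input through the network to produce a prediction."""
    W1, b1, W2, b2 = params
    h = relu(x @ W1 + b1)   # hidden layer activations
    return h @ W2 + b2      # output layer; weights are only read, never updated

rng = np.random.default_rng(0)
params = (rng.normal(size=(3, 4)), np.zeros(4),
          rng.normal(size=(4, 2)), np.zeros(2))

x = rng.normal(size=(1, 3))
print(forward(x, params).shape)  # (1, 2)
```

Weight updates belong to the separate backward/optimisation step; forward propagation is purely input-to-prediction.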
