Neural Networks Quiz

University • 25 Qs

Similar activities

Security Fundamental • University • 20 Qs

ITE 292 Long Quiz • University • 26 Qs

Networking Quiz • University • 26 Qs

Informatik und Trends Quiz VIII • University • 20 Qs

CA Internet • University • 20 Qs

YEAR 7 MID-TERM ASSESSMENT • 7th Grade - University • 20 Qs

Cybersecurity • University • 20 Qs

Neural Networks Quiz

Assessment • Quiz • Information Technology (IT) • University • Medium

Created by Usman Ali

25 questions

1.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

A simple Multi-Layer Perceptron (MLP) without any non-linear activation functions is mathematically equivalent to:

A universal function approximator.

A single, wider linear layer.

A model incapable of learning.

A support vector machine with a linear kernel.
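
Why the collapse happens: the composition of linear maps is itself a linear map, so stacking activation-free layers adds no expressive power. A minimal NumPy sketch (biases omitted; the layer sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4)        # input vector
W1 = rng.normal(size=(8, 4))  # first layer weights
W2 = rng.normal(size=(3, 8))  # second layer weights

# Two stacked linear layers with no activation in between...
deep = W2 @ (W1 @ x)

# ...collapse to a single linear layer whose weight matrix is W2 @ W1.
shallow = (W2 @ W1) @ x

print(np.allclose(deep, shallow))  # True
```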

2.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

The primary motivation for using the ReLU activation function over Sigmoid in deep hidden layers is to:

Ensure the output is always positive.

Confine the activation values between 0 and 1.

Mitigate the vanishing gradient problem.

Make the network more computationally expensive but more accurate.
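
The reasoning: the sigmoid's derivative never exceeds 0.25, so each sigmoid layer can shrink the backpropagated gradient by a factor of four or more, while ReLU's derivative is exactly 1 wherever the unit is active. A small sketch of the compounding effect over 20 layers:

```python
import numpy as np

def sigmoid_grad(z):
    s = 1.0 / (1.0 + np.exp(-z))
    return s * (1.0 - s)           # peaks at 0.25 when z == 0

def relu_grad(z):
    return float(z > 0)            # exactly 1 on the active region

depth = 20
print(sigmoid_grad(0.0) ** depth)  # 0.25**20 ~ 9.1e-13: gradient vanishes
print(relu_grad(1.0) ** depth)     # 1.0: gradient passes through intact
```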

3.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

The "dying ReLU" problem is characterized by:

A neuron's weights being updated such that its pre-activation input is consistently negative, causing its output and gradient to be zero.

The network becoming too deep, causing all ReLU activations to eventually become zero.

The learning rate being too low, preventing weights from being updated.

A neuron's weights exploding to infinity.
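
A toy demonstration of a dead unit, with weights picked by hand (hypothetical values, not taken from any trained model) so the pre-activation is negative for every input:

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.abs(rng.normal(size=(1000, 2)))  # non-negative inputs

# Negative weights and bias force the pre-activation below zero everywhere.
w, b = np.array([-2.0, -3.0]), -1.0

pre = X @ w + b                  # always negative for this data
out = np.maximum(0.0, pre)       # ReLU output: 0 for every example
grad = (pre > 0).astype(float)   # ReLU gradient: 0 for every example

print(out.max(), grad.max())     # 0.0 0.0 -> no signal, no updates: "dead"
```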

4.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

Why is breaking symmetry by initializing weights randomly (e.g., using Xavier or He initialization) crucial for training?

It guarantees faster convergence to the global minimum.

It prevents all neurons in a layer from learning the same features, as they would with zero initialization.

It acts as a form of L2 regularization.

It ensures the initial loss of the network is exactly 1.0.
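
To see the symmetry problem concretely: with zero initialization, every neuron in a layer computes the same output and receives the same gradient, so no amount of training can make them differ. A sketch contrasting zero initialization with He initialization (random normal weights with variance 2/fan_in, commonly paired with ReLU):

```python
import numpy as np

rng = np.random.default_rng(0)
fan_in, fan_out = 64, 32
x = rng.normal(size=fan_in)

# Zero init: all neurons are identical and stay identical under training.
W_zero = np.zeros((fan_out, fan_in))
print(np.unique(W_zero @ x))   # [0.] -- every neuron outputs the same value

# He init: randomness breaks the symmetry between neurons.
W_he = rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_out, fan_in))
print((W_he @ x)[:3])          # distinct pre-activations per neuron
```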

5.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

What is the primary purpose of the bias term in a neuron?

To scale the output of the activation function.

To act as a learnable offset, allowing the activation function to be shifted left or right.

To prevent the weights from becoming zero during training.

To control the learning rate for that specific neuron.
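
A quick illustration: the bias moves the point at which a unit switches on, since relu(w*x + b) activates at x = -b/w rather than at x = 0:

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

x = np.linspace(-3, 3, 7)    # [-3, -2, -1, 0, 1, 2, 3]

# No bias: the neuron turns on at x = 0.
print(relu(1.0 * x + 0.0))   # [0. 0. 0. 0. 1. 2. 3.]

# Bias b = -2 shifts the threshold to x = 2.
print(relu(1.0 * x - 2.0))   # [0. 0. 0. 0. 0. 0. 1.]
```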

6.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

The Universal Approximation Theorem suggests that:

Any neural network can solve any problem.

A single-layer perceptron can approximate any linear function.

A feed-forward network with one hidden layer and a non-linear activation can approximate any continuous function to arbitrary precision.

Deeper networks are always better than wider networks.
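
An empirical sketch of the theorem's flavor (an illustration, not a proof): a single hidden layer of random ReLU features, with only the output weights fit by least squares, already approximates sin(x) closely, and the error shrinks as the layer widens:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200)[:, None]
y = np.sin(x).ravel()                   # target continuous function

# One hidden layer: 100 ReLU units with fixed random weights and biases.
W = rng.normal(size=(1, 100))
b = rng.uniform(-np.pi, np.pi, size=100)
H = np.maximum(0.0, x @ W + b)          # hidden activations, shape (200, 100)

# Fit only the output layer (least squares here, instead of gradient descent).
coef, *_ = np.linalg.lstsq(H, y, rcond=None)
print(f"max approximation error: {np.abs(H @ coef - y).max():.4f}")
```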

7.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

A model is exhibiting high variance and low bias. This is a classic case of:

Underfitting, where the model is too simple for the data.

Overfitting, where the model has learned the training data too well, including its noise.

A well-generalized model.

A model trained with an incorrect loss function.
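
A classic way to reproduce this regime: fit a high-degree polynomial to a handful of noisy points. Training error collapses toward zero while test error stays large (the exact numbers depend on the random seed; this is an illustrative sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda t: np.sin(3 * t)                      # true signal
x_train = rng.uniform(-1, 1, 15)
x_test = rng.uniform(-1, 1, 15)
y_train = f(x_train) + rng.normal(0, 0.2, 15)    # noisy labels
y_test = f(x_test) + rng.normal(0, 0.2, 15)

# Degree-12 polynomial on 15 points: low bias, high variance.
coeffs = np.polyfit(x_train, y_train, deg=12)
train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)

print(f"train MSE: {train_mse:.2e}")  # near zero: memorizes training noise
print(f"test  MSE: {test_mse:.2e}")   # typically far larger: poor generalization
```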
