Neural Networks Quiz

University

10 Qs

Similar activities

Exploring Computer Network Models

10th Grade - University

13 Qs

Network Protocols Quiz

University

14 Qs

NISS-DC-Quiz

University

15 Qs

ICT and Multimedia Quiz

12th Grade - University

14 Qs

Network Topologies and Key Network Components

10th Grade - University

15 Qs

Neural Networks Quiz

Assessment

Quiz

Information Technology (IT)

University

Hard

Created by Usman Ali

10 questions

1.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

A simple Multi-Layer Perceptron (MLP) without any non-linear activation functions is mathematically equivalent to:

A universal function approximator.

A single equivalent linear layer.

A model incapable of learning.

A support vector machine with a linear kernel.
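The key fact behind this question: composing linear maps yields another linear map, so stacked affine layers without non-linearities collapse into one. A minimal NumPy sketch (shapes and seed are arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)

    # Two stacked linear layers: h = W1 x, then y = W2 h.
    W1 = rng.standard_normal((8, 4))   # first layer weights
    W2 = rng.standard_normal((3, 8))   # second layer weights
    x = rng.standard_normal(4)

    y_stacked = W2 @ (W1 @ x)          # forward pass through both layers
    y_single = (W2 @ W1) @ x           # one equivalent linear layer W = W2 W1

    print(np.allclose(y_stacked, y_single))  # True: the composition is linear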

2.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

The primary motivation for using the ReLU activation function over Sigmoid in deep hidden layers is to:

Ensure the output is always positive.

Confine the activation values between 0 and 1.

Mitigate the vanishing gradient problem.

Make the network more computationally expensive but more accurate.
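The gradient argument in brief: the sigmoid's derivative peaks at 0.25, so backpropagating through many sigmoid layers multiplies many factors no larger than 0.25, while ReLU's derivative is exactly 1 for any positive input. A small NumPy illustration (the depth is chosen arbitrarily):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def sigmoid_grad(z):
        s = sigmoid(z)
        return s * (1.0 - s)           # maximum value is 0.25, at z = 0

    def relu_grad(z):
        return np.where(z > 0, 1.0, 0.0)  # exactly 1 for positive inputs

    depth = 20
    print(sigmoid_grad(0.0) ** depth)  # 0.25**20 ~ 9.1e-13: gradient vanishes
    print(relu_grad(1.0) ** depth)     # 1.0: the gradient survives all layers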

3.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

The "dying ReLU" problem is characterized by:

A neuron's weights being updated such that its pre-activation input is consistently negative, causing its output and gradient to be zero.

The network becoming too deep, causing all ReLU activations to eventually become zero.

The learning rate being too low, preventing weights from being updated.

A neuron's weights exploding to infinity.
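A toy NumPy sketch of a dead neuron: once the pre-activation is negative for every input (forced here with a large negative bias), both the output and the local gradient are zero, so gradient descent can never revive it:

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.standard_normal((1000, 4))   # a batch of inputs
    w = rng.standard_normal(4) * 0.1
    b = -10.0                            # e.g. the result of one bad update

    pre = X @ w + b                      # pre-activation z = w.x + b
    out = np.maximum(pre, 0.0)           # ReLU output
    grad = np.where(pre > 0, 1.0, 0.0)   # dReLU/dz

    print(out.max(), grad.max())         # 0.0 0.0 -> no signal, no recovery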

4.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

Why is breaking symmetry by initializing weights randomly (e.g., using Xavier or He initialization) crucial for training?

It guarantees faster convergence to the global minimum.

It prevents all neurons in a layer from learning the same features, as they would with zero initialization.

It acts as a form of L2 regularization.

It ensures the initial loss of the network is exactly 1.0.
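A small NumPy sketch of the symmetry problem: with identical initial weights, every neuron in a layer computes the same activation and receives the same gradient update, so the neurons never differentiate (the tiny network and loss below are arbitrary):

    import numpy as np

    rng = np.random.default_rng(2)
    x = rng.standard_normal(3)

    # All-equal initialization: every hidden neuron computes the same function.
    W = np.full((4, 3), 0.5)            # 4 hidden neurons, identical weights
    v = np.full(4, 0.5)                 # identical output weights
    h = np.tanh(W @ x)                  # all four activations are equal
    dy = 1.0                            # pretend dLoss/dy = 1

    # Backprop: every row of W receives the exact same gradient, so after
    # any number of updates the neurons remain copies of each other.
    dz = v * dy * (1.0 - h ** 2)
    dW = np.outer(dz, x)
    print(np.allclose(dW, dW[0]))       # True: identical rows, identical updates

    # Random init (He-style: std = sqrt(2 / fan_in)) breaks the symmetry.
    W_rand = rng.standard_normal((4, 3)) * np.sqrt(2.0 / 3)
    print(np.allclose(W_rand, W_rand[0]))  # False: neurons start distinct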

5.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

What is the primary purpose of the bias term in a neuron?

To scale the output of the activation function.

To act as a learnable offset, allowing the activation function to be shifted left or right.

To prevent the weights from becoming zero during training.

To control the learning rate for that specific neuron.
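A quick NumPy illustration: in sigmoid(w*x + b) the output crosses 0.5 at x = -b/w, so the bias is exactly the learnable left/right shift of the activation (the values below are arbitrary):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Changing b slides the sigmoid along the x-axis: its midpoint sits
    # at x = -b/w, and the output there is always 0.5.
    w = 2.0
    for b in (-2.0, 0.0, 2.0):
        midpoint = -b / w
        print(b, midpoint, sigmoid(w * midpoint + b))  # last value: 0.5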

6.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

The Universal Approximation Theorem suggests that:

Any neural network can solve any problem.

A single-layer perceptron can approximate any linear function.

A feed-forward network with one hidden layer and a non-linear activation can approximate any continuous function to arbitrary precision.

Deeper networks are always better than wider networks.
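A rough NumPy sketch in the spirit of the theorem: one hidden tanh layer with a linear readout, fit here by least squares over random hidden features (a simplification, not a real training loop), approximates an arbitrary continuous target better as the layer widens:

    import numpy as np

    rng = np.random.default_rng(3)

    x = np.linspace(-np.pi, np.pi, 400)[:, None]
    target = np.sin(3 * x).ravel()          # an arbitrary continuous target

    for width in (5, 50, 500):
        W = rng.standard_normal((1, width)) * 3   # random hidden weights
        b = rng.standard_normal(width) * 3        # random hidden biases
        H = np.tanh(x @ W + b)                    # hidden activations
        coef, *_ = np.linalg.lstsq(H, target, rcond=None)  # linear readout
        err = np.max(np.abs(H @ coef - target))
        print(width, round(err, 4))         # the worst-case error shrinks with width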

7.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

A model is exhibiting high variance and low bias. This is a classic case of:

Underfitting, where the model is too simple for the data.

Overfitting, where the model has learned the training data too well, including its noise.

A well-generalized model.

A model trained with an incorrect loss function.
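A compact NumPy illustration of high variance: an over-flexible model (a degree-15 polynomial standing in for a large network) fits a small noisy training set almost perfectly yet typically does much worse on held-out data:

    import numpy as np

    rng = np.random.default_rng(4)

    def make_data(n):
        x = rng.uniform(-1, 1, n)
        return x, np.sin(2 * x) + rng.normal(0, 0.2, n)  # noisy ground truth

    x_train, y_train = make_data(20)
    x_test, y_test = make_data(200)

    coeffs = np.polyfit(x_train, y_train, deg=15)   # overparameterized fit
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(round(train_mse, 4), round(test_mse, 4))  # tiny train error, larger test error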
