
Neural Networks Quiz

Quiz • Information Technology (IT) • University • Medium
Usman Ali
25 questions
1.
MULTIPLE CHOICE QUESTION
30 sec • 1 pt
A simple Multi-Layer Perceptron (MLP) without any non-linear activation functions is mathematically equivalent to:
A universal function approximator.
A single linear layer.
A model incapable of learning.
A support vector machine with a linear kernel.
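
A minimal NumPy sketch of the equivalence in question 1: with no non-linearity between them, two stacked linear layers multiply out into a single weight matrix and bias. The shapes and random values below are purely illustrative, not part of the quiz.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4,))                         # input vector
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=(8,))
W2, b2 = rng.normal(size=(3, 8)), rng.normal(size=(3,))

# Two-layer "MLP" with identity (i.e. no) activations
h = W1 @ x + b1
y_deep = W2 @ h + b2

# Equivalent single linear layer: W = W2 W1, b = W2 b1 + b2
W, b = W2 @ W1, W2 @ b1 + b2
y_flat = W @ x + b

print(np.allclose(y_deep, y_flat))  # True: the extra depth adds nothing
```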
2.
MULTIPLE CHOICE QUESTION
30 sec • 1 pt
The primary motivation for using the ReLU activation function over Sigmoid in deep hidden layers is to:
Ensure the output is always positive.
Confine the activation values between 0 and 1.
Mitigate the vanishing gradient problem.
Make the network more computationally expensive but more accurate.
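
A rough NumPy sketch of the contrast in question 2: backpropagation through a chain of layers multiplies local derivatives, and the sigmoid's derivative never exceeds 0.25, so the product shrinks geometrically with depth, while ReLU's derivative is exactly 1 for positive inputs. The depth and pre-activation value here are arbitrary choices.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_grad(z):
    s = sigmoid(z)
    return s * (1.0 - s)              # at most 0.25 (reached at z = 0)

def relu_grad(z):
    return float(z > 0)               # exactly 1 for positive inputs

z, depth = 0.5, 20                    # toy scalar chain, 20 layers deep
print("sigmoid gradient chain:", sigmoid_grad(z) ** depth)  # ~3e-13: vanished
print("relu gradient chain:   ", relu_grad(z) ** depth)     # 1.0: preserved
```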
3.
MULTIPLE CHOICE QUESTION
30 sec • 1 pt
The "dying ReLU" problem is characterized by:
A neuron's weights being updated such that its pre-activation input is consistently negative, causing its output and gradient to be zero.
The network becoming too deep, causing all ReLU activations to eventually become zero.
The learning rate being too low, preventing weights from being updated.
A neuron's weights exploding to infinity.
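
A toy NumPy illustration of the failure mode in question 3: once a ReLU unit's pre-activation is negative for every input, its output and its gradient are both zero, so gradient descent cannot revive it. The inputs and weights below are made up specifically to force that state.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(100, 3))   # non-negative inputs
w = np.array([-1.0, -2.0, -0.5])       # weights driven negative
b = -0.1

z = X @ w + b                          # pre-activations: all < 0
a = np.maximum(z, 0.0)                 # ReLU outputs: all zero
relu_grad = (z > 0).astype(float)      # ReLU derivative: all zero
dL_dw = X.T @ relu_grad                # any loss gradient is scaled by this

print("outputs all zero: ", np.all(a == 0))      # True
print("gradient all zero:", np.all(dL_dw == 0))  # True -> the unit is stuck
```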
4.
MULTIPLE CHOICE QUESTION
30 sec • 1 pt
Why is breaking symmetry by initializing weights randomly (e.g., using Xavier or He initialization) crucial for training?
It guarantees faster convergence to the global minimum.
It prevents all neurons in a layer from learning the same features, as they would with zero initialization.
It acts as a form of L2 regularization.
It ensures the initial loss of the network is exactly 1.0.
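
A minimal one-step sketch of the symmetry problem behind question 4: if every hidden unit starts with identical (zero) weights, each computes the same activation and receives the same gradient, so the weight rows remain clones after the update, whereas a scaled random draw (Xavier-style) breaks the tie. The tiny 4-3-1 tanh network and single data point are made up; the output weights are held fixed at ones to keep the demo to one step, but the conclusion is the same when they are trained too.

```python
import numpy as np

rng = np.random.default_rng(2)
x, y = rng.normal(size=(4,)), 1.0

def one_step(W1, w2, lr=0.1):
    """One gradient step on a 4-3-1 tanh net with squared-error loss."""
    h = np.tanh(W1 @ x)
    err = (w2 @ h) - y                          # d(loss)/d(output)
    dW1 = np.outer(err * w2 * (1 - h ** 2), x)  # per-row hidden gradients
    return W1 - lr * dW1

W_zero   = one_step(np.zeros((3, 4)), np.ones(3))
W_random = one_step(rng.normal(size=(3, 4)) / np.sqrt(4), np.ones(3))

print("zero-init rows still identical:", np.allclose(W_zero[0], W_zero[1]))     # True
print("random-init rows identical:    ", np.allclose(W_random[0], W_random[1])) # False
```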
5.
MULTIPLE CHOICE QUESTION
30 sec • 1 pt
What is the primary purpose of the bias term in a neuron?
To scale the output of the activation function.
To act as a learnable offset, allowing the activation function to be shifted left or right.
To prevent the weights from becoming zero during training.
To control the learning rate for that specific neuron.
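
A small sketch of question 5's point: in sigmoid(w·x + b), the bias slides the activation's transition point along the input axis (to x = -b/w) without changing its shape. The weight and bias values are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = 1.0
for b in (-2.0, 0.0, 2.0):
    # The neuron's output crosses 0.5 exactly where w*x + b == 0
    crossing = -b / w
    print(f"b={b:+.1f} -> output 0.5 at x={crossing:+.1f},",
          f"check: {sigmoid(w * crossing + b):.2f}")
```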
6.
MULTIPLE CHOICE QUESTION
30 sec • 1 pt
The Universal Approximation Theorem suggests that:
Any neural network can solve any problem.
A single-layer perceptron can approximate any linear function.
A feed-forward network with one hidden layer and a non-linear activation can approximate any continuous function on a compact domain to arbitrary precision.
Deeper networks are always better than wider networks.
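
A hedged sketch of the theorem's flavor for question 6: instead of training, we hand-pick weights for a single hidden layer of ReLU units so that the network reproduces the piecewise-linear interpolant of f(x) = sin(x) on [0, π], and the worst-case error shrinks as units are added. The function, knots, and construction are illustrative choices, not the theorem's actual statement or proof.

```python
import numpy as np

f = np.sin
x = np.linspace(0, np.pi, 1000)

for n_units in (4, 16, 64):
    knots = np.linspace(0, np.pi, n_units + 1)
    slopes = np.diff(f(knots)) / np.diff(knots)
    # One ReLU unit per knot: output = f(0) + sum_i c_i * relu(x - k_i),
    # where each c_i is the change in slope at knot k_i
    coeffs = np.diff(slopes, prepend=0.0)
    hidden = np.maximum(x[:, None] - knots[:-1][None, :], 0.0)
    approx = f(knots[0]) + hidden @ coeffs
    print(f"{n_units:3d} units -> max error {np.abs(approx - f(x)).max():.4f}")
```

Running this shows the maximum error dropping as the hidden layer widens, which is the theorem's qualitative promise: more units, better approximation.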
7.
MULTIPLE CHOICE QUESTION
30 sec • 1 pt
A model is exhibiting high variance and low bias. This is a classic case of:
Underfitting, where the model is too simple for the data.
Overfitting, where the model has learned the training data too well, including its noise.
A well-generalized model.
A model trained with an incorrect loss function.
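
A toy demonstration of question 7's diagnosis: on synthetic data whose true signal is linear, a high-degree polynomial (high variance, low bias) achieves a much lower training error than test error, while the simple line generalizes. The degrees, noise level, and sample sizes are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def make(n):
    x = rng.uniform(-1, 1, n)
    return x, 2 * x + rng.normal(0, 0.3, n)   # true signal is linear

x_tr, y_tr = make(20)
x_te, y_te = make(200)

for deg in (1, 10):
    coef = np.polyfit(x_tr, y_tr, deg)
    mse = lambda x, y: np.mean((np.polyval(coef, x) - y) ** 2)
    # The degree-10 fit chases the training noise: low train MSE, high test MSE
    print(f"degree {deg:2d}: train MSE {mse(x_tr, y_tr):.3f}, "
          f"test MSE {mse(x_te, y_te):.3f}")
```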