
Neural Networks Quiz

Quiz • Information Technology (IT) • University • Hard
Usman Ali
10 questions
1.
MULTIPLE CHOICE QUESTION
30 sec • 1 pt
A simple Multi-Layer Perceptron (MLP) without any non-linear activation functions is mathematically equivalent to:
A universal function approximator.
A single, wider linear layer.
A model incapable of learning.
A support vector machine with a linear kernel.
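The idea behind this question can be shown in a few lines. A minimal NumPy sketch (layer sizes and the random seed are arbitrary illustrative choices): stacking affine layers with no non-linearity in between collapses algebraically into one linear layer.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))          # batch of 4 inputs with 8 features

# Two stacked layers with no activation function in between.
W1, b1 = rng.normal(size=(8, 16)), rng.normal(size=16)
W2, b2 = rng.normal(size=(16, 3)), rng.normal(size=3)
deep_out = (x @ W1 + b1) @ W2 + b2

# The same mapping expressed as a single linear layer.
W = W1 @ W2
b = b1 @ W2 + b2
single_out = x @ W + b

print(np.allclose(deep_out, single_out))  # True: the stack is one linear map
```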
2.
MULTIPLE CHOICE QUESTION
30 sec • 1 pt
The primary motivation for using the ReLU activation function over Sigmoid in deep hidden layers is to:
Ensure the output is always positive.
Confine the activation values between 0 and 1.
Mitigate the vanishing gradient problem.
Make the network more computationally expensive but more accurate.
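A short sketch of why this is about gradients (NumPy, with a handful of sample pre-activation values chosen for illustration): the sigmoid derivative is at most 0.25 and decays quickly for large inputs, while ReLU passes a gradient of exactly 1 on its active side.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

z = np.array([-6.0, -2.0, 0.0, 2.0, 6.0])

# Sigmoid's derivative peaks at 0.25 and shrinks toward 0 for large |z|;
# multiplying many such factors through a deep network vanishes the gradient.
sigmoid_grad = sigmoid(z) * (1.0 - sigmoid(z))

# ReLU's derivative is 1 for any positive pre-activation, so the gradient
# signal on the active path is not attenuated by depth.
relu_grad = (z > 0).astype(float)

print(sigmoid_grad.round(4))  # [0.0025 0.105  0.25   0.105  0.0025]
print(relu_grad)              # [0. 0. 0. 1. 1.]
```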
3.
MULTIPLE CHOICE QUESTION
30 sec • 1 pt
The "dying ReLU" problem is characterized by:
A neuron's weights being updated such that its pre-activation input is consistently negative, causing its output and gradient to be zero.
The network becoming too deep, causing all ReLU activations to eventually become zero.
The learning rate being too low, preventing weights from being updated.
A neuron's weights exploding to infinity.
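A minimal sketch of the dying-ReLU situation described in the first option (NumPy; the weights, bias, and non-negative input distribution are contrived so that the pre-activation is always negative):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

# Weights pushed so far negative that the pre-activation is below zero
# for every input the neuron ever sees.
w, b = np.array([-3.0, -2.0]), -5.0
X = np.abs(np.random.default_rng(1).normal(size=(100, 2)))  # non-negative inputs

pre_activation = X @ w + b
output = relu(pre_activation)
grad_wrt_pre = (pre_activation > 0).astype(float)

print(output.max())        # 0.0 -> the neuron never fires
print(grad_wrt_pre.sum())  # 0.0 -> zero gradient, so the weights never recover
```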
4.
MULTIPLE CHOICE QUESTION
30 sec • 1 pt
Why is breaking symmetry by initializing weights randomly (e.g., using Xavier or He initialization) crucial for training?
It guarantees faster convergence to the global minimum.
It prevents all neurons in a layer from learning the same features, as they would with zero initialization.
It acts as a form of L2 regularization.
It ensures the initial loss of the network is exactly 1.0.
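For reference, a small sketch of the two named schemes next to zero initialization (NumPy; the fan-in/fan-out sizes are arbitrary). With zeros, every neuron in the layer computes the same function and receives the same gradient, so they never differentiate; the random schemes break that symmetry while controlling activation scale.

```python
import numpy as np

rng = np.random.default_rng(0)
fan_in, fan_out = 256, 128

# Zero initialization: all neurons start identical and stay identical.
W_zero = np.zeros((fan_in, fan_out))

# He initialization (suited to ReLU): variance 2 / fan_in.
W_he = rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_in, fan_out))

# Xavier/Glorot initialization (suited to tanh/sigmoid): variance 2 / (fan_in + fan_out).
W_xavier = rng.normal(0.0, np.sqrt(2.0 / (fan_in + fan_out)), size=(fan_in, fan_out))

print(W_he.std().round(4), W_xavier.std().round(4))  # close to the target std devs
```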
5.
MULTIPLE CHOICE QUESTION
30 sec • 1 pt
What is the primary purpose of the bias term in a neuron?
To scale the output of the activation function.
To act as a learnable offset, allowing the activation function to be shifted left or right.
To prevent the weights from becoming zero during training.
To control the learning rate for that specific neuron.
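A small sketch of the offset interpretation (NumPy, sigmoid activation and a bias of 2.0 chosen purely for illustration): the bias shifts where the activation "turns on", making the threshold learnable instead of fixed at the origin.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.linspace(-4, 4, 9)
w = 1.0

# Without a bias the sigmoid crosses 0.5 at x = 0; with bias b the
# crossover moves to x = -b/w, i.e. the curve is shifted left or right.
no_bias   = sigmoid(w * x)
with_bias = sigmoid(w * x + 2.0)   # crossover shifted to x = -2

print(no_bias.round(2))
print(with_bias.round(2))
```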
6.
MULTIPLE CHOICE QUESTION
30 sec • 1 pt
The Universal Approximation Theorem suggests that:
Any neural network can solve any problem.
A single-layer perceptron can approximate any linear function.
A feed-forward network with one hidden layer and a non-linear activation can approximate any continuous function to arbitrary precision.
Deeper networks are always better than wider networks.
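A toy illustration of the one-hidden-layer idea, not a proof (NumPy; the target function sin(3x), the tanh units with random hidden weights, and the least-squares fit of only the output layer are all arbitrary choices for the sketch): as the hidden layer widens, the fit to a continuous target typically improves.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
target = np.sin(3 * x).ravel()                 # some continuous target function

# One hidden tanh layer with fixed random weights; only the linear output
# layer is fitted by least squares. Wider layers give a closer fit.
for hidden in (5, 50, 500):
    W, b = rng.normal(size=(1, hidden)), rng.normal(size=hidden)
    H = np.tanh(x @ W + b)                     # hidden activations
    coef, *_ = np.linalg.lstsq(H, target, rcond=None)
    max_err = np.abs(H @ coef - target).max()
    print(hidden, round(max_err, 4))           # error typically shrinks with width
```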
7.
MULTIPLE CHOICE QUESTION
30 sec • 1 pt
A model is exhibiting high variance and low bias. This is a classic case of:
Underfitting, where the model is too simple for the data.
Overfitting, where the model has learned the training data too well, including its noise.
A well-generalized model.
A model trained with an incorrect loss function.
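The high-variance/low-bias pattern can be reproduced with a classic toy experiment (NumPy polynomial fitting; the noisy sin target, sample sizes, and polynomial degrees are illustrative choices): a very flexible model drives training error toward zero while test error grows.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    x = rng.uniform(-1, 1, n)
    return x, np.sin(2 * x) + rng.normal(0, 0.2, n)   # noisy samples of a smooth curve

x_train, y_train = make_data(15)
x_test,  y_test  = make_data(200)

# A degree-14 polynomial on 15 points can memorize the noise: training error
# collapses while test error blows up (high variance, low bias). Low degrees
# show the opposite, underfitting pattern (high bias).
for degree in (1, 3, 14):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err  = np.mean((np.polyval(coeffs, x_test)  - y_test) ** 2)
    print(degree, round(train_err, 4), round(test_err, 4))
```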