DL_unit-2

University

14 Qs

Similar activities

Industry 4.0 Unit 1 • University • 10 Qs

Neural Networks Quiz • University • 10 Qs

Neuron Network • University • 14 Qs

Session 7 | U • University • 10 Qs

WS2324 S2 & S10 Formative Assessment • University • 15 Qs

Machine Learning • University • 15 Qs

Quiz#1 • University • 9 Qs

Intelligent System 2 - Prelim Exam • University • 18 Qs

DL_unit-2

Assessment • Quiz • Computers • University • Medium

Created by Ashu Abdul

14 questions

1.

OPEN ENDED QUESTION

20 sec • Ungraded

Complete Roll Number

2.

MULTIPLE CHOICE QUESTION

20 sec • 1 pt

What is the primary purpose of the Sigmoid activation function in artificial neural networks?

Introduce non-linearity

Enhance computational efficiency

Guarantee vanishing gradient

Eliminate overfitting
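
A minimal sketch, assuming NumPy, of the sigmoid function: it squashes any real input into the open interval (0, 1) along an S-shaped curve, which is how it introduces non-linearity (the sample inputs below are illustrative only):

    import numpy as np

    def sigmoid(x):
        # Squashes any real value into the open interval (0, 1).
        return 1.0 / (1.0 + np.exp(-x))

    x = np.array([-5.0, -1.0, 0.0, 1.0, 5.0])
    print(sigmoid(x))  # ~[0.0067, 0.2689, 0.5, 0.7311, 0.9933], an S-shaped, non-linear curve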

3.

MULTIPLE CHOICE QUESTION

20 sec • 1 pt

Which hyperbolic function is commonly used as an activation function in neural networks?

Sine

Cosine

Hyperbolic Tangent (tanh)

Hyperbolic Secant (sech)
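
A brief illustration, assuming NumPy, of the hyperbolic tangent: like sigmoid it is S-shaped, but its output lies in (-1, 1) and is centred on zero (sample inputs are illustrative):

    import numpy as np

    # np.tanh computes (e^x - e^-x) / (e^x + e^-x), the hyperbolic tangent.
    x = np.array([-2.0, 0.0, 2.0])
    print(np.tanh(x))  # ~[-0.964, 0.0, 0.964], zero-centred output in (-1, 1)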

4.

MULTIPLE CHOICE QUESTION

20 sec • 1 pt

What is the primary purpose of the Perceptron Training Rule in neural networks?

Adjust weights to minimize errors

Control learning rate

Prevent overfitting

Guarantee non-linearity
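
A sketch of a single application of the perceptron training rule, w_i <- w_i + eta * (t - o) * x_i, which nudges the weights so as to reduce the error on the current example; the learning rate and inputs below are illustrative assumptions:

    import numpy as np

    def perceptron_update(w, b, x, target, eta=0.1):
        # Predict with a step (threshold) activation.
        output = 1.0 if np.dot(w, x) + b > 0 else 0.0
        # Perceptron rule: move each weight in proportion to the error (target - output).
        error = target - output
        return w + eta * error * x, b + eta * error

    w, b = np.zeros(2), 0.0
    w, b = perceptron_update(w, b, x=np.array([1.0, 1.0]), target=1.0)
    print(w, b)  # [0.1 0.1] 0.1, weights moved toward the misclassified example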

5.

MULTIPLE CHOICE QUESTION

20 sec • 1 pt

Which activation function is more resilient to the vanishing gradient problem and is commonly used in the hidden layers of deep neural networks?

Sigmoid

Hyperbolic Tangent (tanh)

Rectified Linear Unit

Softmax
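
An illustrative comparison, assuming NumPy, of why ReLU resists the vanishing gradient problem better than sigmoid: the sigmoid derivative peaks at 0.25 and shrinks toward zero for large inputs, while the ReLU derivative stays at 1 for any positive input:

    import numpy as np

    def sigmoid_grad(x):
        s = 1.0 / (1.0 + np.exp(-x))
        return s * (1.0 - s)            # at most 0.25, and vanishes as |x| grows

    def relu_grad(x):
        return (x > 0).astype(float)    # exactly 1 for every positive input

    x = np.array([0.0, 2.0, 10.0])
    print(sigmoid_grad(x))  # ~[0.25, 0.105, 0.000045], shrinking toward zero
    print(relu_grad(x))     # [0. 1. 1.], no saturation for positive inputs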

6.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

What optimization algorithm is commonly used to train artificial neural networks, and how does it address the task of adjusting weights to minimize errors during training?

Adversarial training, by adjusting weights based on the gradient of the loss function.

Genetic algorithms, by evolving a population of networks over generations.

Gradient Descent, by iteratively updating weights in the direction of steepest descent of the loss function.

Reinforcement learning, by using reward signals to adjust weights.
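
A minimal gradient-descent sketch for a one-parameter model y = w * x with a mean-squared-error loss; the data, learning rate, and step count are illustrative assumptions:

    import numpy as np

    x = np.array([1.0, 2.0, 3.0])
    y = np.array([2.0, 4.0, 6.0])   # generated with the true weight w = 2

    w, lr = 0.0, 0.05
    for _ in range(100):
        pred = w * x
        grad = np.mean(2.0 * (pred - y) * x)  # gradient of the MSE loss with respect to w
        w -= lr * grad                        # step in the direction of steepest descent
    print(w)  # converges to approximately 2.0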

7.

MULTIPLE CHOICE QUESTION

30 sec • 2 pts

Explain the concept of the "Perceptron Training Rule" and its significance in adjusting weights during the training of a single-layer perceptron.

The rule for adjusting weights based on the magnitude of the input features.

A strategy for handling imbalanced datasets during training.

The iterative process of updating weights to minimize errors and improve model accuracy.

A method for introducing randomness in weight adjustments to prevent overfitting.
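
Extending the single update sketched above, an illustrative loop that applies the perceptron rule repeatedly over a linearly separable AND-gate dataset until the errors disappear; the epoch count and learning rate are assumptions:

    import numpy as np

    # AND-gate data: the target is 1 only when both inputs are 1.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    t = np.array([0.0, 0.0, 0.0, 1.0])

    w, b, eta = np.zeros(2), 0.0, 0.1
    for _ in range(10):                     # a few passes suffice on this toy dataset
        for xi, ti in zip(X, t):
            o = 1.0 if np.dot(w, xi) + b > 0 else 0.0
            w += eta * (ti - o) * xi        # w_i <- w_i + eta * (t - o) * x_i
            b += eta * (ti - o)

    print([1.0 if np.dot(w, xi) + b > 0 else 0.0 for xi in X])  # [0.0, 0.0, 0.0, 1.0]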
