Data Science and Machine Learning (Theory and Projects) A to Z - Vanishing Gradients in RNN: GRU

Assessment

Interactive Video

Information Technology (IT), Architecture, Religious Studies, Other, Social Studies

University

Hard

Created by

Wayground Content

10 questions

1.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

What is a primary challenge in recurrent neural networks that affects their performance?

Overfitting

High computational cost

Data scarcity

Vanishing gradient
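The vanishing-gradient problem behind this question can be sketched numerically. Below is a toy scalar RNN (not from the quiz; the weight value, bias, and step count are illustrative assumptions) showing how the gradient of the final state with respect to the initial state shrinks as it is backpropagated through many steps:

```python
import numpy as np

# Toy scalar RNN: h_t = tanh(w * h_{t-1} + b).
# Backpropagating through T steps multiplies T Jacobian factors
# w * (1 - h_t^2); each has magnitude below 1, so the product vanishes.
T = 50          # number of time steps (arbitrary)
w = 0.5         # recurrent weight (arbitrary, |w| < 1)
b = 1.0         # bias (arbitrary)

h = 0.0
hs = []
for t in range(T):
    h = np.tanh(w * h + b)   # forward step
    hs.append(h)

# backward: d h_T / d h_0 = product over t of w * (1 - h_t^2)
grad = 1.0
for h_t in hs:
    grad *= w * (1 - h_t ** 2)

print(grad)  # vanishingly close to zero after 50 steps
```

Because every factor in the product is well below 1 in magnitude, early time steps receive essentially no gradient signal, which is exactly why plain RNNs struggle with long-range dependencies.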

2.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

Which of the following is a more recent solution to the vanishing gradient problem?

LSTM

CNN

GRU

RNN

3.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

What is the main advantage of GRU over LSTM?

Higher accuracy

Simplicity

Faster training

Better generalization
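The "simplicity" answer can be made concrete with a back-of-the-envelope parameter count: an LSTM layer keeps four gate/candidate weight sets while a GRU keeps three. The layer sizes below are arbitrary assumptions, and the formula is a simplified count (one input matrix, one recurrent matrix, and one bias vector per gate):

```python
# Simplified per-layer parameter count for a gated RNN cell with
# input size I and hidden size H: each gate owns an H x I input
# matrix, an H x H recurrent matrix, and an H bias vector.
def rnn_params(I, H, gates):
    return gates * (H * I + H * H + H)

I, H = 16, 32                         # arbitrary example sizes
lstm = rnn_params(I, H, gates=4)      # LSTM: input, forget, output, cell
gru = rnn_params(I, H, gates=3)       # GRU: update, reset, candidate
print(lstm, gru)                      # GRU needs 3/4 the parameters
```

Fewer parameters and fewer gate computations per step are what make the GRU the simpler (and often faster-to-train) of the two, at comparable accuracy on many tasks.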

4.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

In a simple recurrent neural network, which activation function is often preferred?

Sigmoid

ReLU

Tanh

Softmax

5.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

Why is the tanh activation function preferred in RNNs?

It is easier to implement

It is less prone to overfitting

It maintains memory better

It is computationally efficient
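A quick numerical sketch (illustrative, not from the quiz) shows why tanh helps an RNN maintain memory: its output is zero-centred in (-1, 1), and its derivative peaks at 1.0, compared with sigmoid's peak derivative of 0.25, so repeated backpropagation through tanh shrinks gradients more slowly:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([-2.0, 0.0, 2.0])        # sample pre-activations (arbitrary)
print(np.tanh(x))                     # zero-centred outputs in (-1, 1)
print(1 - np.tanh(x) ** 2)            # tanh derivative: peaks at 1.0 at x = 0
print(sigmoid(x) * (1 - sigmoid(x)))  # sigmoid derivative: peaks at 0.25
```

With a maximum slope four times larger than sigmoid's, tanh passes gradient signal back through more time steps before it vanishes, which is the "maintains memory better" property the question refers to.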

6.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

What is the role of the update gate in a GRU?

To decide whether to update activations

To compute loss

To normalize inputs

To initialize weights

7.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

In GRU, what happens if the update gate is set to zero?

The previous activation is carried forward

The activation is reset

The candidate activation is used

The model stops training
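The behaviour asked about here follows from the standard GRU interpolation h_t = (1 - z_t) * h_{t-1} + z_t * h~_t. A minimal sketch with hand-set gate values (hypothetical numbers, no learned weights) shows both extremes:

```python
import numpy as np

def gru_blend(z, h_prev, h_candidate):
    # Standard GRU interpolation between the previous activation
    # and the candidate activation, controlled by update gate z.
    return (1 - z) * h_prev + z * h_candidate

h_prev = np.array([0.9, -0.4])   # hypothetical previous hidden state
h_cand = np.array([0.1, 0.8])    # hypothetical candidate activation

print(gru_blend(0.0, h_prev, h_cand))  # z = 0: previous activation carried forward
print(gru_blend(1.0, h_prev, h_cand))  # z = 1: candidate replaces it entirely
```

Carrying the previous activation forward unchanged when the gate is closed is precisely what lets a GRU preserve information over long spans and sidestep the vanishing-gradient problem of a plain RNN. (Note that some texts write the interpolation with z on the other term; the convention above matches this quiz's answer.)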
