
RNN & Transformers Quiz

Quiz • Computers • 12th Grade • Medium
Taha rajeh
28 questions
1. MULTIPLE CHOICE QUESTION
2 mins • 1 pt
What is the main challenge associated with training vanilla RNNs on long sequences?
Exploding gradients
Vanishing gradients
Both exploding and vanishing gradients
None of the above
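
A minimal NumPy sketch, not part of the quiz, of why the answer is "both": backpropagation through time multiplies the gradient by the recurrent Jacobian once per step, so its magnitude compounds over the sequence length. The weight matrices and step count here are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

for scale, label in [(0.5, "vanishing"), (1.5, "exploding")]:
    W = scale * np.eye(4)          # recurrent weight matrix (illustrative)
    grad = rng.normal(size=4)      # gradient arriving at the last time step
    for _ in range(50):            # 50 steps of backprop through time
        grad = W.T @ grad          # repeated multiplication by the Jacobian
    print(f"{label}: |grad| after 50 steps = {np.linalg.norm(grad):.3e}")
```

With a spectral radius below 1 the gradient norm collapses toward zero, and above 1 it blows up, which is exactly the pair of failure modes the question names.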
2. MULTIPLE CHOICE QUESTION
30 sec • 1 pt
What is the main limitation of using one-hot encoding for representing words in an RNN?
a) One-hot encoding does not capture semantic relationships between words.
b) One-hot encoding is inefficient for large vocabularies.
c) One-hot encoding can lead to exploding gradients during training.
d) Both a) and b).
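
A small sketch, using an assumed three-word vocabulary, of both limitations named in options a) and b): one-hot vectors grow with vocabulary size, and any two distinct words are orthogonal, so their similarity carries no semantic information.

```python
import numpy as np

vocab = ["cat", "kitten", "car"]           # hypothetical toy vocabulary
index = {w: i for i, w in enumerate(vocab)}

def one_hot(word: str) -> np.ndarray:
    v = np.zeros(len(vocab))               # dimension = vocabulary size
    v[index[word]] = 1.0
    return v

cat, kitten = one_hot("cat"), one_hot("kitten")
print(cat @ kitten)   # 0.0 -- "cat" and "kitten" look totally unrelated
```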
3. MULTIPLE CHOICE QUESTION
2 mins • 1 pt
What is the key idea behind Recurrent Neural Networks (RNNs)?
RNNs have an internal state that is updated as a sequence is processed.
RNNs use a fixed set of parameters for each time step.
RNNs can only process sequential data.
RNNs are feed-forward neural networks.
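
A minimal sketch of the key idea in this question: one fixed set of weights is reused at every time step, while the hidden state carries information forward as the sequence is processed. The dimensions and weight names (W_xh, W_hh) are illustrative, not from the quiz.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_h = 3, 5
W_xh = rng.normal(size=(d_h, d_in))   # input-to-hidden weights
W_hh = rng.normal(size=(d_h, d_h))    # hidden-to-hidden (recurrent) weights
b = np.zeros(d_h)

h = np.zeros(d_h)                     # internal state, updated per step
for x_t in rng.normal(size=(7, d_in)):        # a 7-step input sequence
    h = np.tanh(W_xh @ x_t + W_hh @ h + b)    # same parameters every step
print(h)                              # final state summarizes the sequence
```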
4. MULTIPLE CHOICE QUESTION
2 mins • 1 pt
What is the purpose of the "output gate" in an LSTM?
It controls what information goes into the output of the LSTM cell.
It determines how much information goes through the cell state.
It decides what information is added to the cell state.
It updates the cell state.
5. MULTIPLE CHOICE QUESTION
2 mins • 1 pt
What is the purpose of the "forget gate" in an LSTM (Long Short-Term Memory) network?
It determines how much information goes through the cell state.
It decides what information is added to the cell state.
It controls what goes into the output.
It updates the cell state.
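
A compact sketch of a single LSTM step covering the gates in questions 4 and 5: the forget gate f scales how much of the old cell state survives, the input gate i scales the new candidate information added, and the output gate o controls what part of the cell state reaches the output h. The stacked-parameter layout here is one common convention, chosen for brevity.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    # W, U, b hold the four gates' parameters stacked: f, i, o, candidate g
    z = W @ x + U @ h + b
    f, i, o, g = np.split(z, 4)
    f, i, o = sigmoid(f), sigmoid(i), sigmoid(o)
    c = f * c + i * np.tanh(g)      # forget gate keeps/drops old cell state
    h = o * np.tanh(c)              # output gate filters what is emitted
    return h, c

rng = np.random.default_rng(0)
d_in, d_h = 3, 4
W = rng.normal(size=(4 * d_h, d_in))
U = rng.normal(size=(4 * d_h, d_h))
b = np.zeros(4 * d_h)
h, c = np.zeros(d_h), np.zeros(d_h)
h, c = lstm_step(rng.normal(size=d_in), h, c, W, U, b)
print(h, c)
```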
6. MULTIPLE CHOICE QUESTION
2 mins • 1 pt
What is the purpose of the "attention mechanism" in sequence-to-sequence models?
To improve the model's ability to capture long-term dependencies.
To reduce the computational complexity of the model.
To allow the model to focus on different parts of the input sequence when generating the output sequence.
To prevent vanishing gradients during training.
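
A minimal dot-product attention sketch for question 6: a decoder query is scored against every encoder state, and the softmax weights show the model "focusing" on different input positions when producing each output token. All dimensions and tensors here are made-up placeholders.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
encoder_states = rng.normal(size=(6, 4))   # 6 input positions, 4-dim states
query = rng.normal(size=4)                 # current decoder state

scores = encoder_states @ query            # one alignment score per position
weights = softmax(scores)                  # attention distribution over input
context = weights @ encoder_states         # weighted summary fed to decoder
print(np.round(weights, 3), context)
```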
7. MULTIPLE CHOICE QUESTION
2 mins • 1 pt
In the context of sequence-to-sequence (seq2seq) models, what is the purpose of the "encoder" component?
To generate the output sequence from the input sequence.
To encode the input sequence into a fixed-size vector representation.
To decode the fixed-size vector representation into the output sequence.
To combine the input and output sequences into a single sequence.
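
A sketch of the encoder role from question 7, assuming the classic RNN-based seq2seq setup: run a recurrent cell over the input and keep only the final hidden state as the fixed-size vector the decoder starts from. It reuses the vanilla-RNN update shown under question 3; the encode helper is hypothetical.

```python
import numpy as np

def encode(sequence, W_xh, W_hh, b):
    h = np.zeros(W_hh.shape[0])
    for x_t in sequence:
        h = np.tanh(W_xh @ x_t + W_hh @ h + b)
    return h                    # fixed-size representation of the whole input

rng = np.random.default_rng(0)
d_in, d_h = 3, 5
W_xh, W_hh = rng.normal(size=(d_h, d_in)), rng.normal(size=(d_h, d_h))
context = encode(rng.normal(size=(9, d_in)), W_xh, W_hh, np.zeros(d_h))
print(context.shape)            # (5,) regardless of input length
```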