
Understanding Transformer Models
Authored by Asst. Prof., CSE, Chennai
Computers • University

10 questions
1.
MULTIPLE CHOICE QUESTION
30 sec • 1 pt
What is the primary purpose of the Encoder in a Transformer model?
To generate sequential text outputs
To process and understand the input data before passing it to the decoder
To apply attention mechanisms only on the output
To directly predict the final output
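For the mechanics behind this: an encoder layer builds a contextual representation of every input token via self-attention before anything is handed to the decoder. A minimal NumPy sketch, with toy dimensions and random matrices standing in for learned projections:

import numpy as np

def self_attention(X, Wq, Wk, Wv):
    # Project the input into queries, keys, and values
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    # Scaled dot-product scores: how much each token attends to every other
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over each row
    return weights @ V  # contextual representation of each input token

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))            # 4 input tokens, embedding size 8
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
context = self_attention(X, Wq, Wk, Wv)
print(context.shape)                   # (4, 8): one vector per input token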
2.
MULTIPLE CHOICE QUESTION
30 sec • 1 pt
In a Transformer model, what is the key difference between the Encoder and Decoder?
The Encoder processes input sequences, while the Decoder generates output sequences
The Encoder uses self-attention, while the Decoder does not
The Decoder is responsible for processing input sequences, while the Encoder generates outputs
There is no difference between them
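A toy sketch of this division of labor (ToyModel below is a hypothetical stand-in, not a real API): the encoder runs once over the whole input, while the decoder generates the output sequence one token at a time, consulting the encoder's output at each step.

# Toy stand-in "model" so the loop below actually runs; a real Transformer
# would replace these with learned encoder and decoder networks.
class ToyModel:
    START, END = "<s>", "</s>"
    def encode(self, tokens):
        return list(tokens)                  # pretend: contextual memory of the input
    def decode(self, output_so_far, memory):
        i = len(output_so_far) - 1           # pretend: next token just echoes the input
        return memory[i] if i < len(memory) else self.END

def generate(model, input_tokens, max_len=50):
    memory = model.encode(input_tokens)      # encoder: process the input once
    output = [model.START]                   # decoder starts from a start symbol
    for _ in range(max_len):
        nxt = model.decode(output, memory)   # decoder consults memory each step
        output.append(nxt)
        if nxt == model.END:                 # stop once the end symbol is produced
            break
    return output[1:]

print(generate(ToyModel(), ["the", "cat", "sat"]))  # ['the', 'cat', 'sat', '</s>']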
3.
MULTIPLE CHOICE QUESTION
30 sec • 1 pt
Which of the following architectures is an Encoder-Decoder model?
BERT
GPT
T5
Word2Vec
4.
MULTIPLE CHOICE QUESTION
30 sec • 1 pt
How does BERT differ from GPT?
BERT is bidirectional, while GPT is unidirectional
GPT is bidirectional, while BERT is unidirectional
BERT generates text, while GPT is only used for classification
BERT is trained using autoregressive modeling, while GPT is not
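The difference comes down to the attention mask. A small NumPy sketch with uniform toy scores: a BERT-style encoder lets each position attend both left and right, while a GPT-style decoder applies a causal mask so each position sees only itself and earlier tokens.

import numpy as np

L = 4                                   # toy sequence length
scores = np.zeros((L, L))               # pretend attention scores (all equal)

causal_mask = np.triu(np.ones((L, L), dtype=bool), k=1)  # True above the diagonal
gpt_scores = np.where(causal_mask, -np.inf, scores)      # block future positions

def softmax(s):
    e = np.exp(s - s.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

print(softmax(scores)[1])      # BERT-style: token 1 attends to all 4 positions
print(softmax(gpt_scores)[1])  # GPT-style: token 1 sees only tokens 0 and 1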
5.
MULTIPLE CHOICE QUESTION
30 sec • 1 pt
What does the positional encoding in a Transformer do?
Helps the model understand the order of words in a sequence
Translates words into numerical vectors
Removes the need for self-attention
Reduces computational complexity
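One standard implementation is the sinusoidal scheme from the original Transformer paper, where each position gets a unique pattern of sines and cosines that is added to the token embeddings. A minimal NumPy version with toy sizes:

import numpy as np

def positional_encoding(seq_len, d_model):
    # Each position gets sines/cosines at different frequencies, so the model
    # can recover word order from otherwise order-blind attention.
    pos = np.arange(seq_len)[:, None]            # positions 0..seq_len-1
    i = np.arange(0, d_model, 2)[None, :]        # even embedding indices
    angles = pos / np.power(10000.0, i / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)                 # even dimensions: sine
    pe[:, 1::2] = np.cos(angles)                 # odd dimensions: cosine
    return pe

pe = positional_encoding(seq_len=6, d_model=8)
print(pe.shape)   # (6, 8): added to the token embeddings before the first layer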
6.
MULTIPLE CHOICE QUESTION
30 sec • 1 pt
What is the purpose of the embedding layer in a Transformer model?
To convert input words into numerical vectors
To apply attention mechanisms
To remove redundant information from input
To perform sequence classification
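A minimal sketch of the lookup an embedding layer performs, with a hypothetical three-word vocabulary and random vectors standing in for learned ones:

import numpy as np

vocab = {"the": 0, "cat": 1, "sat": 2}       # hypothetical toy vocabulary
d_model = 8
rng = np.random.default_rng(0)
E = rng.normal(size=(len(vocab), d_model))   # one vector per word (learned in practice)

tokens = ["the", "cat", "sat"]
ids = [vocab[t] for t in tokens]             # words -> integer ids
vectors = E[ids]                             # ids -> numerical vectors (the embedding)
print(vectors.shape)                         # (3, 8): one d_model vector per input word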
7.
MULTIPLE CHOICE QUESTION
30 sec • 1 pt
In an Encoder-Decoder Transformer model, what is the role of the cross-attention mechanism?
It allows the decoder to focus on relevant parts of the encoder's output
It replaces self-attention in the decoder
It prevents overfitting
It ensures that the encoder ignores unnecessary information
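Cross-attention reuses the scaled dot-product machinery of self-attention, but the queries come from the decoder while the keys and values come from the encoder's output, letting each decoder position weigh the encoded input tokens. A minimal NumPy sketch with random stand-in matrices:

import numpy as np

def cross_attention(decoder_states, encoder_out, Wq, Wk, Wv):
    Q = decoder_states @ Wq                    # queries come from the decoder
    K, V = encoder_out @ Wk, encoder_out @ Wv  # keys/values come from the encoder
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)         # each decoder position weighs input tokens
    return w @ V                               # decoder reads from the encoder's output

rng = np.random.default_rng(0)
enc = rng.normal(size=(5, 8))                  # 5 encoded input tokens
dec = rng.normal(size=(3, 8))                  # 3 decoder positions generated so far
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(cross_attention(dec, enc, Wq, Wk, Wv).shape)  # (3, 8)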