
Understanding Transformer Models

Authored by Asst. Prof., CSE, Chennai

Subject: Computers · Level: University




10 questions (7 shown below)


1.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

What is the primary purpose of the Encoder in a Transformer model?

To generate sequential text outputs

To process and understand the input data before passing it to the decoder

To apply attention mechanisms only on the output

To directly predict the final output
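
For reference, the encoder's job is to turn the raw input into contextual representations that the decoder then consumes. A minimal PyTorch sketch using nn.TransformerEncoder; the dimensions below are illustrative assumptions, not part of the question:

```python
import torch
import torch.nn as nn

# Toy dimensions, chosen only for illustration.
d_model, n_heads, n_layers = 64, 4, 2

encoder_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads, batch_first=True)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)

src = torch.randn(1, 10, d_model)  # (batch, sequence length, model dim)
memory = encoder(src)              # contextual representations passed on to a decoder
print(memory.shape)                # torch.Size([1, 10, 64])
```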

2.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

In a Transformer model, what is the key difference between the Encoder and Decoder?

The Encoder processes input sequences, while the Decoder generates output sequences

The Encoder uses self-attention, while the Decoder does not

The Decoder is responsible for processing input sequences, while the Encoder generates outputs

There is no difference between them
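
A compact way to see the division of labour is PyTorch's nn.Transformer, which wires an encoder and a decoder together: the encoder reads the input sequence, the decoder consumes the output-so-far plus the encoder's memory. The shapes here are arbitrary illustrative choices:

```python
import torch
import torch.nn as nn

model = nn.Transformer(d_model=64, nhead=4, num_encoder_layers=2,
                       num_decoder_layers=2, batch_first=True)

src = torch.randn(1, 10, 64)  # input sequence  -> encoder
tgt = torch.randn(1, 7, 64)   # output-so-far   -> decoder
out = model(src, tgt)         # decoder attends to the encoder's memory
print(out.shape)              # torch.Size([1, 7, 64])
```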

3.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

Which of the following architectures is an Encoder-Decoder model?

BERT

GPT

T5

Word2Vec
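
T5 is the encoder-decoder model here: BERT is encoder-only, GPT is decoder-only, and Word2Vec is not a Transformer at all. A usage sketch with the Hugging Face transformers library, assuming the public t5-small checkpoint is available:

```python
# Requires the Hugging Face `transformers` library (and sentencepiece).
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# T5 frames every task as text-to-text: the encoder reads the prompt,
# the decoder generates the answer token by token.
inputs = tokenizer("translate English to German: Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```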

4.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

How does BERT differ from GPT?

BERT is bidirectional, while GPT is unidirectional

GPT is bidirectional, while BERT is unidirectional

BERT generates text, while GPT is only used for classification

BERT is trained using autoregressive modeling, while GPT is not
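
The directionality difference comes down to the attention mask: BERT lets every token attend to the whole sequence, while GPT applies a causal mask so each token sees only its predecessors. A small sketch of the two mask shapes (the sequence length is illustrative):

```python
import torch

seq_len = 5

# BERT-style (bidirectional): nothing is masked; every token
# may attend to every other token.
bidirectional_mask = torch.zeros(seq_len, seq_len)

# GPT-style (unidirectional/causal): positions set to -inf are
# removed before the attention softmax, so token i only sees tokens <= i.
causal_mask = torch.triu(torch.full((seq_len, seq_len), float("-inf")), diagonal=1)
print(causal_mask)
```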

5.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

What does the positional encoding in a Transformer do?

Helps the model understand the order of words in a sequence

Translates words into numerical vectors

Removes the need for self-attention

Reduces computational complexity
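
Self-attention is order-invariant on its own, so the model needs order information added in. The original paper's sinusoidal scheme is one concrete way to do this; a self-contained sketch of that formula (max_len and d_model are arbitrary here):

```python
import math
import torch

def sinusoidal_positional_encoding(max_len: int, d_model: int) -> torch.Tensor:
    """Sinusoidal encoding from 'Attention Is All You Need':
    PE(pos, 2i)   = sin(pos / 10000^(2i/d_model))
    PE(pos, 2i+1) = cos(pos / 10000^(2i/d_model))
    """
    position = torch.arange(max_len).unsqueeze(1)  # (max_len, 1)
    div_term = torch.exp(torch.arange(0, d_model, 2) * (-math.log(10000.0) / d_model))
    pe = torch.zeros(max_len, d_model)
    pe[:, 0::2] = torch.sin(position * div_term)
    pe[:, 1::2] = torch.cos(position * div_term)
    return pe  # added to the token embeddings so word order survives self-attention

pe = sinusoidal_positional_encoding(max_len=50, d_model=64)
print(pe.shape)  # torch.Size([50, 64])
```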

6.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

What is the purpose of the embedding layer in a Transformer model?

To convert input words into numerical vectors

To apply attention mechanisms

To remove redundant information from input

To perform sequence classification
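
An embedding layer is simply a learned lookup table from token ids to vectors. A minimal sketch with PyTorch's nn.Embedding; the vocabulary size and dimension are illustrative assumptions:

```python
import torch
import torch.nn as nn

# Hypothetical vocabulary size and model dimension, for illustration only.
vocab_size, d_model = 10_000, 64

embedding = nn.Embedding(vocab_size, d_model)

token_ids = torch.tensor([[12, 431, 7, 902]])  # integer token ids, not words themselves
vectors = embedding(token_ids)                 # each id mapped to a learned d_model vector
print(vectors.shape)                           # torch.Size([1, 4, 64])
```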

7.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

In an Encoder-Decoder Transformer model, what is the role of the cross-attention mechanism?

It allows the decoder to focus on relevant parts of the encoder's output

It replaces self-attention in the decoder

It prevents overfitting

It ensures that the encoder ignores unnecessary information
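
In cross-attention the queries come from the decoder while the keys and values come from the encoder's output, which is exactly how the decoder focuses on relevant parts of the input. A sketch using PyTorch's nn.MultiheadAttention (all sizes are illustrative):

```python
import torch
import torch.nn as nn

d_model, n_heads = 64, 4
cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

encoder_output = torch.randn(1, 10, d_model)  # "memory" produced by the encoder
decoder_states = torch.randn(1, 7, d_model)   # current decoder hidden states

# Queries from the decoder; keys and values from the encoder, so each
# decoder position can attend over the whole encoded input sequence.
attended, weights = cross_attn(query=decoder_states,
                               key=encoder_output,
                               value=encoder_output)
print(attended.shape)  # torch.Size([1, 7, 64])
print(weights.shape)   # torch.Size([1, 7, 10]), averaged over heads by default
```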
