Quiz#1

University

9 Qs

Similar activities

Quiz Test 1 HCIA-AI

University

9 Qs

Attention Is All You Need | Quiz

University - Professional Development

10 Qs

CV CLUB QUIZ

University

11 Qs

C7-8: ANN & Image Processing

12th Grade - University

10 Qs

Session 1

University

10 Qs

Understanding Vision Transformers

University

10 Qs

NLP-Transformers Last Quiz

University

10 Qs

QUIZ ON AI

University

10 Qs

Quiz#1

Assessment

Quiz

Computers

University

Medium

Created by Akhtar Jamil

9 questions

1.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

Which component is NOT part of the Transformer model architecture?

Convolutional Layer

Feedforward Network

Decoder

Encoder
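
For reference, a purely illustrative inventory (not taken from the quiz or any specific library) of the sublayers that make up the original Transformer; note that no convolutional layer appears in either stack:

```python
# Descriptive labels only, not an API; listed from the original
# Transformer design (encoder-decoder stacks of identical layers).
TRANSFORMER_SUBLAYERS = {
    "encoder layer": [
        "multi-head self-attention",
        "position-wise feedforward network",
        "residual connection + layer normalization around each sublayer",
    ],
    "decoder layer": [
        "masked multi-head self-attention",
        "encoder-decoder (cross) attention",
        "position-wise feedforward network",
        "residual connection + layer normalization around each sublayer",
    ],
}
# No convolutional layer appears anywhere in either stack.
```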

2.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

In self-attention, which tokens are used for inputs Q, K, and V?

Different tokens for each

Only queries are different

Same tokens for all three

Only keys and values are the same
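
For reference, a minimal NumPy sketch of scaled dot-product self-attention; the point is that queries, keys, and values are all projections of the same token matrix x (all sizes and weights below are made-up illustrations):

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention over a single sequence."""
    q = x @ w_q                       # queries from x
    k = x @ w_k                       # keys from the same x
    v = x @ w_v                       # values from the same x
    d_k = k.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)   # every token compared with every token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ v                # weighted mix of values

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))           # 5 tokens, model dimension 8 (illustrative)
w_q, w_k, w_v = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(x, w_q, w_k, w_v).shape)  # (5, 8)
```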

3.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

What does the Vision Transformer (ViT) primarily use for image classification?

Convolutional Neural Networks

Support Vector Machines

Recurrent Neural Networks

Pure Transformer architecture
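
For reference, a small NumPy sketch of the ViT patch-embedding step under assumed toy shapes (32x32 image, 8x8 patches, 64-dim tokens); everything after this step is a standard Transformer encoder, with no convolutional feature extractor:

```python
import numpy as np

def patch_embed(image, patch_size, w_proj):
    """Split an image into non-overlapping patches and linearly project each one."""
    h, w, c = image.shape
    p = patch_size
    patches = image.reshape(h // p, p, w // p, p, c).transpose(0, 2, 1, 3, 4)
    patches = patches.reshape(-1, p * p * c)   # (num_patches, patch_dim)
    return patches @ w_proj                    # (num_patches, d_model)

rng = np.random.default_rng(0)
img = rng.normal(size=(32, 32, 3))             # toy 32x32 RGB image
w_proj = rng.normal(size=(8 * 8 * 3, 64))      # 8x8 patches -> 64-dim tokens
tokens = patch_embed(img, 8, w_proj)
print(tokens.shape)                            # (16, 64): 16 patch tokens
```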

4.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

What is the purpose of positional encoding in the Transformer?

To reduce dimensionality

To add information about token order

To enhance computational speed

To increase model complexity
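
For reference, a minimal NumPy sketch of the fixed sinusoidal positional encoding; since self-attention is order-agnostic, this matrix is added to the token embeddings only to carry token-order information (sizes are illustrative):

```python
import numpy as np

def sinusoidal_positional_encoding(seq_len, d_model):
    """Fixed sinusoidal encoding from the original Transformer paper."""
    pos = np.arange(seq_len)[:, None]              # (seq_len, 1)
    i = np.arange(0, d_model, 2)[None, :]          # even embedding dimensions
    angles = pos / np.power(10000.0, i / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)                   # sine on even dims
    pe[:, 1::2] = np.cos(angles)                   # cosine on odd dims
    return pe

print(sinusoidal_positional_encoding(seq_len=6, d_model=8).shape)  # (6, 8)
```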

5.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

What is the classification token used for in the Vision Transformer?

To reduce training time

To enhance image resolution

To represent the entire image

To increase the number of patches
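
For reference, a toy NumPy sketch of how the [CLS] token is prepended in ViT and read out after the encoder as a representation of the whole image (the encoder itself is stubbed out; all shapes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
num_patches, d_model, num_classes = 16, 64, 10         # illustrative sizes

patch_tokens = rng.normal(size=(num_patches, d_model)) # from the patch embedding
cls_token = rng.normal(size=(1, d_model))              # learnable [CLS] embedding

# Prepend [CLS]: the encoder lets it attend to every patch, so its output
# becomes a summary vector representing the entire image.
tokens = np.concatenate([cls_token, patch_tokens], axis=0)   # (17, d_model)

encoded = tokens                              # stand-in for the Transformer encoder
image_repr = encoded[0]                       # read out the [CLS] position only
w_head = rng.normal(size=(d_model, num_classes))
logits = image_repr @ w_head                  # classify from the whole-image token
print(logits.shape)                           # (10,)
```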

6.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

What is the dimensionality of the model in the training example provided?

10

8

6

4

7.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

Which layer is NOT mentioned as part of the Transformer architecture?

Pooling Layer

Multi-Head Attention

Layer Normalization

Residual Connections
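
For reference, a minimal NumPy sketch of the residual-plus-LayerNorm wiring that wraps each attention and feedforward sublayer in the original Transformer (post-norm form; the sublayer here is a made-up stand-in, and no pooling layer appears in this wiring):

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    """Normalize each token vector to zero mean / unit variance (no learned scale/shift here)."""
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def sublayer_with_residual(x, sublayer):
    """Post-norm wiring: LayerNorm(x + Sublayer(x)), applied around every sublayer."""
    return layer_norm(x + sublayer(x))

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))
w = rng.normal(size=(8, 8))
out = sublayer_with_residual(x, lambda h: h @ w)   # toy sublayer in place of attention/FFN
print(out.shape)                                   # (5, 8)
```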

8.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

What is the main advantage of using Vision Transformers over convolutional networks?

Faster training times

Less computational resources required

Higher accuracy

Better feature extraction

9.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

What is the role of the Gaussian Error Linear Unit (GELU) in the Vision Transformer?

To increase model size

To normalize inputs

To apply activation function

To reduce overfitting
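
For reference, a small NumPy sketch of GELU using the common tanh approximation; it is the activation function applied inside the Transformer's feedforward (MLP) sublayers, not a normalization or regularization step:

```python
import numpy as np

def gelu(x):
    """Gaussian Error Linear Unit (tanh approximation, as used in many ViT/BERT-style MLPs)."""
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x ** 3)))

x = np.linspace(-3, 3, 7)
print(np.round(gelu(x), 3))   # ~0 for large negative inputs, ~x for large positive inputs
```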