Quiz#1

University

9 Qs

Similar activities

QUIZ ON AI (University, 10 Qs)

Conv Neural Networks (University - Professional Development, 7 Qs)

VITAP Quiz-1 (University, 10 Qs)

Computer Networks Quiz (University, 10 Qs)

DDP1.1 Introduction to Programming and Computer System (University, 7 Qs)

Object Detection (University, 10 Qs)

Computer Vision (University, 10 Qs)

Transfer Learning (University, 10 Qs)

Quiz#1

Assessment • Quiz • Computers • University • Medium

Created by Akhtar Jamil

Used 1+ times

9 questions

1.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

Which component is NOT part of the Transformer model architecture?

Convolutional Layer

Feedforward Network

Decoder

Encoder
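
As a study aid for question 1, here is a minimal PyTorch sketch of a standard Transformer encoder block: multi-head self-attention and a feedforward network, each wrapped in a residual connection and layer normalization, with no convolutional layer anywhere. The module name and sizes are illustrative, not taken from the quiz material.

    import torch
    import torch.nn as nn

    class EncoderBlock(nn.Module):
        """Illustrative Transformer encoder block: self-attention plus a
        feedforward network, each followed by a residual connection and
        layer normalization."""

        def __init__(self, d_model=64, n_heads=4, d_ff=256):
            super().__init__()
            self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            self.ff = nn.Sequential(              # position-wise feedforward network
                nn.Linear(d_model, d_ff),
                nn.ReLU(),
                nn.Linear(d_ff, d_model),
            )
            self.norm1 = nn.LayerNorm(d_model)
            self.norm2 = nn.LayerNorm(d_model)

        def forward(self, x):
            attn_out, _ = self.attn(x, x, x)      # self-attention: Q, K, V share the same input
            x = self.norm1(x + attn_out)          # residual connection + layer norm
            x = self.norm2(x + self.ff(x))        # residual connection + layer norm
            return x

    # quick shape check: batch of 2 sequences, 10 tokens, 64-dim embeddings
    print(EncoderBlock()(torch.randn(2, 10, 64)).shape)   # torch.Size([2, 10, 64])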

2.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

In self-attention, which tokens are used as the inputs Q, K, and V?

Different tokens for each

Only queries are different

Same tokens for all three

Only keys and values are the same
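
For question 2, a from-scratch sketch of single-head scaled dot-product self-attention, where the queries, keys, and values are all linear projections of the same token embeddings; the projection matrices and sizes here are illustrative.

    import math
    import torch
    import torch.nn.functional as F

    def self_attention(x, w_q, w_k, w_v):
        """Single-head self-attention: Q, K, and V all come from the same tokens x."""
        q = x @ w_q                                                # queries
        k = x @ w_k                                                # keys
        v = x @ w_v                                                # values
        scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))   # scaled dot products
        weights = F.softmax(scores, dim=-1)                        # attention weights over tokens
        return weights @ v                                         # weighted sum of values

    d_model = 8
    x = torch.randn(5, d_model)                    # 5 tokens, 8-dim embeddings
    w_q, w_k, w_v = (torch.randn(d_model, d_model) for _ in range(3))
    print(self_attention(x, w_q, w_k, w_v).shape)  # torch.Size([5, 8])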

3.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

What does the Vision Transformer (ViT) primarily use for image classification?

Convolutional Neural Networks

Support Vector Machines

Recurrent Neural Networks

Pure Transformer architecture
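
For question 3, a rough sketch of the usual ViT formulation: the image is split into fixed-size patches, each patch is linearly embedded, and the resulting token sequence is processed by a plain Transformer encoder with no convolutional feature extractor. Image size, patch size, and class count are illustrative; the actual ViT also prepends a classification token (see question 5) rather than mean-pooling the tokens.

    import torch
    import torch.nn as nn

    image_size, patch_size, d_model, n_classes = 32, 8, 64, 10
    n_patches = (image_size // patch_size) ** 2                 # 16 patches per image

    img = torch.randn(1, 3, image_size, image_size)             # one RGB image

    # split the image into non-overlapping patches and flatten each patch
    patches = img.unfold(2, patch_size, patch_size).unfold(3, patch_size, patch_size)
    patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(1, n_patches, -1)   # (1, 16, 192)

    tokens = nn.Linear(3 * patch_size * patch_size, d_model)(patches)       # linear patch embedding

    # a plain Transformer encoder classifies the token sequence: no convolutions involved
    encoder = nn.TransformerEncoder(
        nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=2)
    features = encoder(tokens)
    logits = nn.Linear(d_model, n_classes)(features.mean(dim=1))   # pool tokens, then classify
    print(logits.shape)                                            # torch.Size([1, 10])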

4.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

What is the purpose of positional encoding in the Transformer?

To reduce dimensionality

To add information about token order

To enhance computational speed

To increase model complexity
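
For question 4, a sketch of the sinusoidal positional encoding from the original Transformer: a deterministic pattern added to the token embeddings so the model receives information about token order, since self-attention by itself is order-agnostic. Dimensions are illustrative; ViT typically learns its positional embeddings instead.

    import math
    import torch

    def positional_encoding(seq_len, d_model):
        """Sinusoidal positional encoding: each position gets a unique pattern
        of sines and cosines that is added to the token embeddings."""
        pos = torch.arange(seq_len).unsqueeze(1).float()              # (seq_len, 1)
        div = torch.exp(torch.arange(0, d_model, 2).float()
                        * (-math.log(10000.0) / d_model))             # one frequency per dim pair
        pe = torch.zeros(seq_len, d_model)
        pe[:, 0::2] = torch.sin(pos * div)                            # even dimensions
        pe[:, 1::2] = torch.cos(pos * div)                            # odd dimensions
        return pe

    tokens = torch.randn(10, 16)                    # 10 tokens, 16-dim embeddings (illustrative)
    tokens = tokens + positional_encoding(10, 16)   # inject token-order information
    print(tokens.shape)                             # torch.Size([10, 16])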

5.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

What is the classification token used for in the Vision Transformer?

To reduce training time

To enhance image resolution

To represent the entire image

To increase the number of patches
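
For question 5, a sketch of how a ViT-style model uses the classification token: a learnable token is prepended to the patch tokens, passes through the encoder alongside them, and its final state is taken to represent the entire image for the classification head. Shapes and layer counts are illustrative.

    import torch
    import torch.nn as nn

    batch, n_patches, d_model, n_classes = 2, 16, 64, 10

    patch_tokens = torch.randn(batch, n_patches, d_model)       # embedded image patches
    cls_token = nn.Parameter(torch.zeros(1, 1, d_model))        # learnable [CLS] token

    # prepend the [CLS] token so it can attend to every patch inside the encoder
    tokens = torch.cat([cls_token.expand(batch, -1, -1), patch_tokens], dim=1)  # (2, 17, 64)

    encoder = nn.TransformerEncoder(
        nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=2)
    encoded = encoder(tokens)

    cls_out = encoded[:, 0]                                      # final state of the [CLS] token
    logits = nn.Linear(d_model, n_classes)(cls_out)              # it stands in for the whole image
    print(logits.shape)                                          # torch.Size([2, 10])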

6.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

What is the dimensionality of the model in the training example provided?

10

8

6

4

7.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

Which layer is NOT mentioned as part of the Transformer architecture?

Pooling Layer

Multi-Head Attention

Layer Normalization

Residual Connections

8.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

What is the main advantage of using Vision Transformers over convolutional networks?

Faster training times

Less computational resources required

Higher accuracy

Better feature extraction

9.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

What is the role of the Gaussian Error Linear Unit (GELU) in the Vision Transformer?

To increase model size

To normalize inputs

To apply activation function

To reduce overfitting
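
For question 9, a minimal sketch of the MLP block used inside each ViT encoder layer, where GELU serves as the activation function between the two linear layers; the layer sizes are illustrative.

    import torch
    import torch.nn as nn

    # MLP block as used inside a ViT encoder layer: GELU is the activation
    # function between the two linear layers (sizes here are illustrative).
    mlp = nn.Sequential(
        nn.Linear(64, 256),
        nn.GELU(),            # Gaussian Error Linear Unit: smooth non-linear activation
        nn.Linear(256, 64),
    )

    x = torch.randn(2, 17, 64)    # batch of token sequences (e.g., [CLS] + 16 patches)
    print(mlp(x).shape)           # torch.Size([2, 17, 64])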