tech_quiz

Professional Development

10 Qs

Assessment

Quiz

Computers

Hard

Created by

Sai Akella

Used 4+ times

FREE Resource

10 questions

1.

MULTIPLE CHOICE QUESTION

20 sec • 10 pts

[Question and answer choices A–E shown as an image]

2.

MULTIPLE CHOICE QUESTION

10 sec • 2 pts

What is the purpose of setting the model to evaluation mode with model.eval() in PyTorch?

To initialize the model parameters.

To disable gradient computation during training.

To ensure layers like dropout and batch normalization behave correctly during inference.

To load the pre-trained weights of the model.
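The effect of `model.eval()` can be seen directly on a dropout layer: in training mode it randomly zeroes activations, while in evaluation mode it becomes the identity, making inference deterministic. A minimal sketch (the layer and input here are illustrative, not from any particular model):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
drop = nn.Dropout(p=0.5)
x = torch.ones(1, 8)

drop.train()          # training mode: ~half the inputs are zeroed,
train_out = drop(x)   # survivors are scaled by 1/(1-p)

drop.eval()           # evaluation mode: dropout is the identity
eval_out = drop(x)

print(torch.equal(eval_out, x))  # True: inference is deterministic
```

Batch normalization behaves analogously: `eval()` switches it from batch statistics to its stored running statistics.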

3.

MULTIPLE SELECT QUESTION

10 sec • 2 pts

Which of the following code snippets correctly initializes a ResNet50 model without pre-trained weights in PyTorch?

[Answer choices are code-snippet images]

4.

MULTIPLE CHOICE QUESTION

10 sec • 5 pts

[Code snippet shown as an image]

What does the following code snippet do in the context of PyTorch model inference?

It disables gradient computation and performs inference.

It initializes the model parameters without gradient computation.

It enables gradient computation and performs inference.

It computes gradients and performs inference.
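The snippet in the image is not recoverable, but the pattern the correct answer describes is the standard PyTorch inference idiom: wrap the forward pass in `torch.no_grad()` so no gradients are tracked. A sketch with a hypothetical stand-in model:

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)  # stand-in for a trained network
model.eval()

x = torch.randn(1, 4)
with torch.no_grad():   # gradient tracking is disabled...
    y = model(x)        # ...while the forward pass runs normally

print(y.requires_grad)  # False: no autograd graph was built
```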

5.

MULTIPLE CHOICE QUESTION

20 sec • 5 pts

What is the primary advantage of using model parallelism in large language model inference?

Reducing the inference time by processing multiple inputs in parallel.

Distributing the model's computations across multiple GPUs to handle large models that cannot fit into the memory of a single GPU.

Increasing the precision of the model's predictions.

Improving the training speed of the model.
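Model parallelism places different stages of the network on different devices and moves activations between them, which is what lets a model larger than any single GPU's memory run at all. A minimal sketch of the pattern; real setups would use e.g. `"cuda:0"` and `"cuda:1"`, while `"cpu"` stand-ins are used here so the sketch runs anywhere:

```python
import torch
import torch.nn as nn

dev0, dev1 = torch.device("cpu"), torch.device("cpu")  # stand-ins for two GPUs

stage0 = nn.Linear(16, 32).to(dev0)  # first half of the model
stage1 = nn.Linear(32, 4).to(dev1)   # second half of the model

x = torch.randn(1, 16, device=dev0)
with torch.no_grad():
    h = stage0(x).to(dev1)  # transfer activations between devices
    y = stage1(h)
print(y.shape)  # torch.Size([1, 4])
```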

6.

MULTIPLE CHOICE QUESTION

10 sec • 10 pts

What is the purpose of the torch.no_grad() context in the inference of large language models?

To enable gradient computation for backpropagation.

To reduce memory usage by disabling gradient computation.

To speed up the training process.

To initialize model weights.
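The memory saving comes from autograd not recording the computation graph: outside `torch.no_grad()` every operation on a parameter stores a `grad_fn` (and the intermediate tensors backpropagation would need); inside it, nothing is recorded. A small illustration:

```python
import torch

w = torch.randn(3, 3, requires_grad=True)  # e.g. a model parameter
x = torch.randn(1, 3)

y_train = x @ w        # normal mode: autograd records the matmul
with torch.no_grad():
    y_infer = x @ w    # no graph is recorded, saving memory

print(y_train.grad_fn is not None)  # True
print(y_infer.grad_fn)              # None
```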

7.

MULTIPLE CHOICE QUESTION

20 sec • 5 pts

When using a pre-trained large language model for inference, what is the recommended way to handle tokenization?

Implement a custom tokenization algorithm.

Use the tokenization method provided by the model's pre-trained package.

Use a generic tokenization method.
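Why the model's own tokenizer matters: the model only understands token ids from the vocabulary it was trained with, so a different tokenizer maps the same text to incompatible ids. A toy illustration with two hypothetical vocabularies (not from any real model):

```python
# The "model" vocabulary vs. a generic one that assigns different ids.
model_vocab = {"deep": 0, "learning": 1, "rocks": 2}
generic_vocab = {"rocks": 0, "deep": 1, "learning": 2}

def tokenize(text, vocab):
    """Map whitespace-split words to their ids in the given vocabulary."""
    return [vocab[w] for w in text.split()]

text = "deep learning rocks"
print(tokenize(text, model_vocab))    # [0, 1, 2]  ids the model expects
print(tokenize(text, generic_vocab))  # [1, 2, 0]  same text, wrong ids
```

In practice this is why libraries pair each pre-trained checkpoint with its own tokenizer, and why mixing them silently degrades outputs rather than raising an error.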
