tech_quiz

Professional Development

10 Qs


tech_quiz

Assessment • Quiz • Computers • Professional Development • Hard

Created by Sai Akella

10 questions

1.

MULTIPLE CHOICE QUESTION

20 sec • 10 pts

[Media image — question and answer choices shown as an image]

A

B

C

D

E

2.

MULTIPLE CHOICE QUESTION

10 sec • 2 pts

What is the purpose of setting the model to evaluation mode with model.eval() in PyTorch?

To initialize the model parameters.

To disable gradient computation during training.

To ensure layers like dropout and batch normalization behave correctly during inference.

To load the pre-trained weights of the model.
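The distinction behind this question can be checked directly in PyTorch. A minimal sketch (the toy model here is illustrative, not from the quiz): `model.eval()` flips layers such as Dropout and BatchNorm into inference behaviour, but it does not disable gradient tracking — that is a separate concern handled by `torch.no_grad()`.

```python
import torch
import torch.nn as nn

# Toy model with a dropout layer to show what eval() changes.
model = nn.Sequential(nn.Linear(4, 4), nn.Dropout(p=0.5))

model.train()
assert model.training  # dropout active: activations randomly zeroed

model.eval()
assert not model.training  # dropout becomes a no-op

x = torch.ones(1, 4)
y1 = model(x)
y2 = model(x)
# In eval mode dropout is deterministic, so repeated forward
# passes on the same input give identical outputs.
print(torch.equal(y1, y2))  # True
```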

3.

MULTIPLE SELECT QUESTION

10 sec • 2 pts

Which of the following code snippets correctly initializes a ResNet50 model without pre-trained weights in PyTorch?

[Media images — the four code-snippet options are shown as images]

4.

MULTIPLE CHOICE QUESTION

10 sec • 5 pts

[Media image — code snippet shown as an image]

What does the following code snippet do in the context of PyTorch model inferencing?

It disables gradient computation and performs inference.

It initializes the model parameters without gradient computation.

It enables gradient computation and performs inference.

It computes gradients and performs inference.
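The snippet in the image is not captured, but the pattern the question describes can be sketched: wrapping the forward pass in `torch.no_grad()` disables autograd bookkeeping, so no computation graph is built while inference runs (the toy model is illustrative).

```python
import torch
import torch.nn as nn

model = nn.Linear(8, 2)
model.eval()

x = torch.randn(1, 8)

# no_grad(): no graph is recorded, so the output carries no
# gradient history even though the model's parameters require grad.
with torch.no_grad():
    out = model(x)

print(out.requires_grad)  # False
```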

5.

MULTIPLE CHOICE QUESTION

20 sec • 5 pts

What is the primary advantage of using model parallelism in large language model inference?

Reducing the inference time by processing multiple inputs in parallel.

Distributing the model's computations across multiple GPUs to handle large models that cannot fit into the memory of a single GPU.

Increasing the precision of the model's predictions.

Improving the training speed of the model.
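A hedged sketch of the idea behind the correct option: in model parallelism each stage of the network lives on its own device and activations are moved between stages, so a model too large for one GPU's memory can still run. On real hardware `dev0`/`dev1` would be `"cuda:0"`/`"cuda:1"`; this sketch falls back to CPU so it runs anywhere.

```python
import torch
import torch.nn as nn

# Place each stage on its own device (CPU fallback when <2 GPUs).
has_two_gpus = torch.cuda.device_count() >= 2
dev0 = torch.device("cuda:0" if has_two_gpus else "cpu")
dev1 = torch.device("cuda:1" if has_two_gpus else "cpu")

stage1 = nn.Linear(16, 32).to(dev0)
stage2 = nn.Linear(32, 4).to(dev1)

x = torch.randn(1, 16).to(dev0)
h = stage1(x)
y = stage2(h.to(dev1))  # move activations to the next stage's device
print(y.shape)  # torch.Size([1, 4])
```

Note this sketch parallelises the model's layers across devices, which is different from processing multiple inputs in parallel (data parallelism).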

6.

MULTIPLE CHOICE QUESTION

10 sec • 10 pts

What is the purpose of the torch.no_grad() context in the inference of large language models?

To enable gradient computation for backpropagation.

To reduce memory usage by disabling gradient computation.

To speed up the training process.

To initialize model weights.
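The memory saving the correct option refers to can be observed through the autograd graph: outside `torch.no_grad()` every intermediate result is retained for backpropagation, while inside it no graph is attached, so those buffers can be freed immediately (the tensors here are illustrative).

```python
import torch

w = torch.randn(1000, 1000, requires_grad=True)
x = torch.randn(1000)

y_train = w @ x            # graph recorded for a later backward pass
with torch.no_grad():
    y_infer = w @ x        # no graph: intermediates are not retained

print(y_train.grad_fn is not None, y_infer.grad_fn)  # True None
```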

7.

MULTIPLE CHOICE QUESTION

20 sec • 5 pts

When using a pre-trained large language model for inference, what is the recommended way to handle tokenization?

Implement a custom tokenization algorithm.

Use the tokenization method provided by the model's pre-trained package.

Use a generic tokenization method.
