Quiz#1

University • 9 Qs

Similar activities

NLP - Lecture 1 (University, 7 Qs)

Unit 24 - Java Basics (University, 10 Qs)

C++ - Introduction (University, 10 Qs)

UNIT 1 - WHAT IS C++ (University, 10 Qs)

COMPILER DESIGN QUIZ 28.3.2023 (University, 10 Qs)

Cross-Site Request Forgery (University, 11 Qs)

Quiz: Motherboard (3rd Grade - University, 10 Qs)

Recuperación de errores / Error Recovery (University, 8 Qs)

Quiz#1

Assessment • Quiz • Computers • University • Medium

Created by Akhtar Jamil


9 questions


1. MULTIPLE CHOICE QUESTION • 30 sec • 1 pt

Which component is NOT part of the Transformer model architecture?

Convolutional Layer

Feedforward Network

Decoder

Encoder
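
For context, a minimal sketch of the components this question names: a standard Transformer is built from an encoder, a decoder, and feedforward sublayers, with no convolutional layers anywhere. This uses PyTorch's built-in module; the hyperparameters below are assumptions, not values from the quiz.

```python
import torch
import torch.nn as nn

# Encoder-decoder Transformer: encoder layers, decoder layers, and a
# feedforward network inside each block, but no convolutional layers.
model = nn.Transformer(d_model=64, nhead=4,
                       num_encoder_layers=2, num_decoder_layers=2,
                       dim_feedforward=256, batch_first=True)

src = torch.randn(1, 10, 64)   # encoder input: 10 source tokens
tgt = torch.randn(1, 7, 64)    # decoder input: 7 target tokens
out = model(src, tgt)          # (1, 7, 64)
```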

2. MULTIPLE CHOICE QUESTION • 30 sec • 1 pt

In self-attention, which tokens are used for inputs Q, K, and V?

Different tokens for each

Only queries are different

Same tokens for all three

Only keys and values are the same
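
A minimal self-attention sketch (PyTorch; the shapes are assumptions) showing how Q, K, and V are all projections of the same input tokens:

```python
import torch
import torch.nn.functional as F

d_model = 8
tokens = torch.randn(5, d_model)                  # 5 tokens, embedding dim 8

W_q = torch.nn.Linear(d_model, d_model, bias=False)
W_k = torch.nn.Linear(d_model, d_model, bias=False)
W_v = torch.nn.Linear(d_model, d_model, bias=False)

Q, K, V = W_q(tokens), W_k(tokens), W_v(tokens)   # the same tokens feed all three
scores = Q @ K.T / d_model ** 0.5                 # scaled dot-product attention
weights = F.softmax(scores, dim=-1)
out = weights @ V                                 # contextualized token representations
```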

3. MULTIPLE CHOICE QUESTION • 30 sec • 1 pt

What does the Vision Transformer (ViT) primarily use for image classification?

Convolutional Neural Networks

Support Vector Machines

Recurrent Neural Networks

Pure Transformer architecture
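
A rough sketch of how ViT turns an image into a token sequence that a plain Transformer encoder consumes, with no convolutions involved. The 32x32 RGB image, 8x8 patch size, and embedding width are assumptions for illustration.

```python
import torch

img = torch.randn(1, 3, 32, 32)                   # (batch, channels, H, W)
p = 8                                             # patch size (assumed)

# Cut the image into non-overlapping 8x8 patches and flatten each patch.
patches = img.unfold(2, p, p).unfold(3, p, p)     # (1, 3, 4, 4, 8, 8)
patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(1, 16, 3 * p * p)

embed = torch.nn.Linear(3 * p * p, 64)            # linear patch embedding
tokens = embed(patches)                           # (1, 16, 64), fed to a Transformer encoder
```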

4. MULTIPLE CHOICE QUESTION • 30 sec • 1 pt

What is the purpose of positional encoding in the Transformer?

To reduce dimensionality

To add information about token order

To enhance computational speed

To increase model complexity
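
A small sketch of the sinusoidal positional encoding from the original Transformer, which simply adds token-order information to the embeddings. The sequence length and model dimension below are assumptions.

```python
import torch

def positional_encoding(seq_len: int, d_model: int) -> torch.Tensor:
    pos = torch.arange(seq_len, dtype=torch.float32).unsqueeze(1)   # (seq_len, 1)
    i = torch.arange(0, d_model, 2, dtype=torch.float32)            # even dimensions
    angle = pos / (10000 ** (i / d_model))
    pe = torch.zeros(seq_len, d_model)
    pe[:, 0::2] = torch.sin(angle)
    pe[:, 1::2] = torch.cos(angle)
    return pe

tokens = torch.randn(5, 8)                     # embeddings carry no order information
tokens = tokens + positional_encoding(5, 8)    # order information is simply added
```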

5. MULTIPLE CHOICE QUESTION • 30 sec • 1 pt

What is the classification token used for in the Vision Transformer?

To reduce training time

To enhance image resolution

To represent the entire image

To increase the number of patches
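
A sketch of the classification token in ViT: a learnable embedding prepended to the patch tokens, whose output state is taken to represent the entire image. The dimensions and class count are assumptions.

```python
import torch

d_model, n_patches = 64, 16
patch_tokens = torch.randn(1, n_patches, d_model)            # output of patch embedding

cls_token = torch.nn.Parameter(torch.zeros(1, 1, d_model))   # learnable [CLS] embedding
x = torch.cat([cls_token, patch_tokens], dim=1)              # (1, 17, d_model)

# ... x would pass through the Transformer encoder here ...
image_repr = x[:, 0]                    # the [CLS] position stands in for the whole image
head = torch.nn.Linear(d_model, 10)     # classification head (10 classes assumed)
logits = head(image_repr)               # (1, 10)
```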

6. MULTIPLE CHOICE QUESTION • 30 sec • 1 pt

What is the dimensionality of the model in the training example provided?

10

8

6

4

7. MULTIPLE CHOICE QUESTION • 30 sec • 1 pt

Which layer is NOT mentioned as part of the Transformer architecture?

Pooling Layer

Multi-Head Attention

Layer Normalization

Residual Connections
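
A compact sketch of one Transformer encoder block built from the layers this question lists: multi-head attention, layer normalization, residual connections, and a feedforward network, with no pooling layer. Hyperparameters are assumptions.

```python
import torch
import torch.nn as nn

class EncoderBlock(nn.Module):
    def __init__(self, d_model=64, n_heads=4, d_ff=256):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.ff = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                                nn.Linear(d_ff, d_model))

    def forward(self, x):
        attn_out, _ = self.attn(x, x, x)   # self-attention: Q = K = V = x
        x = self.norm1(x + attn_out)       # residual connection + layer norm
        x = self.norm2(x + self.ff(x))     # residual connection + layer norm
        return x

block = EncoderBlock()
out = block(torch.randn(1, 17, 64))        # (batch, tokens, d_model)
```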

8. MULTIPLE CHOICE QUESTION • 30 sec • 1 pt

What is the main advantage of using Vision Transformers over convolutional networks?

Faster training times

Less computational resources required

Higher accuracy

Better feature extraction

9. MULTIPLE CHOICE QUESTION • 30 sec • 1 pt

What is the role of the Gaussian Error Linear Unit (GELU) in the Vision Transformer?

To increase model size

To normalize inputs

To apply an activation function

To reduce overfitting
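
A small sketch of the MLP sub-block inside a ViT encoder layer, where GELU is used as the activation function between two linear layers. The layer sizes are assumptions.

```python
import torch
import torch.nn as nn

mlp = nn.Sequential(
    nn.Linear(64, 256),
    nn.GELU(),             # Gaussian Error Linear Unit as the activation function
    nn.Linear(256, 64),
)
out = mlp(torch.randn(1, 17, 64))   # (batch, tokens, d_model)
```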