Generative Models Quiz

Professional Development

12 Qs

Similar activities

Computer Hardware (12th Grade - Professional Development, 16 Qs)

Fusion 360 Commands (9th Grade - Professional Development, 16 Qs)

Robotics/Coding Pop Quiz (4th Grade - Professional Development, 10 Qs)

Attention Is All You Need | Quiz (University - Professional Development, 10 Qs)

Fun Facts about AI! (Professional Development, 12 Qs)

FinTech 11-2 Classification (Professional Development, 10 Qs)

ภาพกราฟิกเคลื่อนไหว (Animated Graphics) (Professional Development, 10 Qs)

STC Assessment 2 (Professional Development, 12 Qs)

Generative Models Quiz

Assessment • Quiz • Computers • Professional Development • Medium difficulty

Created by Vijay Agrawal

12 questions

1.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

Which of the following correctly describes the role of the encoder and decoder in a Variational Autoencoder (VAE) for image generation?

The encoder compresses the image into a fixed representation, and the decoder reconstructs the exact same image

The encoder maps the image to a distribution in latent space, and the decoder samples from this distribution to generate similar images

The encoder identifies key features in the image, and the decoder amplifies these features

The encoder removes noise from the image, and the decoder enhances image quality
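For context, a minimal sketch of how a VAE's encoder and decoder interact: the encoder parameterizes a latent distribution, a latent vector is sampled from it, and the decoder maps that sample back to image space. Layer sizes and names here are illustrative assumptions, not from any specific implementation.

```python
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    """Minimal VAE sketch (illustrative layer sizes, flattened 28x28 images assumed)."""
    def __init__(self, x_dim=784, z_dim=16):
        super().__init__()
        self.enc = nn.Linear(x_dim, 128)
        self.mu = nn.Linear(128, z_dim)        # mean of q(z|x)
        self.logvar = nn.Linear(128, z_dim)    # log-variance of q(z|x)
        self.dec = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(), nn.Linear(128, x_dim))

    def forward(self, x):
        h = torch.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        # Sample a latent vector from the encoded distribution (reparameterization trick).
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z), mu, logvar
```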

2.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

Which statement(s) correctly distinguish a Variational Autoencoder (VAE) from a standard Autoencoder (AE)?

VAEs produce deterministic encodings, while AEs produce probabilistic latent representations.

VAEs learn a continuous latent space by sampling from a distribution, whereas AEs learn a direct mapping without probabilistic sampling.

VAEs cannot reconstruct inputs accurately due to random noise in sampling, while AEs always perform perfect reconstruction.

AEs use an encoder–decoder structure, while VAEs do not.
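A side-by-side sketch of the encoding step makes the deterministic-versus-probabilistic distinction concrete; the function names and weight arguments below are hypothetical, for illustration only.

```python
import torch

def ae_encode(x, weight):
    # Standard AE: a direct, deterministic mapping x -> z.
    return x @ weight

def vae_encode(x, w_mu, w_logvar):
    # VAE: x is mapped to a distribution; z is then sampled from it,
    # which yields a continuous latent space usable for generation.
    mu, logvar = x @ w_mu, x @ w_logvar
    return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
```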

3.

MULTIPLE SELECT QUESTION

30 sec • 1 pt

Which of the following are true about Diffusion Models used in image generation?

They learn to reverse a noising process applied to training images.

They require adversarial training with a discriminator network.

They can produce high-quality images by iteratively denoising samples from random noise.

They do not use latent variables at all.
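As a point of reference, generation with a diffusion model starts from random noise and iteratively denoises it. The sketch below is a toy version of that loop; `model` and the update rule are simplified assumptions, not the exact DDPM equations.

```python
import torch

def sample(model, shape=(1, 3, 64, 64), steps=50):
    """Toy reverse-diffusion loop: start from pure noise and denoise iteratively."""
    x = torch.randn(shape)                  # random noise, no image content yet
    for t in reversed(range(steps)):
        predicted_noise = model(x, t)       # the model was trained to predict the added noise
        x = x - predicted_noise / steps     # simplified update, not the exact DDPM rule
    return x
```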

4.

MULTIPLE SELECT QUESTION

30 sec • 1 pt

Which of the following statements about latent space is/are correct?

A well-trained latent space allows smooth interpolation between data samples.

Latent dimensions always correspond directly to interpretable features (e.g., “hair color,” “background color”).

A “latent vector” can be sampled from a prior (e.g., Gaussian) to generate new outputs.

High-dimensional latent spaces automatically guarantee high-quality generation.
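Latent-space interpolation and sampling from a Gaussian prior can be illustrated in a few lines; `decoder` is an assumed, already-trained model.

```python
import torch

def interpolate(decoder, z_a, z_b, n=8):
    # Smooth interpolation: decode points along the line between two latent vectors.
    return [decoder((1 - a) * z_a + a * z_b) for a in torch.linspace(0.0, 1.0, n)]

def sample_new(decoder, z_dim=16):
    # Sampling from the prior: a fresh z ~ N(0, I) decodes to a new output.
    return decoder(torch.randn(z_dim))
```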

5.

MULTIPLE SELECT QUESTION

30 sec • 1 pt

In large language models (LLMs) such as GPT-3.5, which techniques are commonly used to improve training and generation quality?

Masked language modeling (like in BERT)

Teacher forcing for sequence-to-sequence training

Next-token prediction on large-scale unlabelled text corpora

Reinforcement learning with human feedback (RLHF)
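For reference, next-token prediction (the core pre-training objective behind GPT-style models) reduces to a cross-entropy loss on a shifted token sequence; `model` and the tensor shapes below are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def next_token_loss(model, token_ids):
    """Next-token prediction: each position learns to predict the token that follows it."""
    inputs, targets = token_ids[:, :-1], token_ids[:, 1:]     # shift the sequence by one
    logits = model(inputs)                                    # (batch, seq_len - 1, vocab)
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
```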

6.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

Which of the following is a challenge specific to text-to-video generation compared to text-to-image?

Synchronizing multiple frames to maintain temporal consistency

Generating color images

Handling variable-length text prompts

Converting text to embeddings
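Temporal consistency across frames can be quantified crudely by measuring how much consecutive generated frames drift; the metric below is an illustrative toy, not a standard benchmark.

```python
import torch

def temporal_drift(frames):
    """Mean squared difference between consecutive frames; lower means smoother video."""
    # frames: tensor of shape (num_frames, channels, height, width)
    return (frames[1:] - frames[:-1]).pow(2).mean().item()
```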

7.

MULTIPLE SELECT QUESTION

30 sec • 1 pt

You want to ensemble different generative models (e.g., a diffusion model and a VAE) to leverage their strengths. Which approach(es) correctly describe possible ensembling strategies?

Train both models on the same data and then average their generated outputs pixel by pixel.

Use one model’s latent representation as input to another model (e.g., VAE → Diffusion) to refine generation.

Randomly pick outputs from either the diffusion model or the VAE.

Use a gating network that decides which model to use based on input constraints or quality measures.
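Two of the listed strategies, pipelining one model's latent representation into another and gating between models per input, can be sketched as follows; `vae`, `diffusion`, and `gate` are assumed pre-trained components with hypothetical interfaces.

```python
def pipeline_generate(vae, diffusion, x):
    # Pipelining: one model's latent representation seeds the other,
    # e.g. a VAE encoding that a diffusion model then refines.
    z = vae.encode(x)                      # hypothetical interface
    return diffusion.refine(z)             # hypothetical interface

def gated_generate(gate, vae, diffusion, prompt):
    # Gating: a small network decides which generator suits this input.
    prefer_diffusion = gate(prompt) > 0.5  # hypothetical quality/constraint score
    return diffusion.generate(prompt) if prefer_diffusion else vae.generate(prompt)
```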
