
Generative Models Quiz

Authored by Vijay Agrawal

Computers

Professional Development


12 questions


1.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

Which of the following correctly describes the role of the encoder and decoder in a Variational Autoencoder (VAE) for image generation?

The encoder compresses the image into a fixed representation, and the decoder reconstructs the exact same image

The encoder maps the image to a distribution in latent space, and the decoder samples from this distribution to generate similar images

The encoder identifies key features in the image, and the decoder amplifies these features

The encoder removes noise from the image, and the decoder enhances image quality
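The correct option above — encoder maps to a distribution, decoder samples from it — can be illustrated with a toy sketch. The functions below are hypothetical stand-ins, not a trained network: a real VAE learns the encoder and decoder weights, but the shape of the computation (mean and log-variance out of the encoder, reparameterized sampling, then decoding) is the same.

```python
import math
import random

random.seed(0)

def encoder(x):
    # Toy encoder: maps an input vector to the PARAMETERS of a
    # Gaussian in latent space (mean and log-variance), not to a
    # single fixed code as in a plain autoencoder.
    mu = [0.5 * v for v in x]
    log_var = [-1.0 for _ in x]
    return mu, log_var

def sample_latent(mu, log_var):
    # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, 1).
    return [m + math.exp(0.5 * lv) * random.gauss(0.0, 1.0)
            for m, lv in zip(mu, log_var)]

def decoder(z):
    # Toy decoder: maps a latent sample back to data space.
    return [2.0 * v for v in z]

x = [1.0, -2.0, 0.5]
mu, log_var = encoder(x)
z1 = sample_latent(mu, log_var)
z2 = sample_latent(mu, log_var)
# Two samples from the same input differ, so decoding them yields
# similar but not identical outputs -- the basis of VAE generation.
print(decoder(z1))
print(decoder(z2))
```

Because sampling injects noise, the same input never decodes to exactly the same output twice, which is precisely why option 1 ("reconstructs the exact same image") is wrong.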

2.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

Which statement(s) correctly distinguish a Variational Autoencoder (VAE) from a standard Autoencoder (AE)?

VAEs produce deterministic encodings, while AEs produce probabilistic latent representations.

VAEs learn a continuous latent space by sampling from a distribution, whereas AEs learn a direct mapping without probabilistic sampling.

VAEs cannot reconstruct inputs accurately due to random noise in sampling, while AEs always perform perfect reconstruction.

AEs use an encoder–decoder structure, while VAEs do not.

3.

MULTIPLE SELECT QUESTION

30 sec • 1 pt

Which of the following are true about Diffusion Models used in image generation?

They learn to reverse a noising process applied to training images.

They require adversarial training with a discriminator network.

They can produce high-quality images by iteratively denoising samples from random noise.

They do not use latent variables at all.
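The two true statements above (learning to reverse a noising process, then generating by iterative denoising from random noise) can be sketched as a skeleton. `predict_noise` here is a hypothetical placeholder for the learned denoising network; note that no discriminator appears anywhere, which is what separates diffusion models from GANs.

```python
import random

random.seed(0)

def add_noise(x, t, num_steps):
    # Forward (noising) process: blend the data toward pure Gaussian
    # noise as t grows. Training teaches a network to undo this.
    alpha = 1.0 - t / num_steps
    return [alpha * v + (1.0 - alpha) * random.gauss(0.0, 1.0) for v in x]

def predict_noise(x_t, t):
    # Stand-in for the learned denoiser; a real diffusion model is a
    # neural network trained to predict the noise present at step t.
    return [0.1 * v for v in x_t]

def generate(dim, num_steps=10):
    # Reverse process: start from pure random noise and iteratively
    # subtract the predicted noise. No adversarial training involved.
    x = [random.gauss(0.0, 1.0) for _ in range(dim)]
    for t in range(num_steps, 0, -1):
        eps = predict_noise(x, t)
        x = [v - e for v, e in zip(x, eps)]
    return x

sample = generate(dim=4)
print(sample)
```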

4.

MULTIPLE SELECT QUESTION

30 sec • 1 pt

Which of the following statements about latent space is/are correct?

A well-trained latent space allows smooth interpolation between data samples.

Latent dimensions always correspond directly to interpretable features (e.g., “hair color,” “background color”).

A “latent vector” can be sampled from a prior (e.g., Gaussian) to generate new outputs.

High-dimensional latent spaces automatically guarantee high-quality generation.
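The first true statement above — smooth interpolation between samples — amounts to walking a straight line between two latent vectors and decoding each intermediate point. A minimal sketch (the latent vectors here are made up for illustration):

```python
def lerp(z1, z2, alpha):
    # Linear interpolation between two latent vectors. Decoding the
    # intermediate points of a well-trained latent space yields a
    # smooth transition between the two corresponding outputs.
    return [(1.0 - alpha) * a + alpha * b for a, b in zip(z1, z2)]

z_cat = [1.0, 0.0]  # hypothetical latent code for one sample
z_dog = [0.0, 1.0]  # hypothetical latent code for another

# Sweep alpha from 0 to 1 to trace the interpolation path.
for step in range(5):
    alpha = step / 4
    print(lerp(z_cat, z_dog, alpha))
```

Note that nothing in this construction makes individual dimensions interpretable, and adding more dimensions does not by itself improve quality — which is why the second and fourth statements are false.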

5.

MULTIPLE SELECT QUESTION

30 sec • 1 pt

In large language models (LLMs) such as GPT-3.5, which techniques are commonly used to improve training and generation quality?

Masked language modeling (like in BERT)

Teacher forcing for sequence-to-sequence training

Next-token prediction on large-scale unlabelled text corpora

Reinforcement learning with human feedback (RLHF)
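The core objective in the list above — next-token prediction on unlabelled text — can be shown at toy scale with a bigram counter. This is obviously not how GPT-style models are implemented (they use transformers over subword tokens), but the training signal is the same: given the preceding context, predict which token comes next, with no labels beyond the text itself.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    # Next-token prediction from raw text: for each token, count
    # which token follows it. The corpus itself is the supervision.
    counts = defaultdict(Counter)
    for sentence in corpus:
        tokens = sentence.split()
        for cur, nxt in zip(tokens, tokens[1:]):
            counts[cur][nxt] += 1
    return counts

def predict_next(counts, token):
    # Greedy decoding: pick the most frequent continuation.
    if token not in counts:
        return None
    return counts[token].most_common(1)[0][0]

corpus = ["the cat sat", "the cat ran", "the dog sat"]
model = train_bigram(corpus)
print(predict_next(model, "the"))  # "cat" (follows "the" twice vs once for "dog")
```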

6.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

Which of the following is a challenge specific to text-to-video generation compared to text-to-image?

Synchronizing multiple frames to maintain temporal consistency

Generating color images

Handling variable-length text prompts

Converting text to embeddings

7.

MULTIPLE SELECT QUESTION

30 sec • 1 pt

You want to ensemble different generative models (e.g., a diffusion model and a VAE) to leverage their strengths. Which approach(es) correctly describe possible ensembling strategies?

Train both models on the same data and then average their generated outputs pixel by pixel.

Use one model’s latent representation as input to another model (e.g., VAE → Diffusion) to refine generation.

Randomly pick outputs from either the diffusion model or the VAE.

Use a gating network that decides which model to use based on input constraints or quality measures.
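The gating strategy in the last option can be sketched as a simple router. The two generator functions below are hypothetical stand-ins for real models, and the gate here is a hand-written rule; in practice the gate would itself be a learned network scoring input constraints or output quality.

```python
def diffusion_generate(prompt):
    # Stand-in for a slower, higher-fidelity model.
    return f"diffusion_output({prompt})"

def vae_generate(prompt):
    # Stand-in for a faster, lower-fidelity model.
    return f"vae_output({prompt})"

def gated_generate(prompt, need_high_quality):
    # Gating: route each request to the model whose strengths match
    # the input constraints, rather than averaging outputs pixel by
    # pixel (which would blur the result).
    if need_high_quality:
        return diffusion_generate(prompt)
    return vae_generate(prompt)

print(gated_generate("a red car", need_high_quality=True))
print(gated_generate("a red car", need_high_quality=False))
```

The pipelined option (VAE latent fed into a diffusion model) is also used in practice; latent diffusion models run the diffusion process inside an autoencoder's latent space for exactly this reason.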
