ML B2 CH7

Assessment • Quiz • Computers • University • Easy

Created by Jhonston Benjumea

10 questions

1.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

What does an LSTM-based language model generate?
Probability of word translations
Probability distribution of the next word
Fixed-length sentence embeddings
Correct grammar for sentences

Answer explanation

The LSTM model generates a probability distribution for the next word in a sentence.
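
To make this concrete, here is a minimal sketch of an LSTM language model (PyTorch is assumed here; the quiz names no framework, and the vocabulary size and layer dimensions are illustrative):

```python
import torch
import torch.nn as nn

class LSTMLanguageModel(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, token_ids):
        h, _ = self.lstm(self.embed(token_ids))  # (batch, seq, hidden)
        logits = self.out(h)                     # (batch, seq, vocab)
        return torch.softmax(logits, dim=-1)     # one distribution per position

model = LSTMLanguageModel()
probs = model(torch.tensor([[3, 41, 7]]))        # toy word indices
print(probs[0, -1].sum())                        # ~1.0: a distribution over the next word
```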

2.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

What is a deterministic method in sentence generation?
Randomly choosing any word
Sampling based on probability
Choosing the word with the highest probability
Shuffling the input words

Answer explanation

Deterministic generation selects the word with the highest probability at each step.
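
The difference is easy to see in code (PyTorch assumed; the three-word distribution is a toy example):

```python
import torch

probs = torch.tensor([0.1, 0.6, 0.3])   # toy next-word distribution

greedy = torch.argmax(probs)             # deterministic: always picks index 1
sampled = torch.multinomial(probs, 1)    # stochastic: picks index 1 only ~60% of runs
```

Greedy (argmax) selection produces the same sentence every time for the same input, while sampling can produce a different continuation on each run.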

3.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

What does the seq2seq model consist of?
Multiple attention layers
A Transformer and Decoder
An Encoder and a Decoder
Two separate RNNs without communication

Answer explanation

Seq2seq models consist of an Encoder that processes input and a Decoder that generates output.
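
A minimal sketch of that two-part structure (PyTorch assumed; the layer names and dimensions are illustrative, not from the quiz):

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, src_vocab=1000, tgt_vocab=1000, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.src_embed = nn.Embedding(src_vocab, embed_dim)
        self.tgt_embed = nn.Embedding(tgt_vocab, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.decoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, tgt_vocab)

    def forward(self, src_ids, tgt_ids):
        _, state = self.encoder(self.src_embed(src_ids))           # encode the input
        dec_out, _ = self.decoder(self.tgt_embed(tgt_ids), state)  # decode from encoder state
        return self.out(dec_out)                                   # logits over target words

model = Seq2Seq()
logits = model(torch.tensor([[5, 8, 2]]), torch.tensor([[1, 9]]))  # shape (1, 2, 1000)
```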

4.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

What is the purpose of the encoder in seq2seq?
Generate translations
Convert a sequence into a fixed-length vector
Decode output text
Create random data

Answer explanation

The encoder transforms a variable-length input sequence into a fixed-length context vector.
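
The key property, sketched below, is that the encoder's final hidden state has the same size no matter how long the input is (PyTorch assumed; sizes illustrative):

```python
import torch
import torch.nn as nn

embed = nn.Embedding(1000, 64)
encoder = nn.LSTM(64, 128, batch_first=True)

short = torch.tensor([[4, 9]])                # 2-word input
longer = torch.tensor([[4, 9, 17, 2, 301]])   # 5-word input
_, (h1, _) = encoder(embed(short))
_, (h2, _) = encoder(embed(longer))
print(h1.shape, h2.shape)                     # both (1, 1, 128): the fixed-length vector h
```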

5.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

What layer is used to convert text into vectors in the encoder?
Softmax
ReLU
Embedding Layer
Pooling Layer

Answer explanation

The embedding layer maps discrete word indices into continuous vector space representations.
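
A minimal sketch of the lookup (PyTorch assumed; vocabulary size and vector dimension are illustrative):

```python
import torch
import torch.nn as nn

embed = nn.Embedding(num_embeddings=1000, embedding_dim=64)  # 1000-word vocabulary

ids = torch.tensor([[12, 7, 512]])  # three word indices
vectors = embed(ids)                # shape (1, 3, 64): one learned 64-d vector per word
```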

6.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

What is the function of the decoder in seq2seq?
Generate input text
Interpret vector h and generate output sequence
Evaluate gradients
Summarize input only

Answer explanation

The decoder uses the vector h from the encoder to generate the output sequence step by step.
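
A sketch of that step-by-step loop with greedy selection at each step (PyTorch assumed; the <bos> token id, loop length, and layer sizes are illustrative):

```python
import torch
import torch.nn as nn

vocab, embed_dim, hidden_dim = 1000, 64, 128
tgt_embed = nn.Embedding(vocab, embed_dim)
decoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
out_proj = nn.Linear(hidden_dim, vocab)

# Stand-in for the encoder's output: the fixed-length state (h, c).
state = (torch.zeros(1, 1, hidden_dim), torch.zeros(1, 1, hidden_dim))

token = torch.tensor([[1]])  # assumed <bos> start-of-sequence id
generated = []
for _ in range(10):          # generate up to 10 words
    out, state = decoder(tgt_embed(token), state)
    token = out_proj(out).argmax(dim=-1)  # most probable next word, fed back in
    generated.append(token.item())
```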

7.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

What is the purpose of padding in seq2seq?
To compress data for memory
To add noise to training data
To equalize the length of input and output sequences
To remove rare words

Answer explanation

Padding ensures all sequences have the same length for batch processing.
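
A short sketch using PyTorch's pad_sequence (assumed here; padding id 0 is illustrative):

```python
import torch
from torch.nn.utils.rnn import pad_sequence

seqs = [torch.tensor([4, 9, 2]), torch.tensor([7, 1]), torch.tensor([3])]
batch = pad_sequence(seqs, batch_first=True, padding_value=0)
# tensor([[4, 9, 2],
#         [7, 1, 0],
#         [3, 0, 0]])  -- every row padded to length 3, so they stack into one batch
```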
