
ML B2 CH7

Authored by Jhonston Benjumea

Computers

University

10 questions


1.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

What does an LSTM-based language model generate?

Probability of word translations
Probability distribution of the next word
Fixed-length sentence embeddings
Correct grammar for sentences

Answer explanation

The LSTM model generates a probability distribution for the next word in a sentence.
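A minimal sketch of this idea (not the chapter's actual implementation; the layer sizes and parameters here are hypothetical stand-ins): the LSTM's hidden state is mapped through an affine layer and a softmax, yielding a probability distribution over the whole vocabulary for the next word.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, hidden_size = 10, 8

# Hypothetical trained parameters of the output (affine) layer.
W_out = rng.normal(size=(hidden_size, vocab_size))
b_out = np.zeros(vocab_size)

def next_word_distribution(h):
    """Map an LSTM hidden state h to a probability
    distribution over the vocabulary via affine + softmax."""
    scores = h @ W_out + b_out
    exp = np.exp(scores - scores.max())   # numerically stable softmax
    return exp / exp.sum()

h = rng.normal(size=hidden_size)          # stand-in for an LSTM hidden state
p = next_word_distribution(h)             # shape (vocab_size,), sums to 1
```

Every entry of `p` is a probability for one vocabulary word, which is exactly what "generates a probability distribution of the next word" means.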

2.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

What is a deterministic method in sentence generation?

Randomly choosing any word
Sampling based on probability
Choosing the word with the highest probability
Shuffling the input words

Answer explanation

Deterministic generation selects the word with the highest probability at each step.
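The contrast between deterministic (greedy) and probabilistic (sampling) generation can be shown in two lines over a toy next-word distribution (the probabilities below are made up for illustration):

```python
import numpy as np

p = np.array([0.1, 0.6, 0.2, 0.1])   # hypothetical next-word distribution

# Deterministic ("greedy") generation: always pick the highest-probability word.
greedy_id = int(np.argmax(p))         # same result on every run

# Probabilistic generation: sample a word id according to p.
rng = np.random.default_rng(0)
sampled_id = int(rng.choice(len(p), p=p))   # can differ run to run
```

Greedy decoding always returns word id 1 here; sampling can return any of the four ids, weighted by their probabilities.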

3.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

What does the seq2seq model consist of?

Multiple attention layers
A Transformer and Decoder
An Encoder and a Decoder
Two separate RNNs without communication

Answer explanation

Seq2seq models consist of an Encoder that processes input and a Decoder that generates output.

4.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

What is the purpose of the encoder in seq2seq?

Generate translations
Convert a sequence into a fixed-length vector
Decode output text
Create random data

Answer explanation

The encoder transforms a variable-length input sequence into a fixed-length context vector.
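A minimal sketch of this behavior, assuming a plain RNN in place of the chapter's LSTM and randomly initialized weights: whatever the input length, the encoder's output is the final hidden state, a vector of one fixed size.

```python
import numpy as np

rng = np.random.default_rng(0)
D, H = 4, 5                          # embedding dim, hidden dim (hypothetical)

Wx = rng.normal(size=(D, H)) * 0.1
Wh = rng.normal(size=(H, H)) * 0.1
b = np.zeros(H)

def encode(xs):
    """Run a simple RNN over a variable-length sequence of
    embeddings xs of shape (T, D); return the final hidden state (H,)."""
    h = np.zeros(H)
    for x in xs:
        h = np.tanh(x @ Wx + h @ Wh + b)
    return h

h_short = encode(rng.normal(size=(3, D)))   # 3-step input sequence
h_long = encode(rng.normal(size=(9, D)))    # 9-step input sequence
# Both results are fixed-length vectors of shape (H,).
```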

5.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

What layer is used to convert text into vectors in the encoder?

Softmax
ReLU
Embedding Layer
Pooling Layer

Answer explanation

The embedding layer maps discrete word indices into continuous vector space representations.
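Concretely, an embedding layer is just a learnable matrix with one row per vocabulary word, and the "lookup" is row indexing. A tiny sketch with made-up sizes and ids:

```python
import numpy as np

vocab_size, embed_dim = 6, 3
rng = np.random.default_rng(0)

# The embedding layer's weight matrix: one row per word in the vocabulary.
W_embed = rng.normal(size=(vocab_size, embed_dim))

word_ids = np.array([4, 0, 2])    # hypothetical ids for three input words
vectors = W_embed[word_ids]       # lookup by row index, shape (3, embed_dim)
```

During training, gradients flow back only into the rows that were looked up, so each word's vector is learned from the contexts it appears in.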

6.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

What is the function of the decoder in seq2seq?

Generate input text
Interpret vector h and generate output sequence
Evaluate gradients
Summarize input only

Answer explanation

The decoder uses the vector h from the encoder to generate the output sequence step by step.
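A step-by-step decoding loop can be sketched as follows (a simple RNN cell with random weights stands in for the chapter's LSTM decoder; the token ids `BOS`/`EOS` are hypothetical): the encoder's vector h initializes the state, and each generated word is fed back in as the next input.

```python
import numpy as np

rng = np.random.default_rng(0)
V, D, H = 8, 4, 6          # vocab, embedding dim, hidden dim (hypothetical)

W_embed = rng.normal(size=(V, D)) * 0.1
Wx = rng.normal(size=(D, H)) * 0.1
Wh = rng.normal(size=(H, H)) * 0.1
W_out = rng.normal(size=(H, V)) * 0.1
BOS, EOS = 0, 1            # hypothetical begin/end-of-sequence ids

def decode(h, max_len=5):
    """Interpret the encoder's vector h and emit word ids one
    step at a time, feeding each output back in as the next input."""
    word_id, out = BOS, []
    for _ in range(max_len):
        x = W_embed[word_id]
        h = np.tanh(x @ Wx + h @ Wh)
        word_id = int(np.argmax(h @ W_out))   # greedy next-word choice
        if word_id == EOS:
            break
        out.append(word_id)
    return out

ids = decode(rng.normal(size=H))   # generated output sequence of word ids
```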

7.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

What is the purpose of padding in seq2seq?

To compress data for memory
To add noise to training data
To equalize the length of input and output sequences
To remove rare words

Answer explanation

Padding ensures all sequences have the same length for batch processing.
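A minimal padding helper (the pad id `-1` is an arbitrary choice for illustration): every sequence in the batch is right-padded to the length of the longest one, so the batch becomes a rectangular array.

```python
def pad_sequences(seqs, pad_id=-1):
    """Right-pad variable-length id sequences to a common length."""
    max_len = max(len(s) for s in seqs)
    return [s + [pad_id] * (max_len - len(s)) for s in seqs]

batch = pad_sequences([[5, 3], [7, 2, 9, 1], [4]])
# -> [[5, 3, -1, -1], [7, 2, 9, 1], [4, -1, -1, -1]]
```

Losses computed over padded positions are typically masked out so the pad id does not affect training.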
