C5M2

University

10 Qs



Assessment

Quiz

Information Technology (IT)

University

Medium

Created by

Abylai Aitzhanuly

Used 1+ times


10 questions


1.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

Suppose you learn a word embedding for a vocabulary of 10000 words. Then the embedding vectors should be 10000 dimensional, so as to capture the full range of variation and meaning in those words.

TRUE

FALSE
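A quick sketch (with illustrative numbers, assuming NumPy) of the point behind this question: the embedding dimension is chosen independently of the vocabulary size, and is typically far smaller.

```python
import numpy as np

# For a 10,000-word vocabulary, embeddings are typically 50-300
# dimensional -- NOT 10,000 dimensional. The small dense dimension is
# the whole point: it forces related words to share directions.
vocab_size = 10_000
embedding_dim = 300  # illustrative choice; 50-300 is common in practice

rng = np.random.default_rng(0)
E = rng.standard_normal((vocab_size, embedding_dim))  # one row per word

print(E.shape)  # (10000, 300)
```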

2.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

What is t-SNE?

A linear transformation that allows us to solve analogies on word vectors

A non-linear dimensionality reduction technique

A supervised learning algorithm for learning word embeddings

An open-source sequence modeling library
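A minimal sketch of the correct answer, assuming scikit-learn is installed and using random vectors as stand-in "word embeddings": t-SNE non-linearly maps high-dimensional vectors to 2-D for visualization; it is not a linear map, not a learner of embeddings, and not a sequence library.

```python
import numpy as np
from sklearn.manifold import TSNE  # assumes scikit-learn is available

# Toy stand-ins for word embeddings: 8 "words" in a 50-D space.
rng = np.random.default_rng(0)
embeddings = rng.standard_normal((8, 50))

# t-SNE reduces 50-D -> 2-D non-linearly so the words can be plotted.
# perplexity must be smaller than the number of samples.
tsne = TSNE(n_components=2, perplexity=3, init="pca", random_state=0)
coords = tsne.fit_transform(embeddings)

print(coords.shape)  # (8, 2)
```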

3.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

[Question shown as an image; text not recoverable]

TRUE

FALSE

4.

MULTIPLE SELECT QUESTION

45 sec • 1 pt

Which of these equations do you think should hold for a good word embedding? (Check all that apply)

e_boy − e_girl ≈ e_brother − e_sister

e_boy − e_girl ≈ e_sister − e_brother

e_boy − e_brother ≈ e_girl − e_sister

e_boy − e_brother ≈ e_sister − e_girl
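The intuition behind these analogy equations can be checked with hand-built toy vectors (purely illustrative, not trained embeddings): if gender is a consistent direction in embedding space, the difference e_boy − e_girl matches e_brother − e_sister, while swapping one side breaks the equality.

```python
import numpy as np

# Toy 2-D embeddings: dim 0 encodes "gender", dim 1 encodes "sibling word".
# Hand-built for illustration; real embeddings are learned, not designed.
e = {
    "boy":     np.array([ 1.0, 0.0]),
    "girl":    np.array([-1.0, 0.0]),
    "brother": np.array([ 1.0, 1.0]),
    "sister":  np.array([-1.0, 1.0]),
}

diff1 = e["boy"] - e["girl"]        # gender direction
diff2 = e["brother"] - e["sister"]  # same gender direction

print(np.allclose(diff1, diff2))                       # True
print(np.allclose(diff1, e["sister"] - e["brother"]))  # False: sign flipped
```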

5.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

Let E be an embedding matrix, and let o_1234 be a one-hot vector corresponding to word 1234. Then to get the embedding of word 1234, why don’t we call E * o_1234 in Python?

It is computationally wasteful.

The correct formula is E^T * o_1234.

This doesn’t handle unknown words (<UNK>).

None of the above: calling the Python snippet as described above is fine.
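A small sketch of the accepted answer ("it is computationally wasteful"), using a rows-are-embeddings layout as an assumption: multiplying by a one-hot vector merely selects one row of E but costs O(vocab × dim) multiply-adds, whereas direct indexing costs O(dim). This is why embedding layers index rather than multiply.

```python
import numpy as np

vocab_size, dim = 10_000, 50
rng = np.random.default_rng(0)
E = rng.standard_normal((vocab_size, dim))  # row i = embedding of word i

word_id = 1234
o = np.zeros(vocab_size)
o[word_id] = 1.0  # one-hot vector for word 1234

via_matmul = E.T @ o       # wasteful: touches every entry of E
via_indexing = E[word_id]  # what embedding layers actually do: O(dim)

print(np.allclose(via_matmul, via_indexing))  # True
```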

6.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

When learning word embeddings, we create an artificial task of estimating P(target∣context). It is okay if we do poorly on this artificial prediction task; the more important by-product of this task is that we learn a useful set of word embeddings.

TRUE

FALSE

7.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

In the word2vec algorithm, you estimate P(t∣c), where t is the target word and c is a context word. How are t and c chosen from the training set? Pick the best answer.

c is a sequence of several words immediately before t.

c is the one word that comes immediately before t.

c and t are chosen to be nearby words.

c is the sequence of all the words in the sentence before t.
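The accepted answer ("c and t are chosen to be nearby words") can be sketched as skip-gram-style window sampling. This is an illustrative simplification: names like `sample_pairs` and the window size are my own choices, and real word2vec implementations add subsampling and negative sampling.

```python
import random

def sample_pairs(tokens, window=2, seed=0):
    """For each context word c, pick a target t uniformly from the
    words within `window` positions of c -- 'nearby', not a fixed
    preceding sequence."""
    rng = random.Random(seed)
    pairs = []
    for i, c in enumerate(tokens):
        lo = max(0, i - window)
        hi = min(len(tokens), i + window + 1)
        candidates = [j for j in range(lo, hi) if j != i]
        pairs.append((c, tokens[rng.choice(candidates)]))
    return pairs

sentence = "the quick brown fox jumps".split()
pairs = sample_pairs(sentence)
for c, t in pairs:
    print(c, "->", t)
```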
