Data Science and Big Data Analytics (MA-462)

University

20 Qs

Similar activities

- DSBDA Quiz • University • 20 Qs
- What's the Matter & Weather With You Review • 8th Grade - University • 17 Qs
- Data • University • 15 Qs
- 10 in 60 • 8th Grade - University • 15 Qs
- APA Citation • University • 15 Qs
- Data • 5th Grade - University • 15 Qs
- Data • 6th Grade - University • 15 Qs
- Big Data Analytics • University • 15 Qs

Assessment • Quiz • Science • University • Easy

Created by Kuldeep Singh Jadon

20 questions

1.

MULTIPLE CHOICE QUESTION

45 sec • 1 pt

Which of the following best describes the Bag-of-Words (BoW) model?

It encodes semantic meaning of words in low-dimensional vectors.

It considers the order of words in a sentence.

It converts text into fixed-length vectors based on word occurrence.

It uses neural networks to learn word embeddings.
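The BoW idea behind this question can be sketched in a few lines of pure Python: every document becomes a fixed-length vector of word counts over a shared vocabulary, and word order is discarded. The function name and the toy documents here are illustrative, not from any library.

```python
from collections import Counter

def bag_of_words(docs):
    """Encode each document as a fixed-length vector of word counts.

    Word order is discarded; only occurrence counts survive, which is
    why every document maps to a vector of the same length (the
    vocabulary size).
    """
    vocab = sorted({word for doc in docs for word in doc.lower().split()})
    vectors = []
    for doc in docs:
        counts = Counter(doc.lower().split())
        vectors.append([counts[word] for word in vocab])
    return vocab, vectors

docs = ["the cat sat", "the cat sat on the mat"]
vocab, vectors = bag_of_words(docs)
# vocab:   ['cat', 'mat', 'on', 'sat', 'the']
# vectors: [[1, 0, 0, 1, 1], [1, 1, 1, 1, 2]]
```

Note that "the cat sat" and "sat the cat" would produce identical vectors, which is exactly the order-blindness the question probes.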

2.

MULTIPLE CHOICE QUESTION

45 sec • 1 pt

What is the primary role of the IDF (Inverse Document Frequency) component in TF-IDF?

To increase the weight of frequently occurring terms in all documents.

To reduce the weight of commonly used terms and highlight rarer terms.

To normalize term frequencies across sentences.

To generate low-dimensional word vectors.
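The IDF effect asked about here can be demonstrated with a minimal TF-IDF sketch (function name and toy corpus are illustrative): a term appearing in every document gets idf = log(N/N) = 0 and is suppressed, while rarer terms keep positive weight.

```python
import math

def tf_idf(docs):
    """Compute TF-IDF weights for pre-tokenized documents.

    idf(t) = log(N / df(t)): a term in every document gets
    idf = log(1) = 0, so common terms are down-weighted and rarer,
    more discriminative terms are boosted.
    """
    n = len(docs)
    df = {}
    for doc in docs:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    weights = []
    for doc in docs:
        tf = {t: doc.count(t) / len(doc) for t in set(doc)}
        weights.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return weights

docs = [["the", "cat"], ["the", "dog"], ["the", "bird"]]
weights = tf_idf(docs)
# "the" occurs in all 3 docs -> idf = log(3/3) = 0 -> weight 0.0
# "cat" occurs in 1 doc    -> idf = log(3)   > 0 -> positive weight
```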

3.

MULTIPLE CHOICE QUESTION

45 sec • 1 pt

In the context of Word2Vec, how is the vector for a word like "hat" learned?

Through frequency counting in documents.

By calculating TF and IDF values.

By using a neural network that learns to predict context words (Skip-Gram) or the target word (CBOW).

By averaging character-level embeddings.
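A sketch of the Skip-Gram setup this question refers to: training data is built as (target, context) pairs from a sliding window, and a shallow network is then trained to predict the context word from the target, with the hidden-layer weights becoming the word vectors (CBOW inverts the prediction direction). Only the pair generation is shown here; the function name is illustrative.

```python
def skipgram_pairs(tokens, window=2):
    """Generate (target, context) training pairs for Skip-Gram.

    A shallow neural network is trained to predict each context word
    from its target word; the learned hidden-layer weights serve as
    the word vectors. (CBOW instead predicts the target from context.)
    """
    pairs = []
    for i, target in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                pairs.append((target, tokens[j]))
    return pairs

tokens = ["the", "cat", "wore", "a", "hat"]
pairs = skipgram_pairs(tokens, window=1)
# [('the','cat'), ('cat','the'), ('cat','wore'), ('wore','cat'),
#  ('wore','a'), ('a','wore'), ('a','hat'), ('hat','a')]
```

So the vector for "hat" is shaped by the words it co-occurs with ("a", "wore", ...), not by raw frequency counts.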

4.

MULTIPLE CHOICE QUESTION

45 sec • 1 pt

Why is Word2Vec considered more "context-aware" than BoW or TF-IDF?

Because it stores the exact position of each word in the sentence.

Because it encodes words as binary strings.

Because it learns word meanings based on surrounding words during training.

Because it includes part-of-speech tags in the representation.
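One consequence of this context-based training can be illustrated with cosine similarity: words that appeared in similar contexts end up with similar vectors. The three vectors below are invented toy values, not learned embeddings; they only demonstrate the comparison.

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two word vectors; values near 1.0
    indicate the words occurred in similar contexts during training."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Toy vectors: "hat" and "cap" stand in for words learned from similar
# contexts, "planet" for a word from unrelated contexts.
hat = [0.9, 0.1, 0.2]
cap = [0.8, 0.2, 0.3]
planet = [-0.1, 0.9, -0.5]

cosine_similarity(hat, cap) > cosine_similarity(hat, planet)  # True
```

BoW and TF-IDF vectors cannot support this kind of comparison between words, since each word is just an independent dimension there.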

5.

MULTIPLE CHOICE QUESTION

45 sec • 1 pt

In the ID3 algorithm, which metric is used to select the best attribute at each node?

Mean Squared Error

Gini Index

Entropy

Information Gain
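The ID3 split criterion behind this question can be sketched directly from the definitions: information gain is the parent's entropy minus the weighted entropy of the children after splitting on an attribute. The attribute names and toy data are invented for illustration.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr):
    """Parent entropy minus the weighted entropy of the subsets
    produced by splitting on `attr` (ID3's split criterion)."""
    n = len(labels)
    children = 0.0
    for value in {row[attr] for row in rows}:
        subset = [lab for row, lab in zip(rows, labels) if row[attr] == value]
        children += (len(subset) / n) * entropy(subset)
    return entropy(labels) - children

# Toy data: "windy" perfectly separates the classes, "humid" does not.
rows = [{"windy": "y", "humid": "h"}, {"windy": "y", "humid": "l"},
        {"windy": "n", "humid": "h"}, {"windy": "n", "humid": "l"}]
labels = ["play", "play", "stay", "stay"]

information_gain(rows, labels, "windy")  # 1.0 -> chosen for the split
information_gain(rows, labels, "humid")  # 0.0 -> no useful information
```

The zero-gain case for "humid" also illustrates the next question: an attribute with zero information gain leaves the class distribution exactly as mixed as before the split.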

6.

MULTIPLE CHOICE QUESTION

45 sec • 1 pt

If an attribute results in zero information gain, what does it imply?

It has the highest entropy

It is the best attribute for splitting

It provides no useful information for classification

It must be the root node

7.

MULTIPLE CHOICE QUESTION

45 sec • 1 pt

What effect does pruning have on bias and variance?

Decreases both bias and variance

Increases bias, decreases variance

Decreases bias, increases variance

Has no impact on bias or variance
