
Classification for NLP

Authored by Vishal Chakravarthi

Mathematics

University




25 questions


1.

MULTIPLE CHOICE QUESTION

45 sec • 4 pts

A model achieves high accuracy on a test dataset, but a deeper analysis shows it performs poorly on the minority class. What does this indicate, and how should you evaluate the model instead?

The model is overfitting; use cross-validation.

The model has high recall; evaluate it using a confusion matrix.

The dataset is imbalanced; use metrics such as F1-score and ROC AUC instead.

The model is underfitting; try a different algorithm.
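The scenario in this question can be sketched with a toy example (the 95/5 class split and the always-majority model are illustrative assumptions, not from the quiz): a degenerate classifier reaches high accuracy on an imbalanced dataset while its F1 on the minority class is zero.

```python
# Sketch: why accuracy misleads on an imbalanced dataset.
# A model that always predicts the majority class (0) scores high
# accuracy yet never detects the minority class (1).
y_true = [0] * 95 + [1] * 5      # hypothetical 95%/5% class split
y_pred = [0] * 100               # degenerate model: always majority

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
precision = tp / (tp + fp) if tp + fp else 0.0
recall = tp / (tp + fn) if tp + fn else 0.0
f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0

print(accuracy)  # 0.95 — looks strong
print(f1)        # 0.0  — the minority class is never caught
```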

2.

MULTIPLE CHOICE QUESTION

45 sec • 3 pts

You’re working on a disease detection model where missing a positive case could have severe consequences. Which evaluation metric should you prioritize and why?

Precision, because false positives must be minimized.

Recall, because false negatives must be minimized.

F1-score, because it balances precision and recall.

Accuracy, because it reflects overall model performance.
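Recall, the metric this question asks about, can be computed directly from the positive-class counts (the numbers below are hypothetical, chosen only to illustrate the cost of false negatives):

```python
# Sketch: recall = TP / (TP + FN), the fraction of actual positives
# the model catches. In disease detection a false negative is a
# missed patient, so recall is the metric to prioritize.
tp, fn = 80, 20            # hypothetical: 100 sick patients, 20 missed
recall = tp / (tp + fn)
print(recall)              # 0.8 → 20% of actual cases go undetected
```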

3.

MULTIPLE CHOICE QUESTION

30 sec • 4 pts

Two models produce identical accuracy scores but have different confusion matrices. What can you infer from this?

Both models perform equally well.

Both models have the same recall and precision.

The models handle false positives and false negatives differently.

All of the above
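Two hypothetical confusion matrices (the counts are made up for illustration) show how identical accuracy can hide opposite error profiles:

```python
# Sketch: same accuracy, different confusion matrices.
# Counts are ordered (tn, fp, fn, tp).
def accuracy(tn, fp, fn, tp):
    return (tn + tp) / (tn + fp + fn + tp)

model_a = (50, 0, 10, 40)   # all 10 errors are false negatives
model_b = (40, 10, 0, 50)   # all 10 errors are false positives

print(accuracy(*model_a), accuracy(*model_b))  # both 0.9
```

Same headline number, but model_a misses positives while model_b raises false alarms — exactly the distinction the confusion matrix exposes.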

4.

MULTIPLE CHOICE QUESTION

30 sec • 2 pts

When is the F1-score a better evaluation metric than accuracy?

When the test data is balanced.

When both precision and recall are important.

When you want to optimize for true negatives.

When the model has high AUC-ROC.
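The F1-score is the harmonic mean of precision and recall, so it is dragged down by whichever of the two is weaker — a quick numeric sketch (the input values are illustrative):

```python
# Sketch: F1 as the harmonic mean of precision and recall.
def f1(precision, recall):
    return 2 * precision * recall / (precision + recall)

print(round(f1(0.9, 0.9), 2))  # 0.9  — both components strong
print(round(f1(0.9, 0.1), 2))  # 0.18 — punished for the weak recall
```

An arithmetic mean of 0.9 and 0.1 would report a misleading 0.5; the harmonic mean does not let one strong component mask a collapsed one.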

5.

MULTIPLE CHOICE QUESTION

45 sec • 4 pts

How does the ROC curve help in comparing models?

It shows the model’s accuracy at different thresholds.

It visualizes the trade-off between true positive rate and false positive rate.

It compares the number of true positives and true negatives.

It measures the correlation between actual and predicted probabilities.
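The ROC trade-off can be traced by hand: sweeping a decision threshold over predicted scores yields one (FPR, TPR) point per threshold (the scores and labels below are invented for illustration):

```python
# Sketch: each threshold over the model's scores gives one point
# (false positive rate, true positive rate) on the ROC curve.
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.1]
labels = [1,   1,   0,   1,   0,   0]

def roc_point(threshold):
    tp = sum(s >= threshold and y == 1 for s, y in zip(scores, labels))
    fp = sum(s >= threshold and y == 0 for s, y in zip(scores, labels))
    tpr = tp / sum(labels)                   # recall at this threshold
    fpr = fp / (len(labels) - sum(labels))   # false-alarm rate
    return fpr, tpr

for t in (0.85, 0.5, 0.2):
    print(t, roc_point(t))   # loosening the threshold raises both rates
```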

6.

MULTIPLE CHOICE QUESTION

45 sec • 4 pts

A spam detection system is tested using precision and recall. Precision is very high, but recall is low. What does this indicate about the model’s behavior?

The model predicts fewer spam emails but is highly confident in its predictions.

The model predicts most emails as spam, including false positives.

The model is balanced and performs well overall.

The model has a high rate of false positives and false negatives.
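The high-precision/low-recall behavior in this question can be sketched with a toy spam filter (the email IDs and counts are hypothetical): it flags only messages it is very sure about, so every flag is correct but most spam slips through.

```python
# Sketch: a conservative spam filter — perfect precision, poor recall.
actual_spam = {"e1", "e2", "e3", "e4", "e5"}
flagged     = {"e1"}                     # flags only one email, correctly

tp = len(flagged & actual_spam)
precision = tp / len(flagged)            # 1.0 — every flag is right
recall = tp / len(actual_spam)           # 0.2 — 80% of spam gets through
print(precision, recall)
```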

7.

MULTIPLE CHOICE QUESTION

45 sec • 6 pts

What makes text classification different from other types of classification tasks?

It may require specialized algorithms for complex classification tasks.

The input data is categorical rather than numerical.

The input data is unstructured and requires preprocessing like tokenization and embedding.

Both A and C

All of the above
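The preprocessing step this question refers to can be sketched with a toy bag-of-words pipeline (the vocabulary and sentences are invented for illustration): raw text must be tokenized and mapped to numeric features before any classifier can consume it, which is what sets text classification apart from tabular tasks.

```python
# Sketch: turning unstructured text into numeric features
# via tokenization and a toy bag-of-words representation.
def tokenize(text):
    return text.lower().split()

vocab = ["free", "money", "meeting", "tomorrow"]

def bag_of_words(text):
    tokens = tokenize(text)
    return [tokens.count(w) for w in vocab]   # count of each vocab word

print(bag_of_words("FREE money money now"))    # [1, 2, 0, 0]
print(bag_of_words("Meeting tomorrow at 10"))  # [0, 0, 1, 1]
```

Real systems replace this toy counting with TF-IDF vectors or learned embeddings, but the structural point is the same: the numeric representation is built, not given.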
