
AI Advanced 3

Authored by Dinh Hieu

Information Technology (IT)

University


9 questions


1.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

In supervised learning, the assumption that training and future data come from the same distribution is critical. Why is this assumption important?

It ensures the model never needs generalization

It allows the model to ignore training data entirely

It guarantees that overfitting improves performance

If the future data distribution matches the training data distribution, learned patterns are likely to generalize effectively
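Not part of the quiz itself — a toy, dependency-free Python sketch of why the matching-distribution assumption matters. The data, class means, and midpoint-threshold "model" are all invented for illustration: a threshold learned on one distribution is scored on a matched test set and on a shifted one.

```python
import random

random.seed(0)

def sample(mean, n):
    # 1-D Gaussian samples for one class (illustrative toy data)
    return [random.gauss(mean, 0.5) for _ in range(n)]

# Training data: class 0 centred at 0.0, class 1 centred at 2.0
train = [(x, 0) for x in sample(0.0, 200)] + [(x, 1) for x in sample(2.0, 200)]

# "Learn" the midpoint threshold between the two class means
m0 = sum(x for x, y in train if y == 0) / 200
m1 = sum(x for x, y in train if y == 1) / 200
threshold = (m0 + m1) / 2

def acc(data):
    return sum((x > threshold) == (y == 1) for x, y in data) / len(data)

# Test set from the SAME distribution: the learned threshold generalizes well
same = [(x, 0) for x in sample(0.0, 200)] + [(x, 1) for x in sample(2.0, 200)]

# Test set from a SHIFTED distribution: both class means moved down by 1.5
shifted = [(x, 0) for x in sample(-1.5, 200)] + [(x, 1) for x in sample(0.5, 200)]

print(acc(same) > acc(shifted))  # the learned pattern breaks under the shift
```

When the test distribution matches training, accuracy stays high; once it shifts, the very same learned threshold misclassifies most of the moved class — exactly the failure the assumption rules out.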

2.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

Overfitting in a supervised learning model occurs when:

The model’s complexity is lower than necessary

The model generalizes well to unseen data

The model ignores training data patterns

The model fits noise and peculiarities of the training set too closely, harming its performance on new data
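A small sketch (not from the quiz) of what "fitting noise too closely" looks like in practice. The setup is invented: labels on a 1-D threshold problem are flipped with 15% noise, and a 1-nearest-neighbour model that memorizes every training point is compared against a simple fixed-threshold model that cannot fit the noise.

```python
import random

random.seed(1)

def make_data(n, noise=0.15):
    """x in [0,1]; true label = 1 if x > 0.5, with some labels flipped (noise)."""
    data = []
    for _ in range(n):
        x = random.random()
        y = int(x > 0.5)
        if random.random() < noise:
            y = 1 - y  # label noise
        data.append((x, y))
    return data

train, test = make_data(500), make_data(500)

def nn1(x):
    # 1-nearest-neighbour: memorizes the training set, noise included
    return min(train, key=lambda p: abs(p[0] - x))[1]

def simple(x):
    # Simple threshold model: too rigid to fit the noisy labels
    return int(x > 0.5)

def acc(model, data):
    return sum(model(x) == y for x, y in data) / len(data)

print(acc(nn1, train))    # perfect on training data — every noisy label fitted
print(acc(nn1, test))     # worse on unseen data than...
print(acc(simple, test))  # ...the simpler model that ignored the noise
```

The memorizer scores 100% on its own training set yet loses to the simpler model on fresh data — overfitting in miniature.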

3.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

Decision trees are prone to overfitting. How do techniques like pruning address this issue?

By making the tree more complex

By ignoring data attributes entirely

By preventing any splits from being made

By removing branches that do not significantly improve predictive accuracy on validation data, thus improving generalization
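An illustrative sketch (not from the quiz) of reduced-error pruning on a hand-built toy tree. It is simplified in one important way: each subtree is scored on the full validation set rather than only on the examples routed to it, and the replacement leaf is a single global majority class — a real pruner would do both per subtree.

```python
def predict(node, x):
    # A node is either an int class label (leaf) or (feature, threshold, left, right)
    while not isinstance(node, int):
        feat, thr, left, right = node
        node = left if x[feat] <= thr else right
    return node

def accuracy(node, data):
    return sum(predict(node, x) == y for x, y in data) / len(data)

def prune(node, val, majority):
    """Replace a subtree with a majority-class leaf whenever that does not
    reduce accuracy on held-out validation data (simplified sketch)."""
    if isinstance(node, int):
        return node
    feat, thr, left, right = node
    node = (feat, thr, prune(left, val, majority), prune(right, val, majority))
    if accuracy(majority, val) >= accuracy(node, val):
        return majority  # the branch added no validation accuracy: cut it
    return node

# Toy tree whose second split (on feature 1) only fitted training noise
tree = (0, 0.5, 0, (1, 0.5, 1, 0))
val = [((0.2, 0.9), 0), ((0.8, 0.1), 1), ((0.9, 0.9), 1), ((0.7, 0.3), 1)]

pruned = prune(tree, val, majority=1)
print(accuracy(tree, val), accuracy(pruned, val))  # pruning improves validation accuracy
```

The noisy branch hurt validation accuracy, so pruning collapses it to a leaf; the genuinely predictive split on feature 0 survives because removing it would cost accuracy.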

4.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

Precision and recall are favored metrics over accuracy in certain situations. In which scenario is this especially true?

When class distributions are even and all errors have equal cost

When no data is labeled

When accuracy alone reflects all necessary performance aspects

When dealing with imbalanced classes or when certain error types are more critical, making a single accuracy value insufficient
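A quick worked example (counts invented for illustration) of how accuracy misleads on imbalanced classes while precision and recall do not:

```python
def metrics(tp, fp, fn, tn):
    """Compute accuracy, precision, recall from confusion-matrix counts."""
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return accuracy, precision, recall

# 10 real positives among 1000 cases.
# Classifier A predicts "negative" for everything:
print(metrics(tp=0, fp=0, fn=10, tn=990))   # accuracy 0.99, precision 0.0, recall 0.0

# Classifier B gives up a little accuracy but actually finds positives:
print(metrics(tp=8, fp=40, fn=2, tn=950))   # accuracy 0.958, recall 0.8
```

Classifier A's 99% accuracy hides that it never detects a single positive — precisely the situation (rare but critical positives) where precision and recall are the metrics that matter.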

5.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

Cross-validation provides a more reliable estimate of a model’s performance than a single holdout set. Why?

It uses the same single split repeatedly

It relies on no test data

It ensures that training and testing sets never vary

It averages performance across multiple folds, reducing the influence of any particular data split’s peculiarities
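A minimal k-fold sketch (not from the quiz) showing the averaging over folds. The data and the midpoint-threshold classifier inside `evaluate` are invented toys; the fold-splitting and score-averaging are the point.

```python
import random
import statistics

random.seed(2)

# Toy 1-D two-class data: class y is centred at 2*y
data = [(random.gauss(2 * y, 1.0), y) for y in (0, 1) for _ in range(50)]
random.shuffle(data)

def evaluate(train, test):
    # Midpoint-threshold classifier learned on the training folds
    m0 = statistics.mean(x for x, y in train if y == 0)
    m1 = statistics.mean(x for x, y in train if y == 1)
    thr = (m0 + m1) / 2
    return sum((x > thr) == (y == 1) for x, y in test) / len(test)

def k_fold_scores(data, k):
    """Each of the k folds serves exactly once as the held-out test set."""
    scores = []
    for i in range(k):
        test = data[i::k]                                   # fold i held out
        train = [d for j, d in enumerate(data) if j % k != i]
        scores.append(evaluate(train, test))
    return scores

scores = k_fold_scores(data, 5)
print(scores)                    # per-fold scores vary with the split...
print(statistics.mean(scores))   # ...the average smooths out that variation
```

The individual fold scores differ because each split is somewhat lucky or unlucky; averaging them yields an estimate that no single split's peculiarities dominate.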

6.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

Support Vector Machines (SVMs) rely on maximizing margins between classes. In complex datasets where linear separability is not possible, how do SVMs adapt?

They fail to classify the data correctly

They remove all complex features

They rely solely on linear kernels

They use kernel functions to project data into higher-dimensional feature spaces, enabling nonlinear decision boundaries
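A sketch (not from the quiz) of the idea behind kernels, using XOR-like data and a hand-picked feature map. Note it shows only the higher-dimensional-projection idea, not the margin-maximization part of an SVM; the map `phi` and matching kernel are chosen for illustration.

```python
# XOR-like data: no line in the 2-D plane separates the two classes...
points = [((1, 1), 0), ((-1, -1), 0), ((1, -1), 1), ((-1, 1), 1)]

# ...but the feature map phi(x) = (x1, x2, x1*x2) lifts it to 3-D,
# where the plane z3 = 0 separates the classes perfectly.
def phi(x):
    return (x[0], x[1], x[0] * x[1])

for x, y in points:
    assert (phi(x)[2] < 0) == (y == 1)  # linear separation in the lifted space

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

# A kernel returns inner products in the lifted space WITHOUT building phi:
def kernel(a, b):
    # algebraically equal to dot(phi(a), phi(b)) for the phi above
    return a[0] * b[0] + a[1] * b[1] + (a[0] * b[0]) * (a[1] * b[1])

a, b = (1, -1), (-1, 1)
print(kernel(a, b) == dot(phi(a), phi(b)))  # same value, no explicit lifting
```

This is the "kernel trick": the SVM only ever needs inner products between points, so a kernel function lets it work in a high- (even infinite-) dimensional feature space while computing only in the original one.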

7.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

k-Nearest Neighbor (kNN) is a lazy learner. What is the main trade-off of its simplicity?

kNN requires complicated training but makes classification instant

kNN never achieves high accuracy

kNN cannot handle multiple classes

kNN is simple and requires no training time, but classification can be slow because it searches the entire training set at query time
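A minimal kNN classifier (toy data invented for illustration) that makes the trade-off visible: there is no training step at all, but every single prediction scans the entire training set.

```python
import math
from collections import Counter

def knn_predict(train, x, k=3):
    """Classify x by majority vote among its k nearest training points.
    No training phase exists: all work happens here, at query time,
    with a full O(n) scan of the stored training set."""
    nearest = sorted(train, key=lambda p: math.dist(p[0], x))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# "Training" is just storing the data
train = [((0, 0), 'a'), ((0, 1), 'a'), ((1, 0), 'a'),
         ((5, 5), 'b'), ((5, 6), 'b'), ((6, 5), 'b')]

print(knn_predict(train, (0.5, 0.5)))  # 'a'
print(knn_predict(train, (5.5, 5.5)))  # 'b'
```

Storing the data costs nothing, which is why kNN is called lazy — but with millions of stored examples this per-query scan becomes the bottleneck, which is the trade-off the answer describes (index structures such as k-d trees mitigate it).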
