Classification day 2

Assessment

Quiz

Created by

Patrycja Sawicka

Mathematics, Science

1st - 5th Grade

2 plays

Medium

10 questions

1.

MULTIPLE CHOICE QUESTION

2 mins • 1 pt

What does generalization mean in the context of an SVM?

How far the hyperplane is from the support vectors

How accurately the SVM can predict outcomes for unseen data

The threshold amount of error in an SVM

2.

MULTIPLE CHOICE QUESTION

2 mins • 1 pt

In an SVM, as the C parameter increases:

Margin will be smaller

Margin will be bigger

The C parameter does not affect the margin
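For intuition, the relationship can be sketched in plain Python: the geometric margin of a linear SVM is 2 / ||w||, and a larger C penalizes slack more heavily, pushing the optimizer toward a larger ||w|| and hence a smaller margin. The weight vectors below are hypothetical, not fitted by any solver.

```python
import math

def margin(w):
    """Geometric margin of a linear SVM with weight vector w: 2 / ||w||."""
    return 2.0 / math.sqrt(sum(wi * wi for wi in w))

# Hypothetical solutions (illustrative only): a large C tolerates a
# larger ||w||, which shrinks the margin.
w_small_C = [0.5, 0.5]
w_large_C = [2.0, 2.0]

assert margin(w_large_C) < margin(w_small_C)
```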

3.

MULTIPLE CHOICE QUESTION

2 mins • 1 pt

The effectiveness of an SVM depends upon:

Selection of Kernel

Kernel Parameters

Soft Margin Parameter C

All

4.

MULTIPLE CHOICE QUESTION

2 mins • 1 pt

Support vectors are the data points that lie closest to the decision surface.

TRUE

FALSE
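The statement can be checked numerically with the point-to-hyperplane distance formula: the points at minimum distance from the decision surface are the support vectors. The hyperplane and points below are made up for illustration.

```python
import math

def distance_to_hyperplane(x, w, b):
    """Perpendicular distance from point x to the hyperplane w.x + b = 0."""
    dot = sum(wi * xi for wi, xi in zip(w, x))
    return abs(dot + b) / math.sqrt(sum(wi * wi for wi in w))

# Toy 2-D points and a hypothetical separating hyperplane x1 + x2 - 3 = 0.
points = [(0.0, 0.0), (1.0, 1.5), (4.0, 4.0), (2.0, 2.0)]
w, b = [1.0, 1.0], -3.0

# The closest point would act as a support vector for this hyperplane.
closest = min(points, key=lambda p: distance_to_hyperplane(p, w, b))
assert closest == (1.0, 1.5)
```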

5.

MULTIPLE CHOICE QUESTION

2 mins • 1 pt

What would happen if you used a very small C (C ≈ 0)?

Misclassification would happen

Data will be correctly classified

Can’t say

None of these

6.

MULTIPLE CHOICE QUESTION

2 mins • 1 pt

If I am using all features of my dataset and I achieve 100% accuracy on my training set, but ~70% on the validation set, what should I look out for?

Underfitting

Nothing, the model is perfect

Overfitting
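The train/validation gap in the question can be turned into a crude diagnostic. The thresholds below are arbitrary assumptions for illustration, not established cutoffs.

```python
def diagnose(train_acc, val_acc, gap_threshold=0.1):
    """Crude heuristic: a large train/validation gap suggests overfitting;
    low accuracy on both suggests underfitting. Thresholds are arbitrary."""
    if train_acc - val_acc > gap_threshold:
        return "overfitting"
    if train_acc < 0.7:  # arbitrary cutoff for illustration
        return "underfitting"
    return "looks reasonable"

# The scenario from the question: 100% train, ~70% validation.
assert diagnose(1.00, 0.70) == "overfitting"
```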

7.

MULTIPLE CHOICE QUESTION

2 mins • 1 pt

In simple linear regression, if you increase the input value by 1, the output value changes by:

The slope parameter

The intercept parameter

None

1
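This can be verified directly: for y = slope · x + intercept, stepping x by 1 changes y by exactly the slope, regardless of x or the intercept. The parameter values below are arbitrary.

```python
def predict(x, slope=2.5, intercept=1.0):
    """Simple linear regression prediction: y = slope * x + intercept.
    Parameter values are arbitrary, for illustration."""
    return slope * x + intercept

x = 4.0
delta = predict(x + 1) - predict(x)
# The change equals the slope parameter.
assert abs(delta - 2.5) < 1e-9
```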

8.

MULTIPLE CHOICE QUESTION

2 mins • 1 pt

The kernel trick in support vector classification is:

Implicitly mapping low-dimensional data to a high-dimensional space

Used for feature selection

None of these
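A minimal sketch of the trick, assuming a degree-2 polynomial kernel on 2-D inputs: the kernel evaluated in the original space equals an inner product in a higher-dimensional feature space, so that space never has to be constructed explicitly.

```python
import math

def phi(x):
    """Explicit feature map for the degree-2 polynomial kernel on 2-D input:
    (x1^2, sqrt(2)*x1*x2, x2^2)."""
    x1, x2 = x
    return (x1 * x1, math.sqrt(2) * x1 * x2, x2 * x2)

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def poly_kernel(x, z):
    """k(x, z) = (x . z)^2 -- computed entirely in the original 2-D space."""
    return dot(x, z) ** 2

x, z = (1.0, 2.0), (3.0, 0.5)
# Same value either way: the kernel gives the high-dimensional inner
# product without ever building phi explicitly -- that is the "trick".
assert abs(poly_kernel(x, z) - dot(phi(x), phi(z))) < 1e-9
```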

9.

MULTIPLE SELECT QUESTION

2 mins • 1 pt

Which of the following machine learning algorithms can be used for both classification and regression problems?

K-NN

SVM

Logistic Regression 

10.

MULTIPLE CHOICE QUESTION

2 mins • 1 pt

Which of the following statements is true for k-NN classifiers?

k is not a hyperparameter of the algorithm

The decision boundary is smoother with smaller values of k

The decision boundary is linear

k-NN does not require an explicit training step
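The last option can be illustrated with a minimal k-NN classifier in plain Python: there is no fitting step, only stored points ranked by distance at query time. The data and the choice k=3 are arbitrary.

```python
import math
from collections import Counter

def knn_predict(query, data, labels, k=3):
    """k-NN classification: no explicit training step -- all work happens
    at query time by ranking the stored points by distance to the query."""
    nearest = sorted(range(len(data)), key=lambda i: math.dist(query, data[i]))
    votes = Counter(labels[i] for i in nearest[:k])
    return votes.most_common(1)[0][0]

# Two toy clusters; the "model" is just the stored data itself.
data = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
labels = ["a", "a", "a", "b", "b", "b"]

assert knn_predict((0.5, 0.5), data, labels) == "a"
assert knn_predict((5.5, 5.5), data, labels) == "b"
```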
