STINTSY Quiz 2

University

5 Qs

Similar activities

ILT-ML-03-AT • University • 7 Qs

Bangkit - Introduction Machine Learning • University • 7 Qs

Machine Learning Quiz • University • 7 Qs

Classification • University • 10 Qs

Logistic Regression • University • 10 Qs

Log Regression • University • 10 Qs

Session 5 | U • University • 10 Qs

10 Questions of Machine Learning • University • 10 Qs

STINTSY Quiz 2

Assessment

Quiz

Computers

University

Medium

Created by Bryant Lee

Used 7+ times

5 questions

1.

MULTIPLE CHOICE QUESTION

1 min • 1 pt

In evaluating the performance of a linear regression model, which metric is more commonly used when you want to minimize the impact of large errors and also retain the original scale of the target variable?

Mean Sum of Squared Error

Coefficient of Determination

Root Mean Squared Error

Mean Absolute Error
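
For reference, a minimal sketch (made-up numbers, assuming NumPy) of how RMSE is computed from MSE and why it returns to the target's original units:

```python
import numpy as np

y_true = np.array([3.0, 5.0, 7.5])   # hypothetical actual target values
y_pred = np.array([2.5, 5.5, 9.0])   # hypothetical model predictions

mse = np.mean((y_true - y_pred) ** 2)   # in squared units of the target
rmse = np.sqrt(mse)                     # back in the target's original units
print(mse, rmse)
```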

2.

MULTIPLE CHOICE QUESTION

1 min • 1 pt

In the context of supervised learning with linear regression, what is the primary objective when fitting a model to the training data?

Maximize the number of features in the model

Minimize the differences between predicted and actual target values

Ensure that the model perfectly fits the training data

Maximize the R-squared value regardless of other metrics
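
As a sketch of what "fitting" means here (made-up data, assuming NumPy), the example below solves ordinary least squares and reports the mean squared difference between predictions and targets, which is the quantity being minimized:

```python
import numpy as np

X = np.array([[1.0], [2.0], [3.0], [4.0]])   # single made-up feature
y = np.array([2.1, 3.9, 6.2, 8.1])           # made-up targets

X_b = np.hstack([np.ones((len(X), 1)), X])       # add an intercept column
w, *_ = np.linalg.lstsq(X_b, y, rcond=None)      # least-squares weights
loss = np.mean((X_b @ w - y) ** 2)               # objective being minimized
print(w, loss)
```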

3.

MULTIPLE CHOICE QUESTION

1 min • 1 pt

In the context of training a linear regression model using gradient descent, what is the potential consequence of choosing a learning rate that is too high?

The model converges too slowly

The model converges to the optimal solution faster

The model performs better on unseen data

The model may fail to converge and oscillate around the minimum
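
A small illustration (a hypothetical quadratic loss, not taken from the quiz) of how an oversized learning rate makes plain gradient descent overshoot and oscillate instead of converging:

```python
# Gradient descent on f(w) = (w - 3)^2, whose minimum is at w = 3.
def run_gd(lr, steps=10, w=0.0):
    history = [w]
    for _ in range(steps):
        grad = 2 * (w - 3.0)    # derivative of (w - 3)^2
        w = w - lr * grad
        history.append(w)
    return history

print(run_gd(lr=0.1))   # small rate: moves smoothly toward w = 3
print(run_gd(lr=1.1))   # too large: overshoots and oscillates with growing amplitude
```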

4.

MULTIPLE CHOICE QUESTION

1 min • 1 pt

What is a key difference between stochastic gradient descent (SGD) and mini-batch gradient descent in the context of training a linear regression model?

SGD uses all training data at once, while mini-batch only uses a single sample

SGD updates weights after each data point, while mini-batch updates weights after a small subset of data

Mini-batch gradient descent is faster but less accurate than SGD

Mini-batch gradient descent always converges to the global minimum, while SGD does not
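
A rough sketch (synthetic data, assumed learning rate and batch size) contrasting the two update schedules: SGD performs one weight update per data point, while mini-batch gradient descent updates after each small subset:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=100)

def grad(w, Xb, yb):
    # Gradient of mean squared error on the given batch.
    return 2 * Xb.T @ (Xb @ w - yb) / len(yb)

w_sgd = np.zeros(3)
for i in range(len(X)):                  # SGD: update after each single sample
    w_sgd -= 0.01 * grad(w_sgd, X[i:i+1], y[i:i+1])

w_mb = np.zeros(3)
for start in range(0, len(X), 20):       # mini-batch: update after 20 samples
    w_mb -= 0.01 * grad(w_mb, X[start:start+20], y[start:start+20])

print(w_sgd, w_mb)
```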

5.

MULTIPLE CHOICE QUESTION

1 min • 1 pt

Which of the following feature preprocessing steps is typically important before applying linear regression?

Scaling or normalizing numerical features

Adding polynomial features to increase dimensionality

Removing outliers without any consideration

Encoding continuous features as categorical
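
A minimal sketch (made-up feature matrix) of standardizing numerical features to zero mean and unit variance before fitting, so features on very different scales contribute comparably during training:

```python
import numpy as np

X = np.array([[1200.0, 3.0],
              [1500.0, 4.0],
              [ 900.0, 2.0]])    # made-up features on very different scales

mean = X.mean(axis=0)
std = X.std(axis=0)
X_scaled = (X - mean) / std      # each column now has mean 0 and std 1
print(X_scaled)
```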