
Project 2 Midterm Prep Quiz

Authored by Emily Anne

Computers

University


66 questions


1.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

What does convergence mean in the context of gradient descent?

When the loss function becomes exactly zero

When the learning rate becomes constant

When the optimization process stops improving significantly

When weights increase indefinitely
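
For review, a minimal Python sketch of how convergence is often detected in practice: the loop stops once the loss is no longer improving by more than a small tolerance. The loss function, learning rate, and tolerance below are illustrative.

def loss(w):
    return (w - 3.0) ** 2          # simple quadratic loss with its minimum at w = 3

def grad(w):
    return 2.0 * (w - 3.0)         # derivative of the loss

w, lr, tol = 0.0, 0.1, 1e-8
prev = loss(w)
for step in range(10_000):
    w -= lr * grad(w)              # gradient descent update
    curr = loss(w)
    if abs(prev - curr) < tol:     # loss has stopped improving significantly
        break
    prev = curr

print(f"stopped at step {step}, w = {w:.4f}")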

2.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

What is a global minimum in an optimization problem?

The lowest value in a local region of the loss function

The average of all local minimums

The highest point on the cost curve

The absolute lowest value of the loss function across all parameter space
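
For review, a minimal NumPy sketch of the idea: the global minimum is the single lowest loss value over the entire parameter space, approximated here by scanning a dense grid of a non-convex loss. The loss function and grid range are illustrative.

import numpy as np

def loss(w):
    return np.sin(3 * w) + 0.5 * w ** 2   # non-convex: several dips, only one is lowest

ws = np.linspace(-4, 4, 100_001)          # dense grid over the parameter space
vals = loss(ws)
w_star = ws[np.argmin(vals)]              # location of the absolute lowest loss
print(f"approximate global minimum: w = {w_star:.3f}, loss = {vals.min():.3f}")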

3.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

What is a local minimum?

A temporary value reached before the global minimum

A point where the gradient is zero, but not the absolute minimum

A saddle point

The lowest value that gradient descent can never reach
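
For review, a minimal NumPy sketch of getting stuck: started in the wrong valley of the same non-convex loss, gradient descent settles where the gradient is near zero even though a lower valley exists elsewhere. The loss function, learning rate, and starting point are illustrative.

import numpy as np

def loss(w):
    return np.sin(3 * w) + 0.5 * w ** 2

def grad(w):
    return 3 * np.cos(3 * w) + w

w = 2.5                      # initialization near a shallower valley
for _ in range(5_000):
    w -= 0.01 * grad(w)      # descent settles at a nearby local minimum

# the lower (global) valley near w = -0.5 is never reached from this start
print(f"settled at w = {w:.3f}, loss = {loss(w):.3f}, gradient = {grad(w):.2e}")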

4.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

Which statement best describes L1 regularization (Lasso)?

It adds the absolute value of coefficients as a penalty term to the loss function.

It adds the square of coefficients as a penalty term to the loss function.

It reduces the learning rate during training.

It increases the number of features in the model.
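
For review, a minimal NumPy sketch of an L1 (Lasso) penalty added to a base loss; the base loss, coefficient vector, and strength lam below are illustrative.

import numpy as np

def l1_regularized_loss(y_true, y_pred, coefs, lam=0.1):
    mse = np.mean((y_true - y_pred) ** 2)      # base loss
    penalty = lam * np.sum(np.abs(coefs))      # sum of absolute coefficient values
    return mse + penalty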

5.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

What is the purpose of regularization in machine learning?

To reduce training time

To increase model complexity

To reduce overfitting

To eliminate outliers from the dataset
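
For review, a short sketch assuming scikit-learn is available: the same high-degree polynomial model is fit with and without a Ridge penalty on noisy synthetic data, and the regularized fit usually generalizes better to held-out points. The degree, alpha, and data below are illustrative.

import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(-3, 3, 40)).reshape(-1, 1)
y = np.sin(X).ravel() + rng.normal(scale=0.3, size=40)   # noisy target
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

for name, model in [("unregularized", LinearRegression()), ("ridge", Ridge(alpha=1.0))]:
    pipe = make_pipeline(PolynomialFeatures(degree=12), model).fit(X_tr, y_tr)
    # the regularized model typically shows the lower held-out error here
    print(name, "test MSE:", round(mean_squared_error(y_te, pipe.predict(X_te)), 3))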

6.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

Which regularization method uses the square of the coefficients?

Lasso

Ridge

Elastic Net

None of the above
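
For review, a minimal NumPy sketch of an L2 (Ridge) penalty, which uses the squared coefficients rather than their absolute values; names and values below are illustrative.

import numpy as np

def l2_regularized_loss(y_true, y_pred, coefs, lam=0.1):
    mse = np.mean((y_true - y_pred) ** 2)        # base loss
    penalty = lam * np.sum(np.square(coefs))     # sum of squared coefficients
    return mse + penalty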

7.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

What is Elastic Net regularization?

A method that switches between L1 and L2 during training

A combination of L1 and L2 regularization

A regularization that focuses on maximizing entropy

A method used to increase recall
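
For review, a minimal NumPy sketch of an Elastic Net style penalty, which mixes the L1 and L2 terms with a ratio (similar in spirit to scikit-learn's l1_ratio); the parameter names and values below are illustrative.

import numpy as np

def elastic_net_loss(y_true, y_pred, coefs, lam=0.1, l1_ratio=0.5):
    mse = np.mean((y_true - y_pred) ** 2)    # base loss
    l1 = np.sum(np.abs(coefs))               # L1 (Lasso) part
    l2 = np.sum(np.square(coefs))            # L2 (Ridge) part
    return mse + lam * (l1_ratio * l1 + (1 - l1_ratio) * l2)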
