Intro to ML: The ML Revision Quiz

University

11 Qs

Similar activities

Neural Networks Quiz

University

10 Qs

AI Bootcamp [Quiz - 2]

University

10 Qs

Introduction to Deep Learning

University

10 Qs

AI Quiz Part 1

University

13 Qs

Gradient Descent

University

7 Qs

Intro to ML: Neural Networks Lecture 1 Part 2

University

6 Qs

Deep Learning

University

12 Qs

Intro to ML: Neural Networks Lecture 1 Part 1

University

6 Qs

Intro to ML: The ML Revision Quiz

Assessment

Quiz

Computers

University

Hard

Created by Josiah Wang

Used 20+ times

11 questions

1.

MULTIPLE CHOICE QUESTION

1 min • 1 pt

If we predict every observation to be True, what will our model precision be?

100%

0%

The proportion of True values in the dataset

Not enough information

Answer explanation

Think of all the false positives: when every observation is predicted True, precision = TP / (TP + FP) reduces to the proportion of True values in the dataset, since every False observation becomes a false positive.
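To see this concretely, here is a minimal sketch with made-up toy labels (not from the quiz):

```python
# Toy example: precision when every observation is predicted True.
labels = [True, False, True, False, False]   # hypothetical ground truth
preds = [True] * len(labels)                 # predict everything True

tp = sum(y and p for y, p in zip(labels, preds))        # true positives
fp = sum((not y) and p for y, p in zip(labels, preds))  # false positives
precision = tp / (tp + fp)
print(precision)  # 0.4 == proportion of True labels (2 out of 5)
```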

2.

MULTIPLE SELECT QUESTION

1 min • 1 pt

James, Amelia, and George are participating in a machine learning competition. They have to choose an algorithm for their project. Select which of the following algorithms they should consider if they want to use eager learners:

K-nearest neighbours

Decision trees

Neural networks

Linear regression

Answer explanation

Recall that K-nn is a lazy learner: at training time the algorithm simply stores the training data, so no calculation or fitting occurs. It is not until inference time that the algorithm finds the K stored points nearest to the unseen datapoint in question. Because all computation happens at inference time, the method is called lazy rather than eager.
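A minimal sketch of this behaviour, using a hypothetical 1-nearest-neighbour class (names are illustrative):

```python
import math

# "Lazy" 1-NN: fit() only memorises the data; all distance
# computation is deferred to predict() time.
class LazyOneNN:
    def fit(self, X, y):
        self.X, self.y = X, y   # no training computation at all
        return self

    def predict(self, x):
        # All the work happens here, at inference time.
        dists = [math.dist(x, xi) for xi in self.X]
        return self.y[dists.index(min(dists))]

clf = LazyOneNN().fit([[0, 0], [5, 5]], ["a", "b"])
print(clf.predict([1, 1]))  # "a"
```

An eager learner such as a decision tree or neural network does the opposite: it spends its computation at training time building a model, so inference is then cheap.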

3.

MULTIPLE SELECT QUESTION

1 min • 1 pt

Which of the following statements are True:

Performance on the validation set can be used to see if a model is overfitting to the training data

We cannot tell from the training performance alone if a model is overfitting or not

Underfitting implies better generalisation to other datasets

Answer explanation

Underfitting occurs when the model lacks the capacity to fit the underlying pattern/trend of the data. A model that underfits its training set will perform no better on unseen data, so underfitting does not imply better generalisation.
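The first two statements concern how overfitting is detected in practice: a gap that widens between training and validation error. A sketch with made-up error values:

```python
# Illustrative (made-up) learning curves: training error keeps falling
# while validation error turns upward - the classic overfitting signature.
# Training error alone looks great and reveals nothing.
train_err = [0.30, 0.15, 0.05, 0.01]
val_err   = [0.32, 0.20, 0.22, 0.35]

for epoch, (tr, va) in enumerate(zip(train_err, val_err), start=1):
    flag = "  <- possible overfitting" if va - tr > 0.10 else ""
    print(f"epoch {epoch}: train={tr:.2f}  val={va:.2f}{flag}")
```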

4.

MULTIPLE SELECT QUESTION

1 min • 1 pt

Scarlett is working on a machine learning project and she is worried about underfitting. Which of the following actions may cause underfitting in her model?

Reducing the max. depth of a decision tree

Increasing the value of K in K-nn

Adding more layers to a neural network

Increasing the size of the training data

Increasing the value of K in K-means

Answer explanation

Underfitting occurs when the model lacks the capacity to fit the underlying trend/pattern of the data. Reducing a decision tree's maximum depth and increasing K in K-nn both reduce the model's flexibility; adding layers or training data does not.
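A quick sketch of the decision-tree case (assumes scikit-learn is installed; the dataset and depths are arbitrary choices for illustration):

```python
from sklearn.datasets import make_moons
from sklearn.tree import DecisionTreeClassifier

# Capacity shrinks as max_depth shrinks; a very shallow tree cannot
# even fit its own training data well, i.e. it underfits.
X, y = make_moons(n_samples=200, noise=0.2, random_state=0)
for depth in (1, 3, 10):
    tree = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X, y)
    print(f"max_depth={depth:2d}  training accuracy = {tree.score(X, y):.2f}")
```

The same logic applies to K-nn: a larger K averages over more neighbours, smoothing the decision boundary and reducing the model's effective capacity.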

5.

MULTIPLE CHOICE QUESTION

1 min • 1 pt

True or False:

If we use grid-search for testing different hyper-parameter values, we can use each of these results for finding the confidence interval of the model error.

True

False

Answer explanation

Confidence intervals should be reported for the final model architecture. They are a prediction of how the final model will perform on unseen data. If the interval is instead computed from grid-search results, it mixes the errors of many different models, and so says little about the one model actually chosen.
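For instance, a normal-approximation interval for the final model's error rate can be computed from its held-out test performance alone (the counts below are made up):

```python
import math

n_test = 500        # hypothetical held-out test examples
errors = 60         # misclassified by the *final* model only
p = errors / n_test # observed error rate

# 95% normal approximation to the binomial proportion.
half_width = 1.96 * math.sqrt(p * (1 - p) / n_test)
print(f"error = {p:.3f} +/- {half_width:.3f}")
```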

6.

MULTIPLE SELECT QUESTION

1 min • 1 pt

Which of the following algorithms will change given different random seeds:

Neural networks

K-nearest neighbours (K = 1, with no ties)

Decision trees

K-means

Evolution Algorithms using simple tournament

Answer explanation

Think about which methods are deterministic. Neural networks (random weight initialisation), K-means (random initial centroids) and tournament-based evolutionary algorithms (random selection) all depend on the seed; 1-nearest-neighbour with no ties and standard decision-tree induction do not.
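A tiny sketch of the seed dependence for a neural network's weight initialisation (the function name is illustrative):

```python
import random

def init_weights(seed, n=3):
    # Different seeds -> different initial weights -> different trained model.
    rng = random.Random(seed)
    return [rng.uniform(-1.0, 1.0) for _ in range(n)]

print(init_weights(0))   # one starting point
print(init_weights(42))  # a different starting point
```

By contrast, 1-NN with no ties involves no random choices at all: the same data and query always give the same prediction.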

7.

MULTIPLE SELECT QUESTION

1 min • 1 pt

Which of the statements below correctly describe the differences between Gradient Descent, Stochastic Gradient Descent and Mini-batch Gradient Descent:

Gradient Descent is faster to compute than Stochastic Gradient Descent

Stochastic Gradient Descent is faster to compute than Mini-batch Gradient Descent

There is less noise in the gradients when using Mini-batch Gradient Descent compared to Stochastic Gradient Descent

Answer explanation

Gradient descent: gradients are calculated and a step is taken based on the whole training set. This is computationally expensive, as opposed to calculating the gradients and updating the parameters based on a single sample. Stochastic gradient descent therefore results in a very noisy learning signal, as the gradient is sensitive to the variability of each individual datapoint. The learning signal can be smoothed out by sampling groups of datapoints in mini-batch gradient descent, where the gradient is averaged over the datapoints in each batch.
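The three variants differ only in how many datapoints each update averages over. A minimal sketch on a made-up 1-D least-squares problem (all names and numbers are illustrative):

```python
import random

data = [(x, 2.0 * x) for x in range(100)]   # toy targets: y = 2x

def grad(w, batch):
    # Gradient of mean squared error for y_hat = w * x, averaged over batch.
    return sum(2 * (w * x - y) * x for x, y in batch) / len(batch)

w, lr = 0.0, 1e-4
for step in range(200):
    # Batch of 10 = mini-batch; size 1 = SGD; the full dataset = GD.
    batch = random.sample(data, 10)
    w -= lr * grad(w, batch)
print(round(w, 2))  # converges towards 2.0
```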
