Deep learning Batch 1

University

10 Qs

Similar activities

C1M3 • University • 10 Qs

POST TEST PELATIHAN (Training Post-Test) • University • 10 Qs

Татаал издөө сурамдары (Complex Search Queries) • 7th Grade - University • 10 Qs

PT101: Quiz No. 1 • University • 10 Qs

Power BI Quiz (Beginner Level) • University • 15 Qs

Contpaqi Contabilidad Sesión 5 ultima (Contpaqi Accounting, Session 5, Final) • University • 12 Qs

STP_Integrasi aplikasi (Application Integration) • 9th Grade - University • 15 Qs

Ôn tập Trí tuệ Nhân tạo (Artificial Intelligence Review, VNUIS) • University • 15 Qs

Deep learning Batch 1

Assessment • Quiz • Information Technology (IT) • University • Hard

Created by MoneyMakesMoney MoneyMakesMoney • Used 1+ times

10 questions

1.

MULTIPLE CHOICE QUESTION

10 sec • 3 pts

In the context of tensors, what is the order of a matrix?

A) 0th-order

B) 1st-order

C) 2nd-order

D) 3rd-order
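Tensor order is just the number of axes: a scalar is 0th-order, a vector 1st-order, and a matrix 2nd-order. A minimal NumPy sketch (NumPy itself is an illustration, not part of the quiz) makes the correspondence concrete via `ndim`:

```python
import numpy as np

scalar = np.array(5.0)           # 0th-order tensor: no axes
vector = np.array([1.0, 2.0])    # 1st-order tensor: one axis
matrix = np.eye(3)               # 2nd-order tensor: rows and columns
cube = np.zeros((2, 2, 2))       # 3rd-order tensor: three axes

print(scalar.ndim, vector.ndim, matrix.ndim, cube.ndim)  # 0 1 2 3
```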

2.

MULTIPLE CHOICE QUESTION

10 sec • 3 pts

Which of the following is true about the probability density function (PDF) for continuous variables?

A) The PDF must satisfy \( p(x) \leq 1 \)

B) The integral of the PDF over its domain must equal 1

C) The PDF can take negative values

D) The PDF is only defined for discrete variables
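The defining constraint on a PDF is that it integrates to 1 over its domain; pointwise values may well exceed 1. A small numerical sketch (the narrow Gaussian is my own example) shows both facts at once:

```python
import numpy as np

# Narrow normal density with sigma = 0.1: its peak exceeds 1,
# yet its integral over the real line is still exactly 1.
sigma = 0.1
x = np.linspace(-1.0, 1.0, 20001)
dx = x[1] - x[0]
pdf = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

peak = pdf.max()            # about 3.99, so p(x) <= 1 is NOT required
integral = (pdf * dx).sum() # Riemann sum, approximately 1.0
print(peak, integral)
```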

3.

MULTIPLE CHOICE QUESTION

10 sec • 3 pts

What is the primary difference between Batch Gradient Descent and Stochastic Gradient Descent (SGD)?

A) SGD uses the entire dataset for each update, while Batch GD uses a single example

B) SGD uses a single example for each update, while Batch GD uses the entire dataset

C) SGD is slower but more accurate than Batch GD

D) SGD is only used for convex functions
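The Batch-vs-SGD distinction is purely about how much data feeds each parameter update. A toy linear-regression sketch (the data, learning rate, and loop structure are my own illustration, not from the quiz) contrasts one full-dataset update per epoch with one update per shuffled example:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + 0.01 * rng.normal(size=100)

def grad(w, Xb, yb):
    # gradient of the mean squared error on the batch (Xb, yb)
    return Xb.T @ (Xb @ w - yb) / len(yb)

w_batch = np.zeros(3)
w_sgd = np.zeros(3)
lr = 0.1
for epoch in range(50):
    # Batch GD: one update per epoch, using the ENTIRE dataset
    w_batch -= lr * grad(w_batch, X, y)
    # SGD: one update per SINGLE example, in shuffled order
    for i in rng.permutation(len(y)):
        w_sgd -= lr * grad(w_sgd, X[i:i + 1], y[i:i + 1])

print(np.round(w_batch, 2), np.round(w_sgd, 2))  # both near true_w
```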

4.

MULTIPLE CHOICE QUESTION

10 sec • 3 pts

Which of the following is a key disadvantage of Stochastic Gradient Descent (SGD)?

A) It requires large memory to compute gradients

B) It has high variance in parameter updates

C) It converges slower than Batch Gradient Descent

D) It cannot escape local minima
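The "high variance" point can be seen directly: single-example gradients are unbiased estimates of the full gradient but scatter widely around it. A short sketch (the toy data is an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5])
w = np.zeros(3)

# Full-batch gradient of the mean squared error at w
full_grad = X.T @ (X @ w - y) / len(y)

# One gradient estimate per individual example
per_example = np.stack([X[i] * (X[i] @ w - y[i]) for i in range(len(y))])

print(per_example.mean(axis=0))  # averages back to the full gradient
print(per_example.std(axis=0))   # but individual estimates vary a lot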

5.

MULTIPLE CHOICE QUESTION

10 sec • 3 pts

What is the main purpose of the learning rate in gradient-based optimization?

A) To control the speed of convergence

B) To increase the number of iterations

C) To reduce the loss function directly

D) To increase the model complexity
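The learning rate scales each step along the negative gradient, which is why it governs convergence speed and stability. A pure-Python sketch on the 1-D function f(w) = w², with step sizes chosen only for illustration:

```python
def descend(lr, steps=20, w=1.0):
    """Run gradient descent on f(w) = w**2, whose gradient is 2*w."""
    for _ in range(steps):
        w -= lr * 2 * w
    return w

print(descend(0.1))   # small step: steady convergence toward 0
print(descend(0.9))   # large step: oscillates but still shrinks
print(descend(1.1))   # too large: each step overshoots and diverges
```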

6.

MULTIPLE CHOICE QUESTION

10 sec • 3 pts

Which of the following is true about L1 regularization?

A) It penalizes the square of the weights

B) It is less effective than L2 regularization

C) It is also known as weight decay

D) It can reduce some weights to exactly zero
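A hallmark of L1 regularization is that its proximal (soft-thresholding) step sets small weights to exactly zero, producing sparse models, whereas L2 only shrinks them. A minimal NumPy sketch (the weights and threshold are illustrative values):

```python
import numpy as np

def soft_threshold(w, lam):
    # Proximal step for the L1 penalty: shrink every weight toward zero
    # and set any weight with |w| <= lam EXACTLY to zero.
    return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)

w = np.array([0.8, -0.05, 0.3, 0.02, -1.2])
print(soft_threshold(w, 0.1))  # the two small weights become 0.0
```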

7.

MULTIPLE CHOICE QUESTION

10 sec • 3 pts

What is the primary purpose of dropout in neural networks?

A) To reduce the number of layers in the network

B) To increase the learning rate

C) To randomly remove nodes during training to prevent overfitting

D) To reduce the number of parameters in the model
