
Deep learning Batch 1
Authored by MoneyMakesMoney
Information Technology (IT)
University

10 questions
1.
MULTIPLE CHOICE QUESTION
10 sec • 3 pts
In the context of tensors, what is the order of a matrix?
A) 0th-order
B) 1st-order
C) 2nd-order
D) 3rd-order
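A matrix is a 2nd-order tensor: its order is the number of axes (indices) needed to address an element. This is not part of the original quiz, but a quick NumPy check makes the hierarchy concrete (`ndim` reports the tensor order):

```python
import numpy as np

scalar = np.array(3.0)             # 0th-order tensor: no axes
vector = np.array([1.0, 2.0])      # 1st-order tensor: one axis
matrix = np.array([[1.0, 2.0],
                   [3.0, 4.0]])    # 2nd-order tensor: rows and columns

print(scalar.ndim, vector.ndim, matrix.ndim)  # 0 1 2
```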
2.
MULTIPLE CHOICE QUESTION
10 sec • 3 pts
Which of the following is true about the probability density function (PDF) for continuous variables?
A) The PDF must satisfy \( p(x) \leq 1 \)
B) The integral of the PDF over its domain must equal 1
C) The PDF can take negative values
D) The PDF is only defined for discrete variables
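The defining constraint on a PDF is that it is non-negative and integrates to 1 over its domain; pointwise it may exceed 1 (a narrow Gaussian does). A rough numerical check on the standard normal density, sketched here with a plain trapezoid sum rather than a library integrator:

```python
import numpy as np

# Standard normal PDF: p(x) = exp(-x^2 / 2) / sqrt(2 * pi)
x = np.linspace(-10.0, 10.0, 100001)
pdf = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

# Trapezoid-rule approximation of the integral of p(x) over [-10, 10];
# the tails beyond this range contribute a negligible amount.
integral = np.sum(0.5 * (pdf[1:] + pdf[:-1]) * np.diff(x))
print(round(integral, 6))  # ≈ 1.0
```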
3.
MULTIPLE CHOICE QUESTION
10 sec • 3 pts
What is the primary difference between Batch Gradient Descent and Stochastic Gradient Descent (SGD)?
A) SGD uses the entire dataset for each update, while Batch GD uses a single example
B) SGD uses a single example for each update, while Batch GD uses the entire dataset
C) SGD is slower but more accurate than Batch GD
D) SGD is only used for convex functions
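The distinction is what each update sees: Batch GD averages the gradient over the whole dataset per step, while SGD updates from one randomly drawn example. A minimal sketch on synthetic 1-D linear regression (the data and learning rate here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=100)
y = 3.0 * X + rng.normal(scale=0.1, size=100)  # true slope is 3.0

lr = 0.1

# Batch GD: each update uses the gradient averaged over the ENTIRE dataset.
w_batch = 0.0
for _ in range(100):
    grad = np.mean((w_batch * X - y) * X)
    w_batch -= lr * grad

# SGD: each update uses the gradient of a SINGLE randomly chosen example.
w_sgd = 0.0
for _ in range(1000):
    i = rng.integers(len(y))
    grad = (w_sgd * X[i] - y[i]) * X[i]
    w_sgd -= lr * grad

print(w_batch, w_sgd)  # both approach the true slope 3.0
```

Batch GD takes smooth, deterministic steps; SGD takes many cheap, noisy ones.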
4.
MULTIPLE CHOICE QUESTION
10 sec • 3 pts
Which of the following is a key disadvantage of Stochastic Gradient Descent (SGD)?
A) It requires large memory to compute gradients
B) It has high variance in parameter updates
C) It converges slower than Batch Gradient Descent
D) It cannot escape local minima
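The noisy updates in SGD come from the fact that single-example gradients scatter widely around the full-batch gradient. A small sketch (synthetic data, parameter held fixed purely to compare the gradients):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=200)
y = 2.0 * X + rng.normal(scale=0.5, size=200)
w = 0.0  # evaluate all gradients at the same fixed parameter value

full_grad = np.mean((w * X - y) * X)   # batch gradient: one deterministic value
per_example = (w * X - y) * X          # the gradient SGD would see per example

# The per-example gradients average to the batch gradient but vary widely
# around it — this spread is the "high variance" of SGD's updates.
print(np.var(per_example))
```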
5.
MULTIPLE CHOICE QUESTION
10 sec • 3 pts
What is the main purpose of the learning rate in gradient-based optimization?
A) To control the speed of convergence
B) To increase the number of iterations
C) To reduce the loss function directly
D) To increase the model complexity
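The learning rate scales each gradient step and so governs how fast (and whether) the iterates converge. On the toy objective f(w) = w², where the gradient is 2w, the effect of the step size is easy to see (the specific rates below are illustrative):

```python
# Minimise f(w) = w^2 with gradient descent; f'(w) = 2w.
def descend(lr, steps=20, w=1.0):
    for _ in range(steps):
        w -= lr * 2 * w
    return w

print(descend(0.1))   # small rate: steady convergence toward 0
print(descend(0.9))   # near-critical rate: oscillates in sign but still shrinks
print(descend(1.1))   # too large: |w| grows every step and diverges
```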
6.
MULTIPLE CHOICE QUESTION
10 sec • 3 pts
Which of the following is true about L1 regularization?
A) It penalizes the square of the weights
B) It is less effective than L2 regularization
C) It is also known as weight decay
D) It can reduce some weights to exactly zero
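L1's signature property is sparsity: because its penalty has a constant slope near zero, optimisation can drive weights to exactly zero (L2, by contrast, penalises squared weights and is the one called weight decay). One standard way this shows up is the soft-thresholding (proximal) step used with an L1 penalty, sketched below:

```python
import numpy as np

# Soft-thresholding: the proximal operator of the L1 penalty with strength lam.
# Weights whose magnitude is below lam snap to exactly zero; larger weights
# shrink toward zero by lam.
def soft_threshold(w, lam):
    return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)

w = np.array([0.8, -0.05, 0.02, -1.5])
shrunk = soft_threshold(w, lam=0.1)
print(shrunk)  # the two small weights become exactly zero
```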
7.
MULTIPLE CHOICE QUESTION
10 sec • 3 pts
What is the primary purpose of dropout in neural networks?
A) To reduce the number of layers in the network
B) To increase the learning rate
C) To randomly remove nodes during training to prevent overfitting
D) To reduce the number of parameters in the model
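Dropout regularises by randomly zeroing units during training so the network cannot rely on any single node. A minimal NumPy sketch of the common "inverted dropout" variant (the function and names here are illustrative, not from any particular library):

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(activations, p=0.5, training=True):
    """Inverted dropout: zero each unit with probability p during training,
    scaling the survivors by 1 / (1 - p) so the expected activation matches
    test time. At test time the input passes through unchanged."""
    if not training:
        return activations
    mask = rng.random(activations.shape) >= p
    return activations * mask / (1.0 - p)

a = np.ones(10)
dropped = dropout(a, p=0.5)
print(dropped)  # roughly half the entries are 0, the rest are scaled to 2.0
```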