
CNN and NLP Quiz - Part B
Authored by Sayan De

60 questions
1.
MULTIPLE CHOICE QUESTION
30 sec • 5 pts
Which of the following is NOT a component of a CNN?
Convolutional layer
Pooling layer
Activation function
Bag-of-Words model
Answer explanation
The Bag-of-Words model is a text representation technique, not a component of a Convolutional Neural Network (CNN). CNNs consist of convolutional layers, pooling layers, and activation functions.
2.
MULTIPLE CHOICE QUESTION
30 sec • 5 pts
In NLP, stop words are:
Words essential to meaning
Frequently occurring but insignificant words
Synonyms of words
Words represented as vectors
Answer explanation
In NLP, stop words are frequently occurring but insignificant words, such as 'and', 'the', and 'is'. They are often removed during text preprocessing so that models focus on the more meaningful words, making the second option correct.
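A minimal sketch of stop-word removal in plain Python (the stop-word set below is an illustrative subset; real toolkits such as NLTK ship much fuller lists):

```python
# Illustrative stop-word subset; real NLP libraries provide complete lists.
STOP_WORDS = {"and", "the", "is", "a", "an", "of"}

def remove_stop_words(text):
    # Lowercase, split on whitespace, and drop any token in the stop-word set.
    return [tok for tok in text.lower().split() if tok not in STOP_WORDS]

print(remove_stop_words("The cat is on the mat"))  # ['cat', 'on', 'mat']
```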
3.
MULTIPLE CHOICE QUESTION
30 sec • 5 pts
In a CNN, pooling layers reduce the number of trainable parameters in the model.
TRUE
FALSE
Answer explanation
TRUE. Pooling layers in a CNN reduce the spatial dimensions of the input, which decreases the number of parameters and computations in the model, helping to prevent overfitting and improving efficiency.
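As a quick sketch (plain Python, not tied to any framework): a 2×2 pooling window with stride 2 halves each spatial dimension, and the pooling layer itself contributes no trainable parameters.

```python
# Output spatial size of a pooling layer: (size - pool) // stride + 1.
# A 2x2 pool with stride 2 halves each dimension; pooling adds 0 trainable parameters.
def pool_output_size(size, pool=2, stride=2):
    return (size - pool) // stride + 1

print(pool_output_size(32))  # 16: a 32x32 feature map becomes 16x16
print(pool_output_size(16))  # 8
```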
4.
MULTIPLE CHOICE QUESTION
30 sec • 5 pts
For an input image of size 32×32×3, a filter of size 5×5 with 10 filters and a stride of 1, calculate the number of parameters for the convolution layer (bias included).
780
760
7600
7800
Answer explanation
Each filter has 5×5×3 = 75 weights plus 1 bias, giving 76 parameters per filter. With 10 filters, the total is 76 × 10 = 760, making 760 the correct answer.
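The arithmetic can be checked with a short helper (a sketch of the standard parameter-count formula, using the values from the question):

```python
# Trainable parameters of a conv layer:
# (kernel_h * kernel_w * in_channels + 1 bias) per filter, times the number of filters.
def conv_param_count(kh, kw, in_channels, num_filters, bias=True):
    per_filter = kh * kw * in_channels + (1 if bias else 0)
    return per_filter * num_filters

# Quiz values: 5x5 kernel, 3 input channels, 10 filters, bias included.
print(conv_param_count(5, 5, 3, 10))  # 760
```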
5.
MULTIPLE CHOICE QUESTION
30 sec • 5 pts
Which of the following best defines embedding in NLP?
1. Converting text into numerical vectors
2. Mapping words to high-dimensional sparse vectors
3. Reducing text dimensions while retaining context
1,2
1,3
2,3
None of the above
Answer explanation
Embedding in NLP primarily involves converting text into numerical vectors (1) and reducing dimensions while retaining context (3). Mapping to high-dimensional sparse vectors (2) is less accurate, making 1 and 3 the best definitions.
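The contrast can be illustrated with a toy vocabulary (the 2-d embedding values below are invented for illustration; real embeddings are learned from data):

```python
vocab = ["cat", "dog", "car"]

# Sparse one-hot vectors: dimension equals vocabulary size, mostly zeros.
one_hot = {w: [1 if j == i else 0 for j in range(len(vocab))]
           for i, w in enumerate(vocab)}

# A dense, low-dimensional embedding table (values made up for illustration).
# Related words ("cat", "dog") end up close together in the embedding space.
embedding = {"cat": [0.9, 0.1], "dog": [0.85, 0.2], "car": [0.1, 0.95]}

print(one_hot["dog"])    # [0, 1, 0]
print(embedding["dog"])  # [0.85, 0.2]
```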
6.
MULTIPLE CHOICE QUESTION
30 sec • 5 pts
Which statement about padding in CNN is correct?
Padding increases the computational cost.
Padding prevents the reduction of spatial dimensions in output.
Zero padding always reduces overfitting.
Padding reduces the number of trainable parameters.
Answer explanation
Padding is used in CNNs to maintain the spatial dimensions of the input after convolution. This prevents the output size from shrinking, allowing for better feature extraction without losing information.
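The standard output-size formula makes this concrete (a sketch using the 32×32 input and 5×5 filter from question 4):

```python
# Conv output size: (W - F + 2P) // S + 1, for input width W, filter F,
# padding P, and stride S.
def conv_output_size(w, f, p=0, s=1):
    return (w - f + 2 * p) // s + 1

print(conv_output_size(32, 5, p=0))  # 28: without padding the output shrinks
print(conv_output_size(32, 5, p=2))  # 32: 'same' padding preserves the size
```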
7.
MULTIPLE CHOICE QUESTION
30 sec • 5 pts
Cosine similarity ranges from −1 to +1.
TRUE
FALSE
Answer explanation
TRUE. Cosine similarity is the cosine of the angle between two vectors, so it always lies in the range [−1, +1]. For vectors with only non-negative components (common in NLP, e.g. term-frequency vectors) it is further restricted to [0, 1], which is a subset of that range.
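A minimal pure-Python implementation demonstrating the range, checked at the extremes:

```python
import math

def cosine_similarity(u, v):
    # cos(theta) = (u . v) / (||u|| * ||v||)
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

print(cosine_similarity([1, 0], [1, 0]))   # 1.0  (same direction)
print(cosine_similarity([1, 0], [-1, 0]))  # -1.0 (opposite direction)
print(cosine_similarity([1, 0], [0, 1]))   # 0.0  (orthogonal)
```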