
Distributed Systems
Authored by Distributed Systems
Information Technology (IT)
University

73 questions
1.
MULTIPLE CHOICE QUESTION
30 sec • 1 pt
What is the primary motivation for parallel execution in ML training?
To reduce model complexity
To speed up the model training process for large datasets
To minimize GPU memory usage
To simplify hyperparameter tuning
2.
MULTIPLE CHOICE QUESTION
30 sec • 1 pt
How long does training ResNet-50 on ImageNet-1K typically take on a single GPU?
Minutes to hours
Hours to one day
Several days to two weeks
Over a month
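A back-of-envelope estimate shows why single-GPU training lands in the "several days to two weeks" range. The dataset size is real; the epoch count and per-GPU throughput below are illustrative assumptions (actual throughput varies widely by GPU and implementation):

```python
# Rough estimate of single-GPU ResNet-50 / ImageNet-1K training time.
# Epoch count and throughput are assumptions for illustration.
images = 1_281_167       # ImageNet-1K training-set size
epochs = 90              # a common training schedule
throughput = 400         # images/sec on one GPU (assumed; hardware-dependent)

seconds = epochs * images / throughput
days = seconds / 86400
print(f"~{days:.1f} days")
```

At this assumed throughput the estimate is a few days; slower GPUs or heavier augmentation push it toward the two-week end of the range.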
3.
MULTIPLE CHOICE QUESTION
30 sec • 1 pt
What is the most popular form of parallel training?
Model Parallelism
Pipeline Parallelism
Data Parallelism
Hybrid Parallelism
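The idea behind data parallelism (the answer above) can be sketched without any GPU: each worker holds a replica of the model, computes gradients on its shard of the batch, and the gradients are averaged before every replica applies the same update. This NumPy sketch is illustrative only; the model, data, and learning rate are assumptions, not part of the quiz:

```python
import numpy as np

# Data-parallelism sketch: gradient descent on a linear model where
# 4 "workers" each compute the gradient on their shard of the batch,
# then the gradients are averaged (a simulated all-reduce).
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))           # full batch: 64 samples, 3 features
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w

w = np.zeros(3)                        # replicated model parameters
num_workers = 4
shards_X = np.array_split(X, num_workers)
shards_y = np.array_split(y, num_workers)

for step in range(200):
    # each worker: gradient of MSE loss on its own shard
    grads = [2 * Xs.T @ (Xs @ w - ys) / len(ys)
             for Xs, ys in zip(shards_X, shards_y)]
    # "all-reduce": average across workers, update every replica identically
    w -= 0.1 * np.mean(grads, axis=0)

print(np.round(w, 2))  # converges toward true_w
```

Because the shards are equal-sized, the averaged gradient equals the full-batch gradient, so the parallel run takes the same optimization path as a single worker on the whole batch, just with the per-step work divided by four.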
4.
MULTIPLE CHOICE QUESTION
30 sec • 1 pt
What is a key characteristic of ImageNet compared to CIFAR-10?
Larger dataset size
More classes
Higher image resolution
Greater label complexity
5.
MULTIPLE CHOICE QUESTION
30 sec • 1 pt
What is the primary goal of parallel execution?
To enable larger models
To reduce data storage costs
To speed up the training process
To improve model accuracy
6.
MULTIPLE CHOICE QUESTION
30 sec • 1 pt
What is the fundamental cause of long training times on a single node?
Limited CPU cores
GPU clock speed
Mismatch between data loading bandwidth and training bandwidth
Small batch sizes
7.
MULTIPLE CHOICE QUESTION
30 sec • 1 pt
What is the effect of ImageNet-1K's higher image resolution on training?
Faster convergence
Larger activations → more memory → fewer images per batch
Reduced overfitting
Higher gradient precision
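The "larger activations → more memory → fewer images per batch" chain can be made concrete with simple arithmetic. The channel count and dtype below are assumptions chosen for illustration; only the input resolutions (224×224 for ImageNet-1K, 32×32 for CIFAR-10) come from the datasets themselves:

```python
# Why higher-resolution inputs shrink the feasible batch size:
# a conv layer's activation tensor scales with H*W, so the per-image
# activation memory grows quadratically with input side length.
def activations_mib(height, width, channels=64, bytes_per_elem=4):
    """MiB of one float32 activation map for one image (assumed 64 channels)."""
    return height * width * channels * bytes_per_elem / 2**20

cifar = activations_mib(32, 32)        # CIFAR-10-sized input
imagenet = activations_mib(224, 224)   # ImageNet-1K-sized input

print(f"CIFAR-10:  {cifar:.2f} MiB per layer per image")
print(f"ImageNet:  {imagenet:.2f} MiB per layer per image")
print(f"ratio:     {imagenet / cifar:.0f}x")  # (224/32)^2 = 49x
```

A 49× larger activation footprint per image means roughly 49× fewer images fit in the same GPU memory budget, which is exactly why ImageNet-scale training forces smaller per-device batches.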