
Technical Quiz & Challenge
Authored by Ranjeet M
Fun
Professional Development
Used 10+ times

22 questions
1.
MULTIPLE CHOICE QUESTION
30 sec • 5 pts
After changing the default block size and restarting the cluster, to which data does the new size apply?
all data
no data
existing data
new data
Answer explanation
In Hadoop HDFS, the block size is specified in the configuration file hdfs-site.xml. To change it, set the dfs.blocksize parameter (default 128 MB in Hadoop 2.x; 64 MB in Hadoop 1.x) to the required value. A cluster restart is required for the change to take effect, and the new size applies only to files written after the change; the block size of existing files does not change. To re-block existing files, the 'distcp' utility can be used to copy them under the new setting.
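A minimal hdfs-site.xml fragment for the setting described above (a sketch; the property name dfs.blocksize is the Hadoop 2.x form of the older dfs.block.size, and the value shown, 256 MB in bytes, is only an example):

```xml
<property>
  <name>dfs.blocksize</name>
  <!-- 256 MB expressed in bytes; applies only to files written after a restart -->
  <value>268435456</value>
</property>
```

Existing files keep their old block size; they can be rewritten under the new size by copying them, for example with hadoop distcp.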
2.
MULTIPLE CHOICE QUESTION
30 sec • 5 pts
DataNode failure is handled by?
Replication-factor
Checkpointing
Block Report
Secondary NameNode
Answer explanation
If a DataNode fails, the data blocks it held are re-created on other DataNodes, so the replication factor is always maintained. If the failed DataNode comes back online, it is treated as a fresh node and used for new data blocks.
3.
MULTIPLE CHOICE QUESTION
30 sec • 5 pts
What will happen if the block size in Hadoop cluster is set to 4KB?
Under-utilization of the cluster.
Fewer blocks are created.
Overburdening of the NameNode.
Better parallelism will be achieved.
Answer explanation
If the block size is set to 4 KB, a file will be split into a very large number of data blocks. The NameNode stores metadata for every block, and clients must ask the NameNode for the address of each next data block, which will overburden the NameNode.
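To see why a tiny block size overburdens the NameNode, compare the block counts for the same file at 4 KB versus the default 128 MB (a rough Python sketch; the 1 GB file size is an illustrative assumption, not from the question):

```python
import math

file_size_bytes = 1 * 1024**3   # assume a 1 GB file for illustration

default_block = 128 * 1024**2   # 128 MB default block size
tiny_block = 4 * 1024           # 4 KB block size from the question

# Each block costs one metadata entry on the NameNode and one
# block-address lookup by the client while reading the file.
blocks_default = math.ceil(file_size_bytes / default_block)
blocks_tiny = math.ceil(file_size_bytes / tiny_block)

print(blocks_default)   # 8 blocks at 128 MB
print(blocks_tiny)      # 262144 blocks at 4 KB
```

The same file generates over 32,000 times as many blocks at 4 KB, and that many more metadata entries and lookups.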
4.
MULTIPLE CHOICE QUESTION
30 sec • 5 pts
Suppose, in Hadoop 2.0, we have a 750 MB input file and there are 3 nodes in the cluster with the default replication factor. What will be the total number of blocks generated in HDFS for that file?
10
24
12
18
Answer explanation
Number of data blocks = file size / block size, rounded up: ceil(750 / 128) = 6.
With the default replication factor of 3: 6 × 3 = 18 blocks.
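The arithmetic above can be sketched as a quick check (Python; the numbers come from the question):

```python
import math

file_size_mb = 750        # input file size
block_size_mb = 128       # default HDFS block size in Hadoop 2.x
replication_factor = 3    # default replication factor

# The file is cut into 128 MB blocks; the last block holds the
# remainder (750 = 5 * 128 + 110), hence the round-up.
blocks = math.ceil(file_size_mb / block_size_mb)
print(blocks)             # 6 blocks for the file

# Each block is stored replication_factor times across the cluster.
total_block_copies = blocks * replication_factor
print(total_block_copies) # 18 block copies in total
```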
5.
MULTIPLE CHOICE QUESTION
30 sec • 5 pts
What kind of scaling does HDFS primarily support?
Vertical
Horizontal
Adaptive
Diagonal
Answer explanation
Hadoop scales horizontally: more nodes can be added to the existing cluster without stopping the running system.
6.
MULTIPLE CHOICE QUESTION
30 sec • 5 pts
How does the NameNode get to know that a data block is corrupted?
Heartbeat
Secondary NameNode sends Notification
Metadata
Block Report
7.
MULTIPLE CHOICE QUESTION
30 sec • 5 pts
What does Apache Spark provide?
storage + computation
everything that Hadoop core provides
Only computation
Only storage