
Technical Quiz & Challenge

Quiz • Fun • Professional Development • Hard
Ranjeet M
22 questions
1.
MULTIPLE CHOICE QUESTION
30 sec • 5 pts
After changing the default block size and restarting the cluster, to which data does the new size apply?
all data
no data
existing data
new data
Answer explanation
In Hadoop HDFS, the block size is set in the configuration file hdfs-site.xml via the dfs.blocksize property (dfs.block.size is the older, deprecated name; the default is 128 MB in Hadoop 2.x, 64 MB in Hadoop 1.x). After changing it, the cluster must be restarted for the change to take effect, and the new size applies only to newly written files. The block size of existing files does not change; to re-block existing files, the distcp utility can be used to copy them under the new setting.
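As a rough sketch, a block-size override in hdfs-site.xml might look like the fragment below (the 256 MB value is purely illustrative, not a recommendation):

```xml
<!-- hdfs-site.xml: illustrative fragment; 268435456 bytes = 256 MB -->
<property>
  <name>dfs.blocksize</name>
  <value>268435456</value>
</property>
```

The value can also be given with a size suffix (e.g. 256m) in recent Hadoop versions.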
2.
MULTIPLE CHOICE QUESTION
30 sec • 5 pts
How is DataNode failure handled?
Replication-factor
Checkpointing
Block Report
Secondary NameNode
Answer explanation
If a DataNode fails, the data blocks it held are re-created on other DataNodes, so the replication factor is always maintained. If the failed DataNode comes back, it is treated as a fresh node and used for new data blocks.
3.
MULTIPLE CHOICE QUESTION
30 sec • 5 pts
What will happen if the block size in a Hadoop cluster is set to 4 KB?
Under utilization of cluster.
Lesser number of blocks are created.
Over burdening of NameNode
Better parallelism will be achieved.
Answer explanation
A 4 KB block size would produce an enormous number of data blocks. The NameNode stores the metadata for every block, and DataNode clients must repeatedly ask the NameNode for the next block's address, which would overburden the NameNode.
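To illustrate the scale, a back-of-the-envelope sketch (the 1 GiB file size is just an assumed example) of how many blocks a single file would generate at 4 KB versus the default 128 MB:

```python
import math

def num_blocks(file_size_bytes: int, block_size_bytes: int) -> int:
    """Number of HDFS blocks needed for a file (the last block may be partial)."""
    return math.ceil(file_size_bytes / block_size_bytes)

one_gib = 1024 ** 3
print(num_blocks(one_gib, 4 * 1024))         # 4 KB blocks  -> 262144 blocks
print(num_blocks(one_gib, 128 * 1024 ** 2))  # 128 MB blocks -> 8 blocks
```

Over 260,000 metadata entries for a single 1 GiB file makes the NameNode pressure obvious.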
4.
MULTIPLE CHOICE QUESTION
30 sec • 5 pts
Suppose in Hadoop 2.0 we have a 750 MB input file and there are 3 nodes in the cluster, with the default replication factor. What will be the total number of blocks generated in HDFS for that file?
10
24
12
18
Answer explanation
Number of data blocks = ceil(file size / block size) = ceil(750 / 128) = 6
With the default replication factor of 3: 6 × 3 = 18 block replicas.
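The arithmetic above can be sketched directly (a minimal illustration of the explanation, not an HDFS API):

```python
import math

FILE_MB, BLOCK_MB, REPLICATION = 750, 128, 3

blocks = math.ceil(FILE_MB / BLOCK_MB)  # 6 blocks; the last block is partial (~110 MB)
replicas = blocks * REPLICATION         # 18 block replicas stored across the cluster
print(blocks, replicas)                 # -> 6 18
```

Note that the number of nodes (3) does not change the block count; it only determines where the 18 replicas are placed.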
5.
MULTIPLE CHOICE QUESTION
30 sec • 5 pts
What kind of scaling does HDFS primarily support?
Vertical
Horizontal
Adaptive
Diagonal
Answer explanation
Hadoop scales horizontally: more nodes are added to the existing cluster, which can be done without stopping the running system.
6.
MULTIPLE CHOICE QUESTION
30 sec • 5 pts
How does the NameNode learn that a data block is corrupted?
Heartbeat
Secondary NameNode sends Notification
Metadata
Block Report
7.
MULTIPLE CHOICE QUESTION
30 sec • 5 pts
What does Apache Spark provide?
storage + computation
everything that Hadoop core provides
Only computation
Only storage.