Technical Quiz & Challenge

Professional Development

22 Qs

Similar activities

Squad2019 • 1st Grade - Professional Development • 20 Qs
Nokia Wave 4 • Professional Development • 20 Qs
APL Quiz 1 • Professional Development • 20 Qs
Fun Friday | APMOD & AE • Professional Development • 20 Qs
TMC Website Quiz • Professional Development • 18 Qs
Logo quiz • Professional Development • 20 Qs
Canvalover quiz • 7th Grade - Professional Development • 23 Qs
Science Game Quiz • KG - Professional Development • 20 Qs

Technical Quiz & Challenge

Assessment • Quiz • Fun • Professional Development • Hard

Created by Ranjeet M • Used 10+ times

22 questions


1.

MULTIPLE CHOICE QUESTION

30 sec • 5 pts

After changing the default block size and restarting the cluster, to which data does the new size apply?

all data

no data

existing data

new data

Answer explanation

In Hadoop HDFS, the block size is set in the configuration file hdfs-site.xml via the dfs.blocksize parameter (dfs.block.size in older releases); the default is 64 MB in Hadoop 1.x and 128 MB in Hadoop 2.x. After changing it, a cluster restart is required for the change to take effect, and the new size applies only to newly written files. The block size of existing files does not change; to rewrite existing files at the new block size, the distcp utility can be used.
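
As a hedged illustration (dfs.blocksize is the property name in current Hadoop releases, dfs.block.size the older spelling; the value is given in bytes), the relevant hdfs-site.xml entry might look like this:

    <!-- hdfs-site.xml: default block size for newly written files -->
    <property>
      <name>dfs.blocksize</name>
      <value>268435456</value>  <!-- 256 MB; existing files keep their old block size -->
    </property>

Existing files can then be copied at the new block size with distcp, for example hadoop distcp -Ddfs.blocksize=268435456 /old/path /new/path (both paths are hypothetical).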

2.

MULTIPLE CHOICE QUESTION

30 sec • 5 pts

DataNode failure is handled by?

Replication-factor

Checkpointing

Block Report

Secondary NameNode

Answer explanation

If a DataNode fails, the data blocks it held are re-created on other DataNodes from the surviving replicas, so the replication factor is always maintained. If the failed DataNode comes back, it is treated as a fresh node and used for newer data blocks.
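
As a minimal sketch (assuming hadoop-client on the classpath, a reachable NameNode, and a hypothetical file path), the replication factor that HDFS maintains for a file can be inspected through the Java FileSystem API:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class CheckReplication {
        public static void main(String[] args) throws Exception {
            // Picks up fs.defaultFS from core-site.xml on the classpath
            FileSystem fs = FileSystem.get(new Configuration());
            // /data/input.txt is a hypothetical path used only for illustration
            FileStatus status = fs.getFileStatus(new Path("/data/input.txt"));
            // HDFS re-replicates lost blocks to keep this factor satisfied
            System.out.println("replication = " + status.getReplication());
        }
    }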

3.

MULTIPLE CHOICE QUESTION

30 sec • 5 pts

What will happen if the block size in Hadoop cluster is set to 4KB?

Under-utilization of the cluster.

A smaller number of blocks is created.

Over-burdening of the NameNode

Better parallelism will be achieved.

Answer explanation

Setting the block size to 4 KB would split files into an enormous number of data blocks. The NameNode stores metadata for every block, and clients must repeatedly ask the NameNode for the address of the next block, which would overburden the NameNode.
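
To make the burden concrete, here is a back-of-the-envelope sketch in Java (the 1 GB file size is an assumed example): with 4 KB blocks, even a modest file produces hundreds of thousands of block entries for the NameNode to track in memory.

    public class BlockCountComparison {
        public static void main(String[] args) {
            long fileBytes = 1L << 30;                      // a 1 GB file
            long tinyBlocks = fileBytes / (4L << 10);       // 4 KB block size
            long defaultBlocks = fileBytes / (128L << 20);  // 128 MB (Hadoop 2.x default)
            System.out.println("4 KB blocks:   " + tinyBlocks);    // 262144 metadata entries
            System.out.println("128 MB blocks: " + defaultBlocks); // 8 metadata entries
        }
    }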

4.

MULTIPLE CHOICE QUESTION

30 sec • 5 pts

Suppose in Hadoop 2.0 we have a 750 MB input file and there are 3 nodes in the cluster. With the default replication factor, what will be the total number of blocks generated in HDFS for that file?

10

24

12

18

Answer explanation

File size / block size = number of data blocks: 750 / 128 ≈ 5.86, rounded up to 6 (the final partial block still counts as a block).

With the default replication factor of 3: 6 × 3 = 18.
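
The same arithmetic as a runnable Java sketch (ceiling division, because the final 110 MB remainder still occupies a block of its own):

    public class TotalBlocks {
        public static void main(String[] args) {
            long fileMb = 750, blockMb = 128, replication = 3;
            long blocks = (fileMb + blockMb - 1) / blockMb; // ceil(750 / 128) = 6
            System.out.println(blocks * replication);       // 6 * 3 = 18
        }
    }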

5.

MULTIPLE CHOICE QUESTION

30 sec • 5 pts

What kind of scaling does HDFS primarily support?

Vertical

Horizontal

Adaptive

Diagonal

Answer explanation

Hadoop scales horizontally by adding more nodes to an existing cluster, which can be done without stopping the running system.
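
As a hedged operational sketch (Hadoop 2.x script layout; the new machine is assumed to already have Hadoop installed and configured), scaling out amounts to starting a DataNode daemon on the new node, which the NameNode registers through its heartbeats and block reports with no downtime for the rest of the cluster:

    # Run on the newly added machine
    $HADOOP_HOME/sbin/hadoop-daemon.sh start datanode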

6.

MULTIPLE CHOICE QUESTION

30 sec • 5 pts

How does the NameNode get to know that a data block is corrupted?

Heartbeat

Secondary NameNode sends Notification

Metadata

Block Report

7.

MULTIPLE CHOICE QUESTION

30 sec • 5 pts

What does Apache Spark provide?

storage + computation

everything that Hadoop core provides

Only computation

Only storage.
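
Spark is a processing engine: it provides computation only and delegates storage to external systems such as HDFS. A minimal sketch in Java (assuming spark-core on the classpath and a hypothetical HDFS path):

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;

    public class SparkComputeOnly {
        public static void main(String[] args) {
            SparkConf conf = new SparkConf().setAppName("compute-only").setMaster("local[*]");
            try (JavaSparkContext sc = new JavaSparkContext(conf)) {
                // Storage lives in HDFS; Spark only performs the computation.
                // hdfs:///data/input.txt is a hypothetical path.
                long lines = sc.textFile("hdfs:///data/input.txt").count();
                System.out.println("line count = " + lines);
            }
        }
    }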
