
Technical Quiz & Challenge

Authored by Ranjeet M



22 questions


1.

MULTIPLE CHOICE QUESTION

30 sec • 5 pts

After changing the default block size and restarting the cluster, to which data does the new size apply?

all data

no data

existing data

new data

Answer explanation

In Hadoop HDFS, the block size is specified in the configuration file hdfs-site.xml. To change it, set the dfs.blocksize parameter (dfs.block.size in older releases) to the required value; the default is 64 MB in Hadoop 1.x and 128 MB in Hadoop 2.x. A cluster restart is required for the change to take effect, and the new size is applied only to new files. The block size of existing files does not change; to rewrite existing files with the new block size, the distcp utility can be used.
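
To make this concrete, here is a minimal Java sketch, assuming a hypothetical namenode URI and file path, of the two client-side options: setting dfs.blocksize in the configuration (mirroring the hdfs-site.xml entry, and affecting only files created afterwards) and overriding the block size for a single file at creation time.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class BlockSizeDemo {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            conf.set("fs.defaultFS", "hdfs://namenode:9000"); // hypothetical URI

            // Client-side equivalent of the hdfs-site.xml entry; affects only
            // files created from now on, never existing ones.
            conf.setLong("dfs.blocksize", 256L * 1024 * 1024); // 256 MB

            FileSystem fs = FileSystem.get(conf);

            // The block size can also be overridden per file at creation time.
            FSDataOutputStream out = fs.create(
                    new Path("/data/new-file.bin"),           // hypothetical path
                    true,                                     // overwrite
                    conf.getInt("io.file.buffer.size", 4096), // buffer size
                    (short) 3,                                // replication factor
                    512L * 1024 * 1024);                      // 512 MB blocks for this file
            out.close();
        }
    }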

2.

MULTIPLE CHOICE QUESTION

30 sec • 5 pts

How is DataNode failure handled?

Replication-factor

Checkpointing

Block Report

Secondary NameNode

Answer explanation

If a DataNode fails, the data blocks it holds are re-created on other DataNodes, so the replication factor is always maintained. If the failed DataNode comes back, it is treated as a fresh node and used for new data blocks.
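
A minimal Java sketch, with a hypothetical file path, of inspecting and raising a file's replication factor through the FileSystem API; the NameNode responds the same way as after a DataNode failure, scheduling extra replicas on other DataNodes until the target factor is met.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ReplicationDemo {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            Path file = new Path("/data/important.bin"); // hypothetical path

            // Inspect the replication factor the NameNode maintains for the file.
            FileStatus status = fs.getFileStatus(file);
            System.out.println("replication = " + status.getReplication());

            // Raise it; the NameNode schedules additional replicas on other
            // DataNodes until the new target factor is met.
            fs.setReplication(file, (short) 4);
        }
    }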

3.

MULTIPLE CHOICE QUESTION

30 sec • 5 pts

What will happen if the block size in a Hadoop cluster is set to 4 KB?

Underutilization of the cluster

Fewer blocks are created

Overburdening of the NameNode

Better parallelism will be achieved

Answer explanation

Setting the block size to 4 KB leads to a huge number of data blocks. The NameNode stores the metadata for every block, and clients must repeatedly ask the NameNode for the next block's address, which overburdens the NameNode.
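
For a sense of scale, here is a Java sketch (file path hypothetical) that lists a file's blocks through the FileSystem API; with a 4 KB block size, a 1 GB file would already produce about 262,144 entries, each tracked as metadata by the NameNode.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.BlockLocation;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class BlockCountDemo {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            FileStatus st = fs.getFileStatus(new Path("/data/big-file.bin")); // hypothetical

            // One BlockLocation per block: with 4 KB blocks, a 1 GB file
            // yields ~262,144 entries, all held in NameNode metadata.
            BlockLocation[] blocks = fs.getFileBlockLocations(st, 0, st.getLen());
            System.out.println("blocks = " + blocks.length);
        }
    }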

4.

MULTIPLE CHOICE QUESTION

30 sec • 5 pts

Suppose that, in Hadoop 2.0, we have a 750 MB input file and 3 nodes in the cluster. With the default replication factor, what will be the total number of blocks generated in HDFS for that file?

10

24

12

18

Answer explanation

Number of data blocks = ceil(file size / block size) = ceil(750 MB / 128 MB) = 6.

With the default replication factor of 3, the total is 6 × 3 = 18 blocks.
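
The same arithmetic as a runnable Java snippet, using ceiling division so that the 110 MB remainder still occupies a (partially filled) sixth block:

    public class BlockMath {
        public static void main(String[] args) {
            long fileMb = 750, blockMb = 128; // Hadoop 2.x default block size
            long blocks = (fileMb + blockMb - 1) / blockMb; // ceiling division
            int replication = 3;                            // default replication factor
            System.out.println("blocks = " + blocks);               // 6
            System.out.println("total  = " + blocks * replication); // 18
        }
    }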

5.

MULTIPLE CHOICE QUESTION

30 sec • 5 pts

What kind of scaling does HDFS primarily support?

Vertical

Horizontal

Adaptive

Diagonal

Answer explanation

HDFS scales horizontally: more nodes are added to the existing cluster, which can be done without stopping the running system.

6.

MULTIPLE CHOICE QUESTION

30 sec • 5 pts

How does the NameNode get to know that a data block is corrupted?

Heartbeat

Secondary NameNode sends Notification

Metadata

Block Report

7.

MULTIPLE CHOICE QUESTION

30 sec • 5 pts

What does Apache Spark provide?

Storage + computation

Everything that Hadoop core provides

Only computation

Only storage
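
For context, Spark is a computation engine only: it ships no distributed storage of its own and typically reads its input from an external store such as HDFS. A minimal Java sketch, with a hypothetical HDFS URI and path:

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;

    public class SparkComputeDemo {
        public static void main(String[] args) {
            SparkConf conf = new SparkConf()
                    .setAppName("compute-only-demo")
                    .setMaster("local[*]"); // run locally for the sketch

            try (JavaSparkContext sc = new JavaSparkContext(conf)) {
                // Spark stores nothing durable itself; it computes over data
                // held in an external store such as HDFS (URI is hypothetical).
                JavaRDD<String> lines =
                        sc.textFile("hdfs://namenode:9000/data/input.txt");
                long nonEmpty = lines.filter(l -> !l.isEmpty()).count();
                System.out.println("non-empty lines = " + nonEmpty);
            }
        }
    }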
