Hadoop Distributed File System (HDFS) Block Replication – Big Data Analytics
The design of HDFS is based on two types of nodes: one NameNode and one or more DataNodes.
When a client wants to write data, it first communicates with the NameNode and requests to create a file. The NameNode determines how many blocks are needed and provides the client with the DataNodes that will store the data. As part of the storage process, the data blocks are replicated after they are written to the assigned node.
Let us take an example to understand the concept of block replication in HDFS:
Assume there are 4 DataNodes
The client wants to write 150MB of Data
NameNode divides the data into 3 blocks (the HDFS block size here is 64MB).
The first block is 64MB, the second is 64MB, and the third is 22MB.
The NameNode assigns DataNode 1 and DataNode 2 to the client to store the data.
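The arithmetic of this walk-through can be sketched in a few lines of Python. This is an illustrative helper, not part of any Hadoop API: it divides the 150MB write into 64MB blocks and shows the total space the cluster uses once each block also has one replica (a replication factor of 2, matching the example).

```python
# Hypothetical sketch: dividing a 150 MB write into 64 MB HDFS blocks.
# split_into_blocks is an illustrative helper, not a Hadoop function.

def split_into_blocks(size_mb, block_mb=64):
    """Return the sizes (in MB) of the blocks a file of size_mb occupies."""
    blocks = []
    while size_mb > 0:
        blocks.append(min(block_mb, size_mb))
        size_mb -= block_mb
    return blocks

blocks = split_into_blocks(150)
print(blocks)           # [64, 64, 22]
print(sum(blocks) * 2)  # 300 -> MB stored with replication factor 2
```

Note that the last block is only as large as the data that remains; it does not occupy a full 64MB.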
The original storage of the blocks is shown below:
Here blocks 1 and 2 are stored in DataNode 1 and block 3 is stored in DataNode 2.
| DataNode 1 | blocks 1 and 2 |
| DataNode 2 | block 3 |
The replicated blocks are stored on DataNode 3 and DataNode 4, as shown below.
| DataNode 1 | blocks 1 and 2 |
| DataNode 2 | block 3 |
| DataNode 3 | replica of block 1 |
| DataNode 4 | replicas of blocks 2 and 3 |
The NameNode will attempt to place replicas of the data blocks on nodes in separate racks, if possible. If there is only one rack, the replicated blocks are written to other servers in the same rack.
After the DataNode acknowledges that the file block replication is complete, the client closes the file and informs the NameNode that the operation is complete.
Note that the NameNode does not write any data directly to the DataNodes. However, it does give the client a limited amount of time to complete the operation; if the write does not complete within that period, the operation is canceled.
When HDFS writes a file, the file will be replicated across the cluster.
For Hadoop clusters containing more than eight DataNodes, the replication value is usually set to 3.
In a Hadoop cluster of eight or fewer DataNodes but more than one DataNode, a replication factor of 2 is adequate.
The replication factor is controlled by the value of dfs.replication in the hdfs-site.xml file; if it is not set, the default is 3.
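As a sketch of what this looks like in practice, the property below could be added to hdfs-site.xml to set the cluster-wide replication factor (the property name dfs.replication is real; the surrounding file layout is the standard Hadoop configuration format):

```xml
<!-- hdfs-site.xml fragment: set the default replication factor -->
<property>
  <name>dfs.replication</name>
  <value>3</value>
</property>
```

A value of 2 would suit the small clusters described above; existing files keep the replication factor they were written with unless it is changed explicitly.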
Rack Awareness in HDFS Block Replication – Big Data Analytics
If all the DataNodes are on a single rack, the replicated blocks are stored on that same rack but on different DataNodes. Here, DataNodes 1, 2, 3, and 4 are all on the same rack, so each replica is placed on a different DataNode within it.
[Figure: single-rack placement – all four DataNodes are on Rack 1, and each replica is stored on a different DataNode within that rack.]
If the DataNodes span more than one rack, the replicated blocks are stored on different racks. Here, DataNodes 1 and 2 are on Rack 1, and DataNodes 3 and 4 are on Rack 2. Because there are two racks, each block is replicated to the other rack: data blocks 1 and 2 are stored on Rack 1, so they are replicated to Rack 2; similarly, data block 3 is stored on Rack 2, so it is replicated to Rack 1.
[Figure: two-rack placement – blocks 1 and 2 (Rack 1) are replicated to Rack 2, and block 3 (Rack 2) is replicated to Rack 1.]
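The rack-awareness rule described above can be sketched as a small Python function. This is an illustrative simulation, not Hadoop's actual placement code; the node and rack names are hypothetical. It prefers a node on a different rack for the replica, and falls back to a different node on the same rack when only one rack exists:

```python
# Illustrative sketch of rack-aware replica placement (not Hadoop source).

def place_replica(primary_node, racks):
    """racks maps rack name -> list of DataNodes; return a node for one replica."""
    primary_rack = next(r for r, nodes in racks.items() if primary_node in nodes)
    # Prefer a node on a different rack, if any exists.
    for rack, nodes in racks.items():
        if rack != primary_rack and nodes:
            return nodes[0]
    # Only one rack: fall back to a different node on the same rack.
    return next(n for n in racks[primary_rack] if n != primary_node)

two_racks = {"rack1": ["dn1", "dn2"], "rack2": ["dn3", "dn4"]}
one_rack = {"rack1": ["dn1", "dn2", "dn3", "dn4"]}
print(place_replica("dn1", two_racks))  # dn3 -> a node on the other rack
print(place_replica("dn1", one_rack))   # dn2 -> same rack, different node
```

Hadoop's real policy is more involved (for a factor of 3 it writes one replica locally, one on a remote rack, and one on a different node of that remote rack), but the same preference for crossing rack boundaries applies.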
In a typical operating system, such as Windows or UNIX, the file-system block size is 4KB or 8KB.
The default block size in the Hadoop Distributed File System (HDFS) is 64MB (128MB in Hadoop 2.x and later); this is a configurable default, not a maximum.
The HDFS default block size is not the minimum block size.
If a 20KB file is written to HDFS, it will create a block that is approximately 20KB in size.
If a file of size 80MB is written to HDFS, a 64MB block and a 16MB block will be created.
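The two claims above can be checked with the same block arithmetic at byte granularity. This is illustrative arithmetic, not a Hadoop API; it shows that a small file occupies only a block of its actual size, while an 80MB file splits into a full block plus the remainder:

```python
# Illustrative arithmetic (not a Hadoop API): block sizes in bytes for
# the two files discussed above, with a 64 MB block size.
KB = 1024
MB = 1024 * 1024
BLOCK = 64 * MB

def hdfs_block_sizes(file_size):
    """Return the byte sizes of the HDFS blocks for a file of file_size bytes."""
    sizes = []
    remaining = file_size
    while remaining > 0:
        sizes.append(min(BLOCK, remaining))
        remaining -= BLOCK
    return sizes

print(hdfs_block_sizes(80 * MB))  # two blocks: 64 MB and 16 MB
print(hdfs_block_sizes(20 * KB))  # one 20 KB block, not a full 64 MB
```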
This article discussed the block replication factor in the Hadoop Distributed File System (HDFS) – Big Data Analytics.