HADOOP TALKS - Contributing limited storage as a slave node to the cluster

In a Hadoop cluster, sometimes we don't want a data node (slave) to contribute its whole storage to the cluster.
There are a few ways to solve this challenge. Here I'm sharing one of the efficient ones…
So let’s start…
I have already set up my Hadoop cluster with 1 Master Node and 1 Slave Node on top of AWS Cloud. To the slave node, I have attached one more 8 GB hard disk. I want to contribute only 4 GB of this disk to the Hadoop cluster.

You can see that I have an extra 8 GB hard disk attached to my slave node.
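If you're following along without the screenshot, you can confirm the new disk from the slave node's terminal. The device name /dev/xvdf is an assumption (a common name for an attached EBS volume); check the output of lsblk for yours.

```
# List block devices to confirm the new 8 GB disk is attached.
lsblk
# Inspect the disk (assuming it appeared as /dev/xvdf).
fdisk -l /dev/xvdf
```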
- Now I'm first going to create a new 4 GB partition...
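Here is a rough sketch of the fdisk session, assuming the new disk came up as /dev/xvdf:

```
# Open the disk in fdisk to create the partition.
fdisk /dev/xvdf
# At the fdisk prompt:
#   n       -> create a new partition
#   p       -> primary partition
#   1       -> partition number
#   <Enter> -> accept the default first sector
#   +4G     -> make the partition 4 GB in size
#   w       -> write the partition table and exit
```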


Follow the same steps and your partition will be created.
- Now, the next step is to format this partition…
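A minimal sketch, assuming the new partition was created as /dev/xvdf1 (I'm using ext4 here, but any filesystem your OS supports will do):

```
# Format the 4 GB partition with an ext4 filesystem.
mkfs.ext4 /dev/xvdf1
```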

- Now we need to mount this partition. For this, I'm creating a directory using the mkdir command…
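For example (the directory name /dn is hypothetical; use any path you like):

```
# Create a mount point for the DataNode's storage.
mkdir /dn
# Mount the 4 GB partition on it (assuming /dev/xvdf1 from above).
mount /dev/xvdf1 /dn
# Verify the mount; /dn should show roughly 4 GB of space.
df -h /dn
```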


Now it’s done.
Now I need to set this directory in Hadoop's hdfs-site.xml file…
Go to /etc/hadoop and then edit hdfs-site.xml
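For reference, a minimal sketch of the edit, written from the shell. The property name dfs.data.dir is the Hadoop 1.x name; on Hadoop 2.x/3.x it is dfs.datanode.data.dir. The value /dn is the directory mounted above, and overwriting the whole file like this assumes your hdfs-site.xml holds nothing else you need to keep:

```
# Point the DataNode's storage at the mounted 4 GB directory.
cat > /etc/hadoop/hdfs-site.xml <<'EOF'
<?xml version="1.0"?>
<configuration>
  <property>
    <name>dfs.data.dir</name>
    <value>/dn</value>
  </property>
</configuration>
EOF
```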


That's it. Now I need to start the Hadoop services on both the Master and the Data Node, and check whether it worked or not…
- In Slave Node:
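Assuming a Hadoop 1.x-style setup, something like:

```
# Start the DataNode daemon on the slave.
hadoop-daemon.sh start datanode
# Confirm the DataNode process is running.
jps
```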

- In Master Node:
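Again a sketch for a Hadoop 1.x-style setup (on Hadoop 2.x/3.x the report command is hdfs dfsadmin -report):

```
# Start the NameNode daemon on the master.
hadoop-daemon.sh start namenode
# Check the cluster report; the slave's configured capacity
# should now be roughly 4 GB, not the full 8 GB disk.
hadoop dfsadmin -report
```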

You can see that my Hadoop cluster is now able to use only 4 GB from my slave node.
Thanks for reading :)