HADOOP TALKS - Contributing limited storage as a slave node to the cluster

Gaurav Gupta
Oct 17, 2020

--

In a Hadoop cluster, sometimes we don't want a data node (slave) to contribute its whole storage to the master node.

There are a few ways to solve this challenge. Here I'm sharing one of the efficient ones…

So let’s start…

I have already set up my Hadoop cluster with 1 Master Node and 1 Slave Node on top of AWS Cloud. To the slave node, I have attached one more 8 GB hard disk, and I want to contribute only 4 GB of that disk to the Hadoop cluster.

You can see that I have one extra 8 GB hard disk attached to my slave node.
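For reference, a minimal sketch of how to confirm the attached disk from the slave node's terminal (the device name /dev/xvdf is an assumption; on your AWS instance the extra EBS volume may show up under a different name):

    # List block devices on the slave node; the extra 8 GB disk
    # typically appears as /dev/xvdf on an EC2 instance (assumed name)
    lsblk
    fdisk -l /dev/xvdf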

  • Now, first I'm going to create one new 4 GB partition (the fdisk steps are sketched just below)...

Follow the same steps and your partition will be created.
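Here is a rough sketch of the interactive fdisk session for this step, assuming the new disk is /dev/xvdf (check yours with lsblk first):

    # Create a new 4 GB primary partition on the attached disk
    fdisk /dev/xvdf
    #   n       -> new partition
    #   p       -> primary partition
    #   1       -> partition number
    #   <Enter> -> accept the default first sector
    #   +4G     -> make the partition 4 GB in size
    #   w       -> write the partition table and exit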

  • Next, we need to format this partition…
  • Then we need to mount it; for this, I'm creating one directory using the mkdir command (the commands are sketched after this list)…
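A minimal command sketch for formatting and mounting, assuming the new partition came up as /dev/xvdf1 and using /dn1 as an example mount point (both names are assumptions; use your own):

    # Format the 4 GB partition with an ext4 filesystem
    mkfs.ext4 /dev/xvdf1

    # Create a mount point and mount the partition on it
    mkdir /dn1
    mount /dev/xvdf1 /dn1

    # Verify that roughly 4 GB is available on the mount point
    df -h /dn1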

Now it’s done.

Now I need to set this directory in Hadoop's hdfs-site.xml file…

Go to /etc/hadoop and then edit hdfs-site.xml
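The property to add inside the <configuration> block looks roughly like this (dfs.datanode.data.dir is the Hadoop 2.x/3.x property name; older Hadoop 1.x setups use dfs.data.dir instead, and /dn1 is the example mount point assumed above):

    <!-- hdfs-site.xml on the data node: point HDFS at the 4 GB mount -->
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/dn1</value>
    </property>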

That's it. Now I need to start my Hadoop services on both the Master Node and the Data Node and check whether it worked or not…

  • In Slave Node: start the DataNode service.
  • In Master Node: start the NameNode service and check the cluster report (a command sketch follows below).
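A minimal sketch of these commands, assuming a Hadoop 1.x/2.x style installation (in Hadoop 3.x the equivalent is hdfs --daemon start datanode/namenode):

    # On the slave node: start the DataNode daemon
    hadoop-daemon.sh start datanode

    # On the master node: start the NameNode daemon
    hadoop-daemon.sh start namenode

    # On the master node: check the cluster report; the slave node
    # should report roughly 4 GB of configured capacity
    # (older Hadoop 1.x uses: hadoop dfsadmin -report)
    hdfs dfsadmin -report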

You can see that my Hadoop cluster is now only able to use 4 GB from my slave node.

Thanks for reading :)
