Sunday, July 24, 2016

Fully Distributed Hadoop Cluster with Automatic Failover Namenode HA Using QJM & ZooKeeper on LXD Containers


Various approaches have been taken to meet the increased demand for highly reliable infrastructures that provide the "five nines" standard of availability. High-availability designs have long served critical systems by eliminating single points of failure and by handling increased system load in the most efficient way.

The NameNode was a single point of failure in HDFS clusters built on Apache Hadoop 1.x releases. The Apache Hadoop 2.x releases later came with a high-availability solution to overcome the NameNode's single point of failure. The HDFS high-availability feature provides an option to run two NameNodes redundantly in an active/passive configuration: one of the two NameNodes is always in the "active" state while the other is a hot standby. Only the NameNode that is in the active state can write to the majority of JournalNodes, which avoids a split-brain scenario. Whenever there is a modification to the namespace, the active NameNode records the modification in the JournalNodes. In the event of a failure, the standby node makes sure that it has read the edit logs from the JournalNodes and has the most up-to-date copy of them. The DataNodes send block location reports and heartbeats to both the active and standby nodes, so that both nodes are fully synchronized before a failover occurs.
The JournalNode daemon can be collocated with the NameNode, ResourceManager or JobTracker daemons. There must be a minimum of three JournalNodes in an HDFS cluster, so that edit log writes can always reach a majority.
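To make this concrete, here is a minimal hdfs-site.xml sketch for the layout used in this post. The nameservice name "mycluster" and the NameNode IDs "nn1"/"nn2" are illustrative choices of ours, not values from this post; the hostnames and role placement follow the setup summary below (JournalNodes on node1, node2 and node6):

    <!-- hdfs-site.xml sketch: one logical nameservice with two NameNodes -->
    <property>
      <name>dfs.nameservices</name>
      <value>mycluster</value>
    </property>
    <property>
      <name>dfs.ha.namenodes.mycluster</name>
      <value>nn1,nn2</value>
    </property>
    <property>
      <name>dfs.namenode.rpc-address.mycluster.nn1</name>
      <value>node1:8020</value>
    </property>
    <property>
      <name>dfs.namenode.rpc-address.mycluster.nn2</name>
      <value>node2:8020</value>
    </property>
    <!-- the active NameNode writes edits to a quorum of JournalNodes -->
    <property>
      <name>dfs.namenode.shared.edits.dir</name>
      <value>qjournal://node1:8485;node2:8485;node6:8485/mycluster</value>
    </property>
    <!-- clients use this class to find whichever NameNode is active -->
    <property>
      <name>dfs.client.failover.proxy.provider.mycluster</name>
      <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <!-- fence the old active over SSH; key path assumes the hadmin user -->
    <property>
      <name>dfs.ha.fencing.methods</name>
      <value>sshfence</value>
    </property>
    <property>
      <name>dfs.ha.fencing.ssh.private-key-files</name>
      <value>/home/hadmin/.ssh/id_rsa</value>
    </property>
    <property>
      <name>dfs.ha.automatic-failover.enabled</name>
      <value>true</value>
    </property>

Automatic failover additionally requires ha.zookeeper.quorum set to node3:2181,node4:2181,node5:2181 in core-site.xml, and a ZKFailoverController (zkfc) process running beside each NameNode.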

SETUP SUMMARY

Node Name    IP Address       Hostname   Roles
NameNode1    192.168.56.11    node1      NameNode, JournalNode
NameNode2    192.168.56.12    node2      NameNode, JournalNode
DataNode1    192.168.56.13    node3      DataNode, ZooKeeper
DataNode2    192.168.56.14    node4      DataNode, ZooKeeper
DataNode3    192.168.56.15    node5      DataNode, ZooKeeper
Client       192.168.56.16    node6      Client, JournalNode
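Every node must be able to resolve all of these hostnames; a matching /etc/hosts for all six containers, taken straight from the table above, would be:

    192.168.56.11   node1
    192.168.56.12   node2
    192.168.56.13   node3
    192.168.56.14   node4
    192.168.56.15   node5
    192.168.56.16   node6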

PREREQUISITES
We have set up a six-node Hadoop cluster using the Apache Hadoop 2.7.2 binary package inside LXD containers. Download the Apache Hadoop 2.7.2 binary package from this link, then copy and extract it into the /opt directory on all nodes. The /opt directory should have the required permissions and ownership for the "hadmin" user.
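As a sketch, assuming the downloaded tarball has been copied to /tmp on each node (the filename hadoop-2.7.2.tar.gz matches the official binary package), the extraction and ownership steps would be:

    # as root, on every node
    tar -xzf /tmp/hadoop-2.7.2.tar.gz -C /opt
    chown -R hadmin:hadoop /opt/hadoop-2.7.2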
We have a running LXD 2.0.2 server on the Ubuntu 16.04 Desktop operating system. Follow this link to learn about and set up LXD.
Each container runs a CentOS 6 image configured with a static IP address, iptables and SELinux disabled, and password-less SSH set up.
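For reference, creating the six containers could look like the sketch below. The image alias images:centos/6/amd64 is an assumption that depends on your configured image server, and the static IP, iptables and SELinux settings still have to be applied inside each container afterwards:

    # on the LXD host
    for i in 1 2 3 4 5 6; do
        lxc launch images:centos/6/amd64 node$i
    done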
Both NameNodes should be able to reach themselves, each other, and all DataNodes over password-less SSH as the "root", "huser" and "hadmin" users. Configure the "huser" and "hadmin" users as members of the "hadoop" group. We will perform all Hadoop-related tasks as the "hadmin" user.
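A hedged sketch of the user and key preparation follows; run the key-distribution steps once per user on node1 and node2 (ssh-copy-id will prompt for each target node's password):

    # as root, on every node: create the group and the group members
    groupadd hadoop
    useradd -m -G hadoop huser
    useradd -m -G hadoop hadmin

    # as root, huser and hadmin on node1 and node2: generate and distribute keys
    ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
    for h in node1 node2 node3 node4 node5 node6; do
        ssh-copy-id "$h"
    done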
Java 7 is required on all nodes. Install Java as root inside the /opt directory on all nodes.
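For example, with an Oracle JDK 7 tarball (the exact update level, 7u80 here, is an assumption):

    # as root, on every node
    tar -xzf /tmp/jdk-7u80-linux-x64.tar.gz -C /opt
    # make it the default for the hadmin user, e.g. in ~/.bashrc:
    export JAVA_HOME=/opt/jdk1.7.0_80
    export PATH=$JAVA_HOME/bin:$PATH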

ZOOKEEPER INSTALLATION
To install and configure ZooKeeper, refer to this post.
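For orientation only, the quorum section of zoo.cfg on node3, node4 and node5 would follow the layout above; dataDir is an assumed path and the ports are ZooKeeper defaults:

    tickTime=2000
    initLimit=10
    syncLimit=5
    dataDir=/opt/zookeeper/data
    clientPort=2181
    server.1=node3:2888:3888
    server.2=node4:2888:3888
    server.3=node5:2888:3888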

TAGS: Fully Distributed / Hadoop Cluster / Automatic Failover / High Availability / Namenode HA / ZooKeeper / Quorum Journal Manager / LXD / Linux / Containers