WANdisco’s Distributed Data Replication Approach to Hadoop’s Single Point of Failure

February 6, 2013

Facebook, Amazon, and Twitter may rely on Hadoop to process data, but despite their legions of talented engineers, none of these tech titans can guarantee Hadoop will run without failure.

That’s because the metadata service in Hadoop’s Distributed File System (HDFS), the NameNode, is an inherent weak point: each cluster depends on a single NameNode, and when that NameNode goes down, the entire cluster becomes unusable, blocking access to mission-critical applications. It’s a single-point-of-failure (SPOF) dilemma that’s prompting some companies to hesitate before jumping on the Hadoop bandwagon.
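The SPOF problem can be illustrated with a toy model. The sketch below is not HDFS or WANdisco code; the `NameNode` and `Cluster` classes are hypothetical stand-ins that show why a single metadata server takes the whole cluster down with it, while replicating the metadata service (the approach the article describes) lets the cluster survive the loss of one node.

```python
# Toy model of the NameNode SPOF (illustrative only, not real HDFS code).

class NameNode:
    """A hypothetical metadata server that can be up or down."""
    def __init__(self, name):
        self.name = name
        self.up = True

class Cluster:
    """A cluster whose metadata lookups require a reachable NameNode."""
    def __init__(self, namenodes):
        self.namenodes = namenodes

    def lookup(self, path):
        # A lookup succeeds if any NameNode replica is still up.
        for nn in self.namenodes:
            if nn.up:
                return f"{nn.name} served metadata for {path}"
        raise RuntimeError("cluster down: no NameNode available")

# Classic HDFS layout: one NameNode. Losing it loses the cluster.
spof = Cluster([NameNode("nn1")])
spof.namenodes[0].up = False
try:
    spof.lookup("/data/file")
except RuntimeError as e:
    print(e)  # every lookup now fails; the cluster is unusable

# Replicated metadata service: one NameNode failing is no longer fatal.
replicated = Cluster([NameNode("nn1"), NameNode("nn2"), NameNode("nn3")])
replicated.namenodes[0].up = False
print(replicated.lookup("/data/file"))  # another replica answers
```

The point of the replicated variant is only that availability survives a single node failure; keeping the replicas' metadata consistent is the hard part that replication products must solve.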
