Hadoop in the Enterprise?

Barbara Murphy

CMO

I recently attended a panel event at the Churchill Club in Silicon Valley (http://www.churchillclub.org). “The Elephant in the Enterprise: What Role Will Hadoop Play?” certainly packed the house. The panel included users and vendors from Facebook, Cloudera, Oracle, MapR, and Metamarkets, each providing a very different perspective on Hadoop’s readiness for the enterprise. It was interesting to compare the user perspective, “…it is no way ready for primetime,” with the vendor perspective, “…of course it’s ready!”

On the user side, Jay Parikh, VP of Infrastructure Engineering at Facebook, and Michael Driscoll, CEO at Metamarkets, offered a cautious view of Hadoop’s readiness for the mainstream, particularly around high availability and scalability. Facebook runs the largest single Hadoop cluster, now over 100 petabytes (PB), and Parikh cautioned that data growth on an exponential curve is far outpacing the open source community’s progress on scalability and high availability. The Achilles heel of the Hadoop architecture is the single namenode (metadata server): if that one process goes down, it takes the entire cluster with it. While Mike Olson claims the problem is fixed in Hadoop, the folks at Hortonworks were clear that the new code base is not yet a stable solution.
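To make that single point of failure concrete, here is a minimal sketch against the standard Hadoop 1.x client API. The hostname and port are placeholders, not a real cluster; the point is simply that every metadata operation a client performs resolves through the one namenode configured in fs.default.name.

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class NameNodeDependency {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // In the classic (pre-HA) deployment, every client is pointed at
        // exactly one namenode. The hostname below is a placeholder.
        conf.set("fs.default.name", "hdfs://namenode.example.com:8020");

        FileSystem fs = FileSystem.get(
                URI.create("hdfs://namenode.example.com:8020"), conf);

        // Listing a directory is a pure metadata operation, served entirely
        // by the namenode. If that single process is down, this call fails,
        // even though every data block on the datanodes is intact.
        for (FileStatus status : fs.listStatus(new Path("/"))) {
            System.out.println(status.getPath());
        }
    }
}
```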

The big box vendor approach is to sell a turn-key, preconfigured, all-in-one appliance dedicated to Hadoop, accompanied by a support contract from Cloudera or MapR. This is fine, but it adds yet more complexity for system administrators. As Michael Driscoll from Metamarkets put it so well, “It is an oxymoron to talk about a big data appliance, because I think big data is too big to fit on any one box. The beautiful thing about Hadoop is how scalable it is, [Hadoop offers] flexible scale as your data ramps up, which is harder to do with some of the appliances.”

A recent survey released by Karmasphere (www.karmasphere.com) confirmed that 55% of Hadoop users are running clusters of between 1TB and 10TB, and that 32% of all Hadoop users have less than 2TB of storage. This is not exactly the kind of workload that any big box vendor would struggle with, but it highlights that the big data analytics market is nascent, with little testing at enterprise scale. By contrast, Facebook’s largest cluster is over 100PB (one hundred thousand terabytes, more than four orders of magnitude beyond a 2TB deployment) and has a dedicated team of 60 people to keep it operational. Set against these two extremes, it is clear why Jay Parikh questions the claim that Hadoop is enterprise ready: it simply has not been tested at scale in the enterprise.

The reason this topic is of interest to Panasas is that scalability, availability, and manageability have been the core of our business in big data for the design and discover market segments (http://www.panasas.com/solutions/big-data). Hadoop addresses big data for the decide segment, and while that market is still in its infancy, it will have to solve the same problems Panasas has been solving in design and discover for many years. We have already solved the problem of scaling to thousands of compute nodes that share petabytes of storage with very high performance, no single point of failure, and a simple management interface. More importantly, as Michael Driscoll highlighted, the model for big data is flexible scale as your data ramps, not a pre-configured black box that forces you to buy ever more black boxes as your data set grows. The Panasas architecture is similar to Hadoop’s in that it separates the metadata service (Hadoop’s namenode) from the storage servers, but Panasas goes further: it scales across many metadata services, provides integrated management that turns a large collection of hardware and software into a single appliance, and builds in high availability and failover for all services. These features have been hardened by more than eight years of production use in the enterprise.
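To illustrate the general idea behind many metadata services, here is a toy sketch; it is not Panasas code or any real API, just one simple way (hash-partitioned ownership of the namespace) to show why spreading metadata across several servers removes the single point of failure that a lone namenode creates.

```java
import java.util.List;

// Toy sketch only -- not Panasas code or a real API. Each file path is
// hashed to one of several metadata servers, so no single metadata
// process owns the entire namespace.
public class MetadataRouter {
    private final List<String> metadataServers;

    public MetadataRouter(List<String> metadataServers) {
        this.metadataServers = metadataServers;
    }

    // Losing one server affects only the slice of the namespace it owns,
    // and that slice can fail over to a peer rather than stopping the
    // whole cluster.
    public String ownerFor(String path) {
        int index = Math.floorMod(path.hashCode(), metadataServers.size());
        return metadataServers.get(index);
    }

    public static void main(String[] args) {
        MetadataRouter router = new MetadataRouter(
                List.of("mds-1", "mds-2", "mds-3"));
        System.out.println(router.ownerFor("/projects/sim/run42.dat"));
    }
}
```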

Watch the complete panel discussion here – http://www.youtube.com/user/ChurchillClub