The New Age of Big Data: Is It the Death of Hadoop?

Big Data has gone through several transformations over the years, evolving into the term we recognize today. From its first widespread use on the back of Hadoop and MapReduce, a new age of Big Data has been ushered in by the spread of technologies such as Kubernetes, Spark, and NoSQL databases.

Individually, these might not serve the exact same purpose as Hadoop, but they fill the same niche and do the same job, with features the original platform’s designers never envisioned.

The multi-cloud architecture boom and increasing emphasis on real-time data may just mean the end of Big Data as we know it, and Hadoop with it.

A Brief History of Big Data

The use of data for making business decisions can be traced back to ancient civilizations in Mesopotamia. However, the age of Big Data as we know it only dates back to 2005, when O’Reilly Media coined the phrase to describe the massive amounts of data the world was beginning to produce on the internet.

The newly dubbed Web 2.0 needed to be indexed and made easily searchable, and Yahoo, being the behemoth it was, was just the right company for the job. Hadoop was born of the efforts of Yahoo engineers, building on Google’s MapReduce model under the hood. A new era of Big Data had begun, and Hadoop was at the forefront of the revolution.

The new technologies led to a fundamental shift in the way the world regarded data processing. Traditional assumptions of atomicity, consistency, isolation, and durability (ACID) began to fade, and new use cases for previously unusable data began to emerge.

Hadoop began its life as a commercial platform with the launch of Cloudera in 2008, followed by rivals such as Hortonworks, EMC, and MapR. It continued its momentous run until it seemingly hit its peak in 2015, after which its place in the enterprise market would never be guaranteed again.

Where Hadoop Couldn’t Keep Up

Hadoop made its mark in the world of Big Data as a platform to collect, store, and analyze large swathes of data. However, not even a technology as revolutionary and versatile as Hadoop could exist without drawbacks.

Some of these would prove so costly that developers preferred to design whole new systems to deal with them. With time, Hadoop started to lose its charm, unable to grow past its initial vision as batch-oriented Big Data software.

Hadoop is a machine made up of smaller moving parts that are incredibly efficient at what they do: crunching data. This leads to one of its first drawbacks – it has no built-in support for analytics. Hadoop processes your data well, but not necessarily in the form you need it – visual reports on the processed data, for instance.

MapReduce was also built from the ground up to be file-intensive, writing intermediate results to disk between stages. That makes it a great fit for simple, one-pass jobs, but not for iterative workloads, and for smaller datasets it turns out to be a rather inefficient solution.

Another area where Hadoop falls flat on its face is real-time processing and reporting. Hadoop suffers from the curse of time: it relies on techniques that even their originators (Google in particular) no longer depend on.

With MapReduce, every time you want to analyze a modified dataset (say, after adding or deleting data), you have to stream over the whole dataset again. Because of this, Hadoop is horrible at real-time reporting – a shortcoming that led to the creation of Percolator, MapReduce’s replacement for incremental processing within Google.
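
To make the batch model concrete, here is a minimal word-count sketch in the style of Hadoop Streaming, which lets the mapper and reducer be plain Python scripts reading stdin and writing stdout (the file names are illustrative). The point is the model itself: every run streams the entire input from scratch, so changing a single record means reprocessing everything.

```python
#!/usr/bin/env python3
# mapper.py - emits one "word<TAB>1" pair per word.
# Hadoop Streaming feeds input splits to this script on stdin.
import sys

for line in sys.stdin:
    for word in line.split():
        print(f"{word}\t1")
```

```python
#!/usr/bin/env python3
# reducer.py - Hadoop sorts mapper output by key before the
# reduce phase, so counts can be accumulated in a single pass.
import sys

current_word, count = None, 0
for line in sys.stdin:
    word, _, value = line.rstrip("\n").partition("\t")
    if word != current_word:
        if current_word is not None:
            print(f"{current_word}\t{count}")
        current_word, count = word, 0
    count += int(value)

if current_word is not None:
    print(f"{current_word}\t{count}")
```

Both scripts are stateless with respect to previous runs: there is no notion of “the data that changed since last time,” which is exactly the gap Percolator was built to fill.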

The emergence of better technology has also meant a rise in the number of threats against it, and a corresponding increase in the emphasis placed on security.

Unfortunately, Hadoop is nowhere close to being secure. As a matter of fact, its security settings are off by default, and the project has too much inertia to simply change that. To make things worse, plugging in security measures after the fact isn’t much easier.
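
As a rough illustration, even turning on baseline security means hand-editing cluster configuration files. The two core-site.xml properties below ship with insecure defaults (“simple” authentication, authorization disabled) on a stock installation, and actually using them presupposes a working Kerberos setup, which is its own undertaking:

```xml
<!-- core-site.xml: both properties default to insecure values
     ("simple" authentication, authorization disabled). -->
<property>
  <name>hadoop.security.authentication</name>
  <value>kerberos</value>
</property>
<property>
  <name>hadoop.security.authorization</name>
  <value>true</value>
</property>
```

And that is only the first step – principals, keytabs, and per-service settings still have to be wired up by hand.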

The Fall of Hadoop

With these and other shortcomings weighing on the data science world, new tools such as Hive, Pig, and Spark were created to work on top of Hadoop and overcome its weaknesses. But Hadoop itself simply couldn’t outgrow the shoes it had been made for.

The growth of NoSQL databases such as Hazelcast and MongoDB also meant that problems Hadoop was designed to solve were now being addressed by specialized, single-purpose tools rather than the ‘all or nothing’ approach Hadoop was built around. It wasn’t flexible enough to evolve beyond being batch processing software.

Over time, new Big Data challenges began to emerge that a large monolithic software like Hadoop couldn’t deal with, either. Being primarily file-intensive, it couldn’t keep up with the variety of data sources that were now available; its lack of support for dynamic schemas and on-the-fly queries, along with the rise of cloud infrastructure, drove people to seek different solutions. Hadoop had lost its grip on the enterprise world.

Businesses built around Hadoop infrastructure, like Cloudera and Hortonworks, were seeing less and less adoption. This led to the eventual merger of the two companies in 2019, and the same message rang out from different corners of the world at the same time: ‘Hadoop is dead.’

Is Hadoop Really Dead?

Hadoop still has a place in the enterprise world – the problems it was designed to solve still exist to this day. Even so, technologies such as Spark have largely taken over the space that Hadoop once occupied.

The question of Hadoop or Spark is one every data scientist has to contend with at some point, and most seem to be settling on the latter, thanks to the great advantages in speed it offers.
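
The contrast is visible even in a trivial job. Below is the same word count as a minimal PySpark sketch (the paths are placeholders): the whole pipeline fits in a few lines, and intermediate results are kept in memory across stages instead of being written out to disk between a map and a reduce phase.

```python
# word_count.py - the word-count job from earlier, expressed
# against Spark's RDD API. Intermediate results stay in memory
# across stages rather than being spilled to disk.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("WordCount").getOrCreate()

counts = (
    spark.sparkContext.textFile("hdfs:///input/corpus")  # placeholder path
    .flatMap(lambda line: line.split())
    .map(lambda word: (word, 1))
    .reduceByKey(lambda a, b: a + b)
)
counts.saveAsTextFile("hdfs:///output/word_counts")  # placeholder path
spark.stop()
```

For iterative workloads – machine learning passes over the same dataset, for example – this in-memory model is where Spark’s often-quoted speed advantage comes from.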

It’s unlikely Hadoop will see much more adoption over newer market entrants, especially considering the pace at which technology moves. It also doesn’t help that many alternatives have a much gentler learning curve than the convoluted monolith that is Hadoop. Companies like MapR and Cloudera have also begun to pivot away from Hadoop-only infrastructure toward more robust cloud-based solutions. Hadoop still has its place, but maybe not for long.

Edward Huskin

Edward Huskin is a freelance data and analytics consultant. He specializes in finding the best technical solution for companies to manage their data and produce meaningful insights.
