Big Data Processing Using Hadoop


As a CS freshman I was required to present on an emerging technology of my choice. I chose Big Data Processing Using Hadoop, with the focus on a general understanding of the core concepts rather than technical depth. So here is a four-part general write-up:

Part 1 covers the fundamentals of Big Data and Google’s milestone contributions to its processing.

Part 2 discusses the development of Hadoop, an open-source approach to processing Big Data.

Part 3 focuses on the MapReduce Framework.

Part 4 presents a simplified illustrative example focusing on the core workings of the MapReduce Framework.

Enjoy!

[box color=IBBlue]A four-part Big Data Series:

Part 1: Big Data, GFS and MapReduce – Google’s Historic Contributions
Part 2: Hadoop – The Open Source Approach To Big Data Processing You Ought To Know
Part 3: MapReduce – The Big Data Crunching Framework
Part 4: MapReduce Framework – How Does It Work?

[/box]



About Yusra Haider

Yusra Haider is an undergraduate Computer Science student who is just warming up to the global reach and the intellectually and socially engaging opportunities of our times, aka blogging. She has taken a few MOOC courses and is apparently hooked. She would love to receive comments, feedback, criticism, accolades, and everything in between at yusra.haider@technoduet.com