Apache Hadoop on Mac OS X
For some reason I started playing with Apache Hadoop (Core):
Hadoop is a software platform that lets one easily write and run applications that process vast amounts of data. Here's what makes Hadoop especially useful:
- Scalable: Hadoop can reliably store and process petabytes.
- Economical: It distributes the data and processing across clusters of commonly available computers. These clusters can number into the thousands of nodes.
- Efficient: By distributing the data, Hadoop can process it in parallel on the nodes where the data is located. This makes it extremely rapid.
- Reliable: Hadoop automatically maintains multiple copies of data and automatically redeploys computing tasks based on failures.

Hadoop implements MapReduce, using the Hadoop Distributed File System (HDFS). MapReduce divides applications into many small blocks of work. HDFS creates multiple replicas of data blocks for reliability, placing them on compute nodes around the cluster. MapReduce can then process the data where it is located. Hadoop has been demonstrated on clusters with 2000 nodes. The current design target is 10,000-node clusters.
I followed the Quickstart guide and I can confirm that it works on [en:Mac OS X] too, but I only managed to run it in “standalone” mode, which is useful for first-stage development and debugging.
To understand a bit more how it works, I decided to also go through the Map-Reduce Tutorial: I took the code of the Word Counter example (v1.0) and wrote a Character Counter: the same code, but with one more for loop and more internal documentation.
Granted, it won't help you without at least a minimum of study of what MapReduce is, but I would like to share the code of what I did:
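What follows is a sketch of that Character Counter, reconstructed on top of the WordCount v1.0 example from the tutorial (it uses the old org.apache.hadoop.mapred API; the CharCount class and job names are my own choice). Only the map() method really changes: the extra for loop walks each line character by character instead of tokenizing it into words.

```java
package org.myorg;

import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.*;

public class CharCount {

  public static class Map extends MapReduceBase
      implements Mapper<LongWritable, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private Text character = new Text();

    // map() receives one line of input at a time; the extra for loop
    // (the only real change from WordCount) emits the pair (char, 1)
    // for every single character of the line.
    public void map(LongWritable key, Text value,
                    OutputCollector<Text, IntWritable> output, Reporter reporter)
        throws IOException {
      String line = value.toString();
      for (int i = 0; i < line.length(); i++) {
        character.set(String.valueOf(line.charAt(i)));
        output.collect(character, one);
      }
    }
  }

  public static class Reduce extends MapReduceBase
      implements Reducer<Text, IntWritable, Text, IntWritable> {

    // reduce() receives all the 1s emitted for a given character
    // and sums them up: the output is (char, total occurrences).
    public void reduce(Text key, Iterator<IntWritable> values,
                       OutputCollector<Text, IntWritable> output, Reporter reporter)
        throws IOException {
      int sum = 0;
      while (values.hasNext()) {
        sum += values.next().get();
      }
      output.collect(key, new IntWritable(sum));
    }
  }

  public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf(CharCount.class);
    conf.setJobName("charcount");

    conf.setOutputKeyClass(Text.class);
    conf.setOutputValueClass(IntWritable.class);

    conf.setMapperClass(Map.class);
    // summing is associative, so the reducer doubles as a combiner
    conf.setCombinerClass(Reduce.class);
    conf.setReducerClass(Reduce.class);

    conf.setInputFormat(TextInputFormat.class);
    conf.setOutputFormat(TextOutputFormat.class);

    FileInputFormat.setInputPaths(conf, new Path(args[0]));
    FileOutputFormat.setOutputPath(conf, new Path(args[1]));

    JobClient.runJob(conf);
  }
}
```

You run it in standalone mode exactly like the tutorial's WordCount, something like `bin/hadoop jar charcount.jar org.myorg.CharCount input output` (jar name and paths here are just placeholders).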
If you're interested, the first thing to read is “MapReduce: Simplified Data Processing on Large Clusters”, a paper by Jeffrey Dean and Sanjay Ghemawat from Google Labs.