Apache Hadoop – Free Java-Based Programming Framework

Apache Hadoop is a free, Java-based programming framework that supports the processing of large data sets in a distributed computing environment. The Apache Hadoop project develops open-source software for reliable, scalable, distributed computing, and the framework allows companies such as Facebook and Twitter to process huge amounts of data with relative ease.

The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using a simple programming model. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. Rather than relying on hardware to deliver high availability, the library itself is designed to detect and handle failures at the application layer, delivering a highly available service on top of a cluster of computers, each of which may be prone to failure.

Hadoop was originally conceived on the basis of Google's MapReduce, in which an application is broken down into numerous small parts. Any of these parts (also called fragments or blocks) can be run on any node in the cluster. Hadoop makes it possible to run applications on systems with thousands of nodes involving thousands of terabytes of data.
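To make the model concrete, the sketch below is essentially the word-count example from Hadoop's own MapReduce tutorial: the map function runs independently on whichever node holds a fragment of the input and emits (word, 1) pairs, and the reduce function sums the counts for each word. The class and job names are illustrative, not taken from this article.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Mapper: each input split is processed independently, so this code
  // can run on any node in the cluster that holds a block of the input.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one); // emit (word, 1)
      }
    }
  }

  // Reducer: receives every count emitted for a given word and sums them.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {

    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class); // local pre-aggregation on each node
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

The same mapper and reducer code scales from a single machine to thousands of nodes without change; the framework decides where each fragment is processed.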

Hadoop's distributed file system, HDFS, facilitates rapid data transfer rates among nodes and allows the system to continue operating uninterrupted when a node fails. Because each data block is replicated across several nodes, the risk of catastrophic system failure is low, even if a significant number of nodes become inoperative.
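As a rough illustration of how a client interacts with HDFS, here is a minimal sketch using Hadoop's FileSystem API; the file path is hypothetical, and the cluster address is assumed to come from the standard configuration. Block splitting and replication happen transparently beneath these calls.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsExample {
  public static void main(String[] args) throws Exception {
    // Assumes fs.defaultFS in the loaded configuration points at the
    // cluster's NameNode (e.g. hdfs://namenode:8020, a hypothetical address).
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    Path file = new Path("/tmp/example.txt"); // hypothetical path

    // Write a file; HDFS splits it into blocks and replicates each
    // block across several DataNodes (three copies by default).
    try (FSDataOutputStream out = fs.create(file, true)) {
      out.write("hello hadoop\n".getBytes(StandardCharsets.UTF_8));
    }

    // Read it back; if a DataNode holding one replica has failed,
    // the client is routed to another replica automatically.
    try (BufferedReader in = new BufferedReader(
        new InputStreamReader(fs.open(file), StandardCharsets.UTF_8))) {
      System.out.println(in.readLine());
    }

    fs.close();
  }
}

This replication-based design is why a node failure interrupts neither running jobs nor access to stored data.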
