Hadoop is
a platform that provides both distributed storage and computational
capabilities.
Hadoop was first conceived to fix a scalability issue
that existed in Nutch, an open source crawler and search engine.
At the time, Google had published papers describing
its novel distributed filesystem, the Google File System (GFS), and MapReduce,
a computational framework for parallel processing.
The successful implementation of these papers' concepts in
Nutch resulted in its split into two separate projects, the second of which
became Hadoop, a first-class Apache project.
Hadoop has a distributed master-slave architecture that
consists of the Hadoop Distributed File System (HDFS) for storage and MapReduce
for computational capabilities.
Hadoop has
two major components:
- the distributed filesystem component, the main example of
which is the Hadoop Distributed File System, though other file systems are
supported.
- the MapReduce component, which is a framework for
performing calculations on the data in the distributed file system.
A node is
simply a computer, typically non-enterprise commodity hardware, that
contains data. So in this example, we have Node 1. Then we can add more nodes,
such as Node 2, Node 3, and so on. A collection of such nodes is called a rack.
A rack is a collection of 30 or 40 nodes that are physically
stored close together and are all connected to the same network switch.
Network bandwidth between any two nodes within a rack is greater
than the bandwidth between two nodes on different racks.
A Hadoop cluster (or just ‘cluster’ from now on) is a
collection of racks.
Apache Hadoop Fundamentals – HDFS and
MapReduce
Hadoop is open source software for
distributed computing that can query a large set of data and return
the results quickly using a reliable and scalable architecture.
In a traditional non-distributed architecture, data is stored on one server, and any client program accesses this central data server to retrieve the data.
The non-distributed model has a few fundamental issues. In this
model, you mostly scale vertically by adding more CPU, more storage,
and so on.
This architecture is also unreliable: if the main server
fails, you have to restore the data from a backup.
From a performance point of view, this architecture will not
return results quickly when you run a query against a huge data
set.
In a Hadoop
distributed architecture, both data and processing are distributed across
multiple servers. The following are some of the key points to remember about
Hadoop:
- Each and every server offers local computation and storage. That is, when you run a query against a large data set, every server in this distributed architecture executes the query on its local machine against its local data set. Finally, the result sets from all these local servers are consolidated.
- In simple terms, instead of running a query on a single server, the query is split across multiple servers, and the results are consolidated. This means that the results of a query on a large dataset are returned faster.
- You don't need a powerful server. Just use several less expensive commodity servers as individual Hadoop nodes.
- High fault tolerance. If any node in the Hadoop environment fails, the dataset is still returned properly, because Hadoop takes care of replicating and distributing the data efficiently across multiple nodes.
- A simple Hadoop implementation can use just two servers, but you can scale up to several thousand servers without any additional effort.
- Hadoop is written in Java, so it can run on any platform.
- Keep in mind that Hadoop is not a replacement for your RDBMS. You'll typically use Hadoop for unstructured data.
- Originally, Google started using the distributed computing model based on GFS (Google File System) and MapReduce. Later, Nutch (open source web search software) was rewritten using MapReduce. Hadoop was branched out of Nutch as a separate project. Now Hadoop is a top-level Apache project that has gained tremendous momentum and popularity in recent years.
HDFS
HDFS
stands for Hadoop Distributed File System, which is the storage system used by
Hadoop. The following are some of the key points to remember about the
high-level HDFS architecture:
- An HDFS cluster has one NameNode and multiple DataNodes (servers); data is stored on the DataNodes as blocks (b1, b2, and so on).
- When you dump a file (or data) into HDFS, it is stored in blocks on the various nodes in the Hadoop cluster. HDFS creates several replicas of the data blocks and distributes them across the cluster in a way that is reliable and allows the data to be retrieved quickly. A typical HDFS block size is 128 MB. Each and every data block is replicated to multiple nodes across the cluster.
- Hadoop internally makes sure that a node failure never results in data loss.
- There will be one NameNode that manages the file system metadata
- There will be multiple DataNodes (these are cheap commodity servers) that store the data blocks
- When you execute a query from a client, it first reaches out to the NameNode to get the file metadata information, and then reaches out to the DataNodes to read the actual data blocks (a sketch of this read path follows this list)
- Hadoop provides a command line interface for administrators to work on HDFS
- The NameNode comes with a built-in web server from where you can browse the HDFS filesystem and view some basic cluster statistics
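To make the client read path concrete, the following is a minimal sketch using Hadoop's Java FileSystem API. This is an assumed example rather than code from this article; the NameNode address and file path are placeholder values, and the Hadoop client libraries are assumed to be on the classpath.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsReadExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://namenode-host:8020"); // placeholder NameNode address

        // The client asks the NameNode for the file metadata; the block data itself
        // is then read directly from the DataNodes that host the replicas.
        try (FileSystem fs = FileSystem.get(conf);
             FSDataInputStream in = fs.open(new Path("/user/demo/input.txt"))) { // placeholder path
            byte[] buffer = new byte[4096];
            int bytesRead;
            while ((bytesRead = in.read(buffer)) != -1) {
                System.out.write(buffer, 0, bytesRead);
            }
            System.out.flush();
        }
    }
}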
MapReduce
The
following are some of the key points to remember about MapReduce:
- MapReduce is a parallel programming model used to process and retrieve data from the Hadoop cluster
- In this model, the library handles a lot of messy details that programmers don't need to worry about. For example, the library takes care of parallelization, fault tolerance, data distribution, load balancing, etc.
- MapReduce splits the tasks and executes them on the various nodes in parallel, thus speeding up the computation and retrieving the required data from a huge dataset quickly
- This provides a clear abstraction for programmers. They just have to implement (or use) two functions: map and reduce (see the WordCount sketch after this list)
- The data is fed into the map function as key/value pairs to produce intermediate key/value pairs
- Once the mapping is done, all the intermediate results from the various nodes are reduced to create the final output
- JobTracker keeps track of all the MapReduce jobs that are running on the various nodes. It schedules the jobs and keeps track of all the map and reduce tasks running across the nodes. If any one of those tasks fails, it reallocates the task to another node. In simple terms, JobTracker is responsible for making sure that the query on a huge dataset runs successfully and the data is returned to the client in a reliable manner.
- TaskTracker performs the map and reduce tasks that are assigned by the JobTracker. TaskTracker also constantly sends a heartbeat message to the JobTracker, which helps the JobTracker decide whether to delegate a new task to this particular node or not.
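To illustrate the map and reduce functions, here is the classic word-count example written against Hadoop's Java MapReduce API (the org.apache.hadoop.mapreduce package from newer releases). This is a generic sketch rather than code from this article; the input and output paths are supplied on the command line.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // map: emit (word, 1) for every word in the input line
    public static class TokenizerMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer tokens = new StringTokenizer(value.toString());
            while (tokens.hasMoreTokens()) {
                word.set(tokens.nextToken());
                context.write(word, ONE); // intermediate key/value pair
            }
        }
    }

    // reduce: sum the counts for each word across all mappers
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable count : values) {
                sum += count.get();
            }
            context.write(key, new IntWritable(sum)); // final output
        }
    }

    // driver: configure and submit the job
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}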
Data Organization
Data Blocks
HDFS is
designed to support very large files. Applications that are compatible with
HDFS are those that deal with large data sets. These applications write their
data only once but they read it one or more times and require these reads to be
satisfied at streaming speeds. HDFS supports write-once-read-many semantics on
files. A typical block size used by HDFS is 64 MB (the default in early Hadoop
releases; later releases default to 128 MB, as noted above). Thus, an HDFS file is
chopped up into block-sized chunks, and if possible, each chunk will reside on a
different DataNode.
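The block layout of a stored file can be observed from a client. The following is a small sketch (an assumed example, not from this article) that uses the Java FileSystem API to list each block of a hypothetical file and the DataNodes hosting its replicas.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListBlocks {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration(); // picks up cluster settings from the classpath
        try (FileSystem fs = FileSystem.get(conf)) {
            Path file = new Path("/user/demo/large-file.dat"); // placeholder path
            FileStatus status = fs.getFileStatus(file);

            // Each BlockLocation describes one block of the file and the DataNodes that hold its replicas.
            BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());
            for (BlockLocation block : blocks) {
                System.out.printf("offset=%d length=%d hosts=%s%n",
                        block.getOffset(), block.getLength(),
                        String.join(",", block.getHosts()));
            }
        }
    }
}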
Staging
A
client request to create a file does not reach the NameNode immediately. In
fact, initially the HDFS client caches the file data into a temporary local
file. Application writes are transparently redirected to this temporary local
file. When the local file accumulates data worth over one HDFS block size, the
client contacts the NameNode. The NameNode inserts the file name into the file
system hierarchy and allocates a data block for it. The NameNode responds to
the client request with the identity of the DataNode and the destination data
block. Then the client flushes the block of data from the local temporary file
to the specified DataNode. When a file is closed, the remaining un-flushed data
in the temporary local file is transferred to the DataNode. The client then tells
the NameNode that the file is closed. At this point, the NameNode commits the
file creation operation into a persistent store. If the NameNode dies before
the file is closed, the file is lost.
The
above approach has been adopted after careful consideration of target
applications that run on HDFS. These applications need streaming writes to
files. If a client writes to a remote file directly without any client side
buffering, the network speed and the congestion in the network impacts
throughput considerably. This approach is not without precedent. Earlier
distributed file systems, e.g. AFS, have
used client side caching to improve performance. A POSIX requirement has been
relaxed to achieve higher performance of data uploads.
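From the application's point of view, this staging is invisible: the program simply writes to an output stream and closes it, and the close commits the file. Below is a minimal write sketch (an assumed example, not from this article) using the Java FileSystem API; the path is a placeholder.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsWriteExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        try (FileSystem fs = FileSystem.get(conf)) {
            // Writes are buffered on the client side as described above; the caller just writes bytes.
            try (FSDataOutputStream out = fs.create(new Path("/user/demo/output.txt"))) { // placeholder path
                out.writeBytes("hello hdfs\n");
            } // close() flushes the remaining data and tells the NameNode the file is complete
        }
    }
}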
Replication
Pipelining
When a
client is writing data to an HDFS file, its data is first written to a local
file as explained in the previous section. Suppose the HDFS file has a
replication factor of three. When the local file accumulates a full block of
user data, the client retrieves a list of DataNodes from the NameNode. This
list contains the DataNodes that will host a replica of that block. The client
then flushes the data block to the first DataNode. The first DataNode starts
receiving the data in small portions (4 KB), writes each portion to its local
repository and transfers that portion to the second DataNode in the list. The
second DataNode, in turn, starts receiving each portion of the data block,
writes that portion to its repository and then flushes that portion to the
third DataNode. Finally, the third DataNode writes the data to its local
repository. Thus, a DataNode can be receiving data from the previous one in the
pipeline and at the same time forwarding data to the next one in the pipeline.
Thus, the data is pipelined from one DataNode to the next.
HDFS
has a master/slave architecture. An HDFS cluster consists of a single NameNode,
a master server that manages the file system namespace and regulates access to
files by clients. In addition, there are a number of DataNodes, usually one per
node in the cluster, which manage storage attached to the nodes that they run
on. HDFS exposes a file system namespace and allows user data to be stored in
files. Internally, a file is split into one or more blocks and these blocks are
stored in a set of DataNodes. The NameNode executes file system namespace
operations like opening, closing, and renaming files and directories. It also
determines the mapping of blocks to DataNodes. The DataNodes are responsible
for serving read and write requests from the file system’s clients. The
DataNodes also perform block creation, deletion, and replication upon
instruction from the NameNode.
The
NameNode and DataNode are pieces of software designed to run on commodity
machines. These machines typically run a GNU/Linux operating system (OS). HDFS is built
using the Java language; any machine that supports Java can run the NameNode or
the DataNode software. Usage of the highly portable Java language means that
HDFS can be deployed on a wide range of machines. A typical deployment has a
dedicated machine that runs only the NameNode software. Each of the other
machines in the cluster runs one instance of the DataNode software. The
architecture does not preclude running multiple DataNodes on the same machine
but in a real deployment that is rarely the case.
The
existence of a single NameNode in a cluster greatly simplifies the architecture
of the system. The NameNode is the arbitrator and repository for all HDFS
metadata. The system is designed in such a way that user data never flows
through the NameNode.
The
File System Namespace
HDFS
supports a traditional hierarchical file organization. A user or an application
can create directories and store files inside these directories. The file
system namespace hierarchy is similar to most other existing file systems; one
can create and remove files, move a file from one directory to another, or
rename a file. HDFS does not yet implement user quotas or access permissions.
HDFS does not support hard links or soft links. However, the HDFS architecture
does not preclude implementing these features.
The
NameNode maintains the file system namespace. Any change to the file system
namespace or its properties is recorded by the NameNode. An application can
specify the number of replicas of a file that should be maintained by HDFS. The
number of copies of a file is called the replication factor of that file. This
information is stored by the NameNode.
Data
Replication
HDFS is
designed to reliably store very large files across machines in a large cluster.
It stores each file as a sequence of blocks; all blocks in a file except the
last block are the same size. The blocks of a file are replicated for fault
tolerance. The block size and replication factor are configurable per file. An
application can specify the number of replicas of a file. The replication
factor can be specified at file creation time and can be changed later. Files
in HDFS are write-once and have strictly one writer at any time.
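As a simple illustration of the per-file replication factor, the sketch below (an assumed example, not from this article) changes the replication factor of an existing file through the Java FileSystem API; the path and the factor of 5 are placeholders.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ChangeReplication {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        try (FileSystem fs = FileSystem.get(conf)) {
            Path file = new Path("/user/demo/important.dat"); // placeholder path
            // Ask the NameNode to keep 5 replicas of each block of this file;
            // the NameNode schedules the extra copies asynchronously.
            boolean accepted = fs.setReplication(file, (short) 5);
            System.out.println("Replication change accepted: " + accepted);
        }
    }
}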
The
NameNode makes all decisions regarding replication of blocks. It periodically
receives a Heartbeat and a Blockreport from each of the DataNodes in the
cluster. Receipt of a Heartbeat implies that the DataNode is functioning
properly. A Blockreport contains a list of all blocks on a DataNode.
Replica Placement
The
placement of replicas is critical to HDFS reliability and performance.
Optimizing replica placement distinguishes HDFS from most other distributed
file systems. This is a feature that needs lots of tuning and experience. The
purpose of a rack-aware replica placement policy is to improve data
reliability, availability, and network bandwidth utilization. The current
implementation for the replica placement policy is a first effort in this
direction. The short-term goals of implementing this policy are to validate it
on production systems, learn more about its behavior, and build a foundation to
test and research more sophisticated policies.
Large
HDFS instances run on a cluster of computers that commonly spread across many
racks. Communication between two nodes in different racks has to go through
switches. In most cases, network bandwidth between machines in the same rack is
greater than network bandwidth between machines in different racks.
The
NameNode determines the rack id each DataNode belongs to via the process
outlined in Rack
Awareness. A simple but non-optimal policy is to place replicas on unique
racks. This prevents losing data when an entire rack fails and allows use of
bandwidth from multiple racks when reading data. This policy evenly distributes
replicas in the cluster which makes it easy to balance load on component
failure. However, this policy increases the cost of writes because a write
needs to transfer blocks to multiple racks.
For the
common case, when the replication factor is three, HDFS’s placement policy is
to put one replica on one node in the local rack, another on a different node
in the local rack, and the last on a different node in a different rack. This
policy cuts the inter-rack write traffic which generally improves write
performance. The chance of rack failure is far less than that of node failure;
this policy does not impact data reliability and availability guarantees.
However, it does reduce the aggregate network bandwidth used when reading data
since a block is placed in only two unique racks rather than three. With this
policy, the replicas of a file do not evenly distribute across the racks. One
third of replicas are on one node, two thirds of replicas are on one rack, and
the other third are evenly distributed across the remaining racks. This policy
improves write performance without compromising data reliability or read
performance.
The
current, default replica placement policy described here is a work in progress.
Replica Selection
To
minimize global bandwidth consumption and read latency, HDFS tries to satisfy a
read request from a replica that is closest to the reader. If there exists a
replica on the same rack as the reader node, then that replica is preferred to
satisfy the read request. If an HDFS cluster spans multiple data centers,
then a replica that is resident in the local data center is preferred over any
remote replica.
Safemode
On
startup, the NameNode enters a special state called Safemode. Replication of
data blocks does not occur when the NameNode is in the Safemode state. The
NameNode receives Heartbeat and Blockreport messages from the DataNodes. A
Blockreport contains the list of data blocks that a DataNode is hosting. Each
block has a specified minimum number of replicas. A block is considered safely
replicated when the minimum number of replicas of that data block has checked
in with the NameNode. After a configurable percentage of safely replicated data
blocks checks in with the NameNode (plus an additional 30 seconds), the
NameNode exits the Safemode state. It then determines the list of data blocks
(if any) that still have fewer than the specified number of replicas. The
NameNode then replicates these blocks to other DataNodes.
The Persistence of File System Metadata
The
HDFS namespace is stored by the NameNode. The NameNode uses a transaction log
called the EditLog to persistently record every change that occurs to file
system metadata. For example, creating a new file in HDFS causes the NameNode
to insert a record into the EditLog indicating this. Similarly, changing the
replication factor of a file causes a new record to be inserted into the
EditLog. The NameNode uses a file in its local host OS file system to store the
EditLog. The entire file system namespace, including the mapping of blocks to
files and file system properties, is stored in a file called the FsImage. The
FsImage is stored as a file in the NameNode’s local file system too.
The
NameNode keeps an image of the entire file system namespace and file Blockmap
in memory. This key metadata item is designed to be compact, such that a
NameNode with 4 GB of RAM is plenty to support a huge number of files and
directories. When the NameNode starts up, it reads the FsImage and EditLog from
disk, applies all the transactions from the EditLog to the in-memory
representation of the FsImage, and flushes out this new version into a new
FsImage on disk. It can then truncate the old EditLog because its transactions
have been applied to the persistent FsImage. This process is called a
checkpoint. In the current implementation, a checkpoint only occurs when the
NameNode starts up. Work is in progress to support periodic checkpointing in
the near future.
The
DataNode stores HDFS data in files in its local file system. The DataNode has
no knowledge about HDFS files. It stores each block of HDFS data in a separate
file in its local file system. The DataNode does not create all files in the
same directory. Instead, it uses a heuristic to determine the optimal number of
files per directory and creates subdirectories appropriately. It is not optimal
to create all local files in the same directory because the local file system
might not be able to efficiently support a huge number of files in a single
directory. When a DataNode starts up, it scans through its local file system,
generates a list of all HDFS data blocks that correspond to each of these local
files and sends this report to the NameNode: this is the Blockreport.