Hadoop’s filesystem is called HDFS, and today I’m going to give a short, beginner-friendly introduction to how it works.
The Hadoop Distributed File System (HDFS) sits on top of a Hadoop cluster and handles the distributed storage and retrieval of files. When a file is stored in HDFS, it is split into fixed-size chunks called “blocks” (128 MB by default; only the last block of a file may be smaller). The blocks are spread across the nodes of the cluster. Each of these nodes runs a daemon called a datanode. One node, called the namenode, holds the metadata about the blocks and their whereabouts.
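To make the splitting concrete, here’s a tiny Python sketch (not real HDFS code, just an illustration) of how a file’s size maps to blocks, assuming the default 128 MB block size:

```python
# Illustrative sketch only: HDFS does this internally. The 128 MB
# default block size (dfs.blocksize) is the one real parameter here.
BLOCK_SIZE = 128 * 1024 * 1024  # 128 MB, the HDFS default

def split_into_blocks(file_size: int) -> list[int]:
    """Return the sizes of the blocks a file of `file_size` bytes occupies."""
    blocks = []
    remaining = file_size
    while remaining > 0:
        blocks.append(min(BLOCK_SIZE, remaining))
        remaining -= BLOCK_SIZE
    return blocks

# A 300 MB file becomes two full 128 MB blocks plus one 44 MB block.
sizes = split_into_blocks(300 * 1024 * 1024)
print([s // (1024 * 1024) for s in sizes])  # [128, 128, 44]
```

Only the final block is allowed to be smaller than the block size; every other block of a file is exactly 128 MB.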
To protect against network or disk failure, each block is replicated (three times by default) across the cluster. This makes the data redundant: if one datanode goes down, there are still copies of its blocks elsewhere. When the namenode notices that a datanode has failed (datanodes send it regular heartbeats), it schedules new copies of that node’s blocks, so that there are always three replicas.
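Here’s a hypothetical Python sketch of that re-replication idea (the names and data structures are mine, not HDFS’s): when a datanode dies, any block that drops below three replicas gets copied to another live node.

```python
import random

REPLICATION_FACTOR = 3  # the HDFS default (dfs.replication)

# Hypothetical cluster state: which datanodes hold a replica of each block.
block_locations = {
    "block-1": {"node-a", "node-b", "node-c"},
    "block-2": {"node-b", "node-c", "node-d"},
}
live_nodes = {"node-a", "node-b", "node-c", "node-d", "node-e"}

def handle_node_failure(dead_node: str) -> None:
    """Drop a failed node and re-replicate any under-replicated blocks."""
    live_nodes.discard(dead_node)
    for block, holders in block_locations.items():
        holders.discard(dead_node)
        # Copy the block to new nodes until it is back at three replicas.
        while len(holders) < REPLICATION_FACTOR:
            candidates = live_nodes - holders
            if not candidates:
                break  # not enough healthy nodes left to re-replicate
            holders.add(random.choice(sorted(candidates)))

handle_node_failure("node-c")
# Every block is back to three replicas, none of them on node-c.
```

The real namenode is much smarter about where it places replicas (it is rack-aware, for instance), but the basic invariant is the same: keep every block at its target replication factor.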
The namenode is even more important, because it holds the metadata for every file. If the namenode becomes unreachable, the entire filesystem is unavailable. Worse, if the namenode’s disk fails, the data may be lost forever, because only the namenode knows how the pieces of each file fit together. We’d still have all the blocks on the datanodes, but we’d have no idea which file they belong to.
One way around this is to also write the namenode’s metadata to a network file system (NFS) mount, so a copy survives off the machine. A better alternative is to run an active namenode alongside a standby namenode, so there is a hot “backup” ready to take over if something goes wrong.
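To see why the namenode’s metadata matters so much, here’s a toy Python illustration (the structures are purely hypothetical, not HDFS internals): without the file-to-block mapping, the blocks sitting on the datanodes are just anonymous chunks.

```python
# Toy illustration: the namenode's essential job is keeping this mapping.
# Lose it, and the blocks below become unidentifiable fragments.

# Lives on the namenode: which blocks make up each file, in order.
namenode_metadata = {
    "/logs/2024-01-01.log": ["block-7", "block-8"],
    "/data/users.csv": ["block-9"],
}

# Lives on the datanodes: raw block contents, with no notion of files.
datanode_storage = {
    "block-7": b"first 128 MB of the log...",
    "block-8": b"rest of the log...",
    "block-9": b"id,name\n1,alice\n",
}

def read_file(path: str) -> bytes:
    """Reassemble a file by consulting the namenode, then the datanodes."""
    return b"".join(datanode_storage[b] for b in namenode_metadata[path])

print(read_file("/data/users.csv"))  # b'id,name\n1,alice\n'
```

If `namenode_metadata` disappears, `datanode_storage` still holds every byte, but there is no way to reassemble the files. That’s why the metadata needs its own redundancy.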
- To list files on HDFS:
$ hadoop fs -ls
- To put files on HDFS:
$ hadoop fs -put filename
- this takes a local file and copies it to HDFS
- To display the end of a file:
$ hadoop fs -tail filename
- Most familiar shell commands will work if you put a dash in front of them:
$ hadoop fs -cat
$ hadoop fs -mv
$ hadoop fs -mkdir