What is meant by "HDFS lacks random read and write access"?

lovespring · Jul 11, 2014

Any file system provides an API to access its files and directories.

So, what is meant by "HDFS lacks random read and write access"?

And is that why we should use HBase instead?

Answer

Daniel Darabos · Jul 12, 2014

The default HDFS block size is 128 MB, so you cannot read one line here and one line there; you always read and write 128 MB blocks. This is fine when you want to process the whole file, but it makes HDFS unsuitable for applications that need to use an index to look up small records.
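To make that concrete, here is a rough sketch of what a keyed lookup against a plain HDFS file costs: with no index into the file, the client has to stream from the beginning until it finds the record. The file path and the tab-separated record format are made up for illustration; the `FileSystem` calls are the standard Hadoop API.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class HdfsLookup {
    // Hypothetical file of "key\tvalue" lines. Without an index there is no
    // offset to seek to, so finding one key means a sequential scan that pulls
    // full HDFS blocks over the network.
    public static String lookup(String key) throws IOException {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        try (FSDataInputStream in = fs.open(new Path("/data/records.tsv"));
             BufferedReader reader = new BufferedReader(new InputStreamReader(in))) {
            String line;
            while ((line = reader.readLine()) != null) {
                String[] parts = line.split("\t", 2);
                if (parts[0].equals(key)) {
                    return parts[1]; // found, but only after scanning everything before it
                }
            }
        }
        return null; // key not present
    }
}
```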

HBase, on the other hand, is great for this. If you want to read a small record, you only read that small record.
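For comparison, a point read in HBase looks roughly like the sketch below. The table name and column names are hypothetical; the calls themselves (`ConnectionFactory`, `Get`) are the standard HBase client API (1.x and later).

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

import java.io.IOException;

public class HBaseLookup {
    // A single-row Get: HBase locates the region holding the row key and
    // returns just that record, instead of streaming whole file blocks.
    public static String lookup(String rowKey) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("records"))) { // hypothetical table
            Result result = table.get(new Get(Bytes.toBytes(rowKey)));
            byte[] value = result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("value")); // hypothetical column
            return value == null ? null : Bytes.toString(value);
        }
    }
}
```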

HBase uses HDFS as its backing store. So how does it provide efficient record-based access?

HBase loads the tables from HDFS into memory or onto local disk, so most reads do not go to HDFS. Mutations are stored first in an append-only journal. When the journal grows large, it is flushed into an "addendum" table. When there are too many addendum tables, they all get compacted into a brand new primary table. For reads, the journal is consulted first, then the addendum tables, and finally the primary table. This design means that a full HDFS block is only written when there is a full HDFS block's worth of changes.
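What that paragraph describes is essentially a log-structured merge tree. A toy in-memory sketch of that read/write path, with made-up flush and compaction thresholds and none of HBase's actual machinery, might look like this:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Iterator;
import java.util.NavigableMap;
import java.util.TreeMap;

/** Toy sketch of the journal / addendum / primary read-write path. Not HBase code. */
class ToyLsmStore {
    private final NavigableMap<String, String> journal = new TreeMap<>();          // recent mutations
    private final Deque<NavigableMap<String, String>> addenda = new ArrayDeque<>(); // immutable flushed tables, newest first
    private NavigableMap<String, String> primary = new TreeMap<>();                 // compacted base table

    private static final int FLUSH_THRESHOLD = 4;   // arbitrary, for illustration
    private static final int COMPACT_THRESHOLD = 3; // arbitrary, for illustration

    void put(String key, String value) {
        journal.put(key, value); // in a real system this write also goes to an append-only log
        if (journal.size() >= FLUSH_THRESHOLD) flush();
    }

    /** Read path: journal first, then addendum tables newest-first, then the primary table. */
    String get(String key) {
        String v = journal.get(key);
        if (v != null) return v;
        for (NavigableMap<String, String> table : addenda) {
            v = table.get(key);
            if (v != null) return v;
        }
        return primary.get(key);
    }

    /** When the journal grows large, freeze it into an immutable "addendum" table. */
    private void flush() {
        addenda.addFirst(new TreeMap<>(journal));
        journal.clear();
        if (addenda.size() >= COMPACT_THRESHOLD) compact();
    }

    /** Merge all addenda into a brand-new primary table; newer values overwrite older ones. */
    private void compact() {
        NavigableMap<String, String> merged = new TreeMap<>(primary);
        Iterator<NavigableMap<String, String>> oldestFirst = addenda.descendingIterator();
        while (oldestFirst.hasNext()) merged.putAll(oldestFirst.next());
        primary = merged;
        addenda.clear();
    }
}
```

Roughly mapped onto HBase's own terms: the journal corresponds to the write-ahead log plus the in-memory MemStore, the addendum tables to flushed HFiles, and the final merge to a major compaction.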

A more thorough description of this approach is in the Bigtable whitepaper.