Count lines in large files

Dnaiel · Oct 3, 2012 · Viewed 109.9k times

I commonly work with text files of ~20 GB in size, and I find myself counting the number of lines in a given file very often.

The way I do it now is just cat fname | wc -l, and it takes a very long time. Is there any solution that would be much faster?

I work on a high-performance cluster with Hadoop installed. I was wondering if a MapReduce approach could help.

I'd like the solution to be as simple as a one-line command, like the wc -l solution, but I'm not sure how feasible that is.

Any ideas?

Answer

P.P · Oct 3, 2012

Try: sed -n '$=' filename

Also, the cat is unnecessary: wc -l filename is enough in your current approach, and it avoids spawning an extra process just to pipe the file through.
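A minimal sketch of both commands on a small sample file (the /tmp/sample.txt path is just for illustration):

```shell
# Create a small three-line test file (path is illustrative)
printf 'a\nb\nc\n' > /tmp/sample.txt

# sed: -n suppresses normal output, $ addresses the last line,
# and = prints the current line number, i.e. the line count
sed -n '$=' /tmp/sample.txt

# wc reads the file directly; redirecting with < keeps the
# filename out of the output
wc -l < /tmp/sample.txt
```

Both commands print 3 for this file; on the same hardware, wc -l is usually the faster of the two because it only has to count newline bytes rather than run a sed program per line.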