Why should I use string.length == 0 over string == "" when checking for empty string in ECMAScript?

Thomas Müller · Dec 2, 2009 · Viewed 26.8k times

Most of the developers on my current project use a (to me) strange way to check for empty strings in ECMAScript:

if (theString.length == 0)
    // string is empty

I would normally write this instead:

if (theString == "")
    // string is empty

The latter version seems more readable and natural to me.

Nobody I asked could explain the advantages of version 1. My guess is that at some point in the past somebody told everybody that this is the way to do it, but that person has since left and nobody remembers why.

I'm wondering whether there is a reason to choose the first version over the second. Does it matter? Is one version better than the other, perhaps safer or faster for some reason?

(We actually do this in Siebel eScript, which is compliant with ECMAScript Edition 4.)

Thanks.

Answer

Shog9 · Dec 2, 2009

I actually prefer that technique in a number of languages, since it can be hard to visually differentiate an empty string literal "" from several similar-looking literals (" ", '"').
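
For instance, a quick sketch of the look-alike literals (the variable names are illustrative, and console.log assumes a standard JavaScript console rather than Siebel eScript):

var a = "";   // empty string
var b = " ";  // a single space -- easy to misread as empty
var c = '"';  // a double-quote character

console.log(a.length, b.length, c.length);  // 0 1 1 -- the length check disambiguates them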

But there's another reason to avoid theString == "" in ECMAScript: 0 == "" evaluates to true, as do false == "" and 0.0 == ""...

...so unless you know that theString is actually a string, you might cause problems for yourself by using the weak comparison. Fortunately, you can avoid this with judicious use of the strict equality (===) operator:

if ( theString === "" )
   // string is a string and is empty
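
To see the difference between the two operators, here is a minimal sketch you can run in any standards-compliant ECMAScript engine (behavior of console.log in Siebel eScript may differ):

console.log(0 == "");      // true  -- "" coerces to the number 0
console.log(false == "");  // true  -- both operands coerce to 0
console.log(0.0 == "");    // true  -- 0.0 is the same number as 0

console.log(0 === "");     // false -- different types, no coercion
console.log("" === "");    // true  -- only the empty string itself passes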
