Does it make sense to use Hungarian notation prefixes in interpreted languages?

Andrey Rubshtein · Jan 9, 2012 · Viewed 8.3k times

First of all, I have taken a look at the following posts to avoid asking a duplicate question.

https://stackoverflow.com/questions/1184717/hungarian-notation
Why shouldn't I use "Hungarian Notation"?
Are variable prefixes (“Hungarian notation”) really necessary anymore?
Do people use the Hungarian Naming Conventions in the real world?

Now, all of these posts relate to C#, C++, and Java - statically typed languages.
I do understand that there is no need for the prefixes when the type is known before compilation.
Nevertheless, my question is:

Is it worthwhile to use the prefixes in interpreter-based languages, considering the fact that you can't see the type of an object before runtime?

Edit: If someone can make this post a community wiki, please do. I am hardly interested in the reputation (positive or negative) from this post.

Answer

glglgl · Jan 9, 2012

It depends on which of the two versions you refer to:

  • If you want to use the "real", original Hungarian notation, AKA Applications Hungarian notation, which denotes the logical type of a variable, i.e. its purpose, feel free to do so (see the sketch after this list).

  • OTOH, the "misunderstood" version, AKA Systems Hungarian notation, which denotes just the physical type of a variable, is frowned upon and should not be used.
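
A minimal sketch of the difference, using Python as a stand-in for an interpreted language (the names `render_comment`, `usComment`, `sComment`, and the escaping logic are hypothetical illustrations, not from the original post):

```python
# A sketch of the two notations in Python (hypothetical names, not from the post).

def render_comment(raw_comment: str) -> str:
    # Apps Hungarian: the prefix encodes the *purpose* of the value.
    # "us" = unsafe (raw user input), "s" = safe (HTML-escaped).
    usComment = raw_comment
    sComment = usComment.replace("&", "&amp;").replace("<", "&lt;")
    # Returning usComment here would look wrong at a glance: "us" must
    # never reach HTML output. That is the payoff of Apps Hungarian.
    return f"<p>{sComment}</p>"

# Systems Hungarian: the prefix merely repeats the physical type,
# which Python does not enforce, so the prefix can silently become a lie.
strName = "Alice"
intCount = 3
intCount = "three"  # runs fine; the "int" prefix is now misleading
```

The point of the first style is that a mistake like `return f"<p>{usComment}</p>"` looks wrong on sight, which is useful precisely because an interpreter gives you no compile-time check. The second style adds nothing the runtime type does not already carry, and in a dynamic language it can drift out of sync with the value it claims to describe.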