Why use constants in programming?

Adam N · Jun 1, 2010 · Viewed 29k times

I've just been going back over a bit of C, studying from Ivor Horton's Beginning C. I got to the section about declaring constants, which seems to get mixed up with variables in the same sentence.

Just to clarify: what is the difference between specifying constants and variables in C, and when do you actually need a constant instead of a variable? I know people say to use a constant when the information doesn't change during program execution, but I can't really think of a time when a variable couldn't be used instead.

Answer

Uri · Jun 1, 2010

A variable, as you can guess from the name, varies over time. If a value never varies, nothing is lost by declaring it constant. And when you tell the compiler that the value will not change, it can do a whole bunch of optimizations, like directly inlining the value and never allocating any space for the constant on the stack.
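As a sketch of what that enables in C (the names and values below are illustrative, not from the original answer): with a `const`-qualified value, the compiler is free to fold the constant straight into the generated instructions rather than loading it from memory.

```c
#include <stdio.h>

/* The compiler knows this can never change, so it may fold the value
   directly into the generated code instead of reading memory at run time. */
static const double CM_PER_INCH = 2.54;

double inches_to_cm(double inches)
{
    return inches * CM_PER_INCH;  /* often compiled as a multiply by an immediate */
}

int main(void)
{
    printf("%.2f\n", inches_to_cm(10.0));  /* prints 25.40 */
    return 0;
}
```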

However, you cannot always count on your compiler being smart enough to correctly determine whether a value will change once set. In any situation where the compiler cannot determine this with 100% confidence, it will err on the side of safety and assume the value could change. This can have various performance impacts, like skipped inlining, loops that are not optimized, and object code that is less parallelism-friendly.

Because of this, and since readability is also important, you should strive to use an explicit constant whenever possible and leave variables for things that can actually change.
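In C terms, being explicit means reaching for the `const` qualifier. A minimal sketch of the difference, with illustrative names:

```c
void demo(void)
{
    const int max_retries = 5;  /* constant: the compiler rejects any later assignment */
    int attempts = 0;           /* variable: free to change over time */

    attempts = attempts + 1;    /* fine */
    /* max_retries = 10; */     /* uncommenting this is a compile-time error:
                                   "assignment of read-only variable" */
}
```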

As to why constants are used instead of literal numbers:

1) It makes code more readable. Everyone knows what 3.14 is (hopefully), but not everyone knows that 3.07 is the income tax rate in PA. That is domain-specific knowledge, and not everyone maintaining your code in the future (e.g., tax software) will know it. A named constant documents the intent (see the sketch after this list).

2) It saves work when you make a change. Going through and changing every 3.07 to 3.18 when the tax rate changes will be annoying. You always want to minimize the number of edits, ideally to a single one. The more simultaneous changes you have to make, the higher the risk that you will forget one, leading to errors.

3) You avoid risky errors. Imagine that two states both had an income tax rate of 3.07, and then one of them changed to 3.18 while the other stayed at 3.07. A blind search-and-replace could introduce severe errors. And of course, many integer or string literals are more common than "3.07". For example, the number 7 could represent the number of days in the week, or something else entirely. In large programs, it is very difficult to determine what each literal value means.

4) In the case of string text, it is common to use symbolic names for strings so that the string pools can be swapped out quickly when supporting multiple languages.
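A sketch of points 1) and 2) in C, reusing the PA tax rate from above (the macro and function names are mine, purely for illustration):

```c
#include <stdio.h>

/* One authoritative definition: if PA's rate changes to 3.18%,
   only this line needs to change. */
#define PA_INCOME_TAX_RATE 0.0307   /* 3.07% */

double pa_tax_owed(double income)
{
    /* Far clearer than `income * 0.0307`, and immune to a stray
       search-and-replace hitting an unrelated 0.0307 elsewhere. */
    return income * PA_INCOME_TAX_RATE;
}

int main(void)
{
    printf("Tax on $50,000.00: $%.2f\n", pa_tax_owed(50000.0));
    return 0;
}
```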

Note that in addition to variables and "constant variables", some languages also have enumerations. An enumeration lets you define a type for a small group of related constants (e.g., return values), so using one provides type safety.

For example, if I have an enumeration for the days of the week and another for the months, I will be warned if I assign a month to a day. If I just use integer constants, there is no warning when day 3 is assigned to month 3. You always want type safety, and it improves readability. Enumerations are also better for defining order: imagine that you have constants for the days of the week and now want your week to start on Monday rather than Sunday. With an enumeration, you reorder the definition in one place instead of renumbering every constant by hand.
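In C, a minimal sketch might look like the following. One caveat I should add: plain C checks enums only weakly, so the hard type errors described above come from languages with stricter enums, but modern compilers can still diagnose the mixup (e.g., GCC and Clang via -Wenum-conversion).

```c
#include <stdio.h>

/* An enumeration groups related constants under a named type.
   Starting the week on Monday instead of Sunday means editing
   only this one definition. */
enum day   { MONDAY, TUESDAY, WEDNESDAY, THURSDAY, FRIDAY, SATURDAY, SUNDAY };
enum month { JANUARY, FEBRUARY, MARCH /* ... */ };

int main(void)
{
    enum day today = WEDNESDAY;

    /* today = MARCH; */  /* mixing types is at least diagnosable here,
                             whereas `int day = 3;` accepts month 3 silently */

    if (today < SATURDAY) {  /* comparisons track the declared order */
        printf("weekday\n");
    }
    return 0;
}
```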