I have seen the following macro definitions in a coding book.
#define TRUE '/'/'/'
#define FALSE '-'-'-'
There was no explanation there.
Please explain to me how these will work as TRUE and FALSE.
Let's see: '/' / '/' means the char literal '/' divided by the char literal '/' itself. The result is 1 (any character value divided by itself is one), which sounds reasonable for TRUE.
And '-' - '-' means the char literal '-' subtracted from itself. This is zero (FALSE).
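A quick way to convince yourself is to print both values (a minimal sketch; the character codes in the comments assume ASCII):

#include <stdio.h>

#define TRUE '/'/'/'
#define FALSE '-'-'-'

int main(void) {
    printf("TRUE  = %d\n", TRUE);   /* '/' / '/' is 47 / 47, prints 1 */
    printf("FALSE = %d\n", FALSE);  /* '-' - '-' is 45 - 45, prints 0 */
    return 0;
}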
There are two problems with this. First, it's not readable: using 1 and 0 is clearly better. Second, as TartanLlama and KerrekSB have pointed out, if you are ever going to use definitions like these, do add parentheses around them so you won't get any surprises:
#include <stdio.h>

#define TRUE  '/'/'/'
#define FALSE '-'-'-'

int main(void) {
    printf("%d\n", 2 * FALSE);
    return 0;
}
This will print the value of the char literal '-' (45 on my system). Because macro expansion is purely textual, 2 * FALSE expands to 2 * '-' - '-', and since multiplication binds tighter than subtraction, that is evaluated as (2 * '-') - '-', i.e. 90 - 45 = 45.
With parentheses:
#define TRUE ('/'/'/')
#define FALSE ('-'-'-')
the program correctly prints zero. Multiplying a truth value by an integer doesn't make much sense, of course, but it illustrates the kind of unexpected bug that can bite you if you don't parenthesize your macros.
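For completeness, here is the same program with the parenthesized macros (a minimal sketch of the fix):

#include <stdio.h>

#define TRUE  ('/'/'/')
#define FALSE ('-'-'-')

int main(void) {
    printf("%d\n", 2 * FALSE);  /* expands to 2 * ('-'-'-'), prints 0 */
    return 0;
}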