When was the NULL macro not 0?

György Andrasek · Apr 8, 2010 · Viewed 11.6k times

I vaguely remember reading about this a couple of years ago, but I can't find any reference on the net.

Can you give me an example where the NULL macro didn't expand to 0?

Edit for clarity: Today it expands to either ((void *)0), (0), or (0L). However, there were long-forgotten architectures where this wasn't true, and NULL expanded to a nonzero address. Something like

#ifdef UNIVAC
     #define NULL (0xffff)
#endif

I'm looking for an example of such a machine.

Update to address the issues:

I didn't mean this question in the context of current standards, or to upset people with my incorrect terminology. However, my assumptions were confirmed by the accepted answer:

Later models used [blah], evidently as a sop to all the extant poorly-written C code which made incorrect assumptions.

For a discussion about null pointers in the current standard, see this question.

Answer

janks · Apr 8, 2010

The C FAQ has some examples of historical machines whose null pointer representations were not all-bits-zero.

From The C FAQ List, question 5.17:

Q: Seriously, have any actual machines really used nonzero null pointers, or different representations for pointers to different types?

A: The Prime 50 series used segment 07777, offset 0 for the null pointer, at least for PL/I. Later models used segment 0, offset 0 for null pointers in C, necessitating new instructions such as TCNP (Test C Null Pointer), evidently as a sop to all the extant poorly-written C code which made incorrect assumptions. Older, word-addressed Prime machines were also notorious for requiring larger byte pointers (char *'s) than word pointers (int *'s).
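
(Aside, not part of the FAQ text: a minimal sketch of the kind of incorrect assumption such code made, namely that zero-filled memory holds null pointers. That only happens to work where the null pointer representation is all-bits-zero.)

#include <stdlib.h>
#include <string.h>

struct node {
    struct node *next;
    int value;
};

int main(void) {
    struct node n;

    /* WRONG assumption: zero-filling the struct leaves n.next null.
       On a machine whose null pointer is not all-bits-zero (segment
       07777 on the Prime 50, say), n.next is now a garbage pointer. */
    memset(&n, 0, sizeof n);

    /* Portable version: assign the null pointer constant explicitly;
       the compiler emits whatever bit pattern the machine uses. */
    n.next = NULL;
    n.value = 0;

    return 0;
}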

The Eclipse MV series from Data General has three architecturally supported pointer formats (word, byte, and bit pointers), two of which are used by C compilers: byte pointers for char * and void *, and word pointers for everything else. For historical reasons during the evolution of the 32-bit MV line from the 16-bit Nova line, word pointers and byte pointers had the offset, indirection, and ring protection bits in different places in the word. Passing a mismatched pointer format to a function resulted in protection faults. Eventually, the MV C compiler added many compatibility options to try to deal with code that had pointer type mismatch errors.
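
(Aside, not part of the FAQ text: the classic way such mismatches arose was passing a bare 0 through a variadic argument list, where no conversion to pointer type happens. A sketch, using execl purely as an example:)

#include <stddef.h>
#include <unistd.h>

int main(void) {
    /* WRONG where int and char * differ in size or representation:
       the bare 0 is passed through the variadic list as an int, so
       execl never sees a genuine null char * terminating its
       argument list. */
    /* execl("/bin/echo", "echo", "hi", 0); */

    /* Portable version: the cast makes the compiler pass a real
       null pointer of the right type. */
    execl("/bin/echo", "echo", "hi", (char *)NULL);

    return 1; /* only reached if execl fails */
}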

Some Honeywell-Bull mainframes use the bit pattern 06000 for (internal) null pointers.

The CDC Cyber 180 Series has 48-bit pointers consisting of a ring, segment, and offset. Most users (in ring 11) have null pointers of 0xB00000000000. It was common on old CDC ones-complement machines to use an all-one-bits word as a special flag for all kinds of data, including invalid addresses.
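
(Aside, not part of the FAQ text: even with a nonzero null representation, ordinary source-level tests against 0 keep working, because the compiler converts the constant to the machine's null pointer. Only code that pokes at the raw bit pattern breaks. A sketch:)

#include <stdio.h>
#include <string.h>

int main(void) {
    char *p = NULL;
    unsigned char zeros[sizeof p];

    /* Fine everywhere: a constant 0 in a pointer context is converted
       by the compiler to the machine's null pointer, whatever its bit
       pattern (0xB00000000000 in ring 11 on the Cyber). */
    if (p == 0)
        puts("p is a null pointer");
    if (!p)
        puts("and this test agrees");

    /* NOT portable: checking the raw bit pattern assumes the null
       pointer representation is all-bits-zero. */
    memset(zeros, 0, sizeof zeros);
    if (memcmp(&p, zeros, sizeof p) == 0)
        puts("all-bits-zero null (true on most machines, not all)");

    return 0;
}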

The old HP 3000 series uses a different addressing scheme for byte addresses than for word addresses; like several of the machines above it therefore uses different representations for char * and void * pointers than for other pointers.

The Symbolics Lisp Machine, a tagged architecture, does not even have conventional numeric pointers; it uses the pair <NIL, 0> (basically a nonexistent <object, offset> handle) as a C null pointer.

Depending on the "memory model" in use, 8086-family processors (PC compatibles) may use 16-bit data pointers and 32-bit function pointers, or vice versa.
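
(Aside, not part of the FAQ text: under a model where the widths differ, e.g. the 8086 "medium" model with 16-bit data pointers and 32-bit function pointers, converting a function's address to a data pointer can lose part of it. A sketch of the portable habit:)

#include <stdio.h>

static void handler(void) {
    puts("called");
}

int main(void) {
    /* NOT portable: under the 8086 "medium" model a data pointer is
       16 bits but a function pointer is 32 bits, so converting the
       function's address to void * would lose half of it (and ISO C
       does not define that conversion anyway). */
    /* void *bad = (void *)handler; */

    /* Portable version: keep function addresses in function-pointer
       types, which are always wide enough. */
    void (*fp)(void) = handler;
    fp();

    printf("sizeof(void *) = %zu, sizeof(fp) = %zu\n",
           sizeof(void *), sizeof fp);
    return 0;
}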

Some 64-bit Cray machines represent int * in the lower 48 bits of a word; char * additionally uses some of the upper 16 bits to indicate a byte address within a word.
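
(Aside, not part of the FAQ text: on such a machine the cast between int * and char * is not a bit-for-bit copy; the compiler inserts or strips the byte-offset bits. Conversions through casts round-trip correctly, but reinterpreting one pointer type's raw bits as the other does not. A sketch:)

#include <stdio.h>
#include <string.h>

int main(void) {
    int  word = 42;
    int  *wp = &word;
    char *cp;

    /* Fine: the cast tells the compiler to build a proper char *
       (on the Cray, filling in the byte-offset bits in the upper part
       of the word), and casting back recovers the original pointer. */
    cp = (char *)wp;
    printf("round trip ok: %d\n", (int *)cp == wp);

    /* NOT portable: copying the raw bits assumes int * and char *
       share one representation, which such machines do not guarantee. */
    /* memcpy(&cp, &wp, sizeof cp); */

    return 0;
}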