We have a legacy database schema that has some interesting design decisions. Until recently, we have only supported Oracle and SQL Server, but we are trying to add support for PostgreSQL, which has brought up an interesting problem. I have searched Stack Overflow and the rest of the internet and I don't believe this particular situation is a duplicate.
Oracle and SQL Server both behave the same when it comes to nullable columns in a unique constraint, which is to essentially ignore the columns that are NULL when performing the unique check.
Let's say I have the following table and constraint:
CREATE TABLE EXAMPLE
(
ID TEXT NOT NULL PRIMARY KEY,
FIELD1 TEXT NULL,
FIELD2 TEXT NULL,
FIELD3 TEXT NULL,
FIELD4 TEXT NULL,
FIELD5 TEXT NULL,
...
);
CREATE UNIQUE INDEX EXAMPLE_INDEX ON EXAMPLE
(
FIELD1 ASC,
FIELD2 ASC,
FIELD3 ASC,
FIELD4 ASC,
FIELD5 ASC
);
On both Oracle and SQL Server, leaving any of the nullable columns NULL results in a uniqueness check on the non-null columns only. So each of the following inserts can only be done once:
INSERT INTO EXAMPLE VALUES ('1','FIELD1_DATA', NULL, NULL, NULL, NULL );
INSERT INTO EXAMPLE VALUES ('2','FIELD1_DATA','FIELD2_DATA', NULL, NULL,'FIELD5_DATA');
-- On PostgreSQL these succeed, even though (for our purposes) they should violate the unique constraint:
INSERT INTO EXAMPLE VALUES ('3','FIELD1_DATA', NULL, NULL, NULL, NULL );
INSERT INTO EXAMPLE VALUES ('4','FIELD1_DATA','FIELD2_DATA', NULL, NULL,'FIELD5_DATA');
However, because PostgreSQL (correctly) adheres to the SQL standard, those inserts (and any other combination of values, as long as at least one of them is NULL) do not raise an error and are inserted without complaint. Unfortunately, because of our legacy schema and the code that supports it, we need PostgreSQL to behave the same as SQL Server and Oracle.
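For what it's worth, newer PostgreSQL versions can provide this behavior directly: since PostgreSQL 15, a unique constraint or index can be declared NULLS NOT DISTINCT, which makes NULL values compare as equal for the uniqueness check. A sketch, assuming an upgrade to 15+ is an option:

```sql
-- PostgreSQL 15+: treat NULLs as equal in the uniqueness check
CREATE UNIQUE INDEX example_index ON example
  (field1, field2, field3, field4, field5) NULLS NOT DISTINCT;
```

With this index in place, repeating an insert that differs only in its NULL columns raises a unique-violation error, matching the Oracle / SQL Server behavior described above.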
I am aware of the following Stack Overflow question and its answers: Create unique constraint with null columns. From my understanding, there are two strategies to solve this problem:
(1) Create a partial unique index for each combination of NULL and NOT NULL columns (which results in exponential growth of the number of partial indexes).
(2) Use COALESCE with a sentinel value on the nullable columns in the index.
The problem with (1) is that the number of partial indexes we'd need to create grows exponentially with each additional nullable column we'd like to add to the constraint (2^N, if I am not mistaken). The problems with (2) are that a sentinel value reduces the number of available values for that column, plus all of the potential performance problems.
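To make the blow-up of strategy (1) concrete, here is a sketch for just two nullable columns (the index names are made up); already four partial indexes are needed, one per NULL / NOT NULL combination:

```sql
-- Both columns present
CREATE UNIQUE INDEX ex_f1_f2 ON example (field1, field2)
WHERE field1 IS NOT NULL AND field2 IS NOT NULL;

-- Only field1 present
CREATE UNIQUE INDEX ex_f1_only ON example (field1)
WHERE field1 IS NOT NULL AND field2 IS NULL;

-- Only field2 present
CREATE UNIQUE INDEX ex_f2_only ON example (field2)
WHERE field1 IS NULL AND field2 IS NOT NULL;

-- The all-NULL combination needs a unique index on a constant expression
CREATE UNIQUE INDEX ex_none ON example ((1))
WHERE field1 IS NULL AND field2 IS NULL;
```

With 5 nullable columns this becomes 32 indexes, and with 10 columns 1024, which is clearly unmaintainable.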
My question: are these the only two solutions to this problem? If so, what are the tradeoffs between them for this particular use case? A good answer would discuss the performance of each solution, its maintainability, how PostgreSQL would utilize these indexes in simple SELECT statements, and any other "gotchas" to be aware of. Keep in mind that 5 nullable columns is only an example; we have some tables in our schema with up to 10 (yes, I cry every time I see it, but it is what it is).
You are striving for compatibility with your existing Oracle and SQL Server implementations.
Here is a presentation comparing the physical row storage formats of the three involved RDBMSs.
Since Oracle does not implement NULL values at all in row storage, it can't tell the difference between an empty string and NULL anyway. So wouldn't it be prudent to use empty strings ('') instead of NULL values in Postgres as well - for this particular use case?
Define the columns included in the unique constraint as NOT NULL DEFAULT '', problem solved:
CREATE TABLE example (
example_id serial PRIMARY KEY
, field1 text NOT NULL DEFAULT ''
, field2 text NOT NULL DEFAULT ''
, field3 text NOT NULL DEFAULT ''
, field4 text NOT NULL DEFAULT ''
, field5 text NOT NULL DEFAULT ''
, CONSTRAINT example_index UNIQUE (field1, field2, field3, field4, field5)
);
What you demonstrate in the question is a unique index:
CREATE UNIQUE INDEX ...
not the unique constraint you keep talking about. There are subtle but important differences! I changed it to an actual constraint, since that is the subject of the post.
The keyword ASC is just noise, since that is the default sort order. I left it out.
I use a serial PK column for simplicity, which is totally optional, but typically better than numbers stored as text.
Just omit empty / null fields from the INSERT:
INSERT INTO example(field1) VALUES ('F1_DATA');
INSERT INTO example(field1, field2, field5) VALUES ('F1_DATA', 'F2_DATA', 'F5_DATA');
Repeating any of these inserts would violate the unique constraint.
Or, if you insist on omitting the target column list (which is a bit of an antipattern in persisted INSERT statements), or for bulk inserts where all columns need a value:
INSERT INTO example VALUES
('1', 'F1_DATA', DEFAULT, DEFAULT, DEFAULT, DEFAULT)
, ('2', 'F1_DATA','F2_DATA', DEFAULT, DEFAULT,'F5_DATA');
Or simply:
INSERT INTO example VALUES
('1', 'F1_DATA', '', '', '', '')
, ('2', 'F1_DATA','F2_DATA', '', '','F5_DATA');
Or you can write a trigger BEFORE INSERT OR UPDATE that converts NULL to ''.
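Such a trigger could look like the following sketch (the function and trigger names are my own invention):

```sql
-- Normalize NULL to '' before the row is stored
CREATE OR REPLACE FUNCTION example_null_to_empty()
  RETURNS trigger
  LANGUAGE plpgsql AS
$$
BEGIN
   NEW.field1 := COALESCE(NEW.field1, '');
   NEW.field2 := COALESCE(NEW.field2, '');
   NEW.field3 := COALESCE(NEW.field3, '');
   NEW.field4 := COALESCE(NEW.field4, '');
   NEW.field5 := COALESCE(NEW.field5, '');
   RETURN NEW;
END
$$;

CREATE TRIGGER example_null_to_empty
BEFORE INSERT OR UPDATE ON example
FOR EACH ROW EXECUTE FUNCTION example_null_to_empty();
```

EXECUTE FUNCTION requires PostgreSQL 11+; use EXECUTE PROCEDURE in older versions. Note that the trigger fires for every row written, which adds a small per-row cost.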
If you need to use actual NULL values, I would suggest the unique index with COALESCE, like you mentioned as option (2) and @wildplasser provided as his last example.
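For reference, that variant would look something like this, using '' as the sentinel (the index name is illustrative):

```sql
-- Expression index: NULL and '' collapse to the same key value
CREATE UNIQUE INDEX example_coalesce_idx ON example (
  COALESCE(field1, ''), COALESCE(field2, ''), COALESCE(field3, ''),
  COALESCE(field4, ''), COALESCE(field5, '')
);
```

One gotcha: a plain query like WHERE field1 = 'x' cannot use this expression index; the query has to match the indexed expression, e.g. WHERE COALESCE(field1, '') = 'x'.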
The index on an array like @Rudolfo presented is simple, but considerably more expensive. Array handling isn't very cheap in Postgres, and there is a per-array overhead similar to that of a row (24 bytes).
Arrays are limited to columns of the same data type. You could cast all columns to text if some are not, but that will typically increase storage requirements further. Or you could use a well-known row type for heterogeneous data types ...
A corner case: array (or row) types with all NULL values are considered equal (!), so there can only be 1 row with all involved columns NULL. May or may not be as desired. If you want to disallow all columns NULL:
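One way to disallow the all-NULL case is a CHECK constraint spelling out all involved columns (the constraint name is illustrative):

```sql
-- Reject rows where every indexed column is NULL
ALTER TABLE example ADD CONSTRAINT example_not_all_null
CHECK (NOT (field1 IS NULL AND field2 IS NULL AND field3 IS NULL
        AND field4 IS NULL AND field5 IS NULL));
```

In PostgreSQL 9.6+ the shorter form CHECK (num_nonnulls(field1, field2, field3, field4, field5) > 0) achieves the same.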