Fast way to discover the row count of a table in PostgreSQL

Renato Dinhani · Oct 30, 2011 · Viewed 96.1k times

I need to know the number of rows in a table to calculate a percentage. If the total count is greater than some predefined constant, I will use the constant value. Otherwise, I will use the actual number of rows.

I can use SELECT count(*) FROM table. But if my constant value is 500,000 and I have 5,000,000,000 rows in my table, counting all rows will waste a lot of time.

Is it possible to stop counting as soon as my constant value is surpassed?

I need the exact number of rows only as long as it's below the given limit. Otherwise, if the count is above the limit, I use the limit value instead and want the answer as fast as possible.

Something like this:

SELECT text, count(*), percentual_calculus()  -- placeholder for the percentage logic
FROM token
GROUP BY text
ORDER BY count DESC;

Answer

Erwin Brandstetter picture Erwin Brandstetter · Oct 30, 2011

Counting rows in big tables is known to be slow in PostgreSQL. Due to the nature of MVCC, it has to scan all rows to get a precise number. There is a way to speed this up dramatically if the count does not have to be exact, as seems to be the case here.

Instead of getting the exact count (slow with big tables):

SELECT count(*) AS exact_count FROM myschema.mytable;

You get a close estimate like this (extremely fast):

SELECT reltuples::bigint AS estimate
FROM   pg_class
WHERE  relname = 'mytable';

How close the estimate is depends on how recently ANALYZE or autovacuum last ran. It is usually very close.
See the PostgreSQL Wiki FAQ.
Or the dedicated wiki page for count(*) performance.
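
If the statistics might be stale, you can refresh them manually before reading the estimate. A minimal sketch, using the schema-qualified placeholder names from the examples above:

ANALYZE myschema.mytable;  -- refreshes reltuples; autovacuum normally does this for you

Then re-run the estimate query above.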

Better yet

The article in the PostgreSQL Wiki was a bit sloppy. It ignored the possibility that there can be multiple tables of the same name in one database - in different schemas. To account for that:

SELECT c.reltuples::bigint AS estimate
FROM   pg_class c
JOIN   pg_namespace n ON n.oid = c.relnamespace
WHERE  c.relname = 'mytable'
AND    n.nspname = 'myschema';

Or better still

SELECT reltuples::bigint AS estimate
FROM   pg_class
WHERE  oid = 'myschema.mytable'::regclass;

Faster, simpler, safer, more elegant. See the manual on Object Identifier Types.
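
The cast itself raises an exception if the table does not exist, which doubles as a sanity check (same placeholder name as above):

SELECT 'myschema.mytable'::regclass;  -- error if myschema.mytable does not exist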

Use to_regclass('myschema.mytable') in Postgres 9.4+ to avoid an exception for invalid table names; it returns NULL instead:
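
SELECT reltuples::bigint AS estimate
FROM   pg_class
WHERE  oid = to_regclass('myschema.mytable');  -- no row (instead of an error) if the table is missing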


TABLESAMPLE SYSTEM (n) in Postgres 9.5+

SELECT 100 * count(*) AS estimate FROM mytable TABLESAMPLE SYSTEM (1);

As @a_horse commented, this newly added clause for the SELECT command can be useful if the statistics in pg_class are not current enough for some reason. For example:

  • No autovacuum running.
  • Immediately after a big INSERT or DELETE.
  • TEMPORARY tables (which are not covered by autovacuum); see the sketch right after this list.
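
A quick sketch of the temporary-table case (table name and row count are made up): a freshly created temp table has no statistics yet, but TABLESAMPLE still delivers a usable number:

CREATE TEMP TABLE tmp_token AS
SELECT g AS id FROM generate_series(1, 100000) g;

SELECT reltuples FROM pg_class WHERE oid = 'tmp_token'::regclass;  -- -1 (0 before Postgres 14): never analyzed

SELECT 10 * count(*) AS estimate
FROM   tmp_token TABLESAMPLE SYSTEM (10);  -- 10 % of blocks, scaled by 100/10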

TABLESAMPLE only looks at a random n % selection of the blocks (1 % in the example) and counts the rows in them. A bigger sample increases the cost and reduces the error; your pick. (See the scaled-up example after this list.) Accuracy depends on more factors:

  • Distribution of row size. If a given block happens to hold wider-than-usual rows, the count is lower than usual, etc.
  • Dead tuples or a FILLFACTOR below 100 take up space per block. If unevenly distributed across the table, the estimate may be off.
  • General rounding errors.
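
For instance, a bigger sample (5 % here, an arbitrary pick) costs more time but narrows the error; the count from an n % sample is scaled by 100/n:

SELECT (100 / 5) * count(*) AS estimate  -- 5 % of blocks, factor 100/5 = 20
FROM   mytable TABLESAMPLE SYSTEM (5);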

In most cases the estimate from pg_class will be faster and more accurate.

Answer to actual question

First, I need to know the number of rows in that table, if the total count is greater than some predefined constant,

And whether it ...

... is possible at the moment the count pass my constant value, it will stop the counting (and not wait to finish the counting to inform the row count is greater).

Yes. You can use a subquery with LIMIT:

SELECT count(*) FROM (SELECT 1 FROM token LIMIT 500000) t;

Postgres actually stops counting beyond the given limit: you get an exact and current count for up to n rows (500,000 in the example), and n otherwise. Not nearly as fast as the estimate in pg_class, though.
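
To connect this back to the percentage calculation in the question, a sketch (token and text are from the question; the CTE name, the cap, and the rounding are illustrative choices):

WITH capped AS (
   SELECT count(*) AS total                      -- exact count up to 500,000, else 500,000
   FROM  (SELECT 1 FROM token LIMIT 500000) t
   )
SELECT t.text
     , count(*) AS ct
     , round(100.0 * count(*) / c.total, 2) AS percentage
FROM   token t, capped c
GROUP  BY t.text, c.total
ORDER  BY ct DESC;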