Many tables or many rows: which one is more efficient in SQL?

KimchiMan · Jan 22, 2014 · Viewed 8.3k times

I'm building a program that stores news headlines for companies, along with their timestamps, from various sources.

Let's say the number of companies is 1000: Apple, Google, Microsoft, etc.

So I can think about two options.

  1. One table with numerous rows (the code below is just an example).

    CREATE TABLE news
    (
        news_id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
        company VARCHAR(10) NOT NULL,
        timestamp TIMESTAMP NOT NULL,
        source TEXT NOT NULL,
        content TEXT NOT NULL,
        ...
    )
    
    -- I could also make (company, timestamp) the primary key,
    -- with news_id as a unique key.
    
  2. 1000 Tables

    CREATE TABLE news_apple -- and news_google, news_microsoft, news_... (x 1000)
    (
        news_id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
        timestamp TIMESTAMP NOT NULL,
        source TEXT NOT NULL,
        content TEXT NOT NULL,
        ...
    )
    

Most of the time, I will look up the news for a certain company. Let's say there are more than 10000 news items per company. I wonder whether a 'WHERE' clause in the first option would be slower than the second option.

Which one is more efficient in terms of performance and why?

Answer

Gordon Linoff · Jan 22, 2014

Relational databases are designed to store many rows per table. There are a whole bunch of mechanisms to facilitate large tables, such as:

  • Indexes on any combination of fields to speed searches
  • Page caching so commonly used pages remain in memory
  • Vertical partitioning (columnar databases) to further speed requests
  • Advanced algorithms such as hash joins and group bys (at least in databases other than MySQL)
  • Use of multiple processors and disks to process queries
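
To sketch the first point: a single composite index on (company, timestamp) lets the one-table design serve the per-company lookup from the question without scanning other companies' rows. (The index name and query below are illustrative, not part of the original schema.)

    CREATE INDEX idx_news_company_ts ON news (company, timestamp);
    
    -- Fetches one company's recent headlines via the index,
    -- regardless of how many rows the other 999 companies add:
    SELECT news_id, timestamp, source
    FROM news
    WHERE company = 'Apple'
    ORDER BY timestamp DESC
    LIMIT 50;

With this index, the `WHERE company = ...` filter in option 1 performs essentially the same work as querying one small per-company table in option 2.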

There is one thing that is more difficult when putting data in a single table, and that is security. In some circumstances security is a primary concern and essentially requires that the data go into separate tables. Such applications are few and far between.

To give an example of how bad storing data across many tables can be, imagine that your system has one record per company and you store it in a table. This record stores information about the company -- name, address, and so on. Call it 100 bytes of information.

In your schema there is a separate table for each "company", so that is one row per table. That record resides on one data page. A data page is typically 16 kbytes, so you waste about 15.9 kbytes storing this record. Storing 1000 such records occupies 16 Mbytes instead of about 7 pages' worth (112 Kbytes). That can be a significant performance hit.

In addition, multiple tables bring the challenge of maintaining them all and ensuring that the data stays correct across every one of them. Maintenance updates need to be applied to a thousand tables instead of one.
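
To make the maintenance point concrete: adding a single column in the one-table design is one statement, while the per-company design repeats the same change for every table. (The column name here is made up for illustration.)

    -- One-table design: one statement covers every company.
    ALTER TABLE news ADD COLUMN language VARCHAR(8);
    
    -- Per-table design: the same change, a thousand times over.
    ALTER TABLE news_apple ADD COLUMN language VARCHAR(8);
    ALTER TABLE news_google ADD COLUMN language VARCHAR(8);
    -- ... and 998 more, with any table you forget silently drifting
    -- out of sync with the rest.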