MySQL performance for version 5.7 vs. 5.6

Peter A · Apr 22, 2017

I have noticed a particular performance issue that I am unsure how to deal with.

I am in the process of migrating a web application from one server to another with very similar specifications. To be clear, the new server typically outperforms the old one.

The old server is running MySQL 5.6.35
The new server is running MySQL 5.7.17

Both the new and old servers have virtually identical MySQL configurations, and both are running an exact duplicate of the same database.

The web application in question is Magento 1.9.3.2.

In Magento, the function Mage_Catalog_Model_Category::getChildrenCategories() is intended to list all the immediate child categories of a given category.

In my case, this function bubbles down eventually to this query:

SELECT    `main_table`.`entity_id`
        , main_table.`name`
        , main_table.`path`
        , `main_table`.`is_active`
        , `main_table`.`is_anchor`
        , `url_rewrite`.`request_path`

FROM `catalog_category_flat_store_1` AS `main_table`

LEFT JOIN `core_url_rewrite` AS `url_rewrite`
ON url_rewrite.category_id=main_table.entity_id
AND url_rewrite.is_system=1
AND url_rewrite.store_id = 1
AND url_rewrite.id_path LIKE 'category/%'

WHERE (main_table.include_in_menu = '1')
AND (main_table.is_active = '1')
AND (main_table.path LIKE '1/494/%')
AND (`level` <= 2)
ORDER BY `main_table`.`position` ASC;

While the structure of this query is the same for any Magento installation, the values will obviously differ slightly from one installation to another, depending on which category the function is looking at.

My catalog_category_flat_store_1 table has 214 rows.
My url_rewrite table has 1,734,316 rows.

This query, when executed on its own directly against MySQL, performs very differently between the two versions.

I am using SQLyog to profile this query.

In MySQL 5.6, the above query performs in 0.04 seconds. The profile for this query looks like this: https://codepen.io/Petce/full/JNKEpy/

In MySQL 5.7, the above query performs in 1.952 seconds. The profile for this query looks like this: https://codepen.io/Petce/full/gWMgKZ/

As you can see, the same query on almost the exact same setup is virtually 2 seconds slower, and I am unsure as to why.
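
For reference, the per-stage breakdown from SQLyog can also be reproduced with MySQL's built-in session profiling (deprecated but still available in 5.6/5.7); a minimal sketch, with the query number depending on the SHOW PROFILES output:

SET profiling = 1;

-- run the SELECT from above here, unchanged

SHOW PROFILES;              -- recent statements in this session with their total durations
SHOW PROFILE FOR QUERY 1;   -- per-stage breakdown for the statement of interest
SET profiling = 0;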

For some reason, MySQL 5.7 does not want to use the table index to help produce the result set.

Can anyone out there with more experience/knowledge explain what is going on here and how to go about fixing it?

I believe the issue has something to do with the way the MySQL 5.7 optimizer works. For some reason, it appears to think that a full table scan is the way to go. I can drastically improve the query performance by setting max_seeks_for_key very low (like 100) or by dropping range_optimizer_max_mem_size low enough to force it to throw a warning.

Doing either of these increases the query speed by almost 10x, down to 0.2 seconds; however, this is still an order of magnitude slower than MySQL 5.6, which executes it in 0.04 seconds, and I don't think either of these is a good idea, as I'm not sure whether there would be other implications.
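
For reference, both settings can be tried per session before touching my.cnf; roughly what this looks like (the range_optimizer_max_mem_size value below is just an illustrative low number, not the one from my tests):

SET SESSION max_seeks_for_key = 100;               -- caps the optimizer's seek estimate so index lookups look cheaper

-- or, alternatively:
SET SESSION range_optimizer_max_mem_size = 16384;  -- illustrative low value; makes the range optimizer give up and warn

-- then re-run the SELECT above in the same session and compare timings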

It is also very difficult to modify the query, as it is generated by the Magento framework, and changing it would require customisation of the Magento codebase, which I'd like to avoid. I'm also not even sure whether it is the only query that is affected.

I have included the minor versions of my MySQL installations. I am now attempting to update MySQL 5.7.17 to 5.7.18 (the latest build) to see if there is any change in performance.

After upgrading to MySQL 5.7.18 I saw no improvement. In order to bring the system back to a stable, high-performing state, we decided to downgrade back to MySQL 5.6.30. After the downgrade we saw an instant improvement.

The above query, executed in MySQL 5.6.30 on the NEW server, completed in 0.036 seconds.

Answer

Rick James · Apr 22, 2017

Wow! This is the first time I have seen something useful from Profiling. Dynamically creating an index is a new Optimization feature from Oracle. But it looks like that was not the best plan for this case.

First, I will recommend that you file a bug at http://bugs.mysql.com -- they don't like to have regressions, especially one this egregious. If possible, provide EXPLAIN FORMAT=JSON SELECT... and "Optimizer trace". (I do not consider tweaking obscure tunables to be an acceptable answer, but thanks for discovering them.)
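
For reference, both can be captured from a plain mysql session in one pass, since EXPLAIN is itself a traceable statement; a sketch using the query from the question verbatim:

SET optimizer_trace = 'enabled=on';

EXPLAIN FORMAT=JSON
SELECT  `main_table`.`entity_id`, main_table.`name`, main_table.`path`,
        `main_table`.`is_active`, `main_table`.`is_anchor`,
        `url_rewrite`.`request_path`
    FROM  `catalog_category_flat_store_1` AS `main_table`
    LEFT JOIN  `core_url_rewrite` AS `url_rewrite`
        ON  url_rewrite.category_id = main_table.entity_id
       AND  url_rewrite.is_system = 1
       AND  url_rewrite.store_id = 1
       AND  url_rewrite.id_path LIKE 'category/%'
    WHERE  (main_table.include_in_menu = '1')
      AND  (main_table.is_active = '1')
      AND  (main_table.path LIKE '1/494/%')
      AND  (`level` <= 2)
    ORDER BY  `main_table`.`position` ASC;

SELECT TRACE FROM INFORMATION_SCHEMA.OPTIMIZER_TRACE;   -- trace of the statement above
SET optimizer_trace = 'enabled=off';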

Back to helping you...

  • If you don't need LEFT, don't use it. It returns NULLs when there are no matching rows in the 'right' table; will that happen in your case?
  • Please provide SHOW CREATE TABLE. Meanwhile, I will guess that you don't have INDEX(include_in_menu, is_active, path). The first two can be in either order; path needs to be last.
  • And INDEX(category_id, is_system, store_id, id_path) with id_path last. (A sketch of both suggested indexes is at the end of this answer.)
  • Your query seems to have a pattern that works well for turning into a subquery:

(Note: this even preserves the semantics of LEFT.)

SELECT  `main_table`.`entity_id` , main_table.`name` , main_table.`path` ,
        `main_table`.`is_active` , `main_table`.`is_anchor` ,
        ( SELECT  `request_path`
            FROM  url_rewrite
            WHERE  url_rewrite.category_id=main_table.entity_id
              AND  url_rewrite.is_system = 1
              AND  url_rewrite.store_id  = 1
              AND  url_rewrite.id_path LIKE 'category/%' 
        ) as request_path
    FROM  `catalog_category_flat_store_1` AS `main_table`
    WHERE  (main_table.include_in_menu = '1')
      AND  (main_table.is_active = '1')
      AND  (main_table.path like '1/494/%')
      AND  (`level` <= 2)
    ORDER BY  `main_table`.`position` ASC
    LIMIT  0, 1000 

(The suggested indexes apply here, too.)
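
If you want to experiment, here is a rough sketch of the two suggested indexes (the index names are arbitrary; check SHOW CREATE TABLE first so you don't duplicate an existing index):

ALTER TABLE catalog_category_flat_store_1
    ADD INDEX idx_menu_active_path (include_in_menu, is_active, path);

ALTER TABLE core_url_rewrite
    ADD INDEX idx_cat_sys_store_idpath (category_id, is_system, store_id, id_path);
    -- if this exceeds the maximum key length, use a prefix on id_path, e.g. id_path(64)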