Why is hibernate batching / order_inserts / order_updates disabled by default?

fbe · Jan 3, 2015 · Viewed 15.8k times

Are there any reasons why Hibernate batching / hibernate.order_updates / hibernate.order_inserts are disabled by default? Is there any disadvantage to enabling a batch size of 50? The same goes for the order_updates / order_inserts parameters. Is there a use case where you shouldn't enable these features? Are there any performance impacts when using them?

I can only see that these settings help a lot when I need to reduce my query count, which is especially necessary in a cloud environment with high latency between my application and the database server.

Answer

przemek hertel · Jan 3, 2015

Generally, setting the batch size to a reasonable value and order_inserts / order_updates to true can significantly improve performance.

In all my projects I use this configuration as a basis:

hibernate.jdbc.batch_size = 100
hibernate.order_inserts   = true 
hibernate.order_updates   = true
hibernate.jdbc.fetch_size = 400
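
For reference, this is roughly how the same properties can be wired up programmatically with Hibernate's native bootstrap API. This is just a minimal sketch; the property keys are the ones shown above, while the class name and the omitted connection/dialect settings are placeholders:

import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class HibernateBootstrap {

    static SessionFactory buildSessionFactory() {
        Configuration cfg = new Configuration();

        // batching / ordering settings discussed above
        cfg.setProperty("hibernate.jdbc.batch_size", "100");
        cfg.setProperty("hibernate.order_inserts", "true");
        cfg.setProperty("hibernate.order_updates", "true");
        cfg.setProperty("hibernate.jdbc.fetch_size", "400");

        // connection, dialect and mapping settings would go here as usual

        return cfg.buildSessionFactory();
    }
}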

But yes, there can be a memory impact when using batching. It depends on the JDBC driver, though.

For example, the Oracle JDBC driver creates internal buffers for each PreparedStatement and reuses them. If you execute a simple update statement, you set some parameters with ps.setInt(1, ...), ps.setString(2, ...) etc., and Oracle converts these values to a byte representation and stores them in a buffer associated with this PreparedStatement and connection.

However, when your PreparedStatement uses a batch of size 100, this buffer will be 100 times larger. If you have a connection pool with, for example, 50 connections, there can be 50 such big buffers. And if you have 100 different statements using batching, all those buffers together can have a significant memory impact. When you set the batch size it becomes a global setting - Hibernate will use it for all inserts/updates.
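
To make the buffer growth concrete, this is roughly what happens at the plain JDBC level when a batch is used: all parameter sets are buffered on the statement before a single round trip sends them. The table, column and method names below are purely hypothetical, just to illustrate the pattern:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

class BatchInsertExample {

    void insertAll(Connection connection, List<String> names) throws SQLException {
        PreparedStatement ps = connection.prepareStatement(
                "insert into PERSON (NAME) values (?)");   // hypothetical table
        try {
            for (String name : names) {    // e.g. 100 rows per flush
                ps.setString(1, name);
                ps.addBatch();             // parameters are buffered, not sent yet
            }
            ps.executeBatch();             // one round trip sends the whole batch
        } finally {
            ps.close();
        }
    }
}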

However, I found that in all my projects the performance gain was more important than this memory impact, and that is why I use batch_size=100 as my default.

As for order_inserts and order_updates, I think they are disabled by default because these settings make sense only when batching is on. With batching turned off, the ordering is simply overhead.

You can find more info in Oracle's white paper:

http://www.oracle.com/technetwork/topics/memory.pdf

in section "Statement Batching and Memory Use".

==== EDIT 2016.05.31 ====

A word about the order_inserts and order_updates properties. Let's say we have entities A and B (a sketch of their mappings follows the list below) and persist 6 objects this way:

session.save(A1);  // added to action queue
session.save(B1);  // added to action queue
session.save(A2);  // ...
session.save(B2);  // ...
session.save(A3);  // ...
session.save(B3);  // ...

After the above execution:

  • these 6 objects have identifiers generated
  • these 6 objects are attached to the session (StatefulPersistenceContext: entitiesByKey, entityEntries, etc. /Hib.v3/)
  • these 6 objects are added to the ActionQueue in the same order: [A1, B1, A2, B2, A3, B3]
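
The mappings of A and B are not part of the original example; here is a minimal sketch of what they could look like, assuming sequence-generated identifiers so that save() can assign ids while the actual inserts stay queued in the ActionQueue until flush (the field names are purely illustrative):

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;

@Entity
class A {
    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE)  // assumed: ids come from a sequence
    Long id;

    String name;   // some payload column
}

@Entity
class B {
    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE)
    Long id;

    String name;
}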

Now, consider 2 cases:

case 1: order_inserts = false

During the flush phase Hibernate performs 6 insert statements:

ActionQueue = [A1, B1, A2, B2, A3, B3]
insert into A - (A1)
insert into B - (B1)
insert into A - (A2)
insert into B - (B2)
insert into A - (A3)
insert into B - (B3)

case 2: order_inserts = true, batching allowed

Now, during the flush phase Hibernate performs 2 batch insert statements:

ActionQueue = [A1, A2, A3, B1, B2, B3]
insert into A -  (A1, A2, A3)
insert into B -  (B1, B2, B3)

I investigated this for Hibernate v3; I think Hibernate v4 uses the ActionQueue in the same way.