How to avoid an OOM (Out of Memory) error when retrieving all records from a huge table?

janetsmith · Jul 10, 2009

I have been given the task of converting a huge table to a custom XML file. I will be using Java for this job.

If I simply issue a "SELECT * FROM customer", it may return a huge amount of data and eventually cause an OOM error. I wonder: is there a way to process each record as soon as it becomes available, and remove it from memory afterwards, while the SQL retrieval is still in progress?

--- edited on 13 Jul 2009

Let me elaborate on my question. I have one db server and one application server. When I issue a select query in the application, the data travels from the db server to the app server.

I believe (correct me if I am wrong) that the ResultSet needs to wait until it has received all records in the query. So even if we set the fetch size to 4, for a 1000-record table we still end up with all 1000 records in the app server's heap memory, correct? The fetch size only affects the number of round trips to/from the db server.

My question is: how do I start processing those 4 (or however many) records immediately after they arrive at the app server, and then dispose of them to free up memory in the app server?
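
In other words, I am hoping for something like the following pseudo-code, where writeToXml() is just a made-up helper standing in for whatever per-record processing I do:

java.sql.ResultSet rs = stmt.executeQuery("SELECT * FROM customer");
while (rs.next()) {
    writeToXml(rs); // made-up helper: handle this one record right now...
    // ...and then let it be garbage-collected, instead of keeping
    // all 1000 rows in the app server's heap at once
}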

Answer

Clint · Jul 10, 2009

With a little more information I could give a more helpful answer.

If you are using MySQL:

// This exact combination (forward-only, read-only, fetch size of
// Integer.MIN_VALUE) tells the MySQL driver to stream the result
// row by row instead of buffering the whole table in memory:
java.sql.Statement stmt = conn.createStatement(java.sql.ResultSet.TYPE_FORWARD_ONLY,
       java.sql.ResultSet.CONCUR_READ_ONLY);
stmt.setFetchSize(Integer.MIN_VALUE);
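
With the statement configured that way, the driver hands you rows one at a time, so you can write each record out and let it be garbage-collected. A minimal sketch of the consumption side, using StAX so the XML is also produced incrementally (the id and name columns and the customers.xml filename are just placeholders for your schema):

java.sql.ResultSet rs = stmt.executeQuery("SELECT * FROM customer");
javax.xml.stream.XMLStreamWriter xml =
    javax.xml.stream.XMLOutputFactory.newInstance()
        .createXMLStreamWriter(new java.io.FileOutputStream("customers.xml"), "UTF-8");
xml.writeStartDocument("UTF-8", "1.0");
xml.writeStartElement("customers");
while (rs.next()) {
    // only the current row is held in memory; write it, then move on
    xml.writeStartElement("customer");
    xml.writeAttribute("id", rs.getString("id"));   // placeholder column
    xml.writeCharacters(rs.getString("name"));      // placeholder column
    xml.writeEndElement();
}
xml.writeEndElement();
xml.writeEndDocument();
xml.close();
rs.close();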

If you are using Oracle, from the Oracle JDBC FAQ (http://www.oracle.com/technology/tech/java/sqlj_jdbc/htdocs/jdbc_faq.html):

java.util.Properties info = new java.util.Properties();
info.put("user", "scott");
info.put("password", "tiger");
// number of rows fetched per round trip (Oracle's default is 10)
info.put("defaultRowPrefetch", "15");
java.sql.Connection conn =
    java.sql.DriverManager.getConnection("jdbc:oracle:oci:@", info);
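
Note that defaultRowPrefetch applies to every statement created on that connection. If you only want to tune a single query, the standard JDBC Statement.setFetchSize also works with the Oracle driver; a sketch, reusing the conn from above:

java.sql.Statement stmt = conn.createStatement();
stmt.setFetchSize(15); // fetch 15 rows per round trip for this statement only
java.sql.ResultSet rs = stmt.executeQuery("SELECT * FROM customer");
while (rs.next()) {
    // only the current batch of rows is buffered in the app server;
    // process each record here, then let it go out of scope
}
rs.close();
stmt.close();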