I have a very large training set (~2 GB) in a CSV file. The file is too large to read directly into memory (read.csv() brings the computer to a halt) and I would like to reduce the size of the data file using PCA. The problem is that (as far as I can tell) I need to read the file into memory in order to run a PCA algorithm (e.g., princomp()).
I have tried the bigmemory package to read the file in as a big.matrix, but princomp doesn't function on big.matrix objects, and it doesn't seem like big.matrix can be converted into something like a data.frame.
Is there a way of running princomp on a large data file that I'm missing?
I'm a relative novice at R, so some of this may be obvious to more seasoned users (apologies in advance).
Thanks for any info.
The way I solved it was by calculating the sample covariance matrix iteratively. That way you only need a subset of the data at any point in time. Reading in just a subset of the data can be done with readLines, where you open a connection to the file and read it chunk by chunk. The algorithm looks something like this (it is a two-step algorithm):
Calculate the mean values per column (assuming the columns are the variables); a sketch in R follows this list:
- Open a connection to the file: con = open(...)
- Read the next chunk of lines: readLines(con, n = 1000)
- Add the column sums of the chunk to a running total: sos_column = sos_column + new_sos
- Repeat until the end of the file, then divide the totals by the number of rows to get the means.
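A minimal sketch of this first pass, assuming a headerless, comma-separated, all-numeric file; the file name train.csv and the chunk size of 1000 lines are placeholders:

    con <- file("train.csv", open = "r")
    col_sums <- NULL
    n_rows <- 0
    repeat {
      lines <- readLines(con, n = 1000)       # read the next chunk of raw lines
      if (length(lines) == 0) break           # stop at end of file
      chunk <- matrix(as.numeric(unlist(strsplit(lines, ","))),
                      nrow = length(lines), byrow = TRUE)
      if (is.null(col_sums)) col_sums <- numeric(ncol(chunk))
      col_sums <- col_sums + colSums(chunk)   # accumulate per-column sums
      n_rows <- n_rows + nrow(chunk)
    }
    close(con)
    col_means <- col_sums / n_rows            # per-column means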
Calculate the covariance matrix (also sketched below):
- Open a connection to the file: con = open(...)
- Read the next chunk of lines: readLines(con, n = 1000)
- Subtract the column means from the chunk and accumulate its cross products using crossprod
- Repeat until the end of the file, then divide by the number of rows minus one to get the covariance matrix.
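A corresponding sketch of the second pass, reusing col_means and n_rows from the first pass (same placeholder file name and chunk size):

    con <- file("train.csv", open = "r")
    xtx <- matrix(0, length(col_means), length(col_means))
    repeat {
      lines <- readLines(con, n = 1000)
      if (length(lines) == 0) break
      chunk <- matrix(as.numeric(unlist(strsplit(lines, ","))),
                      nrow = length(lines), byrow = TRUE)
      centered <- sweep(chunk, 2, col_means)   # subtract the column means
      xtx <- xtx + crossprod(centered)         # accumulate t(centered) %*% centered
    }
    close(con)
    covmat <- xtx / (n_rows - 1)               # sample covariance matrix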
When you have the covariance matrix, just call princomp with covmat = your_covmat and princomp will skip calculating the covariance matrix itself.
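For example, continuing the sketches above:

    pca <- princomp(covmat = covmat)   # covmat from the second pass
    summary(pca)                       # standard deviations and proportion of variance
    loadings(pca)                      # the principal axes

Note that because princomp never sees the raw data, the result contains no component scores; if you need them, you can project mean-centered chunks of the data onto the loadings yourself.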
In this way the datasets you can process can be much, much larger than your available RAM. During the iterations, the memory usage is roughly the memory the chunk takes (e.g., 1000 rows); after that, the memory usage is limited to the covariance matrix (nvar * nvar doubles).