I have a really large CSV file that I opened in pandas as follows:
import pandas
df = pandas.read_csv('large_txt_file.txt')
Once I do this, my memory usage increases by 2 GB, which is expected because this file contains millions of rows. My problem comes when I need to release this memory. I ran:
del df
However, my memory usage did not drop. Is this the wrong approach to releasing memory used by a pandas DataFrame? If it is, what is the proper way?
Reducing memory usage in Python is difficult, because Python does not actually release memory back to the operating system. If you delete objects, then the memory is available to new Python objects, but not free()'d back to the system (see this question).
If you stick to numeric numpy arrays, those are freed, but boxed objects are not.
>>> import os, psutil, numpy as np
>>> def usage():
...     process = psutil.Process(os.getpid())
...     return process.memory_info().rss / float(2 ** 20)  # resident set size in MiB
...
>>> usage() # initial memory usage
27.5
>>> arr = np.arange(10 ** 8) # create a large array without boxing
>>> usage()
790.46875
>>> del arr
>>> usage()
27.52734375 # numpy just free()'d the array
>>> arr = np.arange(10 ** 8, dtype='O') # create lots of objects
>>> usage()
3135.109375
>>> del arr
>>> usage()
2372.16796875 # numpy frees the array, but python keeps the heap big
Python keeps our memory at its high watermark, but we can reduce the total number of dataframes we create. When modifying your dataframe, prefer inplace=True, so you don't create copies.
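For instance, with dropna (a minimal sketch; the same pattern applies to most mutating DataFrame methods that accept inplace):

import numpy as np
import pandas as pd

df = pd.DataFrame({'foo': [1.0, np.nan, 3.0]})

df.dropna(inplace=True)  # mutates df; no new DataFrame is returned and bound
# df = df.dropna()       # by contrast, this builds a whole new DataFrame first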
Another common gotcha is holding on to copies of previously created dataframes in ipython:
In [1]: import pandas as pd
In [2]: df = pd.DataFrame({'foo': [1,2,3,4]})
In [3]: df + 1
Out[3]:
   foo
0    2
1    3
2    4
3    5

In [4]: df + 2
Out[4]:
   foo
0    3
1    4
2    5
3    6

In [5]: Out  # Still has all our temporary DataFrame objects!
Out[5]:
{3:    foo
 0    2
 1    3
 2    4
 3    5, 4:    foo
 0    3
 1    4
 2    5
 3    6}
You can fix this by typing %reset Out to clear your history. Alternatively, you can adjust how much history ipython keeps with ipython --cache-size=5 (default is 1000).
Wherever possible, avoid using object dtypes.
>>> df.dtypes
foo    float64  # 8 bytes per value
bar      int64  # 8 bytes per value
baz     object  # at least 48 bytes per value, often more
Values with an object dtype are boxed, which means the numpy array just contains a pointer and you have a full Python object on the heap for every value in your dataframe. This includes strings.
Whilst numpy supports fixed-size strings in arrays, pandas does not (it's caused user confusion). This can make a significant difference:
>>> import numpy as np
>>> arr = np.array(['foo', 'bar', 'baz'])
>>> arr.dtype
dtype('S3')
>>> arr.nbytes
9
>>> import sys; import pandas as pd
>>> s = pd.Series(['foo', 'bar', 'baz'])
>>> s.dtype
dtype('O')
>>> sum(sys.getsizeof(x) for x in s)
120
You may want to avoid using string columns, or find a way of representing string data as numbers.
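One way to do that (an illustrative sketch, not from the original answer) is the category dtype, which stores each distinct string once and represents the column as small integer codes, so it helps whenever the column is repetitive:

import pandas as pd

s = pd.Series(['foo', 'bar', 'baz', 'foo', 'foo'] * 100000)
print(s.memory_usage(deep=True))    # object dtype: one Python str object per row

cat = s.astype('category')          # distinct strings stored once, rows become int codes
print(cat.memory_usage(deep=True))  # typically a small fraction of the object version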
If you have a dataframe that contains many repeated values (NaN is very common), then you can use a sparse data structure to reduce memory usage:
>>> df1.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 39681584 entries, 0 to 39681583
Data columns (total 1 columns):
foo    float64
dtypes: float64(1)
memory usage: 605.5 MB
>>> df1.shape
(39681584, 1)
>>> df1.foo.isnull().sum() * 100. / len(df1)
20.628483479893344 # so 20% of values are NaN
>>> df1.to_sparse().info()
<class 'pandas.sparse.frame.SparseDataFrame'>
Int64Index: 39681584 entries, 0 to 39681583
Data columns (total 1 columns):
foo    float64
dtypes: float64(1)
memory usage: 543.0 MB
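Note that to_sparse() was deprecated in pandas 0.25 and removed in 1.0; in modern pandas the same idea is expressed with a sparse dtype. A rough equivalent (sketched on a small stand-in for df1):

import numpy as np
import pandas as pd

df1 = pd.DataFrame({'foo': [1.0, np.nan, np.nan, 2.0]})  # stand-in for the big frame above

sparse = df1.astype(pd.SparseDtype('float64', np.nan))   # only non-fill values are stored
print(sparse.memory_usage())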
You can view the memory usage (docs):
>>> df.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 39681584 entries, 0 to 39681583
Data columns (total 14 columns):
...
dtypes: datetime64[ns](1), float64(8), int64(1), object(4)
memory usage: 4.4+ GB
As of pandas 0.17.1, you can also do df.info(memory_usage='deep') to see memory usage including objects.
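Relatedly, DataFrame.memory_usage(deep=True) gives a per-column byte count, which is useful for spotting which columns to convert (shown here on a small made-up frame):

import pandas as pd

df = pd.DataFrame({'foo': [1.0, 2.0, 3.0], 'baz': ['a', 'b', 'c']})
print(df.memory_usage(deep=True))  # bytes per column, counting the Python str objects in 'baz'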