Precision lost while using read_csv in pandas

user904976 · Apr 28, 2016 · Viewed 14.6k times

I have data in the below format in a text file, which I am trying to read into a pandas DataFrame.

895|2015-4-23|19|10000|LA|0.4677978806|0.4773469340|0.4089938425|0.8224291972|0.8652525793|0.6829942860|0.5139162227|

As you can see, there are 10 digits after the decimal point in the input file.

df = pd.read_csv('mockup.txt',header=None,delimiter='|')

When I read it into a DataFrame, I lose the last 4 digits:

df[5].head()

0    0.467798
1    0.258165
2    0.860384
3    0.803388
4    0.249820
Name: 5, dtype: float64

How can I get the full precision as present in the input file? I have some matrix operations that need to be performed, so I cannot cast it to a string.

I figured out that I have to do something with dtype, but I am not sure where to use it.

Answer

jezrael · Apr 28, 2016

It is only a display problem; see the docs:

# temporarily set display precision
with pd.option_context('display.precision', 10):
    print(df)

     0          1   2      3   4             5            6             7   \
0  895  2015-4-23  19  10000  LA  0.4677978806  0.477346934  0.4089938425   

             8             9            10            11  12  
0  0.8224291972  0.8652525793  0.682994286  0.5139162227 NaN    
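
To confirm that only the display is truncated, you can format a single element with more decimal places (a quick check; the last digit could in principle differ by one ulp because of the fast parser mentioned below):

# the Series stores full float64 values; only the default display rounds to 6 digits
print('{:.10f}'.format(df[5].iloc[0]))
# 0.4677978806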

EDIT (thank you, Mark Dickinson):

Pandas uses a dedicated decimal-to-binary converter that sacrifices perfect accuracy for the sake of speed. Passing float_precision='round_trip' to read_csv fixes this. See the documentation for more.
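
For example (a minimal sketch, reusing the mockup.txt file from the question):

import pandas as pd

# use the slower but exact decimal-to-binary converter
df = pd.read_csv('mockup.txt', header=None, delimiter='|',
                 float_precision='round_trip')

# display more digits to verify the parsed values
with pd.option_context('display.precision', 10):
    print(df[5].head())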