I've got a CSV file with a format that looks like this:
"FieldName1", "FieldName2", "FieldName3", "FieldName4"
"04/13/2010 14:45:07.008", "7.59484916392", "10", "6.552373"
"04/13/2010 14:45:22.010", "6.55478493312", "9", "3.5378543"
...
Note that there are double-quote characters at the start and end of each line in the CSV file, and the string "," is used to delimit fields within each line. The number of fields in the CSV file can vary from file to file.
When I try to read this into numpy via:
import numpy as np
data = np.genfromtxt(csvfile, dtype=None, delimiter=',', names=True)
all the data gets read in as string values, surrounded by double-quote characters. Not unreasonable, but not much use to me, as I then have to go back and convert every column to its correct type.
When I use delimiter='","' instead, everything works as I'd like, except for the first and last fields. Since the start and end of each line hold only a single double-quote character rather than the full delimiter string, those two fields get read in as, e.g., "04/13/2010 14:45:07.008 and 6.552373" (note the leading and trailing double-quote characters respectively). Because of these redundant characters, numpy assumes the first and last fields are both String types; I don't want that to be the case.
Is there a way of instructing numpy to read in files formatted in this fashion as I'd like, without having to go back and "fix" the structure of the numpy array after the initial read?
The basic problem is that NumPy doesn't understand the concept of stripping quotes (whereas the csv module does). When you say delimiter='","', you're telling NumPy that the column delimiter is literally a quoted comma, i.e. the quotes are around the comma, not the value, so the extra quotes you get on the first and last columns are expected.
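You can see the effect with a plain str.split() on a sample line (inter-field spaces removed for clarity):

line = '"04/13/2010 14:45:07.008","7.59484916392","10","6.552373"'
print(line.split('","'))
# ['"04/13/2010 14:45:07.008', '7.59484916392', '10', '6.552373"']

Only the interior delimiters match, so the stray quotes survive on the two outer fields.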
Looking at the function docs, I think you'll need to set the converters parameter to strip the quotes for you (the default converters do not):
import re
import numpy as np

fieldFilter = re.compile(r'^"?([^"]*)"?$')

def filterTheField(s):
    # Strip any surrounding double quotes, then convert to float.
    m = fieldFilter.match(s.strip())
    if m:
        return float(m.group(1))
    else:
        return 0.0  # or whatever default

#...
# Yes, sorry, you have to know the number of columns, since the NumPy docs
# don't say you can specify a default converter for all columns.
convs = dict((col, filterTheField) for col in range(numColumns))
data = np.genfromtxt(csvfile, dtype=None, delimiter=',', names=True,
                     converters=convs)
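For instance, here's a minimal sketch on an all-numeric file (with a plain header row for simplicity; filterTheField turns every field into a float, so the date column from your sample would still need different handling). The encoding argument makes genfromtxt hand the converters str rather than bytes on Python 3:

from io import StringIO

sample = StringIO('A, B\n"1.5", "2"\n"3.25", "4"\n')
convs = dict((col, filterTheField) for col in range(2))
data = np.genfromtxt(sample, dtype=None, delimiter=',', names=True,
                     converters=convs, encoding='utf-8')
# data['A'] -> array([1.5, 3.25]), data['B'] -> array([2., 4.])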
Or abandon np.genfromtxt() and let csv.reader give you the file's contents a row at a time, as lists of strings; then you just iterate through the elements and build the matrix:
import csv

reader = csv.reader(csvfile, skipinitialspace=True)  # the sample has a space after each comma
header = next(reader)  # grab the column headings (csv.reader has no .fieldnames; that's DictReader)
result = np.array([[float(col) for col in row] for row in reader])
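If you're reading from a file on disk, the csv docs recommend opening it with newline='' (the filename here is just a placeholder):

import csv
import numpy as np

with open('data.csv', newline='') as csvfile:
    reader = csv.reader(csvfile, skipinitialspace=True)
    header = next(reader)
    result = np.array([[float(col) for col in row] for row in reader])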
EDIT: Okay, so it looks like your file isn't all floats. In that case, you can set convs as needed in the genfromtxt case, or create a vector of conversion functions in the csv.reader case:
from datetime import datetime

def parseTimestamp(s):
    return datetime.strptime(s, '%m/%d/%Y %H:%M:%S.%f')  # format inferred from the sample

reader = csv.reader(csvfile, skipinitialspace=True)
header = next(reader)  # the first row holds the column headings
converters = [parseTimestamp, float, int, float]
result = np.array([[conv(col) for col, conv in zip(row, converters)]
                   for row in reader])
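Note that result comes out with dtype=object here, since the columns hold a mix of datetimes, floats, and ints; that's about the best a plain ndarray can do with heterogeneous rows.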
EDIT 2: Okay, variable column count... Your data source just wants to make life difficult. Luckily, we can just use magic
...
reader = csv.reader(csvfile, skipinitialspace=True)
header = next(reader)  # skip the heading row
result = np.array([[magic(col) for col in row] for row in reader])
... where magic() is just a name I got off the top of my head for a function. (Psyche!)
At worst, it could be something like:

from datetime import datetime

def magic(s):
    if '/' in s:
        # looks like a timestamp (format inferred from the sample data)
        return datetime.strptime(s, '%m/%d/%Y %H:%M:%S.%f')
    elif '.' in s:
        return float(s)
    else:
        return int(s)
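For the sample fields, that dispatch works out like this:

>>> magic('10')
10
>>> magic('6.552373')
6.552373
>>> magic('04/13/2010 14:45:07.008')
datetime.datetime(2010, 4, 13, 14, 45, 7, 8000)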
Maybe NumPy has a function that takes a string and returns a single element with the right type. numpy.fromstring() looks close, but it might interpret the space in your timestamps as a column separator.
P.S. One downside with csv.reader that I see is that it doesn't discard comments; real csv files don't have comments.
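If your files ever do grow comment lines, a small filter in front of the reader is an easy stopgap (the # comment marker here is purely hypothetical):

reader = csv.reader(
    (line for line in csvfile if not line.lstrip().startswith('#')),
    skipinitialspace=True)

csv.reader() happily accepts any iterable of strings, so a generator expression works fine.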