I'm attempting to insert a modified document back into Cassandra with a new key, and I'm having a hard time figuring out what the error message is pointing at. Others who have had similar problems seem to have had issues with their keys, but in my case None is just the value of a few of the keys. How do I solve this?
# Build "INSERT INTO wiki.pages (col1,col2,...) VALUES (:col1,:col2,...)"
# from the keys of the current row dict, then bind the dict as parameters.
keys = ','.join(current.keys())
params = [':' + x for x in current.keys()]
values = ','.join(params)
query = "INSERT INTO wiki.pages (%s) VALUES (%s)" % (keys, values)
query = query.encode('utf-8')
cursor.execute(query, current)
Here's the data for query and current:
INSERT INTO wiki.pages (changed,content,meta,attachment,revision,page,editor)
VALUES (:changed,:content,:meta,:attachment,:revision,:page,:editor)
{
u'changed': '2013-02-15 16:31:49',
u'content': 'Testing',
u'meta': None,
u'attachment': None,
u'revision': 2,
u'page': u'FrontPage',
u'editor': 'Anonymous'
}
This fails with the following error:
cql.apivalues.ProgrammingError:
Bad Request: line 1:123 no viable alternative at input 'None'
The "no viable alternative" means that the data type for some key doesn't match the schema for that column family column, unfortunately it doesn't plainly say that in the error message.
In my case the data type for meta was:
map<text,text>
For this reason, None was considered a bad value at insertion time. I fixed the problem by replacing None with an empty dict prior to the insert:
if current['meta'] is None:
    current['meta'] = dict()
The CQL driver happily accepts an empty dict as the new value for a map column, while None is not allowed, even though querying an empty map column returns None.
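If more than one column is affected, the same fix can be generalized before each insert. Here's a minimal sketch, assuming you keep a small mapping from collection-typed column names to their empty equivalents (the attachment entry is an assumption based on my schema; adjust both entries to your actual column types):

# Empty-collection substitutes for the collection-typed columns of wiki.pages.
# 'meta' is map<text,text> per the schema; 'attachment' is assumed to be a
# map as well, so adjust these to match your schema.
EMPTY_COLLECTIONS = {
    'meta': dict,
    'attachment': dict,
}

def sanitize_for_insert(row):
    # Replace None with an empty collection so the CQL driver accepts it.
    for column, factory in EMPTY_COLLECTIONS.items():
        if row.get(column) is None:
            row[column] = factory()
    return row

cursor.execute(query, sanitize_for_insert(current))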
Returning None but not accepting None did not feel intuitive, so I later created a custom wrapper around cursor.fetchone() that returns a map of columns instead of a list of columns, and that also checks whether a MapType, ListType or SetType column has come back as None. Any such None is replaced with an empty dict(), list() or set(), which avoids issues like this one when inserting modified data back into Cassandra. This seems to work nicely.
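For illustration, here is a minimal sketch of such a wrapper. It assumes the driver exposes column names through the standard DB-API cursor.description attribute, and the COLLECTION_DEFAULTS mapping below is hypothetical; you would build it from your own schema:

# Hypothetical mapping from collection-typed column names to their empty
# equivalents; build this from your actual schema.
COLLECTION_DEFAULTS = {
    'meta': dict,      # map<text,text> -> dict()
    # 'tags': set,     # set<text>      -> set()
    # 'history': list, # list<text>     -> list()
}

def fetchone_as_dict(cursor):
    # Return the next row as {column_name: value}, or None when exhausted.
    row = cursor.fetchone()
    if row is None:
        return None
    names = [col[0] for col in cursor.description]
    result = dict(zip(names, row))
    # Normalize None to an empty collection so the row can later be
    # modified and inserted back without the parse error above.
    for name, factory in COLLECTION_DEFAULTS.items():
        if name in result and result[name] is None:
            result[name] = factory()
    return result

# Usage: current = fetchone_as_dict(cursor)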