I need to store a big list of integers in Bigtable (the database). For efficiency I am storing them as the diff between two consecutive items.
For example:
original_list = [1005, 1004, 1003, 1004, 1006]
I store the above list (which actually contains more than 1000k items) as
start = 1005
diff = [-1, -1, 1, 2]
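For reference, the diff list is just the successive differences between consecutive items, roughly like this (the helper name is only illustrative):

def make_diffs(values):
    # Keep the first value separately; store only successive differences.
    start = values[0]
    diffs = [b - a for a, b in zip(values, values[1:])]
    return start, diffs

start, diff = make_diffs([1005, 1004, 1003, 1004, 1006])
# start == 1005, diff == [-1, -1, 1, 2]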
The closest I could manage is:
ltp = [start]
map(lambda x: ltp.append(ltp[-1] + x), tick)
I am looking for an efficient way to convert it back into the original list.
For data structures this large, numpy will work well. For this example it's over 200x faster (see below), and a bit easier to code, basically just
add.accumulate(diff)
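In full, rebuilding the original values looks something like this (a minimal sketch; nx.cumsum would do the same job as nx.add.accumulate):

import numpy as nx

start = 1005
diff = nx.array([-1, -1, 1, 2])

# Prepend start, then take the running sum to rebuild the original values.
orig = nx.add.accumulate(nx.concatenate(([start], diff)))
# orig -> array([1005, 1004, 1003, 1004, 1006])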
Comparison between numpy and direct list manipulation:
import numpy as nx
import timeit

N = 10000
diff_nx = nx.zeros(N, dtype=nx.int64)
diff_py = list(diff_nx)
start = 1005

def f0():
    # Rebuild the original list with a plain Python loop.
    orig = [start]
    for x in diff_py:
        orig.append(orig[-1] + x)

def f1():
    # Rebuild with numpy: put start in slot 0, then take the running sum.
    diff_nx[0] = start
    nx.add.accumulate(diff_nx)

t = timeit.Timer("f0()", "from __main__ import f0, f1, diff_nx, diff_py, nx, start")
print(t.timeit(number=1000))
t = timeit.Timer("f1()", "from __main__ import f0, f1, diff_nx, diff_py, nx, start")
print(t.timeit(number=1000))
gives
13.4044158459 # for list looping
0.0474112033844 # for numpy accumulate
Really, though, it seems better to reuse an established compression algorithm, as can easily be done with PyTables, rather than rolling your own as you seem to be doing here.
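For example, something along these lines (just a sketch: the file name, node name, and zlib settings are assumptions, not anything from your setup):

import numpy as nx
import tables

data = nx.arange(1005, 1005 + 1000000, dtype=nx.int64)  # stand-in for your real values

# Let HDF5/PyTables compress the raw integers instead of hand-rolled deltas.
filters = tables.Filters(complevel=5, complib='zlib')
with tables.open_file('ticks.h5', mode='w') as h5:
    arr = h5.create_carray(h5.root, 'ticks', atom=tables.Int64Atom(),
                           shape=data.shape, filters=filters)
    arr[:] = data

with tables.open_file('ticks.h5', mode='r') as h5:
    restored = h5.root.ticks[:]   # comes back as a numpy array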
Also, note that I'm suggesting you read the data in with room for the prepended start term, rather than rebuilding the list with the start term prepended afterwards, so you don't have to do the copy.
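That idea looks roughly like this (again just a sketch, with the diffs hard-coded where they would really come from the database):

import numpy as nx

start = 1005
diffs = [-1, -1, 1, 2]                    # would be read from Bigtable

# Allocate one extra slot so start and the diffs live in the same array.
values = nx.empty(len(diffs) + 1, dtype=nx.int64)
values[0] = start
values[1:] = diffs
nx.add.accumulate(values, out=values)     # in-place running sum, no extra copy
# values -> array([1005, 1004, 1003, 1004, 1006])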