In a pylab program (which could probably be a matlab program as well) I have a numpy array of numbers representing distances: d[t] is the distance at time t (and the timespan of my data is len(d) time units).
The events I'm interested in are when the distance is below a certain threshold, and I want to compute the duration of these events. It's easy to get an array of booleans with b = d < threshold, and the problem comes down to computing the sequence of the lengths of the True-only runs in b. But I do not know how to do that efficiently (i.e. using numpy primitives), so I resorted to walking the array and doing manual change detection (i.e. initializing a counter when the value goes from False to True, increasing the counter as long as the value is True, and appending the counter to the sequence when the value goes back to False). But this is tremendously slow.
How can I efficiently detect that sort of sequence in numpy arrays?
Below is some Python code that illustrates my problem: the fourth dot takes a very long time to appear (if it doesn't, increase the size of the array).
from pylab import *

threshold = 7
print '.'
d = 10*rand(10000000)
print '.'
b = d < threshold
print '.'
durations = []
for i in xrange(len(b)):
    if b[i] and (i == 0 or not b[i-1]):             # False -> True transition: start a new event
        counter = 1
    if i > 0 and b[i-1] and b[i]:                   # still inside an event: keep counting
        counter += 1
    if (b[i-1] and not b[i]) or i == len(b)-1:      # True -> False transition (or end of data): record the duration
        durations.append(counter)
print '.'
Fully vectorized, generic NumPy RLE for any array (works with strings, booleans, etc. too).
Outputs a tuple of run lengths, start positions, and values.
import numpy as np

def rle(inarray):
    """ Run length encoding. Partial credit to R's rle function.
        Multi-datatype arrays catered for, including non-NumPy.
        returns: tuple (runlengths, startpositions, values) """
    ia = np.asarray(inarray)                # force numpy
    n = len(ia)
    if n == 0:
        return (None, None, None)
    else:
        y = np.array(ia[1:] != ia[:-1])     # pairwise unequal (string safe)
        i = np.append(np.where(y), n - 1)   # must include last element position
        z = np.diff(np.append(-1, i))       # run lengths
        p = np.cumsum(np.append(0, z))[:-1] # run start positions
        return (z, p, ia[i])
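To see how the pieces fit together, here is a quick walkthrough of the intermediate arrays for the np.array([5, 4, 4, 4, 4, 0, 0]) example shown further down:

ia = np.array([5, 4, 4, 4, 4, 0, 0])
y = ia[1:] != ia[:-1]                    # [True, False, False, False, True, False]: values change after indices 0 and 4
i = np.append(np.where(y), len(ia) - 1)  # [0, 4, 6]: last index of each run
z = np.diff(np.append(-1, i))            # [1, 4, 2]: run lengths
p = np.cumsum(np.append(0, z))[:-1]      # [0, 1, 5]: run start positions
ia[i]                                    # [5, 4, 0]: run values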
Pretty fast (i7):
xx = np.random.randint(0, 5, 1000000)
%timeit yy = rle(xx)
100 loops, best of 3: 18.6 ms per loop
Multiple data types:
rle([True, True, True, False, True, False, False])
Out[8]:
(array([3, 1, 1, 2]),
array([0, 3, 4, 5]),
array([ True, False, True, False], dtype=bool))
rle(np.array([5, 4, 4, 4, 4, 0, 0]))
Out[9]: (array([1, 4, 2]), array([0, 1, 5]), array([5, 4, 0]))
rle(["hello", "hello", "my", "friend", "okay", "okay", "bye"])
Out[10]:
(array([2, 1, 1, 2, 1]),
array([0, 2, 3, 4, 6]),
array(['hello', 'my', 'friend', 'okay', 'bye'],
dtype='|S6'))
Same results as Alex Martelli's runs_of_ones_array above:
xx = np.random.randint(0, 2, 20)
xx
Out[60]: array([1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1])
am = runs_of_ones_array(xx)
tb = rle(xx)
am
Out[63]: array([4, 5, 2, 5])
tb[0][tb[2] == 1]
Out[64]: array([4, 5, 2, 5])
%timeit runs_of_ones_array(xx)
10000 loops, best of 3: 28.5 µs per loop
%timeit rle(xx)
10000 loops, best of 3: 38.2 µs per loop
Slightly slower than Alex (but still very fast), and much more flexible.
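Applied back to the original question, the durations of the below-threshold events fall straight out of the run lengths and values. A minimal sketch, assuming d and threshold as defined there:

lengths, starts, values = rle(d < threshold)
durations = lengths[values]   # run lengths of the True runs, i.e. event durations
onsets = starts[values]       # where each below-threshold event begins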