Fastest way to compute entropy of each numpy array row?

erogol · Nov 9, 2015 · Viewed 17.9k times

I have an array of size MxN and I'd like to compute the entropy of each row. What would be the fastest way to do so?

Answer

Warren Weckesser · Nov 9, 2015

scipy.special.entr computes -x*log(x) for each element in an array. After calling that, you can sum the rows.

Here's an example. First, create an array p of positive values whose rows sum to 1:

In [23]: np.random.seed(123)

In [24]: x = np.random.rand(3, 10)

In [25]: p = x/x.sum(axis=1, keepdims=True)

In [26]: p
Out[26]: 
array([[ 0.12798052,  0.05257987,  0.04168536,  0.1013075 ,  0.13220688,
         0.07774843,  0.18022149,  0.1258417 ,  0.08837421,  0.07205402],
       [ 0.08313743,  0.17661773,  0.1062474 ,  0.01445742,  0.09642919,
         0.17878489,  0.04420998,  0.0425045 ,  0.12877228,  0.1288392 ],
       [ 0.11793032,  0.15790292,  0.13467074,  0.11358463,  0.13429674,
         0.06003561,  0.06725376,  0.0424324 ,  0.05459921,  0.11729367]])

In [27]: p.shape
Out[27]: (3, 10)

In [28]: p.sum(axis=1)
Out[28]: array([ 1.,  1.,  1.])

Now compute the entropy of each row. entr uses the natural logarithm, so to get the entropy in bits (base 2), divide the result by log(2).

In [29]: from scipy.special import entr

In [30]: entr(p).sum(axis=1)
Out[30]: array([ 2.22208731,  2.14586635,  2.22486581])

In [31]: entr(p).sum(axis=1)/np.log(2)
Out[31]: array([ 3.20579434,  3.09583074,  3.20980287])
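
If you need this in more than one place, you could wrap the two steps in a small helper. This is just a sketch; the name row_entropy and the base argument are arbitrary choices, not anything from scipy:

import numpy as np
from scipy.special import entr

def row_entropy(p, base=None):
    """Entropy of each row of p (rows are assumed to sum to 1).

    Uses the natural logarithm by default; pass base=2 to get bits.
    """
    h = entr(p).sum(axis=1)
    if base is not None:
        h /= np.log(base)
    return h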

If you don't want the dependency on scipy, you can use the explicit formula:

In [32]: (-p*np.log2(p)).sum(axis=1)
Out[32]: array([ 3.20579434,  3.09583074,  3.20980287])
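
One caveat with the explicit formula: np.log2(0) is -inf, so -p*np.log2(p) gives nan (plus a runtime warning) for probabilities that are exactly zero, whereas entr treats 0*log(0) as 0. One possible guard, as a sketch reusing the p from above:

import numpy as np

# Replace zeros before taking the log so that 0*log2(0) is treated as 0,
# matching the convention used by scipy.special.entr.
p_safe = np.where(p > 0, p, 1.0)   # log2(1) == 0, so those terms drop out
entropy_bits = -(p * np.log2(p_safe)).sum(axis=1)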