I want to add numpy arrays with datatype uint8. I know that the values in these arrays may be large enough for an overflow to happen. So I get something like:
import numpy as np

a = np.array([100, 200, 250], dtype=np.uint8)
b = np.array([50, 50, 50], dtype=np.uint8)
a += b
Now, a is [150 250 44]. However, instead of an overflow I want values which are too large for uint8 to be the maximum allowed for uint8. So my desired result would be [150 250 255].
I could get this result with the following code:
a = np.array([100, 200, 250], dtype=np.uint8)
b = np.array([50, 50, 50], dtype=np.uint8)
c = np.zeros(3, dtype=np.uint16)  # wide accumulator, so the sums cannot overflow
c += a
c += b
c[c > 255] = 255                  # clamp to the uint8 maximum
a = c.astype(np.uint8)
The problem is that my arrays are really big, so creating a third array with a larger datatype could be a memory issue. Is there a fast and more memory-efficient way to achieve the described result?
You can achieve this by creating a third array of dtype uint8, plus a bool array (which together are more memory-efficient than one uint16 array). np.putmask is useful for avoiding a temp array.
a = np.array([100, 200, 250], dtype=np.uint8)
b = np.array([50, 50, 50], dtype=np.uint8)
c = 255 - b              # a temp uint8 array here; c[i] is the headroom left for a[i]
np.putmask(a, c < a, c)  # a temp bool array here; clamp a where a + b would overflow
a += b                   # now guaranteed not to overflow
However, as @moarningsun correctly points out, a bool array takes the same amount of memory as a uint8 array, so this isn't necessarily helpful. It is possible to solve this by avoiding having more than one temp array at any given time:
a = np.array([100, 200, 250], dtype=np.uint8)
b = np.array([50, 50, 50], dtype=np.uint8)
b = 255 - b              # old b is gone shortly after the new array is created
np.putmask(a, b < a, b)  # a temp bool array here, then it's gone
a += 255 - b             # the temp here equals the original b, then it's gone
This approach saves memory at the cost of extra CPU work.
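If you need this repeatedly, the technique can be wrapped in a helper. A minimal sketch, assuming both arrays are uint8 with the same shape (the name saturating_add_inplace is mine, not a numpy function); using out= keeps even the 255 - b step in place:

import numpy as np

def saturating_add_inplace(a, b):
    # hypothetical helper: a += b with saturation at 255, in place
    np.subtract(255, b, out=b)   # b now holds the headroom 255 - b, no temp array
    np.putmask(a, b < a, b)      # clamp a where a + (original b) would exceed 255
    np.subtract(255, b, out=b)   # restore the original b
    a += b                       # safe: no element can exceed 255
    return a

Note that b is modified during the call but restored before returning, and a temp bool array still appears inside np.putmask's condition.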
Another approach is to precalculate all possible results, which is O(1) extra memory (i.e. independent of the size of your arrays):
c = np.clip(np.arange(256) + np.arange(256)[..., np.newaxis], 0, 255).astype(np.uint8)
c
=> array([[ 0, 1, 2, ..., 253, 254, 255],
[ 1, 2, 3, ..., 254, 255, 255],
[ 2, 3, 4, ..., 255, 255, 255],
...,
[253, 254, 255, ..., 255, 255, 255],
[254, 255, 255, ..., 255, 255, 255],
[255, 255, 255, ..., 255, 255, 255]], dtype=uint8)
c[a,b]
=> array([150, 250, 255], dtype=uint8)
This approach is the most memory-efficient if your arrays are very big. Again, it is expensive in processing time, because it replaces the super-fast integer additions with slower 2-dim array indexing.
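For repeated use you would build the table once and reuse it; a minimal sketch (the names SAT_ADD and saturating_add_lut are mine):

import numpy as np

# 64 KiB table satisfying SAT_ADD[i, j] == min(i + j, 255)
SAT_ADD = np.clip(np.arange(256) + np.arange(256)[:, np.newaxis], 0, 255).astype(np.uint8)

def saturating_add_lut(a, b):
    # fancy indexing with two same-shaped uint8 arrays returns an array
    # of that shape, looking up each pairwise saturated sum
    return SAT_ADD[a, b]

With the arrays above, saturating_add_lut(a, b) gives array([150, 250, 255], dtype=uint8).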
EXPLANATION OF HOW IT WORKS
Construction of the c array above makes use of a numpy broadcasting trick. Adding an array of shape (N,) and an array of shape (N,1) broadcasts both to be (N,N)-like, thus the result is an NxN array of all possible sums. Then we clip it. We get a 2-dim array that satisfies c[i,j] = min(i+j, 255) for each i,j.
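To make the broadcasting step concrete, here is the same construction with N = 4 instead of 256:

row = np.arange(4)                 # shape (4,)
col = np.arange(4)[:, np.newaxis]  # shape (4, 1)
row + col                          # both broadcast to (4, 4): all pairwise sums
=> array([[0, 1, 2, 3],
          [1, 2, 3, 4],
          [2, 3, 4, 5],
          [3, 4, 5, 6]])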
Then what's left is using fancy indexing to grab the right values. Working with the input you provided, we access:
c[( [100, 200, 250] , [50, 50, 50] )]
The first index-array refers to the 1st dim, and the second to the 2nd dim. Thus the result is an array of the same shape as the index arrays ((N,)), consisting of the values [ c[100,50] , c[200,50] , c[250,50] ].
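The same fancy-indexing rule on a small made-up table, just to illustrate:

t = np.arange(16).reshape(4, 4)  # t[i, j] == 4*i + j
i = np.array([0, 2, 3])
j = np.array([1, 1, 3])
t[i, j]                          # picks t[0,1], t[2,1], t[3,3]
=> array([ 1,  9, 15])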