How to implement the ReLU function in Numpy

Andoni Zubizarreta · Aug 20, 2015 · Viewed 113.3k times

I want to make a simple neural network that uses the ReLU function. Can someone give me a clue about how to implement the function using NumPy?

Answer

Sid · Aug 20, 2015

There are a couple of ways.

>>> x = np.random.random((3, 2)) - 0.5
>>> x
array([[-0.00590765,  0.18932873],
       [-0.32396051,  0.25586596],
       [ 0.22358098,  0.02217555]])
>>> np.maximum(x, 0)
array([[ 0.        ,  0.18932873],
       [ 0.        ,  0.25586596],
       [ 0.22358098,  0.02217555]])
>>> x * (x > 0)
array([[-0.        ,  0.18932873],
       [-0.        ,  0.25586596],
       [ 0.22358098,  0.02217555]])
>>> (abs(x) + x) / 2
array([[ 0.        ,  0.18932873],
       [ 0.        ,  0.25586596],
       [ 0.22358098,  0.02217555]])
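
If you want this as a reusable function, a minimal sketch based on the first approach (the name relu is just illustrative, not from the answer):

import numpy as np

def relu(x):
    # Element-wise maximum of each entry and 0; returns a new array
    return np.maximum(x, 0)

np.where(x > 0, x, 0) would give the same result.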

Timing the three methods with the following code (run in IPython, since it uses the %timeit magic):

import numpy as np

x = np.random.random((5000, 5000)) - 0.5
print("max method:")
%timeit -n10 np.maximum(x, 0)

print("multiplication method:")
%timeit -n10 x * (x > 0)

print("abs method:")
%timeit -n10 (abs(x) + x) / 2

We get:

max method:
10 loops, best of 3: 239 ms per loop
multiplication method:
10 loops, best of 3: 145 ms per loop
abs method:
10 loops, best of 3: 288 ms per loop

So the multiplication method seems to be the fastest.
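
If you want to avoid allocating a new array on very large inputs, NumPy ufuncs such as np.maximum accept an out argument, so the ReLU can be applied in place; a sketch, overwriting x rather than creating a copy:

import numpy as np

x = np.random.random((5000, 5000)) - 0.5
# Write the result back into x instead of allocating a new array
np.maximum(x, 0, out=x)

Note that this destroys the original values of x, which matters if you still need the pre-activation values (e.g. for backpropagation).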