Does np.dot automatically transpose vectors?

Tom · Jan 12, 2019

I am trying to calculate the first and second order moments for a portfolio of stocks (i.e. expected return and standard deviation).

expected_returns_annual
Out[54]: 
           ticker
adj_close  CNP       0.091859
           F        -0.007358
           GE        0.095399
           TSLA      0.204873
           WMT      -0.000943
dtype: float64

type(expected_returns_annual)
Out[55]: pandas.core.series.Series



import numpy as np

num_assets = len(expected_returns_annual)  # one weight per ticker, 5 here
weights = np.random.random(num_assets)
weights /= np.sum(weights)  # normalize so the weights sum to 1
returns = np.dot(expected_returns_annual, weights)

So normally the expected return is calculated by

(x1, ..., xn)' * (R1, ..., Rn)

where x1, ..., xn are the weights, subject to the constraint that all the weights sum up to 1, and ' means that the vector is transposed.
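For example, with some made-up numbers, this is just the weighted sum of the individual returns:

import numpy as np

x = np.array([0.5, 0.3, 0.2])       # weights, sum to 1
R = np.array([0.09, -0.01, 0.20])   # expected returns per asset

# x' * R is simply sum(x_i * R_i)
assert np.isclose(np.dot(x, R), 0.5*0.09 + 0.3*(-0.01) + 0.2*0.20)  # 0.082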

Now I am wondering a bit about the numpy dot function, because

returns = np.dot(expected_returns_annual, weights)

and

returns = np.dot(expected_returns_annual, weights.T)

give the same results.

I also checked the shapes of weights and weights.T.

weights.shape
Out[58]: (5,)
weights.T.shape
Out[59]: (5,)

I expected the shape of weights.T to be (,5) rather than (5,), but NumPy displays them as equal (I also tried np.transpose, with the same result).

Does anybody know why NumPy behaves this way? In my opinion, np.dot automatically shapes the vectors the right way so that the vector product works out. Is that correct?

Best regards, Tom

Answer

tel · Jan 12, 2019

The semantics of np.dot are not great

As Dominique Paul points out, np.dot has very heterogeneous behavior depending on the shapes of the inputs. Adding to the confusion, as the OP points out in his question, given that weights is a 1D array, np.array_equal(weights, weights.T) is True (array_equal tests for equality of both value and shape).
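A minimal sketch of that heterogeneity (the arrays here are made up purely for illustration):

import numpy as np

v = np.arange(3)                 # shape (3,)
M = np.arange(6).reshape(2, 3)   # shape (2, 3)
A = np.ones((2, 3, 4))
B = np.ones((2, 4, 5))

print(np.dot(v, v))          # 1D x 1D: inner product, a scalar (5)
print(np.dot(M, v).shape)    # 2D x 1D: matrix-vector product, (2,)
print(np.dot(M, 2))          # scalar: silently does elementwise multiplication

# For stacks of matrices, dot and @ diverge:
print((A @ B).shape)         # (2, 3, 5): stacked matrix products
print(np.dot(A, B).shape)    # (2, 3, 2, 5): dot sums over the last axis of A
                             # and the second-to-last of B, crossing the stacks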

Recommendation: use np.matmul or the equivalent @ instead

If you are someone just starting out with Numpy, my advice to you would be to ditch np.dot completely. Don't use it in your code at all. Instead, use np.matmul, or the equivalent operator @. The behavior of @ is more predictable than that of np.dot, while still being convenient to use. For example, you would get the same dot product for the two 1D arrays you have in your code like so:

returns = expected_returns_annual @ weights

You can prove to yourself that this gives the same answer as np.dot with this assert:

assert expected_returns_annual @ weights == expected_returns_annual.dot(weights)

Conceptually, @ handles this case by promoting the two 1D arrays to appropriate 2D arrays (though the implementation doesn't necessarily do this). For example, if you have x with shape (N,) and y with shape (M,) (where N must equal M for the product to be defined), if you do x @ y the shapes will be promoted such that:

x.shape == (1, N)
y.shape == (M, 1)
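You can emulate that promotion by hand with reshapes (the names here are just for illustration):

import numpy as np

x = np.array([1.0, 2.0, 3.0])   # shape (3,)
y = np.array([4.0, 5.0, 6.0])   # shape (3,)

# promote explicitly: (1, 3) @ (3, 1) -> (1, 1)
by_hand = x.reshape(1, -1) @ y.reshape(-1, 1)

assert by_hand.shape == (1, 1)
assert by_hand[0, 0] == x @ y   # both give 32.0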

Complete behavior of matmul/@

Here's what the docs have to say about matmul/@ and the shapes of inputs/outputs:

  • If both arguments are 2-D they are multiplied like conventional matrices.
  • If either argument is N-D, N > 2, it is treated as a stack of matrices residing in the last two indexes and broadcast accordingly.
  • If the first argument is 1-D, it is promoted to a matrix by prepending a 1 to its dimensions. After matrix multiplication the prepended 1 is removed.
  • If the second argument is 1-D, it is promoted to a matrix by appending a 1 to its dimensions. After matrix multiplication the appended 1 is removed.
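A quick sketch of each rule with made-up arrays (expected shapes in the comments):

import numpy as np

A = np.ones((2, 3))
B = np.ones((3, 4))
v = np.ones(3)
S = np.ones((5, 2, 3))   # a stack of five (2, 3) matrices

print((A @ B).shape)   # (2, 4): conventional matrix product
print((S @ B).shape)   # (5, 2, 4): B is broadcast across the stack
print((v @ B).shape)   # (4,): v acts as (1, 3); the prepended 1 is removed
print((A @ v).shape)   # (2,): v acts as (3, 1); the appended 1 is removed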

Notes: the arguments for using @ over dot

As hpaulj points out in the comments, np.array_equal(x.dot(y), x @ y) is True for all x and y that are 1D or 2D arrays. So why do I (and why should you) prefer @? I think the best argument for using @ is that it helps to improve your code in small but significant ways:

  • @ is explicitly a matrix multiplication operator. x @ y will raise an error if y is a scalar, whereas dot makes the assumption that you actually just wanted elementwise multiplication (see the sketch after this list). This can potentially result in a hard-to-localize bug in which dot silently returns a garbage result (I've personally run into that one). Thus, @ allows you to be explicit about your own intent for the behavior of a line of code.

  • Because @ is an operator, it has some nice short syntax for coercing various sequence types into arrays, without having to explicitly cast them. For example, [0,1,2] @ np.arange(3) is valid syntax.

    • To be fair, while [0,1,2].dot(arr) is obviously not valid, np.dot([0,1,2], arr) is valid (though more verbose than using @).
  • When you do need to extend your code to deal with many matrix multiplications instead of just one, the ND cases for @ are a conceptually straightforward generalization/vectorization of the lower-D cases.
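As a quick illustration of the scalar point above (the exact exception type and message vary by NumPy version):

import numpy as np

arr = np.arange(3)

print(np.dot(arr, 2))   # [0 2 4] -- dot silently falls back to elementwise multiply
try:
    arr @ 2             # matmul refuses scalar operands outright
except (TypeError, ValueError) as e:
    print("matmul raised:", e)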