Calculate max drawdown with a vectorized solution in Python

piRSquared · Apr 20, 2016 · Viewed 7.7k times

Maximum drawdown is a common risk metric in quantitative finance: it measures the largest peak-to-trough decline that a cumulative return series has experienced.

Recently, I became impatient with the time to calculate max drawdown using my looped approach.

def max_dd_loop(returns):
    """returns is assumed to be a pandas Series"""
    max_so_far = None
    start, end = None, None
    # cumulative wealth curve
    r = returns.add(1).cumprod()
    # brute force: check every (start, end) pair -- O(n^2)
    for r_start in r.index:
        for r_end in r.index:
            if r_start < r_end:
                current = r.loc[r_end] / r.loc[r_start] - 1
                if (max_so_far is None) or (current < max_so_far):
                    max_so_far = current
                    start, end = r_start, r_end
    return max_so_far, start, end
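For reference, a minimal way to exercise the looped version on synthetic data might look like the sketch below (the seed, series length, and dates are arbitrary choices for illustration):

import numpy as np
import pandas as pd

# synthetic daily returns, purely for illustration
np.random.seed(0)
returns = pd.Series(np.random.randn(252) / 100,
                    index=pd.date_range('2015-01-01', periods=252))

# most negative peak-to-trough return and the dates it ran between
mdd, start, end = max_dd_loop(returns)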

I'm familiar with the common perception that a vectorized solution would be better.

The questions are:

  • Can I vectorize this problem?
  • What does this solution look like?
  • How beneficial is it?

Edit

I modified Alexander's answer into the following function:

def max_dd(returns):
    """Assumes returns is a pandas Series"""
    r = returns.add(1).cumprod()
    # drawdown at each point: how far the curve sits below its running peak
    dd = r.div(r.cummax()).sub(1)
    mdd = dd.min()
    # idxmin/idxmax return index labels (older pandas used argmin/argmax here)
    end = dd.idxmin()
    start = r.loc[:end].idxmax()
    return mdd, start, end
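To get a feel for how beneficial the vectorized version is, a rough comparison can be run with timeit on the same kind of synthetic series as above; exact numbers will depend on the machine and pandas version:

import numpy as np
import pandas as pd
from timeit import timeit

np.random.seed(0)
returns = pd.Series(np.random.randn(500) / 100,
                    index=pd.date_range('2015-01-01', periods=500))

# the nested loop does O(n^2) pairwise comparisons,
# while the cummax-based version is a single O(n) pass
print('loop:      ', timeit(lambda: max_dd_loop(returns), number=3))
print('vectorized:', timeit(lambda: max_dd(returns), number=3))

The gap grows quickly with series length because of the quadratic number of pairwise comparisons in the loop.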

Answer

Alexander · Apr 20, 2016

df_returns is assumed to be a DataFrame of returns, where each column is a separate strategy/manager/security and each row is a new date (e.g. monthly or daily).

cum_returns = (1 + df_returns).cumprod()
drawdown = 1 - cum_returns.div(cum_returns.cummax())
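With that sign convention the drawdown values are positive, so the per-column maximum drawdown is just the column-wise max. A small end-to-end sketch, using a made-up DataFrame of three strategies (the column names, dates, and random data are placeholders):

import numpy as np
import pandas as pd

np.random.seed(0)
df_returns = pd.DataFrame(np.random.randn(252, 3) / 100,
                          index=pd.date_range('2015-01-01', periods=252),
                          columns=['strat_a', 'strat_b', 'strat_c'])

cum_returns = (1 + df_returns).cumprod()
drawdown = 1 - cum_returns.div(cum_returns.cummax())

# one number per column: the deepest drawdown each strategy has experienced
max_drawdown = drawdown.max()
# and the date on which each strategy hit that deepest drawdown
worst_date = drawdown.idxmax()

Note the sign flip relative to the question's max_dd, which reports drawdowns as negative numbers.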