Below you can see my C# method to calculate Bollinger Bands for each point (moving average, upper band, lower band).
As you can see, this method uses two for loops to calculate the moving standard deviation using the moving average. It used to contain an additional loop to calculate the moving average over the last n periods; I was able to remove that loop by adding the new point value to total_average at the beginning of the loop and removing the point value at i - n at the end of the loop.
My question now is basically: can I remove the remaining inner loop in a similar way to what I managed with the moving average?
public static void AddBollingerBands(SortedList<DateTime, Dictionary<string, double>> data, int period, int factor)
{
    double total_average = 0;
    for (int i = 0; i < data.Count(); i++)
    {
        total_average += data.Values[i]["close"];
        if (i >= period - 1)
        {
            double total_bollinger = 0;
            double average = total_average / period;

            // Inner loop: sum of squared deviations over the last `period` closes
            for (int x = i; x > (i - period); x--)
            {
                total_bollinger += Math.Pow(data.Values[x]["close"] - average, 2);
            }

            double stdev = Math.Sqrt(total_bollinger / period);
            data.Values[i]["bollinger_average"] = average;
            data.Values[i]["bollinger_top"] = average + factor * stdev;
            data.Values[i]["bollinger_bottom"] = average - factor * stdev;

            // Drop the oldest close so total_average stays a sum over `period` values
            total_average -= data.Values[i - period + 1]["close"];
        }
    }
}
The problem with approaches that maintain a sum of squares is that the sum of squares and the square of the sum can both get quite large, and computing their difference can introduce a very large cancellation error, so let's think of something better. (For why this is needed, see the Wikipedia article on Algorithms for computing variance and John Cook's Theoretical explanation for numerical results.)
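To see the cancellation concretely, here is a tiny self-contained C# example; the values are made up, and any large offset shows the same effect:

using System;

class CancellationDemo
{
    static void Main()
    {
        // Two made-up closes near 1e9; their sample variance is exactly 0.5
        double[] x = { 1e9, 1e9 + 1 };
        int n = x.Length;

        double sum = 0, sumSq = 0;
        foreach (double v in x) { sum += v; sumSq += v * v; }

        double mean = sum / n;
        // "sum of squares minus n times the squared mean" form
        double naiveVar = (sumSq - n * mean * mean) / (n - 1);

        Console.WriteLine(naiveVar);  // prints 0, not 0.5 - the significant digits cancel
    }
}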
First, instead of calculating the standard deviation, let's focus on the variance; once we have the variance, the standard deviation is just its square root.
Suppose the data are in an array called x; rolling an n-sized window by one can be thought of as removing the value of x[0] and adding the value of x[n]. Let's denote the averages of x[0]..x[n-1] and x[1]..x[n] by µ and µ’ respectively. The difference between the variances of x[0]..x[n-1] and x[1]..x[n] is, after cancelling out some terms and applying a² - b² = (a+b)(a-b):
Var[x[1],..,x[n]] - Var[x[0],..,x[n-1]]
  = (Σ_{i=1..n} x[i]² - n·µ’²)/(n-1) - (Σ_{i=0..n-1} x[i]² - n·µ²)/(n-1)
  = (x[n]² - x[0]² - n(µ’² - µ²))/(n-1)
  = (x[n]² - x[0]² - (µ’ + µ)(x[n] - x[0]))/(n-1)      (using µ’ - µ = (x[n] - x[0])/n)
  = (x[n] - µ’ + x[0] - µ)(x[n] - x[0])/(n-1)
Therefore the variance can be updated by a quantity that doesn't require you to maintain a running sum of squares, which is much better for numerical accuracy.
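If you want to sanity-check the algebra, a throwaway C# snippet like the following (with made-up closing prices; SampleVar is just a reference helper defined here, not part of the question's code) compares the closed-form increment with the difference of two directly computed sample variances:

using System;
using System.Linq;

class VarianceDeltaCheck
{
    // Plain two-pass sample variance, used only as a reference
    static double SampleVar(double[] w)
    {
        double mean = w.Average();
        return w.Sum(v => (v - mean) * (v - mean)) / (w.Length - 1);
    }

    static void Main()
    {
        double[] x = { 10.0, 10.5, 11.2, 10.8, 11.5 };  // x[0]..x[n], made-up values
        int n = x.Length - 1;                           // window size

        double[] oldWin = x.Take(n).ToArray();          // x[0]..x[n-1]
        double[] newWin = x.Skip(1).ToArray();          // x[1]..x[n]

        double mu      = oldWin.Average();              // µ
        double muPrime = newWin.Average();              // µ’

        double direct = SampleVar(newWin) - SampleVar(oldWin);
        double closed = (x[n] - muPrime + x[0] - mu) * (x[n] - x[0]) / (n - 1);

        Console.WriteLine(direct + " vs " + closed);    // agree up to rounding
    }
}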
You can calculate the mean and variance once at the beginning with a proper algorithm (Welford's method). After that, every time you have to replace a value x[0] in the window by another value x[n], you update the average and variance like this:
new_Avg = Avg + (x[n]-x[0])/n
new_Var = Var + (x[n] - new_Avg + x[0] - Avg)*(x[n] - x[0])/(n-1)
new_StdDev = sqrt(new_Var)
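Here is a sketch of how these update rules could be wired into the method from the question. It is for illustration and not a tested drop-in replacement; the dictionary keys and the overall shape just mirror the original code:

public static void AddBollingerBandsRolling(SortedList<DateTime, Dictionary<string, double>> data, int period, int factor)
{
    double avg = 0;
    double variance = 0;

    for (int i = 0; i < data.Count; i++)
    {
        double close = data.Values[i]["close"];

        if (i < period)
        {
            // Build up the first window with Welford's method
            double delta = close - avg;
            avg += delta / (i + 1);
            variance += delta * (close - avg);   // running sum of squared deviations
            if (i < period - 1)
                continue;                        // not enough points yet
            variance /= period - 1;              // sample variance of the first full window
        }
        else
        {
            // Slide the window: the close at i - period leaves, the current close enters
            double old = data.Values[i - period]["close"];
            double newAvg = avg + (close - old) / period;
            variance += (close - newAvg + old - avg) * (close - old) / (period - 1);
            avg = newAvg;
        }

        double stdev = Math.Sqrt(Math.Max(variance, 0));  // guard against tiny negative drift
        data.Values[i]["bollinger_average"] = avg;
        data.Values[i]["bollinger_top"] = avg + factor * stdev;
        data.Values[i]["bollinger_bottom"] = avg - factor * stdev;
    }
}

Note that the formulas above give the sample variance (dividing by n-1), while the question's original code divides by period, so the bands from this sketch will come out slightly wider for small periods.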