Making Fibonacci faster

alainlompo · Mar 28, 2015 · Viewed 13.8k times

I was required to write a simple implementation of Fibonacci's algorithm and then to make it faster.

Here is my initial implementation

import java.util.Scanner;

public class Fibonacci {

    public static long getFibonacciOf(long n) {
        if (n == 0) {
            return 0;
        } else if (n == 1) {
            return 1;
        } else {
            return getFibonacciOf(n - 2) + getFibonacciOf(n - 1);
        }
    }

    public static void main(String[] args) {
        Scanner scanner = new Scanner(System.in);
        while (true) {
            System.out.println("Enter n :");
            long n = scanner.nextLong();
            if (n >= 0) {
                long beginTime = System.currentTimeMillis();
                long fibo = getFibonacciOf(n);
                long endTime = System.currentTimeMillis();

                long delta = endTime - beginTime;

                System.out.println("F(" + n + ") = " + fibo + " ... computed in " + delta + " milliseconds");
            } else {
                break;
            }
        }
    }
}

As you can see, I am using System.currentTimeMillis() to get a simple measure of the time elapsed while computing Fibonacci.

This implementation rapidly becomes exponentially slow, as you can see in the following picture:

[Image: timings of the simple version of Fibonacci's algorithm]

So I've got a simple optimisation idea: put previously computed values in a HashMap and, instead of recomputing them each time, simply look them up there. If a value does not exist yet, compute it and put it in the HashMap.

Here is the new version of the code

import java.util.HashMap;
import java.util.Map;
import java.util.Scanner;

public class FasterFibonacci {

    private static Map<Long, Long> previousValuesHolder;
    static {
        previousValuesHolder = new HashMap<Long, Long>();
        previousValuesHolder.put(Long.valueOf(0), Long.valueOf(0));
        previousValuesHolder.put(Long.valueOf(1), Long.valueOf(1));
    }

    public static long getFibonacciOf(long n) {
        if (n == 0) {
            return 0;
        } else if (n == 1) {
            return 1;
        } else {
            if (previousValuesHolder.containsKey(Long.valueOf(n))) {
                return previousValuesHolder.get(n);
            } else {
                long newValue = getFibonacciOf(n - 2) + getFibonacciOf(n - 1);
                previousValuesHolder.put(Long.valueOf(n), Long.valueOf(newValue));
                return newValue;
            }
        }
    }

    public static void main(String[] args) {
        Scanner scanner = new Scanner(System.in);
        while (true) {
            System.out.println("Enter n :");
            long n = scanner.nextLong();
            if (n >= 0) {
                long beginTime = System.currentTimeMillis();
                long fibo = getFibonacciOf(n);
                long endTime = System.currentTimeMillis();

                long delta = endTime - beginTime;

                System.out.println("F(" + n + ") = " + fibo + " ... computed in " + delta + " milliseconds");
            } else {
                break;
            }
        }
    }
}

This change makes the computation extremely fast. It computes all the values from 2 to 103 in no time at all, and I get a long overflow at F(104) (it gives me F(104) = -7076989329685730859, which is wrong). I find it so fast that **I wonder whether there is any mistake in my code (thank you for checking and letting me know)**. Please take a look at the second picture:

[Image: timings of the faster Fibonacci]

Is my faster Fibonacci implementation correct? (It seems correct to me because it produces the same values as the first version, but the first version was too slow to compute larger values such as F(75).) What other ways can I use to make it faster, or is there a better way? Also, how can I compute Fibonacci for greater values (such as 150 or 200) without getting a **long overflow**? Though it seems fast, I would like to push it to the limits. I remember Mr. Abrash saying 'The best optimiser is between your two ears', so I believe it can still be improved. Thank you for helping.

[Edit note:] Though this question addresses one of the main points in my question, you can see from the above that I have additional issues.

Answer

user2736738 · Mar 28, 2015

Dynamic programming

Idea: instead of recomputing the same values multiple times, store each calculated value and reuse it as you go along.

f(n) = f(n-1) + f(n-2) with f(0) = 0, f(1) = 1. So at the point when you have calculated f(n-1), you can easily calculate f(n) if you have stored the values of f(n-1) and f(n-2).

Let's take an array of Bignums first, A[0..200], and initialize every entry to -1.

Pseudocode

fib(n)
{
    if (A[n] != -1) return A[n];
    A[0] = 0;
    A[1] = 1;
    for i = 2 to n
        A[i] = A[i-1] + A[i-2];
    return A[n];
}

This runs in O(n) time. Check it out yourself.

This technique is also called memoization.
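To avoid the long overflow from the question, the same bottom-up table can be built with java.math.BigInteger. Here is a minimal Java sketch of the pseudocode above (class and method names are mine, purely illustrative; for simplicity it rebuilds the table on each call instead of keeping a persistent array A):

import java.math.BigInteger;

public class BottomUpFibonacci {

    // Builds the table a[0..n] from the bottom up, as in the pseudocode.
    public static BigInteger fib(int n) {
        BigInteger[] a = new BigInteger[Math.max(n + 1, 2)];
        a[0] = BigInteger.ZERO;
        a[1] = BigInteger.ONE;
        for (int i = 2; i <= n; i++) {
            a[i] = a[i - 1].add(a[i - 2]);  // f(i) = f(i-1) + f(i-2)
        }
        return a[n];
    }

    public static void main(String[] args) {
        System.out.println(fib(104));  // exact value, no long overflow
        System.out.println(fib(200));
    }
}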

The IDEA

Dynamic programming (usually referred to as DP) is a very powerful technique for solving a particular class of problems. It demands a very elegant formulation of the approach and simple thinking, and the coding part is very easy. The idea is very simple: if you have solved a problem with a given input, save the result for future reference so as to avoid solving the same problem again. In short: 'Remember your past'.

If the given problem can be broken up into smaller subproblems, and these smaller subproblems can in turn be divided into still smaller ones, and in this process you observe some overlapping subproblems, then that is a big hint for DP. Also, the optimal solutions to the subproblems contribute to the optimal solution of the given problem (referred to as the Optimal Substructure Property).

There are two ways of doing this.

1.) Top-Down: Start solving the given problem by breaking it down. If you see that a subproblem has already been solved, just return the saved answer; if it has not been solved, solve it and save the answer. This is usually easy to think of and very intuitive. It is referred to as Memoization. (I have used this idea; a small Java sketch of it appears after this list.)

2.) Bottom-Up: Analyze the problem, determine the order in which the subproblems are solved, and start solving from the trivial subproblem up towards the given problem. In this process it is guaranteed that the subproblems are solved before the problem itself. This is referred to as Dynamic Programming. (MinecraftShamrock used this idea.)
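As promised above, here is a small top-down sketch in the same spirit as the code in the question, just with java.math.BigInteger instead of long so that values such as F(150) and F(200) no longer overflow (the class and method names are mine, not from the original post):

import java.math.BigInteger;
import java.util.HashMap;
import java.util.Map;

public class MemoizedFibonacci {

    private static final Map<Integer, BigInteger> memo = new HashMap<>();
    static {
        memo.put(0, BigInteger.ZERO);
        memo.put(1, BigInteger.ONE);
    }

    // Top-down: solve F(n) recursively, but return the saved answer if it exists.
    public static BigInteger fib(int n) {
        BigInteger cached = memo.get(n);
        if (cached != null) {
            return cached;
        }
        BigInteger result = fib(n - 1).add(fib(n - 2));
        memo.put(n, result);
        return result;
    }

    public static void main(String[] args) {
        System.out.println(fib(150));
        System.out.println(fib(200));
    }
}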


There's more!

(Other ways to do this)

Look, our quest for a better solution doesn't end here. Here is a different approach.

If you know how to solve recurrence relations, you can find a closed-form solution to this relation:

f(n) = f(n-1) + f(n-2) given f(0) = 0, f(1) = 1

Solving it, you arrive at the formula

f(n)= (1/sqrt(5))((1+sqrt(5))/2)^n - (1/sqrt(5))((1-sqrt(5))/2)^n

which can be written in the more compact form

f(n)=floor((((1+sqrt(5))/2)^n) /sqrt(5) + 1/2)
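As a quick illustration, the closed form can be evaluated directly with doubles. Keep in mind that floating-point rounding makes this exact only up to roughly F(70), so it is a sanity check rather than a replacement for the exact methods above (this snippet is mine, not from the original answer):

public class BinetFibonacci {

    // Direct evaluation of f(n) = floor(phi^n / sqrt(5) + 1/2) with phi = (1 + sqrt(5)) / 2.
    public static long fib(int n) {
        double sqrt5 = Math.sqrt(5);
        double phi = (1 + sqrt5) / 2;
        return (long) Math.floor(Math.pow(phi, n) / sqrt5 + 0.5);
    }

    public static void main(String[] args) {
        for (int n = 0; n <= 10; n++) {
            System.out.print(fib(n) + " ");  // 0 1 1 2 3 5 8 13 21 34 55
        }
        System.out.println();
    }
}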

Complexity

You can compute the n-th power of a number in O(log n) operations. You have to learn exponentiation by squaring.
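The same squaring idea can also be applied without any floating point: the 2x2 matrix [[1, 1], [1, 0]] raised to the n-th power has F(n) as its off-diagonal entry, so repeated squaring yields F(n) in O(log n) BigInteger multiplications. A rough sketch (my own naming, not part of the original answer):

import java.math.BigInteger;

public class MatrixFibonacci {

    // Multiplies two 2x2 matrices stored as {a, b, c, d} = [[a, b], [c, d]].
    private static BigInteger[] multiply(BigInteger[] x, BigInteger[] y) {
        return new BigInteger[] {
            x[0].multiply(y[0]).add(x[1].multiply(y[2])),
            x[0].multiply(y[1]).add(x[1].multiply(y[3])),
            x[2].multiply(y[0]).add(x[3].multiply(y[2])),
            x[2].multiply(y[1]).add(x[3].multiply(y[3]))
        };
    }

    // Exponentiation by squaring on [[1,1],[1,0]]; the top-right entry of the result is F(n).
    public static BigInteger fib(int n) {
        BigInteger[] result = {BigInteger.ONE, BigInteger.ZERO,
                               BigInteger.ZERO, BigInteger.ONE};   // identity matrix
        BigInteger[] base = {BigInteger.ONE, BigInteger.ONE,
                             BigInteger.ONE, BigInteger.ZERO};     // [[1,1],[1,0]]
        while (n > 0) {
            if ((n & 1) == 1) {
                result = multiply(result, base);
            }
            base = multiply(base, base);
            n >>= 1;
        }
        return result[1];  // F(n)
    }

    public static void main(String[] args) {
        System.out.println(fib(10));   // 55
        System.out.println(fib(200));  // exact value, far beyond the long range
    }
}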

EDIT: It is good to point out that this doesn't necessarily mean that the Fibonacci number can be found in O(log n): the number of digits we need to calculate grows linearly, so the multiplications themselves get more expensive. I mention this because the position where I stated it might seem to claim the wrong idea that the n-th Fibonacci number can be calculated in O(log n) time. [Bakurui and MinecraftShamrock commented on this.]