I have read these words:
There are two key attributes that a problem must have in order for dynamic programming to be applicable: optimal substructure and overlapping subproblems. If a problem can be solved by combining optimal solutions to non-overlapping subproblems, the strategy is called "divide and conquer". This is why mergesort and quicksort are not classified as dynamic programming problems.
I have three questions:
The key words here are "overlapping subproblems" and "optimal substructure". When you execute quicksort or mergesort, you are recursively breaking down your array into smaller pieces that do not overlap. You never operate over the same elements of the original array twice during any given level of the recursion. This means there is no opportunity to re-use previous calculations. On the other hand, many problems DO involve performing the same calculations over overlapping subsets, and have the useful characteristic that an optimal solution to a subproblem can be re-used when computing the optimal solution to a larger problem.
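A quick way to see "overlapping subproblems" in code is the standard memoized-Fibonacci sketch below (Fibonacci is not part of the quoted text, just the usual textbook illustration): the naive recursion solves the same subproblems over and over, while caching each answer means every subproblem is solved only once.

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # remember each fib(n) the first time it is computed
def fib(n):
    # Without the cache, fib(n - 1) and fib(n - 2) repeatedly re-solve the
    # same smaller subproblems; with it, each value is computed exactly once.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(40))  # 102334155, from ~40 distinct subproblems instead of
                # hundreds of millions of naive recursive calls
```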
Dijkstra's algorithm is a classic example of dynamic programming, as it re-uses prior computations to discover the shortest path between two nodes A and Z. Say that A's immediate neighbors are B and C. The shortest path from A to Z is then the shorter of two candidates: the distance from A to B plus the shortest path from B to Z, or the distance from A to C plus the shortest path from C to Z. The key insight is that these subproblems overlap: the shortest path from an intermediate node onward is computed once and then re-used by every longer path that passes through that node, rather than being recomputed for each candidate route. Doing so results in a much more efficient algorithm than enumerating every path from scratch.
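As a rough illustration of that re-use, here is a minimal Dijkstra sketch in Python. It works from the source outwards (the mirror image of the B-to-Z / C-to-Z recurrence described above), and the example graph with nodes A, B, C, Z and its edge weights is invented purely for demonstration.

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths. `graph` maps a node to a list of
    (neighbor, edge_weight) pairs. Returns a dict of shortest distances."""
    dist = {source: 0}            # best distance found so far for each node
    heap = [(0, source)]          # priority queue of (distance, node)
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue              # stale entry; a shorter path was already found
        for neighbor, weight in graph[node]:
            new_dist = d + weight # re-use the already-computed distance to `node`
            if new_dist < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_dist
                heapq.heappush(heap, (new_dist, neighbor))
    return dist

if __name__ == "__main__":
    # Hypothetical graph loosely matching the A/B/C/Z discussion above.
    graph = {
        "A": [("B", 1), ("C", 4)],
        "B": [("C", 2), ("Z", 6)],
        "C": [("Z", 3)],
        "Z": [],
    }
    print(dijkstra(graph, "A"))  # {'A': 0, 'B': 1, 'C': 3, 'Z': 6}
```

Once a node is popped from the queue its distance is final, and every later relaxation simply re-uses that stored value instead of recomputing it; that stored-and-reused table of subproblem answers is the dynamic-programming flavor of the algorithm.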
Dynamic programming can be used to solve many types of problems -- see http://en.wikipedia.org/wiki/Dynamic_programming#Examples:_Computer_algorithms for some examples.