Given a target amount and a list of coin denominations, my code is supposed to find the fewest coins needed to reach the target amount.
Examples:
C(78, [1, 5, 10, 25, 50]) = 5
C(48, [1, 7, 24, 42]) = 2
C(35, [1, 3, 16, 30, 50]) = 3
I wrote the code below using a for loop, but how do I make it properly recursive? Here is what I have so far:
def C(i, coins, cdict=None):
    if cdict == None:
        cdict = {}
    if i <= 0:
        cdict[i] = 0
        return cdict[i]
    elif i in cdict:
        return cdict[i]
    elif i in coins:
        cdict[i] = 1
        return cdict[i]
    else:
        min = 0
        for cj in coins:
            result = C(i - cj, coins)
            if result != 0:
                if min == 0 or (result + 1) < min:
                    min = 1 + result
        cdict[i] = min
        return cdict[i]
It's the change-making problem. Here's the standard recursive solution, where V is the list of coin denominations and C is the target amount of money:
def min_change(V, C):
    def min_coins(i, aC):
        if aC == 0:
            return 0                 # exact change reached, no more coins needed
        elif i == -1 or aC < 0:
            return float('inf')      # no coins left or amount overshot: dead end
        else:
            # either skip coin V[i], or use one V[i] and keep it available
            return min(min_coins(i-1, aC), 1 + min_coins(i, aC - V[i]))
    return min_coins(len(V)-1, C)
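For example, a quick check against the sample values from the question:

min_change([1, 5, 10, 25, 50], 78)   # 5  (50 + 25 + 1 + 1 + 1)
min_change([1, 7, 24, 42], 48)       # 2  (24 + 24)
min_change([1, 3, 16, 30, 50], 35)   # 3  (16 + 16 + 3)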
And this is an optimized version, using dynamic programming:
def min_change(V, C):
    m, n = len(V) + 1, C + 1
    table = [[0] * n for _ in range(m)]
    for j in range(1, n):
        table[0][j] = float('inf')   # with no coins, any positive amount is unreachable
    for i in range(1, m):
        for j in range(1, n):
            # cost of using one more coin V[i-1], if it fits in the amount j
            aC = table[i][j - V[i-1]] if j - V[i-1] >= 0 else float('inf')
            table[i][j] = min(table[i-1][j], 1 + aC)
    return table[m-1][n-1]
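If you prefer to keep the top-down shape of the first version while still getting the speedup (this is roughly what the cdict in the question was aiming for), one possible sketch is to memoize the recursion; the name min_change_memo below is just illustrative, not a standard API:

from functools import lru_cache

def min_change_memo(V, C):
    @lru_cache(maxsize=None)
    def min_coins(i, aC):
        # same recursion as min_coins above, but each (i, aC) pair is computed only once
        if aC == 0:
            return 0
        if i == -1 or aC < 0:
            return float('inf')
        return min(min_coins(i - 1, aC), 1 + min_coins(i, aC - V[i]))
    return min_coins(len(V) - 1, C)

min_change_memo([1, 5, 10, 25, 50], 78)   # 5, same result as the table-based version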
Both implementations always return an optimal solution, but the second one is much faster for large inputs. Notice that the greedy algorithm suggested in other answers gives an optimal solution only for certain coin systems: it works for US coins, for example, but not for arbitrary denominations like the ones in the question.
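To see why greedy can fail, take the second example from the question: a simple greedy routine that always grabs the largest coin that still fits (the helper name greedy_change is made up here) needs 7 coins for 48 with [1, 7, 24, 42], while the optimal answer is 2:

def greedy_change(V, C):
    # repeatedly take as many of the largest remaining coin as possible
    count = 0
    for coin in sorted(V, reverse=True):
        count += C // coin
        C %= coin
    return count

greedy_change([1, 7, 24, 42], 48)   # 7  (42 + six 1s)
min_change([1, 7, 24, 42], 48)      # 2  (24 + 24)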