When I first started to hear about dynamic programming and memoizing in college, it sounded great. Then I read:

“Dynamic programming is a technique for efficiently implementing a recursive algorithm by storing partial results.”

in “The Algorithm Design Manual” and that really got my attention.

More specifically, also taken from the same book:

“The trick is seeing whether the naive recursive algorithm computes the same subproblems over and over and over again. If so, storing the answer for each subproblem in a table to look up instead of recompute can lead to an efficient algorithm. Start with a recursive algorithm or definition. Only once we have a correct recursive algorithm do we worry about speeding it up by using a results matrix.”
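As a toy illustration of that recipe (my example, not from the book), consider the Fibonacci numbers: the naive recursive definition recomputes the same subproblems exponentially often, while storing each answer in a table makes it linear.

```python
def fib_naive(n):
    # Recomputes fib(n - 2), fib(n - 3), ... many times over.
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

def fib_memo(n, table={0: 0, 1: 1}):
    # Each subproblem is computed once and then looked up in the table.
    if n not in table:
        table[n] = fib_memo(n - 1) + fib_memo(n - 2)
    return table[n]
```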

That’s a good approach to solving a problem: first find a solution, whether or not it is the most efficient one, and worry about efficiency later. Let’s apply this approach, attacking optimization later. We’ll use Problem 14 from Project Euler to illustrate the concept.

The problem reads like this:

The following iterative sequence is defined for the set of positive integers:

n → n/2 (n is even)

n → 3n + 1 (n is odd)

Using the rule above and starting with 13, we generate the following sequence:

13 -> 40 -> 20 -> 10 -> 5 -> 16 -> 8 -> 4 -> 2 -> 1

It can be seen that this sequence (starting at 13 and finishing at 1) contains 10 terms. Although it has not been proved yet (Collatz Problem), it is thought that all starting numbers finish at 1.

Which starting number, under one million, produces the longest chain?

NOTE: Once the chain starts the terms are allowed to go above one million.
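A direct transcription of the rule (helper name is mine) reproduces the example chain for 13:

```python
def collatz_chain(n):
    # Apply n -> n/2 (n even) or n -> 3n + 1 (n odd) until reaching 1.
    chain = [n]
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        chain.append(n)
    return chain

# collatz_chain(13) yields the 10-term sequence from the problem statement.
```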

A brute-force approach to this problem seems pretty easy: just iterate over all the numbers up to 1,000,000, apply the rules, and check the sequence size. The code below, written in Python, illustrates the idea:

```
def find_sequence(start, end):
    longest_sequence = 0
    number = 0
    for i in range(start, end + 1):
        aux = i
        seq = 1
        while aux != 1:
            if aux % 2 == 0:
                aux //= 2  # integer division keeps us in the integers
            else:
                aux = aux * 3 + 1
            seq += 1
        # Keep track of the starting number with the longest chain so far.
        if seq > longest_sequence:
            longest_sequence = seq
            number = i
    return number

print(find_sequence(1, 1000000))
```

So what’s the problem? It’s very slow! There are just too many numbers to check. So, let’s use memoizing to speed this up.

The idea is to store intermediate results so they can replace repeated computation. How can we do that? Let’s observe these two sequences:

13 -> 40 -> 20 -> 10 -> 5 -> 16 -> 8 -> 4 -> 2 -> 1

52 -> 26 -> **13 -> 40 -> 20 -> 10 -> 5 -> 16 -> 8 -> 4 -> 2 -> 1**

What can we spot here? It’s easy to see that the second sequence contains a sub-sequence that is exactly the first one. So, if we already know that the first sequence has size 10, why should we calculate it again? The trick is to store these calculations, and when calculating the size of a new sequence, check at each step whether we already have the answer. If so, use it and terminate the calculation. A possible approach is presented below, written in Python.

```
memory = {1: 1}

def get_sequence_size(start):
    # Return the cached answer if this starting number was seen before.
    if start in memory:
        return memory[start]
    elif start % 2 == 0:
        memory[start] = 1 + get_sequence_size(start // 2)
    else:
        memory[start] = 1 + get_sequence_size(start * 3 + 1)
    return memory[start]

def find_sequence_memoizing(value):
    largest_so_far = 1
    highest = 0
    for i in range(1, value):
        temp = get_sequence_size(i)
        if temp > largest_so_far:
            highest = i
            largest_so_far = temp
    return highest

print(find_sequence_memoizing(1000000))
```

In just a few seconds, we have the desired result. As we can see, the trick is to know which results we can cache, and then take advantage of that. As this example shows, the technique can be used to optimize many problems over combinatorial objects.
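As a side note, Python’s standard library can manage the memoization table for us: `functools.lru_cache` caches a function’s return values by argument. A minimal sketch of the same idea (with `maxsize=None` so nothing is evicted):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def chain_length(n):
    # Number of terms in the Collatz chain starting at n, cached automatically.
    if n == 1:
        return 1
    return 1 + chain_length(n // 2 if n % 2 == 0 else 3 * n + 1)
```

With this, the answer to the problem is simply `max(range(1, 1000000), key=chain_length)`.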