17

Given an arbitrary doubly recursive function, how would one calculate its run time?

For Example (in pseudocode):

int a(int x){
  if (x <= 0)
    return 1010;
  else
    return b(x-1) + a(x-1);
}
int b(int y){
  if (y <= -5)
    return -2;
  else
    return b(a(y-1));
}

Or something along those lines.

What methods could or should one use to determine something like this?

7 Answers

13

You keep on changing your function, but you keep picking ones that will run forever without converging.

Recursion gets complicated, fast. The first step to analyzing a proposed doubly recursive function is to try to trace it through on some sample values, to see what it does. If your calculation gets into an infinite loop, the function is not well defined. If your calculation gets into a spiral that keeps on getting bigger numbers (which happens very easily), it is probably not well defined.

If tracing it through gives an answer, you then try to come up with some pattern or recurrence relation between the answers. Once you have that, you can try to figure out its runtime. Figuring it out can be very, very complicated, but we have results such as the Master theorem that let us figure out the answer in many cases.
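
As a concrete illustration of that last step (a textbook recurrence, not the question's functions, which never get this far): once tracing reveals a recurrence of a standard divide-and-conquer shape, the Master theorem reads off the answer directly.

T(n) = 2 T(n/2) + n
Two subproblems of half the size plus linear work: n^(log_2 2) = n matches the work term,
so the Master theorem gives T(n) = Θ(n log n).

That is the shape of mergesort; the hard part with a doubly recursive function is getting its recurrence into such a recognizable form in the first place.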

Beware that even with single recursion, it is easy to come up with functions whose runtime we do not know how to calculate. For instance consider the following:

def recursive(n):
    # Counts the halving steps along the 3n + 1 (Collatz) trajectory of n.
    if 0 == n % 2:
        return 1 + recursive(n // 2)
    elif 1 == n:
        return 0
    else:
        return recursive(3*n + 1)

It is currently unknown whether this function is always well-defined, let alone what its runtime is.

btilly
6

The runtime of that particular pair of functions is infinite for any input that reaches a recursive branch. For x > 0 the return value of a depends on a call to b, and for y > -5 the return value of b depends on b applied to the result of another call to a; that chain of calls never reaches a base case... and that's what's known as infinite (here, mutual) recursion.
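
You can see this concretely with a direct Python translation (my sketch, assuming the pseudocode carries over one-to-one): a(1) never returns, it just blows the call stack.

def a(x):
    if x <= 0:
        return 1010
    return b(x - 1) + a(x - 1)

def b(y):
    if y <= -5:
        return -2
    return b(a(y - 1))

try:
    print(a(1))  # a(1) needs b(0) = b(a(-1)) = b(1010) = b(a(1009)) = ...
except RecursionError:
    print("the calls never bottom out")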

jimreed
5

The obvious method is to run the function and measure how long it takes. This only tells you how long it takes on a particular input, though. And if you don't know beforehand that the function terminates, tough: there's no mechanical way to figure out whether the function terminates — that's the halting problem, and it's undecidable.

Finding the run time of a function is similarly undecidable, by Rice's theorem. In fact, Rice's theorem shows that even deciding whether a function runs in O(f(n)) time is undecidable.

So the best you can do in general is use your human intelligence (which, as far as we know, is not bound by the limits of Turing machines) and try to recognize a pattern, or invent one. A typical way to analyse the run time of a function is to turn the function's recursive definition into a recursive equation on its run time (or a set of equations for mutually recursive functions):

T_a(x) = if x ≤ 0 then 1 else T_b(x-1) + T_a(x-1)
T_b(x) = if x ≤ -5 then 1 else T_a(x-1) + T_b(a(x-1))

(Note that the value a(x-1), not just its cost, appears in the second equation, because b is then called on whatever a returns.)

What next? You now have a math problem: you need to solve these functional equations. An approach that often works is to turn these equations on integer functions into equations on analytic functions and use calculus to solve these, interpreting the functions T_a and T_b as generating functions.
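
To make that concrete on a much simpler recurrence than the T_a/T_b system above (a sketch of the generating-function method, not a solution to the question's equations): take T(0) = 1 and T(n) = 2 T(n-1) + 1 for n ≥ 1, and let G(z) = T(0) + T(1) z + T(2) z^2 + ... be its generating function.

Multiplying the recurrence by z^n and summing over n ≥ 1 gives
G(z) - 1 = 2z G(z) + z/(1 - z),
so G(z) = 1/((1 - z)(1 - 2z)) = 2/(1 - 2z) - 1/(1 - z),
and reading off the coefficient of z^n gives T(n) = 2^(n+1) - 1.

Partial fractions and known power series do the heavy lifting here; more complicated recurrences need correspondingly heavier machinery.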

On generating functions and other discrete mathematics topics, I recommend the book Concrete Mathematics, by Ronald Graham, Donald Knuth and Oren Patashnik.

2

As others have pointed out, analyzing recursion can get very hard very fast. Here is another example of this: the Hofstadter Female and Male sequences (https://rosettacode.org/wiki/Mutual_recursion, https://en.wikipedia.org/wiki/Hofstadter_sequence#Hofstadter_Female_and_Male_sequences). It is hard to compute an answer and a running time for these, because these mutually recursive functions have a "difficult form".

Anyhow, let's look at this easy example:

http://pramode.net/clojure/2010/05/08/clojure-trampoline/

(declare funa funb)
(defn funa [n]
  (if (= n 0)
    0
    (funb (dec n))))
(defn funb [n]
  (if (= n 0)
    0
    (funa (dec n))))

Let's start by trying to compute funa(m), m > 0:

funa(m) = funb(m - 1) = funa(m - 2) = ... funa(0) or funb(0) = 0 either way.

The run-time is:

R(funa(m)) = 1 + R(funb(m - 1)) = 2 + R(funa(m - 2)) = ... m + R(funa(0)) or m + R(funb(0)) = m + 1 steps either way

Now let's pick another, slightly more complicated example:

Inspired by this page, which is a good read by itself, let's look at: """Fibonacci numbers can be interpreted via mutual recursion: F(0) = 1 and G(0) = 1 , with F(n + 1) = F(n) + G(n) and G(n + 1) = F(n)."""

So, what is the runtime of F? This time we will work upward, from the base cases.
Well, R(F(0)) = 1 = F(0); R(G(0)) = 1 = G(0)
Now R(F(1)) = R(F(0)) + R(G(0)) = F(0) + G(0) = F(1)
...
It is not hard to see that R(F(m)) = F(m) - i.e. the number of base-case calls needed to compute the Fibonacci number at index i equals the value of the Fibonacci number at index i. This assumes that adding two numbers together is much faster than a function call; if that were not the case, we would instead have R(F(1)) = R(F(0)) + 1 + R(G(0)), and the analysis would be more complicated, possibly without an easy closed-form solution.
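
Here is a quick empirical check of that claim (my sketch; it counts, as R above does, one unit for each base case reached):

calls = 0

def F(n):
    global calls
    if n == 0:
        calls += 1  # R charges one unit per base case reached
        return 1
    return F(n - 1) + G(n - 1)

def G(n):
    global calls
    if n == 0:
        calls += 1
        return 1
    return F(n - 1)

for m in range(8):
    calls = 0
    print(m, F(m), calls)  # the count always equals the value: R(F(m)) = F(m)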

The closed form for the Fibonacci sequence is not necessarily easy to reinvent, not to mention some more complicated examples.

Glorfindel
Job
1

The first thing to do is to show whether the functions you have defined terminate, and for exactly which input values. In the example you have defined

int a(int x){
  if (x <= 0)
    return 1010;
  else
    return b(x-1) + a(x-1);
}
int b(int y){
  if (y <= -5)
    return -2;
  else
    return b(a(y-1));
}

b only terminates for y <= -5, because if you plug in any other value you get a term of the form b(a(y-1)). If you do a little more expanding you'll see that a term of the form b(a(y-1)) eventually leads to the term b(1010), which leads to a term b(a(1009)), which again leads to the term b(1010). This means you can't plug into a any value that doesn't satisfy x <= 0, because if you do, you end up with an infinite regress in which the value to be computed depends on itself. So, on the inputs where they terminate at all, these functions have constant run time.

So the simple answer is that there is no general method to determine the run time of recursive functions because there is no general procedure that determines whether a recursively defined function terminates.

0

You create a table where you write down what calculations are done for a and b. Then you take the numbers from the table, form a hypothesis for how the number of calls depends on x and y, and prove it by induction. Your hypothesis may well be that the recursion never ends.
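
A minimal sketch of that table-building step for the question's a and b (treating Python's recursion limit as a stand-in for "we gave up"):

calls = 0

def a(x):
    global calls
    calls += 1
    return 1010 if x <= 0 else b(x - 1) + a(x - 1)

def b(y):
    global calls
    calls += 1
    return -2 if y <= -5 else b(a(y - 1))

for x in range(-3, 4):
    calls = 0
    try:
        a(x)
        print(f"a({x}): done after {calls} call(s)")
    except RecursionError:
        print(f"a({x}): gave up after {calls} calls")

For every x <= 0 the table shows a single call; for every x > 0 it shows no answer at all, which is exactly the "the recursion never ends" hypothesis you would then try to prove by induction.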

gnasher729
-6

Runtime as in Big-O?

That's easy: O(N) -- assuming that there is a termination condition.

Recursion is just looping, and a simple loop is O(N) no matter how many things you do in that loop (and calling another method is just another step in the loop).

Where it would get interesting is if you have a loop within one or more of the recursive methods. In that case you would end up with some sort of exponential performance (multiplying by O(N) on each pass through the method).

Anon