131

There are times where using recursion is better than using a loop, and times where using a loop is better than using recursion. Choosing the "right" one can save resources and/or result in fewer lines of code.

Are there any cases where a task can only be done using recursion, rather than a loop?

Pikamander2
  • 1,279

11 Answers

167

Yes and no. Ultimately, there's nothing recursion can compute that looping can't, but looping takes a lot more plumbing. Therefore, the one thing recursion can do that loops can't is make some tasks super easy.

Take walking a tree. Walking a tree with recursion is stupid-easy. It's the most natural thing in the world. Walking a tree with loops is a lot less straightforward. You have to maintain a stack or some other data structure to keep track of what you've done.
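To illustrate the difference (a sketch of mine, not part of the original answer), here is a pre-order walk both ways in Java; note how the iterative version has to manage the stack that recursion gets for free:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

class TreeWalk {
    static class Node {
        final int value;
        final Node left, right;
        Node(int value, Node left, Node right) {
            this.value = value; this.left = left; this.right = right;
        }
    }

    // Recursive pre-order walk: the call stack remembers where we are.
    static void walk(Node n, List<Integer> out) {
        if (n == null) return;
        out.add(n.value);
        walk(n.left, out);
        walk(n.right, out);
    }

    // Iterative version: we must maintain an explicit stack ourselves.
    static void walkIterative(Node root, List<Integer> out) {
        Deque<Node> stack = new ArrayDeque<>();
        if (root != null) stack.push(root);
        while (!stack.isEmpty()) {
            Node n = stack.pop();
            out.add(n.value);
            if (n.right != null) stack.push(n.right); // pushed first, visited last
            if (n.left != null) stack.push(n.left);
        }
    }
}
```

Both produce the same visit order; the explicit `Deque` is the "plumbing" the loop version pays for.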

Often, the recursive solution to a problem is prettier. That's a technical term, and it matters.

Scant Roger
  • 9,086
79

No.

Getting down to the bare minimum needed in order to compute, you just need to be able to loop (this alone isn't sufficient, but it is a necessary component). It doesn't matter how.

Any programming language that can implement a Turing machine is called Turing complete, and there are lots of languages that are Turing complete.

My favorite language in the "that actually works?" depths of Turing completeness is FRACTRAN. It has one loop structure, and you can implement a Turing machine in it. Thus, anything that is computable can be implemented in a language that doesn't have recursion, and therefore there is nothing recursion can give you in terms of computability that simple looping cannot.
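To make this concrete, here is a minimal FRACTRAN interpreter as an illustrative sketch (my code, not part of the original answer); the single `while` loop below is the language's entire control structure:

```java
// Minimal FRACTRAN interpreter. A program is a list of fractions
// num[i]/den[i]; repeatedly replace n by n * f for the FIRST fraction f
// whose product with n is an integer, and halt when no fraction applies.
class Fractran {
    static long run(long n, long[] num, long[] den) {
        boolean stepped = true;
        while (stepped) {
            stepped = false;
            for (int i = 0; i < num.length; i++) {
                if ((n * num[i]) % den[i] == 0) {
                    n = n * num[i] / den[i]; // apply the first matching fraction
                    stepped = true;
                    break;
                }
            }
        }
        return n;
    }
}
```

For example, the one-fraction program 3/2 performs addition: starting from n = 2^a * 3^b it halts at 3^(a+b), so `run(72, new long[]{3}, new long[]{2})` (72 = 2^3 * 3^2) halts at 243 = 3^5.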

This really boils down to a few points:

  • Anything that is computable is computable on a Turing machine
  • Any language that can implement a Turing machine (such a language is called Turing complete) can compute anything that any other language can
  • Since there are Turing machines in languages that lack recursion (and there are others that only have recursion when you get into some of the other esolangs), it is necessarily true that there is nothing that you can do with recursion that you cannot do with a loop (and nothing you can do with a loop that you can't do with recursion).

This isn't to say that some problem classes aren't more easily thought of in terms of recursion rather than looping, or looping rather than recursion. However, these two tools are equally powerful.

And while I took this to the 'esolang' extreme (mostly because you can find Turing-complete things implemented in rather strange ways), this doesn't mean those implementations are in any way practical. There is a whole list of things that are accidentally Turing complete, including Magic: The Gathering, Sendmail, MediaWiki templates, and the Scala type system. Many of these are far from optimal when it comes to actually doing anything useful; it's just that you can compute anything that is computable using these tools.


This equivalence gets particularly interesting when you get into a particular type of recursion known as a tail call.

If you have, let's say, a factorial method written as:

int fact(int n) {
    return fact(n, 1);
}

int fact(int n, int accum) {
    if (n <= 1) { return accum; }
    return fact(n - 1, n * accum);
}

A compiler that performs tail-call optimisation can rewrite this type of recursion as a loop, consuming no stack. Such approaches are indeed often more elegant and easier to understand than the equivalent loop, but again, for every recursive call an equivalent loop can be written, and for every loop an equivalent recursive call can be written.
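For comparison, here is a sketch (mine, not part of the original answer) of the loop a tail-call-optimising compiler would effectively produce from the factorial above:

```java
class FactLoop {
    // Iterative equivalent of the tail-recursive fact(n, accum):
    // the accumulator parameter becomes a local variable.
    static int fact(int n) {
        int accum = 1;
        while (n > 1) {   // the base cases n == 0 and n == 1 both yield accum
            accum *= n;
            n--;
        }
        return accum;
    }
}
```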

There are also times when converting a simple loop into a tail-recursive call is convoluted and harder to understand.


If you want to get into the theory side of it, see the Church–Turing thesis. You may also find the church-turing-thesis tag on CS.SE useful.

32

Are there any cases where a task can only be done using recursion, rather than a loop?

You can always turn a recursive algorithm into a loop which uses a Last-In-First-Out data structure (AKA a stack) to store temporary state, because a recursive call is exactly that: storing the current state on a stack, proceeding with the algorithm, then later restoring the state. So the short answer is: no, there are no such cases.

However, an argument can be made for "yes". Let's take a concrete, easy example: merge sort. You need to divide the data into two parts, merge sort each part, and then combine them. Even if you don't make an actual programming-language function call to merge sort the parts, you need to implement functionality that is identical to making a function call (push state onto your own stack, jump to the start of the loop with different starting parameters, then later pop the state from your stack).

Is it recursion if you implement the recursive call yourself, as separate "push state", "jump to beginning", and "pop state" steps? The answer to that is: no, it still isn't called recursion; it is called iteration with an explicit stack (if you want to use established terminology).
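The "push state / jump / pop state" transformation described here can be sketched concretely. This is my own illustrative Java, not part of the original answer: merge sort driven by an explicit stack of frames, where each frame records the sub-range and how far through the "recursive body" it has progressed:

```java
import java.util.ArrayDeque;
import java.util.Deque;

class IterativeMergeSort {
    // One saved "activation record": the sub-range [lo, hi) plus a phase,
    // 0 = halves not yet sorted, 1 = halves sorted, merge is next.
    private static final class Frame {
        final int lo, hi;
        int phase;
        Frame(int lo, int hi) { this.lo = lo; this.hi = hi; }
    }

    static void sort(int[] a) {
        Deque<Frame> stack = new ArrayDeque<>();
        stack.push(new Frame(0, a.length));
        while (!stack.isEmpty()) {
            Frame f = stack.peek();
            if (f.hi - f.lo <= 1) { stack.pop(); continue; } // base case
            int mid = (f.lo + f.hi) >>> 1;
            if (f.phase == 0) {
                f.phase = 1;                     // remember where we were...
                stack.push(new Frame(f.lo, mid)); // ..."recurse" on both halves
                stack.push(new Frame(mid, f.hi));
            } else {
                stack.pop();                      // halves done: merge them
                merge(a, f.lo, mid, f.hi);
            }
        }
    }

    private static void merge(int[] a, int lo, int mid, int hi) {
        int[] tmp = new int[hi - lo];
        int i = lo, j = mid, k = 0;
        while (i < mid && j < hi) tmp[k++] = a[i] <= a[j] ? a[i++] : a[j++];
        while (i < mid) tmp[k++] = a[i++];
        while (j < hi)  tmp[k++] = a[j++];
        System.arraycopy(tmp, 0, a, lo, tmp.length);
    }
}
```

The `Frame` objects are exactly the state a real recursive call would have kept on the call stack.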


Note, this also depends on the definition of "task". If the task is to sort, then you can do it with many algorithms, many of which don't need any kind of recursion. If the task is to implement a specific algorithm, like merge sort, then the ambiguity above applies.

So let's consider the question: are there general tasks for which there are only recursion-like algorithms? As @WizardOfMenlo's comment under the question points out, the Ackermann function is a simple example. So the concept of recursion stands on its own, even if it can be implemented with a different program construct (iteration with an explicit stack).

hyde
  • 3,754
21

It depends on how strictly you define "recursion".

If we strictly require it to involve the call-stack (or whatever mechanism for maintaining program state is used), then we can always replace it with something that doesn't. Indeed, languages that lead naturally to heavy use of recursion tend to have compilers that make heavy use of tail-call optimisation, so what you write is recursive but what you run is iterative.

But let's consider a case where we make a recursive call and use the result of one recursive call as an argument to another recursive call.

public static BigInteger Ackermann(BigInteger m, BigInteger n)
{
  if (m == 0)
    return  n+1;
  if (n == 0)
    return Ackermann(m - 1, 1);
  else
    return Ackermann(m - 1, Ackermann(m, n - 1));
}

Making the first recursive call iterative is easy:

public static BigInteger Ackermann(BigInteger m, BigInteger n)
{
restart:
  if (m == 0)
    return  n+1;
  if (n == 0)
  {
    m--;
    n = 1;
    goto restart;
  }
  else
    return Ackermann(m - 1, Ackermann(m, n - 1));
}

We can then clean up by removing the goto, to ward off velociraptors and the shade of Dijkstra:

public static BigInteger Ackermann(BigInteger m, BigInteger n)
{
  while(m != 0)
  {
    if (n == 0)
    {
      m--;
      n = 1;
    }
    else
      return Ackermann(m - 1, Ackermann(m, n - 1));
  }
  return  n+1;
}

But to remove the other recursive calls we're going to have to store the values of some calls into a stack:

public static BigInteger Ackermann(BigInteger m, BigInteger n)
{
  Stack<BigInteger> stack = new Stack<BigInteger>();
  stack.Push(m);
  while(stack.Count != 0)
  {
    m = stack.Pop();
    if(m == 0)
      n = n + 1;
    else if(n == 0)
    {
      stack.Push(m - 1);
      n = 1;
    }
    else
    {
      stack.Push(m - 1);
      stack.Push(m);
      --n;
    }
  }
  return n;
}

Now, when we consider the source code, we have certainly turned our recursive method into an iterative one.

Considering what this has been compiled to, we have turned code that uses the call stack to implement recursion into code that does not (and in doing so turned code that will throw a stack-overflow exception for even quite small values into code that will merely take an excruciatingly long time to return [see How can I prevent my Ackerman function from overflowing the stack? for some further optimisations that make it actually return for many more possible inputs]).

Considering how recursion is implemented generally, we have turned code that uses the call-stack into code that uses a different stack to hold pending operations. We could therefore argue that it is still recursive, when considered at that low level.

And at that level, there are indeed no other ways around it. So if you do consider that method to be recursive, then there are indeed things we cannot do without it. Generally though we do not label such code recursive. The term recursion is useful because it covers a certain set of approaches and gives us a way to talk about them, and we are no longer using one of them.

Of course, all of this assumes you have a choice. There are both languages that prohibit recursive calls, and languages that lack the looping structures necessary for iterating.

Jon Hanna
  • 2,135
9

The classical answer is "no", but allow me to elaborate on why I think "yes" is a better answer.


Before going on, let's get something out of the way, from a computability and complexity standpoint:

  • The answer is "no" if you are permitted to have an auxiliary stack when looping.
  • The answer is "yes" if you are not permitted any extra data when looping.

Okay, now, let's put one foot in practice-land, keeping the other foot in theory-land.


The call stack is a control structure, whereas a manual stack is a data structure. Control and data are not equal concepts, but they're equivalent in the sense that they can be reduced to each other (or "emulated" via one another) from a computability or complexity standpoint.

When might this distinction matter? When you're working with real-world tools. Here's an example:

Say you're implementing N-way mergesort. You might have a for loop that goes through each of the N segments, calls mergesort on them separately, then merges the results.

How might you parallelize this with OpenMP?

In the recursive realm, it's extremely simple: just put #pragma omp parallel for around your loop that goes from 1 to N, and you're done. In the iterative realm, you can't do this. You have to spawn threads manually and pass them the appropriate data manually so that they know what to do.

On the other hand, there are other tools (such as automatic vectorizers, e.g. #pragma vector) that work with loops but are utterly useless with recursion.

Point being, just because you can prove the two paradigms are equivalent mathematically, that doesn't mean they are equal in practice. A problem that might be trivial to automate in one paradigm (say, loop parallelization) might be much more difficult to solve in the other paradigm.

i.e.: Tools for one paradigm do not automatically translate to other paradigms.

Consequently, if you require a tool to solve a problem, chances are that the tool will only work with one particular kind of approach, and consequently you will fail to solve the problem with a different approach, even if you can mathematically prove the problem can be solved either way.

user541686
  • 8,178
8

Setting aside theoretical reasoning, let's look at what recursion and loops look like from a (hardware or virtual) machine's point of view. Recursion is a combination of control flow that lets you start executing some code and return on completion (in a simplistic view, ignoring signals and exceptions), of data that is passed to that other code (arguments), and of data that is returned from it (the result). Usually no explicit memory management is involved; however, stack memory is implicitly allocated to save return addresses, arguments, results, and intermediate local data.

A loop is a combination of control flow and local data. Compared to recursion, the amount of data here is fixed. The only way to break this limitation is to use dynamic memory (also known as the heap), which can be allocated (and freed) whenever needed.

To summarize:

  • Recursion case = Control flow + Stack (+ Heap)
  • Loop case = Control flow + Heap

Assuming that the control-flow part is reasonably powerful, the only difference is in the available memory types. So we are left with four cases (expressive power is listed in parentheses):

  1. No stack, no heap: recursion and dynamic structures are impossible. (recursion = loop)
  2. Stack, no heap: recursion is OK, dynamic structures are impossible. (recursion > loop)
  3. No stack, heap: recursion is impossible, dynamic structures are OK. (recursion = loop)
  4. Stack, heap: recursion and dynamic structures are OK. (recursion = loop)

If rules of the game are a bit stricter and recursive implementation is disallowed to use loops, we get this instead:

  1. No stack, no heap: recursion and dynamic structures are impossible. (recursion < loop)
  2. Stack, no heap: recursion is OK, dynamic structures are impossible. (recursion > loop)
  3. No stack, heap: recursion is impossible, dynamic structures are OK. (recursion < loop)
  4. Stack, heap: recursion and dynamic structures are OK. (recursion = loop)

The key difference from the previous scenario is that, without stack memory, recursion that is not allowed to use loops cannot perform more execution steps than there are lines of code.

2

Yes. There are several common tasks that are easy to accomplish using recursion but impossible with just loops:

  • Causing stack overflows.
  • Totally confusing beginner programmers.
  • Creating fast looking functions that actually are O(n^n).
jpa
  • 1,408
1

There's a difference between recursive functions and primitive recursive functions. Primitive recursive functions are those that can be calculated using loops where the maximum iteration count of each loop is calculated before the loop starts executing. ("Recursive" here has nothing to do with the use of recursion in code.)

Primitive recursive functions are strictly less powerful than recursive functions. You would get the same class of functions if you took functions defined using recursion where the maximum depth of the recursion has to be calculated beforehand.
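As an illustrative sketch (mine, not part of the original answer): factorial is primitive recursive because it can be computed with a loop whose iteration count is fixed before the loop starts, whereas the Ackermann function discussed in other answers cannot be bounded this way:

```java
class PrimRec {
    // Factorial is primitive recursive: the loop bound n is known before
    // the loop begins and is never modified inside the loop body.
    static long factorial(long n) {
        long result = 1;
        for (long i = 1; i <= n; i++) {
            result *= i;
        }
        return result;
    }
}
```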

gnasher729
  • 49,096
1

If you are programming in C++ and use C++11, then there is one thing that has to be done using recursion: constexpr functions. The standard limits the recursion depth to 512, as explained in this answer. Using loops in this case is not possible, since a function containing a loop cannot be constexpr in C++11; this changed in C++14.

BЈовић
  • 14,049
0
  • If the recursive call is the very first or very last statement (excluding condition checks) of a recursive function, it is pretty easy to translate into a looping structure.
  • But if the function does other things before and after the recursive call, then converting it to loops is more cumbersome.
  • If the function makes multiple recursive calls, converting it to code that uses just loops is pretty much impossible without some stack to keep track of the data. In recursion, the call stack itself serves as that data stack.
Gulshan
  • 9,532
-6

I agree with the other answers. There is nothing you can do with recursion that you can't do with a loop.

BUT, in my opinion, recursion can be very dangerous. First, for some people it is more difficult to understand what is actually happening in the code. Second, at least in C++ (for Java I am not sure), each recursion step costs memory, because each method call allocates a new stack frame for its return address, arguments, and local data. This way you can blow up your stack. Simply try a naive recursive Fibonacci with a high input value.

Ben1980
  • 111
  • 2