People actually say things about Haskell like:
When you see something like '(n-1)' and 'y + a1/6', it is a red flag. These are exactly the kinds of expressions that lead to memory exhaustion.
Those expressions mean the same thing as they would in C: simple arithmetic.
The reason they exhaust memory is that Haskell is lazy. When Haskell sees a calculation, it doesn't perform it right away; at most it creates a thunk consisting of the calculation to be performed and references to the arguments. It can then do the calculation later, when and if the result value is needed. Sometimes it takes more memory to keep the thunk around than to just calculate it and be done. In the case of (n-1), Haskell has to remember two values and the - operation, even if n has already been calculated. But it gets worse. If you run a calculation many times in a loop, and each iteration uses the result of previous iterations, Haskell can end up building a huge tree of nested thunks, just because it's too lazy to do the calculations (which it'll eventually have to do anyway) and free up the memory.
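The classic demonstration of this is a lazy left fold over a long list. The sketch below (names are mine, not from the article) contrasts plain foldl, which builds exactly the tower of nested thunks described above, with foldl' from Data.List, which forces the accumulator at each step:

```haskell
import Data.List (foldl')

-- Lazy foldl never evaluates the running total; it builds the thunk
-- 0 + 1, then (0 + 1) + 2, then ((0 + 1) + 2) + 3, and so on.
-- Nothing is evaluated until the final result is demanded, and for a
-- large enough list the nested thunks can exhaust memory.
lazySum :: Int
lazySum = foldl (+) 0 [1 .. 1000000]

-- foldl' forces each intermediate accumulator to a plain Int before
-- moving on, so the fold runs in constant space.
strictSum :: Int
strictSum = foldl' (+) 0 [1 .. 1000000]

main :: IO ()
main = print strictSum  -- prints 500000500000
```

Both functions compute the same number; only the shape of the intermediate state differs, which is why the problem is easy to miss until the input gets big.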
Another programmer lists techniques for avoiding laziness in Haskell. She sounds a bit disenchanted with it.
What good is laziness? It's very clever for implementing backtracking. Just write code to produce all the possible results at each step. At runtime, you'll only ask for one result; the rest will never be evaluated. Laziness is also good for any case where there's an infinite (or just fantastically large) amount of analysis you could do if you had infinite memory and CPU—like lookahead in a chess game—but you'd rather write unconstrained code first and apply pruning as an afterthought. And of course I think lazy lists are a brilliant programming technique: the Haskell equivalent of a generator is just an ordinary Haskell list.
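Both points above can be sketched in a few lines. In the first function below (a coin-change search of my own invention, not from the article), the recursion naively enumerates every solution, but demanding only the head of the result list means only the first solution is ever computed—backtracking for free. The second definition is the generator-as-ordinary-list idea: an infinite list from which callers take as much as they need.

```haskell
-- Naive backtracking search: all ways to make amount n from coins,
-- each coin usable any number of times.
change :: Int -> [Int] -> [[Int]]
change 0 _  = [[]]
change _ [] = []
change n coins@(c : cs)
  | n < c     = change n cs
  | otherwise = map (c :) (change (n - c) coins) ++ change n cs

-- Laziness means asking for just the head evaluates only as much of
-- the search tree as the first solution requires; failed branches and
-- later solutions are never built.
firstChange :: Int -> [Int] -> Maybe [Int]
firstChange n coins = case change n coins of
  []      -> Nothing
  (s : _) -> Just s

-- An infinite list as a generator: callers pull elements on demand.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

main :: IO ()
main = do
  print (firstChange 11 [5, 2, 1])
  print (take 8 fibs)
```

Nothing in `change` mentions pruning or early exit; the consumer's demand pattern is what limits the work, which is exactly the "write unconstrained code first" style described above.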