In this chapter, I'll discuss procedure calling and recursion in more depth. [ blah blah blah ] Scheme's procedure-calling mechanism supports efficient tail-recursive programming, where recursion is used instead of iteration. In conventional programming languages, you can't generally use recursion to get the effect of iteration, because you may get activation stack overflows if the recursion is too deep.
After clarifying how recursion works, I'll give examples of how to program recursively in Scheme.
(In a later chapter, I'll show how the mechanisms that support tail recursion also support a powerful control feature called call-with-current-continuation that lets you implement novel control structures like backtracking and coroutines.)
In most implementations of most programming languages, an activation stack is used to implement procedure calling. At a call, the state of the "caller" (calling procedure) is saved on the stack, and then control is transferred to the callee.
Because each procedure call requires saving state on the stack, recursion is limited by the stack depth. In many systems, deep recursions cause stack overflow and program crashes, or use up unnecessary virtual memory swap space. In most systems, recursion is unnecessarily expensive in space and/or time. This limits the usefulness of recursion.
In Scheme, things are somewhat different. As I noted earlier, recursive calls may be tail recursive, in which case the state of the caller needn't be saved before calling the callee.
More generally, whether a procedure is recursive or not, the calls it makes can be classified as subproblems or reductions. If the last thing a procedure does is to call another procedure, that call is known as a reduction--the work being done by the caller is complete, because it "reduces to" the work being done by the callee. A call after which the caller still has work to do is a subproblem.
For example, consider the following procedures:
(define (foo) (bar) (baz))
(define (baz) (bar) (foo))
Notice that when foo is called, it does two things: it calls bar and then calls baz. After the call to bar, control must return to foo so that it can continue and call baz. The call to bar is therefore a subproblem--a step in the overall plan of executing foo. When foo calls baz, however, that's all it needs to do--all of its other work is done. The result of the call to foo is just the result of foo's call to baz.
In a conventional programming language implementation, foo's state would be saved before the call to baz, as well as before the call to bar. Each call would return control to foo. In the case of the call to baz, all foo will do is return the result of that call to its own caller. That is, all foo does after the return from baz is to leave the result wherever its caller expects it, and return again to pop a stack frame off the activation stack.
In Scheme, things are actually simpler. If the last thing a procedure does is to call another procedure, the caller doesn't save its own state on the stack. When the callee returns, it will return to its caller's caller directly, rather than to its caller. After all, there's no reason to return to the caller if all the caller is going to do is pass the return value along to its caller.
This optimizes away the unnecessary state saving and returning at tail calls. You don't have to do anything special to get this optimization--Scheme implementations always do it for tail calls.
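For instance, here is a minimal sketch of an "iteration" written as a self-tail-calling procedure (count-down is a made-up example, not a procedure from this chapter). In any standard Scheme, it runs in constant stack space no matter how large n is:

   ;; Iteration as tail recursion: the recursive call is the last thing
   ;; count-down does, so no state is saved before it, and the stack
   ;; doesn't grow--even for a million "iterations."
   (define (count-down n)
     (if (= n 0)
         'done
         (count-down (- n 1))))   ; a tail call (a reduction)

   (count-down 1000000)   ; => done, without stack overflow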
Consider both foo and baz above. Neither ever returns--each just calls the other. In Scheme, these two procedures will repeatedly call each other, without saving their state on the stack, producing an infinite mutual recursion. Will the stack overflow? No. Each will save its state before calling bar, but the return from bar will pop that information off of the stack. The infinite tail-calling between foo and baz will not increase the stack height at all.
Above I said that a callee may return to its caller's caller, but that doesn't really capture the extent of what's going on. In general, a procedure may return to its caller (if it was non-tail called), or its caller's caller (if it was tail-called but its caller wasn't), or its caller's caller's caller (if it and its caller were both tail-called), and so on. A procedure returns to the last caller that did a non-tail call.
Because of this "tail call optimization," you can use recursion very freely in Scheme, which is a good thing--many problems have a natural recursive structure, and recursion is the easiest way to solve them.
Notice that this tail-call optimization is a feature of the language, not just of some implementations. Any implementation of standard Scheme is required to support it, so that you can count on it and write portable programs that rely on it. (In fact, the definition of the Scheme language itself depends on this, because the special forms for iteration are defined in terms of equivalent tail-calling--a loop is really a kind of procedure that tail-calls itself to iterate.)
Also notice that the interpreter we presented earlier is tail-recursive. The recursive calls to eval are tail calls, and since the interpreter is implemented in Scheme, it relies on the underlying Scheme's tail-call optimization. The evaluator thus snarfs the tail-call optimization from the underlying Scheme system. If you implement a Scheme interpreter in another language, you have to be more careful, and implement the tail-call optimization yourself.
It's something of a misnomer to call Scheme's procedure calling mechanism an "optimization." What's really going on is that Scheme simply distinguishes between two things that most languages lump together--saving the caller's state, and actually transferring control to the callee. Scheme notices that these things are distinct, and doesn't bother to do the former when only the latter is necessary.
A procedure call is really rather like a (safe) goto that can pass arguments: control is transferred directly to the callee, and the caller has the option of saving its state beforehand. (This is safer than unrestricted goto's, because when a procedure does return, it returns to the right ancestor in the dynamic calling pattern, just as though it had done a whole bunch of returns to get there.)
In this section, I'll describe a straightforward implementation of Scheme's state-saving for procedure calling. It may clarify things that are discussed later, such as call-with-current-continuation and our example compiler's code generation strategy.
In most conventional language implementations, as noted above, calling a procedure allocates an activation record (or "stack frame") that holds the return address for the call and the variable bindings for the procedure being called. The stack is a contiguous area of memory, and pushing an activation record onto the stack is done by incrementing a pointer into this contiguous area by the size of the stack frame. Removing a stack frame is done by decrementing the pointer by the size of the stack frame.
Scheme implementations are quite different. As we've explained previously, variable bindings are not allocated in a stack, but instead in environment frames on the garbage-collected heap. This is necessary so that closures can have indefinite extent, and can count on the environments they use living as long as is necessary. The garbage collector will eventually reclaim the space for variable bindings in frames that aren't captured by closures.
(Actually, I'm oversimplifying a bit here. Some implementations of Scheme do use a relatively conventional stack, often so that they can compile Scheme straightforwardly to C. They must provide tail-call optimization somehow, though. I won't go into alternative implementation strategies here.)
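Here's a small example of why heap allocation is necessary (make-adder is an illustrative name, not from this chapter): the closure returned below still needs the binding of n after make-adder has returned, so that binding can't live in a stack frame that's popped at return time.

   ;; make-adder returns a closure that refers to n. Because environment
   ;; frames live on the heap, the binding of n survives the return from
   ;; make-adder for as long as the closure needs it.
   (define (make-adder n)
     (lambda (x) (+ x n)))

   (define add5 (make-adder 5))   ; n is bound to 5 in add5's environment
   (add5 10)                      ; => 15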
Most Scheme implementations also differ from conventional language implementations in how they represent the saved state of callers. (In a conventional language implementation, the callers' state is in two places: the variable bindings are in the callers' own stack frames, and the return address is stored in the callee's stack frame.)
In most Scheme implementations, the caller's state is saved in a record on the garbage-collected heap, called a partial continuation. It's called a continuation because it says how to resume the caller when we return into it--i.e., how to continue the computation when control returns. It's called a partial continuation because that record, by itself, only tells us how to resume the caller, not the caller's caller or the caller's caller's caller. On the other hand, each partial continuation holds a pointer to the partial continuation for its caller, so a chain of continuations represents how to continue the whole computation: how to resume the caller, and when it returns, how to resume its caller, and so on until the computation is complete. This chain is therefore called a full continuation.
(Notice that the relationship between the partial continuations in a full continuation chain is similar to the relationship between an environment frame and an environment chain. The former represents control information while the latter represents scope information.)
In most Scheme implementations, a special register called the continuation register is used to hold the pointer to the partial continuation for the caller of the currently-executing procedure. When we call a procedure, we can package up the state of the caller as a record on the heap (a partial continuation), and push that partial continuation onto the chain of continuations hanging off the continuation register.
                     part. cont.  (saved state of caller's
                        /|\        caller's caller)
                         |
                         |
                     part. cont.  (saved state of caller's caller)
                        /|\
                         |
   +-------+             |
   | CONT  +--------> part. cont.  (saved state of caller)
   +-------+
(It is often convenient to draw stacks and continuations as growing
downward, which is our convention here--the newer elements are on
the bottom.)
Note that the continuation register may be a register in the CPU, or it may just be a particular memory location that our implementation uses for this purpose. The point is just that when we're executing a procedure, we always know where to find a pointer to the partial continuation that lets us resume its caller (or whichever procedure last did a non-tail call). We will sometimes abbreviate this register's name as CONT. A typical implementation of Scheme using a compiler has several important registers that encode the state of the currently-executing procedure:
The environment register (ENVT) holds the pointer to the chain of environment frames that make up the environment that the procedure is executing in.

The program counter register (PC) holds the pointer to the next instruction to execute. In a normal system that compiles to normal machine code, this is the actual program counter of the underlying hardware.

The continuation register (CONT), as we've said, holds the pointer to the chain of partial continuations that lets us resume callers. This is very roughly the equivalent of an activation stack pointer.
Before we call a procedure, we must save a continuation if we want to resume the current procedure after the callee returns. Since the important state of the currently-executing procedure is in the registers listed above, we will create a record that has fields to hold them, and push that on the continuation chain. We will save the values of the CONT, ENVT, and PC registers in the partial continuation, then put a pointer to this new partial continuation in the continuation register. We also need to save any other state that the caller will need when it resumes, as suggested by the ellipsis below. (We'll discuss what else goes in a partial continuation when we talk about compilers in detail.)
                                 old cont.
                                    /|\
                                     |
   +-------+        +---------+      |
   | CONT  +------->| p.cont. |      |
   +-------+        +=========+      |
                    | cont    +------+
                    +---------+
                    | envt    +--------> old envt
                    +---------+
                    | pc      +--------> return address
                    +---------+
                    |  ...    |
                    +---------+
Notice that since we saved the old value of the continuation register in the partial continuation, that serves as the "next" pointer in the linked list that makes up the full continuation. This is exactly as it should be. The value of the continuation register is part of the caller's state, and saving it naturally constructs a linked list, because each procedure's state is fundamentally linked to the state of its caller. Saving the return address is a little bit special--rather than just copying the program counter and saving it, we must save the address we want to resume at when we resume this procedure.
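To make the register shuffling concrete, here is a rough sketch in Scheme itself, with the three registers modeled as ordinary variables and a partial continuation modeled as a three-element list. This is only an illustration of the bookkeeping, not how a real compiler's runtime is written, and all the names here are made up:

   (define *cont* '())     ; continuation register: chain of partial conts.
   (define *envt* 'envt0)  ; environment register (a stand-in value)
   (define *pc*   'pc0)    ; program counter (a stand-in value)

   ;; Pushing a partial continuation: save CONT, ENVT, and the address we
   ;; want to resume at; the saved CONT is the "next" link in the chain.
   (define (push-continuation! return-address)
     (set! *cont* (list *cont* *envt* return-address)))

   ;; Popping a partial continuation: copy the saved values back into the
   ;; registers. Note that the record itself is not modified.
   (define (pop-continuation!)
     (let ((k *cont*))
       (set! *cont* (car k))      ; restore caller's continuation pointer
       (set! *envt* (cadr k))     ; restore caller's environment
       (set! *pc*   (caddr k))))  ; restore the saved return address--in a
                                  ; real system, loading PC is the branch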
Once a procedure has pushed a continuation, it has saved its state and can call another procedure. The other procedure can use the ENVT, CONT, and PC registers without destroying the values of those registers needed by the caller. This is called a caller-saves register usage convention; the assumption is that the callee is allowed to freely clobber the values in the registers, so it's the caller's responsibility to save any values it will need when it resumes.
To do a procedure return, it is only necessary to copy the register values out of the continuation that's pointed to by the CONT register. This will restore the caller's environment and its pointer to its caller's continuation, and setting the PC register will branch to the code in the caller where execution should resume. We often call this "popping" a continuation, because it's a stacklike operation--saving a (partial) continuation pushes the values in registers onto the front of the "stack," and restoring one pops the values back into the registers. (As we will explain later, however, Scheme continuation chains don't always observe this simple stack discipline, which is why they can't be implemented efficiently as contiguous arrays.)
If we save state and do a procedure call, and before returning our caller saves its state and does a procedure call, the chain of continuations gets longer. For the most part, this is like pushing activation information on a stack.
                    +---------+
                    | p.cont. |
                    +=========+
                    | cont    +--------> old cont.
                    +---------+
                    | envt    +--------> old envt
                    +---------+
                    | pc      +--------> return address
                    +---------+
                    |  ...    |
                    +---------+
                        /|\
                         |
                         +---------------+
                                         |
   +-------+        +---------+          |
   | CONT  +------->| p.cont. |          |
   +-------+        +=========+          |
                    | cont    +----------+
                    +---------+
                    | envt    +--------> old envt
                    +---------+
                    | pc      +--------> return address
                    +---------+
                    |  ...    |
                    +---------+
Notice that when we say we save the "state" of the caller, we mean the values in our important registers, but we don't directly save particular variable values--when we save the environment pointer, we don't make copies of the values in the bindings in the environment. In effect, saving the environment pointer records which names refer to which pieces of storage. If other code then executes in that same environment and changes those values, the new values will be seen by this procedure when it returns and restores the environment pointer. This policy has two important consequences:
Executing a return ("popping a continuation") does not modify the partial continuation being popped--it just involves getting values out of the continuation and putting them into registers. Continuations are thus created and used nondestructively, and the continuations on the heap form a graph that reflects the pattern of non-tail procedure calls. Usually, that graph is just a tree, because of the tree-like pattern of call graphs, and the current "stack" of partial continuations is just the rightmost path through that graph, i.e., the path from the newest record all the way back to the root.
Consider the following procedures, where a calls b twice, and each time b is called, it calls c twice:
(define (a) (b) (b) #t)
(define (b) (c) (c) #t)
(define (c) #f)
All of these calls are non-tail calls: each procedure has more work to do after each call it makes, and each ends by returning a literal value rather than by calling another procedure.
Suppose we call a after pushing a continuation for a's caller, and then a calls b the first time. a will push a continuation to save its state, then call b. While executing b, b's state will be in the registers, including a pointer to the continuation for a in the CONT register.
        cont. for a's caller
                /
               /
        cont. for a
            /|\
             |
   +------+  |
   | CONT +--+
   +------+
b will then push a continuation and call c.
        cont. for a's caller
                /
               /
        cont. for a
              /
             /
        cont. for b
            /|\
             |
   +------+  |
   | CONT +--+
   +------+
When c returns, it will restore b's state by popping the partial continuation's values into registers. At this point, the CONT register will point past the continuation for b to the continuation for a.
        cont. for a's caller
                /
               /
        cont. for a <--------+
              /              |
             /               |
        cont. for b          |
                             |
   +------+                  |
   | CONT +------------------+
   +------+
Then b will push another continuation and call c again.
        cont. for a's caller
                /
               /
        cont. for a
           /       \
          /         \
   cont. for b   cont. for b
                     /|\
                      |
   +------+           |
   | CONT +-----------+
   +------+
Again, c will return, restoring b's state, and the CONT register will point past the continuation for b to the continuation for a.
        cont. for a's caller
                /
               /
        cont. for a <----------------+
           /       \                 |
          /         \                |
   cont. for b   cont. for b         |
                                     |
   +------+                          |
   | CONT +--------------------------+
   +------+
After returning to a, the CONT register will point past the continuation for a to the continuation for a's caller. Then, before a calls b again, it will push another continuation to save its state.
        cont. for a's caller
             /        \
            /          \
     cont. for a    cont. for a
        /    \          /|\
       /      \          |
cont. for b  cont. for b |
                         |
   +------+              |
   | CONT +--------------+
   +------+
Then a will call b again, and b will push a continuation and call c a third time.

        cont. for a's caller
             /        \
            /          \
     cont. for a    cont. for a
        /    \              \
       /      \              \
cont. for b  cont. for b   cont. for b
                              /|\
                               |
   +------+                    |
   | CONT +--------------------+
   +------+
This continues in the obvious way, so that at the time of the fourth and last call to c, the continuations on the heap look like this:
           cont. for a's caller
              /          \
             /            \
      cont. for a      cont. for a
        /    \            /    \
       /      \          /      \
cont. for b cont. for b cont. for b cont. for b
                                      /|\
                                       |
   +------+                            |
   | CONT +----------------------------+
   +------+
Most of the time, the rest of this graph becomes garbage quickly--each continuation holds pointers back up the tree, but nothing holds pointers down the tree. Partial continuations therefore usually become garbage the first time they're returned through.
The fact that this graph is created on the heap will allow us to implement call-with-current-continuation, a.k.a. call/cc, a very powerful control construct. call/cc can capture the control state of a program at a particular point in its execution by pushing a partial continuation and saving a pointer to it. Later, it can magically return control to that point by restoring that continuation, instead of the one in the continuation register. (We will discuss call/cc in detail in Chapter XX.)
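To give a small taste of it now (first-negative is a made-up example): one classic use of call/cc is an early exit, where invoking the captured continuation returns straight to the point of capture.

   ;; Return the first negative element of a list, or #f if there is none.
   ;; Invoking return jumps back to the call-with-current-continuation
   ;; point immediately, abandoning the rest of the for-each.
   (define (first-negative lis)
     (call-with-current-continuation
       (lambda (return)
         (for-each (lambda (x)
                     (if (negative? x)
                         (return x)))   ; early exit
                   lis)
         #f)))

   (first-negative '(3 1 -4 1 5))   ; => -4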
In an earlier section, we presented example recursive implementations of several Scheme functions; some of them were tail recursive, and some not.
At first glance, many routines seem as though they can't conveniently be coded tail recursively. On closer inspection, many of them can in fact be coded this way.
Suppose we want to sum a list of numbers. The most obvious way of doing it is the way we did it earlier, like this:
(define (list-sum lis)
  (if (null? lis)
      0
      (+ (car lis) (list-sum (cdr lis)))))
The problem with this code is that it's not particularly efficient, because it's not tail recursive. After each recursive call to list-sum, we must return to do the addition that adds one element to the sum of the rest of the list. We're adding the elements of the list back-to-front, on the way back up from the nested recursion. (This means that Scheme must push a partial continuation before every recursive call, and each one must be popped when we're finished, to return the sum back from each call to its caller.)
The problem here is that evaluation of the arguments to a combination isn't a tail call, even if the combination as a whole is. Control must always return so that the actual call can be done.
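To illustrate the distinction (f, g1, and g2 are throwaway examples, not from the text):

   (define (f x) (* x x))

   ;; In g1, the call to f is the last thing g1 does--a tail call
   ;; (a reduction), so g1's state needn't be saved.
   (define (g1 x) (f x))

   ;; In g2, the call to f computes an argument: control must come back
   ;; so the addition can be done, so the call is a subproblem, and g2's
   ;; state must be saved across it.
   (define (g2 x) (+ 1 (f x)))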
We can write a tail-recursive version of list-sum that adds things in front-to-back order instead. The trick is to do the addition before the tail call, and to pass the sum so far to the recursive call, i.e., to pass it forward as an argument until a non-tail call returns it.
To do this, we have to keep a running sum, and each recursive call must pass it as an argument to the next. To start it off, we have to have a "running sum" of 0.
We can do this by defining two procedures. The one that does the real work takes a list and a running sum, adds one element to the running sum, and tail-calls itself to add the rest of the elements to the running sum. When it reaches the end of the list, it just returns the value. (Scheme doesn't need to save a partial continuation before each call, since only the last call ever returns.) We'll call this procedure loop, because we're using tail recursion to implement looping.
For convenience, we also wrap this procedure up in a friendlier procedure
that will start off the recursion, by supplying an initial "running sum"
of 0.
;; a tail-recursive list summing procedure
(define (loop lis sum-so-far)
  (cond ((null? lis) sum-so-far)
        (else (loop (cdr lis) (+ sum-so-far (car lis))))))

;; a friendly wrapper that supplies an initial running sum of 0
(define (list-sum lis)
  (loop lis 0))
We can make this cleaner by encapsulating loop, since it's only used by list-sum. We make loop a local procedure using letrec and lambda.
(define (list-sum lis)
  ;; define a local, tail-recursive list summing procedure
  (letrec ((loop (lambda (lis sum-so-far)
                   (cond ((null? lis) sum-so-far)
                         (else (loop (cdr lis)
                                     (+ sum-so-far (car lis))))))))
    ;; start off recursive summing with a sum of 0
    (loop lis 0)))
We can write this more clearly using named let. Named let is one of Scheme's two special forms for looping (the other is do). A named let looks like a let, but it's really a shorthand for the kind of thing we did above--defining a local procedure that can tail-call itself to give the effect of iteration, and starting it off with the appropriate initial values.
(define (list-sum lis)
  (let loop ((lis lis) (sum-so-far 0))
    (cond ((null? lis) sum-so-far)
          (else (loop (cdr lis) (+ sum-so-far (car lis)))))))
Notice that here we're using two loop variables, lis and sum-so-far, rebound at each iteration. One keeps track of the remaining part of the original list, and the other the sum of the list items we've seen so far.
Be sure you understand that this version using named let is exactly equivalent to the version using letrec and lambda. The named let binds the variable loop, and initializes it with a first-class procedure that takes two arguments, lis and sum-so-far. When we use the name loop for the named let, we're really giving the name of the procedure that implements the loop body. Each time we iterate the loop, we're really calling this procedure--the call to loop looks like a procedure call because it is a procedure call.
The argument expressions provide the new values for the next iteration of the loop, and the loop variables are rebound and initialized to those values at the next iteration. As in the version with an explicit letrec and lambda, the loop is started off by evaluating the initial value expressions for the loop variables (which look like let variables) and calling the loop procedure.
Since we re-bind the loop variables at each iteration of the loop, it generally doesn't make sense to side-effect loop variables. The old binding goes out of scope, and new bindings are created at each iteration, initialized with whatever values are passed to the looping procedure.
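Here's a small illustration of that point (make-thunks is a made-up example): each iteration gets a fresh binding of the loop variable, so closures created on different iterations capture different bindings.

   ;; Collect one closure per iteration; each closure captures that
   ;; iteration's own binding of i.
   (define (make-thunks n)
     (let loop ((i 0) (thunks '()))
       (if (= i n)
           (reverse thunks)
           (loop (+ i 1) (cons (lambda () i) thunks)))))

   (map (lambda (thunk) (thunk)) (make-thunks 3))   ; => (0 1 2)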
length tail-recursively

Recall that in [ Chapter whatever ] we implemented length this way:
(define (length lis)
  (if (null? lis)
      0
      (+ 1 (length (cdr lis)))))
This definition looks a lot like the definition of list-sum, and has the same basic problem. By using straightforward recursion (adding one to the length of the rest of the list), we're ensuring that the addition happens back-to-front. We can compute the list length front-to-back by passing the running sum forward through tail recursions, as an argument. Each tail call will add to the running sum, and pass it forward. When the last tail call returns to its caller, it just returns the sum.
To do this, it's convenient to write the length procedure as a wrapper around a two-argument procedure that passes the running sum (as well as the remainder of the list) to recursive calls to itself.
(define (length lis)
  (letrec ((len (lambda (lis length-so-far)
                  (if (null? lis)
                      length-so-far
                      (len (cdr lis) (+ 1 length-so-far))))))
    (len lis 0)))
Or equivalently, using named let:
(define (length lis)
  (let loop ((lis lis) (length-so-far 0))
    (if (null? lis)
        length-so-far
        (loop (cdr lis) (+ 1 length-so-far)))))
reduce
In this section, I'll give an extended example of the use of higher-order functions to express patterns common to many functions, and of customizing general procedures with procedural arguments and closure creation.
Consider the following function to sum the elements of a list:
(define (list-sum lis)
  (if (null? lis)
      0
      (+ (car lis) (list-sum (cdr lis)))))
Given this definition, (list-sum '(10 15 20 25)) is equivalent to (+ 10 (+ 15 (+ 20 (+ 25 0)))).
[ the following couple of examples are now redundant with earlier material... trim and refer back. ]
Now consider a very similar function to multiply the elements of a list, where we've adopted the convention that the product of a null list is 1. (1 is probably the right value to use, because if you multiply something by 1 you get back the same thing--just as if you add something to 0 you get back the same thing.)
(define (list-prod lis)
  (if (null? lis)
      1
      (* (car lis) (list-prod (cdr lis)))))
Given this definition, (list-prod '(2 3 4 5)) is equivalent to (* 2 (* 3 (* 4 (* 5 1)))).
Given these definitions, you can probably imagine a very similar function to subtract the elements of a list, or to divide the elements of a list. For subtraction, the base value for an empty list should probably be zero, because subtracting zero doesn't change anything. For division it should probably be one.
At any rate, what we want is a single function that captures the pattern

(op thing1 (op thing2 ... (op thingN base-thing) ... ))
We can write a higher-order procedure reduce that implements this pattern in a general way, taking three arguments: any procedure you want successively applied to the elements of a list, an appropriate base value to use on reaching the end of the list, and the list to do it to.
(define (reduce fn base-value lis)
  (if (null? lis)
      base-value
      (fn (car lis) (reduce fn base-value (cdr lis)))))
This is a very general procedure that can be used for lots of things besides numerical operations on lists of numbers: it can be used for any computation over successive items in a list.
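For example, reduce subsumes both of the procedures above:

   (reduce + 0 '(10 15 20 25))   ; => 70, like list-sum
   (reduce * 1 '(2 3 4 5))       ; => 120, like list-prod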
[ need to check the following couple of examples--they're off the top of my head ]
What does (reduce cons '() '(a b c d)) do? It's equivalent to (cons 'a (cons 'b (cons 'c (cons 'd '())))). That is, (reduce cons '() list) copies a list. We could define list-copy that way:
(define (list-copy lis)
  (reduce cons '() lis))
We could also define append that way, because reduce allows you to specify what goes at the end of a list--we don't have to end our list with '(). Here's a two-argument version of append:
(define (append list1 list2)
  (reduce cons list2 list1))
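For example:

   (append '(1 2) '(3 4))   ; => (1 2 3 4), i.e., (cons 1 (cons 2 '(3 4)))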
The reduction of a list using (lambda (x rest) (cons (* x 2) rest)) constructs a new list whose elements are twice the values of the corresponding elements in the original list.
Scheme> (reduce (lambda (x rest) (cons (* x 2) rest)) '() '(1 2 3 4))
(2 4 6 8)
[ show tail-recursive version... that'd make a good exercise ]
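Here's one way such a version might look (reduce-left is a hypothetical name; this is a sketch in the spirit of that exercise, not code from the text). Note that it combines elements front-to-back, so for operations where order matters it gives different results than reduce:

   ;; An accumulator-passing reduction that processes the list
   ;; front-to-back with a tail call at each step.
   (define (reduce-left fn base-value lis)
     (let loop ((lis lis) (so-far base-value))
       (if (null? lis)
           so-far
           (loop (cdr lis) (fn (car lis) so-far)))))

   (reduce-left + 0 '(1 2 3 4))      ; => 10, same as (reduce + 0 ...)
   (reduce-left cons '() '(a b c))   ; => (c b a)--order matters for cons!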
The reduce procedure above is handy, because you can use it for many different kinds of computations over different kinds of list values, as long as you can process the elements (and construct the result) front-to-back. It's a little awkward, though, in that each time you use it, you have to remember the appropriate base value for the operation you're applying to a list.
Sometimes it would be preferable to come up with a single specialized procedure like list-sum, which implicitly remembers which function it should apply to the list elements (e.g., +) and what base value to return for an empty list (e.g., 0).
We can write a procedure make-reducer that will automatically construct a reducer procedure, given a function and a base value. Here's an example usage:
Scheme> (define list-sum (make-reducer + 0))
list-sum
Scheme> (define list-product (make-reducer * 1))
list-product
Scheme> (list-sum '(1 2 3 4))
10
Scheme> (list-product '(1 2 3 4))
24
Make sure you understand the expressions above. The define forms
are not
using procedure definition syntax--they're using plain
variable definition syntax, but the initial value expressions
return procedures constructed by make-reducer. If we hadn't
wanted to define procedures named list-sum and list-product,
and hang on to them, we could have just taken the procedures
returned by make-reducer and called them immediately:
Scheme> ((make-reducer + 0) '(1 2 3 4))
10
Scheme> ((make-reducer * 1) '(1 2 3 4))
24
This is very much like calling our original reduce procedure, except that each time we're constructing a specialized procedure (a closure) that's like reduce customized for particular values of its first two arguments; then we call that new, specialized procedure to do the work on a particular list.
Here's a simple definition of make-reducer in terms of reduce:
(define (make-reducer fn base-value)
  (lambda (lis)
    (reduce fn base-value lis)))
Notice that we are using procedure definition syntax here, so the lambda in the body will create and return a closure.
[ can also do this with curry. ]
But suppose we don't already have a reduce procedure, and we don't want to leave one lying around. A cleaner solution is to define the general reduce procedure as a local procedure, and create closures of it in different environments to customize it for different functions and base values.
(define (make-reducer fn base-value)
  (letrec ((reduce (lambda (lis)
                     (if (null? lis)
                         base-value
                         (fn (car lis) (reduce (cdr lis)))))))
    reduce))   ;; return new closure of local procedure
This procedure uses closure creation to create a customized version of reduce. When make-reducer is entered, its arguments are bound and initialized to the argument values--i.e., the function and base value we want the custom reducer to use. In this environment, we create a closure of the reducer procedure using lambda. We wrap the lambda in a letrec so that the reducer can refer to (and call) itself. Notice that since reduce is a local procedure, it can see the arguments to make-reducer, and we don't have to pass it those arguments explicitly.
By using local procedure definition syntax--which not all Schemes
support--we can write this as:
(define (make-reducer fn base-value)
  (define (reduce lis)
    (if (null? lis)
        base-value
        (fn (car lis) (reduce (cdr lis)))))
  reduce)   ;; return new closure of local procedure
Make sure that you understand that these are equivalent--the local procedure define is equivalent to a letrec and a lambda, and in either case the closure created (by the lambda or the define) will capture the environment where the arguments to make-reducer are bound.
let
[ move earlier discussion here? ]
do