PLAI §11 (without the last part about recursion)
An alternative representation for an environment.
We’ve already seen how first-class functions can be used to implement
“objects” that contain some information. We can use the same idea to
represent an environment. The basic intuition is — an environment is
a mapping (a function) between an identifier and some value. For
example, we can represent the environment that maps 'a to 1 and 'b to 2 (using just numbers for simplicity) using this function:
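    ;; (a sketch — the exact error message is a guess)
    (lambda (id)
      (cond [(eq? id 'a) 1]
            [(eq? id 'b) 2]
            [else (error 'lookup "no binding for ~s" id)]))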
An empty mapping that is implemented in this way has the same type:
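    (lambda (id)
      (error 'lookup "no binding for ~s" id))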
We can use this idea to implement our environments: we only need to
define three things — EmptyEnv, Extend, and lookup. If we
manage to keep the contract to these functions intact, we will be able
to simply plug it into the same evaluator code with no other changes.
It will also be more convenient to define ENV as the appropriate function type for use in the VAL type definition instead of using the actual type:
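    ;; (sketch — ENV as a function type, VAL unchanged for now)
    (define-type ENV = (Symbol -> VAL))
    (define-type VAL
      [NumV Number]
      [FunV Symbol FLANG ENV])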
Now we get to EmptyEnv — this is expected to be a function that expects no arguments and creates an empty environment, one that behaves like the empty-mapping function defined above. We could define it like this (changing the empty-mapping type to return a VAL):
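    ;; (sketch)
    (: empty-mapping : Symbol -> VAL)
    (define (empty-mapping id)
      (error 'lookup "no binding for ~s" id))
    (define (EmptyEnv) empty-mapping)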
but we can skip the need for an extra definition and simply return an empty mapping function:
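    (: EmptyEnv : -> ENV)
    (define (EmptyEnv)
      (lambda (id) (error 'lookup "no binding for ~s" id)))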
(The un-Rackety name is to avoid replacing previous code that used the EmptyEnv name for the constructor that was created by the type definition.)
The next thing we tackle is lookup. The previous definition that was used is:
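    ;; (roughly, as in the earlier evaluator)
    (: lookup : Symbol ENV -> VAL)
    (define (lookup name env)
      (cases env
        [(EmptyEnv) (error 'lookup "no binding for ~s" name)]
        [(Extend id val rest-env)
         (if (eq? id name) val (lookup name rest-env))]))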
How should it be modified now? Easy — an environment is a mapping: a Racket function that will do the searching job itself. We don’t need to modify the contract since we’re still using ENV, just with a different implementation of it. The new definition is:
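    (: lookup : Symbol ENV -> VAL)
    (define (lookup name env)
      (env name))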
Note that lookup does almost nothing — it simply delegates the real work to the env argument. This is a good hint for the error message that empty mappings should throw —
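    ;; make it look as if lookup itself complains:
    (error 'lookup "no binding for ~s" name)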
Finally, Extend — this was previously created by the variant case of the ENV type definition:
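    [Extend Symbol VAL ENV]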
Keeping the same type that is implied by this variant means that the new Extend should look like this:
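    (: Extend : Symbol VAL ENV -> ENV)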
The question is — how do we extend a given environment? Well, first, we know that the result should be a mapping — a Symbol -> VAL function that expects an identifier to look for:
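    ;; (the ... gets filled in below)
    (define (Extend id val rest)
      (lambda (name)
        ...))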
Next, we know that in the generated mapping, if we look for id then the result should be val:
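    (define (Extend id val rest)
      (lambda (name)
        (if (eq? name id)
            val
            ...)))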
If the name that we’re looking for is not the same as id, then we need to search through the previous environment:
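    (define (Extend id val rest)
      (lambda (name)
        (if (eq? name id)
            val
            (lookup name rest))))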
But we know what lookup does — it simply delegates back to the mapping function (which is our rest argument), so we can take a direct route instead:
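    (: Extend : Symbol VAL ENV -> ENV)
    (define (Extend id val rest)
      (lambda (name)
        (if (eq? name id)
            val
            (rest name))))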
To see how all this works, try out extending an empty environment a few times and examine the result. For example, the environment that we began with:
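    (Extend 'a 1 (Extend 'b 2 (EmptyEnv)))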
behaves in the same way (if the type of values is numbers) as
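    (lambda (id)
      (cond [(eq? id 'a) 1]
            [(eq? id 'b) 2]
            [else (error 'lookup "no binding for ~s" id)]))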
The new code is now the same, except for the environment code:
Racket closures (= functions) can be used in other places too, and as we have seen, they can do more than encapsulate various values — they can also hold the behavior that is expected of these values.
To demonstrate this we will deal with closures in our language. We currently use a variant that holds the three pieces of relevant information:
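    [FunV Symbol FLANG ENV]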
We can replace this by a functional object, which will hold the three values. First, change the VAL type to hold functions for FunV values:
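    (define-type VAL
      [NumV Number]
      [FunV (VAL -> VAL)])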
And note that the function should somehow encapsulate the same information that was there previously; the question is how this information is going to be used, and this will determine the actual type. This information plays a role in two places in our evaluator — generating a closure in the Fun case, and using it in the Call case:
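    ;; (roughly, in the environment-based evaluator; the "marked
    ;;  functionality" is the body of the FunV branch of Call)
    [(Fun bound-id bound-body)
     (FunV bound-id bound-body env)]
    [(Call fun-expr arg-expr)
     (let ([fval (eval fun-expr env)])
       (cases fval
         [(FunV bound-id bound-body f-env)
          (eval bound-body
                (Extend bound-id (eval arg-expr env) f-env))]
         [else (error 'eval "`call' expects a function, got: ~s"
                      fval)]))]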
We can simply fold the marked functionality bit of Call into a Racket function that will be stored in a FunV object — this piece of functionality takes an argument value, extends the closure’s environment with its value under the function’s name, and continues to evaluate the function body. Folding all of this into a function gives us:
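    (lambda (arg-val)
      (eval bound-body (Extend bound-id arg-val env)))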
where the values of bound-body, bound-id, and env are known at the time that the FunV is constructed. Doing this gives us the following code for the two cases:
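    ;; (sketch)
    [(Fun bound-id bound-body)
     (FunV (lambda (arg-val)
             (eval bound-body (Extend bound-id arg-val env))))]
    [(Call fun-expr arg-expr)
     (let ([fval (eval fun-expr env)])
       (cases fval
         [(FunV proc) (proc (eval arg-expr env))]
         [else (error 'eval "`call' expects a function, got: ~s"
                      fval)]))]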
And now the type of the function is clear:
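    [FunV (VAL -> VAL)]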
And again, the rest of the code is unmodified:
What we did just now is implement lexical environments and closures in the language we implement, using lexical environments and closures in our own language (Racket)!
This is another example of embedding a feature of the host language in the implemented language, an issue that we have already discussed.
There are many examples of this, even when the two languages involved are different. For example, if we have this bit in the C implementation of Racket:
then the special semantics of evaluating a Racket and form is being inherited from C’s special treatment of &&. You can see this by the fact that if there is a bug in the C compiler, then it will propagate to the resulting Racket implementation too. A different solution is to not use && at all:
and we can say that this is even better since it evaluates the second expression in tail position. But in this case we don’t really get that benefit, since C itself is not doing tail-call optimization as a standard feature (though some compilers do so under some circumstances).
We have seen a few different implementations of evaluators that are quite different in flavor. They suggest the following taxonomy.
A syntactic evaluator is one that uses its own language to represent expressions and semantic runtime values of the evaluated language, implementing all the corresponding behavior explicitly.
A meta evaluator is an evaluator that uses language features of its own language to directly implement behavior of the evaluated language.
While our substitution-based FLANG evaluator was close to being a syntactic evaluator, we haven’t written any purely syntactic evaluators so far: we still relied on things like Racket arithmetic, etc. The most recent evaluator that we have studied is even more of a meta evaluator than the preceding ones: it doesn’t even implement closures and lexical scope; instead, it uses the fact that Racket itself has them.
With a good match between the evaluated language and the implementation language, writing a meta evaluator can be very easy. With a bad match, though, it can be very hard. With a syntactic evaluator, implementing each semantic feature will be somewhat hard, but in return you don’t have to worry as much about how well the implementation and the evaluated languages match up. In particular, if there is a particularly strong mismatch between the implementation and the evaluated language, it may take less effort to write a syntactic evaluator than a meta evaluator. As an exercise, we can build upon our latest evaluator to remove the encapsulation of the evaluator’s response in the VAL type. The resulting evaluator is shown below. This is a true meta evaluator: it uses Racket closures to implement FLANG closures, Racket function application for FLANG function application, Racket numbers for FLANG numbers, and Racket arithmetic for FLANG arithmetic. In fact, ignoring some small syntactic differences between Racket and FLANG, this latest evaluator can be classified as something more specific than a meta evaluator:
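A meta-circular evaluator is a meta evaluator in which the implementation language and the evaluated language are the same.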
(Put differently, the trivial nature of the evaluator clues us in to the deep connection between the two languages, whatever their syntactic differences may be.)
We saw that the difference between lazy evaluation and eager evaluation
is in the evaluation rules for with forms, function applications, etc.:
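    eval({with {x E1} E2}) = eval(E2[eval(E1)/x])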
is eager, and
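    eval({with {x E1} E2}) = eval(E2[E1/x])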
is lazy. But is the first rule really eager? The fact is that the only thing that makes it eager is the fact that our understanding of the mathematical notation is eager — if we were to take math as lazy, then the description of the rule becomes a description of lazy evaluation.
Another way to look at this is — take the piece of code that implements this evaluation:
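    ;; (in the substitution-based evaluator, roughly)
    [(With bound-id named-expr bound-body)
     (eval (subst bound-body
                  bound-id
                  (Num (eval named-expr))))]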
and the same question applies: is this really implementing eager evaluation? We know that this is indeed eager — we can simply try it and check that it is — but it is only eager because we are using an eager language for the implementation! If our own language were lazy, then the evaluator’s implementation would run lazily, which means that the above applications of the eval and the subst functions would also be lazy, making our evaluator lazy as well.
This is a general phenomenon where some of the semantic features of the language we use (math in the formal description, Racket in our code) get embedded into the language we implement.
Here’s another example — consider the code that implements arithmetics:
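    ;; (one version of it, roughly)
    [(Add l r) (+ (eval l) (eval r))]
    [(Sub l r) (- (eval l) (eval r))]
    [(Mul l r) (* (eval l) (eval r))]
    [(Div l r) (/ (eval l) (eval r))]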
what if it was written like this:
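    eval(l) + eval(r)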
Would it still implement unlimited integers and exact fractions? That depends on the language that was used to implement it: the above syntax suggests C, C++, Java, or some other relative, which usually come with limited integers and no exact fractions. But this really depends on the language — even our own code has unlimited integers and exact rationals only because Racket has them. If we were using a language that didn’t have such features (there are such Scheme implementations), then our implemented language would absorb these (lack of) features too, and its own numbers would be limited in just the same way. (And this includes the syntax for numbers, which we embedded intentionally, like the syntax for identifiers).
The bottom line is that we should be aware of such issues, and be very careful when we talk about semantics. Even the language that we use to communicate (semi-formal logic) can mean different things.
Aside: read “Reflections on Trusting Trust” by Ken Thompson (You can skip to the “Stage II” part to get to the interesting stuff.)
(And when you’re done, look for “XcodeGhost” to see a relevant example, and don’t miss the leaked document on the Wikipedia page…)
Here is yet another variation of our evaluator that is even closer to a meta-circular evaluator. It uses Racket values directly to implement values, so arithmetic operations become straightforward. Note especially how the case for function application is similar to arithmetics: a FLANG function application translates to a Racket function application. In both cases (applications and arithmetics) we don’t even check the objects since they are simple Racket objects — if our language happens to have some meaning for arithmetics with functions, or for applying numbers, then we will inherit the same semantics in our language. This means that we now specify less behavior and fall back more often on what Racket does.
We use Racket values with this type definition:
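    (define-type VAL = (U Number (VAL -> VAL)))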
And the evaluation function can now be:
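    ;; (a sketch of the relevant cases; note that, as written, this
    ;;  would not pass the typechecker — see the next paragraph)
    (: eval : FLANG ENV -> VAL)
    (define (eval expr env)
      (cases expr
        [(Num n) n]
        [(Add l r) (+ (eval l env) (eval r env))]
        [(Sub l r) (- (eval l env) (eval r env))]
        [(Mul l r) (* (eval l env) (eval r env))]
        [(Div l r) (/ (eval l env) (eval r env))]
        [(With bound-id named-expr bound-body)
         (eval bound-body
               (Extend bound-id (eval named-expr env) env))]
        [(Id name) (lookup name env)]
        [(Fun bound-id bound-body)
         (lambda (arg-val)
           (eval bound-body (Extend bound-id arg-val env)))]
        [(Call fun-expr arg-expr)
         ((eval fun-expr env) (eval arg-expr env))]))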
Note how the arithmetics implementation is simple — it’s a direct translation of the FLANG syntax to Racket operations, and since we don’t check the inputs to the Racket operations, we let Racket throw type errors for us. Note also how function application is just like the arithmetic operations: a FLANG application is directly translated to a Racket application.
However, this does not work quite as simply in Typed Racket. The whole
point of typechecking is that we never run into type errors — so we
cannot throw back on Racket errors since code that might produce them is
forbidden! A way around this is to perform explicit checks that
guarantee that Racket cannot run into type errors. We do this with the
following two helpers that are defined inside eval:
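    ;; (a sketch; env is eval's environment argument)
    (: evalN : FLANG -> Number)
    (define (evalN expr)
      (let ([result (eval expr env)])
        (if (number? result)
            result
            (error 'eval "need a number, got: ~s" result))))
    (: evalF : FLANG -> (VAL -> VAL))
    (define (evalF expr)
      (let ([result (eval expr env)])
        (if (number? result)
            (error 'eval "need a function, got: ~s" result)
            result)))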
Note that Typed Racket is “smart enough” to figure out that in evalF the result of the recursive evaluation has to be either Number or (VAL -> VAL); and since the if throws out on numbers, we’re left with (VAL -> VAL) functions, not just any function.
There is one major feature that is still missing from our language: we
have no way to perform recursion (therefore no kind of loops). So far,
we could only use recursion when we had names. In FLANG, the only way we can have names is through with, which is not good enough for recursion.
To discuss the issue of recursion, we switch to a “broken” version of
(untyped) Racket — one where a define has different scoping rules:
the scope of the defined name does not cover the defined expression.
Specifically, in this language, this doesn’t work:
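    (define fact
      (lambda (n)
        (if (zero? n) 1 (* n (fact (- n 1))))))
    (fact 5)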
In our language, this translation would also not work (assuming we have if etc.):
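    ;; (assuming FLANG is extended with if and =)
    {with {fact {fun {n}
                  {if {= n 0} 1 {* n {call fact {- n 1}}}}}}
      {call fact 5}}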
And similarly, in plain Racket this won’t work if let is the only tool you use to create bindings:
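    (let ([fact (lambda (n)
                  (if (zero? n) 1 (* n (fact (- n 1)))))])
      (fact 5))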
In the broken-scope language, the define form is more similar to a mathematical definition. For example, when we write:
it is actually shorthand for
we can then replace defined names with their definitions:
and this can go on, until we get to the actual code that we wrote:
This means that the above fact definition is similar to writing:
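    fact := (lambda (n)
              (if (zero? n) 1 (* n (fact (- n 1)))))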
which is not a well-formed definition — it is meaningless (this is a formal use of the word “meaningless”). What we’d really want is to take the equation (using = instead of :=)
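    fact = (lambda (n)
             (if (zero? n) 1 (* n (fact (- n 1)))))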
and find a solution, which will be a value for fact that makes this true.
If you look at the Racket evaluation rules handout on the web page, you
will see that this problem is related to the way that we introduced the
Racket define: there is a hand-wavy explanation that talks about “knowing” things.
The big question is: can we define recursive functions without Racket’s
magical define form?
Note: This question is a little different than the question of implementing recursion in our language — in the Racket case we have no control over the implementation of the language. As it will eventually turn out, implementing recursion in our own language will be quite easy when we use mutation in a specific way. So the question that we’re now facing can be phrased as either “can we get recursion in Racket without Racket’s magical definition forms?” or “can we get recursion in our interpreter without mutation?”.
PLAI §22.4 (we go much deeper)
Note: This explanation is similar to the one you can find in “The Why of Y”, by Richard Gabriel.
To implement recursion without the define magic, we first make an observation: this problem does not come up in a dynamically-scoped language. Consider the let-version of the problem:
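    (let ([fact (lambda (n)
                  (if (zero? n) 1 (* n (fact (- n 1)))))])
      (fact 5))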
This works fine — because by the time we get to evaluate the body of the function, fact is already bound to itself in the current dynamic scope. (This is another reason why dynamic scope is perceived as a convenient approach in new languages.)
Regardless, the problem that we have with lexical scope is still there, but the way things work in a dynamic scope suggests a solution that we can use now. Just like in the dynamic scope case, when fact is called, it does have a value — the only problem is that this value is inaccessible in the lexical scope of its body.
Instead of trying to get the value in via lexical scope, we can imitate what happens in the dynamically scoped language by passing the fact value to itself so it can call itself (going back to the original code in the broken-scope language):
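    (define fact
      (lambda (self n)
        (if (zero? n) 1 (* n (self (- n 1))))))
    (fact fact 5)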
except that now the recursive call should still send itself along:
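    (define fact
      (lambda (self n)
        (if (zero? n) 1 (* n (self self (- n 1))))))
    (fact fact 5)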
The problem is that this required rewriting calls to fact — both outside and recursive calls inside. To make this an acceptable solution, calls from both places should not change. Eventually, we should be able to get a working fact definition that uses just
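    ;; (the goal — plain calls, both inside and outside)
    (define fact
      (lambda (n)
        (if (zero? n) 1 (* n (fact (- n 1))))))
    (fact 5)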
The first step in resolving this problem is to curry the fact definition:
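    (define fact
      (lambda (self)
        (lambda (n)
          (if (zero? n) 1 (* n ((self self) (- n 1)))))))
    ((fact fact) 5)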
Now fact is no longer our factorial function — it’s a function that constructs it. So call it make-fact, and bind fact to the actual factorial function:
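    (define make-fact
      (lambda (self)
        (lambda (n)
          (if (zero? n) 1 (* n ((self self) (- n 1)))))))
    (define fact (make-fact make-fact))
    (fact 5)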
We can try to do the same thing in the body of the factorial function: instead of calling (self self), just bind fact to it:
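    (define make-fact
      (lambda (self)
        (lambda (n)
          (let ([fact (self self)])
            (if (zero? n) 1 (* n (fact (- n 1))))))))
    (define fact (make-fact make-fact))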
This works fine, but if we consider our original goal, we need to get that local fact binding outside of the (lambda (n) ...) — so we’re left with a definition that uses the factorial expression as is. So, swap the two lines:
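    (define make-fact
      (lambda (self)
        (let ([fact (self self)])
          (lambda (n)
            (if (zero? n) 1 (* n (fact (- n 1))))))))
    (define fact (make-fact make-fact))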
But the problem is that this gets us into an infinite loop, because we’re trying to evaluate (self self) too ea(ge)rly. In fact, if we ignore the body of the let and other details, we basically do this:
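    ((lambda (x) (x x)) (lambda (x) (x x)))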
And this expression has an interesting property: it reduces to itself, so evaluating it gets stuck in an infinite loop.
So how do we solve this? Well, we know that (self self) should be the same value that is the factorial function itself — so it must be a one-argument function. If it’s such a function, we can use a value that is equivalent, except that it will not get evaluated until it is needed, when the function is called. The trick here is the observation that (lambda (n) (add1 n)) is really the same as add1 (provided that add1 is a one-argument function), except that the add1 part doesn’t get evaluated until the function is called. Applying this trick to our code produces a version that does not get stuck in the same infinite loop:
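    (define make-fact
      (lambda (self)
        (let ([fact (lambda (n) ((self self) n))])
          (lambda (n)
            (if (zero? n) 1 (* n (fact (- n 1))))))))
    (define fact (make-fact make-fact))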
Continuing from here — we know that
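    (let ([x v]) e)  is the same as  ((lambda (x) e) v)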
(remember how we derived fun from a with), so we can turn that let into the equivalent function application form:
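    (define make-fact
      (lambda (self)
        ((lambda (fact)
           (lambda (n)
             (if (zero? n) 1 (* n (fact (- n 1))))))
         (lambda (n) ((self self) n)))))
    (define fact (make-fact make-fact))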
And note now that the (lambda (fact) …) expression is everything that we need for a recursive definition of fact — it has the proper factorial body with a plain recursive call. It’s almost like the usual value that we’d want to define fact as, except that we still have to abstract on the recursive value itself. So let’s move this code into a separate definition for fact-step:
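    (define fact-step
      (lambda (fact)
        (lambda (n)
          (if (zero? n) 1 (* n (fact (- n 1)))))))
    (define (make-fact self)
      (fact-step (lambda (n) ((self self) n))))
    (define fact (make-fact make-fact))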
We can now proceed by moving the (make-fact make-fact) self-application into its own function, which is what creates the real factorial:
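    (define (make-real-fact) (make-fact make-fact))
    (define fact (make-real-fact))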
Rewrite the make-fact definition using an explicit lambda:
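    ;; (presumably)
    (define make-fact
      (lambda (self)
        (fact-step (lambda (n) ((self self) n)))))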
and fold the functionality of make-fact and make-real-fact into a single make-real-fact function, by just using the value of make-fact explicitly instead of through a definition:
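    ;; (a sketch — the local binding replaces the top-level one)
    (define (make-real-fact)
      (let ([make-fact (lambda (self)
                         (fact-step (lambda (n) ((self self) n))))])
        (make-fact make-fact)))
    (define fact (make-real-fact))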
We can now observe that make-real-fact has nothing that is specific to factorial — we can make it take a “core function” as an argument, and call it make-recursive:
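    (define (make-recursive core)
      (let ([make (lambda (self)
                    (core (lambda (n) ((self self) n))))])
        (make make)))
    (define fact (make-recursive fact-step))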
We’re almost done now — there’s no real need for a separate fact-step definition, just use the value for the definition of fact:
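    (define fact
      (make-recursive
       (lambda (fact)
         (lambda (n)
           (if (zero? n) 1 (* n (fact (- n 1))))))))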
turn the let into a function form:
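    (define (make-recursive core)
      ((lambda (make) (make make))
       (lambda (self)
         (core (lambda (n) ((self self) n))))))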
do some renamings to make things simpler — make and self turn to x, and core to f:
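    (define (make-recursive f)
      ((lambda (x) (x x))
       (lambda (x)
         (f (lambda (n) ((x x) n))))))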
or we can manually expand that first (lambda (x) (x x)) application to make the symmetry more obvious (not really surprising, because it started with a let whose purpose was to do a self-application):
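    (define (make-recursive f)
      ((lambda (x) (f (lambda (n) ((x x) n))))
       (lambda (x) (f (lambda (n) ((x x) n))))))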
And we finally got what we were looking for: a general way to define any recursive function without any magical define tricks. This also works for other recursive functions:
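    (define fact (make-recursive fact-step))
    (define fib
      (make-recursive
       (lambda (fib)
         (lambda (n)
           (if (<= n 1) n (+ (fib (- n 1)) (fib (- n 2))))))))
    (fib 8)  ; => 21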
A convenient tool that people often use on paper is to perform a kind of
a syntactic abstraction: “assume that whenever I write (twice foo) I
really meant to write (foo foo)”. This can often be done as plain
abstractions (that is, using functions), but in some cases — for
example, if we want to abstract over definitions — we just want such a
rewrite rule. (More on this towards the end of the course.) The
broken-scope language does provide such a tool — rewrite extends the language with a rewrite rule. Using this, and our make-recursive, we can make up a recursive definition form:
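    ;; (a sketch — the exact rewrite syntax and the define/rec
    ;;  name are assumptions here)
    (rewrite (define/rec (f x) E)
          => (define f (make-recursive (lambda (f) (lambda (x) E)))))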
In other words, we’ve created our own “magical definition” form. The above code can now be written in almost the same way it is written in plain Racket:
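    (define/rec (fact n)
      (if (zero? n) 1 (* n (fact (- n 1)))))
    (fact 5)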
Finally, note that make-recursive is limited to 1-argument functions only because of the protection from eager evaluation. In any case, it can be used in any way you want, for example,
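    (make-recursive (lambda (self) (lambda (x) self)))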
is a function that returns itself rather than calling itself. Using the rewrite rule, this would be:
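    (define/rec (foo x) foo)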
which is the same as:
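    (define (foo x) foo)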
in plain Racket.
The core of make-recursive.
As in Racket, being able to express recursive functions is a fundamental property of the language. It means that we can have loops in our language, and that’s the essence of making a language powerful enough to be TM-equivalent — able to express undecidable problems, where we don’t know whether there is an answer or not.
The core of what makes this possible is the expression that we have seen in our derivation:
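    ((lambda (x) (x x)) (lambda (x) (x x)))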
which reduces to itself, and therefore has no value: trying to evaluate it gets stuck in an infinite loop. (This expression is often called “Omega”.)
This is the key for creating a loop — we use it to make recursion possible. Looking at our final make-recursive definition, and ignoring for a moment the “protection” that we need against being stuck prematurely in an infinite loop:
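    (define (make-recursive f)
      ((lambda (x) (x x))
       (lambda (x) (f (x x)))))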
we can see that this is almost the same as the Omega expression — the only difference is that application of f. Indeed, this expression (the result of (make-recursive F) for some F) reduces in a similar way to Omega:
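    ((lambda (x) (x x)) (lambda (x) (F (x x))))
    → ((lambda (x) (F (x x))) (lambda (x) (F (x x))))
    → (F ((lambda (x) (F (x x))) (lambda (x) (F (x x)))))
    → (F (F ((lambda (x) (F (x x))) (lambda (x) (F (x x))))))
    → ...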
which means that the actual value of this expression is:
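    (F (F (F (F ...))))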
This definition would be sufficient if we had a lazy language, but to
get things working in a strict one we need to bring back the protection.
This makes things a little different — if we use (protect f) as a shorthand for the protection trick,
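    (protect f)  is a shorthand for  (lambda (x) (f x))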
then we have:
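    (define (make-recursive f)
      ((lambda (x) (x x))
       (lambda (x) (f (protect (x x))))))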
which makes the (make-recursive F) evaluation reduce to
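    (F (protect (F (protect (F (protect ...))))))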
and this is still the same result (as long as F is a single-argument function).
(Note that protect cannot be implemented as a plain function!)
Note: This explanation is similar to the one you can find in “The Little Schemer” called “(Y Y) Works!”, by Dan Friedman and Matthias Felleisen.
The explanation that we have now for how to derive the make-recursive
definition is fine — after all, we did manage to get it working. But
this explanation was done from a kind of an operational point of view:
we knew a certain trick that can make things work and we pushed things
around until we got it working like we wanted. Instead of doing this,
we can re-approach the problem from a more declarative point of view.
So, start again from the same broken code that we had (using the broken-scope language):
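    (define fact
      (lambda (n)
        (if (zero? n) 1 (* n (fact (- n 1))))))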
This is as broken as it was when we started: the occurrence of fact in the body of the function is free, which means that this code is meaningless. To avoid the compilation error that we get when we run this code, we can substitute anything for that fact — it’s even better to use a replacement that will lead to a runtime error:
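    (define fact
      (lambda (n)
        (if (zero? n) 1 (* n (777 (- n 1))))))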
This function will not work in a similar way to the original one — but there is one case where it does work: when the input value is 0 (since then we do not reach the bogus application). We note this by calling this function fact0:
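    (define fact0
      (lambda (n)
        (if (zero? n) 1 (* n (777 (- n 1))))))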
Now that we have this function defined, we can use it to write fact1, which is the factorial function for arguments of 0 or 1:
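    (define fact1
      (lambda (n)
        (if (zero? n) 1 (* n (fact0 (- n 1))))))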
And remember that this is actually just shorthand for:
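    (define fact1
      (lambda (n)
        (if (zero? n)
            1
            (* n ((lambda (n)
                    (if (zero? n) 1 (* n (777 (- n 1)))))
                  (- n 1))))))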
We can continue in this way and write fact2 that will work for n <= 2:
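    (define fact2
      (lambda (n)
        (if (zero? n) 1 (* n (fact1 (- n 1))))))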
or, in full form:
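    (define fact2
      (lambda (n)
        (if (zero? n)
            1
            (* n ((lambda (n)
                    (if (zero? n)
                        1
                        (* n ((lambda (n)
                                (if (zero? n)
                                    1
                                    (* n (777 (- n 1)))))
                              (- n 1)))))
                  (- n 1))))))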
If we continue this way, we will get the true factorial function, but the problem is that to handle any possible integer argument, it will have to be an infinite definition! Here is what it is supposed to look like:
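    (define fact-infinity
      (lambda (n)
        (if (zero? n)
            1
            (* n ((lambda (n)
                    (if (zero? n)
                        1
                        (* n ((lambda (n)
                                ...)  ; and so on, forever
                              (- n 1)))))
                  (- n 1))))))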
The true factorial function is fact-infinity, with an infinite size. So, we’re back at the original problem…
To help make things more concise, we can observe the repeated pattern in the above, and extract a function that abstracts this pattern. This function is the same as the fact-step that we have seen previously:
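    (define (fact-step fact)
      (lambda (n)
        (if (zero? n) 1 (* n (fact (- n 1))))))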
which is actually:
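    (define fact-step
      (lambda (fact)
        (lambda (n)
          (if (zero? n) 1 (* n (fact (- n 1)))))))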
Do this a little differently — rewrite fact0 as:
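    (define fact0
      ((lambda (mk) (mk 777))
       fact-step))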
Similarly, fact1 is written as:
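    (define fact1
      ((lambda (mk) (mk (mk 777)))
       fact-step))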
and so on, until the real factorial, which is still infinite at this stage:
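    (define fact-infinity
      ((lambda (mk) (mk (mk (mk ...))))  ; infinitely many mk's
       fact-step))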
Now, look at that (lambda (mk) ...) — it is an infinite expression, but for every actual application of the resulting factorial function we only need a finite number of mk applications. We can guess how many, and as soon as we hit an application of 777 we know that our guess was too small. So instead of 777, we can try to use the maker function to create and use the next level.
To make things more explicit, here is the expression that is our fact0, without the definition form:
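    ((lambda (mk) (mk 777))
     (lambda (fact)
       (lambda (n)
         (if (zero? n) 1 (* n (fact (- n 1)))))))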
This function has a very low guess — it works for 0, but with 1 it will run into the 777 application. At this point, we want to somehow invoke mk again to get the next level — and since 777 does get applied, we can just replace it with mk:
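    ((lambda (mk) (mk mk))
     (lambda (fact)
       (lambda (n)
         (if (zero? n) 1 (* n (fact (- n 1)))))))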
The resulting function works just the same for an input of 0 because it does not attempt a recursive call — but if we give it 1, then instead of running into the error of applying 777:
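    (* n (777 (- n 1)))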
we get to apply fact-step there:
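    (* n (fact-step (- n 1)))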
and this is still wrong, because fact-step expects a function as an input. To see what happens more clearly, write fact-step explicitly:
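    (* n ((lambda (fact)
            (lambda (n)
              (if (zero? n) 1 (* n (fact (- n 1))))))
          (- n 1)))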
The problem is in what we’re going to pass into fact-step — its fact argument will not be the factorial function, but the mk function constructor. Renaming the fact argument as mk will make this more obvious (but not change the meaning):
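    ((lambda (mk) (mk mk))
     (lambda (mk)
       (lambda (n)
         (if (zero? n) 1 (* n (mk (- n 1)))))))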
It should now be obvious that this application of mk will not work; instead, we need to apply it on some function and then apply the result on (- n 1). To get what we had before, we can use 777 as a bogus function:
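    ((lambda (mk) (mk mk))
     (lambda (mk)
       (lambda (n)
         (if (zero? n) 1 (* n ((mk 777) (- n 1)))))))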
This will allow one recursive call — so the definition works for both inputs of 0 and 1 — but not more. But that 777 is used as a maker function now, so instead, we can just use mk itself again:
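    ((lambda (mk) (mk mk))
     (lambda (mk)
       (lambda (n)
         (if (zero? n) 1 (* n ((mk mk) (- n 1)))))))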
And this is a working version of the real factorial function, so make it into a (non-magical) definition:
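    (define fact
      ((lambda (mk) (mk mk))
       (lambda (mk)
         (lambda (n)
           (if (zero? n) 1 (* n ((mk mk) (- n 1))))))))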
But we’re not done — we “broke” into the factorial code to insert that (mk mk) application — that’s why we dragged in the actual value of fact-step. We now need to fix this. The expression on that last line is close enough — it is (fact-step (mk mk)). So we can now try to rewrite our fact as:
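    (define fact
      ((lambda (mk) (mk mk))
       (lambda (mk) (fact-step (mk mk)))))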
… and it would fail in a familiar way! If it’s not familiar enough, just rename all those mks as xs:
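    (define fact
      ((lambda (x) (x x))
       (lambda (x) (fact-step (x x)))))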
We’ve run into the eagerness of our language again, as we did before.
The solution is the same — the (x x) is the factorial function, so protect it as we did before, and we have a working version:
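    (define fact
      ((lambda (x) (x x))
       (lambda (x) (fact-step (lambda (n) ((x x) n))))))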
The rest should not be surprising now… Abstract the recursive making bit in a new make-recursive function:
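    (define (make-recursive f)
      ((lambda (x) (x x))
       (lambda (x) (f (lambda (n) ((x x) n))))))
    (define fact (make-recursive fact-step))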
and now we can do the first reduction inside make-recursive and write the fact-step expression explicitly:
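    (define fact
      ((lambda (x)
         ((lambda (fact)
            (lambda (n)
              (if (zero? n) 1 (* n (fact (- n 1))))))
          (lambda (n) ((x x) n))))
       (lambda (x)
         ((lambda (fact)
            (lambda (n)
              (if (zero? n) 1 (* n (fact (- n 1))))))
          (lambda (n) ((x x) n))))))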
and this is the same code we had before.
Our make-recursive function is usually called the fixpoint operator or the Y combinator.
It looks really simple when using the lazy version (remember: our version is the eager one):
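    (define (Y f)
      ((lambda (x) (f (x x)))
       (lambda (x) (f (x x)))))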
Note that if we do allow a recursive definition for Y itself, then the definition can follow the definition that we’ve seen:
(define (Y f) (f (Y f)))
And this all comes from the loop generated by:
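    ((lambda (x) (x x)) (lambda (x) (x x)))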
This expression, which is also called Omega (the (lambda (x) (x x)) part by itself is usually called omega, and then (omega omega) is Omega), is also the idea behind many deep mathematical facts. As an example for what it does, follow the next rule:
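    I will say the next sentence twice:
    "I will say the next sentence twice"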
(Note the usage of colon for the first and quotes for the second — what is the equivalent of that in the lambda expression?)
By itself, this just gets you stuck in an infinite loop, as Omega does,
and the Y combinator adds F to that to get an infinite chain of applications — which is similar to:
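    (F (F (F (F ...))))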
Sidenote: see this SO question and my answer, which came from the PLQ implementation.
fact-step is a function that, given any limited factorial, will generate a factorial that is good for one more integer input. Start with 777, which is a factorial that is good for nothing (because it’s not a function), and you can get fact0 as
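    (define fact0 (fact-step 777))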
and that’s a good factorial function only for an input of 0. Use that with fact-step again, and you get
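    (define fact1 (fact-step (fact-step 777)))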
which is the factorial function when you only look at input values of 0 or 1. In a similar way,
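    (fact-step (fact-step (fact-step 777)))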
is good for 0…2 — and we can continue as much as we want, except that we need to have an infinite number of applications — in the general case, we have:
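    (fact-step (fact-step (fact-step ... (fact-step 777) ...)))
    ;; n+1 applications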
which is good for 0…n. The real factorial would be the result of running fact-step on itself infinitely — it is fact-infinity.
In other words (here fact is the real factorial):
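    fact = (fact-step (fact-step (fact-step (fact-step ...))))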
but note that since this is really infinity, then
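    (fact-step (fact-step (fact-step ...))) = fact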
so we get an equation:
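    fact = (fact-step fact)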
and a solution for this is going to be the real factorial. The solution is the fixed-point of the fact-step function, in the same sense that 0 is the fixed point of the sin function because
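    (sin 0) = 0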
And the Y combinator does just that — it has this property:
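    (make-recursive f) = (f (make-recursive f))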
or, using the more common name:
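    (Y f) = (f (Y f))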
This property encapsulates the real magical power of Y. You can see how it works: since (Y f) = (f (Y f)), we can add an f application to both sides, giving us (f (Y f)) = (f (f (Y f))), so we get:
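    (Y f) = (f (Y f)) = (f (f (Y f))) = (f (f (f (Y f)))) = ...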
and we can conclude that
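    (Y f) = (f (f (f (f ...))))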
Here’s another explanation of how the Y combinator works. Remember that our fact-step function was actually a function that generates a factorial function, based on some input which is supposed to be the factorial function:
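    (define fact-step
      (lambda (fact)
        (lambda (n)
          (if (zero? n) 1 (* n (fact (- n 1)))))))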
As we’ve seen, you can apply this function on a version of factorial that is good for inputs up to some n, and the result will be a factorial that is good for those values up to n+1. The question is: what is the fixpoint of fact-step? And the answer is that if it maps the factₙ factorial to factₙ₊₁, then the input will be equal to the output on the infinitieth fact, which is the actual factorial. Since Y is a fixpoint combinator, it gives us exactly that answer:
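    (Y fact-step) = fact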
Typing the Y combinator is a tricky issue. For example, in standard ML you must write a new type definition to do this:
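    (* a sketch of the SML version *)
    datatype 'a t = T of ('a t -> 'a)
    val y = fn f => (fn (T x) => (f (fn a => x (T x) a)))
                    (T (fn (T x) => (f (fn a => x (T x) a))))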
Can you find a pattern in the places where T is used? — Roughly speaking, that type definition is

    ;; `t' is the type name, `T' is the constructor (aka the variant)
    (define-type (RecTypeOf t)
      [T ((RecTypeOf t) -> t)])

First note that the two fn a => ... parts are the same as our protection, so ignoring that we get:

    val y = fn f => (fn (T x) => (f (x (T x))))
                    (T (fn (T x) => (f (x (T x)))))

If you now replace T with Quote, things make more sense:

    val y = fn f => (fn (Quote x) => (f (x (Quote x))))
                    (Quote (fn (Quote x) => (f (x (Quote x)))))

and with our syntax, this would be:

    (define (Y f)
      ((lambda (qx)
         (cases qx
           [(Quote x) (f (x qx))]))
       (Quote
        (lambda (qx)
          (cases qx
            [(Quote x) (f (x qx))])))))

It’s not really quotation — but the analogy should help: it uses Quote to distinguish functions as values that are applied (the xs) from functions that are passed as arguments.
In OCaml, this looks a little different:
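    (* a sketch, mirroring the ML version *)
    type 'a t = T of ('a t -> 'a)
    let y f = (fun (T x) -> f (fun a -> x (T x) a))
              (T (fun (T x) -> f (fun a -> x (T x) a)))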
but OCaml also has a -rectypes command-line argument, which will make it infer the type by itself:
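    (* a sketch — typechecks only under -rectypes *)
    let y f = (fun x -> f (fun a -> x x a))
              (fun x -> f (fun a -> x x a))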
The translation of this to #lang pl is a little verbose because we don’t have auto-currying, and because we need to declare input types to functions, but it’s essentially a direct translation of the above:
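    ;; (a sketch, reusing the RecTypeOf/T definition from above)
    (define-type (RecTypeOf t)
      [T ((RecTypeOf t) -> t)])
    (: y : (All (A B) ((A -> B) -> (A -> B)) -> (A -> B)))
    (define (y f)
      ((lambda ([qx : (RecTypeOf (A -> B))])
         (cases qx
           [(T x) (f (lambda ([a : A]) ((x qx) a)))]))
       (T (lambda ([qx : (RecTypeOf (A -> B))])
            (cases qx
              [(T x) (f (lambda ([a : A]) ((x qx) a)))])))))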
It is also possible to write this expression in “plain” Typed Racket,
without a user-defined type — and we need to start with a proper type
definition. First of all, the type of Y should be straightforward: it
is a fixpoint operation, so it takes a T -> T function and produces its fixpoint. The fixpoint itself is some T (such that applying the function on it results in itself). So this gives us:
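    (: Y : (All (T) (T -> T) -> T))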
However, in our case make-recursive computes a functional fixpoint, for unary S -> T functions, so we should narrow down the type:
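    (: make-recursive : (All (S T) ((S -> T) -> (S -> T)) -> (S -> T)))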
Now, in the body of make-recursive we need to add a type for the x argument, which is behaving in a weird way: it is used both as a function and as its own argument. (Remember — I will say the next sentence twice: “I will say the next sentence twice”.) We need a recursive type definition helper (not a new type) for that:
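    (define-type (Tau S T) = (Rec this (this -> (S -> T))))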
This type is tailored for our use of x: it is a type for a function that will consume itself (hence the Rec) and spit out the value that the f argument consumes — an S -> T function.
The resulting full version of the code:
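    ;; (a sketch of the full code)
    (: make-recursive : (All (S T) ((S -> T) -> (S -> T)) -> (S -> T)))
    (define-type (Tau S T) = (Rec this (this -> (S -> T))))
    (define (make-recursive f)
      ((lambda ([x : (Tau S T)]) (f (lambda (n) ((x x) n))))
       (lambda ([x : (Tau S T)]) (f (lambda (n) ((x x) n))))))
    (define fact
      (make-recursive
       (lambda ([fact : (Number -> Number)])
         (lambda ([n : Number])
           (if (zero? n) 1 (* n (fact (- n 1))))))))
    (fact 5)  ; => 120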