Getting our thing closer to a compiler is done in a similar way — we push the `(lambda (env) ...)` inside the various cases. (Note that `compile*` depends on the `env` argument, so it also needs to move inside — this is done for all cases that use it, and will eventually go away.) We actually need to use `(lambda ([env : ENV]) ...)` though, to avoid upsetting the type checker:
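As a rough illustration of this shape (an untyped sketch with a made-up mini-AST of `Num`, `Id`, and `Add` structures, not the actual TOY types), pushing the `(lambda (env) ...)` into each case looks something like this:

```racket
#lang racket

;; A hypothetical mini-AST, standing in for the course's TOY types
(struct Num (n)   #:transparent)
(struct Id  (name) #:transparent)
(struct Add (l r) #:transparent)

;; An environment is an association list of symbol -> value
(define (lookup name env)
  (cond [(assq name env) => cdr]
        [else (error 'lookup "no binding for ~s" name)]))

;; compile : AST -> (ENV -> value)
;; The (lambda (env) ...) is pushed inside each case, so the dispatch
;; on the AST structure happens once, at "compile time"
(define (compile expr)
  (cond [(Num? expr) (let ([n (Num-n expr)])
                       (lambda (env) n))]
        [(Id? expr)  (let ([name (Id-name expr)])
                       (lambda (env) (lookup name env)))]
        [(Add? expr) (lambda (env)
                       ;; still calls compile at run time; this is
                       ;; the problem discussed below
                       (+ ((compile (Add-l expr)) env)
                          ((compile (Add-r expr)) env)))]))

((compile (Add (Num 1) (Id 'x))) '((x . 41)))   ; => 42
```

Note that the `Add` case still calls `compile` inside the returned closure; that is exactly the problem tackled next.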
and with this we shifted a bit of actual work to compile time — the code that checks what structure we have, and extracts its different slots. But this is still not good enough — it’s only the first top-level case that is moved to compile-time — recursive calls to `compile` are still there in the resulting closures. This can be seen by the fact that we have those calls to `compile` in the Racket closures that are the results of our compiler, which, as discussed above, means that it’s not an actual compiler yet. For example, consider the `Bind` case:
At compile-time we identify and deconstruct the `Bind` structure, then create the runtime closure that will access these parts when the code runs. But this closure will itself call `compile` on `bound-body` and on each of the expressions. Both of these calls can be done at compile time, since they only need the expressions — they don’t depend on the environment. Note that `compile*` turns to `run` here, since all it does is run a compiled expression on the current environment.
We can move it back up, out of the resulting functions, by making it a function that consumes an environment and returns a “caller” function:
Once this is done, we have a bunch of work that can happen at compile time: we pre-scan the main “bind spine” of the code.
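The general pattern can be sketched on a hypothetical `Add` case (not the course's actual `Bind` code): the recursive `compile` calls move out of the returned closure, so they happen exactly once, at compile time.

```racket
#lang racket

;; Hypothetical sketch of the fixed pattern: the recursive compile
;; calls happen before the runtime closure is built
(struct Num (n)   #:transparent)
(struct Add (l r) #:transparent)

(define (compile expr)
  (cond [(Num? expr) (let ([n (Num-n expr)])
                       (lambda (env) n))]
        [(Add? expr) (let ([cl (compile (Add-l expr))]   ; compile time
                           [cr (compile (Add-r expr))])  ; compile time
                       (lambda (env) (+ (cl env) (cr env))))]))

((compile (Add (Num 1) (Add (Num 2) (Num 3)))) '())   ; => 6
```

Now both the dispatch and the recursive compilation happen before the resulting closure is ever called.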
We can deal in a similar way with other occurrences of `compile` calls in compiled code. The two branches that need to be fixed are:
In the `If` branch, there is not much to do. After we make it pre-compile the `cond-expr`, we also need to make it pre-compile both the `then-expr` and the `else-expr`. This might seem like doing more work (since before the change only one of the two would get compiled), but since this is compile-time work, it’s not as important. Also, `if` expressions are often evaluated many times (being part of a loop, for example), so overall we still win.
The `Call` branch is a little trickier: the problem here is that the expressions that are compiled are coming from the closure that is being applied. The solution for this is obvious: we need to change the closure type so that it closes over compiled expressions instead of over plain ones. This makes sense because closures are run-time values, so they need to close over the compiled expressions, since this is what we use as “code” at run-time.
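Both fixes can be sketched together on a hypothetical mini-language (the `Lit`/`If`/`Fun`/`Call` structures and the `ClosureV` value are made up for illustration): the `If` case pre-compiles all three subexpressions, and closure values close over compiled code rather than over ASTs.

```racket
#lang racket

;; Hypothetical sketch: the If branches and the body stored inside a
;; closure value are all pre-compiled, so no AST survives to run time
(struct Lit  (v)       #:transparent)
(struct Id   (name)    #:transparent)
(struct If   (c t e)   #:transparent)
(struct Fun  (arg body) #:transparent)
(struct Call (fun arg) #:transparent)

;; run-time closure values close over *compiled* code
(struct ClosureV (arg compiled-body env) #:transparent)

(define (lookup name env)
  (cond [(assq name env) => cdr]
        [else (error 'lookup "no binding for ~s" name)]))

(define (compile expr)
  (match expr
    [(Lit v)   (lambda (env) v)]
    [(Id name) (lambda (env) (lookup name env))]
    [(If c t e)
     ;; all three subexpressions are compiled up front
     (let ([cc (compile c)] [ct (compile t)] [ce (compile e)])
       (lambda (env) (if (cc env) (ct env) (ce env))))]
    [(Fun arg body)
     (let ([cbody (compile body)])   ; body compiled once, here
       (lambda (env) (ClosureV arg cbody env)))]
    [(Call fun arg)
     (let ([cfun (compile fun)] [carg (compile arg)])
       (lambda (env)
         (match (cfun env)
           [(ClosureV a cb e) (cb (cons (cons a (carg env)) e))])))]))

((compile (Call (Fun 'x (If (Id 'x) (Lit 'yes) (Lit 'no)))
                (Lit #t)))
 '())
;; => 'yes
```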
Again, the goal is to have no `compile` calls that happen at runtime: they should all happen before that. This would allow us, for example, to obliterate the compiler once it has done its work, similar to how you don’t need GCC to run a C application. Yet another way to look at this is that we shouldn’t look at the AST at runtime — again, the analogy to GCC is that the AST is a data structure that the compiler uses, and it does not exist at runtime. Any runtime reference to the TOY AST is, therefore, as bad as any runtime reference to `compile`.
When we’re done with this process we’ll have something that is an actual compiler: translating TOY programs into Racket closures. To see how this is an actual compiler, consider the fact that Racket uses a JIT to translate bytecode into machine code when it’s running functions. This means that the compiled versions of our TOY programs are, in fact, translated all the way down to machine code.
Yet another way to see this is to change the compiler code so instead of producing a Racket closure it spits out the Racket code that makes up these closures when evaluated. For example, change
into
so we get a string that is a Racket program. But since we’re using a Lisp dialect, it’s generally better to use S-expressions instead:
(Later in the course we’ll talk about these “`”s and “,”s. For now, it’s enough to know that “`” is kind of like a quote, and “,” is an unquote.)
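A sketch of what this looks like, again on a made-up `Num`/`Add` mini-AST: the compiler returns the S-expression of the closure code instead of the closure itself.

```racket
#lang racket

;; Hypothetical sketch: instead of producing a closure, produce the
;; S-expression for the Racket code of that closure
(struct Num (n)   #:transparent)
(struct Add (l r) #:transparent)

;; compile->sexpr : AST -> S-expression
(define (compile->sexpr expr)
  (cond [(Num? expr) `(lambda (env) ,(Num-n expr))]
        [(Add? expr) `(lambda (env)
                        (+ (,(compile->sexpr (Add-l expr)) env)
                           (,(compile->sexpr (Add-r expr)) env)))]))

(compile->sexpr (Add (Num 1) (Num 2)))
;; => (lambda (env) (+ ((lambda (env) 1) env) ((lambda (env) 2) env)))
```

Evaluating the resulting S-expression (or writing it out to a file) would give back the closure-based version.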
PLAI §7 (done with Haskell)
For this part, we will use a new language, Lazy Racket.
As the name suggests, this is a version of the normal (untyped) Racket language that is lazy.
First of all, let’s verify that this is indeed a lazy language:
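For instance (using a hypothetical `const-1` function that ignores its argument; `!` is Lazy Racket's explicit force):

```racket
#lang lazy

;; const-1 is a made-up function that ignores its argument; in Lazy
;; Racket the (error ...) argument is never evaluated
(define (const-1 x) 1)

(! (const-1 (error "boom")))   ; => 1, no error is raised
```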
That went without a problem — the argument expression was indeed not evaluated. In this language, you can treat all expressions as future promises to evaluate. There are certain points where such promises are actually forced; all of these stem from some need to print a resulting value — in our case, it’s the REPL that prints such values:
The expression by itself only generates a promise, but when we want to print it, this promise is forced to evaluate — this forces the addition, which forces its arguments (plain values rather than computation promises), and at this stage we get an error. (If we never want to see any results, then the language will never do anything at all.) So a promise is forced either when a value printout is needed, or if it is needed to recursively compute a value to print:
Note that the error was raised by the inner expression: the outer expression uses `*`, which forces its arguments, and `+` likewise requires actual values, not promises.
Another example, which is now obvious, is that we can now define an `if` function:
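For example, a hypothetical `my-if`, written as a plain function; because the arguments arrive as promises, the branch that is not taken is never forced:

```racket
#lang lazy

;; my-if is an ordinary function; laziness means the untaken branch
;; stays an unforced promise
(define (my-if c t e)
  (if c t e))

(! (my-if (< 1 2) 'yes (error "never evaluated")))   ; => yes
```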
Actually, in this language `if`, `and`, and `or` are all function values instead of special forms:
(By now, you should know that these have no value in Racket — using them like this in plain Racket will lead to syntax errors.) There are some primitives that do not force their arguments. Constructors fall in this category, for example `cons` and `list`:
Nothing — the definition simply worked, but that’s expected, since nothing is printed. If we try to inspect this value, we can get some of its parts, provided we do not force the bogus one:
The same holds for cons:
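For instance (a sketch; the error message text is made up):

```racket
#lang lazy

;; cons does not force its arguments: the bogus tail is fine as long
;; as we never ask for it
(define p (cons 1 (error "bad tail")))

(! (car p))   ; => 1, the tail is never forced
```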
Now if this is the case, then how about this:
Everything is fine, as expected — but what is the value of `ones` now?
Clearly, it is a list that has 1 as its first element:
But what do we have in the tail of this list? We have `ones`, which we already know is a list that has 1 in its first place — so following Racket’s usual rules, it means that the second element of `ones` is, again, 1. If we continue this, we can see that `ones` is, in fact, an infinite list of 1s:
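A sketch of this definition, peeking at the first few elements:

```racket
#lang lazy

;; The classic self-referential definition: an infinite list of 1s
(define ones (cons 1 ones))

(! (car ones))               ; => 1
(! (car (cdr ones)))         ; => 1
(! (car (cdr (cdr ones))))   ; => 1
```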
In this sense, the way `define` behaves is that it defines a true equation: if `ones` is defined as `(cons 1 ones)`, then the real value does satisfy
which means that the value is the fixpoint of the defined expression.
We can use `append` in a similar way:
This looks like it shares a common theme with the discussion of implementing recursive environments — it actually demonstrates that in this language, `letrec` can be used for simple values too. First of all, a side note — here is an expression that indicated a bug in our substituting evaluator:
When our evaluator returned `1` for this, we noticed that this was a bug: it does not obey the lexical scoping rules. As seen above, Lazy Racket is correctly using lexical scope. Now we can go back to the use of `letrec` — what do we get by this definition:
we get an error about `xs` being undefined.
`xs` is unbound because of the usual scope that `let` uses. How can we make this work? We simply use `letrec`:
As expected, trying to print an infinite list will cause an infinite loop, which DrRacket catches and prints in that weird way:
How would we inspect an infinite list? We write a function that returns part of it:
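For example, a `take-n` helper (the name is chosen here to avoid clashing with any built-in `take`):

```racket
#lang lazy

(define ones (cons 1 ones))

;; take-n : N (Listof A) -> (Listof A)
;; returns the first n elements, so we can inspect an infinite list;
;; !! deep-forces the result so it prints as a plain list
(define (take-n n l)
  (if (zero? n)
      '()
      (cons (car l) (take-n (sub1 n) (cdr l)))))

(!! (take-n 5 ones))   ; => (1 1 1 1 1)
```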
Dealing with infinite lists can lead to lots of interesting things, for example:
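One classic example, sketched with the usual self-referential Fibonacci list (the `take-n` helper is a hypothetical inspection function defined inline):

```racket
#lang lazy

;; Each element past the second is the sum of the list and the list
;; shifted by one; laziness makes the self-reference well-defined
(define fibs
  (cons 1 (cons 1 (map + fibs (cdr fibs)))))

;; hypothetical helper to inspect a prefix of an infinite list
(define (take-n n l)
  (if (zero? n)
      '()
      (cons (car l) (take-n (sub1 n) (cdr l)))))

(!! (take-n 8 fibs))   ; => (1 1 2 3 5 8 13 21)
```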
To see how it works, consider what you know about `fibs[n]`, which will be our notation for the nth element of `fibs` (starting from 1):
and for all n>2:
so it follows the exact definition of Fibonacci numbers.
Note that the list examples demonstrate that laziness applies to nested values (actually, nested computations) too: a value that is not needed is not computed, even if it is contained in a value that is needed. For example, in:
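For instance, something along these lines (a made-up function `f` whose condition is a pair containing a bogus computation):

```racket
#lang lazy

;; The condition only needs to be a non-#f value; the (+ 1 x) nested
;; inside the pair is never computed
(define (f x)
  (if (cons (+ 1 x) 2)   ; any non-#f value counts as true
      'yes
      'no))

(! (f "not a number"))   ; => yes, no error
```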
the `if` needs to know only whether its first argument (note: it is an argument, since this `if` is a function) is `#f` or not. Once it is determined that it is a pair (a `cons` cell), there is no need to actually look at the values inside the pair, and therefore `(+ 1 x)` (and more specifically, `x`) is never evaluated and we see no error.