There are a few issues that we need to be aware of when we’re dealing with a lazy language. First of all, remember that our previous attempt at lazy evaluation has made
evaluate to 1, which does not follow the rules of lexical scope. This is not a problem with lazy evaluation, but rather a problem with our naive implementation. We will shortly see a way to resolve this problem. In the meanwhile, remember that when we try the same in Lazy Racket we do get the expected error:
A second issue is a subtle point that you might have noticed when we played with Lazy Racket: for some of the list values we have seen a “.” printed. This is part of the usual way Racket displays an improper list — any list that does not terminate with a null value. For example, in plain Racket:
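A minimal illustration (the original example is not shown; these expressions are an assumption):

```racket
;; cons with a non-list second argument creates an improper list,
;; which Racket prints with a dot before the final value:
(cons 1 2)           ; prints as '(1 . 2)
(cons 1 (cons 2 3))  ; prints as '(1 2 . 3)
(list? (cons 1 2))   ; => #f — not a proper list
```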
In the dialect that we’re using in this course, this is not possible. The secret is that the cons that we use first checks that its second argument is a proper list, and raises an error if it is not. So how come Lazy Racket’s cons is not doing the same thing? The problem is that to know that something is a proper list, we would have to force it, which would make it not behave like a constructor.
As a side note, we can achieve some of this protection if we don’t insist on immediately checking the second argument completely, and instead we do the check when needed — lazily:
(define (safe-cons x l)
  (cons x (if (pair? l) l (error "poof"))))
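In plain Racket we can sketch the same delayed check with an explicit promise (the safe-cons* helper below is an illustration, not Lazy Racket's actual cons):

```racket
(require racket/promise)

;; Explicit-promise sketch of the lazy check: the tail is only
;; validated when someone forces it.
(define (safe-cons* x l-promise)
  (cons x (delay (let ([l (force l-promise)])
                   (if (pair? l) l (error "poof"))))))

(define good (safe-cons* 1 (delay (list 2 3))))
(car good)         ; => 1 — the check on the tail has not run yet
(force (cdr good)) ; => '(2 3)

(define bad (safe-cons* 1 (delay 2)))
(car bad)          ; => 1 — still fine: the bad tail is an unforced promise
;; (force (cdr bad)) ; raises "poof" only when the tail is demanded
```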
Finally, there are two consequences of using a lazy language that make it more difficult to debug (or at least take some time to get used to). First of all, control tends to flow in surprising ways. For example, enter the following into DrRacket, and run it in the normal language level for the course:
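The program itself is not shown here; a sketch consistent with the discussion that follows (foo2 tail-calls foo3, the error is a division, and the top-level expression takes the first of a list — the function bodies are assumptions) might be:

```racket
(require racket/list)

(define (foo3 x) (/ x 0))   ; the eventual error: division by zero
(define (foo2 x) (foo3 x))  ; tail call — foo2's frame is replaced by foo3's
(define (foo1 x) (list (foo2 x)))
(define (foo0 x) (foo1 x))  ; foo0 simply returns foo1's value

;; (first (foo0 3)) ; raises a division-by-zero error
```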
In the normal language level, we get an error, and red arrows that show us where in the computation the error was raised. The arrows are all expected, except that foo2 is not in the path — why is that? Remember the discussion about tail calls and how they are important in Racket, since they are the only tool to generate loops? This is what we’re seeing here: foo2 calls foo3 in a tail position, so there is no need to keep the foo2 context anymore — it is simply replaced by foo3. (Incidentally, there is also no arrow that goes through foo1: Racket does some smart inlining, and it figures out that foo0 and foo1 are simply returning the same value, so it skips foo1.)
Now switch to Lazy Racket and re-run — you’ll see no arrows at all. What’s the problem? The call to foo0 creates a promise that is forced in the top-level expression, which simply returns the first of the list that foo1 created — and all of that can be done without forcing the foo2 call. Going this way, the computation finally runs into an error after the calls to foo0, foo1, and foo2 are done — so we get the seemingly out-of-context error.
To follow what’s happening here, we need to follow how promises are forced: when we have code like
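The code itself is not shown; the flow can be sketched in plain Racket with an explicit promise (the body of foo is an assumption):

```racket
(require racket/promise)

;; foo returns its "result" as an explicit promise instead of a value:
(define (foo x) (delay (/ x 0)))

(define p (foo 3)) ; the call itself is a strict point, and it succeeds
(promise? p)       ; => #t — the division has not been computed
;; (force p)       ; only now would the division run and raise the error
```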
then the foo call is a strict point, since we need an actual value to display on the REPL. Since it’s in a strict position, we do the call, but once we’re in the function there is no need to compute the division result — so it is returned as a lazy promise value back to the toplevel. It is only then that we continue the process of getting an actual value, which leads to trying to compute the division and getting the error.
Finally, there are also potential problems when you’re not careful about memory use. A common technique when using a lazy language is to generate an infinite list and pull out its Nth element. For example, to compute the Nth Fibonacci number, we’ve seen how we can do this:
and we can also do this (reminder: letrec is the same as an internal definition):

but the problem here is that when list-ref is making its way down the list, it might still hold a reference to fibs, which means that as the list is forced, all intermediate values are held in memory. In the first of these two, this is guaranteed to happen, since we have a binding that points at the head of the fibs list. With the second form things can be confusing: it might be that our language implementation is smart enough to see that fibs is not really needed anymore and release the offending reference. If it isn’t, then we’d have to do something like
to eliminate it. But even if the implementation does know that there is no need for that reference, there are other tricky situations that are hard to avoid.
Side note: Racket didn’t use to do this optimization, but now it does, and the lazy language helped in clarifying more cases where references should be released. To see that, consider these two variants:
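The two variants are not shown here; their shapes can be sketched in plain Racket with racket/stream (an assumption about the exact code, using the nat1/nat2/nats names from the discussion below):

```racket
(require racket/stream)

;; an infinite stream of naturals starting at n
(define (nats-from n) (stream-cons n (nats-from (add1 n))))

(define (nat1 n)                ; nothing refers to the stream's head
  (stream-ref (nats-from 0) n)) ; after the ref, so it can be collected

(define (nat2 n)                ; uses nats again after the ref, which
  (define nats (nats-from 0))   ; keeps the whole forced prefix reachable
  (define x (stream-ref nats n))
  (and (stream-first nats) x))
```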
If we try to use them with a big input:
then nat1 would work fine, whereas nat2 will likely run into DrRacket’s memory limit, and the computation will be terminated. The problem is that nat2 uses the nats value after the list-ref call, which keeps a reference to the head of the list, preventing it from being garbage-collected while list-ref is cdr-ing down the list and making more cons cells materialize.
It’s still possible to show the extra information though – just save it:
It looks like it’s spending a redundant runtime cycle in the extra computation, but it’s a lazy language so this is not a problem.
There is a very simple and elegant principle in shell programming — we get a single data type, a character stream, and many small functions, each doing a single simple job. With these small building blocks, we can construct pipelines that achieve more complex tasks — for example, a sorted frequency table of the lines in a file:
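The pipeline itself is not shown here; the conventional way to do it (the file name and exact flags are assumptions) is:

```shell
# a small sample file (hypothetical name and contents)
printf 'a\nb\na\na\nb\nc\n' > file.txt

# sorted frequency table of its lines, most frequent first:
# sort groups identical lines, uniq -c collapses each group to a count,
# and the final sort -rn orders by descending count
sort file.txt | uniq -c | sort -rn
```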
This is very much like a programming language — we get small blocks, and build stuff out of them. Of course there are swiss army knives like awk that try to do a whole bunch of stuff (the same attitude that brought Perl to the world…), and even these respect the “stream” data type. For example, a simple { print $1 } statement will work over all lines, one by one, making it a program over an infinite input stream, which is what happens in reality in something like:
But there is something else in shell programming that makes it so effective: it implements a sort of lazy evaluation. For example, compare this:
to:
Each element in the pipe is doing its own small job, and it is always doing just enough to feed its output. Each basic block is designed to work even on infinite inputs! (Even sort works on unlimited inputs…) (Soon we will see a stronger connection with lazy evaluation.)
Side note: (Alan Perlis) “It is better to have 100 functions operate on one data structure than 10 functions on 10 data structures”… But the uniformity comes at a price: the biggest problem shells have is their lack of a recursive structure, contaminating the world with way too many hacked-up solutions. More than that, it is extremely inefficient and usually leads to data being re-parsed over and over and over — each small Unix command needs to always output stuff that is human-readable, but the next command in the pipe will need to re-parse that, eg, rereading decimal numbers. If you look at pipelines as composing functions, then a pipe of numeric commands translates to something like:

itoa(baz(atoi(itoa(bar(atoi(itoa(foo(atoi(inp)))))))))

and it is impossible to get rid of the redundant atoi(itoa(…))s.
We already know that when we use lazy evaluation, we are guaranteed to have more robust programs. For example, a function like:
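The function is not shown above; presumably it is something like this my-if (the name is an assumption):

```racket
;; A function version of if — useless in eager Racket, because both
;; branches are evaluated before my-if is even entered:
(define (my-if c t e)
  (cond [c t] [else e]))

(my-if #t 1 2) ; => 1 — works here only because both branches terminate
```

In a lazy language the branches are passed as unevaluated promises, so only the chosen one is ever computed.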
is completely useless in Racket because all functions are eager, but in a lazy language, it would behave exactly like the real if. Note that we still need some primitive conditional, but this primitive can be a function (and it is, in Lazy Racket).
But we get more than that. If we have a lazy language, then computations are pushed around as if they were values (computations, because these are expressions that are yet to be evaluated). In fact, there is no distinction between computations and values, it just happens that some values contain “computational promises”, things that will do something in the future.
To see how this happens, we write a simple program to compute the (infinite) list of prime numbers using the sieve of Eratosthenes. To do this, we begin by defining the list of all natural numbers:
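The Lazy Racket definition is not shown here; a plain-Racket analogue using racket/stream (an assumption, not the course's exact code) is:

```racket
(require racket/stream)

;; all natural numbers: 0 followed by the successors of nats itself
(define nats (stream-cons 0 (stream-map add1 nats)))

(stream-ref nats 5) ; => 5
```

The self-reference works because stream-cons delays its rest, so nats is already bound by the time the tail is forced.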
And now define a sift function: it receives an integer n and an infinite list of integers l, and returns a list without the numbers that can be divided by n. This is simple to write using filter:
and it requires a definition for divides? — we use Racket’s modulo for this:
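Neither definition is shown above; plain-Racket sketches using racket/stream (an assumption about the exact code) are:

```racket
(require racket/stream)

;; divides?: does n divide m evenly?
(define (divides? n m)
  (zero? (modulo m n)))

;; sift: drop from the (possibly infinite) stream l everything that n divides
(define (sift n l)
  (stream-filter (lambda (x) (not (divides? n x))) l))

;; first five odd numbers, by sifting the multiples of 2 out of 1, 2, 3, …
(for/list ([x (sift 2 (in-naturals 1))] [i (in-range 5)]) x) ; => '(1 3 5 7 9)
```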
Now, a sieve is a function that consumes a list that begins with a prime number, and returns the prime numbers from this list. To do this, it returns a list that has the same first number, and for its tail it sifts out the numbers that are divisible by the first from the original list’s tail, and calls itself recursively on the result:
Finally, the list of prime numbers is the result of applying sieve on the list of numbers from 2. The whole program is now:
To see how this runs, we trace modulo to see which tests are being performed. The effect of this is that each time divides? is actually required to return a value, we will see a line with its inputs and its output. This output looks quite tricky — things are computed only on a “need to know” basis, which means that debugging lazy programs can be difficult: things happen only when they are needed, and that takes time to get used to. However, note that the program performs exactly the same tests that an eager-language implementation of the sieve of Eratosthenes would — and the advantage is that we don’t need to decide in advance how many values we want to compute: each value is computed when you ask to see the corresponding result. Implementing this behavior in an eager language takes far more than a simple program, yet with lazy evaluation we get it without any such complex code.
Note that if we trace divides? we see results that are some promise struct — these are unevaluated expressions, and they point at the fact that when divides? is used, it doesn’t really force its arguments — that happens later, when these results are forced.
The analogy with shell programming using pipes should be clear now — for example, we have seen this:

The last head -5 means that no computation is done on parts of the original file that are not needed. It is similar to a (take 5 l) expression in Lazy Racket.
Using infinite lists is similar to using channels — a tool for synchronizing threads (see Rob Pike’s talk) — and to generators (as they exist in Python). Here are examples of both; note how similar they are to each other, and how similar both are to the above definition of primes. (But note that there is an important difference — can you see it? It has to do with whether a stream is reusable or not.)
First, the threads & channels version:
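The version from the notes is not shown; a sketch in plain Racket (names and structure are assumptions) builds the sieve as a chain of filtering threads connected by synchronous channels:

```racket
;; a thread that emits n, n+1, n+2, … on a fresh channel
(define (numbers-from n)
  (define ch (make-channel))
  (thread (lambda ()
            (let loop ([i n]) (channel-put ch i) (loop (add1 i)))))
  ch)

;; a thread that passes along only the numbers not divisible by p
(define (sift-channel p in)
  (define out (make-channel))
  (thread (lambda ()
            (let loop ()
              (define x (channel-get in))
              (unless (zero? (modulo x p)) (channel-put out x))
              (loop))))
  out)

;; repeatedly: read a prime, emit it, and wrap the chain with a new filter
(define (make-primes-channel)
  (define out (make-channel))
  (thread (lambda ()
            (let loop ([ch (numbers-from 2)])
              (define p (channel-get ch))
              (channel-put out p)
              (loop (sift-channel p ch)))))
  out)

(define primes-ch (make-primes-channel))
(define first-five (for/list ([i (in-range 5)]) (channel-get primes-ch)))
first-five ; => '(2 3 5 7 11)
```

Since the channels are synchronous, each thread does just enough work to feed its consumer — the same "pull" behavior as the lazy list.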
And here is the generator version:
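Again the original is not shown; a sketch using racket/generator (names are assumptions) mirrors the same structure, with each filter wrapping the generator below it:

```racket
(require racket/generator)

;; a generator yielding n, n+1, n+2, …
(define (integers-from n)
  (generator () (let loop ([i n]) (yield i) (loop (add1 i)))))

;; wrap generator g, skipping the multiples of p
(define (sift-gen p g)
  (generator ()
    (let loop ()
      (define x (g))
      (unless (zero? (modulo x p)) (yield x))
      (loop))))

;; repeatedly: pull a prime, yield it, and wrap with a new filter
(define primes-gen
  (generator ()
    (let loop ([g (integers-from 2)])
      (define p (g))
      (yield p)
      (loop (sift-gen p g)))))

(define first-five (for/list ([i (in-range 5)]) (primes-gen)))
first-five ; => '(2 3 5 7 11)
```

Note that primes-gen is stateful: each call advances it, so the stream is consumed rather than reusable — the difference hinted at above.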
Finally, note that when we require different parts of primes, the same calls are not repeated. This indicates that our language implements “call by need” rather than “call by name”: once an expression is forced, its value is remembered, so subsequent uses of this value do not require further computations.
Using “call by name” means that we actually pass expressions around rather than their values, which can lead to confusing code. An old programming language that used this strategy is Algol. A confusing example that demonstrates this evaluation strategy is:
x and i are arguments that are passed by name, which means that they can use the same memory location. This is called aliasing, a problem that usually comes up when pointers are involved (eg, pointers in C and reference arguments in C++). The code, BTW, is known as “Jensen’s device”.
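The Algol code is not shown here; the idea can be sketched in Racket (an illustration, not the original), simulating the by-name x with a thunk and the shared i with a box, so that every use of "x" re-reads the current i:

```racket
;; Jensen's-device sketch: sum re-evaluates x-thunk once per iteration,
;; after updating the shared index i-box — so x acts like an expression
;; over i, not a fixed value.
(define (sum i-box lo hi x-thunk)
  (let loop ([k lo] [s 0])
    (if (> k hi)
        s
        (begin (set-box! i-box k)
               (loop (add1 k) (+ s (x-thunk)))))))

(define i (box 0))
(sum i 1 5 (lambda () (* (unbox i) (unbox i)))) ; => 55 = 1+4+9+16+25
```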
Another interesting behavior that we can now observe is that the TOY evaluation rule for with:
is specifying an eager evaluator only if the language that this rule is written in is itself eager. Indeed, if we run the TOY interpreter in Lazy Racket (or other interpreters we have implemented), we can verify that running:
is perfectly fine — the call to Racket’s division is done in the evaluation of the TOY division expression, but since Lazy Racket is lazy, then if this value is never used then we never get to do this division! On the other hand, if we evaluate
we do get an error when DrRacket tries to display the result, which forces strictness. Note how the arrows in DrRacket that show where the computation is are quite confusing: the computation seems to go directly to the point of the arithmetic operations (arith-op), since the rest of the evaluation that the evaluator performed was already done, and succeeded. The actual failure happens when we try to force the resulting promise, which contains only the strict points in our code.