PL: Future notes  Tuesday, November 19th

This is provided for reference, and can change once we go over the material in class.


Designing Domain Specific Languages (DSLs)

PLAI §35

Programming languages differ in numerous ways:

  1. Each uses different notations for writing down programs. As we’ve observed, however, syntax is only partially interesting. (This is, however, less true of languages that are trying to mirror the notation of a particular domain.)

  2. Control constructs: for instance, early languages didn’t even support recursion, while most modern languages still don’t have continuations.

  3. The kinds of data they support. Indeed, sophisticated languages like Racket blur the distinction between control and data by making fragments of control into data values (such as first-class functions and continuations).

  4. The means of organizing programs: do they have functions, modules, classes, namespaces, …?

  5. Automation such as memory management, run-time safety checks, and so on.

Each of these items suggests natural questions to ask when you design your own languages in particular domains.

Obviously, there are a lot of domain specific languages these days — and that’s not new. For example, four of the oldest languages were conceived as domain specific languages:

Only in the late 60s / early 70s did languages begin to break free from their special-purpose domains and become general purpose languages (GPLs). These days, we usually use some GPL for our programs and often come up with small domain specific languages (DSLs) for specific jobs. The problem is designing such a language. There are lots of decisions to make, and as should be clear by now, many ways of shooting yourself in the foot. You need to know:

And very important:

To clarify why this can be applicable in more situations than you think, consider what programming languages are used for. One example that should not be ignored is using a programming language to implement a programming language — for example, what we did so far (or any other interpreter or compiler). In the same way that some pieces of code in a PL represent things in the “real world”, there are other programs that represent things in a language — possibly even the same one. To take a side-effect-heavy example, the meaning of one-brick might abstract over laying a brick when making a wall — it abstracts all the little details into a function:

(define (one-brick wall brick-pile)
  (move-eye (location brick-pile))
  (let ([pos (find-available-brick-position brick-pile)])
    (move-hand pos)
    (grab-object))
  (move-eye wall)
  (let ([pos (find-next-brick-position wall)])
    (move-hand pos)
    (drop-object)))

and we can now write

(one-brick my-wall my-brick-pile)

instead of all of the above. We might use that in a loop:

(define (build-wall wall pile)
  (define (loop n)
    (when (< n 500)
      (one-brick wall pile)
      (loop (add1 n))))
  (loop 0))

This is a common piece of looping code that we’ve seen in many forms, and a common complaint of newcomers to functional languages is the lack of some kind of a loop. But once you know the template, writing such loops is easy — and in fact, you can write code that would take something like:

(define (build-wall wall pile)
  (loop-for i from 1 to 500
    (one-brick wall pile)))

and produce the previous code. Note the main point here: we switch from code that deals with bricks to code that deals with code.

Now, a viable option for implementing a new DSL is to do so by transforming it into an existing language. Such a process is usually tedious and error prone — tedious because you need to deal with the boring parts of a language (making a parser etc), and error prone because it’s easy to generate bad code (especially when you’re dealing with strings) and you get bad errors in terms of the translated code instead of the actual code, forcing you to debug the intermediate generated programs. Lisp languages have traditionally taken this idea one level further than other languages: instead of writing a new transformer for your language, you use the host language, but you extend and customize it by adding your own forms.

Syntax Transformations: Macros

PLAI §36

Macros are one of the biggest advantages of all Lisps; they are even more of an advantage in Scheme implementations, and more specifically still, they are a major Racket feature. This section is therefore specific to Racket (which extends the feature in some unique ways), although most of it is the same in most Schemes.

As we have previously seen, it is possible to implement one language construct using another. What we did could be described as bits of a compiler, since they translate one language to another.

We will see how this can be done in Racket: implementing some new linguistic forms in terms of ones that are already known. In essence, we will be translating Racket constructs to other Racket constructs — and all that is done in Racket, no need to go back to the language Racket was implemented in (C).

This is possible with a simple “trick”: the Racket implementation uses some syntax objects. These objects are implemented somehow inside Racket’s own source code. But these objects are also directly available for our use — part of the implementation is exposed to our level. This is quite similar to the way we have implemented pairs in our language — a TOY or a SLOTH pair is implemented using a Racket pair, so the same data object is available at both levels.

This is the big idea that Scheme (and therefore Racket) inherited, to some extent, from Lisp: programs are made of numbers, strings, symbols, and lists of these — and all of these are used both at the meta-level and at the user level. This means that instead of having no meta-language at all (locking away a lot of useful stuff), or having some crippled half-baked meta-language (CPP being both the most obvious and the most popular example of such a meta-language), we get exactly the same language at both levels.

How is this used? Well, the principle is simple. For example, say we want to write a macro that will evaluate two forms in sequence, but if the first one returns a result that is not false then it returns it instead of evaluating the second one too. This is exactly how or behaves, so pretend we don’t have it — call our version orelse:

(orelse <expr1> <expr2>)

in effect, we add a new special form to our language, with its own evaluation rule, only we know how to express this evaluation rule by translating it to things that are already part of our language.

We could do this as a simple function — only if we’re willing to explicitly delay the arguments with a lambda, and use the thunks in the function:

(define (orelse thunk1 thunk2)
  (if (thunk1)
    (thunk1) ; ignore the double evaluation for now
    (thunk2)))

or:

(define (orelse thunk1 thunk2)
  ((if (thunk1)
    thunk1
    thunk2)))

and using it like this:

(orelse (lambda () 1) (lambda () (error "boom")))

But this is clearly not the right way to do this: whoever uses this code must be aware of the need to send us thunks, and it’s verbose and inconvenient.

Note that this could be a feasible solution if there was an easy, uniform syntactic way to say “a chunk of code” instead of immediately executing it — this is exactly what (lambda () ...) does. We could, for example, make {...} be a shorthand for that, which is what Perl 6 does. However, we will soon see examples where we want more than just delaying the evaluation of some code.

We want to translate

(orelse <expr1> <expr2>)

--to-->

(if <expr1>
  <expr1>
  <expr2>)

If we look at the code as an s-expression, then we can write the following function:

(define (translate-orelse l)
  (if (and (list? l)
          (= 3 (length l))
          (eq? 'orelse (first l)))
    (list 'if (second l) (second l) (third l))
    (error 'translate-orelse "bad input: ~s" l)))

We can now try it with a simple list:

(translate-orelse '(orelse foo1 foo2))

and note that the result is correct.
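
For reference, a hypothetical interaction would look roughly like this — the result is the translated if expression:

> (translate-orelse '(orelse foo1 foo2))
'(if foo1 foo1 foo2)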

How is this used? Well, all we need is to hook our function into our implementation’s evaluator. In Lisp, we get a defmacro form for this, and many Schemes inherited it or something similar. In Racket, we need to

(require compatibility/defmacro)

but it requires the transformation to be a little different in a way that makes life easier: the above contains a lot of boilerplate code. Usually, we will require the input to be a list of some known length, the first element to be a symbol that specifies our form, and then do something with the other arguments. So we’d want to always follow a template that looks like:

(define (translate-??? exprs)
  (if (and (list? exprs)
          (= N (length exprs))
          (eq? '??? (car exprs)))
    (let ([E1 (cadr exprs)]
          [E2 (caddr exprs)]
          ...)
      ...make result expression...)
    (error ...)))

But this looks very similar to checking that a function call is a call to a specific function (and for a good reason — macro uses look just like function calls). So we make the translation function take a number of arguments, one for each part of the input s-expression. For example, the above translation and test become:

(define (translate-orelse <expr1> <expr2>)
  (list 'if <expr1> <expr1> <expr2>))

(translate-orelse 'foo1 'foo2)

The number of arguments is used to check the input (turning an arity error for the macro into an arity error for the translator function call), and we don’t need to “caddr our way” to the arguments.

This gives us the simple definition — but what about the promised hook? — All we need is to use define-macro instead of define, and change the name to the name that will trigger this translation (providing the last missing test of the input):

(define-macro (orelse <expr1> <expr2>)
  (list 'if <expr1> <expr1> <expr2>))

and test it:

(orelse 1 (error "boom"))
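
Since the use expands into code where the second expression sits in the else branch of an if, the error expression is never evaluated:

> (orelse 1 (error "boom"))
1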

Note that this is basically a (usually purely functional) lazy language of transformations which is built on top of Racket. It is possible for macros to generate pieces of code that contain references to these same macros, and they will be used to expand those instances again.

Now we start with the problems, one by one.

Macro Problems

There is an inherent problem when macros are being used, in any form and any language (even in CPP): you must remember that you are playing with expressions, not with values — which is why this is problematic:

(define (foo x) (printf "foo ~s!\n" x) x)

(or (foo 1) (foo 2))

(orelse (foo 1) (foo 2))
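
Running these makes the difference visible — or prints the “foo 1!” line once, but orelse prints it twice:

> (or (foo 1) (foo 2))
foo 1!
1
> (orelse (foo 1) (foo 2))
foo 1!
foo 1!
1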

And the reason for this should be clear. The standard solution for this is to save the value as a binding — so back to the drawing board, we want this transformation instead:

(orelse <expr1> <expr2>)
-->
(let ((val <expr1>))
  (if val
    val
    <expr2>))

(Note that we would have the same problem in the version that used simple functions and thunks.)

And to write the new code:

(define-macro (orelse <expr1> <expr2>)
  (list 'let (list (list 'val <expr1>))
    (list 'if 'val
      'val
      <expr2>)))

(orelse (foo 1) (foo 2))

and this works like we want it to.

Complexity of S-expression transformations

As can be seen, writing a simple macro doesn’t look too good — what if we want to write a more complicated macro? A solution to this is to look at the above macro and realize that it almost looks like the code we want — we basically want to return a list of a certain fixed shape, we just want some parts to be filled in by the given arguments. Something like:

(define-macro (orelse <expr1> <expr2>)
  '(let ((val <expr1>))
    (if val
      val
      <expr2>)))

if only we had a way to make the <...>s not be a fixed part of the result, but instead stand for the values that the transformation function received. (Remember that the < and > are just part of the name — no magic, just something to make these names stand out.) This is related to notational problems that logicians and philosophers have grappled with for centuries. The solution that Lisp uses for this: instead of a quote, use a backquote (called quasiquote in Racket), which is almost like quote, except that you can unquote parts of the value inside. This is done with a “,” (comma). Using this, the above code can be written like this:

(define-macro (orelse <expr1> <expr2>)
  `(let ((val ,<expr1>))
    (if val
      val
      ,<expr2>)))
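
(As a reminder of how quasiquote and unquote behave at the plain value level — this is an ordinary interaction, not macro code:)

> `(1 ,(+ 1 1) 3)
'(1 2 3)
> (let ([x 'X]) `(if ,x ,x other))
'(if X X other)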

Scoping problems

You should be able to guess what this problem is about. The basic problem with these macros is that they cannot be used reliably — a name that is produced by the macro can shadow a name that comes from a completely different place, thereby destroying lexical scope. For example, in:

(let ((val 4))
  (orelse #f val))

the val that the macro introduces shadows the use of this name in the code above. One way to solve this is to write macros that look like this:

(define-macro (orelse <expr1> <expr2>)
  `(let ((%%!my*internal*var-do-not-use!%% ,<expr1>))
    (if %%!my*internal*var-do-not-use!%%
      %%!my*internal*var-do-not-use!%%
      ,<expr2>)))

or:

(define-macro (orelse <expr1> <expr2>)
  `(let ((i-am-using-orelse-so-i-should-not-use-this-name ,<expr1>))
    (if i-am-using-orelse-so-i-should-not-use-this-name
      i-am-using-orelse-so-i-should-not-use-this-name
      ,<expr2>)))

or (this is actually similar to using UUIDs):

(define-macro (orelse <expr1> <expr2>)
  `(let ((eli@barzilay.org/foo/bar/2002-02-02-10:22:22 ,<expr1>))
    (if eli@barzilay.org/foo/bar/2002-02-02-10:22:22
      eli@barzilay.org/foo/bar/2002-02-02-10:22:22
      ,<expr2>)))

This is really not too good either, because such obscure variables tend to clobber each other too, in all kinds of unexpected ways.

Another way is to have a function that gives you a different variable name every time you call it:

(define-macro (orelse <expr1> <expr2>)
  (let ((temp (gensym)))
    `(let ((,temp ,<expr1>))
      (if ,temp
        ,temp
        ,<expr2>))))

but this is not safe either since there might still be clashes of these names (eg, if they’re using a counter that is specific to the current process, and you start a new process and load code that was generated earlier). The Lisp solution for this (which Racket’s gensym function implements as well) is to use uninterned symbols — symbols that have their own identity, much like strings, and even if two have the same name, they are not eq?.
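
For example, every call to gensym returns a fresh uninterned symbol, so even two symbols with the same printed name are distinct (the actual generated name will differ from run to run):

> (gensym)
'g42
> (eq? (gensym 'temp) (gensym 'temp))
#f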

Note also that there is the mirror side of this problem — what happens if we try this:

(let ([if 123]) (orelse #f #f))

? This leads to capture in the other direction — the code above captures the if that appears in the code the macro produces.

Some Schemes will allow something like

(define-macro (foo x)
  `(,mul-list ,x))

but this is a hack since the macro outputs something that is not a pure s-expression (and it cannot work for a syntactic keyword like if). Specifically, it is not possible to write the resulting expression (to a compiled file, for example).

We will ignore this for a moment.


Another problem — manageability of these transformations.

Quasiquotes gets us a long way, but it is still insufficient.

For example, let’s write a Racket bind that uses lambda for binding. The transformation we now want is:

(bind ((var expr) ...)
  body)
-->
((lambda (var ...) body)
expr ...)

The code for this looks like this:

(define-macro (bind var-expr-list body)
  (cons (list 'lambda (map car var-expr-list) body)
        (map cadr var-expr-list)))

This already has a lot more pitfalls. There are lists and conses that you should be careful of, there are maps and there are cadrs that would be catastrophic if you use cars instead. The quasiquote syntax is a little more capable — you can write this:

(define-macro (bind var-expr-list body)
  `((lambda ,(map car var-expr-list) ,body)
    ,@(map cadr var-expr-list)))

where “,@” is similar to “,” but the unquoted expression should evaluate to a list that is spliced into its surrounding list (that is, its own parens are removed and it’s made into elements in the containing list). But this is still not as readable as the transformation you actually want, and worse, it is not checking that the input syntax is valid, which can lead to very confusing errors.
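
(To see the difference between “,” and “,@” at the value level:)

> `(1 ,(list 2 3) 4)
'(1 (2 3) 4)
> `(1 ,@(list 2 3) 4)
'(1 2 3 4)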

This is yet another problem — if there is an error in the resulting syntax, the error will be reported in terms of this result rather than the syntax of the code. There is no easy way to tell where these errors are coming from. For example, say that we make a common mistake: forget the “@” character in the above macro:

(define-macro (bind var-expr-list body)
  `((lambda ,(map car var-expr-list) ,body)
    ,(map cadr var-expr-list)))

Now, someone else (the client of this macro), tries to use it:

> (bind ((x 1) (y 2)) (+ x y))
procedure application: expected procedure,
given: 1; arguments were: 2

Yes? Now what? Debugging this is difficult, since in most cases it is not even clear that you were using a macro, and in any case the macro comes from code that you have no knowledge of and no control over. [The problem in this specific case is that the macro expands the code to:

((lambda (x y) (+ x y))
(1 2))

so Racket will try to use 1 as a function and throw a runtime error.]

Adding error checking to the macro results in this code:

(define-macro (bind var-expr-list body)
  (if (andmap (lambda (var-expr)
                (and (list? var-expr)
                    (= 2 (length var-expr))
                    (symbol? (car var-expr))))
              var-expr-list)
    `((lambda ,(map car var-expr-list) ,body)
      ,@(map cadr var-expr-list))
    (error 'bind "bad syntax whaaaa!")))

Such checks are very important, yet writing this is extremely tedious.

Scheme (and Racket) Macros

Scheme, Racket included (and much extended), has a solution that is better than defmacro: it has define-syntax and syntax-rules. First of all, define-syntax is used to create the “magical connection” between user code and some macro transformation code that does some rewriting. This definition:

(define-syntax foo
  ...something...)

makes foo be a special syntax that, when used in the head of an expression, will lead to transforming the expression itself, where the result of this transformation is what gets used instead of the original expression. The “...something...” in this code fragment should be a transformation function — one that consumes the expression that is to be transformed, and returns the new expression to run.

Next, syntax-rules is used to create such a transformation in an easy way. The idea is that what we thought to be an informal specification of such rewrites, for example:

can actually be formalized by automatically creating a syntax transformation function from these rule specifications. (Note that this example has round parentheses so we don’t fall into the illusion that square brackets are different: the resulting transformation would be the same.) The main point is to view the left hand side as a pattern that can match some forms of syntax, and the right hand side as producing an output that can use some matched patterns.

syntax-rules is used with such rewrite specifications, and it produces the corresponding transformation function. For example, this:

(syntax-rules () ;*** ignore this "()" for now
  [(x y) (y x)])

evaluates to a function that is somewhat similar to:

(lambda (expr)
  (if (and (list? expr) (= 2 (length expr)))
    (list (second expr) (first expr))
    (error "bad syntax")))

but match is a little closer, since it uses similar input patterns:

(lambda (expr)
  (match expr
    [(list x y) (list y x)]
    [else (error "bad syntax")]))

Such transformations are used in a define-syntax expression to tie the transformer back into the compiler by hooking it on a specific keyword. You can now appreciate how all this works when you see how easy it is to define macros that would be very tedious with define-macro. For example, the above bind:

(define-syntax bind
  (syntax-rules ()
    [(bind ((x v) ...) body ...)
    ((lambda (x ...) body ...) v ...)]))
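
Trying it out — the use expands into an immediate application of a lambda:

> (bind ((x 1) (y 2)) (+ x y))
3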

and let* with its two rules:

(define-syntax let*
  (syntax-rules ()
    [(let* () body ...)
    (let () body ...)]
    [(let* ((x v) (xs vs) ...) body ...)
    (let ((x v)) (let* ((xs vs) ...) body ...))]))

These transformations are so easy to follow that Scheme specifications (and reference manuals) describe forms by specifying their definitions. For example, the Scheme report specifies let* as a “derived form”, and explains its semantics via this transformation.

The input patterns in these rules are similar to match patterns, and the output patterns assemble an s-expression using the matched parts in the input. For example:

(x y) --> (y x)

does what you’d expect it to do — it matches a parenthesized form with two sub-forms, and produces a form with the two sub-forms swapped. The rules for “...” on the left side are similar to match, as we have seen many times; on the right side it is used to splice a matched sequence into the resulting expression, and it is required to use the ... for sequence-matched pattern variables. For example, here is a list of some patterns, and a description of how they match an input when used on the left side of a transformation rule and how they produce an output expression when they appear on the right side:

Some examples of transformations that would be very tedious to write code manually for:


This solves the problem of writing the transformation code easily — no need for list, cons, etc., not even for quasiquotes and tedious syntax massaging. But there were other problems. First, there was the problem of bad scope, one that was previously solved with a gensym:

(define-macro (orelse <expr1> <expr2>)
  (let ((temp (gensym)))
    `(let ((,temp ,<expr1>))
      (if ,temp ,temp ,<expr2>))))

Translating this to define-syntax and syntax-rules we get something simpler:

(define-syntax orelse
  (syntax-rules ()
    [(orelse <expr1> <expr2>)
    (let ((temp <expr1>))
      (if temp temp <expr2>))]))

Even simpler, Racket has a macro called define-syntax-rule that expands to a define-syntax combined with a syntax-rules — using it, we can write:

(define-syntax-rule (orelse <expr1> <expr2>)
  (let ((temp <expr1>))
    (if temp temp <expr2>)))

This looks like a function — but you must remember that it is a transformation rule specification, which is a very different beast, as we’ll see.

The main thing here is that Racket takes care of making bindings follow the lexical scope rules:

(let ([temp 4])
  (orelse #f temp))

works fine. In fact, it fully respects the scoping rules: there is no confusion between bindings that the macro introduces and bindings that are introduced where the macro is used. (Think about different colors for bindings introduced by the macro and other bindings.) It’s also fine with many cases that are much harder to cope with otherwise (eg, cases where there is no gensym magic solution):

(let ([if +])
  (orelse 1 1))

(let ([if +])
  (if (orelse 1 1) 10 100)) ; two different `if's here

or combining both:

(let ([if #f] [temp 4])
  (orelse if temp))
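
For reference, here is what these evaluate to: in the first, the if inside the expansion is still the real if; in the second, the user’s if (bound to +) gets applied to the result of the orelse; and in the last, the user’s temp and the macro’s temp stay separate:

> (let ([if +]) (orelse 1 1))
1
> (let ([if +]) (if (orelse 1 1) 10 100))
111
> (let ([if #f] [temp 4]) (orelse if temp))
4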

(You can try DrRacket’s macro debugger to see how the various bindings get colored differently.)

Advocates of define-macro will claim that with hygienic macros it is difficult to write a macro that intentionally plants a known identifier. Think about a loop macro that has an abort that can be used inside its body. Or an if-it form that is like if, but makes it possible to use the condition’s value in the “then” branch as an it binding. It is possible with all Scheme macro systems to “break hygiene” in such ways, and we will later see how to do this in Racket. However, Racket also provides a better way to deal with such problems (think about it being always “bound to a syntax error”, but locally rebound in an if-it form).

Scheme macros are said to be hygienic — a term used to specify that they respect lexical scope. There are several implementations of hygienic macro systems across Scheme implementations; Racket uses the one known as the “syntax-case system”, named after the syntax-case construct that we discuss below.

All of this can get much more important in the presence of a module system, since you can write a module that provides transformations rules, not just values and functions. This means that the concept of “a library” in Racket is something that doesn’t exist in other languages: it’s a library that has values, functions, as well as macros (or, “compiler plugins”).


The way that Scheme implementations achieve hygiene in a macro system is by making it deal with more than just raw S-expressions. Roughly speaking, it deals with syntax objects that are sort of a wrapper structure around S-expressions, carrying additional information. The important part of this information when it comes to dealing with hygiene is the “lexical scope” — which can roughly be described as having identifiers be represented as symbols plus a “color” which represents the scope. This way such systems can properly avoid confusing identifiers with the same name that come from different scopes.

There was also the problem of making debugging difficult, because a macro can introduce errors that are “coming out of nowhere”. In the implementation that we work with, this is solved by adding yet more information to these syntax objects — in addition to the underlying S-expression and the lexical scope, they also contain source location information. This allows Racket (and DrRacket) to locate the source of a specific syntax error, so locating the offending code is easy. DrRacket’s macro debugger heavily relies on this information to provide a very useful tool — since writing macros can easily become a hard job.

Finally, there was the problem of writing bad macros. For example, it is easy to forget that you’re dealing with a macro definition and write:

(define-syntax-rule (twice x) (+ x x))

just because you want to inline the addition — but in this case you end up duplicating the input expression which can have a disastrous effect. For example:

(twice (twice (twice (twice (twice (twice (twice (twice 1))))))))

expands into a lot of code to compile — the fully expanded form contains 2^8 = 256 copies of the literal 1, all being added up.

Another example is:

(define-syntax-rule (with-increment var expr)
  (let ([var (add1 var)]) expr))
...
(with-increment (* foo 2)
  ...code...)

the problem here is that (* foo 2) will be used as an identifier to be bound by the let expression — which can lead to a confusing syntax error.

Racket provides many tools to help macro programmers — in addition to a user-interface tool like the macro debugger there are also programmer-level tools where you can reject an input if it doesn’t contain an identifier at a certain place etc. Still, writing macros is much harder than writing functions — some of these problems are inherent to the problem that macros solve; for example, you may want a twice macro that replicates an expression. By specifying a transformation to the core language, a macro writer has full control over which expressions get evaluated and how, which identifiers are binding instances, and how the scope of the given expressions is shaped.

Meta Macros

One of the nice results of syntax-rules dealing with the subtle points of identifiers and scope is that things work fine even when we “go up a level”. For example, the short define-syntax-rule form that we’ve seen is itself defined as a simple macro:

(define-syntax define-syntax-rule
  (syntax-rules ()
    [(define-syntax-rule (name P ...) B)
    (define-syntax name
      (syntax-rules ()
        [(name P ...) B]))]))

In fact, this is very similar to something that we have already seen: the rewrite form that we have used in Schlac is implemented in just this way. The only difference is that rewrite requires an actual => token to separate the input pattern from the output template. If we just use it in a syntax rule:

(define-syntax rewrite
  (syntax-rules ()
    [(rewrite (name P ...) => B)
    (define-syntax name
      (syntax-rules ()
        [(name P ...) B]))]))

it won’t work. Racket treats the above => just like any identifier, which in this case acts as a pattern variable which matches anything. The solution to this is to list the => as a keyword which is expected to appear in the macro use as-is — and that’s what the mysterious () of syntax-rules is used for: any identifier listed there is taken to be such a keyword. This makes the following version

(define-syntax rewrite
  (syntax-rules (=>)
    [(rewrite (name P ...) => B)
    (define-syntax name
      (syntax-rules ()
        [(name P ...) B]))]))

do what we want and throw a syntax error unless rewrite is used with an actual => in the proper place.

Lazy Constructions in an Eager Language

PLAI §37 (has some examples)

This is not really lazy evaluation, but it gets close, and provides the core useful property of easy-to-use infinite lists.

(define-syntax-rule (cons-stream x y)
  (cons x (lambda () y)))
(define stream? pair?)
(define null-stream null)
(define null-stream? null?)
;; note that a stream is an improper pair (not a proper list),
;; so we use car and cdr here rather than first and rest
(define stream-first car)
(define (stream-rest s) ((cdr s)))

Using it:

(define ones (cons-stream 1 ones))
(define (stream-map f s)
  (if (null-stream? s)
    null-stream
    (cons-stream (f (stream-first s))
                (stream-map f (stream-rest s)))))
(define (stream-map2 f s1 s2)
  (if (null-stream? s1)
    null-stream
    (cons-stream (f (stream-first s1) (stream-first s2))
                (stream-map2 f (stream-rest s1)
                                (stream-rest s2)))))
(define ints (cons-stream 0 (stream-map2 + ones ints)))
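
A small helper makes it easy to look at a finite prefix of such an infinite stream (this helper is not part of the code above, just a convenience for experimenting):

(define (stream-take s n)
  (if (or (zero? n) (null-stream? s))
    null
    (cons (stream-first s)
          (stream-take (stream-rest s) (sub1 n)))))

(stream-take ones 5) ; => '(1 1 1 1 1)
(stream-take ints 6) ; => '(0 1 2 3 4 5)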

Actually, all Scheme implementations come with a generalized tool for (local) laziness: a delay form that delays computation of its body expression, and a force function that forces such promises. Here is a naive implementation of this:

(define-type promise
  [make-promise (-> Any)])

(define-syntax-rule (delay expr)
  (make-promise (lambda () expr)))

(define (force p)
  (cases p [(make-promise thunk) (thunk)]))

Proper definitions of delay/force cache the result — and practical ones can get pretty complex, for example, in order to allow tail calls via promises.
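
For example, here is a rough sketch of a caching version in plain Racket — the names are my-delay/my-force to avoid clashing with Racket’s built-in promises, and this only illustrates the caching idea, not the actual implementation:

(struct cpromise (thunk [value #:mutable] [forced? #:mutable]))

(define-syntax-rule (my-delay expr)
  (cpromise (lambda () expr) #f #f))

(define (my-force p)
  (unless (cpromise-forced? p)  ; compute only on the first force
    (set-cpromise-value! p ((cpromise-thunk p)))
    (set-cpromise-forced?! p #t))
  (cpromise-value p))

(define p (my-delay (begin (printf "computing...\n") 42)))
(my-force p) ; prints "computing..." and returns 42
(my-force p) ; returns the cached 42 without printing again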

Recursive Macros

Syntax transformations can be recursive. For example, we have seen how let* can be implemented by a transformation that uses two rules, one of which expands to another use of let*:

(define-syntax let*
  (syntax-rules ()
    [(let* () body ...)
    (let () body ...)]
    [(let* ((x v) (xs vs) ...) body ...)
    (let ((x v)) (let* ((xs vs) ...) body ...))]))

When Racket expands a let* expression, the result may contain a new let* which needs to be expanded as well. An important implication of this is that recursive macros are fine, as long as the recursive case is using a smaller expression. This is just like any form of recursion (or loop), where you need to be looping over a well-founded set of values — where each iteration uses a new value that is closer to some base case.

For example, consider the following macro:

(define-syntax-rule (while condition body ...)
  (when condition
    body ...
    (while condition body ...)))

It seems like this is a good implementation of a while loop — after all, if you were to implement it as a function using thunks, you’d write very similar code:

(define (while condition-thunk body-thunk)
  (when (condition-thunk)
    (body-thunk)
    (while condition-thunk body-thunk)))

But if you look at the nested while form in the transformation rule, you’ll see that it is exactly the same as the input form. This means that this macro can never be completely expanded — it specifies infinite code! In practice, this makes the (Racket) compiler loop forever, consuming more and more memory. This is unlike, for example, the recursive let* rule which uses one less binding-value pair than specified as its input.

The reason that the function version of while is fine is that it iterates using the same code, and the condition thunk will depend on some state that converges to a base case (usually the body thunk will perform some side-effects that make the loop converge). But in the macro case there is no evaluation happening: if the transformed syntax contains the same input pattern, we end up with a macro that expands infinitely.

The correct solution for a while macro is therefore to use plain recursion using a local recursive function:

(define-syntax-rule (while condition body ...)
  (letrec ([loop (lambda ()
                  (when condition
                    body ...
                    (loop)))])
    (loop)))

A popular way to deal with macros like this that revolve around a specific control flow is to separate them into a function that uses thunks, and a macro that does nothing except wrap input expressions as thunks. In this case, we get this solution:

(define (while/proc condition-thunk body-thunk)
  (when (condition-thunk)
    (body-thunk)
    (while/proc condition-thunk body-thunk)))

(define-syntax-rule (while condition body ...)
  (while/proc (lambda () condition)
              (lambda () body ...)))
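
Either version can be used like this (a small example that uses a mutable counter for the loop state):

(define n 3)
(while (> n 0)
  (printf "n = ~s\n" n)
  (set! n (sub1 n)))
;; prints: n = 3, n = 2, n = 1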

Another example: a simple loop

Here is an implementation of a macro that does a simple arithmetic loop:

(define-syntax for
  (syntax-rules (= to do)
    [(for x = m to n do body ...)
    (letrec ([loop (lambda (x)
                      (when (<= x n)
                        body ...
                        (loop (+ x 1))))])
      (loop m))]))

(Note that this is not complete code: it suffers from the usual problem of multiple evaluations of the n expression. We’ll deal with it soon.)

This macro combines both control flow and lexical scope. Control flow is specified by the loop (done, as usual in Racket, as a tail-recursive function) — for example, it determines how code is iterated, and it also determines what the for form will evaluate to (it evaluates to whatever when evaluates to, the void value in this case). Scope is also specified here, by translating the code to a function — this code makes x have a scope that covers the body so this is valid:

(for i = 1 to 3 do (printf "i = ~s\n" i))

but it also makes the boundary expression n be in this scope, making this:

(for i = 1 to (if (even? i) 10 20) do (printf "i = ~s\n" i))

valid. In addition, while re-evaluating the boundary expression on each iteration might occasionally be desirable, in most cases it’s not — consider this example:

(for i = 1 to (read) do (printf "i = ~s\n" i))

This is easily solved by using a let to make the expression evaluate just once:

(define-syntax for
  (syntax-rules (= to do)
    [(for x = m to n do body ...)
    (let ([m* m]  ; execution order
          [n* n])
      (letrec ([loop (lambda (x)
                        (when (<= x n*)
                          body ...
                          (loop (+ x 1))))])
        (loop m*)))]))

which makes the previous use result in a “reference to undefined identifier: i” error.

Furthermore, the fact that we have a hygienic macro system means that it is perfectly fine to use nested for expressions:

(for a = 1 to 9 do
  (for b = 1 to 9 do (printf "~s,~s " a b))
  (newline))

The transformation is, therefore, completely specifying the semantics of this new form.

Extending this syntax is easy using multiple transformation rules — for example, say that we want to extend it to have a step optional keyword. The standard idiom is to have the step-less pattern translated into one that uses step 1:

(for x = m to n do body ...)
--> (for x = m to n step 1 do body ...)

As usual, you should remember that syntax-rules tries the patterns one by one until a match is found, but in this case there are no problems because the keywords make the choice unambiguous:

(define-syntax for
  (syntax-rules (= to do step)
    [(for x = m to n do body ...)
    (for x = m to n step 1 do body ...)]
    [(for x = m to n step d do body ...)
    (let ([m* m]
          [n* n]
          [d* d])
      (letrec ([loop (lambda (x)
                        (when (<= x n*)
                          body ...
                          (loop (+ x d*))))])
        (loop m*)))]))

(for i = 1 to 10 do (printf "i = ~s\n" i))
(for i = 1 to 10 step 2 do (printf "i = ~s\n" i))

We can even extend it to do a different kind of iteration, for example, iterating over a list:

(define-syntax for
  (syntax-rules (= to do step in)
    [(for x = m to n do body ...)
    (for x = m to n step 1 do body ...)]
    [(for x = m to n step d do body ...)
    (let ([m* m]
          [n* n]
          [d* d])
      (letrec ([loop (lambda (x)
                        (when (<= x n*)
                          body ...
                          (loop (+ x d*))))])
        (loop m*)))]
    ;; list
    [(for x in l do body ...)
    (for-each (lambda (x) body ...) l)]))

(for i in (list 1 2 3 4) do (printf "i = ~s\n" i))

(for i in (list 1 2 3 4) do
  (for i = 0 to i do (printf "i = ~s  " i))
  (newline))

Yet Another: List Comprehension

At this point it’s clear that macros are a powerful language feature that makes it relatively easy to implement new features, which makes Racket an easy-to-use tool for quick experimentation with new language features. As an example of a practical feature rather than a toy, let’s see how we can implement Python’s list comprehensions. These are expressions that conveniently combine map, filter, and nested uses of both.

First, a simple implementation that uses only the map feature:

(define-syntax list-of
  (syntax-rules (for in)
    [(list-of EXPR for ID in LIST)
    (map (lambda (ID) EXPR)
          LIST)]))

(list-of (* x x) for x in (range 10))

It is a good exercise to see how everything that we’ve seen above plays a role here. For example, how we get the ID to be bound in EXPR.

Next, add a condition expression with an if keyword, and implemented using a filter:

(define-syntax list-of
  (syntax-rules (for in if)
    [(list-of EXPR for ID in LIST if COND)
    (map (lambda (ID) EXPR)
          (filter (lambda (ID) COND) LIST))]
    [(list-of EXPR for ID in LIST)
    (list-of EXPR for ID in LIST if #t)]))

(list-of (* x x) for x in (range 10) if (odd? x))

Again, go over it and see how the binding structure makes the identifier available in both expressions. Note that since we’re just playing around we’re not paying too much attention to performance etc. (For example, if we cared, we could have implemented the if-less case by not using filter at all, or we could implement a filter that accepts #t as a predicate and in that case just returns the list, or even implement it as a macro that identifies a (lambda (_) #t) pattern and expands to just the list (a bad idea in general).)

The last step: Python’s comprehension accepts multiple for-ins for nested loops, possibly with if filters at each level:

(define-syntax list-of
  (syntax-rules (for in if)
    [(list-of EXPR for ID in LIST if COND)
    (map (lambda (ID) EXPR)
          (filter (lambda (ID) COND) LIST))]
    [(list-of EXPR for ID in LIST)
    (list-of EXPR for ID in LIST if #t)]
    [(list-of EXPR for ID in LIST for MORE ...)
    (list-of EXPR for ID in LIST if #t for MORE ...)]
    [(list-of EXPR for ID in LIST if COND for MORE ...)
    (apply append (map (lambda (ID) (list-of EXPR for MORE ...))
                        (filter (lambda (ID) COND) LIST)))]))

A collection of examples that I found in the Python docs and elsewhere, demonstrating all of these:

;; [x**2 for x in range(10)]
(list-of (* x x) for x in (range 10))
;; [(x, y) for x in [1,2,3] for y in [3,1,4] if x != y]
(list-of (list x y) for x in '(1 2 3) for y in '(3 1 4)
                    if (not (= x y)))

(define (round-n x n) ; python-like round to n digits
  (define 10^n (expt 10 n))
  (/ (round (* x 10^n)) 10^n))
;; [str(round(pi, i)) for i in range(1, 6)]
(list-of (number->string (round-n pi i)) for i in (range 1 6))

(define matrix
  '((1 2 3 4)
    (5 6 7 8)
    (9 10 11 12)))
;; [[row[i] for row in matrix] for i in range(4)]
(list-of (list-of (list-ref row i) for row in matrix)
        for i in (range 4))

(define text '(("bar" "foo" "fooba")
              ("Rome" "Madrid" "Houston")
              ("aa" "bb" "cc" "dd")))
;; [y for x in text if len(x)>3 for y in x]
(list-of y for x in text if (> (length x) 3) for y in x)
;; [y for x in text for y in x if len(y)>4]
(list-of y for x in text for y in x if (> (string-length y) 4))
;; [y.upper() for x in text if len(x) == 3
;;            for y in x if y.startswith('f')]
(list-of (string-upcase y) for x in text if (= (length x) 3)
                          for y in x if (regexp-match? #rx"^f" y))

Problems of syntax-rules Macros

As we’ve seen, using syntax-rules solves many of the problems of macros, but it comes with a high price tag: the macros are “just” rewrite rules. As rewrite rules they’re pretty sophisticated, but we still lose a huge advantage of what we had with define-macro — the macro code is no longer Racket code but a simple language of rewrite rules.

There are two big problems with this which we will look into now. (DrRacket’s macro stepper tool can be very useful in clarifying these examples.) The first problem is that in some cases we want to perform computations at the macro level — for example, consider a repeat macro that needs to expand like this:

(repeat 1 E)  -->  (begin E)
(repeat 2 E)  -->  (begin E E)
(repeat 3 E)  -->  (begin E E E)
...

With a syntax-rules macro we can match over specific integers, but we just cannot do this with any integer. Note that this specific case can be done better via a function — better by not replicating the expression:

(define (repeat/proc n thunk)
  (when (> n 0) (thunk) (repeat/proc (sub1 n) thunk)))
(define-syntax-rule (repeat N E)
  (repeat/proc N (lambda () E)))

or even better, assuming the above for is already implemented:

(define-syntax-rule (repeat N E)
  (for i = 1 to N do E))
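
Either way, the usage is the same, for example:

(repeat 3 (printf "hi\n")) ; prints "hi" three times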

But still, we want to have the ability to do such computation. A similar, and perhaps better example, is better error reporting. For example, the above for implementation blindly expands its input, so:

> (for 1 = 1 to 3 do (printf "i = ~s\n" i))
lambda: not an identifier in: 1

we get a bad error message in terms of lambda, which is breaking abstraction (it comes from the expansion of for, which is an implementation detail), and worse — it is an error about something that the user didn’t write.

Yet another aspect of this problem is that sometimes we need to get creative solutions where it would be very simple to write the corresponding Racket code. For example, consider the problem of writing a rev-app macro — (rev-app F E …) should evaluate like the function call (F E …), except that we want the argument evaluation to go from right to left instead of Racket’s usual left-to-right. This code is obviously very broken:

(define-syntax-rule (rev-app F E ...)
  (let (reverse ([x E] ...))
    (F x ...)))

because it generates a malformed let form — there is no way for the macro expander to somehow know that the reverse should happen at the transformation level. In this case, we can actually solve this using a helper macro to do the reversing:

(define-syntax-rule (rev-app F E ...)
  (rev-app-helper F (E ...) ()))
(define-syntax rev-app-helper
  (syntax-rules ()
    ;; this rule does the reversing, collecting the reversed
    ;; sequence in the last part
    [(rev-app-helper F (E0 E ...) (E* ...))
    (rev-app-helper F (E ...) (E0 E* ...))]
    ;; and this rule fires up when we're done with the reversal
    [(rev-app-helper F () (E ...))
    (let ([x E] ...)
      (F x ...))]))

There are still problems with this — it complains about x ... because there is a single x there rather than a sequence of them; and even if it did somehow work, we also need the xs in that last line in the original order rather than the reversed one. So the solution is complicated by collecting new xs while reversing — and since we need them in both orders, we’re going to collect both orders:

(define-syntax-rule (rev-app F E ...)
  (rev-app-helper F (E ...) () () ()))
(define-syntax rev-app-helper
  (syntax-rules ()
    ;; this rule does the reversing, collecting the reversed
    ;; sequence in the last part -- also make up new identifiers
    ;; and collect them in *both* directions (`X' is the straight
    ;; sequence of identifiers, `X*' is the reversed one, and `E*'
    ;; is the reversed expression sequence); note that each
    ;; iteration introduces a new identifier called `t'
    [(rev-app-helper F (E0 E ...) (X ...  ) (  X* ...) (  E* ...))
    (rev-app-helper F (  E ...) (X ... t) (t X* ...) (E0 E* ...))]
    ;; and this rule fires up when we're done with the reversal and
    ;; the generation
    [(rev-app-helper F () (x ...) (x* ...) (E* ...))
    (let ([x* E*] ...)
      (F x ...))]))

;; see that it works
(define (show x) (printf ">>> ~s\n" x) x)
(rev-app list (show 1) (show 2) (show 3))
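
The expected behavior: the arguments are evaluated right-to-left, but list still receives them in the original order —

;; >>> 3
;; >>> 2
;; >>> 1
;; '(1 2 3)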

So, this worked, but in this case the simplicity of the syntax-rules rewrite language worked against us, and made a very inconvenient solution. This could have been much easier if we could just write a “meta-level” reverse, and a use of map to generate the names.

… And all of that was just the first problem. The second one is even harder: syntax-rules is designed to avoid all name captures, but what if we want to break hygiene? There are some cases where you want a macro that “injects” a user-visible identifier into its result. The most common (and therefore the classic) example of this is an anaphoric if macro, that binds it to the result of the test (which can be any value, not just a boolean):

;; find the element of `l' that is immediately following `x'
;; (assumes that if `x' is found, it is not the last one)
(define (after x l)
  (let ([m (member x l)])
    (if m
      (second m)
      (error 'after "~s not found in ~s" x l))))

which we want to turn to:

;; find the element of `l' that is immediately following `x'
;; (assumes that if `x' is found, it is not the last one)
(define (after x l)
  (if-it (member x l)
    (second it)
    (error 'after "~s not found in ~s" x l)))

The obvious definition of `if-it’ doesn’t work:

(define-syntax-rule (if-it E1 E2 E3)
  (let ([it E1]) (if it E2 E3)))

The reason it doesn’t work should be obvious now — it is designed to avoid the it that the macro introduced from interfering with the it that the user code uses.

Next, we’ll see Racket’s “low level” macro system, which can later be used to solve these problems.

Racket’s “Low-Level” Macros

As we’ve previously seen, syntax-rules creates transformation functions — but there are other more direct ways to write these functions. These involve writing a function directly rather than creating one with syntax-rules — and because this is a more low-level approach than using syntax-rules to generate a transformer, it is called a “low level macro system”. All Scheme implementations have such low-level systems, and these systems vary from one to the other. They all involve some particular type that is used as “syntax” — this type is always related to S-expressions, but it cannot be plain S-expressions as in the simple define-macro tool that we’ve seen earlier, if we want to avoid the problems of capturing identifiers.

Historical note: For a very long time the Scheme standard had avoided a concrete specification of this low-level system, leaving syntax-rules as the only way to write portable Scheme code. This has led some people to explore more thoroughly the things that can be done with just syntax-rules rewrites, even beyond the examples we’ve seen. As it turns out, there’s a lot that can be done with it — in fact, it is possible to write rewrite rules that implement a lambda calculus, making it possible to write things that look kind of like “real code”. This is, however, awkward to say the least, and redundant with a macro system that can use the full language for arbitrary computations. It has also become less popular recently, since R6RS dictates something that is known as a “syntax-case macro system” (not really a good name, since syntax-case is just a tool in this system).

Racket uses an extended version of this syntax-case system, which is what we will discuss now. In the Racket macro system, “syntax” is a new type, not just S-expressions as is the case with define-macro. The way to think about this type is as a wrapper around S-expressions, where the S-expression is the “raw” symbolic form of the syntax, and a bunch of “stuff” is added. Two important bits of this “stuff” are the source location information for the syntax, and its lexical scope. The source location is what you’d expect: the file name for the syntax (if it was read from a file), its position in the file, and its line and column numbers; this information is mostly useful for reporting errors. The lexical scope information is used in a somewhat peculiar way: there is no direct way to access it, since usually you don’t need to do so — instead, for the rare cases where you do need to manipulate it, you copy the lexical scope of an existing syntax to create another. This allows the macro interface to be usable without specification of a specific representation for the scope.

The main piece of functionality in this system is syntax-case (which has lead to its common name) — a form that is used to deconstruct the input via pattern-matching similar to syntax-rules. In fact, the syntax of syntax-case looks very similar to the syntax of syntax-rules — there are zero or more parenthesized keywords, and then clauses that begin with the same kind of patterns to match the syntax against. The first obvious difference is that the syntax to be matched is given explicitly:

(syntax-case <value-to-match> (<keywords>)
  [<pattern> <result>]
  ...)

A macro is written as a plain function, usually used as the value in a define-syntax form (but it could also be used in plain helper functions). For example, here’s how the orelse macro is written using this:

(define-syntax orelse
  (lambda (stx)
    (syntax-case stx ()
      [(orelse x y) ???])))

Racket’s define-syntax can also use the same syntactic sugar for functions as define:

(define-syntax (orelse stx)
  (syntax-case stx ()
    [(orelse x y) ???]))

The second significant change from syntax-rules is that the right-hand-side expressions in the branches are not patterns. Instead, they’re plain Racket expressions. In this example (as in most uses of syntax-case) the result of the syntax-case form is going to be the result of the macro, and therefore it should return a piece of syntax. So far, the only piece of syntax that we see in this code is just the input stx — and returning that will get the macro expander into an infinite loop (because we’re essentially making a transformer for orelse expressions that expands the syntax to itself).

To return a new piece of syntax, we need a way to write new syntax values. The common way to do this is using a new special form: syntax. This form is similar to quote — except that instead of an S-expression, it results in a syntax. For example, in this code:

(define-syntax (orelse stx)
  (printf "Expanding ~s\n" stx)
  (syntax-case stx ()
    [(orelse x y) (syntax (printf "Running an orelse\n"))]))

the first printout happens during macro expansion, and the second is part of the generated code. Like quote, syntax has a convenient abbreviation — “#'”:

(define-syntax (orelse stx)
  (printf "Expanding ~s\n" stx)
  (syntax-case stx ()
    [(orelse x y) #'(printf "Running an orelse\n")]))

Now the question is how we can get the actual macro working. The thing is that syntax is not completely quoting its contents as a syntax — there could be some meta identifiers that are bound as “pattern variables” in the syntax-case pattern that was matched for the current clause — in this case, we have x and y as such pattern variables. (Actually, orelse is a pattern variable too, but this doesn’t matter for our purpose.) Using these inside a syntax will have them replaced by the syntax that they matched against. The complete orelse definition is therefore very easy:

(define-syntax (orelse stx)
  (syntax-case stx ()
    [(orelse <expr1> <expr2>)
    #'(let ((temp <expr1>))
        (if temp temp <expr2>))]))

The same treatment of ... holds here too — in the matching pattern they specify 0 or more occurrences of the preceding pattern, and in the output template they mean that the matching sequence is “spliced” in. Note that syntax-rules is now easy to define as a macro that expands to a function that uses syntax-case to do the actual rewrite work:

(define-syntax (syntax-rules stx)
  (syntax-case stx ()
    [(syntax-rules (keyword ...)
      [pattern template]
      ...)
    #'(lambda (stx)
        (syntax-case stx (keyword ...)
          [pattern #'template]
          ...))]))

Extra: Solving the syntax-rules problems

So far it looks like we didn’t do anything new, but the important change is already in: the fact that the result of a macro is a plain Racket expression means that we can now add more API functionality for dealing with syntax values. There is no longer a problem with running “meta-level” code vs generated runtime code: anything that is inside a syntax (anything that is quoted with a “#'”) is generated code, and the rest is code that is executed when the macro expands. We will now introduce some of the Racket macro API by demonstrating the solutions to the syntax-rules problems that were mentioned earlier.

First of all, we’ve talked about the problem of reporting good errors. For example, make this:

(for 1 = 1 to 3 do ...)

throw a proper error instead of leaving it for lambda to complain about. To make it easier to play with, we’ll use a simpler macro:

(define-syntax fun
  (syntax-rules (->)
    [(_ id -> E) (lambda (id) E)])) ; _ matches the head `fun'

and using an explicit function:

(define-syntax (fun stx)
  (syntax-case stx (->)
    [(_ id -> E) #'(lambda (id) E)]))

One of the basic API functions is syntax-e — it takes in a syntax value and returns the S-expression that it wraps. In this case, we can pull out the identifier from this, and check that it is a valid identifier using symbol? on what it wraps:

(define-syntax (fun stx)
  (syntax-case stx (->)
    [(_ id -> E)
    (if (symbol? (syntax-e (cadr (syntax-e stx))))
      #'(lambda (id) E)
      (error 'fun "bad syntax: expecting an identifier, got ~s"
                  (cadr (syntax-e stx))))]))

The error is awkward though — it doesn’t look like the usual kind of syntax errors that Racket throws: it’s shown in an ugly way, and its source is not properly highlighted. A better way to do this is to use `raise-syntax-error’ — it takes an error message, the offending syntax, and the offending part of this syntax:

(define-syntax (fun stx)
  (syntax-case stx (->)
    [(_ id -> E)
    (if (symbol? (syntax-e (cadr (syntax-e stx))))
      #'(lambda (id) E)
      (raise-syntax-error
        'fun "bad syntax: expecting an identifier"
        stx (cadr (syntax-e stx))))]))

Another inconvenient issue is with pulling out the identifier. Consider that #'(lambda (id) E) is a new piece of syntax that has the supposed identifier in it — we could pull it out of that instead of out of stx, but it would be even easier with #'(id), and even easier than that: just #'id, which is the identifier itself:

(define-syntax (fun stx)
  (syntax-case stx (->)
    [(_ id -> E)
    (if (symbol? (syntax-e #'id))
      #'(lambda (id) E)
      (raise-syntax-error
        'fun "bad syntax: expecting an identifier"
        stx #'id))]))

Also, checking that something is an identifier is common enough that there is another predicate for this (the combination of syntax-e and symbol?) — identifier?:

(define-syntax (fun stx)
  (syntax-case stx (->)
    [(_ id -> E)
    (if (identifier? #'id)
      #'(lambda (id) E)
      (raise-syntax-error
        'fun "bad syntax: expecting an identifier"
        stx #'id))]))
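
A quick check of both cases (the exact formatting of the error may differ, but the offending 5 is what gets reported and highlighted):

> ((fun x -> (* x x)) 5)
25
> (fun 5 -> 5)
fun: bad syntax: expecting an identifier at: 5 in: (fun 5 -> 5)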

As a side note, checking the input pattern for validity is very common, and in some cases might be needed to discriminate patterns (eg, one result when id is an identifier, another when it’s not). For this, syntax-case clauses have “guard expressions” — so we can write the above more simply as:

(define-syntax (fun stx)
  (syntax-case stx (->)
    [(_ id -> E)
    (identifier? #'id)
    #'(lambda (id) E)]))

This, however, produces a less informative “bad syntax” error, since there is no way to tell what the error message should be. (There is a relatively new Racket tool called syntax-parse where such requirements can be specified and a proper error message is generated on bad inputs.)

We can now resolve the repeat problem — create a (repeat N E) macro:

(define-syntax (repeat stx)
  (define (n-copies n expr)
    (if (> n 0) (cons expr (n-copies (sub1 n) expr)) null))
  (syntax-case stx ()
    [(_ N E)
    (integer? (syntax-e #'N))
    #'(begin (n-copies (syntax-e #'N) #'E))]))

(Note that we can define an internal helper function, just like we do with plain functions.) But this doesn’t quite work (and if you try it, you’ll see an interesting error message) — the problem is that we’re generating code with a call to n-copies in it, instead of actually calling it. What we need is to take the list that n-copies generates, and somehow “plant” it in the resulting syntax. So far the only things that were planted in results are pattern variables — and we can actually use another syntax-case to do just that: match the result of n-copies against a pattern variable, and then use that variable in the final syntax:

(define-syntax (repeat stx)
  (define (n-copies n expr)
    (if (> n 0) (cons expr (n-copies (sub1 n) expr)) null))
  (syntax-case stx ()
    [(_ N E)
    (number? (syntax-e #'N))
    (syntax-case (n-copies (syntax-e #'N) #'E) ()
      [(expr ...) #'(begin expr ...)])]))

This works — but one thing to note here is that n-copies returns a list, not a syntax value. It works because syntax-case automatically “coerces” S-expressions into syntax values, which is easy to do in this case since we only care about the elements of the list, and those are all syntax values already.

However, this use of syntax-case as a pattern variable binder is rather indirect, enough that it’s hard to read the code. Since this is a common use case, there is a shorthand for that too: with-syntax. It looks like a let-like form, but instead of binding plain identifiers, it binds pattern variables — and in fact, the things to be bound are themselves patterns:

(define-syntax (repeat stx)
  (define (n-copies n expr)
    (if (> n 0) (cons expr (n-copies (sub1 n) expr)) null))
  (syntax-case stx ()
    [(_ N E)
    (number? (syntax-e #'N))
    (with-syntax ([(expr ...) (n-copies (syntax-e #'N) #'E)])
      #'(begin expr ...))]))

Note that there is no need to implement with-syntax as a primitive form — it is not too hard to implement it as a macro that expands to the actual use of syntax-case. (In fact, you can probably guess now that the Racket core language is much smaller than it seems, with large parts that are implemented as layers of macros.)
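
For instance, here is a rough single-binding sketch of how such a macro could look (my-with-syntax is a made-up name; the real with-syntax handles any number of bindings and better error reporting):

(define-syntax-rule (my-with-syntax ([pattern rhs]) body)
  (syntax-case rhs () [pattern body]))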

There is one more related group of shorthands that is relevant here: quasisyntax, unsyntax, and unsyntax-splicing. These are analogous to the quoting forms by the same names, and they have similar shorthands: “#`”, “#,” and “#,@”. They could be used to implement this macro:

(define-syntax (repeat stx)
  (define (n-copies n expr)
    (if (> n 0) (cons expr (n-copies (sub1 n) expr)) null))
  (syntax-case stx ()
    [(_ N E)
    (number? (syntax-e #'N))
    #`(begin #,@(n-copies (syntax-e #'N) #'E))]))

[As you might suspect now, these new forms are also implemented as macros, which expand to the corresponding uses of with-syntax, which in turn expand into syntax-case forms.]

We now have almost enough machinery to implement the rev-app macro, and compare it to the original (complex) version that used syntax-rules. The only thing that is missing is a way to generate a number of new identifiers — which we achieved earlier through a number of macro expansions (each expansion of a macro that introduces a new identifier x gets an identifier that is different from the one in every other expansion, which is why it worked). Racket has a function for this: generate-temporaries. Since it is common to generate temporaries for input syntaxes, the function expects an input syntax that has a list as its S-expression form (or a plain list).
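
For example (a sketch; the actual names of the fresh identifiers are unspecified):

(generate-temporaries #'(a b c))
;; => a list of three fresh identifiers, something like (temp1 temp2 temp3)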

(define-syntax (rev-app stx)
  (syntax-case stx ()
    [(_ F E ...)
    (let ([temps (generate-temporaries #'(E ...))])
      (with-syntax ([(E* ...) (reverse (syntax-e #'(E ...)))]
                    [(x  ...) temps]
                    [(x* ...) (reverse temps)])
        #'(let ([x* E*] ...)
            (F x ...))))]))

;; see that it works
(define (show x) (printf ">>> ~s\n" x) x)
(rev-app list (show 1) (show 2) (show 3))

Note that this is not shorter than the syntax-rules version, but it is easier to read since reverse and generate-temporaries have an obvious, direct intention, eliminating the need to wander through rewrite rules and infer how they do their work. In addition, this macro expands in one step (use the macro stepper to compare it with the previous version), which makes it much more efficient.

extra Breaking Hygiene, How Bad is it?

We finally get to address the second deficiency of syntax-rules — its inability to intentionally capture an identifier so it is visible in user code. Let’s start with the simple version, the one that didn’t work:

(define-syntax-rule (if-it E1 E2 E3)
  (let ([it E1]) (if it E2 E3)))

and translate it to syntax-case:

(define-syntax (if-it stx)
  (syntax-case stx ()
    [(if-it E1 E2 E3)
    #'(let ([it E1]) (if it E2 E3))]))

The only problem here is that the it identifier is introduced by the macro, or more specifically, by the syntax form that makes up the returned syntax. What we need now is a programmatic way to create an identifier with a lexical context that is different from the default. As mentioned above, Racket’s syntax system (and all other syntax-case systems) doesn’t provide a direct way to manipulate the lexical context. Instead, it provides a way to create a piece of syntax by copying the lexical scope of another one — and this is done with the datum->syntax function. The function consumes a syntax value to get the lexical scope from, and a “datum”, which is an S-expression that can contain syntax values. The result will have these syntax values exactly as given in the input, but raw S-expressions will be converted to syntaxes, using the given lexical context. In the above case, we need to convert an it symbol into the same-named identifier, and we can do that using the lexical scope of the input syntax. As we’ve seen before, we use with-syntax to inject the new identifier into the result:

(define-syntax (if-it stx)
  (syntax-case stx ()
    [(if-it E1 E2 E3)
    (with-syntax ([it (datum->syntax stx 'it)])
      #'(let ([it E1]) (if it E2 E3)))]))

We can even control the scope of the user binding — for example, it doesn’t make much sense to have it bound in the else branch. We can do this by first binding a plain (hygienic) identifier to the test result, and binding it to that only where it is needed:

(define-syntax (if-it stx)
  (syntax-case stx ()
    [(if-it E1 E2 E3)
    (with-syntax ([it (datum->syntax stx 'it)])
      #'(let ([tmp E1]) (if tmp (let ([it tmp]) E2) E3)))]))

[A relevant note: Racket provides something that is known as “The Macro Writer’s Bill of Rights” — in this case, it guarantees that the extra let does not imply a runtime or a memory overhead.]
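
Here is a quick usage sketch (with made-up values):

;; `it' is bound to the (true) value of the test expression
(if-it (assq 'b '((a 1) (b 2)))
       (second it)
       'nothing)
;; => 2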

This works — and it’s a popular way for creating such user-visible bindings. However, breaking hygiene this way can lead to some confusing problems. Such problems are usually visible when we try to compose macros — for example, say that we want to create a cond-it macro, the anaphoric analogue of cond, which binds it in each branch. It seems that an obvious way of doing this is by layering it on top of if-it — it should even be simple enough to be defined with syntax-rules:

(define-syntax cond-it
  (syntax-rules (else)
    [(_ [test1 expr1 ...] [tests exprs ...] ...)
    (if-it test1
      (begin expr1 ...)
      (cond-it [tests exprs ...] ...))]
    ;; two end cases -- one with an `else' and one without
    [(_ [else expr ...]) (begin expr ...)]
    [(_) (void)]))

Surprisingly, this does not work! Can you see what went wrong?

The problem lies in how the it identifier is generated — it uses the lexical context of the whole if-it expression, which seemed like exactly what we wanted. But in this case, the if-it expression is coming from the cond-it macro, not from the user input. Or to be more accurate: it’s the cond-it macro which is the user of if-it, so the generated it is visible to cond-it’s own code, but not to cond-it’s users…

Note that these anaphoric macros are a popular example, but these problems do pop up elsewhere too. For example, imagine a loop macro that wants to bind break unhygienically, a class macro that binds this, and many others.

How can we solve this? There are several ways to do so; one common approach is sketched below.
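
The sketch uses Racket’s syntax parameters to make it a real binding whose meaning is adjusted inside if-it (this is one known technique, not necessarily the one we will follow; note that the identifier it must be provided together with if-it):

(require racket/stxparam)

(define-syntax-parameter it
  (lambda (stx)
    (raise-syntax-error 'it "can only be used inside if-it" stx)))

(define-syntax-rule (if-it E1 E2 E3)
  (let ([tmp E1])
    (if tmp
      (syntax-parameterize ([it (make-rename-transformer #'tmp)])
        E2)
      E3)))

With this definition the syntax-rules version of cond-it above works as expected, since it no longer depends on the lexical context of the if-it expression.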

extra Macros in Racket’s Module System

Not in PLAI

One of the main things that Racket pioneered is integrating its syntax system with its module system. In plain Racket (#lang racket, not the course languages), every file is a module that can provide some functionality. For example, when you put this code in a file:

#lang racket
(provide plus)
(define (plus x y) (+ x y))

You get a library that gives you a plus function. This is just the usual thing that you’d expect from a library facility in a language — but Racket allows you to do the same with syntax definitions. For example, if we add the following to this file:

(provide with)
(define-syntax-rule (with [x V] E)
  (let ([x V]) E))

we — the users of this library — also get a with binding, which is a “FLANG-compatibility” macro that expands into a let. At first glance this doesn’t seem all too impressive, but consider the fact that with is actually a translation function that lives at the syntax level, as a kind of compiler plugin, and you’ll see that this is not as trivial as it seems. Racket arranges this by instantiating code at the compilation level too, so libraries are used in two ways: the usual runtime instantiation, or instantiation at compile time.
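
To see both kinds of exports in action, here is a sketch of a client module (assuming the library code above is saved as “mylang.rkt”, the name used below):

#lang racket
(require "mylang.rkt")
(with [x (plus 1 2)]  ; the macro and the function from the library
  (* x x))            ; => 9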

extra Defining Languages in Racket

But then Racket takes this concept even further. So far, we treated the thing that follows a #lang as a kind of a language specification — but the more complete story is that this specification is actually just a module. The only difference between language modules like racket or pl and “library modules” like our file above is that language modules provide a bunch of functionality that is specific to a language implementation. However, you don’t need to know about these things up front: instead, there are a few tools that allow you to provide everything that some other module provides — if we add this to the above:

(provide (all-from-out racket))

then we get a library that provides the same two bindings as above (plus and with) — in addition to everything from the racket library (which it got from its own #lang racket line).

To use this file as a language, the last bit that we need is the actual concrete syntax. Racket provides an s-exp language, which is a kind of meta-language for reading source code in the usual S-expression syntax. Assuming that the above is in a file called mylang.rkt, we can use it (from a different file in the same directory) as follows:

#lang s-exp "mylang.rkt"

which makes the language of this file be (a) read using the S-expression syntax, and (b) get its bindings from our module, so

#lang s-exp "mylang.rkt"
(with [x 10] (* x 4))

will show a result of 40.

So far this seems like just some awkward way to get to the same functionality as a simple library — but now we can use more tools to make things more interesting. First, we can provide everything from racket except for let — change the last provide to:

(provide (except-out (all-from-out racket) let))

Next, we can provide our with but make it have the name let instead — by replacing that (provide with) with:

(provide (rename-out [with let]))

The result is a language that is the same as Racket, except that it has an additional plus “built-in” function, and its let syntax is different, as specified by our macro:

#lang s-exp "mylang.rkt"
(let [x 10] (plus x 4))

To top things off, there are a few “special” implicit macros that Racket uses. One of them, #%app, is a macro that is used implicitly whenever there’s an expression that looks like a function application. In our terms, that’s the Call AST node that gets used whenever a braced form isn’t one of the known forms. If we override this macro in a similar way to what we did for let, we’re essentially changing the semantics of application syntax. For example, here’s a definition that makes it possible to use a @ keyword to get a list of results of applying a function to several arguments:

(define-syntax my-app
  (syntax-rules (@)
    [(_ F @ E ...)
    (list (F E) ...)]
    [(_ x ...) (x ...)]))
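
For instance, a sketch of how the first clause expands:

(my-app add1 @ 1 2)  ; expands to (list (add1 1) (add1 2)), ie '(2 3)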

This makes the (my-app add1 @ 1 2) application evaluate to '(2 3), but if @ is not used (as the second subexpression), we get the usual function application. (Note that this is because the last clause expands to (x ...) which implicitly has the usual Racket function application.) We can now make our language replace Racket’s implicit #%app macro with this, in the same way as we did before: first, drop Racket’s version from what we provide:

(provide (except-out (all-from-out racket) let #%app))

and then provide our definition instead

(provide (rename-out [my-app #%app]))

Users of our language get this as the regular function application:

#lang s-exp "mylang.rkt"
(let [x (plus 6 10)] (sqrt @ (plus x -7) x (plus x 9)))

Since #%app is a macro, it can expand to anything, even to things that are not function applications at all. For example, here’s an extended definition that adds an arrow syntax that expands to a lambda expression rather than an actual application:

(define-syntax my-app
  (syntax-rules (@ =>)
    [(_ F @ E ...)
    (list (F E) ...)]
    [(_ x => E ...)
    (lambda x E ...)]
    [(_ x ...) (x ...)]))

And an example of using it

#lang s-exp "mylang.rkt"
(define add1 ((x) => (+ x 1)))
;; or, combining all application forms in one example:
(((x) => (plus x 7)) @ 10 20 30)

Another such special macro is #%module-begin: this is a macro that is wrapped around the whole module body. Changing it makes it possible to change the semantics of a sequence of toplevel expressions in our language. The following is our complete language, with an example of redefining #%module-begin to create a “verbose” language that prints out expressions and what they evaluate to (note the verbose helper macro that is completely internal):


mylang.rkt
;; A language that is built as an extension on top of Racket

#lang racket

(provide (except-out (all-from-out racket)
                    let #%app #%module-begin))

(provide plus)
(define (plus x y) (+ x y))

(provide (rename-out [with let]))
(define-syntax-rule (with [x V] E)
                    (let ([x V]) E))

(provide (rename-out [my-app #%app]))
(define-syntax my-app
  (syntax-rules (=> @)
    [(_ x => E ...)
    (lambda x E ...)]
    [(_ F @ E ...)
    (list (F E) ...)]
    [(_ x ...) (x ...)]))

(provide (rename-out [mod-beg #%module-begin]))
(define-syntax-rule (mod-beg E ...)
                    (#%module-begin (verbose E) ...))
(define-syntax verbose
  (syntax-rules ()
    [(_ (define name value)) ; assume no (define (foo ...) ...)
    (begin (define name value)
            (printf "~s := ~s\n" 'name name))]
    [(_ E)
    (printf "~s --> ~s\n" 'E E)]))

And for reference, try that language with the above example:

#lang s-exp "mylang.rkt"
(define seven (+ 3 4))
(define add1 ((x) => (+ x 1)))
(((x) => (plus x seven)) @ 10 20 30)
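
Running this should print something along these lines (a sketch; the exact printed form of the procedure varies):

seven := 7
add1 := #<procedure:add1>
(((x) => (plus x seven)) @ 10 20 30) --> (17 27 37)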

Macro Conclusions

PLAI §37.5

Macros are extremely powerful, but this also means that their usage should be restricted to situations where they are really needed. You can view any macro as extending the current collection of tools that you provide — and these tools are much more difficult for your users to swallow than plain functions: evaluation can happen in any way, with any scope, unlike the uniform rules of function application. An analogy is that every function (or value) that you provide is equivalent to adding nouns to a vocabulary, but macros can add completely new rules for reading, since using them might result in a completely different evaluation. Because of this, adding macros carelessly can make code harder to read and debug — and using them should be done in a way that is as clear as possible for users.

When should a macro be used?

It is also important to note that macros should not be used too frequently. As said above, every macro adds a completely different way of reading your code — a way that doesn’t use the usual “nouns” and “verbs”, but there are other reasons not to use a macro.

One common use case is optimization — trying to avoid an extra function call. For example, this:

int min(int x, int y) {
  if (x < y) return x; else return y;
}

might seem wasteful if you don’t want a full function call on every usage of min. So you might be tempted to use this instead:

#define min(x,y) x<y ? x : y

You might even know the pitfalls of C macros, so you make it more robust:

#define min(x,y) (((x)<(y)) ? (x) : (y))

But small functions like the above are things that any decent compiler should know how to optimize, and even if your compiler doesn’t, it’s still not worth doing this optimization because programmer time is the most expensive factor in any computer system. In addition, a compiler is careful to do these optimizations only when possible (eg, it is not possible to in-line a recursive function) and to do the in-lining properly — unlike the min CPP macro above, which is erroneous when x or y are expressions that have side effects.

Side-note: macros in mainstream languages

Macros are an extremely powerful tool in Racket (and other languages in the Lisp family) — how come nobody else uses them?

Well, people have tried to use them in many contexts. The problem is that you cannot get away with a simple solution that does nothing more than textual manipulation of your programs. For example, the standard C preprocessor is a macro language, but it is fundamentally limited to very simple situations. This is still a hot topic these days, with modern languages trying out different solutions (or giving up and claiming that macros are evil).

Here is an example that was written by Mark Jason Dominus (“Higher Order Perl”), in a Perl mailing list post among further discussion on macros in Lisp vs other languages, including Perl’s source transformers that are supposed to fill a similar role.

The example starts with writing the following simple macro:

#define square(x) x*x

This doesn’t quite work because

2/square(10)

expands to

2/10*10

which is 2, but you wanted 0.02. So you need this instead:

#define square(x) (x*x)

but this breaks because

square(1+1)

expands to

(1+1*1+1)

which is 3, but you wanted 4. So you need this instead:

#define square(x) ((x)*(x))

But what about

x = 2;
square(x++);

which expands to

((x++)*(x++))

or an expensive expression? So you need this instead:

int __MYTMP;
#define square(x) (__MYTMP = (x), __MYTMP*__MYTMP)

but now it only works for ints; you can’t do square(3.5) any more. To really fix this you have to use nonstandard extensions, something like:

#define square(x) ({ typedef xtype = x; xtype xval = x; xval*xval; })

or more like:

#define square(x) \
  ({ typedef xtype = (x); \
    xtype xval = (x); \
    xval*xval; })

And that’s just to get trivial macros, like “square()”, to work.


You should be able to appreciate now the tremendous power of macros. This is why there are so many “primitive features” of programming languages that can be considered as merely library functionality given a good macro system. For example, people are used to thinking about OOP as some inherent property of a language — but Racket comes with at least two very different object systems, and several others exist in user-distributed code. All of these are implemented as libraries that provide the functionality as well as the necessary syntax in the form of macros. So the basic principle is to have a small core language with powerful constructs, and make it easy to express complex ideas using these constructs.

This is an important point to consider before starting a new DSL (reminder: domain specific language) — if you need something that looks like a simple DSL but might grow to a full language, you can extend an existing language with macros to have the features you want, and you will always be able to grow to use the full language if necessary. This is particularly easy with Racket, but possible in other languages too.


Side note: the principle of a powerful but simple core language with easy extensions is not limited to macros — other factors are involved, like first-class functions. In fact, “first class”-ness can help in many situations, for example: single inheritance + classes as first-class values can be used instead of multiple inheritance.

Types

PLAI §24

In our Toy language implementation, there are certain situations that are not covered. For example,

{< {+ 1 2} 3}

is not a problem, but

{+ {< 1 2} 3}

will eventually use Racket’s addition function on a boolean value, which will crash our evaluator. Assuming that we go back to the simple language we once had, where there were no booleans, we can still run into errors — except now these are the errors that our code raises:

{+ {fun {} 1} 2}

or

{1 2 3}

or

{{fun {x y} {+ x y}} 5}

In any case, it would be good to avoid such errors right from the start — it seems like we should be able to identify such bad code and not even try to run it. One thing that we can do is do a little more work at parse time, and declare the {1 2 3} program fragment as invalid. We can even try to forbid

{bind {{x 1}} {x 2 3}}

in the same way, but what should we do with this? —

{fun {x} {x 2 3}}

The validity of this depends on how it is used. The same goes for some invalid expressions — the above bogus expression can be fine if it’s in a context that shadows <:

{bind {{< *}}
  {+ {< 1 2} 3}}

Finally, consider this:

{+ 3 {if <mystery> 5 {fun {x} x}}}

where <mystery> contains something like random or read. In general, knowing whether a piece of code will run with no errors is equivalent to the halting problem — and because of this, there is no way to create an “exact” type system: they are all either too restrictive (rejecting programs that would run with no errors) or too permissive (accepting programs that might crash). This is a very practical issue — type safety means far fewer bugs in the system. Designing a good type system is still an active research problem.

What is a Type?

PLAI §25

A type is any property of a program (or an expression) that can be determined without running the program. (This is different from what is considered a type in plain Racket, which is a property that is known only at run-time; before run-time we know nothing, so in essence there is only a single type in the static sense.) Specifically, we want to use types in a way that predicts some aspects of the program’s behavior, for example, whether a program will crash.

Usually, types describe the kind of value that an expression can evaluate to, not the precise value itself. For example, we might have two kinds of values — functions and numbers, and we know that addition always operates on numbers, therefore

{+ 1 {fun {x} x}}

is a type error. Note that to determine this we don’t care about the actual function, just the fact that it is a function.

Important: types can discriminate certain programs as invalid, but they cannot discriminate correct programs from incorrect ones. For example, there is no way for any type system to know that this:

{fun {x} {+ x 1}}

is an incorrect decrease-by-one function.

In general, type systems try to get to the optimal point where as much information as possible is known, yet the language is not too restricted, no significant computing resources are wasted, and programmers don’t spend much time annotating their code.

Why would you want to use a type system?

Our Types — The Picky Language

The first thing we need to do is agree on what our types are. Earlier, we talked about two types, numbers and functions (ignoring booleans and anything else for now), and we will use these two types.

In general, this means that we are using the Types are Sets meaning for types, and specifically, we will be implementing a type system known as a Hindley-Milner system. This is not what Typed Racket is using. In fact, one of the main differences is that in our type system each binding has exactly one type, whereas in Typed Racket an identifier can have different types in different places in the code. An example of this is something that we’ve talked about earlier:

(: foo : (U String Number) -> Number)
(define (foo x)          ; \ these `x`s have a
  (if (number? x)        ; / (U Number String) type
    (+ x 1)              ; > this one is a Number
    (string-length x)))  ; > and this one is a String

A type system is presented as a collection of rules called “type judgments”, which describe how to determine the type of an expression. Besides the types and the judgments, a type system specification needs a (decidable) algorithm that can assign types to expressions.

Such a specification should have one rule for every kind of syntactic construct, so when we get a program we can determine the precise type of any expression. Also, these judgments are usually recursive since a type judgment will almost always rely on the types of sub-expressions (if any).

For our restricted system, we have two rules that we can easily specify:

n : Number  (any numeral `n' is a number)
{fun {x} E} : Function

(These rules are actually “axioms”, since they state facts that are true by themselves, with no need for any further work.)

And what about an identifier? Well, it is clear that we need to keep some form of an environment that will keep an account of types assigned to identifiers (note: all of this is not at run-time). This environment is used in all type judgments, and usually written as a capital Greek Gamma character (in some places G is used to stick to ASCII texts).

The conventional way to write the above two axioms is:

Γ ⊢ n : Number
Γ ⊢ {fun {x} E} : Function

The first one is read as “Gamma proves that n has the type Number”. Note that this is a syntactic environment, much like DE-ENVs that you have seen in homework.

So, we can write a rule for identifiers that simply has the type assigned by the environment:

Γ ⊢ x : Γ(x)    ; "Γ(x)" is "lookup(x, Γ)" for a new "type env"

We now need a rule for addition and a rule for application (note: we’re using a very limited subset of our old language, where arithmetic operators are not function applications). Addition is easy: if we can prove that both a and b are numbers in some environment Γ, then we know that {+ a b} is a number in the same environment. We write this as follows:

Γ ⊢ A : Number  Γ ⊢ B : Number
———————————————————————————————
    Γ ⊢ {+ A B} : Number

Now, what about application? We need to refer to some arbitrary type now, and the common letter for that is a Greek lowercase tau:

Γ ⊢ F : Function  Γ ⊢ V : τᵥ
—————————————————————————————
    Γ ⊢ {call F V} : ???

that is — if we can prove that F is a function, and that V is a value of some type τᵥ, then … ??? Well, we need to know more about F: we need to know what type it consumes and what type it returns. So a plain Function type is not enough — we need some sort of a function type that specifies both the input and the output types. We will use the arrow notation that was seen throughout the semester and drop the plain Function type. Now we can write:

Γ ⊢ F : (τ₁ -> τ₂)  Γ ⊢ V : τ₁
——————————————————————————————
    Γ ⊢ {call F V} : τ₂

which makes sense — if you take a function of type τ₁->τ₂ and you feed it what it expects, you get the obvious output type. But going back to the language — where do we get these new arrow types from? We will modify the language and require that every function specifies its input and output type (and assume that functions take exactly one argument). For example, we will write something like this for a function that is the curried version of addition:

{fun {x : Number} : (Number -> Number)
  {fun {y : Number} : Number
    {+ x y}}}

So here is the revised syntax for the limited language that contains only additions, applications, and single-argument functions (and, just for fun, we go back to using the call keyword):

<PICKY> ::= <num>
          | <id>
          | { + <PICKY> <PICKY> }
          | { fun { <id> : <TYPE> } : <TYPE> <PICKY> }
          | { call <PICKY> <PICKY> }

<TYPE>  ::= Number
          | ( <TYPE> -> <TYPE> )

and the typing rules are:

Γ ⊢ n : Number

Γ ⊢ {fun {x : τ₁} : τ₂ E} : (τ₁ -> τ₂)

Γ ⊢ x : Γ(x)

Γ ⊢ A : Number  Γ ⊢ B : Number
———————————————————————————————
    Γ ⊢ {+ A B} : Number

Γ ⊢ F : (τ₁ -> τ₂)  Γ ⊢ V : τ₁
——————————————————————————————
    Γ ⊢ {call F V} : τ₂

But we’re still missing a big part — the current axiomatic rule for a fun expression is too weak. If we use it, we conclude that these expressions:

{fun {x : Number} : (Number -> Number)
  3}
{fun {x : Number} : Number
  {call x 2}}

are valid, as well as concluding that this program:

{call {call {fun {x : Number} : (Number -> Number)
              3}
            5}
      7}

is valid, and should return a number. What’s missing? We need to check that the body part of the function is correct, so the rule for typing a fun is no longer a simple axiom but rather a rule with a premise. Here is how we check the body instead of blindly believing the program’s annotations:

          Γ[x:=τ₁] ⊢ E : τ₂            ; Γ[x:=τ₁] is
——————————————————————————————————————  ;    extend(Γ, x, τ₁)
Γ ⊢ {fun {x : τ₁} : τ₂ E} : (τ₁ -> τ₂)  ; for the new type envs

That is — we want to make sure that if x has type τ₁, then the body expression E has type τ₂, and if we can prove this, then we can trust these annotations.

There is an important relationship between this rule and the call rule for application: the fun rule checks the definition side, guaranteeing that the body really produces a τ₂ result when given a τ₁ input, while the call rule checks the use side, requiring the caller to supply a τ₁ argument and letting it rely on getting a τ₂ result back.

(Side note: Racket comes with a contract system that can identify type errors dynamically, and assign blame to either the caller or the callee — and these correspond to these two sides.)

Note that, as we said, Number is really just a property of a certain kind of values; we don’t know exactly which numbers are actually used. In the same way, the arrow function types don’t tell us exactly which function we have; for example, (Number -> Number) can indicate a function that adds three to its argument, subtracts seven, or multiplies it by 7619. But it certainly says much more than the previous naive Function type. (Consider also Typed Racket here: it goes much further in expressing facts about code.)

For reference, here is the complete BNF and typing rules:

<PICKY> ::= <num>
          | <id>
          | { + <PICKY> <PICKY> }
          | { fun { <id> : <TYPE> } : <TYPE> <PICKY> }
          | { call <PICKY> <PICKY> }

<TYPE>  ::= Number
          | ( <TYPE> -> <TYPE> )

Γ ⊢ n : Number

Γ ⊢ x : Γ(x)

Γ ⊢ A : Number  Γ ⊢ B : Number
———————————————————————————————
    Γ ⊢ {+ A B} : Number

          Γ[x:=τ₁] ⊢ E : τ₂
——————————————————————————————————————
Γ ⊢ {fun {x : τ₁} : τ₂ E} : (τ₁ -> τ₂)

Γ ⊢ F : (τ₁ -> τ₂)  Γ ⊢ V : τ₁
——————————————————————————————
    Γ ⊢ {call F V} : τ₂

Examples of using the typing rules (abbreviating Number as Num) — first, a simple one:

                {} ⊢ 5 : Num  {} ⊢ 7 : Num
                ———————————————————————————
{} ⊢ 2 : Num        {} ⊢ {+ 5 7} : Num
———————————————————————————————————————————
          {} ⊢ {+ 2 {+ 5 7}} : Num

and a little more involved one:

    [x:=Num] ⊢ x : Num  [x:=Num] ⊢ 3 : Num
    ———————————————————————————————————————
          [x:=Num] ⊢ {+ x 3} : Num
———————————————————————————————————————————————
{} ⊢ {fun {x : Num} : Num {+ x 3}} : Num -> Num  {} ⊢ 5 : Num
——————————————————————————————————————————————————————————————
      {} ⊢ {call {fun {x : Num} : Num {+ x 3}} 5} : Num

Finally, try a buggy program like

{+ 3 {fun {x : Number} : Number x}}

and see where it is impossible to continue.
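
Here is a sketch of the failed attempt:

{} ⊢ 3 : Num    {} ⊢ {fun {x : Number} : Number x} : Num
————————————————————————————————————————————————————————
      {} ⊢ {+ 3 {fun {x : Number} : Number x}} : Num

The left premise is fine, but the right one is stuck: the only rule that concludes anything about a fun expression gives it an arrow type (here (Number -> Number)), never Number, so there is no way to complete the derivation.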

The main thing here is that to know that this is a type error, we have to show that there is no derivation for a certain judgment (in this case, no way to prove that a fun expression has the type Num), which we (humans) can only do by inspecting all of the rules. Because of this, we also need to add an algorithm to our type system — one that we can follow mechanically and that knows when to give up.

Typing control

PLAI §26

We will now extend our typed Picky language to have a conditional expression, and predicates. First, we extend the BNF with a predicate expression, and we also need a type for the results:

<PICKY> ::= <num>
          | <id>
          | { + <PICKY> <PICKY> }
          | { < <PICKY> <PICKY> }
          | { fun { <id> : <TYPE> } : <TYPE> <PICKY> }
          | { call <PICKY> <PICKY> }
          | { if <PICKY> <PICKY> <PICKY> }

<TYPE>  ::= Number
          | Boolean
          | ( <TYPE> -> <TYPE> )

Initially, we use the same rules, and add the obvious type for the predicate:

Γ ⊢ A : Number  Γ ⊢ B : Number
———————————————————————————————
    Γ ⊢ {< A B} : Boolean

And what should the rule for if look like? Well, to make sure that the condition is a boolean, it should be something of this form:

Γ ⊢ C : Boolean  Γ ⊢ T : ???  Γ ⊢ E : ???
———————————————————————————————————————————
          Γ ⊢ {if C T E} : ???

What would be the types of T and E? A natural choice would be to let the programmer use any two types:

Γ ⊢ C : Boolean  Γ ⊢ T : τ₁  Γ ⊢ E : τ₂
—————————————————————————————————————————
          Γ ⊢ {if C T E} : ???

But what would the return type be? This is still a problem. (BTW, some kind of a union would be nice, but it has some strong implications that we will not discuss.) In addition, we will have a problem detecting possible errors like:

{+ 2 {if <mystery> 3 {fun {x} x}}}

Since we know nothing about the condition, we can just as well be conservative and force both arms to have the same type. The rule is therefore:

Γ ⊢ C : Boolean  Γ ⊢ T : τ  Γ ⊢ E : τ
———————————————————————————————————————
          Γ ⊢ {if C T E} : τ

— using the same letter indicates that we expect the two types to be identical, unlike the previous attempt. Consequently, this type system is fundamentally weaker than the one of Typed Racket, which we use in this class.

Here is the complete language specification with this extension:

<PICKY> ::= <num>
          | <id>
          | { + <PICKY> <PICKY> }
          | { < <PICKY> <PICKY> }
          | { fun { <id> : <TYPE> } : <TYPE> <PICKY> }
          | { call <PICKY> <PICKY> }
          | { if <PICKY> <PICKY> <PICKY> }

<TYPE>  ::= Number
          | Boolean
          | ( <TYPE> -> <TYPE> )

Γ ⊢ n : Number

Γ ⊢ x : Γ(x)

Γ ⊢ A : Number  Γ ⊢ B : Number
———————————————————————————————
    Γ ⊢ {+ A B} : Number

Γ ⊢ A : Number  Γ ⊢ B : Number
———————————————————————————————
    Γ ⊢ {< A B} : Boolean

          Γ[x:=τ₁] ⊢ E : τ₂
——————————————————————————————————————
Γ ⊢ {fun {x : τ₁} : τ₂ E} : (τ₁ -> τ₂)

Γ ⊢ F : (τ₁ -> τ₂)  Γ ⊢ V : τ₁
——————————————————————————————
    Γ ⊢ {call F V} : τ₂

Γ ⊢ C : Boolean  Γ ⊢ T : τ  Γ ⊢ E : τ
———————————————————————————————————————
          Γ ⊢ {if C T E} : τ

Extending Picky

In general, we can extend this language in one of two ways. For example, let’s say that we want to add the with form. One way to add it is what we did above — simply add it to the language, and write the rule for it. In this case, we get:

Γ ⊢ V : τ₁  Γ[x:=τ₁] ⊢ E : τ₂
——————————————————————————————
Γ ⊢ {with {x : τ₁ V} E} : τ₂

Note how this rule encapsulates information about the scope of with. Also note that we need to specify the type of the bound value.

Another way to achieve this extension is if we add with as a derived rule. We know that when we see a

{with {x V} E}

expression, we can just translate it into

{call {fun {x} E} V}

So we could achieve this extension by using a rewrite rule to translate all with expressions into calls of anonymous functions (eg, using the with-stx facility that we have seen recently). This could be done formally: begin with the with form, translate to the call form, and finally show the necessary goals to prove its type. The only thing to be aware of is the need to translate the types too, and there is one type that is missing from the typed-with version above — the output type of the function. This is an indication that we don’t really need to specify function output types — we can just deduce them from the code, provided that we know the input type to the function.

Indeed, if we do this on a general template for a with expression, then we end up with the same goals that need to be proved as in the above rule:

      Γ[x:=τ₁] ⊢ E : τ₂
——————————————————————————————————————
Γ ⊢ {fun {x : τ₁} : τ₂ E} : (τ₁ -> τ₂)      Γ ⊢ V : τ₁
———————————————————————————————————————————————————————
        Γ ⊢ {call {fun {x : τ₁} : τ₂ E} V} : τ₂
        ———————————————————————————————————————
            Γ ⊢ {with {x : τ₁ V} E} : τ₂

Conclusion — we’ve seen type judgment rules, and how to use them in proof trees. Note that in these trees there is a clear difference between rules that have no preconditions (these are axioms that are always true; eg, a numeral is always of type Number) and rules whose premises must be proved first.

The general way of proving a type seems similar to evaluation of an expression, but there is a huge difference — nothing is really getting evaluated. As an example, we always go into the body of a function expression, which is done to get the function’s type, and this is later used anywhere this function is used — when you evaluate this:

{with {f {fun {x : Number} : Number x}}
  {+ {call f 1} {call f 2}}}

you first create a closure which means that you don’t touch the body of the function, and later you use it twice. In contrast, when you prove the type of this expression, you immediately go into the body of the function which you have to do to prove that it has the expected Number->Number type, and then you just use this type twice.

Finally, we have seen the importance of the choice of type letters to enforce types: in the case of typing an if expression this had a major role, determining whether the two arms may have any two types or must have the same type.

Implementing Picky

The following is a simple implementation of the Picky language. It is based on the environments-based Flang implementation. Note the two main functions here — typecheck and typecheck*.


picky1.rkt
;; The Picky interpreter, verbose version

#lang pl

#|
The grammar:
  <PICKY> ::= <num>
            | <id>
            | { + <PICKY> <PICKY> }
            | { - <PICKY> <PICKY> }
            | { = <PICKY> <PICKY> }
            | { < <PICKY> <PICKY> }
            | { fun { <id> : <TYPE> } : <TYPE> <PICKY> }
            | { call <PICKY> <PICKY> }
            | { with { <id> : <TYPE> <PICKY> } <PICKY> }
            | { if <PICKY> <PICKY> <PICKY> }
  <TYPE>  ::= Num | Number
            | Bool | Boolean
            | { <TYPE> -> <TYPE> }

Evaluation rules:
  eval(N,env)                = N
  eval(x,env)                = lookup(x,env)
  eval({+ E1 E2},env)        = eval(E1,env) + eval(E2,env)
  eval({- E1 E2},env)        = eval(E1,env) - eval(E2,env)
  eval({= E1 E2},env)        = eval(E1,env) = eval(E2,env)
  eval({< E1 E2},env)        = eval(E1,env) < eval(E2,env)
  eval({fun {x} E},env)      = <{fun {x} E}, env>
  eval({call E1 E2},env1)    = eval(B,extend(x,eval(E2,env1),env2))
                              if eval(E1,env1) = <{fun {x} B}, env2>
                            = error!  otherwise <-- never happens
  eval({with {x E1} E2},env) = eval(E2,extend(x,eval(E1,env),env))
  eval({if E1 E2 E3},env)    = eval(E2,env)  if eval(E1,env) is true
                            = eval(E3,env)  otherwise

Type checking rules:

  Γ ⊢ n : Number

  Γ ⊢ x : Γ(x)

  Γ ⊢ A : Number  Γ ⊢ B : Number
  ———————————————————————————————
      Γ ⊢ {+ A B} : Number

  Γ ⊢ A : Number  Γ ⊢ B : Number
  ———————————————————————————————
      Γ ⊢ {< A B} : Boolean

            Γ[x:=τ₁] ⊢ E : τ₂
  ——————————————————————————————————————
  Γ ⊢ {fun {x : τ₁} : τ₂ E} : (τ₁ -> τ₂)

  Γ ⊢ F : (τ₁ -> τ₂)  Γ ⊢ V : τ₁
  ——————————————————————————————
      Γ ⊢ {call F V} : τ₂

  Γ ⊢ V : τ₁  Γ[x:=τ₁] ⊢ E : τ₂
  ——————————————————————————————
  Γ ⊢ {with {x : τ₁ V} E} : τ₂

  Γ ⊢ C : Boolean  Γ ⊢ T : τ  Γ ⊢ E : τ
  ———————————————————————————————————————
            Γ ⊢ {if C T E} : τ

|#

(define-type PICKY
  [Num  Number]
  [Id    Symbol]
  [Add  PICKY PICKY]
  [Sub  PICKY PICKY]
  [Equal PICKY PICKY]
  [Less  PICKY PICKY]
  [Fun  Symbol TYPE PICKY TYPE] ; name, in-type, body, out-type
  [Call  PICKY PICKY]
  [With  Symbol TYPE PICKY PICKY]
  [If    PICKY PICKY PICKY])

(define-type TYPE
  [NumT]
  [BoolT]
  [FunT TYPE TYPE])

(: parse-sexpr : Sexpr -> PICKY)
;; parses s-expressions into PICKYs
(define (parse-sexpr sexpr)
  (match sexpr
    [(number: n)    (Num n)]
    [(symbol: name) (Id name)]
    [(list '+ lhs rhs) (Add  (parse-sexpr lhs) (parse-sexpr rhs))]
    [(list '- lhs rhs) (Sub  (parse-sexpr lhs) (parse-sexpr rhs))]
    [(list '= lhs rhs) (Equal (parse-sexpr lhs) (parse-sexpr rhs))]
    [(list '< lhs rhs) (Less  (parse-sexpr lhs) (parse-sexpr rhs))]
    [(list 'call fun arg)
                      (Call  (parse-sexpr fun) (parse-sexpr arg))]
    [(list 'if c t e)
    (If (parse-sexpr c) (parse-sexpr t) (parse-sexpr e))]
    [(cons 'fun more)
    (match sexpr
      [(list 'fun (list (symbol: name) ': itype) ': otype body)
        (Fun name
            (parse-type-sexpr itype)
            (parse-sexpr body)
            (parse-type-sexpr otype))]
      [else (error 'parse-sexpr "bad `fun' syntax in ~s" sexpr)])]
    [(cons 'with more)
    (match sexpr
      [(list 'with (list (symbol: name) ': type named) body)
        (With name
              (parse-type-sexpr type)
              (parse-sexpr named)
              (parse-sexpr body))]
      [else (error 'parse-sexpr "bad `with' syntax in ~s" sexpr)])]
    [else (error 'parse-sexpr "bad expression syntax: ~s" sexpr)]))

(: parse-type-sexpr : Sexpr -> TYPE)
;; parses s-expressions into TYPEs
(define (parse-type-sexpr sexpr)
  (match sexpr
    ['Number  (NumT)]
    ['Boolean (BoolT)]
    ;; allow shorter names too
    ['Num  (NumT)]
    ['Bool (BoolT)]
    [(list itype '-> otype)
    (FunT (parse-type-sexpr itype) (parse-type-sexpr otype))]
    [else (error 'parse-type-sexpr "bad type syntax in ~s" sexpr)]))

(: parse : String -> PICKY)
;; parses a string containing a PICKY expression to a PICKY AST
(define (parse str)
  (parse-sexpr (string->sexpr str)))

;; Typechecker and related types and helpers

;; this is similar to ENV, but it holds type information for the
;; identifiers during typechecking; it is essentially "Γ"
(define-type TYPEENV
  [EmptyTypeEnv]
  [ExtendTypeEnv Symbol TYPE TYPEENV])

(: type-lookup : Symbol TYPEENV -> TYPE)
;; similar to `lookup' for type environments; note that the
;; error is phrased as a typecheck error, since this indicates
;; a failure at the type checking stage
(define (type-lookup name typeenv)
  (cases typeenv
    [(EmptyTypeEnv) (error 'typecheck "no binding for ~s" name)]
    [(ExtendTypeEnv id type rest-env)
    (if (eq? id name) type (type-lookup name rest-env))]))

(: typecheck : PICKY TYPE TYPEENV -> Void)
;; Checks that the given expression has the specified type.
;; Used only for side-effects (to throw a type error), so return
;; a void value.
(define (typecheck expr type type-env)
  (unless (equal? type (typecheck* expr type-env))
    (error 'typecheck "type error for ~s: expecting a ~s"
          expr type)))

(: typecheck* : PICKY TYPEENV -> TYPE)
;; Returns the type of the given expression (which also means that
;; it checks it). This is a helper for the real typechecker that
;; also checks a specific return type.
(define (typecheck* expr type-env)
  (: two-nums : PICKY PICKY -> Void)
  (define (two-nums e1 e2)
    (typecheck e1 (NumT) type-env)
    (typecheck e2 (NumT) type-env))
  (cases expr
    [(Num n) (NumT)]
    [(Id name) (type-lookup name type-env)]
    [(Add  l r) (two-nums l r) (NumT)]
    [(Sub  l r) (two-nums l r) (NumT)]
    [(Equal l r) (two-nums l r) (BoolT)]
    [(Less  l r) (two-nums l r) (BoolT)]
    [(Fun bound-id in-type bound-body out-type)
    (typecheck bound-body out-type
                (ExtendTypeEnv bound-id in-type type-env))
    (FunT in-type out-type)]
    [(Call fun arg)
    (cases (typecheck* fun type-env)
      [(FunT in-type out-type)
        (typecheck arg in-type type-env)
        out-type]
      [else (error 'typecheck "type error for ~s: expecting a fun"
                    expr)])]
    [(With bound-id itype named-expr bound-body)
    (typecheck named-expr itype type-env)
    (typecheck* bound-body
                (ExtendTypeEnv bound-id itype type-env))]
    [(If cond-expr then-expr else-expr)
    (typecheck cond-expr (BoolT) type-env)
    (let ([type (typecheck* then-expr type-env)])
      (typecheck else-expr type type-env) ; enforce same type
      type)]))

;; Evaluator and related types and helpers

(define-type ENV
  [EmptyEnv]
  [Extend Symbol VAL ENV])

(define-type VAL
  [NumV  Number]
  [BoolV Boolean]
  [FunV  Symbol PICKY ENV])

(: lookup : Symbol ENV -> VAL)
;; lookup a symbol in an environment, return its value or throw an
;; error if it isn't bound
(define (lookup name env)
  (cases env
    [(EmptyEnv) (error 'lookup "no binding for ~s" name)]
    [(Extend id val rest-env)
    (if (eq? id name) val (lookup name rest-env))]))

(: strip-numv : Symbol VAL -> Number)
;; converts a VAL to a Racket number if possible, throws an error if
;; not using the given name for the error message
(define (strip-numv name val)
  (cases val
    [(NumV n) n]
    ;; this error will never be reached, see below for more
    [else (error name "expected a number, got: ~s" val)]))

(: arith-op : (Number Number -> Number) VAL VAL -> VAL)
;; gets a Racket numeric binary operator, and uses it within a NumV
;; wrapper
(define (arith-op op val1 val2)
  (NumV (op (strip-numv 'arith-op val1)
            (strip-numv 'arith-op val2))))

(: bool-op : (Number Number -> Boolean) VAL VAL -> VAL)
;; gets a Racket numeric binary predicate, and uses it
;; within a BoolV wrapper
(define (bool-op op val1 val2)
  (BoolV (op (strip-numv 'bool-op val1)
            (strip-numv 'bool-op val2))))

(: eval : PICKY ENV -> VAL)
;; evaluates PICKY expressions by reducing them to values
(define (eval expr env)
  (cases expr
    [(Num n) (NumV n)]
    [(Id name) (lookup name env)]
    [(Add  l r) (arith-op + (eval l env) (eval r env))]
    [(Sub  l r) (arith-op - (eval l env) (eval r env))]
    [(Equal l r) (bool-op  = (eval l env) (eval r env))]
    [(Less  l r) (bool-op  < (eval l env) (eval r env))]
    [(Fun bound-id in-type bound-body out-type)
    ;; note that types are not used at runtime,
    ;; so they're not stored in the closure
    (FunV bound-id bound-body env)]
    [(Call fun-expr arg-expr)
    (define fval (eval fun-expr env))
    (cases fval
      [(FunV bound-id bound-body f-env)
        (eval bound-body
              (Extend bound-id (eval arg-expr env) f-env))]
      ;; `cases' requires complete coverage of all variants, but
      ;; this `else' is never used since we typecheck programs
      [else (error 'eval "`call' expects a function, got: ~s"
                    fval)])]
    [(With bound-id type named-expr bound-body)
    (eval bound-body (Extend bound-id (eval named-expr env) env))]
    [(If cond-expr then-expr else-expr)
    (let ([bval (eval cond-expr env)])
      (if (cases bval
            [(BoolV b) b]
            ;; same as above: this case is never reached
            [else (error 'eval "`if' expects a boolean, got: ~s"
                                bval)])
        (eval then-expr env)
        (eval else-expr env)))]))

(: run : String -> Number)
;; evaluate a PICKY program contained in a string
(define (run str)
  (let ([prog (parse str)])
    (typecheck prog (NumT) (EmptyTypeEnv))
    (let ([result (eval prog (EmptyEnv))])
      (cases result
        [(NumV n) n]
        ;; this error is never reached, since we make sure
        ;; that the program always evaluates to a number above
        [else (error 'run "evaluation returned a non-number: ~s"
                          result)]))))

;; tests -- including translations of the FLANG tests
(test (run "5") => 5)
(test (run "{< 1 2}") =error> "type error")
(test (run "{fun {x : Num} : Num {+ x 1}}") =error> "type error")
(test (run "{call {fun {x : Num} : Num {+ x 1}} 4}") => 5)
(test (run "{with {x : Num 3} {+ x 1}}") => 4)
(test (run "{with {identity : {Num -> Num} {fun {x : Num} : Num x}}
              {call identity 1}}")
      => 1)
(test (run "{with {add3 : {Num -> Num}
                  {fun {x : Num} : Num {+ x 3}}}
              {call add3 1}}")
      => 4)
(test (run "{with {add3 : {Num -> Num}
                  {fun {x : Num} : Num {+ x 3}}}
              {with {add1 : {Num -> Num}
                    {fun {x : Num} : Num {+ x 1}}}
                {with {x : Num 3}
                  {call add1 {call add3 x}}}}}")
      => 7)
(test (run "{with {identity : {{Num -> Num} -> {Num -> Num}}
                  {fun {x : {Num -> Num}} : {Num -> Num} x}}
              {with {foo : {Num -> Num}
                    {fun {x : Num} : Num {+ x 1}}}
                {call {call identity foo} 123}}}")
      => 124)
(test (run "{with {x : Num 3}
              {with {f : {Num -> Num}
                    {fun {y : Num} : Num {+ x y}}}
                {with {x : Num 5}
                  {call f 4}}}}")
      => 7)
(test (run "{call {with {x : Num 3}
                    {fun {y : Num} : Num {+ x y}}}
                  4}")
      => 7)
(test (run "{with {f : {Num -> Num}
                  {with {x : Num 3} {fun {y : Num} : Num {+ x y}}}}
              {with {x : Num 100}
                {call f 4}}}")
      => 7)
(test (run "{call {call {fun {x : {Num -> {Num -> Num}}}
                            : {Num -> Num}
                          {call x 1}}
                        {fun {x : Num} : {Num -> Num}
                          {fun {y : Num} : Num {+ x y}}}}
                  123}")
      => 124)
(test (run "{call {fun {x : Num} : Num {if {< x 2} {+ x 5} {+ x 6}}}
                  1}")
      => 6)
(test (run "{call {fun {x : Num} : Num {if {< x 2} {+ x 5} {+ x 6}}}
                  2}")
      => 8)

One thing that is very obvious when you look at the examples is that this language is way too verbose to be practical — types are repeated over and over again. If you look carefully at the typechecking fragments for the two relevant expressions — fun and with — you can see that we can actually get rid of almost all of the type annotations.

Improving Picky

The following version does exactly that: no types are mentioned except for the input type of a function. Note that we can do this at this point because our language is so simple that many pieces of code have a specific type. (For example, if we add polymorphism, things get more complicated.)


picky2.rkt
;; The Picky interpreter, almost no explicit types

#lang pl

#|
The grammar:
  <PICKY> ::= <num>
            | <id>
            | { + <PICKY> <PICKY> }
            | { - <PICKY> <PICKY> }
            | { = <PICKY> <PICKY> }
            | { < <PICKY> <PICKY> }
            | { fun { <id> : <TYPE> } <PICKY> }
            | { call <PICKY> <PICKY> }
            | { with { <id> <PICKY> } <PICKY> }
            | { if <PICKY> <PICKY> <PICKY> }
  <TYPE>  ::= Num | Number
            | Bool | Boolean
            | { <TYPE> -> <TYPE> }

Evaluation rules:
  eval(N,env)                = N
  eval(x,env)                = lookup(x,env)
  eval({+ E1 E2},env)        = eval(E1,env) + eval(E2,env)
  eval({- E1 E2},env)        = eval(E1,env) - eval(E2,env)
  eval({= E1 E2},env)        = eval(E1,env) = eval(E2,env)
  eval({< E1 E2},env)        = eval(E1,env) < eval(E2,env)
  eval({fun {x} E},env)      = <{fun {x} E}, env>
  eval({call E1 E2},env1)    = eval(B,extend(x,eval(E2,env1),env2))
                              if eval(E1,env1) = <{fun {x} B}, env2>
                            = error!  otherwise <-- never happens
  eval({with {x E1} E2},env) = eval(E2,extend(x,eval(E1,env),env))
  eval({if E1 E2 E3},env)    = eval(E2,env)  if eval(E1,env) is true
                            = eval(E3,env)  otherwise

Type checking rules (note how implicit types are made):

  Γ ⊢ n : Number

  Γ ⊢ x : Γ(x)

  Γ ⊢ A : Number  Γ ⊢ B : Number
  ———————————————————————————————
      Γ ⊢ {+ A B} : Number

  Γ ⊢ A : Number  Γ ⊢ B : Number
  ———————————————————————————————
      Γ ⊢ {< A B} : Boolean

          Γ[x:=τ₁] ⊢ E : τ₂
  —————————————————————————————————
  Γ ⊢ {fun {x : τ₁} E} : (τ₁ -> τ₂)

  Γ ⊢ F : (τ₁ -> τ₂)  Γ ⊢ V : τ₁
  ——————————————————————————————
      Γ ⊢ {call F V} : τ₂

  Γ ⊢ V : τ₁  Γ[x:=τ₁] ⊢ E : τ₂
  ——————————————————————————————
    Γ ⊢ {with {x V} E} : τ₂

  Γ ⊢ C : Boolean  Γ ⊢ T : τ  Γ ⊢ E : τ
  ———————————————————————————————————————
            Γ ⊢ {if C T E} : τ

|#

(define-type PICKY
  [Num  Number]
  [Id    Symbol]
  [Add  PICKY PICKY]
  [Sub  PICKY PICKY]
  [Equal PICKY PICKY]
  [Less  PICKY PICKY]
  [Fun  Symbol TYPE PICKY]  ; no output type
  [Call  PICKY PICKY]
  [With  Symbol PICKY PICKY] ; no types here
  [If    PICKY PICKY PICKY])

(define-type TYPE
  [NumT]
  [BoolT]
  [FunT TYPE TYPE])

(: parse-sexpr : Sexpr -> PICKY)
;; parses s-expressions into PICKYs
(define (parse-sexpr sexpr)
  (match sexpr
    [(number: n)    (Num n)]
    [(symbol: name) (Id name)]
    [(list '+ lhs rhs) (Add  (parse-sexpr lhs) (parse-sexpr rhs))]
    [(list '- lhs rhs) (Sub  (parse-sexpr lhs) (parse-sexpr rhs))]
    [(list '= lhs rhs) (Equal (parse-sexpr lhs) (parse-sexpr rhs))]
    [(list '< lhs rhs) (Less  (parse-sexpr lhs) (parse-sexpr rhs))]
    [(list 'call fun arg)
                      (Call  (parse-sexpr fun) (parse-sexpr arg))]
    [(list 'if c t e)
    (If (parse-sexpr c) (parse-sexpr t) (parse-sexpr e))]
    [(cons 'fun more)
    (match sexpr
      [(list 'fun (list (symbol: name) ': itype) body)
        (Fun name (parse-type-sexpr itype) (parse-sexpr body))]
      [else (error 'parse-sexpr "bad `fun' syntax in ~s" sexpr)])]
    [(cons 'with more)
    (match sexpr
      [(list 'with (list (symbol: name) named) body)
        (With name (parse-sexpr named) (parse-sexpr body))]
      [else (error 'parse-sexpr "bad `with' syntax in ~s" sexpr)])]
    [else (error 'parse-sexpr "bad expression syntax: ~s" sexpr)]))

(: parse-type-sexpr : Sexpr -> TYPE)
;; parses s-expressions into TYPEs
(define (parse-type-sexpr sexpr)
  (match sexpr
    ['Number  (NumT)]
    ['Boolean (BoolT)]
    ;; allow shorter names too
    ['Num  (NumT)]
    ['Bool (BoolT)]
    [(list itype '-> otype)
    (FunT (parse-type-sexpr itype) (parse-type-sexpr otype))]
    [else (error 'parse-type-sexpr "bad type syntax in ~s" sexpr)]))

(: parse : String -> PICKY)
;; parses a string containing a PICKY expression to a PICKY AST
(define (parse str)
  (parse-sexpr (string->sexpr str)))

;; Typechecker and related types and helpers

;; this is similar to ENV, but it holds type information for the
;; identifiers during typechecking; it is essentially "Γ"
(define-type TYPEENV
  [EmptyTypeEnv]
  [ExtendTypeEnv Symbol TYPE TYPEENV])

(: type-lookup : Symbol TYPEENV -> TYPE)
;; similar to `lookup' for type environments; note that the
;; error is phrased as a typecheck error, since this indicates
;; a failure at the type checking stage
(define (type-lookup name typeenv)
  (cases typeenv
    [(EmptyTypeEnv) (error 'typecheck "no binding for ~s" name)]
    [(ExtendTypeEnv id type rest-env)
    (if (eq? id name) type (type-lookup name rest-env))]))

(: typecheck : PICKY TYPE TYPEENV -> Void)
;; Checks that the given expression has the specified type.
;; Used only for side-effects (to throw a type error), so return
;; a void value.
(define (typecheck expr type type-env)
  (unless (equal? type (typecheck* expr type-env))
    (error 'typecheck "type error for ~s: expecting a ~s"
          expr type)))

(: typecheck* : PICKY TYPEENV -> TYPE)
;; Returns the type of the given expression (which also means that
;; it checks it). This is a helper for the real typechecker that
;; also checks a specific return type.
(define (typecheck* expr type-env)
  (: two-nums : PICKY PICKY -> Void)
  (define (two-nums e1 e2)
    (typecheck e1 (NumT) type-env)
    (typecheck e2 (NumT) type-env))
  (cases expr
    [(Num n) (NumT)]
    [(Id name) (type-lookup name type-env)]
    [(Add  l r) (two-nums l r) (NumT)]
    [(Sub  l r) (two-nums l r) (NumT)]
    [(Equal l r) (two-nums l r) (BoolT)]
    [(Less  l r) (two-nums l r) (BoolT)]
    [(Fun bound-id in-type bound-body)
    (FunT in-type
          (typecheck* bound-body
                      (ExtendTypeEnv bound-id in-type type-env)))]
    [(Call fun arg)
    (cases (typecheck* fun type-env)
      [(FunT in-type out-type)
        (typecheck arg in-type type-env)
        out-type]
      [else (error 'typecheck "type error for ~s: expecting a fun"
                    expr)])]
    [(With bound-id named-expr bound-body)
    (typecheck* bound-body
                (ExtendTypeEnv bound-id
                                (typecheck* named-expr type-env)
                                type-env))]
    [(If cond-expr then-expr else-expr)
    (typecheck cond-expr (BoolT) type-env)
    (let ([type (typecheck* then-expr type-env)])
      (typecheck else-expr type type-env) ; enforce same type
      type)]))

;; Evaluator and related types and helpers

(define-type ENV
  [EmptyEnv]
  [Extend Symbol VAL ENV])

(define-type VAL
  [NumV  Number]
  [BoolV Boolean]
  [FunV  Symbol PICKY ENV])

(: lookup : Symbol ENV -> VAL)
;; lookup a symbol in an environment, return its value or throw an
;; error if it isn't bound
(define (lookup name env)
  (cases env
    [(EmptyEnv) (error 'lookup "no binding for ~s" name)]
    [(Extend id val rest-env)
    (if (eq? id name) val (lookup name rest-env))]))

(: strip-numv : Symbol VAL -> Number)
;; converts a VAL to a Racket number if possible, throws an error if
;; not using the given name for the error message
(define (strip-numv name val)
  (cases val
    [(NumV n) n]
    ;; this error will never be reached, see below for more
    [else (error name "expected a number, got: ~s" val)]))

(: arith-op : (Number Number -> Number) VAL VAL -> VAL)
;; gets a Racket numeric binary operator, and uses it within a NumV
;; wrapper
(define (arith-op op val1 val2)
  (NumV (op (strip-numv 'arith-op val1)
            (strip-numv 'arith-op val2))))

(: bool-op : (Number Number -> Boolean) VAL VAL -> VAL)
;; gets a Racket numeric binary predicate, and uses it
;; within a BoolV wrapper
(define (bool-op op val1 val2)
  (BoolV (op (strip-numv 'bool-op val1)
            (strip-numv 'bool-op val2))))

(: eval : PICKY ENV -> VAL)
;; evaluates PICKY expressions by reducing them to values
(define (eval expr env)
  (cases expr
    [(Num n) (NumV n)]
    [(Id name) (lookup name env)]
    [(Add  l r) (arith-op + (eval l env) (eval r env))]
    [(Sub  l r) (arith-op - (eval l env) (eval r env))]
    [(Equal l r) (bool-op  = (eval l env) (eval r env))]
    [(Less  l r) (bool-op  < (eval l env) (eval r env))]
    [(Fun bound-id in-type bound-body)
    ;; note that types are not used at runtime,
    ;; so they're not stored in the closure
    (FunV bound-id bound-body env)]
    [(Call fun-expr arg-expr)
    (define fval (eval fun-expr env))
    (cases fval
      [(FunV bound-id bound-body f-env)
        (eval bound-body
              (Extend bound-id (eval arg-expr env) f-env))]
      ;; `cases' requires complete coverage of all variants, but
      ;; this `else' is never used since we typecheck programs
      [else (error 'eval "`call' expects a function, got: ~s"
                    fval)])]
    [(With bound-id named-expr bound-body)
    (eval bound-body (Extend bound-id (eval named-expr env) env))]
    [(If cond-expr then-expr else-expr)
    (let ([bval (eval cond-expr env)])
      (if (cases bval
            [(BoolV b) b]
            ;; same as above: this case is never reached
            [else (error 'eval "`if' expects a boolean, got: ~s"
                                bval)])
        (eval then-expr env)
        (eval else-expr env)))]))

(: run : String -> Number)
;; evaluate a PICKY program contained in a string
(define (run str)
  (let ([prog (parse str)])
    (typecheck prog (NumT) (EmptyTypeEnv))
    (let ([result (eval prog (EmptyEnv))])
      (cases result
        [(NumV n) n]
        ;; this error is never reached, since we make sure
        ;; that the program always evaluates to a number above
        [else (error 'run "evaluation returned a non-number: ~s"
                          result)]))))

;; tests -- including translations of the FLANG tests
(test (run "5") => 5)
(test (run "{fun {x : Num} {+ x 1}}") =error> "type error")
(test (run "{call {fun {x : Num} {+ x 1}} 4}") => 5)
(test (run "{with {x 3} {+ x 1}}") => 4)
(test (run "{with {identity {fun {x : Num} x}} {call identity 1}}")
      => 1)
(test (run "{with {add3 {fun {x : Num} {+ x 3}}}
              {call add3 1}}")
      => 4)
(test (run "{with {add3 {fun {x : Num} {+ x 3}}}
              {with {add1 {fun {x : Num} {+ x 1}}}
                {with {x 3}
                  {call add1 {call add3 x}}}}}")
      => 7)
(test (run "{with {identity {fun {x : {Num -> Num}} x}}
              {with {foo {fun {x : Num} {+ x 1}}}
                {call {call identity foo} 123}}}")
      => 124)
(test (run "{with {x 3}
              {with {f {fun {y : Num} {+ x y}}}
                {with {x 5} {call f 4}}}}")
      => 7)
(test (run "{call {with {x 3} {fun {y : Num} {+ x y}}} 4}")
      => 7)
(test (run "{with {f {with {x 3} {fun {y : Num} {+ x y}}}}
              {with {x 100}
                {call f 4}}}")
      => 7)
(test (run "{call {call {fun {x : {Num -> {Num -> Num}}} {call x 1}}
                        {fun {x : Num} {fun {y : Num} {+ x y}}}}
                  123}")
      => 124)
(test (run "{call {fun {x : Num} {if {< x 2} {+ x 5} {+ x 6}}} 1}")
      => 6)
(test (run "{call {fun {x : Num} {if {< x 2} {+ x 5} {+ x 6}}} 2}")
      => 8)

Finally, an obvious question is whether we can get rid of all of the type declarations. The main point here is that we need to somehow be able to typecheck expressions and assign “temporary types” to them that can later change — for example, when we typecheck this:

{with {identity {fun {x} x}}
  {call identity 1}}

we need to somehow decide that the named expression has a general function type, with no commitment on the actual input and output types — and then change them after we typecheck the body. (We could try to resolve that somehow by typechecking the body first, but that will not work, since the body must be checked with some type assigned to the identifier, or it will fail.)

Even better…

This can be done using type variables — things that contain boxes that can be used to change types as typechecking progresses. The following version does that. (Also, it gets rid of the typecheck* helper, since the same can be achieved by using a type variable and a call to typecheck.) Note the interesting tests at the end.


picky3.rkt
;; The Picky interpreter, no explicit types

#lang pl

#|
The grammar:
  <PICKY> ::= <num>
            | <id>
            | { + <PICKY> <PICKY> }
            | { - <PICKY> <PICKY> }
            | { = <PICKY> <PICKY> }
            | { < <PICKY> <PICKY> }
            | { fun { <id> } <PICKY> }
            | { call <PICKY> <PICKY> }
            | { with { <id> <PICKY> } <PICKY> }
            | { if <PICKY> <PICKY> <PICKY> }

The types are no longer part of the input syntax.

Evaluation rules:
  eval(N,env)                = N
  eval(x,env)                = lookup(x,env)
  eval({+ E1 E2},env)        = eval(E1,env) + eval(E2,env)
  eval({- E1 E2},env)        = eval(E1,env) - eval(E2,env)
  eval({= E1 E2},env)        = eval(E1,env) = eval(E2,env)
  eval({< E1 E2},env)        = eval(E1,env) < eval(E2,env)
  eval({fun {x} E},env)      = <{fun {x} E}, env>
  eval({call E1 E2},env1)    = eval(B,extend(x,eval(E2,env1),env2))
                              if eval(E1,env1) = <{fun {x} B}, env2>
                            = error!  otherwise <-- never happens
  eval({with {x E1} E2},env) = eval(E2,extend(x,eval(E1,env),env))
  eval({if E1 E2 E3},env)    = eval(E2,env)  if eval(E1,env) is true
                            = eval(E3,env)  otherwise

Type checking rules (note the extra complexity in the `fun' rule):

  Γ ⊢ n : Number

  Γ ⊢ x : Γ(x)

  Γ ⊢ A : Number  Γ ⊢ B : Number
  ———————————————————————————————
      Γ ⊢ {+ A B} : Number

  Γ ⊢ A : Number  Γ ⊢ B : Number
  ———————————————————————————————
      Γ ⊢ {< A B} : Boolean

      Γ[x:=τ₁] ⊢ E : τ₂
  ————————————————————————————
  Γ ⊢ {fun {x} E} : (τ₁ -> τ₂)

  Γ ⊢ F : (τ₁ -> τ₂)  Γ ⊢ V : τ₁
  ——————————————————————————————
      Γ ⊢ {call F V} : τ₂

  Γ ⊢ C : Boolean  Γ ⊢ T : τ  Γ ⊢ E : τ
  ———————————————————————————————————————
            Γ ⊢ {if C T E} : τ

  Γ ⊢ V : τ₁  Γ[x:=τ₁] ⊢ E : τ₂
  ——————————————————————————————
    Γ ⊢ {with {x V} E} : τ₂

|#

(define-type PICKY
  [Num  Number]
  [Id    Symbol]
  [Add  PICKY PICKY]
  [Sub  PICKY PICKY]
  [Equal PICKY PICKY]
  [Less  PICKY PICKY]
  [Fun  Symbol PICKY] ; no types even here
  [Call  PICKY PICKY]
  [With  Symbol PICKY PICKY]
  [If    PICKY PICKY PICKY])

(: parse-sexpr : Sexpr -> PICKY)
;; parses s-expressions into PICKYs
(define (parse-sexpr sexpr)
  (match sexpr
    [(number: n)    (Num n)]
    [(symbol: name) (Id name)]
    [(list '+ lhs rhs) (Add  (parse-sexpr lhs) (parse-sexpr rhs))]
    [(list '- lhs rhs) (Sub  (parse-sexpr lhs) (parse-sexpr rhs))]
    [(list '= lhs rhs) (Equal (parse-sexpr lhs) (parse-sexpr rhs))]
    [(list '< lhs rhs) (Less  (parse-sexpr lhs) (parse-sexpr rhs))]
    [(list 'call fun arg)
                      (Call  (parse-sexpr fun) (parse-sexpr arg))]
    [(list 'if c t e)
    (If (parse-sexpr c) (parse-sexpr t) (parse-sexpr e))]
    [(cons 'fun more)
    (match sexpr
      [(list 'fun (list (symbol: name)) body)
        (Fun name (parse-sexpr body))]
      [else (error 'parse-sexpr "bad `fun' syntax in ~s" sexpr)])]
    [(cons 'with more)
    (match sexpr
      [(list 'with (list (symbol: name) named) body)
        (With name (parse-sexpr named) (parse-sexpr body))]
      [else (error 'parse-sexpr "bad `with' syntax in ~s" sexpr)])]
    [else (error 'parse-sexpr "bad expression syntax: ~s" sexpr)]))

(: parse : String -> PICKY)
;; parses a string containing a PICKY expression to a PICKY AST
(define (parse str)
  (parse-sexpr (string->sexpr str)))

;; Typechecker and related types and helpers

;; this is not a part of the AST now, and it also has a new variant
;; for type variables (see `same-type' for how it's used)
(define-type TYPE
  [NumT]
  [BoolT]
  [FunT TYPE TYPE]
  [?T (Boxof (U TYPE #f))])

;; this is similar to ENV, but it holds type information for the
;; identifiers during typechecking; it is essentially "Γ"
(define-type TYPEENV
  [EmptyTypeEnv]
  [ExtendTypeEnv Symbol TYPE TYPEENV])

(: type-lookup : Symbol TYPEENV -> TYPE)
;; similar to `lookup' for type environments; note that the
;; error is phrased as a typecheck error, since this indicates
;; a failure at the type checking stage
(define (type-lookup name typeenv)
  (cases typeenv
    [(EmptyTypeEnv) (error 'typecheck "no binding for ~s" name)]
    [(ExtendTypeEnv id type rest-env)
    (if (eq? id name) type (type-lookup name rest-env))]))

(: typecheck : PICKY TYPE TYPEENV -> Void)
;; Checks that the given expression has the specified type. Used
;; only for side-effects, so return a void value. There are two
;; side-effects that it can do: throw an error if the input
;; expression doesn't typecheck, and type variables can be mutated
;; once their values are known -- this is done by the `types='
;; utility function that follows.
(define (typecheck expr type type-env)
  ;; convenient helpers
  (: type= : TYPE -> Void)
  (define (type= type2) (types= type type2 expr))
  (: two-nums : PICKY PICKY -> Void)
  (define (two-nums e1 e2)
    (typecheck e1 (NumT) type-env)
    (typecheck e2 (NumT) type-env))
  (cases expr
    [(Num n)    (type= (NumT))]
    [(Id name)  (type= (type-lookup name type-env))]
    [(Add  l r) (type= (NumT))  (two-nums l r)] ; note that the
    [(Sub  l r) (type= (NumT))  (two-nums l r)] ; order in these
    [(Equal l r) (type= (BoolT)) (two-nums l r)] ; things can be
    [(Less  l r) (type= (BoolT)) (two-nums l r)] ; swapped...
    [(Fun bound-id bound-body)
    (let (;; the identity of these type variables is important!
          [itype (?T (box #f))]
          [otype (?T (box #f))])
      (type= (FunT itype otype))
      (typecheck bound-body otype
                  (ExtendTypeEnv bound-id itype type-env)))]
    [(Call fun arg)
    (let ([type2 (?T (box #f))]) ; same here
      (typecheck arg type2 type-env)
      (typecheck fun (FunT type2 type) type-env))]
    [(With bound-id named-expr bound-body)
    (let ([type2 (?T (box #f))]) ; and here
      (typecheck named-expr type2 type-env)
      (typecheck bound-body type
                  (ExtendTypeEnv bound-id type2 type-env)))]
    [(If cond-expr then-expr else-expr)
    (typecheck cond-expr (BoolT) type-env)
    (typecheck then-expr type type-env)
    (typecheck else-expr type type-env)]))

(: types= : TYPE TYPE PICKY -> Void)
;; Compares the two input types, and throws an error if they don't
;; match. This function is the core of `typecheck', and it is used
;; only for its side-effect. Another side effect in addition to
;; throwing an error is when type variables are present -- they will
;; be mutated in an attempt to make the typecheck succeed. Note that
;; the two type arguments are not symmetric: the first type is the
;; expected one, and the second is the one that the code implies
;; -- but this matters only for the error messages. Also, the
;; expression input is used only for these errors. As the code
;; clearly shows, the main work is done by `same-type' below.
(define (types= type1 type2 expr)
  (unless (same-type type1 type2)
    (error 'typecheck "type error for ~s: expecting ~a, got ~a"
          expr (type->string type1) (type->string type2))))

(: type->string : TYPE -> String)
;; Convert a TYPE to a human readable string,
;; used for error messages
(define (type->string type)
  (format "~s" type)
  ;; The code below would be useful, but unfortunately it doesn't
  ;; work in some cases. To see the problem, try to run the example
  ;; below that applies identity on itself. It's left here so you
  ;; can try it out when you're not running into this problem.
  #|
  (cases type
    [(NumT)  "Num"]
    [(BoolT) "Bool"]
    [(FunT i o)
    (string-append (type->string i) " -> " (type->string o))]
    [(?T box)
    (let ([t (unbox box)])
      (if t (type->string t) "?"))])
  |#)

;; Convenience type to make it possible to have a single `cases'
;; dispatch on two types instead of nesting `cases' in each branch
(define-type 2TYPES [PairT TYPE TYPE])

(: same-type : TYPE TYPE -> Boolean)
;; Compares the two input types, returning true or false depending on
;; whether they're the same. The process might involve mutating ?T
;; type variables.
(define (same-type type1 type2)
  ;; the `PairT' type is only used to conveniently match on both
  ;; types in a single `cases', it's not used in any other way
  (cases (PairT type1 type2)
    ;; flatten the first type, or set it to the second if it's unset
    [(PairT (?T box) type2)
    (let ([t1 (unbox box)])
      (if t1
        (same-type t1 type2)
        (begin (set-box! box type2) #t)))]
    ;; do the same for the second (reuse the above case)
    [(PairT type1 (?T box)) (same-type type2 type1)]
    ;; the rest are obvious
    [(PairT (NumT) (NumT)) #t]
    [(PairT (BoolT) (BoolT)) #t]
    [(PairT (FunT i1 o1) (FunT i2 o2))
    (and (same-type i1 i2) (same-type o1 o2))]
    [else #f]))

;; Evaluator and related types and helpers

(define-type ENV
  [EmptyEnv]
  [Extend Symbol VAL ENV])

(define-type VAL
  [NumV  Number]
  [BoolV Boolean]
  [FunV  Symbol PICKY ENV])

(: lookup : Symbol ENV -> VAL)
;; lookup a symbol in an environment, return its value or throw an
;; error if it isn't bound
(define (lookup name env)
  (cases env
    [(EmptyEnv) (error 'lookup "no binding for ~s" name)]
    [(Extend id val rest-env)
    (if (eq? id name) val (lookup name rest-env))]))

(: strip-numv : Symbol VAL -> Number)
;; converts a VAL to a Racket number if possible, throws an error if
;; not using the given name for the error message
(define (strip-numv name val)
  (cases val
    [(NumV n) n]
    ;; this error will never be reached, see below for more
    [else (error name "expected a number, got: ~s" val)]))

(: arith-op : (Number Number -> Number) VAL VAL -> VAL)
;; gets a Racket numeric binary operator, and uses it within a NumV
;; wrapper
(define (arith-op op val1 val2)
  (NumV (op (strip-numv 'arith-op val1)
            (strip-numv 'arith-op val2))))

(: bool-op : (Number Number -> Boolean) VAL VAL -> VAL)
;; gets a Racket numeric binary predicate, and uses it
;; within a BoolV wrapper
(define (bool-op op val1 val2)
  (BoolV (op (strip-numv 'bool-op val1)
            (strip-numv 'bool-op val2))))

(: eval : PICKY ENV -> VAL)
;; evaluates PICKY expressions by reducing them to values
(define (eval expr env)
  (cases expr
    [(Num n) (NumV n)]
    [(Id name) (lookup name env)]
    [(Add  l r) (arith-op + (eval l env) (eval r env))]
    [(Sub  l r) (arith-op - (eval l env) (eval r env))]
    [(Equal l r) (bool-op  = (eval l env) (eval r env))]
    [(Less  l r) (bool-op  < (eval l env) (eval r env))]
    [(Fun bound-id bound-body) (FunV bound-id bound-body env)]
    [(Call fun-expr arg-expr)
    (define fval (eval fun-expr env))
    (cases fval
      [(FunV bound-id bound-body f-env)
        (eval bound-body
              (Extend bound-id (eval arg-expr env) f-env))]
      ;; `cases' requires complete coverage of all variants, but
      ;; this `else' is never used since we typecheck programs
      [else (error 'eval "`call' expects a function, got: ~s"
                    fval)])]
    [(With bound-id named-expr bound-body)
    (eval bound-body (Extend bound-id (eval named-expr env) env))]
    [(If cond-expr then-expr else-expr)
    (let ([bval (eval cond-expr env)])
      (if (cases bval
            [(BoolV b) b]
            ;; same as above: this case is never reached
            [else (error 'eval "`if' expects a boolean, got: ~s"
                                bval)])
        (eval then-expr env)
        (eval else-expr env)))]))

(: run : String -> Number)
;; evaluate a PICKY program contained in a string
(define (run str)
  (let ([prog (parse str)])
    (typecheck prog (NumT) (EmptyTypeEnv))
    (let ([result (eval prog (EmptyEnv))])
      (cases result
        [(NumV n) n]
        ;; this error is never reached, since we make sure
        ;; that the program always evaluates to a number above
        [else (error 'run "evaluation returned a non-number: ~s"
                          result)]))))

;; tests -- including translations of the FLANG tests
(test (run "5") => 5)
(test (run "{fun {x} {+ x 1}}") =error> "type error")
(test (run "{call {fun {x} {+ x 1}} 4}") => 5)
(test (run "{with {x 3} {+ x 1}}") => 4)
(test (run "{with {identity {fun {x} x}} {call identity 1}}") => 1)
(test (run "{with {add3 {fun {x} {+ x 3}}} {call add3 1}}") => 4)
(test (run "{with {add3 {fun {x} {+ x 3}}}
              {with {add1 {fun {x} {+ x 1}}}
                {with {x 3}
                  {call add1 {call add3 x}}}}}")
      => 7)
(test (run "{with {identity {fun {x} x}}
              {with {foo {fun {x} {+ x 1}}}
                {call {call identity foo} 123}}}")
      => 124)
(test (run "{with {x 3}
              {with {f {fun {y} {+ x y}}}
                {with {x 5} {call f 4}}}}")
      => 7)
(test (run "{call {with {x 3} {fun {y} {+ x y}}} 4}")
      => 7)
(test (run "{with {f {with {x 3} {fun {y} {+ x y}}}}
              {with {x 100}
                {call f 4}}}")
      => 7)
(test (run "{call {call {fun {x} {call x 1}}
                        {fun {x} {fun {y} {+ x y}}}}
                  123}")
      => 124)
(test (run "{call {fun {x} {if {< x 2} {+ x 5} {+ x 6}}} 1}") => 6)
(test (run "{call {fun {x} {if {< x 2} {+ x 5} {+ x 6}}} 2}") => 8)

;; Note that we still have a language with the same type system,
;; even though it looks like it could be more flexible -- for
;; example, the following two examples work:
(test (run "{with {identity {fun {x} x}}
              {call identity 1}}")
      => 1)
(test (run "{with {identity {fun {x} x}}
              {if {call identity {< 1 2}} 1 2}}")
      => 1)
;; but this doesn't, since identity can not be used with different
;; types:
(test (run "{with {identity {fun {x} x}}
              {if {call identity {< 1 2}}
                {call identity 1}
                2}}")
      =error> "type error")
;; this doesn't work either -- with an interesting error message:
(test (run "{with {identity {fun {x} x}}
              {call {call identity identity} 1}}")
      =error> "type error")
;; ... but these two work fine:
(test (run "{with {identity1 {fun {x} x}}
              {with {identity2 {fun {x} x}}
                {+ {call identity1 1}
                  {if {call identity2 {< 1 2}} 1 2}}}}")
      => 2)
(test (run "{with {identity1 {fun {x} x}}
              {with {identity2 {fun {x} x}}
                {call {call identity1 identity2} 1}}}")
      => 1)

Here are two other interesting things to try out — note, in particular, the type that is shown in the error message:

(run "{fun {x} x}")
(run "{call {fun {x} {call x x}} {fun {x} {call x x}}}")

More specifically, it is interesting to try the following to see explicitly what our typechecker infers for {fun {x} {call x x}}:

> (define b (?T (box #f)))
> (typecheck (parse "{fun {x} {call x x}}") b (EmptyTypeEnv))
> (cases b [(?T b) (unbox b)] [else #f])
- : TYPE
(?T #&(FunT #0=(?T #&(FunT (?T #&#0#) #1=(?T #&#f))) #1#))

To see it clearly, we can replace each (?T #&...) with the ... that it contains:

(FunT #0=(FunT #0# #1=#f) #1#)

and to clarify further, convert each FunT to an infix ->, and use α for the unknown “type variable” — the unset (#f) box that is labeled #1 and used twice:

(#0=(#0# -> α) -> α)

This shows us that the type is recursive.

Sidenote#1: You can now go back to the code and look at type->string, which is an attempt to implement a nice string representation for types. Can you see now why it cannot work (at least not without more complex code)?

Sidenote#2: Compare the above with OCaml, which can infer such types when started with a -rectypes flag:

# let foo = fun x -> x x ;;
val foo : ('a -> 'b as 'a) -> 'b = <fun>

The type here is identical to our type: 'a and 'b should be read as α and β resp., and as is used in the same way that Racket shows a cyclic structure using #0#. As for the question of why OCaml doesn’t always behave as if the -rectypes flag is given, the answer is that its type checker might fall into the same trap that ours does — it gets stuck with:

# let foo = (fun x -> x x) (fun x -> x x) ;;

The α that we see here goes “kind of” in the direction of something that resembles a polymorphic type, but we really don’t have polymorphism in our language: each box can be filled just once with one type, and from then on that type is used in all further uses of the same box. For example, note the type error we get with:

{with {f {fun {x} x}}
  {call f {< {call f 1} {call f 2}}}}
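
Expressed as a test against the picky3.rkt implementation above (this test is not in the file; it is just a sketch of what we would expect to see):

(test (run "{with {f {fun {x} x}}
              {call f {< {call f 1} {call f 2}}}}")
      =error> "type error")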

Typing Recursion

We already know that without recursion life can be very boring… So we obviously want to be able to have recursive functions — but the question is how will they interact with our type system. One thing that we have seen is that by just having functions we get recursion. This was achieved by the Y combinator function. It seems like the same should apply to our simple typed language. The core of the Y combinator was using an expression similar to Omega that generates the infinite loop that is needed. In our language:

{call {fun {x} {call x x}} {fun {x} {call x x}}}

This expression was impossible to evaluate completely since it never terminates, but it served as a basis for the Y combinator so we need to be able to perform this kind of infinite loop. Now, consider the type of the first x — it’s used in a call expression as a function, so its type must be a function type, say τ₁->τ₂. In addition, its argument is x itself so its type is also τ₁ — this means that we have:

τ₁ -> τ₂ = τ₁

and from this we get:

=> τ₁ = τ₁ -> τ₂
      = (τ₁ -> τ₂) -> τ₂
      = ((τ₁ -> τ₂) -> τ₂) -> τ₂
      = ...

And this is a type that does not exist in our type system, since we can only have finite types. Therefore, we have a proof by contradiction that this expression cannot be typed in our system.
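
For instance, in the explicitly-typed version of the interpreter above (the one where fun arguments carry type annotations) there is no annotation for x that makes this expression typecheck. Here is one attempt, written as a sketch of a test (it is not part of that file's test suite); any other finite annotation fails in the same way:

(test (run "{call {fun {x : {Num -> Num}} {call x x}}
                  {fun {x : {Num -> Num}} {call x x}}}")
      =error> "type error")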

This is closely related to the fact that the typed language we have described so far is “strongly normalizing”: no matter what program you write, it will always terminate! To see this, very informally, consider this language without functions — this is clearly a language where all programs terminate, since the only way to create a loop is through function applications. Now add functions and function application — in the typing rules for the resulting language, each fun creates a function type (creates an arrow), and each function application consumes a function type (deletes one arrow) — since types are finite, the number of arrows is finite, which means that the number of possible applications is finite, so all programs must run in finite time.

Note that when we discussed how to type the Y combinator we needed to use a Rec constructor — something that the current type system does not have. With such a constructor, we could have easily solved the τ₁ = τ₁ -> τ₂ equation with (Rec τ₁ (τ₁ -> τ₂)).

In our language, therefore, the halting problem doesn’t even exist, since all programs (that are properly typed) are guaranteed to halt. This property is useful in many real-life situations (consider firewall rules, configuration files, devices with embedded code). But the language that we get is very limited as a result — we really want the power to shoot our feet…

Extending Picky with recursion

As we have seen, our language is strongly normalizing, which means that to get general recursion, we must introduce a new construct (unlike previously, when we didn’t really need one). We can do this as we previously did — by adding a new construct to the language, or we can somehow extend the (sub) language of type descriptions to allow a new kind of type that can be used to solve the τ₁ = τ₁ -> τ₂ equation. An example of this solution would be similar to the Rec type constructor in Typed Racket: a new type constructor that allows a type to refer to itself — and using (Rec τ₁ (τ₁ -> τ₂)) as the solution. However, this complicates things: type descriptions are no longer unique, since we have Num, (Rec this Num), and (Rec this (Rec that Num)) that are all equal.

For simplicity we will now take the first route and add rec — an explicit recursive binder form — to the language (as with with, we’re going back to rec rather than bindrec to keep things simple).

First, the new BNF:

<PICKY> ::= <num>
          | <id>
          | { + <PICKY> <PICKY> }
          | { < <PICKY> <PICKY> }
          | { fun { <id> : <TYPE> } : <TYPE> <PICKY> }
          | { call <PICKY> <PICKY> }
          | { with { <id> : <TYPE> <PICKY> } <PICKY> }
          | { rec { <id> : <TYPE> <PICKY> } <PICKY> }
          | { if <PICKY> <PICKY> <PICKY> }

<TYPE>  ::= Number
          | Boolean
          | ( <TYPE> -> <TYPE> )

We now need to add a typing judgment for rec expressions. What should it look like?

            ???
———————————————————————————
Γ ⊢ {rec {x : τ₁ V} E} : τ₂

rec is similar to all the other local binding forms: like with, it can be seen as a combination of a function and an application. So we need to check the two things that those rules checked — first, check that the body expression has the right type, assuming that the type annotation given to x is valid:

  Γ[x:=τ₁] ⊢ E : τ₂  ???
———————————————————————————
Γ ⊢ {rec {x : τ₁ V} E} : τ₂

Now, we also want to add the other side — making sure that the τ₁ type annotation is valid:

Γ[x:=τ₁] ⊢ E : τ₂  Γ ⊢ V : τ₁
——————————————————————————————
Γ ⊢ {rec {x : τ₁ V} E} : τ₂

But that will not be possible in general — V is an expression that usually includes x itself — that’s the whole point. The conclusion is that we should use a similar trick to the one that we used to specify evaluation of recursive binders — the same environment is used for both the named expression and for the body expression:

Γ[x:=τ₁] ⊢ E : τ₂  Γ[x:=τ₁] ⊢ V : τ₁
—————————————————————————————————————
    Γ ⊢ {rec {x : τ₁ V} E} : τ₂

You can also see now that if this rule adds an arrow type to the Γ type environment (i.e., τ₁ is an arrow), then it is doing so in a way that makes it possible to use it over and over, making it possible to run infinite loops in this language.
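
To make this concrete, here is a rough sketch of what the corresponding typechecker clause could look like, in the style of the explicitly-typed checker above, assuming a hypothetical AST variant [Rec Symbol TYPE PICKY PICKY] that holds the name, its declared type, the named expression, and the body (none of this is in the code above):

    [(Rec bound-id declared-type named-expr bound-body)
     ;; both the named expression and the body are checked in the
     ;; *extended* environment, exactly as the typing rule requires
     (let ([new-env (ExtendTypeEnv bound-id declared-type type-env)])
       (typecheck named-expr declared-type new-env)
       (typecheck* bound-body new-env))]

The evaluator side would also need the usual treatment of recursive binders (for example, an environment that uses boxes), which is independent of the typing rule.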

Our complete language specification is below.

<PICKY> ::= <num>
          | <id>
          | { + <PICKY> <PICKY> }
          | { < <PICKY> <PICKY> }
          | { fun { <id> : <TYPE> } : <TYPE> <PICKY> }
          | { call <PICKY> <PICKY> }
          | { with { <id> : <TYPE> <PICKY> } <PICKY> }
          | { rec  { <id> : <TYPE> <PICKY> } <PICKY> }
          | { if <PICKY> <PICKY> <PICKY> }

<TYPE>  ::= Number
          | Boolean
          | ( <TYPE> -> <TYPE> )

Γ ⊢ n : Number

Γ ⊢ x : Γ(x)

Γ ⊢ A : Number  Γ ⊢ B : Number
———————————————————————————————
    Γ ⊢ {+ A B} : Number

Γ ⊢ A : Number  Γ ⊢ B : Number
———————————————————————————————
    Γ ⊢ {< A B} : Boolean

          Γ[x:=τ₁] ⊢ E : τ₂
——————————————————————————————————————
Γ ⊢ {fun {x : τ₁} : τ₂ E} : (τ₁ -> τ₂)

Γ ⊢ F : (τ₁ -> τ₂)  Γ ⊢ V : τ₁
——————————————————————————————
    Γ ⊢ {call F V} : τ₂

Γ ⊢ C : Boolean  Γ ⊢ T : τ  Γ ⊢ E : τ
———————————————————————————————————————
          Γ ⊢ {if C T E} : τ

Γ ⊢ V : τ₁  Γ[x:=τ₁] ⊢ E : τ₂
——————————————————————————————
Γ ⊢ {with {x : τ₁ V} E} : τ₂

Γ[x:=τ₁] ⊢ V : τ₁  Γ[x:=τ₁] ⊢ E : τ₂
—————————————————————————————————————
    Γ ⊢ {rec {x : τ₁ V} E} : τ₂

Typing Data

PLAI §27

An important concept that we have avoided so far is user-defined types. This issue exists in practically all languages, including the ones we did so far, since a language without the ability to create new user-defined types is a language with a major problem. (As a side note, we did talk about mimicking an object system using plain closures, but it turns out that this is insufficient as a replacement for true user-defined types — you can kind of see that in the Schlac language, where the lack of any types means that there is no type error.)

In the context of a statically typed language, this issue is even more important. Specifically, we talked about typing recursive code, but we should also consider typing recursive data. For example, we will start with a length function in an extension of the language that has empty?, rest, and NumCons and NumEmpty constructors:

{rec {length : ???
      {fun {l : ???} : Number
        {if {empty? l}
          0
          {+ 1 {call length {rest l}}}}}}
  {call length {NumCons 1 {NumCons 2 {NumCons 3 {NumEmpty}}}}}}

But adding all of these new functions as built-ins is getting messy: we want our language to have a form for defining new kinds of data. In this example, we want to be able to define the NumList type for lists of numbers. We therefore extend the language with a new with-type form for creating new user-defined types, using variants in a similar way to our own course language:

{with-type {NumList [NumEmpty]
                    [NumCons Number ???]}
  {rec {length : ???
        {fun {l : ???} : Number
          ...}}
    ...}}

We assume here that the NumList definition provides us with a number of new built-ins — NumEmpty and NumCons constructors, and assume also a cases form that can be used to both test a value and access its components (with the constructors serving as patterns). This makes the code a little different than what we started with:

{with-type {NumList [NumEmpty]
                    [NumCons Number ???]}
  {rec {length : ???
        {fun {l : ???} : Number
          {cases l
            [{NumEmpty}    0]
            [{NumCons x r} {+ 1 {call length r}}]}}}
    {call length {NumCons 1 {NumCons 2 {NumCons 3 {NumEmpty}}}}}}}

The question is what should the ??? be filled with? Clearly, recursive data types are very common and we need to support them. The scope of with-type should therefore be similar to rec, except that it works at the type level: the new type is available for its own definition. This is the complete code now:

{with-type {NumList [NumEmpty]
                    [NumCons Number NumList]}
  {rec {length : (NumList -> Number)
        {fun {l : NumList} : Number
          {cases l
            [{NumEmpty}    0]
            [{NumCons x r} {+ 1 {call length r}}]}}}
    {call length {NumCons 1 {NumCons 2 {NumCons 3 {NumEmpty}}}}}}}

(Note that in the course language we can do just that, and in addition, the Rec type constructor can be used to make up recursive types.)
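
For reference, here is a sketch of the direct analogue in the course language (the function name num-list-length is made up for this example):

(define-type NumList
  [NumEmpty]
  [NumCons Number NumList])

(: num-list-length : NumList -> Number)
;; counts the elements of a NumList, like the `length' above
(define (num-list-length l)
  (cases l
    [(NumEmpty)    0]
    [(NumCons x r) (+ 1 (num-list-length r))]))

(test (num-list-length (NumCons 1 (NumCons 2 (NumCons 3 (NumEmpty)))))
      => 3)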

An important property that we would like this type to have is for it to be well founded: that we’d never get stuck in some kind of type-level infinite loop. To see that this holds in this example, note that some of the variants are self-referential (only NumCons here), but there is at least one that is not (NumEmpty) — if there wasn’t any simple variant, then we would have no way to construct instances of this type to begin with!

[As a side note, if the language has lazy semantics, we could use such types — for example:

{with-type {NumList [NumCons Number NumList]}
  {rec {ones : NumList {NumCons 1 ones}}
    ...}}

Reasoning about such programs requires more than just induction though.]

Judgments for recursive types

If we want to have a language that is basically similar to the course language, then — as seen above — we’d use a similar cases expression. How should we type-check such expressions? In this case, we want to verify this:

Γ ⊢ {cases l [{NumEmpty} 0]
            [{NumCons x r} {+ 1 {call length r}}]} : Number

Similarly to the judgment for if expressions, we require that the two result expressions are numbers. Indeed, you can think about cases as a more primitive tool that has the functionality of if — in other words, given such user-defined types we could implement booleans as a new type and implement if using cases. For example, wrap programs with:

{with-type {Bool [True] [False]} ...}

and translate {if E1 E2 E3} to {cases E1 [{True} E2] [{False} E3]}.
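
For example, under such a wrapper a (hypothetical) program {if {True} 8 9} would turn into:

{with-type {Bool [True] [False]}
  {cases {True}
    [{True}  8]
    [{False} 9]}}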

Continuing with typing cases, we now have:

Γ ⊢ 0 : Number          Γ ⊢ {+ 1 {call length r}} : Number
————————————————————————————————————————————————————————————
Γ ⊢ {cases l [{NumEmpty} 0]
            [{NumCons x r} {+ 1 {call length r}}]} : Number

But this will not work — we have no type for r here, so we can’t prove the second subgoal. We need to consider the NumList type definition as something that, in addition to the new built-ins, provides us with type judgments for these built-ins. In the case of the NumCons variant, we know that using {NumCons x r} is a pattern that matches NumList values that are a result of this variant constructor, but it also binds x and r to the values of the two fields, and since all uses of the constructor are verified, the fields have the declared types. This means that we need to extend Γ in this rule so we’re able to prove the two subgoals. Note that we do the same for the NumEmpty case, except that there are no new bindings there.

Γ ⊢ 0 : Number
Γ[x:=Number; r:=NumList] ⊢ {+ 1 {call length r}} : Number
————————————————————————————————————————————————————————————
Γ ⊢ {cases l [{NumEmpty} 0]
            [{NumCons x r} {+ 1 {call length r}}]} : Number

Finally, we need to verify that the value itself — l — has the right type: that it is a NumList.

Γ ⊢ l : NumList
Γ ⊢ 0 : Number
Γ[x:=Number; r:=NumList] ⊢ {+ 1 {call length r}} : Number
————————————————————————————————————————————————————————————
Γ ⊢ {cases l [{NumEmpty} 0]
            [{NumCons x r} {+ 1 {call length r}}]} : Number

But why NumList and not some other defined type? This judgment needs to do a little more work: it should inspect all of the variants that are used in the branches, find the type that defines them, then use that type as the subgoal. Furthermore, to make the type checker more useful, it can check that we have complete coverage of the variants, and that no variant is used twice:

Γ ⊢ l : NumList
    (also need to show that NumEmpty and NumCons are all of
    the variants of NumList, with no repetition or extras.)
Γ ⊢ 0 : Number
Γ[x:=Number; r:=NumList] ⊢ {+ 1 {call length r}} : Number
————————————————————————————————————————————————————————————
Γ ⊢ {cases l [{NumEmpty} 0]
            [{NumCons x r} {+ 1 {call length r}}]} : Number

Note how this is different from the version in the textbook — it has a type-case expression with the type name mentioned explicitly — for example: {type-case l NumList {{NumEmpty} 0} ...}. This is essentially the same as having each defined type come with its own cases expression. Our rule needs to do a little more work, but overall it is a little easier to use. (And the same goes for the actual implementation of the two languages.)

In addition to cases, we should also have typing judgments for the constructors. These are much simpler, for example:

Γ ⊢ x : Number  Γ ⊢ r : NumList
————————————————————————————————
  Γ ⊢ {NumCons x r} : NumList

Alternatively, we could add the constructors as new functions instead of new special forms — so in the Picky language they’d be used in call expressions. The with-type will then create the bindings for its scope at runtime, and for the typechecker it will add the relevant types to Γ:

Γ[NumCons:=(Number NumList -> NumList); NumEmpty:=(-> NumList)]

(This requires functions of any arity, of course.) Using accessor functions in a similar way could be simpler than cases, but it would be less convenient for users.

Note about representation: a by-product of our type checker is that whenever we have a NumList value, we know that it must be an instance of either NumEmpty or NumCons. Therefore, we could represent such values as a wrapped value container, with a single bit that distinguishes the two. This is in contrast to dynamically typed languages like Racket, where every new type needs to have its own globally unique tag.

extra “Runaway” instances

Consider this code:

{with-type {NumList [NumEmpty] ...} {NumEmpty}}

We now know how to type check its validity, but what about the type of this whole expression? The obvious choice would be NumList:

{with-type {NumList [NumEmpty] ...} {NumEmpty}} : NumList

There is a subtle but important problem here: the expression evaluates to a NumList, but we can no longer use this value, since we’re out of the scope of the NumList type definition! In other words, we would typecheck a program that is pretty much useless.

Even if we were to allow such a value to flow to a different context with a NumList type definition, we wouldn’t want the two to be confused — following the principle of lexical scope, we’d want each type definition to be unique to its own scope even if it has the same concrete name. For example, using NumList as the type of the inner with-type here:

{with-type {NumList something-completely-different}
  {with-type {NumList [NumEmpty] ...}
    {NumEmpty}}}

would make it wrong.

(In fact, we might want to have a new type even if the value goes outside of this scope and back in. The default struct definitions in Racket have exactly this property — they’re generative — which means that each “call” to define-struct creates a new type, so:

(define (two-foos)
  (define (foo x)
    (struct foo (x))
    (foo x))
  (list (foo 1) (foo 2)))

returns two instances of two different foo types!)

One way to resolve this is to just forbid the type from escaping the scope of its definition — so we would forbid the type of the expression from being NumList, which makes

{with-type {NumList [NumEmpty] ...} {NumEmpty}} : NumList

invalid. But that’s not enough — what about returning a compound value that contains an instance of NumList? For example — what if we return a list or a function with a NumList instance?

{with-type {NumList [NumEmpty] ...}
  {fun {x} {NumEmpty}}} : Num -> NumList??

Obviously, we would need to extend this restriction: the resulting type should not mention the defined type at all — not even in lists or functions or anything else. This is actually easy to do: if the overall expression is type-checked in the surrounding lexical scope, then it is type-checked in the surrounding type environment (Γ), and that environment has nothing in it about NumList (well, nothing about this NumList).

Note that this is, very roughly speaking, what our course language does: define-type can only define new types when it is used at the top-level.

This works fine with the above assumption that such a value would be completely useless — but there are aspects of such values that are useful. Such types are close to things that are known as “existential types”, and they are for defining opaque values that you can do nothing with except pass them around, and only code in a specific lexical context can actually use them. For example, you could lump together the value with a function that can work on this value. If it wasn’t for the define-type top-level restriction, we could write the following:

(: foo : Integer -> (List ??? (??? -> Integer)))
(define (foo x)
  (define-type FOO [Foo Integer])
  (list (Foo 1)
        (lambda (f)
          (cases f [(Foo n) (* n n)]))))

There is nothing that we can do with the resulting Foo instance (we don’t even have a way to name it) — but in the result of the above function we also get a function that can work on such values, even ones from different calls:

((second (foo 1)) (first (foo 2))) -> 4

Since such kinds of values are related to hiding information, they’re useful (among other things) when talking about module systems (and object systems), where you want to have a local scope for a piece of code with bindings that are not available outside it.

Type soundness

PLAI §28

Having a type checker is obviously very useful — but to be able to rely on it, we need to provide some kind of a formal account of the kind of guarantees that we get by using one. Specifically, we want to guarantee that a program that type-checks is guaranteed to never fail with a type error. Such type errors in Racket result in an exception — but in C they can result in anything. In our simple Picky implementation, we still need to check the resulting value in run:

(typecheck prog (NumT) (EmptyTypeEnv))
(let ([result (eval prog (EmptyEnv))])
  (cases result
    [(NumV n) n]
    ;; this error is never reached, since we make sure
    ;; that the program always evaluates to a number above
    [else (error 'run "evaluation returned a non-number: ~s"
                      result)]))

A soundness proof for this would show that checking the result (in cases) is not needed. However, the check must be there, since Typed Racket (or any other typechecker) is far from being able to make up and verify such a proof by itself.

In this context we have a specific meaning for “fail with a type error”, but these failures can be very different based on the kind of properties that your type checker verifies. This property of a type system is called soundness: a sound type system is one that will never allow such errors for type-checked code:

For any program p, if we can type-check p : τ, then p will evaluate to a value that is in the type τ.

The importance of this can be seen in that it is the only connection between the type system and code execution. Without it, a type system is a bunch of syntactic rules that are completely disconnected from how the program runs. (Note also that the phrasing “in the type” works for the (common) case where types are sets of values.)

But this statement isn’t exactly what we need — it states a property that is too strong: what if execution gets stuck in an infinite loop? (That wasn’t needed before we introduced rec, where we could extend the conclusion part to: “… then p will terminate and evaluate to a value that is in the type τ”.) We therefore need to revise it:

For any program p, if we can type-check p : τ, and if p terminates and returns v, then v is in the type τ.

But there are still problems with this. Some programs evaluate to a value, some get stuck in an infinite loop, and some … throw an error. Even with type checking, there are still cases when we get runtime errors. For example, in practically all statically typed languages the length of a list is not encoded in its type, so {first null} would throw an error. (It’s possible to encode more information like that in types, but there is a downside to this too: putting more information in the type system means that things get less flexible, and it becomes more difficult to write programs since you’re moving towards proving more facts about them.)

Even if we were to encode list lengths in the type, we would still have runtime errors: opening a missing file, writing to a read-only file, fetching a non-existent URL, etc, so we must find some way to account for these errors. Some “solutions” are to encode even more information in the type system (at the cost of the added complexity mentioned above), or to treat such failures as exceptions that fall outside what the type system guarantees.

So, assuming exceptions, we need to further refine what it means for a type system to be sound:

For any program p, if we can type-check p : τ, and if p terminates without exceptions and returns v, then v is in the type τ.

An important thing to note here is that languages can have very different ideas about where to raise an exception. For example, Scheme implementations often have a trivial type-checker and throw runtime exceptions when there is a type error. On the other hand, there are systems that express much more in their type system, leaving much less room for runtime exceptions.

A soundness proof ties together a particular type system with the statement that it is sound. As such, it is where you tie the knot between type checking (which happens at the syntactic level) and execution (dealing with runtime values). These are two things that are usually separate — we’ve seen throughout the course many examples of things that could be done only at runtime, and things that should happen completely at the syntax level. eval is the important semantic function that connects the two worlds (compile also did this, when we converted our evaluator to a compiler) — and here, it is the soundness proof that makes the connection.

To demonstrate the kinds of differences between the two sides, consider an if expression — when it is executed, only one branch is evaluated, and the other is irrelevant, but when we check its type, both sides need to be verified. The same goes for a function whose execution gets stuck in an infinite loop: the type checker will not get into a loop since it is not executing the code, it only scans the (finite) syntax.

The bottom line here is that type soundness is really a claim that the type system provides some guarantees about the runtime behavior of programs, and its proof demonstrates that these guarantees do hold. A fundamental problem with the type system of C and C++ is that it is not sound: these languages have a type system, but it does not provide such runtime guarantees. (In fact, C is even worse in that it really has two type systems: there is the system that C programmers usually interact with, which has a conventional set of types — including even higher-order function types; and there is the machine-level type system, which only talks about various bit lengths of data. For example, using “%s” in a printf() format string will blindly copy characters from the address pointed to by the argument until it reaches a 0 character — even if the actual argument is really a floating point number or a function.)

Note that people often talk about “strongly typed languages”. This term is often meaningless in that different people take it to mean different things: it is sometimes used for a language that “has a static type checker”, or a language that “has a non-trivial type checker”, and sometimes it means that a language has a sound type system. For most people, however, it means some vague idea like “a language like C or Pascal or Java” rather than some concrete definition.

extra Explicit polymorphism

PLAI §29

Consider the length definition that we had — it is specific to NumLists, so rename it to lengthNum:

{with-type {NumList ...}
  {rec {lengthNum : (NumList -> Num)
        {fun {l : NumList} : Num
          {cases l
            [{NumEmpty}    0]
            [{NumCons x r} {+ 1 {call lengthNum r}}]}}}
    {call lengthNum
          {NumCons 1 {NumCons 2 {NumCons 3 {NumEmpty}}}}}}}

To simplify things, assume that types are previously defined, and that we have an even more Racket-like language where we simply write a define form:

{define lengthNum
  {fun {l : NumList} : Num
    {cases l
      [{NumEmpty}    0]
      [{NumCons x r} {+ 1 {call lengthNum r}}]}}}

What would happen if, for example, we want to take the length of a list of booleans? We won’t be able to use the above code since we’d get a type error. Instead, we’d need a separate definition for the other kind of length:

{define lengthBool
  {fun {l : BoolList} : Num
    {cases l
      [{BoolEmpty}    0]
      [{BoolCons x r} {+ 1 {call lengthBool r}}]}}}

We’ve designed a statically typed language that is effective in catching a large number of errors, but it turns out that it’s too restrictive — we cannot implement a single generic length function. Given that our type system allows an infinite number of types, this is a major problem, since every new type that we’ll want to use in a list requires writing a new definition for a length function that is specific to this type.

One way to address the problem would be to somehow add a new length primitive function, with specific type rules to make it apply to all possible types. (Note that the same holds for the list type too — we need a new type definition for each of these, so this solution implies a new primitive type that will do the same generic trick.) This is obviously a bad idea: there are other functions that will need the same treatment (append, reverse, map, fold, etc), and there are other types with similar problems (any new container type). A good language should allow writing such a length function inside the language, rather than changing the language for every new addition.

Going back to the code, a good question to ask is what is it exactly that is different between the two length functions? The answer is that there’s very little that is different. To see this, we can take the code and replace all occurrences of Num or Bool by some ???. Even better — this is actually abstracting over the type, so we can use a familiar type variable, τ:

{define length〈τ〉
  {fun {l : 〈τ〉List} : Num
    {cases l
      [{〈τ〉Empty}    0]
      [{〈τ〉Cons x r} {+ 1 {call length〈τ〉 r}}]}}}

This is a kind of a very low-level “abstraction” — we replace parts of the text — parts of identifiers — with a kind of a syntactic meta variable. But the nature of this abstraction is something that should look familiar — it’s abstracting over the code, so it’s similar to a macro. It’s not really a macro in the usual sense — making it a real macro involves answering questions like what does length evaluate to (in the macro system that we’ve seen, a macro is not something that is a value in itself), and how can we use these macros in the cases patterns. But still, the similarity should provide a good intuition about what goes on — and in particular the basic fact is the same: this is an abstraction that happens at the syntax level, since typechecking is something that happens at that level.

To make things more manageable, we’ll want to avoid the abstraction over parts of identifiers, so we’ll move all of the meta type variables out of the identifiers and make them into arguments, using 〈...〉 brackets to stand for “meta level applications”:

{define length〈τ〉
  {fun {l : List〈τ〉} : Num
    {cases l
      [{Empty〈τ〉}    0]
      [{Cons〈τ〉 x r} {+ 1 {call length〈τ〉 r}}]}}}

Now, the first “〈τ〉” is actually a kind of an input to length, it’s a binding that has the other τs in its scope. So we need to have the syntax reflect this somehow — and since fun is the way that we write such abstractions, it seems like a good choice:

{define length
  {fun {τ}
    {fun {l : List〈τ〉} : Num
      {cases l
        [{Empty〈τ〉}    0]
        [{Cons〈τ〉 x r} {+ 1 {call length〈τ〉 r}}]}}}}

But this is very confusing and completely broken. The new abstraction is not something that is implemented as a function — otherwise we’d need to somehow represent type values within our type system. (Trying that has some deep problems — for example, if we have such type values, then they need to have a type too; and if we add some Type for this, then Type itself should be a value — one that has itself as its type!)

So instead of fun, we need a new kind of syntactic, type-level abstraction. This is something that acts as a function that gets used by the type checker. The common way to write such abstractions is with a capital lambda, Λ. Since we already use Greek letters for things that are related to types, we’ll use that as is (again, with “〈〉”s), instead of a confusing capitalized Lambda (or a similarly confusing Fun):

{define length
  〈Λ 〈τ〉                  ; sidenote: similar to (All (t) ...)
    {fun {l : List〈τ〉} : Num
      {cases l
        [{Empty〈τ〉}    0]
        [{Cons〈τ〉 x r} {+ 1 {call length〈τ〉 r}}]}}〉}

and to use this length we’ll need to instantiate it with a specific type:

{+ {call length〈Num〉 {list 1 2}}
  {call length〈Bool〉 {list #t #f}}}

Note that we have several kinds of meta-applications, with slightly different intentions: instantiating the polymorphic function itself (as in length〈Num〉), applying a type constructor to a type (as in List〈τ〉), and instantiating the variant constructors (Empty〈τ〉, Cons〈τ〉), which are used both for constructing values and in cases patterns.

Actually, the last item points at one way in which the above sample calls:

{+ {call length〈Num〉 {list 1 2}}
  {call length〈Bool〉 {list #t #f}}}

are broken — we should also have a type argument for list:

{+ {call length〈Num〉 {list〈Num〉 1 2}}
  {call length〈Bool〉 {list〈Bool〉 #t #f}}}

or, given that we’re in the limited picky language:

{+ {call length〈Num〉 {cons〈Num〉 1 {cons〈Num〉 2 null〈Num〉}}}
  {call length〈Bool〉 {cons〈Bool〉 #t {cons〈Bool〉 #f null〈Bool〉}}}}

Such a language is called “parametrically polymorphic with explicit type parameters” — it’s polymorphic since it applies to any type, and it’s explicit since we have to specify the types in all places.
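
For comparison, the course language (via Typed Racket) already provides this kind of parametric polymorphism with All, as the sidenote in the code above hints. A rough sketch (the name poly-length is made up here, and the type syntax is Typed Racket's):

(: poly-length : (All (A) ((Listof A) -> Number)))
;; a single length function that works for any element type
(define (poly-length l)
  (if (null? l) 0 (+ 1 (poly-length (cdr l)))))

(test (poly-length (list 1 2 3)) => 3)
(test (poly-length (list #t #f)) => 2)

Note that no type arguments are written at the call sites here: Typed Racket usually infers the instantiation, which is related to the “can we do better than explicit type parameters” question that the rest of this discussion leads towards.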

extra Polymorphism in the type description language

Given our definition for length, the type of length〈Num〉 is obvious:

length〈Num〉 : List〈Num〉 -> Num

but what would be the type of length by itself? If it was a function (which was a broken idea we’ve seen), then we would write:

length : τ -> (List〈τ〉 -> Num)

But this is broken in the same way: the first arrow is fundamentally different than the second — one is used for a Λ, and the other for a fun. In fact, the arrows are even more different, because the two τs are very different: the first one binds the second. So the first arrow is bogus — instead of an arrow we need some way to say that this is a type that “for all τ” is “List〈τ〉 -> Num”. The common way to write this should be very familiar:

length : ∀τ. List〈τ〉 -> Num

Finally, τ is usually used as a meta type variable; for these types the convention is to use the first few Greek letters, so we get:

length : ∀α. List〈α〉 -> Num

And some more examples:

filter : ∀α. (α->Bool) × List〈α〉 -> List〈α〉
map : ∀α,β. (α->β) × List〈α〉 -> List〈β〉

where × stands for multiple arguments (which isn’t mentioned explicitly in Typed Racket).

extra Type judgments for explicit polymorphism and execution

Given our notation for polymorphic functions, it looks like we’re introducing a runtime overhead. For example, our length definition:

{define length
  〈Λ 〈α〉
    {fun {l : List〈α〉} : Num
      {cases l
        [{Empty〈α〉}    0]
        [{Cons〈α〉 x r} {+ 1 {call length〈α〉 r}}]}}〉}

looks like it now requires another curried call for each iteration through the list. This would be bad for two reasons: first, one of the main goals of static type checking is to avoid runtime work, so adding work is highly undesirable. An even bigger problem is that types are fundamentally a syntactic thing — they should not exist at runtime, so we don’t want to perform these type applications at runtime simply because we don’t want types to exist at runtime. If you think about it, then every traditional compiler that typechecks code does so while compiling, not when the resulting compiled program runs. (A recent exception in various languages is “dynamic” types that are used in a way that is similar to plain (untyped) Racket.)

This means that we want to eliminate these applications in the typechecker. Even better: instead of complicating the typechecker, we can begin by applying all of the type meta-applications, and get a result that does not have any such applications or any type variables left — then use the simple typechecker on the result. This process is called “type elaboration”.

As usual, there are two new formal rules for dealing with these abstractions — one for type abstractions and another for type applications. Starting from the latter:

  Γ ⊢ E : ∀α.τ
———————————————————
Γ ⊢ E〈τ₂〉 : τ[τ₂/α]

which means that when we encounter a type application E〈τ₂〉 where E has a polymorphic type ∀α.τ, then we substitute the type variable α with the input type τ₂. Note that this means that conceptually, the typechecker is creating all of the different (monomorphic) length versions, but we don’t need all of them for execution — having checked the types, we can have a single length function which would be similar to the function that Racket uses (i.e., the same “low level” code with types erased).
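
To make the substitution itself concrete, here is a minimal sketch over a made-up representation of polymorphic types (none of these names belong to the Picky implementations above):

(define-type PolyTYPE
  [NumT*]
  [BoolT*]
  [FunT* PolyTYPE PolyTYPE]
  [TVar  Symbol]              ; a type variable such as α
  [AllT  Symbol PolyTYPE])    ; ∀α.τ

(: subst-type : PolyTYPE Symbol PolyTYPE -> PolyTYPE)
;; computes τ[τ₂/α]: replaces free occurrences of the type
;; variable `a' in `t' with `new'
(define (subst-type t a new)
  (cases t
    [(NumT*)  t]
    [(BoolT*) t]
    [(TVar name) (if (eq? name a) new t)]
    [(FunT* i o) (FunT* (subst-type i a new) (subst-type o a new))]
    [(AllT name body)
     ;; an inner ∀ that rebinds the same name shadows it
     (if (eq? name a) t (AllT name (subst-type body a new)))]))

;; e.g., a stand-in for (α -> Num)[Bool/α] = (Bool -> Num):
(test (subst-type (FunT* (TVar 'α) (NumT*)) 'α (BoolT*))
      => (FunT* (BoolT*) (NumT*)))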

To see how this works, consider our use of length, where length itself has the type ∀α. List〈α〉 -> Num. We get the following proof, which ends (at the top) in the exact type of length (remember that when you prove, you climb up):

Γ ⊢ length : ∀α. List〈α〉 -> Num
——————————————————————————————————————————————
Γ ⊢ length〈Bool〉 : (List〈α〉 -> Num)[Bool/α]
——————————————————————————————————————————————
Γ ⊢ length〈Bool〉 : List〈Bool〉 -> Num    [...]
——————————————————————————————————————————————
Γ ⊢ {call length〈Bool〉 {cons〈Bool〉 ...}} : Num

The second rule for type abstractions is:

  Γ[α] ⊢ E : τ
———————————————————
Γ ⊢ 〈Λ〈α〉 E〉 : ∀α.τ

This rule means that to typecheck a type abstraction, we need to check the body while binding the type variable α — but it’s not bound to some specific type. Instead, it’s left unspecified (or non-deterministic) — and typechecking is expected to succeed without requiring an actual type. If some specific type is actually required, then typechecking should fail. The intuition behind this is that a function can be polymorphic only if it doesn’t need some specific type — for example, {fun {x} {- {+ x 1} 1}} is an identity function, but it’s an identity that requires the input to be a number, and therefore it cannot have a polymorphic ∀α. α -> α type like {fun {x} x}.
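
For example, a sketch in our notation of what this rule accepts and rejects:

〈Λ 〈α〉 {fun {x : α} : α x}〉              ; fine: gets the type ∀α. α -> α
〈Λ 〈α〉 {fun {x : α} : α {- {+ x 1} 1}}〉  ; rejected: + forces α to be Num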

Another example is our length function — the actual type that the list holds better not matter, or our length function is not really polymorphic. This makes sense: to typecheck the function, this rule means that we need to typecheck the body, with α being some unknown type that cannot be used.

One thing that we need to be careful about when applying any kind of abstraction (and the first rule does just that for a very simple lambda-calculus-like language) is infinite loops. But in the case of our type language, it turns out that the lambda calculus that gets used at the meta level is one of the strongly normalizing kinds, so no infinite loops can happen. Intuitively, this means that we should be able to do this elaboration in just one pass over the code. Furthermore, there are no side-effects, therefore we can safely cache the results of applying type abstractions to speed things up. In the case of length, using it on a list of Num will lead to one such application, but when we later get to the recursive call we can reuse the (cached) first result.

extra Explicit polymorphism conclusions

Quoted directly from the book:

Explicit polymorphism seems extremely unwieldy: why would anyone want to program with it? There are two possible reasons. The first is that it’s the only mechanism that the language designer gives for introducing parameterized types, which aid in code reuse. The second is that the language includes some additional machinery so you don’t have to write all the types every time. In fact, C++ introduces a little of both (though much more of the former), so programmers are, in effect, manually programming with explicit polymorphism virtually every time they use the STL (Standard Template Library). Similarly, the Java 1.5 and C# languages support explicit polymorphism. But we can possibly also do better than foist this notational overhead on the programmer.

Web Programming

PLAI §15

Consider web programming as a demonstration of a frequent problem. The HTTP protocol is stateless: each HTTP query can be thought of as running a program (or a function), getting a result, then killing it. This makes interactive applications hard to write.

For example, consider this behavior (which is based on a real story of a probably not-so-real bug known as “the ITA bug”):

  • You go on a flight reservation website, and look at flights to Paris or London for a vacation.

  • You get a list of options, and choose one for Paris and one for London, ctrl-click the first and then the second to open them in new tabs.

  • You look at the descriptions and decide that you like the first one best, so you click the button to buy the ticket.

  • A month later you go on your plane, and when you land you realize that you’re in the wrong country — the ticket you paid for was the second one after all…

Obviously there is some fundamental problem here — especially given that this problem plagued many websites early on (and these days these kinds of problems can still be found in some places, like the registrar’s system, except that people are much more aware of them and much more prepared to deal with them). In an attempt to clarify what it is exactly that went wrong, we might require that each interaction will result in something that is deterministically based on what the browser window shows when the interaction is made — but even that is not always true. Consider the same scenario except with a bookstore and an “add to my cart” button. In this case you want to be able to add one item to the cart in the first window, then switch to the second window and click “add” there too: you want to end up with a cart that has both items.

The basic problem here is HTTP’s statelessness, something that both web servers and web browsers use extensively. Browsers give you navigation buttons and sometimes will not even communicate with the web server when you use them (instead, they’ll show you cached pages), they give you the ability to open multiple windows or tabs from the current one, and they allow you to “clone” the current tab. If you view each set of HTTP queries as a session — this means that web browsers allow you to go back and forth in time, explore multiple futures in parallel, and clone your current world.

These are features that the HTTP protocol intentionally allows by being stateless, and that people have learned to use effectively. A stateful protocol (like ssh, or ftp) will run in a single process (or a program, or a function) that is interacting with you directly, and this process dies only when you disconnect. A big advantage of stateful protocols is their ability to be very interactive and rely on state (eg, an editor updates a character on the screen, relying on the rest of it showing the same text); but stateless protocols can scale up better, and deal with a more hectic kind of interaction (eg, open a page on an online store, keep it open and buy the item a month later; or any of the above “time manipulation” devices).

Side-note: Some people think that Ajax is the answer to all of these problems. In reality, Ajax is layered on top of (asynchronous) web queries, so in fact it is the exact same situation. You do have an option of creating an application that works completely on the client side, but that wouldn’t be as attractive — and even if you do so, you’re still working inside a browser that can play the same time tricks.

Basic web programming

PLAI §16

Obviously, writing programs to run on a web server is a profitable activity, and therefore highly desirable. But when we do so, we need to somehow cope with the web’s statelessness. To see the implications from a PL point of view we’ll use a small “toy” example that demonstrates the basic issues — an “addition” service:

[Such a small application is not realistic, of course: you can obviously ask for both numbers on the same page. We still use it, though, to minimize the general interaction problem to a more comprehensible core problem.]

Starting from just that, consider how you’d want to write the code for such a service. (If you have experience writing web apps, then try to forget all of that now, and focus on just this basic problem.) The plain version of what we want to implement is:

(print
  (+ (read "First number")
    (read "Second number")))

which is trivially “translated” to:

(web-display
  (+ (web-read "First number")
    (web-read "Second number")))

But this is never going to work. The interaction is limited to presenting the user with some data and that’s all — you cannot do any kind of interactive querying. For the purpose of making this more concrete, imagine that web-read and web-display both communicate information to the user via something like error: the information is sent and at the same time the whole computation is aborted. With this, the above code will just manage to ask for the first number and nothing else happens.

We therefore must turn this server function into three separate functions: one that shows the prompt for the first number, one that gets the value entered and shows the second prompt, and a third that shows the results page. The first two of these functions would send their information to the browser (and the respective computation dies), including a form submission URL that will invoke the next function.

Assuming a generic “query argument” that represents the browser request, and a return value that represents a page for the browser to render, we have:

(define (f1 query)
  ... show the first question ...)

(define (f2 query)
  ... extract the number from the query ...
  ... show the second question ...)

(define (f3 query)
  ... extract the number from the query ...
  ... show the sum ...)

Note that f2 receives the first number directly, but f3 doesn’t. Yet, it is obviously needed to show the sum. A typical hack to get around this is to use a “hidden field” in the HTML form that f2 generates, where that field holds the first number. To make things more concrete, we’ll use some imaginary web API functions:

(define (f1 query)
  (web-read "First number" 'n1 "f2"))

(define (f2 query)
  (let ([n1 (get-field query 'n1)])
    ;; imagine that the following "configures" what web-read
    ;; produces by adding a hidden field to display
    (with-hidden-field 'n1 n1
      (web-read "Second number" 'n2 "f3"))))

(define (f3 query)
  (web-display
    "Your two numbers sum up to: "
    (+ (get-field query 'n1)
      (get-field query 'n2))))

Which would (supposedly) result in something like the following HTML forms when the user enters 1 and 2:

http://.../f1
<form action="http://.../f2">
  First number:
  <input type="text" name="n1" />
</form>

http://.../f2
<form action="http://.../f3">
  <input type="hidden" name="n1" value="1" />
  Second number:
  <input type="text" name="n2" />
</form>

http://.../f3
<p>Your two numbers sum up to: 3</p>

This is often a bad solution: it gets very difficult to manage with real services where the “state” of the server consists of much more than just a single number — and it might even include values that are not expressible as part of the form (for example an open database connection or a running process). Worse, the state is all saved in the client browser — if it dies, then the interaction is gone. (Imagine doing your taxes, and praying that the browser won’t crash a third time.)

Another common approach is to store the state information on the server, and use a small handle (eg, in a cookie) to identify the state; each function can then use the cookie to retrieve the current state of the service — but this is exactly how we get the above bugs: it will fail with any of the mentioned time-manipulation features.
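
To make the server-side-state approach concrete, here is a minimal sketch in plain Racket; query here is just a hash standing in for the real browser request, and all of the names are made up for illustration:

;; session-id -> state (eg, the items in the cart)
(define sessions (make-hash))
(define (get-session-id query) (hash-ref query 'session-id))
(define (state-of query)
  (hash-ref sessions (get-session-id query) #f))
(define (set-state! query new-state)
  (hash-set! sessions (get-session-id query) new-state))
;; every tab that carries the same cookie shares this single slot,
;; which is exactly why the two-tab scenarios above misbehave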

Continuations: Web Programming

To try and get a better solution, we’ll re-start with the original expression:

(web-display (+ (web-read "First number")
                (web-read "Second number")))

and assuming that web-read works as a regular function, we need to begin with executing the first read:

                (web-read "First number")

We then need to take that result and plug it into an expression that will read the second number and sum the results — that’s the same as the first expression, except that instead of the first web-read we use a “hole”:

(web-display (+ <*>
                (web-read "Second number")))

where <*> marks the point into which we need to plug the result of the first question. A better way to explain this hole is to make the expression into a function:

(lambda (<*>)
  (web-display (+ <*>
                  (web-read "Second number"))))

We can split the second and third interactions in the same way. First we can assemble the above two bits of code into an expression that has the same meaning as the original one:

((lambda (<*>)
   (web-display (+ <*> (web-read "Second number"))))
 (web-read "First number"))

And now we can continue doing this and split the body of the consumer:

  (web-display (+ <*> (web-read "Second number")))

into a “reader” and the rest of the computation (using a new hole):

                      (web-read "Second number")  ; reader part

  (web-display (+ <*> <*2>))                      ; rest of comp

Doing all of this gives us:

((lambda (<*1>)
   ((lambda (<*2>)
      (web-display (+ <*1> <*2>)))
    (web-read "Second number")))
 (web-read "First number"))

And now we can proceed to the main trick. Conceptually, we’d like to think about web-read as something that is implemented in a simple way:

(define (web-read prompt)
  (printf "~a: " prompt)
  (read-number))

except that the “real” thing would throw an error and die once the prompt is printed. The trick is one that we’ve already seen: we can turn the code inside-out by making the above “hole functions” be an argument to the reading function — a consumer callback for what needs to be done once the number is read. This callback is called a continuation, and we’ll use a /k suffix for names of functions that expect a continuation (k is a common name for a continuation argument):

(define (web-read/k prompt k)
  (printf "~a: " prompt)
  (k (read-number)))

This is not too different from the previous version — the only difference is that we make the function take a consumer function as an input, and hand it what we read instead of just returning it. It makes things a little easier, since we pass the hole function to web-read/k, and it will invoke it when needed:

(web-read/k "First number"
  (lambda (<*1>)
    (web-read/k "Second number"
      (lambda (<*2>)
        (web-display (+ <*1> <*2>))))))

You might notice that this looks too complicated; we could get exactly the same result with:

(web-display (+ (web-read/k "First number"
                            (lambda (<*>) <*>))
                (web-read/k "Second number"
                            (lambda (<*>) <*>))))

but then there’s not much point to having web-read/k at all… So why have it? Remember that the main problem is that in the context of a web server we think of web-read as something that throws an error and kills the computation. So if we use such a web-read/k with a continuation, we can make it save this continuation in some global state, so it can be used later when there is a value.

As a side note, all of this might start looking very familiar to you if you have any experience working with callback-heavy code. In fact, consider the fact that the continuation (or k) is basically just a callback, so the above is roughly:

webRead("First number", function(a) {
  webRead("Second number", function(b) {
    webDisplay(a + b);
  });
});

We’ll talk more about JavaScript later.

Simulating web reading

We can now actually try all of this in plain Racket by simulating web interactions. This is useful to look at the core problem while avoiding the whole web mess that is uninteresting for the purpose of our discussion. The main feature that we need to emulate is statelessness — and as we’ve discussed, we can simulate that using error to guarantee that the process is properly killed for each interaction. We will do this in web-display which simulates sending the results to the client and therefore terminates the server process:

(define (web-display n)
  (error 'web-display "~s" n))

More importantly, we need to do it in web-read/k — but in this case, we need more than just an error — we need a way to store the k so the computation can be resumed later. To continue with the web analogy we do this in two steps: error is used to display the information (the input prompt), and the user action of entering a number and submitting it will be simulated by calling a function. Since the computation is killed after we show the prompt, the way to implement this is by making the user call a toplevel submit function — and before throwing the interaction error, we’ll save the k continuation in a global box:

(define (web-read/k prompt k)
  (set-box! resumer k)
  (error 'web-read
        "enter (submit N) to continue the following\n  ~a:"
        prompt))

submit uses the saved continuation:

(define (submit n)
  ((unbox resumer) n))

For safety, we’ll initialize resumer with a function that throws an error (a real one, not intended for interactions), make web-display reset it to the same function, and also make submit do so after grabbing its value — meaning that submit can only be used after a web-read/k. And for convenience, we’ll use raise-user-error instead of error, which is a Racket function that throws an error without a stack trace (since our errors are intended). It’s also helpful to disable debugging in DrRacket, so it won’t take us back to the code over and over again.


web-base-library.rkt
;; Fake web interaction library (to be used with manual code CPS-ing
;; examples)

#lang racket

(define error raise-user-error)

(define (nothing-to-do ignored)
  (error 'REAL-ERROR "No computation to resume."))

(define resumer (box nothing-to-do))

(define (web-display n)
  (set-box! resumer nothing-to-do)
  (error 'web-display "~s" n))

(define (web-read/k prompt k)
  (set-box! resumer k)
  (error 'web-read
        "enter (submit N) to continue the following\n  ~a:"
        prompt))

(define (submit n)
  ;; to avoid mistakes, we clear out `resumer' before invoking it
  (let ([k (unbox resumer)])
    (set-box! resumer nothing-to-do)
    (k n)))

We can now try out our code for the addition server, using plain argument names instead of <*>s:

(web-read/k "First number"
  (lambda (n1)
    (web-read/k "Second number"
      (lambda (n2)
        (web-display (+ n1 n2))))))

and see how everything works. You can also try now the bogus expression that we mentioned:

(web-display (+ (web-read/k "First number"  (lambda (n) n))
                (web-read/k "Second number" (lambda (n) n))))

and see how it breaks: the first web-read/k saves the identity function as the global resumer, losing the rest of the computation.
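
For comparison, an interaction with the working version looks roughly like this (the exact error formatting depends on DrRacket’s settings):

> (web-read/k "First number"
    (lambda (n1)
      (web-read/k "Second number"
        (lambda (n2)
          (web-display (+ n1 n2))))))
web-read: enter (submit N) to continue the following
  First number:
> (submit 1)
web-read: enter (submit N) to continue the following
  Second number:
> (submit 2)
web-display: 3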


Again, this should be familiar: we’ve taken a simple compound expression and “linearized” it as a sequence of an input operation and a continuation receiver for its result. This is essentially the same thing that we used for dealing with inputs in the lazy language — and the similarity is not a coincidence. The problem that we faced there was very different (representing IO as values that describe it), but it originates from a similar situation — some computation goes on (in whatever way the lazy language decides to evaluate it), and when we have a need to read something we must return a description of this read that contains “the rest of the computation” to the eager part of the interpreter that executes the IO. Once we get the user input, we send it to this computation remainder, which can return another read request, and so on.

Based on this intuition, we can guess that this can work for any piece of code, and that we can even come up with a nicer “flat” syntax for it. For example, here is a simple macro that flattens a sequence of reads and a final display:

(define-syntax web-code
  (syntax-rules (read display)
    [(_ (read n prompt) more ...)
    (web-read/k prompt
      (lambda (n)
        (web-code more ...)))]
    [(_ (display last))
    (web-display last)]))

and using it:

(web-code (read x "First number")
          (read y "Second number")
          (display (+ x y)))
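
For what it’s worth, the macro expands this into essentially the same nested code that we wrote by hand:

(web-read/k "First number"
  (lambda (x)
    (web-read/k "Second number"
      (lambda (y)
        (web-display (+ x y))))))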

However, we’ll avoid such cuteness to make the transformation more explicit for the sake of the discussion. Eventually, we’ll see how things can become even better than that (as done in Racket): we can get to write plain-looking Racket expressions and avoid even the need for an imperative form for the code. In fact, it’s easy to write this addition server using Racket’s web server framework, and the core of the code looks very simple:

(define (start initial-request)
  (page "The sum is: "
        (+ (web-read "First number")
          (web-read "Second number"))))

There is not much more than that — it has two utilities, page creates a well-formed web page, and web-read performs the reading. The main piece of magic there is in send/suspend which makes the web server capture the computation’s continuation and store it in a hash table, to be retrieved when the user visits the given URL. Here’s the full code:

#lang web-server/insta
(define (page . body)
  (response/xexpr
  `(html (body ,@(map (lambda (x)
                        (if (number? x) (format "~a" x) x))
                      body)))))
(define (web-read prompt)
  ((compose string->number (curry extract-binding/single 'n)
            request-bindings send/suspend)
  (lambda (k)
    (page `(form ([action ,k])
              ,prompt ": " (input ([type "text"] [name "n"])))))))
(define (start initial-request)
  (page "The sum is: "
        (+ (web-read "First number")
          (web-read "Second number"))))

More Web Transformations

PLAI §17

Transforming a recursive function

Going back to transforming code, we did the transformation on a simple expression — and as you’d guess, it’s possible to make it work for more complex code, even recursive functions. Let’s start with some simple function that sums up a bunch of numbers, given a list of prompts for these numbers. Since it’s a function, it’s a reusable piece of code that can be used in multiple places, and to demonstrate that, we add a web-display with a specific list of prompts.

(define (sum prompts)
  (if (null? prompts)
    0
    (+ (web-read (first prompts))
      (sum (rest prompts)))))

(web-display (sum '("First" "Second" "Third")))

We begin by converting the web-read to its continuation version:

(define (sum prompts)
  (if (null? prompts)
    0
    (web-read/k (first prompts)
      (lambda (n)
        (+ n
          (sum (rest prompts)))))))

(web-display (sum '("First" "Second" "Third")))

But using web-read/k immediately terminates the running computation, which means that when sum is called on the last line, the surrounding web-display will be lost, and therefore this will not work. The way to solve this is to make sum itself take a continuation, which we’ll get in a similar way — by rewriting it as a sum/k function, and then we can make our sample use pull in the web-display into the callback as we’ve done before:

(define (sum/k prompts k)
  (if (null? prompts)
    0
    (web-read/k (first prompts)
      (lambda (n)
        (+ n
          (sum (rest prompts)))))))

(sum/k '("First" "Second" "Third")
      (lambda (sum) (web-display sum)))

We also need to deal with the recursive sum call and change it to a sum/k. Clearly, the continuation is the same continuation that the original sum was called with, so we need to pass it on in the recursive call too:

(define (sum/k prompts k)
  (if (null? prompts)
    0
    (web-read/k (first prompts)
      ;; get the value provided by the user, and add it to the value
      ;; that the recursive call generates
      (lambda (n)
        (+ n
          (sum/k (rest prompts)
                  k))))))

(sum/k '("First" "Second" "Third")
      (lambda (sum) (web-display sum)))

But there is another problem now: the addition is done outside of the continuation, therefore it will be lost as soon as there’s a second web-read/k call. In other words, computation bits that are outside of any continuations are going to disappear, and therefore they must be encoded as an explicit part of the continuation. The solution is therefore to move the addition into the continuation:

(define (sum/k prompts k)
  (if (null? prompts)
    0
    (web-read/k (first prompts)
      (lambda (n)
        (sum/k (rest prompts)
              (lambda (sum-of-rest)
                (k (+ n sum-of-rest))))))))

(sum/k '("First" "Second" "Third")
      (lambda (sum) (web-display sum)))

Note that with this code every new continuation is bigger — it contains the previous continuation (note that “contains” here is done by making it part of the closure), and it also contains one new addition.
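
To see this growth concretely, here is (roughly) the continuation that has accumulated by the time we reach the end of the list for a two-prompt call, assuming the user entered 1 and then 2; each layer wraps the one before it:

(lambda (sum-of-rest)                    ; made when reading "Second"
  ((lambda (sum-of-rest)                 ; made when reading "First"
     ((lambda (sum) (web-display sum))   ; the original toplevel k
      (+ 1 sum-of-rest)))
   (+ 2 sum-of-rest)))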

But if the continuation is only getting bigger, then how do we ever get a result out of this? Put differently, when we reach the end of the prompt list, what do we do? — Clearly, we just return 0, but that silently drops the continuation that we worked so hard to accumulate. This means that just returning 0 is wrong — instead, we should send the 0 to the pending continuation:

(define (sum/k prompts k)
  (if (null? prompts)
    (k 0)
    (web-read/k (first prompts)
      (lambda (n)
        (sum/k (rest prompts)
              (lambda (sum-of-rest)
                (k (+ n sum-of-rest))))))))

(sum/k '("First" "Second" "Third")
      (lambda (sum) (web-display sum)))

This makes sense now, and the code works as expected. This sum/k is a utility to be used in a web server application, and such applications need to be transformed in a similar way to what we’re doing. Therefore, our own sum/k is a function that expects to be invoked from such transformed code — so it needs to have an argument for the waiting receiver, and it needs to pass that receiver around (accumulating more functionality into it) until it’s done.

As a side note, web-display is the only thing that is used in the toplevel continuation, so we could have used it directly without a lambda wrapper:

(sum/k '("First" "Second" "Third")
      web-display)

Using sum/k

To get some more experience with this transformation, we’ll try to convert some code that uses the above sum/k. For example, let’s add a multiplier argument that will get multiplied by the sum of the given numbers. Begin with the simple code. This is an actual application, so we’re writing just an expression to do the computation and show the result, not a function.

(web-display (* (web-read "Multiplier")
                (sum '("First" "Second" "Third"))))

We now need to turn the two function calls into their */k form. Since we covered sum/k just now, begin with that. The first step is to inspect its continuation: this is the same code after we replace the sum call with a hole:

(web-display (* (web-read "Multiplier")
                <*>))

Now take this expression, make it into a function by abstracting over the hole and call it n, and pass that to sum/k:

(sum/k '("First" "Second" "Third")
      (lambda (n)
        (web-display (* (web-read "Multiplier")
                        n))))

(Note that this is getting rather mechanical now.) Now for the web-read part, we need to identify its continuation — that’s the expression that surrounds it inside the first continuation function, and we’ll use m for the new hole:

        (* m
            n)

As above, abstract over m to get a continuation, and pass it into web-read/k:

(sum/k '("First" "Second" "Third")
      (lambda (n)
        (web-read/k "Multiplier"
                    (lambda (m)
                      (web-display (* m n))))))

and we’re done. An interesting question here is what would happen if instead of the above, we start with the web-read and then get to the sum? We’d end up with a different version:

(web-read/k "Multiplier"
            (lambda (m)
              (sum/k '("First" "Second" "Third")
                    (lambda (n)
                      (web-display (* m n))))))

Note how these options differ — one reads the multiplier first, and the other reads it last.

Side-note: if, in the last step of turning web-read into web-read/k, we consider the whole expression when we formulate the continuation, then we end up with the same code. But this isn’t really right, since it means converting code that is already converted.

In other words, our conversion results in code that fixes a specific evaluation order for the original expression. The way that the inputs happen in the original expression

(web-display (* (web-read "Multiplier")
                (sum '("First" "Second" "Third"))))

is unspecified in the code — it only happens to be left-to-right implicitly, because Racket evaluates function arguments in that order. However, the converted code does not depend on how Racket evaluates function arguments. (Can you see a similar conclusion here about strictness?)

Note also another property of the converted code: every intermediate result has a name now. This makes sense, since another way to fix the evaluation order is to do just that. For example, convert the above to either

(let* ([m (web-read "Multiplier")]
      [n (sum '("First" "Second" "Third"))])
  (* m n))

or

(let* ([n (sum '("First" "Second" "Third"))]
      [m (web-read "Multiplier")])
  (* m n))

This is also a good way to see why this kind of conversion can be a useful tool in compiling code: the resulting code is in a kind of a low-level form that makes it easy to translate to assembly form, where function calls are eliminated, and instead there are only jumps (since all calls are tail-calls). In other words, the above can be seen as a piece of code that is close to something like:

val n = sum(["First","Second","Third"])
val m = web_read("Multiplier")
web_display(m*n)

and it’s almost visible in the original converted code if we format it in a very weird way:

;; sum(["First","Second","Third"]) -> n
(sum/k '("First" "Second" "Third") (lambda (n)
;; web_read("Multiplier") -> m
(web-read/k "Multiplier" (lambda (m)
;; web_display(m*n)
(web-display (* m n))))))

Converting stateful code

Another case to consider is applying this transformation to code that uses mutation with some state. For example, here’s some simple account code that keeps track of a balance state:

(define account
  (let ([balance (box 0)])
    (lambda ()
      (set-box! balance
                (+ (unbox balance)
                  (web-read (format "Balance: ~s; Change"
                                    (unbox balance)))))
      (account))))

(Note that there is no web-display here, since it’s an infinite loop.) As usual, the fact that this function is expected to be used by a web application means that it should receive a continuation:

(define account/k
  (let ([balance (box 0)])
    (lambda (k)
      (set-box! balance
                (+ (unbox balance)
                  (web-read (format "Balance: ~s; Change"
                                    (unbox balance)))))
      (account))))

Again, we need to convert the web-read into web-read/k by abstracting out its continuation. We’ll take the set-box! expression and create a continuation out of it:

      (set-box! balance
                (+ (unbox balance)
                  <*>))

and using change as the name for the continuation argument, we get:

(define account/k
  (let ([balance (box 0)])
    (lambda (k)
      (web-read/k (format "Balance: ~s; Change"
                          (unbox balance))
                  (lambda (change)
                    (set-box! balance (+ (unbox balance) change))))
      (account))))

And finally, we translate the loop call to pass along the same continuation it received (it seems suspicious, but there’s nothing else that could be used there):

(define account/k
  (let ([balance (box 0)])
    (lambda (k)
      (web-read/k (format "Balance: ~s; Change" (unbox balance))
                  (lambda (change)
                    (set-box! balance (+ (unbox balance) change))))
      (account/k k))))

But if we try to run this — (account/k web-display) — we don’t get any result at all: it reads one number and then just stops without the usual request to continue, and without showing any result. The lack of a printed result is a hint for the problem — it must be the void return value of the set-box!. Again, we need to remember that invoking a web-read/k kills any pending computation, and a later (submit N) restarts only the saved continuation — but the recursive call is not part of that continuation.

The problem is the continuation that we formulated:

      (set-box! balance
                (+ (unbox balance)
                  change))

which should actually contain the recursive call too:

      (set-box! balance
                (+ (unbox balance)
                  change))
      (account/k k)

In other words, the recursive call was left outside of the continuation, and therefore it was lost when the fake server terminated the computation on a web-read/k — so it must move into the continuation as well:

(define account/k
  (let ([balance (box 0)])
    (lambda (k)
      (web-read/k (format "Balance: ~s; Change" (unbox balance))
                  (lambda (change)
                    (set-box! balance (+ (unbox balance) change))
                    (account/k k))))))

and the code now works. The only suspicious thing that we’re still left with is the loop that passes k unchanged — but this actually is the right thing to do here. The original loop had a tail-recursive call that didn’t pass along any new argument values, since the infinite loop is doing its job via mutations to the box and nothing else was done in the looping call. The continuation of the original call is therefore also the continuation of the second call, etc. All of these continuations are closing over a single box and this binding does not change (it cannot change if we don’t use a set!); instead, the boxed value is what changes through the loop.
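
A quick interaction with the fixed version looks roughly like this:

> (account/k web-display)
web-read: enter (submit N) to continue the following
  Balance: 0; Change:
> (submit 5)
web-read: enter (submit N) to continue the following
  Balance: 5; Change:
> (submit -2)
web-read: enter (submit N) to continue the following
  Balance: 3; Change: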

Converting higher order functions

Next we try an even more challenging transformation: a higher order function. To get a chance to see more interesting examples, we’ll have some more code in this case.

For example, say that we want to compute the sum of squares of a list. First, the simple code (as above, there’s no need to wrap a web-display around the whole thing, just make it return the result):

(define (sum l) (foldl + 0 l))
(define (square n) (* n n))
(define (read-number prompt)
  (web-read (format "~a number" prompt)))
(web-display (sum (map (lambda (prompt)
                        (square (read-number prompt)))
                      '("First" "Second" "Third"))))

Again, we can begin with web-read — we want to convert it to the continuation version, which means that we need to convert read-number to get one too. This transformation is refreshingly trivial:

(define (read-number/k prompt k)
  (web-read/k (format "~a number" prompt) k))

This is an interesting point — it’s a simple definition that just passes k on, as is. The reason for this is similar to the simple continuation passing of the imperative loop: the pre-translation read-number is doing a simple tail call to web-read, so the evaluation context of the two is identical. The only difference is the prompt argument, and that’s the same format call.

Of course things would be different if format itself required a web interaction, since then we’d need some format/k, but without that things are really simple. The same goes for the two utility functions — sum and square: they’re not performing any web interaction so it seems likely that they’ll stay the same.

We now get to the main expression, which should obviously change since it needs to call read-number/k, so it needs to send it some continuation. By now, it should be clear that passing an identity function as a continuation is going to break the surrounding context once the running computation is killed for the web interaction. We need to somehow generate a top-level identity continuation and propagate it inside, and the sum call should be in that continuation together with the web-display call. Actually, if we do the usual thing and write the expression with a <*> hole, we get:

(web-display (sum (map (lambda (prompt) (square <*>))
                      '("First" "Second" "Third"))))

and continuing with the mechanical transformation that we know, we need to abstract over this expression+hole into a function, then pass it as an argument to read-number/k:

;; very broken
(read-number/k
(lambda (<*>)
  (web-display (sum (map (lambda (prompt) (square <*>))
                          '("First" "Second" "Third"))))))

But that can’t work in this case — we need to send read-number/k a prompt, but we can’t get a specific one since there is a list of them. In fact, this is related to a more serious problem — pulling read-number/k out like this is obviously broken since it means that it gets called only once; instead, we need to call it once for each prompt value.

The solution in this case is to convert map too:

(web-display (sum (map/k (lambda (prompt)
                          (square (read-number prompt)))
                        '("First" "Second" "Third")
                        ...some-continuation...)))

and of course we should move web-display and sum into that continuation:

(map/k (lambda (prompt) (square (read-number prompt)))
      '("First" "Second" "Third")
      (lambda (l) (web-display (sum l))))

We can now use read-number/k, but the question is: what should it get for its continuation?

(map/k (lambda (prompt) (square (read-number/k prompt ???)))
      '("First" "Second" "Third")
      (lambda (l) (web-display (sum l))))

Clearly, map/k will need to somehow communicate some continuation to the mapped function, which in turn will send it to read-number/k. This means that the mapped function should get converted too, and gain a k argument. To do this, we’ll first make things convenient and have a name for it (this is only for convenience, we could just as well convert the lambda directly):

(define (read-squared prompt)
  (square (read-number/k prompt ???)))
(map/k read-squared
      '("First" "Second" "Third")
      (lambda (l) (web-display (sum l))))

Then convert it in the now-obvious way:

(define (read-squared/k prompt k)
  (read-number/k prompt
                (lambda (n)
                  (k (square n)))))
(map/k read-squared/k
      '("First" "Second" "Third")
      (lambda (l) (web-display (sum l))))

Everything is in place now — except for map/k, of course. We’ll start with the definition of plain map:

(define (map f l)
  (if (null? l)
    null
    (cons (f (first l)) (map f (rest l)))))

The first thing in turning it into a map/k is adding a k input,

(define (map f l k)
  (if (null? l)
    null
    (cons (f (first l)) (map f (rest l)))))

and now we need to face the fact that the f input is itself one with a continuation — an f/k:

(define (map/k f/k l k)
  (if (null? l)
    null
    (cons (f (first l)) (map f (rest l)))))

Consider now the single f call — that should turn into a call to f/k with some continuation:

(define (map/k f/k l k)
  (if (null? l)
    null
    (cons (f/k (first l) ???) (map f (rest l)))))

but since f/k will involve a web interaction, it will lead to killing the cons around it. The solution is to move that cons into the continuation that is handed to f/k — and as usual, this involves the second cons argument — the continuation is derived from replacing the f/k call by a hole:

    (cons <*> (map f (rest l)))

and abstracting that hole, we get:

(define (map/k f/k l k)
  (if (null? l)
    null
    (f/k (first l)
        (lambda (result)
          (cons result (map f (rest l)))))))

We now do exactly the same for the recursive map call — it should use map/k with f/k and some continuation:

(define (map/k f/k l k)
  (if (null? l)
    null
    (f/k (first l)
        (lambda (result)
          (cons result (map/k f/k (rest l) ???))))))

and we need to move the surrounding cons yet again into this continuation. The holed expression is:

          (cons result <*>)

and abstracting that and moving it into the map/k continuation we get:

(define (map/k f/k l k)
  (if (null? l)
    null
    (f/k (first l)
        (lambda (result)
          (map/k f/k (rest l)
                  (lambda (new-rest)
                    (cons result new-rest)))))))

There is just one more problem with this — the k argument is never used. This implies two changes, since it needs to be used once in each of the conditional branches. Can you see where it should be added? (Try to do this before reading the code below.)

The complete code follows:

(define (map/k f/k l k)
  (if (null? l)
    (k null)
    (f/k (first l)
        (lambda (result)
          (map/k f/k (rest l)
                  (lambda (new-rest)
                    (k (cons result new-rest))))))))
(define (sum l) (foldl + 0 l))
(define (square n) (* n n))
(define (read-number/k prompt k)
  (web-read/k (format "~a number" prompt) k))
(define (read-squared/k prompt k)
  (read-number/k prompt (lambda (n) (k (square n)))))
(map/k read-squared/k
      '("First" "Second" "Third")
      (lambda (l) (web-display (sum l))))
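
Running this and entering 1, 2, and 3 should, roughly, end with:

> (submit 3)
web-display: 14

since the three squares are 1, 4, and 9.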

High-level Overview of Continuations

Very roughly speaking, the transformation we made turns a function call like

(...stuff... (f ...args...) ...more-stuff...)

into

(f/k ...args...
    (lambda (<*>)
      (...stuff... <*> ...more-stuff...)))

This is the essence of the solution to the statelessness problem: to remember where we left off, we conveniently flip the expression inside-out so that its context is all stored in its continuation. One thing to note is that we did this only for functions that had some kind of web interaction, either directly or indirectly (since in the indirect case they still need to carry around the continuation).

If we wanted to make this process a completely mechanical one, then we wouldn’t have been able to make this distinction. After all, a function like map is perfectly fine as it is, unless it happens to be used with a continuation-carrying function — and that’s something that we know only at runtime. We would therefore need to transform all function calls as above, which in turn means that all functions would need to get an extra continuation argument.

Here are a few things to note about such fully-transformed code:

What we did here is the tedious way of getting continuations: we basically implemented them by massaging our code, turning it inside-out into code with the right shape. The problem with this is that the resulting code is no longer similar to what we had originally written, which makes it more difficult to debug and to maintain. We therefore would like to have this done in some automatic way, ideally in a way that means that we can leave our plain original code as is.

An Automatic Continuation Converter

PLAI §18

The converted code that we produced manually above is said to be written in “Continuation Passing Style”, or CPS. What we’re looking for is for a way to generate such code automatically — a way to “CPS” a given source code. When you think about it, this process is essentially a source to source function which should be bolted onto the compiler or evaluator. In fact, if we want to do this in Racket, then this description makes it sound a lot like a macro — and indeed it could be implemented as such.

Note that “CPS” has two related but distinct meanings here: you could have code that is written “in CPS style”, which means that it handles explicit continuations. Uses of this term usually refer to using continuation functions in some places in the code, not for fully transformed code. The other meaning is used for fully-CPS-ed code, which is almost never written directly. In addition, “CPS” is often used as a verb — either the manual process of refactoring code into passing some continuations explicitly (in the first case), or the automatic process of fully converting code (in the second one).

Before we get to the actual implementation, consider how we did the translation — there was a difference in how we handled plain top-level expressions and library functions. In addition, we took some more shortcuts in the manual process — one such shortcut was that we didn’t treat all value expressions as possible computations that require conversion. For example, in a function application, we took the function sub-expression as a simple value and left it as is, but for an automatic translation we need to convert that expression too, since it might itself be a more complicated expression.

Instead of these special cases and shortcuts, we’ll do something more uniform: we will translate every expression into a function. This function will accept a receiver (= a continuation) and will pass it the value of the expression. This will be done for all expressions, even simple ones like plain numbers, for example, we will translate the 5 expression into (lambda (k) (k 5)), and the same goes for other constants and plain identifiers. Since we’re specifying a transformation here, we will treat it as a kind of a meta function and use a CPS[x] to make it easy to talk about:

CPS[5]
-->
(lambda (k) (k 5)) ; same for other numbers and constants

CPS[x]
-->
(lambda (k) (k x)) ; same as above for identifiers too

When we convert a primitive function application, we still do the usual thing, which is now easier to see as a general rule — using CPS[?] as the meta function that does the transformation:

CPS[(+ E1 E2)]
-->
(lambda (k)        ; everything turns to cont.-consuming functions
  (CPS[E1]        ; the CPS of E1 -- it expects a cont. argument
  (lambda (v1)    ; send this cont. to CPS[E1], so v1 is its value
    (CPS[E2]      ; same for E2 -- expects a cont.
      (lambda (v2) ; and again, v2 becomes the value of E2
        (k (+ v1 v2))))))) ; finally return the sum to our own cont.
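
For example, combining this rule with the rule for constants, CPS[(+ 1 2)] expands into the following (before any simplification; note how even the constants get funneled through their own continuations):

(lambda (k)
  ((lambda (k) (k 1))
   (lambda (v1)
     ((lambda (k) (k 2))
      (lambda (v2)
        (k (+ v1 v2)))))))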

In the above, you can see that (CPS[E] (lambda (v) …)) can be read as “evaluate E and bind the result to v”. (But note that the CPS conversion is not doing any evaluation; it just reorders code to determine how it gets evaluated when it later runs — so “compute” might be a better term to use here.) With this in mind, we can deal with other function applications: evaluate the function form, evaluate the argument form, then apply the first value to the second value, and finally wrap everything with a (lambda (k) …) and return the result to this continuation:

CPS[(E1 E2)]
-->
(lambda (k)
  (CPS[E1]        ; bind the result of evaluating E1
  (lambda (v1)    ; to v1
    (CPS[E2]      ; and the result of evaluating E2
      (lambda (v2) ; to v2
        (k (v1 v2))))))) ; apply and return the result

But this is the rule that we should use for primitive non-continuation functions only — it’s similar to what we did with + (except that we skipped evaluating + since it’s known). Instead, we’re dealing here with functions that are defined in the “web language” (in the code that is being converted), and as we’ve seen, these functions get a k argument which they use to return the result to. That was the whole point: pass k on to functions, and have them return the value directly to the k context. So the last part of the above should be fixed:

CPS[(E1 E2)]
-->
(lambda (k)
  (CPS[E1]        ; bind the result of evaluating E1
  (lambda (v1)    ; to v1
    (CPS[E2]      ; and the result of evaluating E2
      (lambda (v2) ; to v2
        (v1 v2 k)))))) ; apply and have it return the result to k

There’s a flip side to this transformation — whenever a function is created with a lambda form, we need to add a continuation argument to it, and make it return its value to that continuation. Then, we need to “lift” the whole function as usual, using the same transformation we used for other values in the above. We’ll use k for the latter continuation argument, and cont for the former:

CPS[(lambda (arg) E)]
-->
(lambda (k) ; this is the usual
  (k        ; lifting of values
  (lambda (arg cont) ; the translated function has a cont. input
    (CPS[E] cont)))) ; the translated body returns its result to it

It is interesting to note the two continuations in the translated result: the first one (using k) is the continuation for the function value, and the second one (using cont) is the continuation used when the function is applied. Comparing this to our evaluators — we can say that the first roughly corresponds to evaluating a function form to get a closure, and the second corresponds to evaluating the body of a function when it’s applied, which means that cont is the dynamic continuation that matches the dynamic context in which the function is executed. Inspecting the CPS-ed form of the identity function is unsurprising: it simply passes its first argument (the “real” one) into the continuation since that’s how we return values in this system:

CPS[(lambda (x) x)]
-->
(lambda (k)
  (k
  (lambda (x cont)
    (CPS[x] cont))))
-->
(lambda (k)
  (k
  (lambda (x cont)
    ((lambda (k) (k x)) cont))))
--> ; reduce the redundant function application
(lambda (k)
  (k
  (lambda (x cont)
    (cont x))))

Note the reduction of a trivial application — doing this systematic conversion leads to many of them.

We now get to the transformation of the form that is the main reason we started with all of this — web-read. This transformation is simple, it just passes along the continuation to web-read/k:

CPS[(web-read E)]
-->
(lambda (k)
  (CPS[E]          ; evaluate the prompt expression
  (lambda (prompt) ; and bind it to prompt
    (web-read/k prompt k)))) ; use the prompt and the current cont.

We also need to deal with web-display — we changed the function calling protocol by adding a continuation argument, but web-display is defined outside of the CPS-ed language, so it doesn’t have that argument and needs its own rule. An alternative fix would be to move its definition into the language, but then we’d still need special treatment for the error that it uses.

CPS[(web-display E)]
-->
(lambda (k)
  (CPS[E]          ; evaluate the expression
  (lambda (val)  ; and bind it to val
    (web-display val))))

As you can see, all of these transformations are simple rewrites. We can use a simple syntax-rules macro to implement this transformation, essentially creating a DSL by translating code into plain Racket. Note that in the specification above we’ve implicitly used some parts of the input as keywords — +, lambda, web-read, and web-display (and, in the helper macro below, define) — and this is reflected in the macro code. The order of the rules is important: for example, we need to match first on (web-read E) and then on the more generic (E1 E2), and we ensure that the last default lifting of values gets only simple expressions by matching on (x …) before that.

(define-syntax CPS
  (syntax-rules (+ lambda web-read web-display) ;*** keywords
    [(CPS (+ E1 E2))
    (lambda (k)
      ((CPS E1)
        (lambda (v1)
          ((CPS E2)
          (lambda (v2)
            (k (+ v1 v2)))))))]
    [(CPS (web-read E))
    (lambda (k)
      ((CPS E)
        (lambda (val)
          (web-read/k val k))))]
    [(CPS (web-display E))
    (lambda (k)                ; could be:
      ((CPS E)                ;    (lambda (k)
        (lambda (val)          ;      ((CPS E) web-display))
          (web-display val))))] ; but keep it looking uniform
    [(CPS (lambda (arg) E))
    (lambda (k)
      (k (lambda (arg cont)
            ((CPS E)
            cont))))]
    [(CPS (E1 E2))
    (lambda (k)
      ((CPS E1)
        (lambda (v1)
          ((CPS E2)
          (lambda (v2)
            (v1 v2 k))))))]
    ;; the following pattern ensures that the last rule is used only
    ;; with simple values and identifiers
    [(CPS (x ...))
    ---syntax-error---]
    [(CPS V)
    (lambda (k) (k V))]))

The transformation that this code implements is one of the oldest CPS transformations — it is called the Fischer Call by Value CPS transformation, and is due to Michael Fischer. There has been much more research into such transformations — the Fischer translation, while easy to understand due to its uniformity, introduces significant overhead in the form of many new functions in its result. Some of these are easy to optimize — for example, things like ((lambda (k) (k v)) E) could be optimized to just (E v) assuming a left-to-right evaluation order or proving that E has no side-effects (and Racket performs this optimization and several others), but some of the overhead is not easily optimized. There have been several other CPS transformations, in an attempt to avoid such overhead.
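
For a concrete sense of this, take the CPS[(+ 1 2)] expansion from above. Each step below is one such ((lambda (k) (k v)) E) simplification followed by a plain substitution of the resulting constant, and the end result is what we would have written by hand:

(lambda (k)
  ((lambda (k) (k 1))
   (lambda (v1)
     ((lambda (k) (k 2))
      (lambda (v2)
        (k (+ v1 v2)))))))
-->
(lambda (k)
  ((lambda (k) (k 2))
   (lambda (v2)
     (k (+ 1 v2)))))
-->
(lambda (k) (k (+ 1 2)))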

Finally, trying to run code using this macro can be a little awkward. We need to explicitly wrap the expression in each definition with a CPS, and we need to invoke top-level expressions with a particular continuation — web-display in our context. We can do all of that with a convenience macro that will transform a number of definitions followed by an optional expression.

Note the use of begin — usually, it is intended for sequential execution, but it is also used in macro result expressions when we need a macro to produce multiple expressions (since the result of a macro must be a single S-expression) — this is why it’s used here, not for sequencing side-effects.

(define-syntax CPS-code
  (syntax-rules (define)
    [(CPS-code (define (id arg) E) more ...)
    ;; simple translation to `lambda'
    (CPS-code (define id (lambda (arg) E)) more ...)]
    [(CPS-code (define id E) more ...)
    (begin (define id ((CPS E) (lambda (x) x)))
            (CPS-code more ...))]
    [(CPS-code last-expr)
    ((CPS last-expr) web-display)]
    [(CPS-code) ; happens when there is no plain expr at
    (begin)])) ; the end so do nothing in this case

The interesting thing that this macro does is set up a proper continuation for definitions and top-level expressions. In the latter case, it passes web-display as the continuation, and in the former case, it passes the identity function as the continuation — which is used to “lower” the lifted value from its continuation form into a plain value. Using the identity function as a continuation is not really correct: it means that if evaluating the expression to be bound performs some web interaction, then the definition will be aborted, leaving the identifier unbound. The way to solve this is by arranging for the definition operation to be done in the continuation, for example, we can get closer to this using an explicit mutation step:

    [(CPS-code (define id E) more ...)
    (begin (define id #f)
            ((CPS E) (lambda (x) (set! id x)))
            (CPS-code more ...))]

But there are two main problems with this: first, the rest of the code — (CPS-code more ...) — should also be done in the continuation, which will defeat the global definitions. We could try to use the continuation to get the scope:

    [(CPS-code (define id E) more ...)
    ((CPS E) (lambda (id) (CPS-code more ...)))]

but that breaks recursive definitions. In any case, the second problem is that this is not accurate even if we solved the above: we really need to have parts of the Racket definition mechanism exposed to make it work. So we’ll settle for the simple version as an approximation. It works fine if we use definitions only for functions, and invoke them in toplevel expressions.

For reference, the complete code at this point follows.


web-simple-language.rkt
;; Simulation of web interactions with a CPS converter (not an
;; interpreter)

#lang racket

(define error raise-user-error)

(define (nothing-to-do ignored)
  (error 'nothing-to-do "No computation to resume."))

(define resumer (box nothing-to-do))

(define (web-display n)
  (set-box! resumer nothing-to-do)
  (error 'web-display "~s" n))

(define (web-read/k prompt k)
  (set-box! resumer k)
  (error 'web-read
        "enter (submit N) to continue the following\n  ~a:"
        prompt))

(define (submit n)
  ;; to avoid mistakes, we clear out `resumer' before invoking it
  (let ([k (unbox resumer)])
    (set-box! resumer nothing-to-do)
    (k n)))

(define-syntax CPS
  (syntax-rules (+ lambda web-read web-display) ;*** keywords
    [(CPS (+ E1 E2))
    (lambda (k)
      ((CPS E1)
        (lambda (v1)
          ((CPS E2)
          (lambda (v2)
            (k (+ v1 v2)))))))]
    [(CPS (web-read E))
    (lambda (k)
      ((CPS E)
        (lambda (val)
          (web-read/k val k))))]
    [(CPS (web-display E))
    (lambda (k)
      ((CPS E)
        (lambda (val)
          (web-display val))))]
    [(CPS (lambda (arg) E))
    (lambda (k)
      (k (lambda (arg cont)
            ((CPS E)
            cont))))]
    [(CPS (E1 E2))
    (lambda (k)
      ((CPS E1)
        (lambda (v1)
          ((CPS E2)
          (lambda (v2)
            (v1 v2 k))))))]
    ;; the following pattern ensures that the last rule is used only
    ;; with simple values and identifiers
    [(CPS (x ...))
    ---syntax-error---]
    [(CPS V) ; <-- only numbers, other literals, and identifiers
    (lambda (k)
      (k V))]))

(define-syntax CPS-code
  (syntax-rules (define)
    [(CPS-code (define (id arg) E) more ...)
    ;; simple translation to `lambda'
    (CPS-code (define id (lambda (arg) E)) more ...)]
    [(CPS-code (define id E) more ...)
    (begin (define id ((CPS E) (lambda (x) x)))
            (CPS-code more ...))]
    [(CPS-code last-expr)
    ((CPS last-expr) web-display)]
    [(CPS-code) ; happens when there is no plain expr at
    (begin)])) ; the end so do nothing in this case

Here is a quick example of using this:

(CPS-code
  (web-display (+ (web-read "First number")
                  (web-read "Second number"))))

Note that this code uses web-display, which is not really needed since CPS-code would use it as the top-level continuation. (Can you see why it works the same either way?) So this is even closer to a plain program:

(CPS-code (+ (web-read "First number")
            (web-read "Second number")))

A slightly more complicated example:

(CPS-code
  (define (add n)
    (lambda (m)
      (+ m n)))
  (define (read-and-add n)
    ((add n) (web-read "Another number")))
  (read-and-add (web-read "A number")))

Using this for the other examples is not possible with the current state of the translation macro. These examples would require extending the CPS transformation to handle functions of any arity and multiple expressions in a body, and making it recognize additional primitive functions. None of these extensions is difficult; they would just make the macro more verbose. (A sketch of one such extension follows.)
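
For example, here is a sketch of two additional rules that would handle two-argument functions and applications, assuming the same calling convention of passing the continuation as an extra last argument (these would go after the existing rules and before the catch-all error rule):

;; an assumed extension sketch, not part of the code above
[(CPS (lambda (arg1 arg2) E))
 (lambda (k)
   (k (lambda (arg1 arg2 cont)
        ((CPS E) cont))))]
[(CPS (E1 E2 E3))
 (lambda (k)
   ((CPS E1)
    (lambda (v1)
      ((CPS E2)
       (lambda (v2)
         ((CPS E3)
          (lambda (v3)
            (v1 v2 v3 k))))))))]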

Continuations as a Language Feature

This is conceptually between PLAI §18 and PLAI §19

In the list of CPS transformation rules there were two rules that deserve additional attention in how they deal with their continuation.

First, note the rule for web-display:

[(CPS (web-display E))
(lambda (k)
  ((CPS E)
    (lambda (val)
      (web-display val))))]

— it simply ignores its continuation. This means that whenever web-display is used, the rest of the computation is simply discarded, which seems wrong — it’s the kind of problem that we’ve encountered several times when we discussed transforming web application code. Of course, this doesn’t matter much for our implementation of web-display since it aborts the computation anyway using error — but what if we did that intentionally? We would get a kind of an “abort now” construct: we can implement this as a new abort form that does just that:

(define-syntax CPS
  (syntax-rules (...same... abort) ;*** new keyword
    ...
    [(CPS (abort E))
    (lambda (k)
      ((CPS E) (lambda (x) x)))] ; ignore `k'
    ...))

You could try that — (CPS-code (+ 1 2)) produces 3 as “web output”, but (CPS-code (+ 1 (abort 2))) simply returns 2. In fact, it doesn’t matter how complicated the code is — as soon as it encounters an abort, the whole computation is discarded and we immediately get its result; for example, try this:

(CPS-code
  (define (add n)
    (lambda (m)
      (+ m n)))
  (define (read-and-add n)
    ((abort 999) ((add n) (web-read "Another number"))))
  (read-and-add (web-read "A number")))

it reads the first number and then it immediately returns 999. This seems like a potentially useful feature, except that it’s a little too “untamed” — it aborts the program completely, getting all the way back to the top-level with a result. (It’s actually quite similar to throwing an exception, only without a way to catch it.) It would be more useful to somehow control the part of the computation that gets aborted instead.

That leads to the second exceptional form in our translator: web-read. If you look closely at all of our transformation rules, you’ll see that the continuation argument is never made accessible to user code — the k argument is always generated by the macro (and inaccessible to user code due to the hygienic macro system). The continuation is only passed as the extra argument to user functions, but in the rule that adds this argument:

    [(CPS (lambda (arg) E))
    (lambda (k)
      (k (lambda (arg cont)
            ((CPS E)
            cont))))]

the new cont argument is introduced by the macro so it is inaccessible as well. The only place where the k argument is actually used is in the web-read rule, where it is sent to the resulting web-read/k call. (This makes sense, since web reading is how we mimic web interaction, and therefore it is the only reason for CPS-ing our code.) However, in our fake web framework this function is given as a built-in, so the continuation is still not accessible to user code.

What if we pass the continuation argument to a user function in a way that intentionally exposes it? We can achieve this by writing a function that is similar to web-read/k, except that it will somehow pass the continuation to user code. A simple way to do that is to have the new function take a function value as its primary input, and call this function with the continuation (which is still received as the implicit second argument):

(define (call-k f k)
  (f k))

This is close, but it fails because it doesn’t follow our translated function calling protocol, where every function receives two inputs — the original argument and the continuation. Because of this, we need to call f with a second continuation value, which is k as well:

(define (call-k f k)
  (f k k))

But we also fail to follow the calling protocol by passing k as is: it is a continuation value, which in our CPS system is a one-argument function. (In fact, this is another indication that continuations are not accessible to user code — they don’t follow the same function calling protocol.) It is best to think about continuations as meta values that are not part of the user language just yet. To make such a continuation usable, we need to wrap it so we get the usual two-argument function which user code can call:

(define (call-k f k)
  (f (lambda (val cont) (k val)) k))

This explicit wrapping is related to the fact that continuations are a kind of meta-level value — and the wrapping is needed to “lower” it to the user’s world. (This is sometimes called “reification”: a meta value is reified as a user value.)

Using this new definition, we can write code that can access its own continuation as a plain value. Here is a simple example that grabs the top-level continuation and labels it abort, then uses it in the same way we’ve used the above abort:

> (CPS-code (call-k (lambda (abort) (+ 1 (abort 2)))))
web-display: 2

But we can grab any continuation we want, not just the top-level one:

> (CPS-code (+ 100 (call-k (lambda (abort) (+ 1 (abort 2))))))
web-display: 102

Side note: how come we didn’t need a new CPS translation rule for this function? Because call-k is already written in a way that follows our calling convention, so no translation rule is needed. In fact, no such rule is needed for web-read either — except for changing the call to web-read/k, it does exactly the same thing that a function call does, so we can simply rename web-read/k as web-read and drop the rule. (Note that the rewritten function call will have a (CPS web-read) — but CPS-ing an identifier results in the identifier itself.) The same holds for web-display — we just need to make it adhere to the calling convention and add a k input which is ignored. One minor complication is that web-display is also used as a continuation value for a top-level expression in CPS-code — so we need to wrap it there.

The resulting code follows:

#lang racket

(define error raise-user-error)

(define (nothing-to-do ignored)
  (error 'nothing-to-do "No computation to resume."))

(define resumer (box nothing-to-do))

(define (web-display n k) ; note that k is not used!
  (set-box! resumer nothing-to-do)
  (error 'web-display "~s" n))

(define (web-read prompt k)
  (set-box! resumer k)
  (error 'web-read
        "enter (submit N) to continue the following\n  ~a:"
        prompt))

(define (submit n)
  ;; to avoid mistakes, we clear out `resumer' before invoking it
  (let ([k (unbox resumer)])
    (set-box! resumer nothing-to-do)
    (k n)))

(define (call-k f k)
  (f (lambda (val cont) (k val)) k))

(define-syntax CPS
  (syntax-rules (+ lambda)
    [(CPS (+ E1 E2))
    (lambda (k)
      ((CPS E1)
        (lambda (v1)
          ((CPS E2)
          (lambda (v2)
            (k (+ v1 v2)))))))]
    [(CPS (lambda (arg) E))
    (lambda (k)
      (k (lambda (arg cont)
            ((CPS E)
            cont))))]
    [(CPS (E1 E2))
    (lambda (k)
      ((CPS E1)
        (lambda (v1)
          ((CPS E2)
          (lambda (v2)
            (v1 v2 k))))))]
    ;; the following pattern ensures that the last rule is used only
    ;; with simple values and identifiers
    [(CPS (x ...))
    ---syntax-error---]
    [(CPS V)
    (lambda (k)
      (k V))]))

(define-syntax CPS-code
  (syntax-rules (define)
    [(CPS-code (define (id arg) E) more ...)
    ;; simple translation to `lambda'
    (CPS-code (define id (lambda (arg) E)) more ...)]
    [(CPS-code (define id E) more ...)
    (begin (define id ((CPS E) (lambda (x) x)))
            (CPS-code more ...))]
    [(CPS-code last-expr)
    ((CPS last-expr) (lambda (val) (web-display val 'whatever)))]
    [(CPS-code) ; happens when there is no plain expr at
    (begin)])) ; the end so do nothing in this case

Obviously, given call-k we could implement web-read/k in user code: call-k makes the current continuation available, and going on from there is simple (it would require a slightly richer language, so we will do that in a minute). In fact, there is no real reason to stick to the fake web framework to play with continuations. (Note: since we don’t throw an error to display the results, we can also allow multiple non-definition expressions in CPS-code.)


cps-language.rkt
;; A language that is CPS-transformed (not an interpreter)

#lang racket

(define (call-k f k)
  (f (lambda (val cont) (k val)) k))

(define-syntax CPS
  (syntax-rules (+ lambda)
    [(CPS (+ E1 E2))
    (lambda (k)
      ((CPS E1)
        (lambda (v1)
          ((CPS E2)
          (lambda (v2)
            (k (+ v1 v2)))))))]
    [(CPS (lambda (arg) E))
    (lambda (k)
      (k (lambda (arg cont)
            ((CPS E)
            cont))))]
    [(CPS (E1 E2))
    (lambda (k)
      ((CPS E1)
        (lambda (v1)
          ((CPS E2)
          (lambda (v2)
            (v1 v2 k))))))]
    ;; the following pattern ensures that the last rule is used only
    ;; with simple values and identifiers
    [(CPS (x ...))
    ---syntax-error---]
    [(CPS V)
    (lambda (k)
      (k V))]))

(define-syntax CPS-code
  (syntax-rules (define)
    [(CPS-code (define (id arg) E) more ...)
    ;; simple translation to `lambda'
    (CPS-code (define id (lambda (arg) E)) more ...)]
    [(CPS-code (define id E) more ...)
    (begin (define id ((CPS E) (lambda (x) x)))
            (CPS-code more ...))]
    [(CPS-code expr more ...)
    (begin ((CPS expr) (lambda (x) x))
            (CPS-code more ...))]
    [(CPS-code) (begin)])) ; done

(CPS-code (call-k (lambda (abort) (+ 1 (abort 2))))
          (+ 100 (call-k (lambda (abort) (+ 1 (abort 2))))))

Continuations in Racket

As we have seen, CPS-ing code makes it possible to implement web applications with a convenient interface. This is fine in theory, but in practice it suffers from some problems. Some of these problems are technicalities: it relies on proper implementation of tail calls (since all calls are tail calls), and it represents the computation stack as a chain of closures and therefore prevents the usual optimizations. But there is one problem that is much more serious: it is a global transformation, and as such, it requires access to the complete program code. As an example, consider how CPS-code deals with definitions: it uses an identity function as the continuation, but that wasn’t the proper way to do them, since it would break if computing the value performs some web interaction. A good solution would instead put the side-effect that define performs in the continuation — but this side effect is not even available for us when we work inside Racket.

Because of this, the proper way to make continuations available is for the language implementation itself to provide them. There are a few languages that do just that — and Scheme pioneered this as part of the core requirements that its standard dictates: a Scheme implementation needs to provide call-with-current-continuation, which is the same tool as our call-k. Usually it is also provided with a shorter name, call/cc. Here are our two examples, re-done with Racket’s built-in call/cc:

(call/cc (lambda (abort) (+ 1 (abort 2))))
(+ 100 (call/cc (lambda (abort) (+ 1 (abort 2)))))

[Side note: continuations as we see here are still provided only by a few “fringe” functional languages. However, they are slowly making their way into more mainstream languages — Ruby has these continuations too, and several other languages provide more limited variations, like generators in Python. On the other hand, Racket provides much richer functionality: it has delimited continuations (which represent only a part of a computation context), and its continuations are also composable — a property that goes beyond what we see here.]

Racket also comes with a more convenient let/cc form, which exposes the “grab the current continuation” pattern more succinctly — it’s a simple macro definition:

(define-syntax-rule (let/cc k body ...)
  (call/cc (lambda (k) body ...)))

and the two examples become:

(let/cc abort (+ 1 (abort 2)))
(+ 100 (let/cc abort (+ 1 (abort 2))))

When it comes to choosing an implementation strategy, there are two common approaches: one is to do the CPS transformation at the compiler level, and another is to capture the actual runtime stack and wrap it in an applicable continuation object. The former can lead to very efficient compilation of continuation-heavy programs, but the latter makes it easier to deal with foreign functions (consider higher-order functions that come from a library whose source you don’t have) and allows using the normal runtime stack, which CPUs handle very efficiently. Racket implements continuations with the latter approach mainly for these reasons.

To see how these continuations expose some of the implementation details that we normally don’t have access to, consider grabbing the continuation of a definition expression:

> (define b (box #f))
> (define a (let/cc k (set-box! b k) 123))
> a
123
> ((unbox b) 1000)
> a
1000

Note that using a top-level (let/cc abort …code…) is not really aborting for a reason that is related to this: a true abort must capture the continuation before any computation begins. A natural place to do this is in the REPL implementation.

Finally, we can use these to re-implement our fake web framework, using Racket’s continuations instead of performing our own transformation. The only thing that requires continuations is our web-read — and using the Racket facilities we can implement it as follows:

(define (web-read prompt) ; no `k' input
  (let/cc k ; instead, get it with `let/cc'
    ;; and the body is the same as it was
    (set-box! resumer k)
    (error 'web-read
          "enter (submit N) to continue the following\n  ~a:"
          prompt)))

Note that this kind of implementation is no longer a “language” — it is implemented as a plain library now, demonstrating the flexibility that having continuations in our language enables. While this is still just our fake toy framework, it is the core way in which the Racket web server is implemented (see the “addition server” implementation above), using a hash table that maps freshly made URLs to stored continuations (a rough sketch of that idea appears right after the code). The complete code follows:


continuation-based-web-library.rkt
;; Simulation of web interactions with Racket's built-in
;; continuation facility

#lang racket

(define error raise-user-error)

(define (nothing-to-do ignored)
  (error 'nothing-to-do "No computation to resume."))

(define resumer (box nothing-to-do))

(define (web-display n)
  (set-box! resumer nothing-to-do)
  (error 'web-display "~s" n))

(define (web-read prompt)
  (let/cc k
    (set-box! resumer k)
    (error 'web-read
          "enter (submit N) to continue the following\n  ~a:"
          prompt)))

(define (submit n)
  ;; to avoid mistakes, we clear out `resumer' before invoking it
  (let ([k (unbox resumer)])
    (set-box! resumer nothing-to-do)
    (k n)))
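
A rough sketch of that hash-table idea (hypothetical names, not the actual Racket web server code):

;; map freshly made keys to stored continuations; the server would
;; embed each key in a URL that the browser later requests
(define continuation-table (make-hash))

(define (suspend-computation k)
  (let ([key (gensym 'k)])
    (hash-set! continuation-table key k)
    key))

(define (resume-computation key value)
  ((hash-ref continuation-table key) value))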

Using this library, you can try out some of the earlier examples, which now become much simpler since there is no need to do any CPS-ing. For example, the code that required transforming map into a map/k can now use the plain map directly. In fact, that’s the exact code we started that example with — no changes needed:

(define (sum l) (foldl + 0 l))
(define (square n) (* n n))
(define (read-number prompt)
  (web-read (format "~a number" prompt)))
(web-display (sum (map (lambda (prompt)
                        (square (read-number prompt)))
                      '("First" "Second" "Third"))))

Note how web-read is executed directly — it is a plain library function.

extra Playing with Continuations

PLAI §19

So far we’ve seen a number of “tricks” that can be done with continuations. The simplest was aborting a computation — here’s an implementation of functions with a return that can be used to exit the function early:

(define-syntax (fun stx)
  (syntax-case stx ()
    [(_ name (x ...) body ...)
    (with-syntax ([return (datum->syntax #'name 'return)])
      #'(define (name x ...) (let/cc return body ...)))]))

;; try it:
(fun mult (list)
  (define (loop list)
    (cond [(null? list) 1]
          [(zero? (first list)) (return 0)] ; early return
          [else (* (first list) (loop (rest list)))]))
  (loop list))
(mult '(1 2 3 0 x))

[Side note: This is a cheap demonstration. If we rewrite the loop tail-recursively, then aborting it is simple — just return 0 instead of continuing the loop. And that’s not a coincidence: aborting from a tail-calling loop is easy, and CPS makes such aborts possible by making only tail calls.]
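
For comparison, here is a sketch of such a tail-recursive version, where the early exit is just a plain return of 0 and no continuation is needed:

(define (mult list)
  (let loop ([list list] [acc 1])
    (cond [(null? list) acc]
          [(zero? (first list)) 0] ; "abort" by returning directly
          [else (loop (rest list) (* acc (first list)))])))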

But such uses of continuations are simple because they’re used only to “jump out” of some (dynamic) context. More exotic uses of continuations rely on the ability to jump into a previously captured continuation. In fact, our web-read implementation does just that (and more). The main difference is that in the former case the continuation is used exactly once — either explicitly by using it, or implicitly by returning a value (without aborting). If a continuation can be used after the corresponding computation is over, then why not use it over and over again… For example, we can try an infinite loop by capturing a continuation and later using it as a jump target:

(define (foo)
  (define loop (let/cc k k))    ; captured only for the context
  (printf "Meh.\n")
  (loop 'something))            ; need to give it some argument

This almost works — we get two printouts, so clearly the jump was successful. The problem is that the captured loop continuation is the one that expects a value to bind to loop itself, so the second attempted call has 'something as the value of loop, obviously leading to an error. This can be used as a hint for a solution — simply pass the continuation to itself:

(define (foo)
  (define loop (let/cc k k))
  (printf "Meh.\n")
  (loop loop))                  ; keep the value of `loop'

Another way around this problem is to capture the continuation that is just after the binding — but we can’t do that (try it…). Instead, we can use side-effects:

(define (foo)
  (define loop (box #f))
  (let/cc k (set-box! loop k))  ; cont. of the outer expression
  (printf "Meh.\n")
  ((unbox loop) 'something))

Note: the 'something value goes to the continuation which makes it the result of the (let/cc ...) expression — which means that it’s never actually used now.

This might seem like a solution that is not as “clean”, since it uses mutation — but note that the problem that we’re solving stems from a continuation that exposes the mutation that Racket performs when it creates a new binding.

Here’s an example of a loop that does something a little more interesting in a goto-like kind of way:

(define (foo)
  (define n (box 0))
  (define loop (box #f))
  (let/cc k (set-box! loop k))
  (set-box! n (add1 (unbox n)))
  (printf "n = ~s\n" (unbox n))
  ((unbox loop)))

Note: in this example the continuation is called without any inputs. How is this possible? As we’ve seen, the 'something value in the last example is the never-used result of the let/cc expression. In this case, the continuation is called with no input, which means that the let/cc expression evaluates to … nothing! This is not just some void value, but no value at all. The complete story is that in Racket expressions can evaluate to multiple values, and in this case, it’s no values at all.
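
As an aside, here is what multiple (and zero) values look like in plain Racket, independent of continuations:

> (values 1 2 3) ; an expression with three result values
1
2
3
> (values)       ; an expression with no result values at all
> (define-values (q r) (values 4 5))
> (+ q r)
9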

Given such examples it’s no wonder that continuations tend to have a reputation for being “similar to goto in their power”. This reputation has some vague high-level justification in that both features can produce obscure “spaghetti code” — but in practice they are very different. On one hand continuations are more limited: unlike goto, you can only jump to a continuation that you have already “visited”. On the other hand, jumping to a continuation does much more than jumping to a goto label: the latter changes only the next instruction to execute (the “program counter” register), while the former changes the current computation context (in low-level terms, both the PC register and the stack). (See also the setjmp() and longjmp() functions in C, or the context-related functions (getcontext(), setcontext(), swapcontext()).)

To demonstrate how different continuations are from plain gotos, we’ll start with a variation of the above loop — instead of performing the loop we just store its continuation in a global box, and we return the counter value instead of printing it:

(define loop (box #f))

(define (foo)
  (define n (box 0))
  (let/cc k (set-box! loop k))
  (set-box! n (add1 (unbox n)))
  (unbox n))

Now, the first time we call (foo), we get 1 as expected, and then we can call (unbox loop) to re-invoke the continuation and get the following numbers:

> (foo)
1
> ((unbox loop))
2
> ((unbox loop))
3

[Interesting experiment: try doing the same, but use (list (foo)) as the first interaction, and the same ((unbox loop)) later.]

The difference between this use and a goto is now evident: we’re not just jumping to a label — we’re jumping back into a computation that returns the next number. In fact, the continuation can include a context that is outside of foo; for example, we can invoke the continuation from a different function, and loop can be used to search for a specific number:

(define (bar)
  (let ([x (foo)])
    (unless (> x 10) ((unbox loop)))
    x))

and now (bar) returns 11. The loop now essentially goes not only over the body of foo but also over parts of bar. Here’s an example that makes it even more obvious:

(define (bar)
  (let* ([x (foo)]
        [y (* x 2)])
    (unless (> x 10) ((unbox loop)))
    y))

Here the y binding becomes part of the loop as well. Our foo can be considered a kind of producer for natural numbers that can be used to find a specific number, invoking the loop continuation to try the next number when the last one isn’t the one we want.

extra The Ambiguous Operator: amb

Our foo is actually a very limited version of something that is known as “McCarthy’s Ambiguous Operator”, usually named amb. This operator is used to perform a kind of a backtrack-able choice among several values.

To develop our foo into such an amb, we begin by renaming foo as amb and loop as fail, and instead of returning natural numbers in sequence we’ll have it take a list of values and return values from this list. Also, we will use mutable variables instead of boxes to reduce clutter (a feature that we’ve mostly ignored so far). The resulting code is:

(define fail #f)

(define (amb choices)
  (let/cc k (set! fail k))
  (let ([choice (first choices)])
    (set! choices (rest choices))
    choice))

Of course, we also need to check that we actually have values to return:

(define fail #f)

(define (amb choices)
  (let/cc k (set! fail k))
  (if (pair? choices)
    (let ([choice (first choices)])
      (set! choices (rest choices))
      choice)
    (error "no more choices!")))

The resulting amb can be used in a similar way to the earlier foo:

(define (bar)
  (let* ([x (amb '(5 10 15 20))]
        [y (* x 2)])
    (unless (> x 10) (fail))
    y))
(bar)

This is somewhat useful, but searching through a simple list of values is not too exciting. Specifically, we can have only one search at a time. Making it possible to have multiple searches is not too hard: instead of a single failure continuation, store a stack of them, where each new amb pushes a new one on it.

We define failures as this stack and push a new failure continuation in each amb. fail becomes a function that simply invokes the most recent failure continuation, if one exists.

(define failures null)

(define (fail)
  (if (pair? failures)
    ((first failures))
    (error "no more choices!")))

(define (amb choices)
  (let/cc k (set! failures (cons k failures)))
  (if (pair? choices)
    (let ([choice (first choices)])
      (set! choices (rest choices))
      choice)
    (error "no more choices!")))

This is close, but there’s still something missing. When we run out of options from the choices list, we shouldn’t just throw an error — instead, we should invoke the previous failure continuation, if there is one. In other words, we want to use fail, but before we do, we need to pop off the top-most failure continuation, since it is the one that we are currently dealing with:

(define failures null)

(define (fail)
  (if (pair? failures)
    ((first failures))
    (error "no more choices!")))

(define (amb choices)
  (let/cc k (set! failures (cons k failures)))
  (if (pair? choices)
    (let ([choice (first choices)])
      (set! choices (rest choices))
      choice)
    (begin (set! failures (rest failures))
          (fail))))

(define (assert condition)
  (unless condition (fail)))

Note the addition of a tiny assert utility, something that is commonly done with amb. We can now play with this code as before:

(let* ([x (amb '(5 10 15 20))]
      [y (* x 2)])
  (unless (> x 10) (fail))
  y)

But of course the new feature is more impressive; for example, find two numbers that sum to 6, where the first is the square of the second:

(let ([a (amb '(1 2 3 4 5 6 7 8 9 10))]
      [b (amb '(1 2 3 4 5 6 7 8 9 10))])
  (assert (= 6 (+ a b)))
  (assert (= a (* b b)))
  (list a b))

Find a Pythagorean triplet:

(let ([a (amb '(1 2 3 4 5 6))]
      [b (amb '(1 2 3 4 5 6))]
      [c (amb '(1 2 3 4 5 6))])
  (assert (= (* a a) (+ (* b b) (* c c))))
  (list a b c))

Specifying the list of integers is tedious, but easily abstracted into a function:

(let* ([int6 (lambda () (amb '(1 2 3 4 5 6)))]
      [a (int6)]
      [b (int6)]
      [c (int6)])
  (assert (= (* a a) (+ (* b b) (* c c))))
  (list a b c))

A more impressive demonstration is finding a solution to tests known as “Self-referential Aptitude Tests”; for example, here’s one such test (by Christian Schulte and Gert Smolka) — it’s a 10-question multiple-choice test:

  1. The first question whose answer is b is question (a) 2; (b) 3; (c) 4; (d) 5; (e) 6.
  2. The only two consecutive questions with identical answers are questions (a) 2 and 3; (b) 3 and 4; (c) 4 and 5; (d) 5 and 6; (e) 6 and 7.
  3. The answer to this question is the same as the answer to question (a) 1; (b) 2; (c) 4; (d) 7; (e) 6.
  4. The number of questions with the answer a is (a) 0; (b) 1; (c) 2; (d) 3; (e) 4.
  5. The answer to this question is the same as the answer to question (a) 10; (b) 9; (c) 8; (d) 7; (e) 6.
  6. The number of questions with answer a equals the number of questions with answer (a) b; (b) c; (c) d; (d) e; (e) none of the above.
  7. Alphabetically, the answer to this question and the answer to the following question are (a) 4 apart; (b) 3 apart; (c) 2 apart; (d) 1 apart; (e) the same.
  8. The number of questions whose answers are vowels is (a) 2; (b) 3; (c) 4; (d) 5; (e) 6.
  9. The number of questions whose answer is a consonant is (a) a prime; (b) a factorial; (c) a square; (d) a cube; (e) divisible by 5.
  10. The answer to this question is (a) a; (b) b; (c) c; (d) d; (e) e.

and the solution is pretty much a straightforward translation:

(define (self-test)
  (define (choose-letter) (amb '(a b c d e)))
  (define q1  (choose-letter))
  (define q2  (choose-letter))
  (define q3  (choose-letter))
  (define q4  (choose-letter))
  (define q5  (choose-letter))
  (define q6  (choose-letter))
  (define q7  (choose-letter))
  (define q8  (choose-letter))
  (define q9  (choose-letter))
  (define q10 (choose-letter))
  ;; 1. The first question whose answer is b is question (a) 2;
  ;;    (b) 3; (c) 4; (d) 5; (e) 6.
  (assert (eq? q1 (cond [(eq? q2 'b) 'a]
                        [(eq? q3 'b) 'b]
                        [(eq? q4 'b) 'c]
                        [(eq? q5 'b) 'd]
                        [(eq? q6 'b) 'e]
                        [else (assert #f)])))
  ;; 2. The only two consecutive questions with identical answers
  ;;    are questions (a) 2 and 3; (b) 3 and 4; (c) 4 and 5; (d) 5
  ;;    and 6; (e) 6 and 7.
  (define all (list q1 q2 q3 q4 q5 q6 q7 q8 q9 q10))
  (define (count-same-consecutive l)
    (define (loop x l n)
      (if (null? l)
        n
        (loop (first l) (rest l)
              (if (eq? x (first l)) (add1 n) n))))
    (loop (first l) (rest l) 0))
  (assert (eq? q2 (cond [(eq? q2 q3) 'a]
                        [(eq? q3 q4) 'b]
                        [(eq? q4 q5) 'c]
                        [(eq? q5 q6) 'd]
                        [(eq? q6 q7) 'e]
                        [else (assert #f)])))
  (assert (= 1 (count-same-consecutive all))) ; exactly one
  ;; 3. The answer to this question is the same as the answer to
  ;;    question (a) 1; (b) 2; (c) 4; (d) 7; (e) 6.
  (assert (eq? q3 (cond [(eq? q3 q1) 'a]
                        [(eq? q3 q2) 'b]
                        [(eq? q3 q4) 'c]
                        [(eq? q3 q7) 'd]
                        [(eq? q3 q6) 'e]
                        [else (assert #f)])))
  ;; 4. The number of questions with the answer a is (a) 0; (b) 1;
  ;;    (c) 2; (d) 3; (e) 4.
  (define (count x l)
    (define (loop l n)
      (if (null? l)
        n
        (loop (rest l) (if (eq? x (first l)) (add1 n) n))))
    (loop l 0))
  (define num-of-a (count 'a all))
  (define num-of-b (count 'b all))
  (define num-of-c (count 'c all))
  (define num-of-d (count 'd all))
  (define num-of-e (count 'e all))
  (assert (eq? q4 (case num-of-a
                    [(0) 'a]
                    [(1) 'b]
                    [(2) 'c]
                    [(3) 'd]
                    [(4) 'e]
                    [else (assert #f)])))
  ;; 5. The answer to this question is the same as the answer to
  ;;    question (a) 10; (b) 9; (c) 8; (d) 7; (e) 6.
  (assert (eq? q5 (cond [(eq? q5 q10) 'a]
                        [(eq? q5 q9) 'b]
                        [(eq? q5 q8) 'c]
                        [(eq? q5 q7) 'd]
                        [(eq? q5 q6) 'e]
                        [else (assert #f)])))
  ;; 6. The number of questions with answer a equals the number of
  ;;    questions with answer (a) b; (b) c; (c) d; (d) e; (e) none
  ;;    of the above.
  (assert (eq? q6 (cond [(= num-of-a num-of-b) 'a]
                        [(= num-of-a num-of-c) 'b]
                        [(= num-of-a num-of-d) 'c]
                        [(= num-of-a num-of-e) 'd]
                        [else 'e])))
  ;; 7. Alphabetically, the answer to this question and the answer
  ;;    to the following question are (a) 4 apart; (b) 3 apart; (c)
  ;;    2 apart; (d) 1 apart; (e) the same.
  (define (choice->integer x)
    (case x [(a) 1] [(b) 2] [(c) 3] [(d) 4] [(e) 5]))
  (define (distance x y)
    (if (eq? x y)
      0
      (abs (- (choice->integer x) (choice->integer y)))))
  (assert (eq? q7 (case (distance q7 q8)
                    [(4) 'a]
                    [(3) 'b]
                    [(2) 'c]
                    [(1) 'd]
                    [(0) 'e]
                    [else (assert #f)])))
  ;; 8. The number of questions whose answers are vowels is (a) 2;
  ;;    (b) 3; (c) 4; (d) 5; (e) 6.
  (assert (eq? q8 (case (+ num-of-a num-of-e)
                    [(2) 'a]
                    [(3) 'b]
                    [(4) 'c]
                    [(5) 'd]
                    [(6) 'e]
                    [else (assert #f)])))
  ;; 9. The number of questions whose answer is a consonant is (a) a
  ;;    prime; (b) a factorial; (c) a square; (d) a cube; (e)
  ;;    divisible by 5.
  (assert (eq? q9 (case (+ num-of-b num-of-c num-of-d)
                    [(2 3 5 7) 'a]
                    [(1 2 6)  'b]
                    [(0 1 4 9) 'c]
                    [(0 1 8)  'd]
                    [(0 5 10)  'e]
                    [else (assert #f)])))
  ;; 10. The answer to this question is (a) a; (b) b; (c) c; (d) d;
  ;;    (e) e.
  (assert (eq? q10 q10)) ; (note: does nothing...)
  ;; The solution should be: (c d e b e e d c b a)
  all)

Note that the solution is simple because of the freedom we get with continuations: the search is not a sophisticated one, but we’re free to introduce ambiguity points anywhere that fits, and mix assertions with other code without worrying about control flow (as you would in an implementation that uses explicit loops). On the other hand, it is not too efficient since it uses a naive search strategy. (This could be improved somewhat by deferring ambiguous points, for example, don’t assign q7, q8, q9, and q10 before the first question; but much of the cost comes from the strategy for implementing continuations in Racket, which makes capturing continuations a relatively expensive operation.)

When we started out with the modified loop, we had a representation of an arbitrary natural number — but with the move to lists of choices we lost the ability to deal with such infinite choices. Getting it back is simple: delay the evaluation of the amb expressions. We can do that by switching to a list of thunks instead. The change in the code is in the result: just return the result of calling choice instead of returning it directly. We can then rename amb to amb/thunks and reimplement amb as a macro that wraps all of its sub-forms in thunks.

(define (amb/thunks choices)
  (let/cc k (set! failures (cons k failures)))
  (if (pair? choices)
    (let ([choice (first choices)])
      (set! choices (rest choices))
      (choice))                    ;*** call the choice thunk
    (begin (set! failures (rest failures))
          (fail))))

(define-syntax-rule (amb E ...)
  (amb/thunks (list (lambda () E) ...)))

With this, we can implement code that computes choices rather than having them listed:

(define (integers-between n m)
  (assert (<= n m))
  (amb n (integers-between (add1 n) m)))

or even ones that are infinite:

(define (integers-from n)
  (amb n (integers-from (add1 n))))

As with any infinite sequence, there are traps to avoid. In this case, trying to write code that can find any Pythagorean triplet as:

(collect 7
  (let ([a (integers-from 1)]
        [b (integers-from 1)]
        [c (integers-from 1)])
    (assert (= (* a a) (+ (* b b) (* c c))))
    (list a b c)))

will not work. The problem is that the search loop will keep incrementing c, and therefore will not find any solution. The search can work if only the top-most choice is infinite:

(collect 7
  (let* ([a (integers-from 1)]
        [b (integers-between 1 a)]
        [c (integers-between 1 a)])
    (assert (= (* a a) (+ (* b b) (* c c))))
    (list a b c)))

The complete code follows:


amb.rkt
;; The ambiguous operator and related utilities

#lang racket

(define failures null)

(define (fail)
  (if (pair? failures)
    ((first failures))
    (error "no more choices!")))

(define (amb/thunks choices)
  (let/cc k (set! failures (cons k failures)))
  (if (pair? choices)
    (let ([choice (first choices)])
      (set! choices (rest choices))
      (choice))
    (begin (set! failures (rest failures))
          (fail))))

(define-syntax-rule (amb E ...)
  (amb/thunks (list (lambda () E) ...)))

(define (assert condition)
  (unless condition (fail)))

(define (integers-between n m)
  (assert (<= n m))
  (amb n (integers-between (add1 n) m)))

(define (integers-from n)
  (amb n (integers-from (add1 n))))

(define (collect/thunk n thunk)
  (define results null)
  (let/cc too-few
    (set! failures (list too-few))
    (define result (thunk))
    (set! results (cons result results))
    (set! n (sub1 n))
    (unless (zero? n) (fail)))
  (set! failures null)
  (reverse results))

(define-syntax collect
  (syntax-rules ()
    ;; collect N results
    [(_ N E) (collect/thunk N (lambda () E))]
    ;; collect all results
    [(_ E) (collect/thunk -1 (lambda () E))]))

As a bonus, the code includes a collect tool that can be used to collect a number of results — it uses fail to iterate until a sufficient number of values is collected. A simple version is:

(define (collect/thunk n thunk)
  (define results null)
  (define result (thunk))
  (set! results (cons result results))
  (set! n (sub1 n))
  (unless (zero? n) (fail))
  (reverse results))

(Question: why does this code use mutation to collect the results?)

But since this might run into a premature failure, the actual version in the code installs its own failure continuation that simply aborts the collection loop. To try it out:

(collect (* (integers-between 1 3) (integers-between 1 5)))

extra Generators (and Producers)

Another popular facility that is related to continuations is generators. The idea is to split code into separate “producers” and “consumers”, where the computation is interleaved between the two. This simplifies some notoriously difficult problems. It is also a twist on the idea of co-routines, where two functions transfer control back and forth as needed. (Co-routines can be developed further into a “cooperative threading” system, but we will not cover that here.)

A classical example that we have mentioned previously is the “same fringe” problem. One of the easy solutions that we talked about was to run two processes that spit out the tree leaves, and a third process that grabs both outputs as they come and compares them. Using a lazy language allowed a very similar solution, where the two processes are essentially represented as two lazy lists. But with continuations we can find a solution that works in a strict language too, and in fact, one that is very close to the two processes metaphor.

The fact that continuations can support such a solution shouldn’t be surprising: as with the kind of server-client interactions that we’ve seen with the web language, and as with the amb tricks, the main theme is the same — the idea of suspending computation. (Intuitively, this also explains why a lazy language is related: it is essentially making all computations suspendable in a sense.)

To implement generators, we begin with a simple piece of code that we eventually want to use:

(define (producer)
  (yield 1)
  (yield 2)
  (yield 3))

where yield is expected to behave similarly to a return — it should make the function return 1 when called, and then somehow return 2 and 3 on subsequent calls. To make it easier to develop, we’ll make yield an argument to the producer:

(define (producer yield)
  (yield 1)
  (yield 2)
  (yield 3))

To use this producer, we need to find a proper value to call it with. Sending it an identity function, (lambda (x) x), is clearly not going to work: it will make all the yields execute on the first call, returning the last value. Instead, we need some way to abort the computation on the first yield. This, of course, can be done with a continuation, which we should send as the value of the yield argument. And indeed,

> (let/cc k (producer k))
1

returns 1 as we want. But if we use this expression again, we get more 1s as results:

> (let/cc k (producer k))
1
> (let/cc k (producer k))
1

The problem is obvious: our producer starts every time from scratch, always sending the first value to the given continuation. Instead, we need to make it somehow save where it stopped — its own continuation — and on subsequent calls it should resume from that point. We start by adding a resume continuation to save our position in:

(define (producer yield)
  (define resume #f)
  (if (not resume)  ; we just started, so no resume yet
    (begin (yield 1)
          (yield 2)
          (yield 3))
    (resume 'blah))) ; we have a resume, use it

Next, we need to make it so that each use of yield will save its continuation as the place to resume from:

(define (producer yield)
  (define resume #f)
  (if (not resume)
    (begin (let/cc k (set! resume k) (yield 1))
          (let/cc k (set! resume k) (yield 2))
          (let/cc k (set! resume k) (yield 3)))
    (resume 'blah)))

But this is still broken in an obvious way: every time we invoke this function, we define a new local resume which is always #f, leaving us with the same behavior. We need resume to persist across calls — which we can get by “pulling it out” using a let:

(define producer
  (let ([resume #f])
    (lambda (yield)
      (if (not resume)
        (begin (let/cc k (set! resume k) (yield 1))
              (let/cc k (set! resume k) (yield 2))
              (let/cc k (set! resume k) (yield 3)))
        (resume 'blah)))))

And this actually works:

> (let/cc k (producer k))
1
> (let/cc k (producer k))
2
> (let/cc k (producer k))
3

(Tracing how it works is a good exercise.)

Before we continue, we’ll clean things up a little. First, to make it easier to get values from the producer, we can write a little helper:

(define (get producer)
  (let/cc k (producer k)))

Next, we can define a local helper inside the producer to improve it in a similar way, making a yield helper that wraps the raw-yield input continuation (and also flipping the condition):

(define producer
  (let ([resume #f])
    (lambda (raw-yield)
      (define (yield value)
        (let/cc k (set! resume k) (raw-yield value)))
      (if resume
        (resume 'blah)
        (begin (yield 1)
              (yield 2)
              (yield 3))))))

And we can further abstract out the general producer code from the specific 1-2-3 producer that we started with. The complete code is now:

(define (make-producer producer)
  (let ([resume #f])
    (lambda (raw-yield)
      (define (yield value)
        (let/cc k (set! resume k) (raw-yield value)))
      (if resume
        (resume 'blah)
        (producer yield)))))

(define (get producer)
  (let/cc k (producer k)))

(define producer
  (make-producer (lambda (yield)
                  (yield 1)
                  (yield 2)
                  (yield 3))))

When we now evaluate (get producer) three times, we get back the three values in the correct order. But there is a subtle bug here; first try this (after re-running!):

> (list (get producer) (get producer))

It seems that this is stuck in an infinite loop. To see where the problem is, re-run to reset the producer; then we can see the following interaction:

> (* 10 (get producer))
10
> (* 100 (get producer))
20
> (* 12345 (get producer))
30

This looks weird… Here’s a more clarifying example:

> (list (get producer))
'(1)
> (get producer)
'(2)
> (get producer)
'(3)

Can you see what’s wrong now? It seems that all three invocations of the producer use the same continuation — the first one, specifically, the (list <*>) continuation. This also explains why we run into an infinite loop with (list (get producer) (get producer)) — the first continuation is:

(list <*> (get producer))

so when we get the first 1 result we plug it in and proceed to evaluate the second (get producer), but that re-invokes the first continuation again, getting into an infinite loop. We need to look closely at our make-producer to see the problem:

(define (make-producer producer)
  (let ([resume #f])
    (lambda (raw-yield)
      (define (yield value)
        (let/cc k (set! resume k) (raw-yield value)))
      (if resume
        (resume 'blah)
        (producer yield)))))

When (make-producer (lambda (yield) ...)) is first called, resume is initialized to #f, and the result is the (lambda (raw-yield) ...), which is bound to the global producer. Next, we call this function, and since resume is #f, we apply the producer on our yield — a closure that holds a reference to the raw-yield that we received, i.e., the continuation that was used in this first call. The problem is that on subsequent calls resume will contain a continuation that is then invoked, but this jumps back into that first closure with the original raw-yield, so instead of returning to the current calling context, we re-return to the first context — the same first continuation. The code can be restructured slightly to make this a little more obvious: push the yield definition into the only place it is used (the first call):

(define (make-producer producer)
  (let ([resume #f])
    (lambda (raw-yield)
      (if resume
        (resume 'blah)
        (let ([yield (lambda (value)
                      (let/cc k
                        (set! resume k)
                        (raw-yield value)))])
          (producer yield))))))

yield is not used elsewhere, so this code has exactly the same meaning as the previous version. You can see now that when the producer is first used, it gets a raw-yield continuation which is kept in a newly made closure — and even though the following calls have different continuations, we keep invoking the first one. These calls get new continuations as their raw-yield input, but they ignore them. It just happened that when we evaluated (get producer) three times at the REPL, all calls had essentially the same continuation (the P part of the REPL), so it seemed like things were working fine.

To fix this, we must avoid calling the same initial raw-yield every time: we must change it with each call so it is the right one. We can do this with another mutation — introduce another state variable that will refer to the correct raw-yield, and update it on every call to the producer. Here’s one way to do this:

(define (make-producer producer)
  (let ([resume #f]
        [return-to-caller #f])
    (lambda (raw-yield)
      (set! return-to-caller raw-yield)
      (if resume
        (resume 'blah)
        (let ([yield (lambda (value)
                      (let/cc k
                        (set! resume k)
                        (return-to-caller value)))])
          (producer yield))))))

Using this, we get well-behaved results:

> (list (get producer))
'(1)
> (* 8 (get producer))
16
> (get producer)
3

or (again, after restarting the producer by re-running the code):

> (list (get producer) (get producer) (get producer))
'(1 2 3)

Side-note: a different way to achieve this is to realize that when we invoke resume, we’re calling the continuation that was captured by the let/cc expression. Currently, we’re sending just 'blah to that continuation, but we could send raw-yield there instead. With that, the result of the let/cc expression can be used to update the return-to-caller state variable. (This is how PLAI solves this problem.)

(define (make-producer producer)
  (let ([resume #f])
    (lambda (raw-yield)
      (define return-to-caller raw-yield)
      (define (yield value)
        (set! return-to-caller
              (let/cc k
                (set! resume k)
                (return-to-caller value))))
      (if resume
        (resume raw-yield)
        (producer yield)))))

Continuing with our previous code, and getting the yield back into a more convenient definition form, we have this complete code:


producer.rkt
;; An implementation of producer functions

#lang racket

(define (make-producer producer)
  (let ([resume #f]
        [return-to-caller #f])
    (lambda (raw-yield)
      (define (yield value)
        (let/cc k (set! resume k) (return-to-caller value)))
      (set! return-to-caller raw-yield)
      (if resume
        (resume 'blah)
        (producer yield)))))

(define (get producer)
  (let/cc k (producer k)))

(define producer
  (make-producer (lambda (yield)
                  (yield 1)
                  (yield 2)
                  (yield 3))))

There is still a small problem with this code:

> (list (get producer) (get producer) (get producer))
'(1 2 3)
> (get producer)
;; infinite loop

Tracking down this problem is another good exercise, and finding a solution is easy (for example, throwing an error when the producer is exhausted, returning 'done, or returning the return value of the producer function). A sketch of one such fix follows.
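
For example, here is a sketch of the 'done variant (an assumed modification of the make-producer above): once the producer body finishes, all further calls just return 'done.

(define (make-producer producer)
  (let ([resume #f]
        [return-to-caller #f])
    (lambda (raw-yield)
      (define (yield value)
        (let/cc k (set! resume k) (return-to-caller value)))
      (set! return-to-caller raw-yield)
      (if resume
        (resume 'blah)
        (begin (producer yield)
               ;; the producer body is done: return 'done to the
               ;; current caller, and keep doing so on further calls
               (set! resume (lambda (ignored) (return-to-caller 'done)))
               (return-to-caller 'done))))))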

extra Delimited Continuations

While the continuations that we have seen are a useful tool, they are often “too global” — they capture the complete computation context. But in many cases we don’t want that; instead, we want to capture a specific context. In fact, this is exactly why the producer code got complicated: we needed to keep capturing the return-to-caller continuation to make it possible to return to the correct context rather than re-invoking the initial (and wrong) context.

Additional work on continuations resulted in a feature known as “delimited continuations”. These kinds of continuations are more convenient in that they don’t capture the complete context — just a portion of it, up to a specific point. To see how this works, we’ll restart with a relatively simple producer definition:

(define producer
  (let ()
    (define (cont)
      (let/cc ret
        (define (yield value)
          (let/cc k (set! cont k) (ret value)))
        (yield 1)
        (yield 2)
        (yield 3)
        4))
    (define (generator) (cont))
    generator))

This producer is essentially the same as one that we’ve seen before: it seems to work in that it returns the desired values for every call:

> (producer)
1
> (producer)
2
> (producer)
3

But it fails in that it always returns to the initial context:

> (list (producer))
'(1)
> (+ 100 (producer))
'(2)
> (* "bogus" (producer))
'(3)

Fixing this will lead us down the same path we’ve just been through: the problem is that generator is essentially an indirection “trampoline” function that goes to whatever cont currently holds, and except for the initial value of cont the other values are continuations that are captured inside yield, meaning that the calls are all using the same ret continuation that was grabbed once, at the beginning. To fix it, we will need to re-capture a return continuation on every use of yield, which we can do by modifying the ret binding, giving us a working version:

(define producer
  (let ()
    (define (cont)
      (let/cc ret
        (define (yield value)
          (let/cc k
            (set! cont (lambda () (let/cc r (set! ret r) (k))))
            (ret value)))
        (yield 1)
        (yield 2)
        (yield 3)
        4))
    (define (generator) (cont))
    generator))

This pattern of grabbing the current continuation and then jumping to another — (let/cc k (set! cont k) (ret value)) — is pretty common, enough that there is a specific construct that does something similar: control. Translating the let/cc form to it produces:

(control k (set! cont ...) value)

A notable difference here is that we don’t use a ret continuation. Instead, another feature of the control form is that the value returns to a specific point back in the current computation context that is marked with a prompt. (Note that the control and prompt bindings are not included in the default racket language; we need to get them from a library: (require racket/control).) The fully translated code simply uses this prompt in place of the outer capture of the ret continuation:

(define producer
  (let ()
    (define (cont)
      (prompt
        (define (yield value)
          (control k
            (set! cont ???)
            value))
        (yield 1)
        (yield 2)
        (yield 3)
        4))
    (define (generator) (cont))
    generator))

We also need to translate the (lambda () (let/cc r (set! ret r) (k))) expression — but there is no ret to modify. Instead, we get the same effect with another use of prompt, which essentially modifies the implicitly used return continuation:

(define producer
  (let ()
    (define (cont)
      (prompt
        (define (yield value)
          (control k
            (set! cont (lambda () (prompt (k))))
            value))
        (yield 1)
        (yield 2)
        (yield 3)
        4))
    (define (generator) (cont))
    generator))

This looks like the previous version, but there’s an obvious advantage: since there is no ret binding that we need to maintain, we can pull out the yield definition to a more convenient place:

(define producer
  (let ()
    (define (yield value)
      (control k
        (set! cont (lambda () (prompt (k))))
        value))
    (define (cont)
      (prompt
        (yield 1)
        (yield 2)
        (yield 3)
        4))
    (define (generator) (cont))
    generator))

Note that this is an important change, since the producer machinery can now be abstracted into a make-producer function, as we’ve done before:

(define (make-producer producer)
  (define (yield value)
    (control k
      (set! cont (lambda () (prompt (k))))
      value))
  (define (cont) (prompt (producer yield)))
  (define (generator) (cont))
  generator)

(define producer
  (make-producer (lambda (yield)
                  (yield 1)
                  (yield 2)
                  (yield 3)
                  4)))

This is, again, a common pattern in such looping constructs — the continuation of the loop keeps re-installing the prompt, as we do in the thunk assigned to cont. There are two other operators, similar to control and prompt, that re-instate the point to return to automatically. Confusingly, they have completely different names: shift and reset. In the case of our code, we simply do the straightforward translation, and drop the extra wrapping step inside the value assigned to cont since that is done automatically. The resulting definition becomes even shorter now:

(define (make-producer producer)
  (define (yield value) (shift k (set! cont k) value))
  (define (cont) (reset (producer yield)))
  (define (generator) (cont))
  generator)

(Question: which set of forms is the more powerful one?)
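
To get a feel for these operators on their own, here are a few tiny examples (assuming (require racket/control); in such simple cases control/prompt and shift/reset behave the same, and the differences show up only in more complex uses):

(prompt (+ 1 (control k 10)))     ; => 10, the (+ 1 []) context is dropped
(prompt (+ 1 (control k (k 10)))) ; => 11, k stands for that context
(reset  (+ 1 (shift k 10)))       ; => 10
(reset  (+ 1 (shift k (k 10))))   ; => 11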

It even looks like this shift/reset version works reasonably well when the producer is exhausted:

> (list (producer) (producer) (producer) (producer) (producer))
'(1 2 3 4 4)

But the problem is still there, just a bit more subtle. We can see it if we add a side-effect:

(define producer
  (make-producer (lambda (yield)
                  (yield 1)
                  (yield 2)
                  (yield 3)
                  (printf "Hey!\n")
                  4)))

and now we get:

> (list (producer) (producer) (producer) (producer) (producer))
Hey!
Hey!
'(1 2 3 4 4)

This can be solved in the same way as we’ve discussed earlier — for example, grab the result value of the producer (which means that we get the value only after it’s exhausted), and then keep returning that value. A particularly easy way to do this is to set cont to a thunk that returns the value — since the resulting generator function simply invokes it, we get the desired behavior of returning the last value on further calls:

(define (make-producer producer)
  (define (yield value) (shift k (set! cont k) value))
  (define (cont)
    (reset (let ([retval (producer yield)])
            ;; we get here when the producer is done
            (set! cont (lambda () retval))
            retval)))
  (define (generator) (cont))
  generator)

(define producer
  (make-producer (lambda (yield)
                  (yield 1)
                  (yield 2)
                  (yield 3)
                  (printf "Hey!\n")
                  4)))

and now we get the improved behavior:

> (list (producer) (producer) (producer) (producer) (producer))
Hey!
'(1 2 3 4 4)

Continuation Conclusions

Continuations are often viewed as a feature that is too complicated to understand and/or too hard to implement. As a result, very few languages provide general first-class continuations. Yet, they are an extremely useful tool since they enable implementing new kinds of control operators as user-written libraries. The “user” part is important here: if you want to implement producers (or a convenient web-read, or an ambiguous operator, or any number of other uses) in a language that doesn’t have continuations, your options are very limited. You can ask for the new feature and wait for the language implementors to provide it, or you can CPS the relevant code (and the latter option is possible only if you have complete control over the whole source code that needs to be transformed). With continuations, as we have seen, it is not only possible to build such libraries — the resulting functionality behaves as if the language already has the desired feature built in. For example, Racket comes with a generator library that is very similar to Python generators — but in contrast to Python, it is implemented completely in user code. (In fact, the implementation is very close to the delimited continuations version that we’ve seen last.)
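
For example, here is a small demo of that library (see the Racket documentation for racket/generator for the full interface):

(require racket/generator)

(define g
  (generator ()
    (yield 1)
    (yield 2)
    (yield 3)))

(list (g) (g) (g)) ; => '(1 2 3)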

Obviously, in cases where you don’t have continuations and you need them (or rather when you need some functionality that is implementable via continuations), you will likely resort to the CPS approach, in some limited version. For example, the Racket documentation search page allows input to be typed while the search is happening.

This is a feature that by itself is not available in JavaScript — it is as if there are two threads running (one for the search and one to handle input), whereas JS is single-threaded by design. It was implemented by making the search code side-effect free, then CPS-ing it, and then mimicking threads: run the search for a bit, store its (manually built) continuation, handle possible new input, and resume the search via that continuation. A system that solves a similar problem in a very different way is node.js — a JavaScript-based server where all IO is done via functions that receive callback functions, resulting in a style of code that is essentially hand-written CPS. For example, it is similar in principle to writing code like:

;; copy "foo" to "tmp", read a line, delete "tmp", log the line
(copy-file "foo" "tmp"
  (lambda ()
    (read-line "tmp"
      (lambda (line)
        (delete-file "tmp"
          (lambda ()
            (log-line line
              (lambda ()
                (printf "All done.\n")))))))))

or a concrete node.js example — to swap two files, you could write:

function swap(path1, path2, callback) {
  fs.rename(path1, "temp-name",
    function() {
      fs.rename(path2, path1,
        function() {
          fs.rename("temp-name", path2, callback);
        });
    });
}

and if you want to follow the convention of providing a “convenient” synchronous version, you would also add:

function swapSync(path1, path2) {
  fs.renameSync(path1, "temp-name");
  fs.renameSync(path2, path1);
  fs.renameSync("temp-name", path2);
}

As we have seen in the web server application example, this style of programming tends to be “infectious”, where a function that deals with these callback-based functions will itself consume a callback —

;; abstract the above as a function
(define (safe-log-line in-file callback)
  (copy-file in-file "tmp"
    (lambda ()
      ... (log-line line callback))))

You should be able to see now what is happening here, without even mentioning the word “continuation” in the docs… See also this Node vs Apache video and read this extended JS rant. Quote:

No one ever for a second thought that a programmer would write actual code like that. And then Node came along and all of the sudden here we are pretending to be compiler back-ends. Where did we go wrong?

(Actually, JavaScript has gotten better with promises, and then even better with async/await — but these are relatively new, so it is still common to find libraries that provide both of the above versions (callback-based and synchronous), a third promise-based one, and even a fourth one using async. See for example the replace-in-file package on NPM.)

Finally, as mentioned a few times, there has been extensive research into many aspects of continuations. Different CPS approaches, different implementation strategies, a zoo-full of control operators, assigning types to continuation-related functions and connections between continuations and types, even connections between CPS and certain proof techniques. Some research is still going on, though not as hot as it was — but more importantly, many modern languages “discover” the utility of having continuations, sometimes in some more limited forms (eg, Python and JavaScript generators), and sometimes in full form (eg, Ruby’s callcc).

Sidenote: JavaScript: Continuations, Promises, and Async/Await

JavaScript has gone down a long road that is directly related to continuations. The motivating language feature that started all of this is the early decision to make the language single-threaded. This feature does make code easier to handle with respect to side effects of all kinds. For example, consider:

let x = 0;
x = x + 1;
console.log(`A. x = ${x}`);
x = x + 1;
console.log(`B. x = ${x}`);

In JS, this code is guaranteed to show exactly the two output lines, with the expected values of 1 and 2. This is not guaranteed in any language that has threads (or any form of parallelism), since other threads might kick in at any point, possibly producing more output or mutating the value of x.

For several years JS was used only as a lightweight scripting language for simple tasks on web pages, and life was simple thanks to the single-threaded nature of the language. Note that the environment around the language (= the browser) was quite thread-heavy from very early on, with different threads used for text layout, rendering, loading, user interactions, etc. But then came Node, and JS was used for more traditional purposes, with a web server being one of the first use cases. This meant that there was a need for some way to handle “multiple threads” — it’s impractical to wait for one server-side interaction to be done before serving the next.

This was addressed by designing Node as a program that makes heavy use of callback functions and IO events. In cases where you really want a linear sequence of operations you can use *Sync functions, such as the one we’ve seen above:

function swapSync(path1, path2) {
  fs.renameSync(path1, "temp-name");
  fs.renameSync(path2, path1);
  fs.renameSync("temp-name", path2);
}

But you can also use the non-Sync version, which allows switching between different active jobs — translating the above to (ignoring errors for simplicity):

function swap(path1, path2) {
  fs.rename(path1, "temp-name", () => {
    fs.rename(path2, path1, () => {
      fs.rename("temp-name", path2);
    });
  });
}

This could be better in some cases — for example, if the filesystem is a remote-mounted directory, each rename will likely take noticeable time. Using callbacks means that after firing off each rename request, the whole process is free to do any other work, and it resumes with the next operation only when the pending request is done.

But there is a major problem with this code: we gain efficiency because the main process is free to do other work while the three operations are pending, but if you call this function to swap two files, you likely have some code that needs to run after it — code that expects the renaming to have already happened:

swap("foo", "bar");
... more code ...

The problem is that the function call returns immediately, having only registered the first rename operation. The first rename might conceivably complete very quickly, but even then its callback is guaranteed not to have run yet (again, since the core language is single-threaded), making this code completely broken.

This means that for such uses we must switch to using callbacks in the main function too:

function swap(path1, path2, cb) {
  fs.rename(path1, "temp-name", () => {
    fs.rename(path2, path1, () => {
      fs.rename("temp-name", path2, () =>
        cb()); // or just use cb directly
    });
  });
}

And the uses too:

swap("foo", "bar", () => {
  ... more code ...
});

This is the familiar manual CPS-ing of code, which for many years was something that all JS programmers needed to do.

As we’ve seen, the problem here is that writing such code is difficult, and working with it when it needs to be extended, debugged, etc. is even more so. Eventually, the JS world settled on using “promises” to simplify all of this: the idea is that instead of building “towers of callbacks”, we abstract each step as a promise, and the steps are chained together using the .then method:

function swapPromise(path1, path2) {
  return fsPromises.rename(path1, "temp-name")
          .then(() => fsPromises.rename(path2, path1))
          .then(() => fsPromises.rename("temp-name", path2));
}

Note that here too, the changes in the function body are reflected in how the whole function is used: the body uses promises, and therefore the whole function returns a promise.
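
For example, a caller that mirrors the callback version from above would chain onto the returned promise (hypothetical usage code):

swapPromise("foo", "bar").then(() => {
  // ... more code ...
});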

This is a little better in that we don’t suffer from nested code towers, but it’s still not too convenient. The last step in this evolution was the introduction of async/await — language features that declare a promise-ified function, which can wait for promises to resolve in more-or-less plain-looking code:

async function swap(path1, path2) {
  await fsPromises.rename(path1, "temp-name");
  await fsPromises.rename(path2, path1);
  await fsPromises.rename("temp-name", path2);
}

(The implementation of this feature is interesting: it relies on using generator functions, and using the fact that you can send values into a generator when you resume it — and all of this is, roughly speaking, used to implement a threading system.)
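
To make that remark a bit more concrete, here is a minimal sketch (not the actual engine implementation) of a driver that runs a generator which yields promises, feeding each resolved value back into it; this is roughly the job that async/await does for you:

function run(genFunc, ...args) {
  const gen = genFunc(...args);
  return new Promise((resolve, reject) => {
    function step(op, arg) {
      let result;
      try { result = gen[op](arg); }     // resume the generator
      catch (e) { reject(e); return; }   // it threw: the whole run fails
      if (result.done) { resolve(result.value); return; }
      Promise.resolve(result.value).then(
        v => step("next", v),            // send the resolved value back in
        e => step("throw", e));          // or throw the rejection back in
    }
    step("next", undefined);
  });
}

// `yield` now plays the role of `await`:
// run(function* () {
//   yield fsPromises.rename(path1, "temp-name");
//   yield fsPromises.rename(path2, path1);
//   yield fsPromises.rename("temp-name", path2);
// });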

This looks almost like normal code, but note that in doing so we lose the simplicity of a single-threaded language. Each use of await is a point where control can switch to other code, which might introduce unexpected side effects. This does not really turn JS into a properly multi-threaded language like Racket: it is rather a language with “cooperative threads”. You still have the advantage that code that does not use await executes sequentially and uninterrupted.
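
To see what is lost, compare with the counter example from above: assuming some promise-returning somePromise function (a hypothetical name), other code can now run, and possibly mutate x, at the await point:

let x = 0;

async function bump() {
  x = x + 1;
  console.log(`A. x = ${x}`);
  await somePromise();          // control may switch to other code here...
  x = x + 1;
  console.log(`B. x = ${x}`);   // ...so this is no longer guaranteed to print 2
}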

Whether it is worth complicating things this much just to be able to write such plain-looking (seemingly non-asynchronous) code is a question…