There is an inherent problem whenever macros are used, in any form and in any language (even in CPP): you must remember that you are playing with expressions, not with values — which is why this is problematic:
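To make the danger concrete, here is a minimal sketch (the double macro and its use are made-up illustrations, written with Racket's compatibility/defmacro library):

```racket
#lang racket
(require compatibility/defmacro)

;; The transformer receives the argument *expression* and splices it
;; into the output twice -- so whatever it does happens twice:
(define-macro (double x)
  (list '+ x x))

;; Expands to (+ (begin (printf "evaluating!\n") 5)
;;               (begin (printf "evaluating!\n") 5))
;; so "evaluating!" is printed twice.
(double (begin (printf "evaluating!\n") 5))
```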
And the reason for this should be clear. The standard solution for this is to save the value as a binding — so back to the drawing board, we want this transformation instead:
(Note that we would have the same problem in the version that used simple functions and thunks.)
And to write the new code:
and this works like we want it to.
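A sketch of a macro fixed along these lines (again define-macro style, with illustrative names):

```racket
#lang racket
(require compatibility/defmacro)

;; Save the argument's value in a binding, then use the binding twice,
;; so the input expression is evaluated only once:
(define-macro (double x)
  (list 'let (list (list 'val x))
        (list '+ 'val 'val)))

;; Now "evaluating!" is printed only once, and the result is 10.
(double (begin (printf "evaluating!\n") 5))
```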
As can be seen, writing even a simple macro doesn’t look too good — so what if we want to write a more complicated one? A solution is to look at the above macro and realize that it almost looks like the code we want: we basically want to return a list of a certain fixed shape, with some parts filled in by the given arguments. Something like:
if we had a way to make the <...>s not be a fixed part of the result, but instead the values that the transformation function received. (Remember that the < and > are just part of the name — no magic, just something to make these names stand out.) This is related to notational problems that logicians and philosophers have struggled with for centuries. One solution that Lisp uses for this: instead of a quote, use a backquote (called quasiquote in Racket), which is almost like quote, except that you can unquote parts of the value inside. This is done with a “,” (comma). Using this, the above code can be written like this:
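For instance, the kind of macro shown above might be written with quasiquote along these lines (a sketch with illustrative names):

```racket
#lang racket
(require compatibility/defmacro)

;; Quasiquote builds the same list as before; the comma unquotes `x`
;; so the actual input expression is inserted at that spot:
(define-macro (double x)
  `(let ([val ,x])
     (+ val val)))
```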
You should be able to guess what this problem is about. The basic problem with these macros is that they cannot be used reliably: a name produced by the macro can shadow a name that is in a completely different place, thereby destroying lexical scope. For example, in:
the val in the macro shadows the use of this name in the code above. One way to solve this is to write macros that look like this:
or:
or (this is actually similar to using UUIDs):
This is really not too good either, because such obscure variables tend to clobber each other too, in all kinds of unexpected ways.
Another way is to have a function that gives you a different variable name every time you call it:
but this is not safe either, since there might still be clashes of these names (e.g., if they’re using a counter that is specific to the current process, and you start a new process and load code that was generated earlier). The Lisp solution for this (which Racket’s gensym function implements as well) is to use uninterned symbols — symbols that have their own identity, much like strings, and even if two have the same name, they are not eq?.
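A quick sketch of gensym at work, and of using it to avoid the capture (double is again an illustrative name):

```racket
#lang racket
(require compatibility/defmacro)

;; Two gensyms are never eq?, even if their printed names look alike:
(eq? (gensym 'val) (gensym 'val)) ; => #f

;; Use a fresh, uninterned symbol for the macro's internal binding:
(define-macro (double x)
  (let ([val (gensym 'val)])      ; a new name on every expansion
    `(let ([,val ,x])
       (+ ,val ,val))))

;; The user's own `val` is no longer shadowed:
(let ([val 5]) (double val)) ; => 10
```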
Note also that there is the mirror side of this problem — what happens if we try this:
? This leads to capture in the other direction — the code above shadows the if binding that the macro produces.
Some Schemes will allow something like
but this is a hack, since the macro outputs something that is not a pure S-expression (and it cannot work for a syntactic keyword like if). Specifically, it is not possible to write the resulting expression out (to a compiled file, for example).
We will ignore this for a moment.
Another problem is the manageability of these transformations. Quasiquote gets us a long way, but it is still insufficient. For example, let’s write a Racket bind that uses lambda for binding. The transformation we now want is:
The code for this looks like this:
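A sketch of what such a define-macro version might look like, showing the kind of list surgery involved (assuming inputs of the shape (bind ((x 1) (y 2)) body)):

```racket
#lang racket
(require compatibility/defmacro)

;; (bind ((x 1) (y 2)) body) --> ((lambda (x y) body) 1 2)
(define-macro (bind bindings body)
  (cons (list 'lambda (map car bindings) body)  ; the names, via car
        (map cadr bindings)))                   ; the expressions, via cadr

(bind ((x 1) (y 2)) (+ x y)) ; => 3
```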
This already has a lot more pitfalls. There are lists and conses that you should be careful of, there are maps, and there are cadrs that would be catastrophic if you used cars instead. The quasiquote syntax is a little more capable — you can write this:
where “,@” is similar to “,”, except that the unquoted expression should evaluate to a list that is spliced into its surrounding list (that is, its own parens are removed and its elements become elements of the containing list). But this is still not as readable as the transformation you actually want, and worse, it does not check that the input syntax is valid, which can lead to very confusing errors.
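The quasiquote-and-splice version might be sketched like this:

```racket
#lang racket
(require compatibility/defmacro)

(define-macro (bind bindings body)
  `((lambda ,(map car bindings) ,body)
    ,@(map cadr bindings)))   ; ,@ splices the expressions into the call

(bind ((x 1) (y 2)) (+ x y)) ; => 3
```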
This is yet another problem — if there is an error in the resulting syntax, the error will be reported in terms of this result rather than the syntax of the code. There is no easy way to tell where these errors are coming from. For example, say that we make a common mistake: forget the “@” character in the above macro:
Now, someone else (the client of this macro), tries to use it:
Yes? Now what? Debugging this is difficult, since in most cases it is not even clear that you were using a macro, and in any case the macro comes from code that you have no knowledge of and no control over. [The problem in this specific case is that the macro expands the code to:
so Racket will try to use 1 as a function and throw a runtime error.]
Adding error checking to the macro results in this code:
Such checks are very important, yet writing this is extremely tedious.
Scheme, Racket included (and much extended), has a solution that is better than defmacro: it has define-syntax and syntax-rules. First of all, define-syntax is used to create the “magical connection” between user code and some macro transformation code that does some rewriting. This definition:
makes foo be a special syntax that, when used in the head of an expression, will lead to transforming the expression itself, where the result of this transformation is what gets used instead of the original expression. The “...something...” in this code fragment should be a transformation function — one that consumes the expression that is to be transformed, and returns the new expression to run.
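As a rough sketch of that shape, here is a made-up transformer that rewrites (foo E) into (add1 E), using low-level syntax/datum conversions rather than the more convenient tools introduced next:

```racket
#lang racket

(define-syntax foo
  (lambda (stx)                        ; consumes the whole (foo ...) expression
    (let ([expr (syntax->datum stx)])  ; view it as a plain s-expression
      (datum->syntax stx               ; and return the rewritten expression
                     (list 'add1 (cadr expr))))))

(foo 5) ; => 6
```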
Next, syntax-rules is used to create such a transformation in an easy way. The idea is that what we thought to be an informal specification of such rewrites, for example:
and
can actually be formalized by automatically creating a syntax transformation function from these rule specifications. (Note that this example has round parentheses so we don’t fall into the illusion that square brackets are different: the resulting transformation would be the same.) The main point is to view the left hand side as a pattern that can match some forms of syntax, and the right hand side as producing an output that can use some matched patterns.
syntax-rules is used with such rewrite specifications, and it produces the corresponding transformation function. For example, this:
evaluates to a function that is somewhat similar to:
but match is a little closer, since it uses similar input patterns:
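The analogy might be sketched with a plain function over S-expressions (illustrative only — a real syntax-rules transformer works on syntax objects, not raw lists):

```racket
#lang racket

;; (syntax-rules () [(_ x) (add1 x)]) behaves roughly like:
(define (rewrite expr)
  (match expr
    [(list _ x) (list 'add1 x)]))

(rewrite '(foo 5)) ; => '(add1 5)
```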
Such transformations are used in a define-syntax expression to tie the transformer back into the compiler by hooking it onto a specific keyword.
You can now appreciate how all this works when you see how easy it is to define macros that are very tedious with define-macro. For example, the above bind:
and let* with its two rules:
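Sketches of what these definitions might look like (my-let* stands in for let*, to avoid clashing with the built-in form):

```racket
#lang racket

;; (bind ([x 1] [y 2]) body) --> ((lambda (x y) body) 1 2)
(define-syntax bind
  (syntax-rules ()
    [(bind ([x e] ...) body)
     ((lambda (x ...) body) e ...)]))

;; let* as two rules: empty bindings, and one binding peeled off
(define-syntax my-let*
  (syntax-rules ()
    [(my-let* () body) body]
    [(my-let* ([x e] more ...) body)
     (let ([x e]) (my-let* (more ...) body))]))

(bind ([x 1] [y 2]) (+ x y))          ; => 3
(my-let* ([x 1] [y (+ x 1)]) (+ x y)) ; => 3
```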
These transformations are so convenient to follow that Scheme specifications (and reference manuals) describe forms by specifying their definitions. For example, the Scheme report specifies let* as a “derived form”, and explains its semantics via this transformation.
The input patterns in these rules are similar to match patterns, and the output patterns assemble an S-expression using the matched parts of the input. For example:
does the thing you expect it to do — it matches a parenthesized form with two sub-forms, and produces a form with the two sub-forms swapped. The rules for “...” on the left side are similar to match, as we have seen many times; on the right side it is used to splice a matched sequence into the resulting expression, and it is required for sequence-matched pattern variables. For example, here is a list of some patterns, with a description of how each matches an input when used on the left side of a transformation rule, and how it produces an output expression when it appears on the right side:
(x ...)
LHS: matches a parenthesized sequence of zero or more expressions; the x pattern variable is bound to this whole sequence.
  match analogy: (match ? [(list x ...) ?])
RHS: when x is bound to a sequence, this produces a parenthesized expression containing that sequence.
  match analogy: (match ? [(list x ...) x])

(x1 x2 ...)
LHS: matches a parenthesized sequence of one or more expressions; the first is bound to x1, and the rest of the sequence is bound to x2.
  match analogy: (match ? [(list x1 x2 ...) ?])
RHS: produces a parenthesized expression that contains the expression bound to x1 first, then all of the expressions in the sequence that x2 is bound to.
  match analogy: (match ? [(list x1 x2 ...) (cons x1 x2)])

((x y) ...)
LHS: matches a parenthesized sequence of 2-form parenthesized sequences, binding x to all the first forms of these, and y to all the second forms (so they will both have the same number of items).
  match analogy: (match ? [(list (list x y) ...) ?])
RHS: produces a list of forms where each one is made of consecutive forms in the x sequence and consecutive forms in the y sequence (both sequences should have the same number of elements).
  match analogy: (match ? [(list (list x y) ...) (map (lambda (x y) (list x y)) x y)])
Some examples of transformations for which writing the code manually would be very tedious:
((x y) ...) --> ((y x) ...)
Matches a sequence of 2-item sequences, produces a similar sequence with all of the nested 2-item sequences flipped.
((x y) ...) --> ((x ...) (y ...))
Matches a similar sequence, and produces a sequence of two sequences, one of all the first items, and one of the second ones.
((x y ...) ...) --> ((y ... x) ...)
Similar to the first example, but the nested sequences can have one or more items in them, and the nested sequences in the result have the first element moved to the end. Note how the ... are nested: the rule is that for each pattern variable you count how many ...s apply to it, and that tells you what it holds — and you have to use the same ... nesting depth for it in the output template.
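For instance, the first and third of these might be written as follows (flip and rotate are made-up names):

```racket
#lang racket

;; ((x y) ...) --> ((y x) ...)
(define-syntax flip
  (syntax-rules ()
    [(flip ((x y) ...)) '((y x) ...)]))

;; ((x y ...) ...) --> ((y ... x) ...)
(define-syntax rotate
  (syntax-rules ()
    [(rotate ((x y ...) ...)) '((y ... x) ...)]))

(flip ((1 2) (3 4)))     ; => '((2 1) (4 3))
(rotate ((1 2 3) (4 5))) ; => '((2 3 1) (5 4))
```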
This solves the problem of writing the code easily — no need for list, cons, etc., not even for quasiquotes and tedious syntax massaging. But there were other problems. First, there was the problem of bad scope, which was previously solved with a gensym:
Translating this to define-syntax and syntax-rules, we get something simpler:
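A sketch of such a hygienic version (double is an illustrative name):

```racket
#lang racket

(define-syntax double
  (syntax-rules ()
    [(double x) (let ([val x]) (+ val val))]))

;; No gensym needed: the macro's `val` cannot capture the user's `val`.
(let ([val 5]) (double val)) ; => 10
```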
Even simpler, Racket has a macro called define-syntax-rule that expands to a define-syntax combined with a syntax-rules — using it, we can write:
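For example, a swap! macro might be written as follows (a common textbook example, not taken from the text above):

```racket
#lang racket

(define-syntax-rule (swap! a b)
  (let ([tmp a])
    (set! a b)
    (set! b tmp)))

;; Works even when the user's own names collide with `tmp`:
(define tmp 1)
(define other 2)
(swap! tmp other)
(list tmp other) ; => '(2 1)
```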
This looks like a function — but you must remember that it is a transformation-rule specification, which is a very different beast, as we’ll see.
The main thing here is that Racket takes care of making bindings follow the lexical scope rules:
works fine. In fact, it fully respects the scoping rules: there is no confusion between bindings that the macro introduces and bindings that are introduced where the macro is used. (Think about different colors for bindings introduced by the macro and for other bindings.) It’s also fine with many cases that are much harder to cope with otherwise (e.g., cases where there is no gensym magic solution):
or combining both:
(You can try DrRacket’s macro debugger to see how the various bindings get colored differently.)
define-macro advocates will claim that it is difficult to make a macro that intentionally plants a known identifier. Think about a loop macro that has an abort that can be used inside its body, or an if-it form that is like if but makes it possible to use the condition’s value in the “then” branch as an it binding. It is possible in all Scheme macro systems to “break hygiene” in such ways, and we will later see how to do this in Racket. However, Racket also provides a better way to deal with such problems (think about it being always “bound to a syntax error”, but locally rebound in an if-it form).
Scheme macros are said to be hygienic — a term used to specify that they respect lexical scope. There are several implementations of hygienic macro systems across Scheme implementations; Racket uses the one known as the “syntax-case system”, named after the syntax-case construct that we discuss below.
All of this can get much more important in the presence of a module system, since you can write a module that provides transformations rules, not just values and functions. This means that the concept of “a library” in Racket is something that doesn’t exist in other languages: it’s a library that has values, functions, as well as macros (or, “compiler plugins”).
The way that Scheme implementations achieve hygiene in a macro system is by making it deal with more than just raw S-expressions. Roughly speaking, it deals with syntax objects, which are a kind of wrapper structure around an S-expression, carrying additional information. The important part of this information, when it comes to dealing with hygiene, is the “lexical scope” — which can roughly be described as having identifiers represented as symbols plus a “color” that represents the scope. This way such systems can properly avoid confusing identifiers that have the same name but come from different scopes.
There was also the problem of making debugging difficult, because a macro can introduce errors that are “coming out of nowhere”. In the implementation that we work with, this is solved by adding yet more information to these syntax objects — in addition to the underlying S-expression and the lexical scope, they also contain source location information. This allows Racket (and DrRacket) to locate the source of a specific syntax error, so locating the offending code is easy. DrRacket’s macro debugger heavily relies on this information to provide a very useful tool — since writing macros can easily become a hard job.
Finally, there was the problem of writing bad macros. For example, it is easy to forget that you’re dealing with a macro definition and write:
just because you want to inline the addition — but in this case you end up duplicating the input expression, which can have a disastrous effect. For example:
expands to a lot of code to compile.
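To see the blowup, consider this sketch (double is a hypothetical macro):

```racket
#lang racket

(define-syntax-rule (double x) (+ x x))

;; Each use splices the input expression in twice, so nesting n levels
;; duplicates the innermost expression 2^n times:
(double (double (double (double 1))))
;; expands into an expression containing sixteen copies of `1`
```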
Another example is:
the problem here is that (* foo 2) will be used as an identifier to be bound by the let expression — which can lead to a confusing syntax error.
Racket provides many tools to help macro programmers — in addition to a user-interface tool like the macro debugger, there are also programmer-level tools with which you can reject an input if it doesn’t contain an identifier at a certain place, etc. Still, writing macros is much harder than writing functions; some of these problems are inherent to the problem that macros solve — for example, you may want a twice macro that replicates an expression. By specifying a transformation to the core language, a macro writer has full control over which expressions get evaluated and how, which identifiers are binding instances, and how the scope of the given expression is shaped.