We’ve seen that a lazy language without the call-by-need optimization is too slow to be practical, but the optimization makes using side-effects extremely confusing. Specifically, when we deal with side-effects (I/O, mutation, errors, etc) the order of evaluation matters, but in our interpreter expressions are getting evaluated as needed. (Remember tracing the prime-numbers code in Lazy Racket — numbers are tested as needed, not in order.) If we can’t do these things, the question is whether there is any point in using a purely functional lazy language at all — since computer programs often interact with an imperative world.
There is a solution for this: the lazy language does not have any (sane) facilities for doing things (like printf, which prints something in plain Racket), but it can use a data structure that describes such operations. For example, in Lazy Racket we cannot sanely print stuff with printf, but we can construct a string using format (which is like printf, except that it returns the formatted string instead of printing it). So (assuming Racket syntax for simplicity), instead we will write:
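For example, something like the following (the specific format string and value are my own, just to illustrate the idea):

```racket
;; instead of (printf "~s\n" (* 6 7)), which would print directly,
;; we construct the string that *would* have been printed:
(format "~s\n" (* 6 7))   ; evaluates to the string "42\n"
```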
and get back a string. We can now change the way that our interpreter deals with the output value that it receives after evaluating a lazy expression: if it receives a string, then it can take that string as denoting a request for printout, and simply print it. Such an evaluator will do the printout when the lazy evaluation is done, and everything works fine because we don’t try to use any side-effects in the lazy language — we just describe the desired side-effects, and constructing such a description does not require performing side-effects.
But this only solves printing a single string, and nothing else. If we want to print two strings, then the only thing we can do is concatenate the two strings — but that is not only inefficient, it cannot describe infinite output (since we will not be able to construct the infinite string in memory). So we need a better way to chain several printout representations. One way to do so is to use a list of strings, but to make things a little easier to manage, we will create a type for I/O descriptions — and populate it with one variant holding a string (for plain printout) and one for holding a chain of two descriptions (which can be used to construct an arbitrarily long sequence of descriptions):
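In the define-type style of our interpreters, such a definition could look like this (the Print constructor name is an assumption; the exact syntax depends on the language level we use):

```racket
(define-type IO
  [Print  String]   ; a request to print the given string
  [Begin2 IO IO])   ; do the first description, then the second
```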
Now we can use this to chain any number of printout representations by turning them into a single Begin2 request, which is very similar to simply using a loop to print a list. For example, the eager printout loop turns into the following code:
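A sketch of what this could look like, assuming the Print and Begin2 constructors from above (the exact output format is an assumption):

```racket
;; instead of printing each element, build one IO description
;; that describes the whole printout
(define (print-list l)
  (if (null? l)
    (Print "\n")
    (Begin2 (Print (format "~s " (first l)))
            (print-list (rest l)))))
```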
This will basically scan an input list like the eager version, but instead of printing the list, it will convert it into a single output request that forms a recipe for this printout. Note that within the lazy world, the result of print-list is just a value; there are no side effects involved. Turning this value into the actual printout is something that needs to be done on the eager side, which must be part of the implementation. In the case of Lazy Racket we have no access to the implementation, but we can do so in our Sloth implementation: again, run will inspect the result and either print a given string or follow the two sub-descriptions of a Begin2 value. (To implement this, we will add an IOV variant to the VAL type definition, and have it contain an IO description of the above type.)
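A sketch of what the imperative back-end could do with such a description (hypothetical code; the homework version will differ in its details):

```racket
;; execute an IO description -- the only place where actual
;; side effects happen
(define (run-io io)
  (cases io
    [(Print str)  (display str)]
    [(Begin2 a b) (run-io a)
                  (run-io b)]))
```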
Because the sequence is constructed in the lazy world, it will not require allocating the whole sequence in memory — it can be forced bit by bit (using strict) as the imperative back-end (the run part of the implementation) follows the instructions in the resulting IO description. More concretely, it will also work on an infinite list: the translation of an infinite-loop printout function will be one that returns an infinite IO description tree of Begin2 values. This loop will force only what it needs to print, and will go on recursively printing the whole sequence (possibly not terminating). For example (again, using Racket syntax), the infinite printout loop is translated into a function that returns an infinite tree of print operations:
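For instance, a loop that counts up forever could translate into something like this (a made-up example, using the constructors assumed above):

```racket
;; an infinite tree of IO descriptions -- laziness means that
;; only the parts that are actually executed get forced
(define (count-from n)
  (Begin2 (Print (format "~s\n" n))
          (count-from (add1 n))))
```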
When this tree is converted to actions, it will result in an infinite loop that produces the same output — it is essentially the same infinite loop, only now it is driven by an infinite description rather than being an infinite process.
Finally, how should we deal with inputs? We can add another variant to our type definition that represents a read-line operation; like read-line, it does not require any arguments:
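Extending the type with such a variant (again in the define-type style; this is only a first attempt):

```racket
(define-type IO
  [Print    String]
  [Begin2   IO IO]
  [ReadLine])          ; a request to read one line of input
```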
Now the eager implementation can invoke read-line when it encounters a ReadLine value — but what should it do with the resulting string? Even worse, naively binding the result, as in something like (let ([line (ReadLine)]) …), doesn't get us the string that is read — instead, the value of line is a description of a read operation, which is very different from the actual string value that we want in the binding.
The solution is to take the “code that acts on the string value” and make it the argument to ReadLine. In the above example, that would be the let expression without the (ReadLine) part — and as you remember from the time we introduced functions, taking away a named expression from a binding expression leaves us with a function. With this in mind, it makes sense to make ReadLine take a function value that represents what to do in the future, once the reading is actually done.
This receiver value is a kind of a continuation of the computation, provided as a callback value — it will get the string that was read on the terminal, and will return a new description of side-effects that represents the rest of the process:
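The variant now holds a function from the string that was read to the rest of the IO description (the arrow type syntax here is schematic):

```racket
(define-type IO
  [Print    String]
  [Begin2   IO IO]
  [ReadLine (String -> IO)])  ; continue with the line that was read
```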
Now, when the eager side sees a ReadLine value, it will read a line and invoke the callback function with the string that it has read. By doing this, control goes back to the lazy world to process the value and get back another IO value to continue the processing. This results in a process where the lazy code generates some IO descriptions, the imperative side executes them, control goes back to the lazy code, then back to the imperative side, and so on.
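The imperative driver sketched earlier extends naturally to handle this (hypothetical code; the homework details differ):

```racket
(define (run-io io)
  (cases io
    [(Print str)   (display str)]
    [(Begin2 a b)  (run-io a) (run-io b)]
    ;; read a line eagerly, hand it to the lazy callback, and
    ;; execute whatever IO description comes back
    [(ReadLine k)  (run-io (k (read-line)))]))
```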
As a more verbose example of all of the above, this silly loop:
is now translated to:
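As a made-up illustration (not the actual homework code), suppose the eager loop is:

```racket
(define (silly-loop)
  (printf "What is your name? ")
  (let ([name (read-line)])
    (if (equal? name "quit")
      (printf "bye\n")
      (begin (printf "Hello, ~a!\n" name)
             (silly-loop)))))
```

Its translation replaces each printf with a Print description, each sequence with Begin2, and the read-line binding with a ReadLine callback:

```racket
(define (silly-loop)
  (Begin2 (Print "What is your name? ")
          (ReadLine
           (lambda (name)
             (if (equal? name "quit")
               (Print "bye\n")
               (Begin2 (Print (format "Hello, ~a!\n" name))
                       (silly-loop)))))))
```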
Using this strategy to implement side effects is possible, and you will do that in the homework — some technical details will be different, but the principle is the same as discussed above. The last problem is that the above code is difficult to work with — in the homework you will see how to use syntactic abstractions to make things much simpler.
Programming languages differ in numerous ways:
Each uses different notations for writing down programs. As we’ve observed, however, syntax is only partially interesting. (This is, however, less true of languages that are trying to mirror the notation of a particular domain.)
Control constructs: for instance, early languages didn’t even support recursion, while most modern languages still don’t have continuations.
The kinds of data they support. Indeed, sophisticated languages like Racket blur the distinction between control and data by making fragments of control into data values (such as first-class functions and continuations).
The means of organizing programs: do they have functions, modules, classes, namespaces, …?
Automation such as memory management, run-time safety checks, and so on.
Each of these items suggests natural questions to ask when you design your own languages in particular domains.
Obviously, there are a lot of domain specific languages these days — and that’s not new. For example, four of the oldest languages were conceived as domain specific languages:
Only in the late 60s / early 70s did languages begin to break free of their special-purpose domains and become general purpose languages (GPLs). These days, we usually use some GPL for our programs and often come up with small domain specific languages (DSLs) for specific jobs. The problem is designing such a specific language. There are lots of decisions to make, and as should be clear by now, many ways of shooting yourself in the foot. You need to know:
What is your domain?
What are the common notations in this domain (need to be convenient both for the machine and for humans)?
What do you expect to get from your DSL? (eg, performance gains when you know that you’re dealing with a certain limited kind of functionality like arithmetics.)
Do you have any semantic reason for a new language? (For example, using special scoping rules, or a mixture of lazy and eager evaluation, maybe a completely different way of evaluation (eg, makefiles).)
Is your language expected to envelop other functionality (eg, shell scripts, TCL), perhaps handing some functionality off to a different language (makefiles and shell scripts)? Or is it going to be embedded in a bigger application (eg, PHP), or embedded in a way that exposes parts of an application to user automation (Emacs Lisp, Word Basic, Visual Basic for Office Application or Some Other Long List of Buzzwords)?
If you have one language embedded in another enveloping language — how do you handle syntax? How can they communicate (eg, share variables)?
And very important:
To clarify why this can be applicable in more situations than you think, consider what programming languages are used for. One example that should not be ignored is using a programming language to implement a programming language — for example, what we did so far (or any other interpreter or compiler). In the same way that some pieces of code in a PL represent things in the “real world”, there are other programs that represent things in a language — possibly even the same one. To make a side-effect-full example, we can abstract over laying a brick when making a wall, packing all the little details into a function:
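For example (an entirely hypothetical sketch — put-mortar, put-brick, and scrape-excess are made-up names):

```racket
;; all of the tedious details, hidden behind one name
(define (lay-brick wall row col)
  (put-mortar    wall row col)
  (put-brick     wall row col)
  (scrape-excess wall row col))
```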
and we can now write a single call instead of all of the above. We might use that in a loop:
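Using the usual recursive-loop template (again a hypothetical sketch, building on the made-up lay-brick function):

```racket
;; lay bricks n, n-1, ..., 1 of the given row
(define (lay-row wall row n)
  (when (> n 0)
    (lay-brick wall row n)
    (lay-row wall row (sub1 n))))
```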
This is a common piece of looping code that we’ve seen in many forms, and a common complaint of newcomers to functional languages is the lack of some kind of a loop. But once you know the template, writing such loops is easy — and in fact, you can write code that would take something like:
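One can imagine a surface form like this (completely made-up syntax, just for illustration):

```racket
(loop col from 1 to 100
  (lay-brick wall 1 col))
```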
and produce the previous code. Note the main point here: we switch from code that deals with bricks to code that deals with code.
Now, a viable option for implementing a new DSL is to transform it into an existing language. Such a process is usually tedious and error prone — tedious because you need to deal with the boring parts of a language (making a parser, etc), and error prone because it is easy to generate bad code (especially when you’re dealing with strings), and because you get errors in terms of the translated code instead of the actual code, resorting to debugging the intermediate generated programs. Lisp languages traditionally take this idea one level further than other languages: instead of writing a new transformer for your language, you use the host language, but you extend and customize it by adding your own forms.