Lecture #8, Tuesday, January 30th
=================================

- Introducing Racket's `lambda`
- Using Functions as Objects
- Using `define-type` for new "type aliases"
- Currying
- Using Higher-Order & Anonymous Functions
- Side-note: "Point-Free" combinators
- This is not Runtime Code Generation
- Substitution Caches
- Implementation of Cache Functionality
- Formal Rules for Cached Substitutions

------------------------------------------------------------------------

# Introducing Racket's `lambda`

> Quick laundry list of things to go over:
> - `fun` & `lambda`
> - the difference between a lambda and simple values
> - not being able to do recursive functions with `let`
> - `let*` as a derived form
> - `let` with `lambda` in Racket --> can be a derived form
> - how `if` can be used to implement `and` and `or` as derived forms
> - Newtonian syntax vs. a lambda expression

Almost all modern languages have this capability.  For example, this:

    (define (f g) (g 2 3))
    (f +)  ==> 5
    (f *)  ==> 6
    (f (lambda (x y) (+ (square x) (square y))))  ==> 13

can be written in JavaScript like this:

    function f(g) { return g(2,3); }
    function square(x) { return x*x; }
    console.log(f(function (x,y) { return square(x) + square(y); }));

or in ES6 JavaScript:

    let f = (g) => g(2,3);
    let square = (x) => x*x;
    console.log(f((x,y) => square(x) + square(y)));

In Perl:

    sub f { my ($g) = @_; return $g->(2,3); }
    sub square { my ($x) = @_; return $x * $x; }
    print f(sub { my ($x, $y) = @_; return square($x) + square($y); });

In Ruby:

    def f(g) g.call(2,3) end
    def square(x) x*x end
    puts f(lambda{|x,y| square(x) + square(y)})

etc.  Even [Java has lambda expressions], and "recently" [C++ added
them too].

[Java has lambda expressions]:
  http://www.drdobbs.com/jvm/lambda-expressions-in-java-8/240166764
[C++ added them too]:
  http://www.cprogramming.com/c++11/c++11-lambda-closures.html

------------------------------------------------------------------------

# Using Functions as Objects

A very important aspect of Racket --- using "higher order" functions
--- functions that get and return functions.  Here is a very simple
example:

    (define (f x) (lambda () x))
    (define a (f 2))
    (a)  --> 2
    (define b (f 3))
    (b)  --> 3

Note: what we get is actually an object that remembers (by the
substitution we're doing) a number.  How about:

    (define aa (f a))
    (aa)    --> #<procedure>   (this is `a`)
    ((aa))  --> 2

Take this idea to the next level:

    (define (kons x y) (lambda (b) (if b x y)))
    (define (kar p) (p #t))
    (define (kdr p) (p #f))

    (define a (kons 1 2))
    (define b (kons 3 4))
    (list (kar a) (kdr a))
    (list (kar b) (kdr b))

Or, with types:

    (: kons : (All (A B) A B -> (Boolean -> (U A B))))
    (define (kons x y) (lambda (b) (if b x y)))
    (: kar : (All (T) (Boolean -> T) -> T))
    (define (kar p) (p #t))
    (: kdr : (All (T) (Boolean -> T) -> T))
    (define (kdr p) (p #f))

    (define a (kons 1 2))
    (define b (kons 3 4))
    (list (kar a) (kdr a))
    (list (kar b) (kdr b))

Even more --- why should the internal function expect a boolean and
choose what to return?
We can simply expect a function that will take the two values and
return one:

    (define (kons x y) (lambda (s) (s x y)))
    (define (kar p) (p (lambda (x y) x)))
    (define (kdr p) (p (lambda (x y) y)))

    (define a (kons 1 2))
    (define b (kons 3 4))
    (list (kar a) (kdr a))
    (list (kar b) (kdr b))

And a typed version, using our own type constructor to make it a
little less painful:

    (define-type (Kons A B) = ((A B -> (U A B)) -> (U A B)))

    (: kons : (All (A B) A B -> (Kons A B)))
    (define (kons x y) (lambda (s) (s x y)))
    (: kar : (All (A B) (Kons A B) -> (U A B)))
    (define (kar p) (p (lambda (x y) x)))
    (: kdr : (All (A B) (Kons A B) -> (U A B)))
    (define (kdr p) (p (lambda (x y) y)))

    (define a (kons 1 2))
    (define b (kons 3 4))
    (list (kar a) (kdr a))
    (list (kar b) (kdr b))

Note that the `Kons` type definition is the same as:

    (define-type Kons = (All (A B) (A B -> (U A B)) -> (U A B)))

so `All` is to polymorphic type definitions what `lambda` is to
function definitions.

Finally, in JavaScript:

    function kons(x,y) { return function(s) { return s(x, y); } }
    function kar(p) { return p(function(x,y){ return x; }); }
    function kdr(p) { return p(function(x,y){ return y; }); }

    a = kons(1,2);
    b = kons(3,4);
    console.log('a = <' + kar(a) + ',' + kdr(a) + '>');
    console.log('b = <' + kar(b) + ',' + kdr(b) + '>');

or with ES6 *arrow functions*, the function definitions become:

    const kons = (x,y) => s => s(x,y);
    const kar  = p => p((x,y) => x);
    const kdr  = p => p((x,y) => y);

and using TypeScript to add types:

    type Kons<A,B> = (s: (x:A, y:B) => A|B) => A|B

    const kons = <A,B>(x:A, y:B) => (s: (x:A, y:B) => A|B) => s(x,y);
    const kar  = <A,B>(p: Kons<A,B>) => p((x,y) => x);
    const kdr  = <A,B>(p: Kons<A,B>) => p((x,y) => y);

------------------------------------------------------------------------

# Using `define-type` for new "type aliases"

As seen in these examples, there is another way to use `define-type`:
using a `=` to create a new type name as an "alias" for an *existing*
type.  For example:

    (define-type Strings = (Listof String))

These uses of `define-type` do not define any new kind of type --- they
are essentially a convenience tool for making code shorter and more
readable.

    (define-type NumericFunction = Number -> Number)

    (: square : NumericFunction)
    (define (square n) (* n n))

Note in particular that this can also be used to define "alias type
constructors" too, somewhat similar to creating new "type functions".
For example:

    (define-type (BinaryFun In Out) = In In -> Out)

    (: diagonal : (BinaryFun Natural Number))
    (define (diagonal width height)
      (sqrt (+ (* width width) (* height height))))

This is something that we will only need in a few rare cases.

------------------------------------------------------------------------

# Currying

A *curried* function is a function that, instead of accepting two (or
more) arguments, accepts only one and returns a function that accepts
the rest.  For example:

    (: plus : Number -> (Number -> Number))
    (define (plus x) (lambda (y) (+ x y)))

It's easy to write functions for translating between normal and
curried versions:
    (define (currify f)
      (lambda (x) (lambda (y) (f x y))))

A typed version of that, with examples:

    (: currify : (All (A B C) (A B -> C) -> (A -> (B -> C))))
    ;; convert a double-argument function to a curried one
    (define (currify f)
      (lambda (x) (lambda (y) (f x y))))

    (: add : Number Number -> Number)
    (define (add x y) (+ x y))

    (: plus : Number -> (Number -> Number))
    (define plus (currify add))

    (test ((plus 1) 2) => 3)
    (test (((currify add) 1) 2) => 3)
    (test (map (plus 1) '(1 2 3)) => '(2 3 4))
    (test (map ((currify add) 1) '(1 2 3)) => '(2 3 4))
    (test (map ((currify +) 1) '(1 2 3)) => '(2 3 4))

Such usage is common with higher-order functions like `map`, where we
want to *fix* one argument.  When dealing with such higher-order code,
the types are very helpful, since every arrow corresponds to a
function:

    (: currify : (All (A B C) (A B -> C) -> (A -> (B -> C))))

It is common to make the `->` function type associate to the right, so
you can find this type written as:

    currify : (A B -> C) -> (A -> B -> C)

or even as:

    currify : (A B -> C) -> A -> B -> C

but that can be a little confusing...

------------------------------------------------------------------------

# Using Higher-Order & Anonymous Functions

Say that we have a function for estimating derivatives of a function
at a specific point:

    (define dx 0.01)

    (: deriv : (Number -> Number) Number -> Number)
    ;; compute the derivative of `f' at the given point `x'
    (define (deriv f x)
      (/ (- (f (+ x dx)) (f x)) dx))

    (: integrate : (Number -> Number) Number -> Number)
    ;; compute an integral of `f' at the given point `x'
    (define (integrate f x)
      (: loop : Number Number -> Number)
      (define (loop y acc)
        (if (> y x)
          (* acc dx)
          (loop (+ y dx) (+ acc (f y)))))
      (loop 0 0))

And say that we want to try out various functions given some `plot`
function that draws graphs of numeric functions, for example:

    (plot sin)

The problem is that `plot` expects a single `(Number -> Number)`
function --- if we want to try it with a derivative, we can do this:

    (: sin-deriv : Number -> Number)
    ;; the derivative of sin
    (define sin-deriv (lambda (x) (deriv sin x)))

    (plot sin-deriv)

But this will get very tedious very fast --- it is much simpler to use
an anonymous function:

    (plot (lambda (x) (deriv sin x)))

We can even verify that our derivative is correct by comparing a known
function to its derivative:

    (plot (lambda (x) (- (deriv sin x) (cos x))))

But it's still not completely natural to do these things --- you need
to explicitly combine functions, which is not too convenient.  Instead
of doing this, we can write higher-order functions that will work with
functional inputs and outputs.
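One of the simplest such functions is plain composition: take two
unary functions and chain them.  Here is a typed sketch (the type
annotation is an assumption; the untyped library further below
includes the same `compose` without it):

    (: compose : (All (A B C) (B -> C) (A -> B) -> (A -> C)))
    ;; compose two single-argument functions: ((compose f g) x)
    ;; computes the same thing as (f (g x))
    (define (compose f g)
      (lambda (x) (f (g x))))

For instance, `((compose add1 add1) 5)` evaluates to `7`.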
For example, we can write a function to subtract functions:

    (: fsub : (Number -> Number) (Number -> Number)
              -> (Number -> Number))
    ;; subtracts two numeric 1-argument functions
    (define (fsub f g)
      (lambda (x) (- (f x) (g x))))

and the same for the derivative:

    (: fderiv : (Number -> Number) -> (Number -> Number))
    ;; compute the derivative function of `f'
    (define (fderiv f)
      (lambda (x) (deriv f x)))

Now we can try the same in a much easier way:

    (plot (fsub (fderiv sin) cos))

More than that --- our `fderiv` could be created from `deriv`
automatically:

    (: currify : (All (A B C) (A B -> C) -> (A -> B -> C)))
    ;; convert a double-argument function to a curried one
    (define (currify f)
      (lambda (x) (lambda (y) (f x y))))

    (: fderiv : (Number -> Number) -> (Number -> Number))
    ;; compute the derivative function of `f'
    (define fderiv (currify deriv))

Same principle with `fsub`: we can write a function that converts a
binary arithmetic function into a function that operates on unary
numeric functions.  But to make things more readable, we can define
new types for unary and binary numeric functions:

    (define-type UnaryFun  = (Number -> Number))
    (define-type BinaryFun = (Number Number -> Number))

    (: binop->fbinop : BinaryFun -> (UnaryFun UnaryFun -> UnaryFun))
    ;; turns an arithmetic binary operator into a function operator
    (define (binop->fbinop op)
      (lambda (f g)
        (lambda (x) (op (f x) (g x)))))

    (: fsub : UnaryFun UnaryFun -> UnaryFun)
    ;; functional pointwise subtraction
    (define fsub (binop->fbinop -))

We can do this with anything --- developing a rich library of
functions and functionals (functions over functions) is extremely
easy...  Here's a pretty extensive yet very short library of
functions:

    #lang pl untyped

    (define (currify f)
      (lambda (x) (lambda (y) (f x y))))

    (define (binop->fbinop op)
      (lambda (f g)
        (lambda (x) (op (f x) (g x)))))

    (define (compose f g)
      (lambda (x) (f (g x))))

    (define dx 0.01)

    (define (deriv f x)
      (/ (- (f (+ x dx)) (f x)) dx))

    (define (integrate f x)
      (define over? (if (< x 0) < >))
      (define step  (if (< x 0) - +))
      (define add   (if (< x 0) - +))
      (define (loop y acc)
        (if (over? y x)
          (* acc dx)
          (loop (step y dx) (add acc (f y)))))
      (loop 0 0))

    (define fadd (binop->fbinop +))
    (define fsub (binop->fbinop -))
    (define fmul (binop->fbinop *))
    (define fdiv (binop->fbinop /))

    (define fderiv     (currify deriv))
    (define fintegrate (currify integrate))

    ;; ...

This is written in the "untyped dialect" of the class language, but it
should be easy now to add the types.  Examples:

    ;; want to verify that `integrate' is the opposite of `deriv'?
    ;; take a function, subtract it from its derivative's integral:
    (plot (fsub sin (fintegrate (fderiv sin))))

    ;; want to magnify the errors? -- here's how you magnify:
    (plot (compose ((currify *) 5) sin))
    ;; so:
    (plot (compose ((currify *) 20)
                   (fsub sin (fintegrate (fderiv sin)))))

------------------------------------------------------------------------

## Side-note: "Point-Free" combinators

> Forming functions without using `lambda` (or an implicit `lambda`
> using a `define` syntactic sugar) is called *point-free style*.  It's
> especially popular in Haskell, where it is easier to form functions
> this way because of implicit currying and a large number of
> higher-level function combinators.  If used too much, it can easily
> lead to obfuscated code.

------------------------------------------------------------------------

## This is not Runtime Code Generation

All of this is similar to run-time code generation, but not really.
The only thing that `fderiv` does is take a function and store it
somewhere in the returned function; then, when that function receives
a number, it takes the stored function and sends it to `deriv` along
with the number.  We could simply write `deriv` as what `fderiv` is
--- which is the *real* derivative function:

    (define (deriv f)
      (lambda (x)
        (/ (- (f (+ x dx)) (f x)) dx)))

but again, this is not faster or slower than the plain `deriv`.

However, there are some situations where we can do some of the
computation on the first-stage argument, saving work in the second
stage.  Here is a cooked-to-exaggeration example --- we want a
function that receives two inputs `x`, `y` and returns `fib(x)*y`, but
we must use a stupid `fib`:

    (define (fib n)
      (if (<= n 1)
        n
        (+ (fib (- n 1)) (fib (- n 2)))))

The function we want is:

    (define (bogus x y)
      (* (fib x) y))

If we currify it as usual (or just use `currify`), we get:

    (define (bogus x)
      (lambda (y)
        (* (fib x) y)))

And try this several times:

    (define bogus36 (bogus 36))
    (map bogus36 '(1 2 3 4 5))

But in the definition of `bogus`, notice that `(fib x)` does not
depend on `y` --- so we can rewrite it a little differently:

    (define (bogus x)
      (let ([fibx (fib x)])
        (lambda (y)
          (* fibx y))))

and trying the above again is much faster now:

    (define bogus36 (bogus 36))
    (map bogus36 '(1 2 3 4 5))

This is therefore not doing any kind of runtime code generation, but
it *enables* doing similar optimizations in our code.  A proper RTCG
facility would recompile the curried function for a given first input,
and (hopefully) automatically achieve the optimization that we did
here in a manual way.

------------------------------------------------------------------------

# Substitution Caches

> [PLAI §5] (called "deferred substitutions" there)

Evaluating using substitutions is very inefficient --- at each scope,
we copy a piece of the program AST.  This includes all function calls,
which implies an impractical cost (function calls should be *cheap*!).

To get over this, we want to use a cache of substitutions.

Basic idea: we begin evaluating with no cached substitutions, then
collect them as we encounter bindings.

[This implies another change for our evaluator: we don't really
substitute identifiers until we get to them; when we reach an
identifier, it is no longer an error --- we must consult the
substitution cache.]

------------------------------------------------------------------------

# Implementation of Cache Functionality

First, we need a type for a substitution cache.  For this we will use
a list of lists of two elements each --- a name and its FLANG value:

    ;; a type for substitution caches:
    (define-type SubstCache = (Listof (List Symbol FLANG)))

We need to have an empty substitution cache, a way to extend it, and a
way to look things up:

    (: empty-subst : SubstCache)
    (define empty-subst null)

    (: extend : Symbol FLANG SubstCache -> SubstCache)
    ;; extend a given substitution cache with a new mapping
    (define (extend id expr sc)
      (cons (list id expr) sc))

    (: lookup : Symbol SubstCache -> FLANG)
    ;; lookup a symbol in a substitution cache, return the value it is
    ;; bound to (or throw an error if it isn't bound)
    (define (lookup name sc)
      (cond [(null? sc) (error 'lookup "no binding for ~s" name)]
            [(eq? name (first (first sc))) (second (first sc))]
            [else (lookup name (rest sc))]))

Actually, the reason to use such a list of lists is that Racket has a
built-in function called `assq` that will do this kind of search
(`assq` searches an association list, using `eq?` for the key
comparison).
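To see what `assq` gives us, here is a quick sketch (the association
list below is just an illustrative value, not part of the
interpreter):

    (assq 'b '((a 1) (b 2) (c 3)))  ; => '(b 2), the whole 2-element list
    (assq 'z '((a 1) (b 2) (c 3)))  ; => #f, since there is no `z' key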
This is a version of `lookup` that uses `assq`:

    (define (lookup name sc)
      (let ([cell (assq name sc)])
        (if cell
          (second cell)
          (error 'lookup "no binding for ~s" name))))

------------------------------------------------------------------------

# Formal Rules for Cached Substitutions

The formal evaluation rules are now different.  Evaluation carries
along a *substitution cache* that begins its life as empty, so `eval`
needs an extra argument.

We begin by writing the rules that deal with the cache, using the
above function names for simplicity --- the behavior of the three
definitions can be summed up in a single rule for `lookup`:

    lookup(x,empty-subst)    = error!
    lookup(x,extend(x,E,sc)) = E
    lookup(x,extend(y,E,sc)) = lookup(x,sc)   if `x` is not `y`

And now we can write the new rules for `eval`:

    eval(N,sc)                = N
    eval({+ E1 E2},sc)        = eval(E1,sc) + eval(E2,sc)
    eval({- E1 E2},sc)        = eval(E1,sc) - eval(E2,sc)
    eval({* E1 E2},sc)        = eval(E1,sc) * eval(E2,sc)
    eval({/ E1 E2},sc)        = eval(E1,sc) / eval(E2,sc)
    eval(x,sc)                = lookup(x,sc)
    eval({with {x E1} E2},sc) = eval(E2,extend(x,eval(E1,sc),sc))
    eval({fun {x} E},sc)      = {fun {x} E}
    eval({call E1 E2},sc)     = eval(B,extend(x,eval(E2,sc),sc))
                                     if eval(E1,sc) = {fun {x} B}
                              = error!  otherwise

Note that there is no mention of `subst` --- the whole point is that
we don't really do substitution; we use the cache instead.  The
`lookup` rules, together with the places where `extend` is used,
replace `subst`, and therefore specify our scoping rules.

Also note that the rule for `call` is still very similar to the rule
for `with`, but it looks like we have lost something --- the
interesting bit of substituting into `fun` expressions.
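To see how these rules read as code, here is a minimal sketch of the
corresponding `eval`, assuming a stripped-down FLANG AST (the variant
names are assumptions, and the arithmetic cases are omitted) together
with the `SubstCache`, `extend`, and `lookup` definitions from above:

    ;; a stripped-down FLANG, just enough for the interesting rules
    (define-type FLANG
      [Num  Number]
      [Id   Symbol]
      [With Symbol FLANG FLANG]
      [Fun  Symbol FLANG]
      [Call FLANG FLANG])

    (: eval : FLANG SubstCache -> FLANG)
    ;; evaluate a FLANG expression, carrying along a substitution cache
    (define (eval expr sc)
      (cases expr
        [(Num n)   expr]
        [(Id name) (lookup name sc)]
        [(With bound-id named-expr bound-body)
         ;; eval({with {x E1} E2},sc) = eval(E2,extend(x,eval(E1,sc),sc))
         (eval bound-body (extend bound-id (eval named-expr sc) sc))]
        [(Fun bound-id bound-body)
         ;; eval({fun {x} E},sc) = {fun {x} E}
         expr]
        [(Call fun-expr arg-expr)
         ;; eval({call E1 E2},sc) = eval(B,extend(x,eval(E2,sc),sc))
         ;;                         if eval(E1,sc) = {fun {x} B}
         (let ([fval (eval fun-expr sc)])
           (cases fval
             [(Fun bound-id bound-body)
              (eval bound-body (extend bound-id (eval arg-expr sc) sc))]
             [else (error 'eval "`call' expects a function, got: ~s"
                          fval)]))]))

Note that, exactly as in the rule above, the `Call` case extends the
*current* cache `sc` rather than doing anything special with the `fun`
expression it found.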