June 20, 2011
Partial function application in F#, part 5: design considerations
I showed earlier in this series how important and idiomatic partial function application is in F#. Although you can always replace a partial application by a lambda, partial application is much more concise and readable.
So when you design a function in F#, it’s worth bearing in mind how it might be partially applied. I’ve ducked this issue throughout this series by using a very simple example which doesn’t really have any design choices about it, but real functions don’t have that luxury.
A simple predicate
Suppose we want to create a divisibleby function, which takes two arguments and returns whether one is divisible by the other. There are two ways we could write this function:
let divisibleby value factor = (value % factor = 0)
let divisibleby factor value = (value % factor = 0)
Obviously, these are the same except for whether we put the value we’re testing first or last in the argument list. Which should we prefer? You might instinctively go for the first one because the arguments appear in the same order as they do in the English sentence ‘value is divisible by factor.’ But take a look at what happens when you try to use that as a predicate in a filter expression.
let evens = List.filter (fun x -> divisibleby x 2) ints
Lambdarrific! By contrast, here’s how the predicate looks if we write divisibleby the other way around, with the factor first.
let evens = List.filter (divisibleby 2) ints
We can use partial application, and all the lambda noise goes away. The resulting expression is shorter, less noisy and more readable. Clearly if we want to use our divisibleby function in predicates, we should be writing it the second way, testee last.
A simple mapping
The List.map (LINQ Select) function provides another example. Suppose we want to subtract 1 from every member of a list:
List.map (subtract 1) ints
How should we design the subtract function? We want subtract 1 to result in subtracting 1 from its argument:
(subtract 1) y = y - 1
So, counterintuitively, we should design subtract to take its subtrahend first and its minuend second:
let subtract x y = y - x
Of course, this could be very confusing to use in a non-partial form because it reverses the order of the subtraction operator! A user writing subtract 5 2 would probably expect a result of 3, not -3. So this is a case where it might be better to stick to using a lambda in the List.map call.
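To make the trade-off concrete, here is a rough F# Interactive session (output shown approximately) demonstrating the surprise, followed by the lambda alternative for the mapping:

> let subtract x y = y - x;;
val subtract : int -> int -> int

> subtract 5 2;;     // reads like '5 minus 2'...
val it : int = -3    // ...but actually computes 2 - 5

// so for clarity we might prefer the explicit lambda in the mapping
let decremented = List.map (fun x -> x - 1) ints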
The pipeline operator
F#’s pipeline operator, |>, is a nice way to make long chains of operations more readable. The pipeline operator just flips function application around:
let (|>) x f = f x
Partial application means we can use the pipeline operator even with multi-argument functions:
23 |> subtract 17;;
val it : int = 6
A more realistic example of this performs a LINQ-like query by composing List functions in a pipeline:
ints |> List.filter (divisibleby 2)
     |> List.map (subtract 1)
Notice that this example depends on partial application in two ways. Not only are we partially applying the divisibleby and subtract functions to create our filter predicate and mapping function, but we are also partially applying the List.filter and List.map functions themselves.
Looking at list functions
F#’s list functions provide another case study in function design. Both List.filter and List.map take two arguments, and in both cases the filter or map function appears first, and the list to apply it to appears last. Similarly, List.fold (the equivalent to the LINQ Aggregate operator) puts the accumulator function first, the seed second and the list to aggregate over last.
This isn’t an accident. It’s precisely so that you can partially apply these functions.
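As a quick sketch of what that buys us, reusing the factor-first divisibleby from above: pinning down the predicate gives a function we can feed any number of lists.

let keepEvens = List.filter (divisibleby 2)   // predicate fixed, list still to come

keepEvens [1; 2; 3; 4]        // [2; 4]
keepEvens [10; 15; 20; 25]    // [10; 20]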
The F# designers had two choices for the design of List.filter:
let filter predicate list = ... // actual
let filter list predicate = ... // hypothetical
The implementation is the same, of course, so that doesn’t help decide between them. But we can compare the meaning of the partially applied versions.
In the actual version, we can write List.filter predicate – for example, List.filter isEven – and get a function that we can apply to lists. This seems like it will give us useful functions: for example, List.filter isEven gives us a function that we can apply to a list to choose its even members.
In the hypothetical version, the partial application is List.filter list. So we can write something like List.filter ints and get a function that we can apply to predicates. In this version, List.filter ints takes a predicate, and returns all the elements of ints for which that predicate is true. This isn’t necessarily useless, but it’s kind of obscure, and more critically it is evidently a function of predicates, not of lists. If it is useful, it belongs in a Predicate module, not in a List module.
Just tell me the rules already
It’s impossible to lay down a firm set of rules on how to design F# functions. The two versions of List.filter above are functionally equivalent: the F# designers were able to choose between them because they knew that users were much more likely to be choosing a predicate and then passing different lists through it than to be choosing a list and trying different predicates against it. If you have a scenario where you have a fixed list and predicates coming in from the outside, then List.filter is the wrong design for you: you need a function which works the same as List.filter but reverses the arguments. (Needless to say, it’s trivial to write such a function: let trueIn list f = List.filter f list.) The ‘right’ design depends on how you’ll be applying the function.
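For completeness, here is a sketch of that flipped function in use: the list is fixed, and different predicates are tried against it.

let trueIn list f = List.filter f list

let inSample = trueIn [2; 3; 5; 8; 13]   // the list is pinned down this time

inSample (divisibleby 2)                 // [2; 8]
inSample (fun x -> x > 4)                // [5; 8; 13]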
Still, here’s my rule of thumb:
The thing you want to apply the function to should be the last argument.
This rule is, on the face of it, nonsense. The function takes multiple arguments, and you apply it to all of them.
But actually, a lot of the time it turns out that there’s one thing you want to apply the function to, and the other arguments are the extra information you need in order to apply the function. In the divisibleby example, I wanted to know if value was divisible by something. Of course, I had to say what that ‘something’ was; but it was the divisibility of value that I was interested in. And it turned out that putting value in the last place was the best choice. Similarly, in the subtract function, I wanted to subtract something from a minuend; I had to say what the subtrahend was, but the minuend was what it got applied to. And in List.filter, I wanted to filter things from a list; I had to say what criterion I wanted to use, but it was still the list that was being filtered.
Of course, this rule doesn’t help you to organise arguments other than the last, so what should you do if you have a function of three or more arguments? The rule of thumb is a bit more nebulous here, but here’s a stab at it:
Is there a particular argument (or set of arguments) which, if you pin it down, still gives you something useful? If so, put it at the front.
For a case study, let’s look at List.fold. This is roughly equivalent to the LINQ Aggregate operator: you give it a function, a seed and a list, and it applies the function to each member of the list in turn, starting with the seed. For example, List.fold (+) 0 list gives you the sum of the elements of list.
Obviously, the list argument is what List.fold is applied to, so that should be last. But why choose the actual List.fold (+) 0 list rather than the hypothetical List.fold 0 (+) list? Well, what is the meaning of the actual partial application List.fold (+)? It is a ‘sum from’ function – a sum operation starting with whatever seed it gets given. Contrast this with the meaning of the hypothetical partial application List.fold 0. It’s hard to find a meaning for this other than ‘aggregate using a seed of 0.’ Partially applying List.fold with the accumulation function gives us something more useful and meaningful than partially applying List.fold with the seed. So the accumulation function should go first, the list to accumulate over should go last, and by default the seed has to go in the middle.
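A small sketch of the comparison, where foldSeedFirst stands in for the hypothetical seed-first design:

let sumFrom = List.fold (+)    // 'sum, starting from whatever seed you give me'

sumFrom 0 [1; 2; 3]            // 6
sumFrom 100 [1; 2; 3]          // 106

// hypothetical seed-first version: pinning down the seed says much less
// let aggregateFromZero = foldSeedFirst 0   // 'aggregate... somehow... starting from 0'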
These rules of thumb leave a lot of room for interpretation and judgment, and so they should. Design depends on context and usage. In part three we saw partial application being used to precompute a parse tree. In that example, was I applying the function to the formula text, or to the evaluation environment? Both perspectives are meaningful, and I put the formula text first and the evaluation environment last more for practical performance reasons than for conceptual reasons.
Ultimately, you’ll be guided by how you find yourself applying the function in practice. If you find yourself repeatedly writing lambda expressions that could be avoided by pushing a parameter to the front, then that’s a big hint. If you don’t, then either your design is already right, or it’s not enough of a problem to worry about. Ship it.
June 19, 2011
Partial function application in F#, part 4: underlying concepts
In previous parts, I discussed what partial application means and why it’s so important in F# programming. Because partial application is so idiomatic, it’s important to design your own functions to play nicely with partial application, and that means you need to have some idea of the conceptual model that underlies it.
A closer look at F# functions
Here’s a dead simple F# function, declared in F# Interactive.
> let inc x = x + 1;;
val inc : int -> int
F# Interactive’s response tells us that inc is a function from int to int. It represents this using the arrow notation. That is, a -> b is a type meaning ‘function which takes an a and returns a b.’ It’s roughly equivalent to Func<T, TResult> in C#:
Func<int, int> inc = x => x + 1;
So far so good. Now obviously F# also allows us to declare functions of more than one argument. In previous parts I used a trivial example I called incby. Let’s review its declaration and its type:
> let incby x y = x + y;;
val incby : int -> int -> int
As you can see, incby takes two integers, x and y, and returns another integer. So it’s a function from int and int to int.
But look closely at the way F# Interactive represents this function of two arguments. That notation looks really ambiguous, because you could also read it as:
int -> (int -> int)
which would mean ‘takes an int and returns an int -> int,’ i.e. a function of one argument which returns another function.
So what is the int -> int -> int function type? ‘Function which takes two int arguments and returns an int’ or ‘function which takes one int argument and returns an int -> int function’?
A difference that makes no difference is no difference
It turns out that this difference is no difference. int -> int -> int is unambiguous, because the two readings are conceptually the same. Partial application is the realisation of this.
Recall that partial application means that you can provide incby with only one argument instead of its full complement of two:
> incby 2 3;;
val it : int = 5
> incby 2;;
val it : (int -> int) = <fun:it@6>
> it 3;;
val it : int = 5
Look at the first line again. It’s natural to think of this as applying incby to 2 and 3. But it’s also correct to think of it as evaluating (incby 2) and applying that to 3:
> (incby 2) 3;;
val it : int = 5
It even turns out that if we specifically write a single-argument form of incby, we can apply it to two arguments using exactly the same syntax:
> let incbyc x = fun y -> x + y;;
val incbyc : int -> int -> int
> incbyc 2;;
val it : (int -> int) = <fun:it@11-1>
> incbyc 2 3;; // Same calling syntax as incby
val it : int = 5 // Same result
At the F# language level, we just can’t tell the difference between incby and incbyc. They have the same type, and we can apply them in the same way, and in both cases we can apply them to one or two arguments and get the same result.
One way to think of this is that partial application resolves the ambiguity of the a -> b -> c type, by allowing an a -> b -> c to be used interchangeably with an a -> (b -> c). Another way to think of it is that the a -> b -> c type unambiguously means a -> (b -> c), and that the multiple-argument syntax we used in incby is just a handy way to avoid writing and thinking about the fiddly lambda we see in incbyc.
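As a small sketch of that interchangeability, both annotations below type-check and describe exactly the same type:

let incbyA : int -> int -> int   = fun x y -> x + y
let incbyB : int -> (int -> int) = fun x -> (fun y -> x + y)
// incbyA and incbyB have the same type; the parentheses change nothing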
Which is the right way to think of it? Mathematically and conceptually, the second way is correct. Pragmatically, however, it’s often easier to think the first way. Lots of functions have more than two arguments, and trying to think of these all the time as functions that return functions that return functions that return functions will just make your brain turn to custard.
Let’s see that again in C#
It’s instructive to compare F#’s ‘ambiguity’ – though, as we’ve seen, ‘unification’ would be a better word – with the C# equivalent. C# represents functions of multiple arguments quite differently from functions which return functions.
Func<int, int, int> incby = (x, y) => x + y;
Func<int, Func<int, int>> incbyc = x => (y => x + y);
In C#, the two versions have completely different types. Similarly, they are applied in different ways:
int sum = incby(2, 3);
int sumc = incbyc(2)(3);
// shorthand for
// Func<int, int> temp = incbyc(2);
// int sumc = temp(3);
F#’s trick is to always pretend that you wrote the second version, but provide a syntax for doing so that looks like the first version.
Enter the mathematicians
The trick of always pretending that the programmer wrote the second version is called currying. It’s an important technique to understand if you’re interested in the conceptual underpinnings of functional programming, but I’m not going to go into it any further here. Suffice it to say that it’s basically no more than the trick of interpreting (x, y) => something as being equivalent to x => (y => something) – and similarly of course for further arguments. In short, exactly what we’ve already seen F# doing automatically and what we were able to do explicitly in C#.
Does F# really do this?
So does F# really curry all your multi-argument functions to single-argument functions full of chains of lambdas? No, it doesn’t. The easiest way to see this is to call a multi-argument F# function from C#: you’ll see that you have to use the C# multiple argument syntax, not the C# repeated application syntax.
// In FSharpTest module
let incby x y = x + y
// In C#
var sum = FSharpTest.incby(2, 3); // Not incby(2)(3)
The way F# really implements partial application at the compiler level is more like what I described in the first part of this series: when the F# compiler sees you using a function in a partial way, it generates a private function which represents the partially-applied form. You can see how this works by compiling some F# code that performs partial application, then opening the compiled assembly in a C# decompiler such as Reflector.
What does all this mean for me?
If you find the mathematical and conceptual underpinnings a bit intimidating, don’t worry about it too much. You can pretty much get away with thinking about F# function signatures and application the ‘pragmatic’ way, and in fact you’ll be closer to the implementation reality than people who think about them the ‘mathematical’ way. The important thing to understand is that there are two ways of reading a function type like a -> b -> c -> d, and that you can read it whichever way is more convenient for the job at hand. F# doesn’t force you to make a choice the way C# does.
Nevertheless, it’s useful to have an appreciation for the meaning of the intermediate partial forms of a function, even if that’s not how they’re represented in the compiled code. We’ll see why when we talk about what partial application means for designing F# functions.
June 08, 2011
Partial function application in F#, part 3: precomputing partial results
In previous parts we saw that partially applying a function was a bit like creating a lambda to call that function. So far we’ve mostly used this as a convenience for calling LINQ-like functions. In this part we’ll look at it in a different context – precomputing partial results.
The expression engine
Suppose you’re writing a simple data analysis or reporting tool which operates over a data set. Users can enter formulae and the tool evaluates these formulae for each record in the data set – a bit like computed columns in a database. For example, maybe the input data contains a field named Age. Then the user could write 65 - Age and assign this to a field named YearsUntilRetirement.
Using the F# tools fslex and fsyacc, it’s fairly easy to build a parser that translates the user text to an abstract syntax tree (AST), and a set of evaluation functions to walk such a tree and perform the calculations. You can then put this behind a handy evaluate function which hides all the implementation from the caller.
The one wrinkle – and it’s an important one – is that evaluating an AST requires not only the AST itself, but also the values of the fields to which the user formula refers. In the 65 - Age example, the evaluator needs to be able to get the Age value for the record being processed. We’ll call this an evaluation environment. So the evaluate function looks something like this:
let evaluate text environment =
    let ast = parse text          // parse is private
    evalAst ast environment       // evalAst is private
That is, evaluate takes two arguments, the formula string and the evaluation environment object; evalAst similarly takes two arguments, the parsed AST and the environment.
Performance sucks
The evaluate function works fine for small data sets. But if you feed it a lot of records, you’ll notice that it feels a bit… well… sluggish.
It’s not hard to see why. For every record, evaluate is taking the string expression, parsing it, and then evaluating the parsed AST in the provided environment. And it turns out that parsing formulae is hard work.
Let them eat AST
Parsing the same formulae for every record is redundant. The only thing that varies from record to record is the environment: the same formula always results in the same AST.
This suggests one way to solve the performance problem: make the parse and evalAst functions public. Then the consumer can run parse once, store the resulting AST, and feed it back into evalAst as required. Sorted, right?
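A sketch of what that consumer code might look like, assuming parse and evalAst were made public; environments here is hypothetical, standing for one evaluation environment per record:

let ast = parse "65 - Age"    // parse once, up front

let results =
    [ for env in environments -> evalAst ast env ]   // reuse the stored AST for every record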
A functional solution
This works, but at the expense of requiring the consumer to know about parsers and ASTs. The consumer isn’t really interested in the parsing process and its intermediate results. The consumer just wants a way to get the result of a formula for a particular environment.
A way to get one value from another? That sounds like a function! Specifically, it sounds like a function from environments to results.
Without further ado, let’s write that function:
let makeEval text =
    let ast = parse text
    evalAst ast
makeEval starts out the same as evaluate, but because it doesn’t have an environment argument, it can’t finish the job. Instead, it partially applies the evalAst function. Partial application leaves us with something still waiting for an environment, and ready to return a result when it gets one – that is, a function from environments to results, just as we needed.
let f = makeEval "65 - Age" // can cache this for reuse
let result = f env
Like the previous fix, this solves the performance problem. When you call makeEval, F# precomputes the AST, doing all the hard work of parsing, and returns you a function with that precomputed AST all tucked up inside. When you apply that function, it uses the precomputed AST, so it doesn’t pay the parsing overhead again.
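Here is a sketch of the payoff from the consumer’s point of view; as before, environments is hypothetical, one per record. This time the caller never sees a parser or an AST, just a function:

let yearsUntilRetirement = makeEval "65 - Age"    // parsing happens once, here

let results = List.map yearsUntilRetirement environments   // no parsing in here at all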
Spelling it out
This may be confusing, so I’ll rewrite makeEval in a more C#-like idiom:
let makeEval text =
    let ast = parse text
    fun env -> evalAst ast env
This makes it clearer that makeEval returns a function, and exactly what that function does, at the expense of some noise.
Finding the right partial results to precompute
What if we didn’t control the expression engine? Could we still take advantage of precomputing by partially applying evaluate?
let f = evaluate "65 - Age"
let result = f env
evaluate "65 – Age" does return a function from environments to results, so this superficially seems attractive. Alas, it is fools’ gold. The only thing precomputed by evaluate "65 – Age" is the string "65 – Age", which is no saving at all! Partial application doesn’t look inside evaluate to see if there’s anything it could precompute – all it does is bundle up the arguments it has in the form of a lambda. The above is equivalent to:
let f = fun env -> evaluate "65 - Age" env
It’s now obvious that the full evaluate function, including the repetitive parsing, would run each time the lambda was called. The makeEval function works only because it precomputes the AST: it relies on partially applying evalAst rather than partially applying evaluate.
Conclusion
Precomputing partial results is a handy technique in any language, and will work in any language where you can capture those partial results and provide an easy way to reuse them. Lambdas are a particularly convenient way to provide a function with precomputed state, and what we’ve seen here is basically partial application as a shorthand for lambdas.
I hope that after this part and the previous one, partial application doesn’t seem quite as intimidating or obscure as it seemed after the first part. In the next part, I’ll return to the underlying concepts and implementation.
June 07, 2011
Partial function application in F#, part 2: a technique for simplicity
In the previous part, I showed that if you didn’t supply enough arguments to an F# function, you didn’t get an error like you would in C#, but instead got back… another function. No doubt, this seemed like a weird, abstruse behaviour. ‘Functions that return functions’ tend to make people’s heads spin.
In practice, though, far from introducing complication and abstraction, partial application is a simplifying idiom, enabling cleaner, more readable code. Let’s see how.
Mapping and filtering with functions
You’re probably familiar with performing set-based operations like mapping and filtering in C# using LINQ. For example, you can use Select to perform mapping, or Where to perform filtering. Each takes a function which gets applied to each element of the sequence.
var squares = ints.Select(Square);
F# generally doesn’t use the LINQ operators, but has equivalent operators in the List and Seq modules. F#’s equivalent of Select is called map, and its equivalent of Where is filter. Here’s how the C# statement above might translate into F#.
let squares = List.map square ints
Lambdas
Since the mapping or filtering function gets applied to each element of the sequence, it has to be a function of one argument. Unfortunately, the map or filter function we want to apply might need more than one argument.
Imagine we want to use our incby function from the previous part in a mapping expression. We can’t do this directly, because the mapping function has to take one argument, but incby takes two arguments.
Fortunately, we can get around this using lambdas. Again, the F# syntax for lambdas is slightly different from C#, but it’s still easy to understand. Here’s how we can use incby to map each element to its successor.
var successors = ints.Select(x => IncBy(1, x));
let successors = List.map (fun x -> incby 1 x) ints
Lambdas are nice, but they could be nicer
This is okay, but it’s starting to look a little messy. Lambdas require us to declare parameters on the left, then cite those parameters on the right, and the actual meat of the mapping has to compete with the noise. Look at the two versions again, and try to pick out the important part of each expression from the lambda plumbing wrapped around it:
var successors = ints.Select(x => IncBy(1, x));
let successors = List.map (fun x -> incby 1 x) ints
In both cases, the meat of the expression is incby and 1. If we could write just those bits, it would save us having to write the lambda boilerplate.
Replacing lambdas with partially applied functions
The reason we had to drag in lambdas was that List.map requires a function of one argument to apply to each element of the sequence, but incby is a function of two arguments.
Well, what do we get if we just write incby 1 in F#, leaving out the second argument? We saw that in the previous post: we get a function of one argument. And that’s exactly what List.map needs!
> let successors = List.map (incby 1) ints;;
val successors : int list = [2; 3; 4]
By partially applying incby, we have got the mapping function we need, without typing all that lambda boilerplate.
Obviously we can apply the same technique in a filter:
let divisibleby n x = (x % n = 0)
let evens = List.filter (divisibleby 2) ints
And so on for all the LINQ-type operators.
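A couple more sketches in the same vein: F# operators such as + are themselves just two-argument functions, so they can be partially applied too, and the same trick works with other list functions such as List.exists.

let bumped = List.map ((+) 1) ints                // (+) 1 is 'add one', same result as incby 1
let anyEvens = List.exists (divisibleby 2) ints   // true if any element is even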
Pause for breath
At this stage you may be feeling lost in all the ‘functions returning functions which then get passed to other functions’ craziness.
Don’t panic. Let’s take a look at the expression we just wrote:
List.map (incby 1) ints
Think about how you’d read and understand this if you’d never even heard of partial application. I’d read it something like “map ‘increment by one’ over ints.” Conversely, if I decided I needed to map an ‘increment by one’ function over ints, the expression above is a much more direct translation of that into code than the lambda version. incby 1 is as direct a translation of the description ‘increment by one’ as you could hope for, even if it doesn’t entirely register that incby is being given the ‘wrong’ number of arguments or that it’s returning a function.
So somehow, despite all the craziness going on under the surface, what we see at the point of use is simpler code, code that is a more natural translation of a problem statement, code that is easier to understand just by reading it.
So don’t let the head-spinning abstractions of the implementation confuse you. When you come to use it, partial application actually turns out to be pretty intuitive. In some cases, the usage can be so intuitive you may not even realise it’s happening.
Composing expressions…
C#’s extension methods provide a really nice way of composing LINQ expressions.
ints.Where(x => IsDivisibleBy(3, x))
    .Select(x => IncBy(1, x));
The code flow reflects the flow of data through the operators – first through the Where, then through the Select. If you write the same thing in normal function(arguments) style, it would seem inside out:
List.map (incby 1) (List.filter (divisibleby 3) ints)
You have to read this from right to left to get the clarity of the C# version. To deal with this, F# provides the pipeline operator, which just allows you to put the argument before the function:
3 |> square // equivalent to square 3
With pipeline, you can chain expressions together in ‘flow of data’ order just like C# extension methods:
ints |> List.filter (divisibleby 3)
     |> List.map (incby 1)
Now it’s nice and clear and readable again.
…is another example of partial application
Does your spidey sense tingle when you look at the List.filter and List.map calls in the pipeline example? I hope it doesn’t. I hope you just think, “Yes, I see what’s going on there, you’re passing ints through a filter and then a map, duh.”
But take another look. List.filter and List.map are functions that take two arguments, the filter/map function and the list to apply it to. But in the pipeline, they appear with only one argument, the filter/map function.
Now does your spidey sense tingle?
If you know a bit about how LINQ works, you may think the compiler is transforming the pipeline syntax into the normal function(arguments) syntax, the same way the C# compiler transforms the extension method syntax into the normal function(arguments) syntax. But actually the pipeline operator can be defined as a normal infix operator, like + or *:
let (|>) x f = f x
(The brackets around the name just mean ‘infix.’)
For example, if you write 3 |> square, then x is 3 and f is square. So, logically, if you write ints |> List.map (incby 1), then x is ints, and f is List.map (incby 1). So the pipeline is partially applying List.map to (incby 1), giving rise to a new anonymous function, and then applying that anonymous function to ints.
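Written out step by step, the expansion is just a mechanical rewrite (a sketch):

ints |> List.map (incby 1)
// = (List.map (incby 1)) ints    // substituting into:  let (|>) x f = f x
// = List.map (incby 1) ints      // function application groups to the left anyway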
If you were to think about this in detail, you would probably lose track of it in short order. I know I would. We’re partially applying a function to a function (itself the result of partial application!) in order to get another function, in order to apply that function to some arguments. Did I miss any? I honestly can’t tell.
But as you’ve seen, the syntax is natural and readable enough that you may not even have noticed it until I pointed it out.
ints |> List.filter (divisibleby 3)
     |> List.map (incby 1)
“Take ints, filter for ‘divisible by three,’ and map ‘increment by one’ over it.” No teetering mental stacks required.
A simplifying technique
In this part, we’ve seen how partial application simplifies using higher-order functions like List.map. Higher-order functions are pervasive in F#, just as they are in LINQ, and being able to use partial application syntax instead of lambdas is extremely convenient.
In fact, we’ve seen that partial application makes it possible to create simplifying idioms like the pipeline (|>) operator entirely in user code, without having to wait for compiler support as we did for C# extension methods. If we didn’t have partial application, List.map and List.filter would have been a bit noisy, but pipelines would be unusable!
Most important, I hope you’ve seen that you don’t need to hold together a teetering mental stack of higher-order functions in order to make use of partial application. A lot of the time, it will be an almost transparent shorthand for lambda notation; occasionally, as in the pipeline example, it will be so transparent that you hardly realise it’s there.
In the next part, I’ll show you how partial application can be handy even when you’re not working with higher-order functions.
June 06, 2011
Partial function application in F#, part 1: the basics
Partial function application is an important technique in functional languages like F#, but it tends to get blank stares when I try to explain it to C# and Java programmers. I don’t know if I’ll have any more luck putting it in writing, but here goes all the same.
First, the how
When you first see it, partial application seems like a weird and confusing quirk of the language. It isn’t until you see it in action that it makes sense. However, before I can show you it in action, I need to briefly show you how to do it.
Here’s a function of two arguments in F#:
> let incby x y = x + y;;
val incby : int -> int -> int
We can apply this to two values, and everything works just like you’d expect:
> incby 2 3;;
val it : int = 5
But, rather unexpectedly, we can also apply it to one value, and instead of getting a compiler error, we get something back:
> incby 2;;
val it : (int -> int) = <fun:it@6>
That response looks a bit like line noise, but it’s easy to understand by comparing it to the response in the ‘normal’ case. val it : int = 5 means the result was a value which F# Interactive has called it, of type int and value 5. So val it : (int -> int) = <fun:it@6> should mean that the result was a value which F# Interactive has again called it, of type (int -> int) and value <fun:it@6>.
What is type (int -> int)? The arrow is F#’s notation for ‘function from a to b.’ So int -> int is a function that takes an integer and returns an integer, like Func<int, int> in C#. So the response implies that incby 2 returned a function which we can apply to an integer:
> it 3;;
val it : int = 5
And indeed it did!
By not passing enough arguments, we have partially applied the incby function, and got another function back.
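One more small sketch before we move on (output shown approximately): the function we got back is an ordinary value, so we can bind it to a name and reuse it as often as we like.

> let inc2 = incby 2;;
val inc2 : (int -> int)

> inc2 10;;
val it : int = 12

> inc2 40;;
val it : int = 42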
Let’s see that again in C#
Functions that return functions tend to make people’s heads spin, so let’s drop back to a more familiar language and take another look. Here’s the incby function in C#:
public int IncBy(int x, int y) { return x + y; }
Obviously, IncBy takes two arguments:
IncBy(2, 3);
// returns 5
IncBy(2);
// compiler error
But we can imagine overloading IncBy with a single-argument version.
public Func<int, int> IncBy(int x) { return y => IncBy(x, y); }
You can imagine how we could use this overload:
IncBy(2, 3);
// calls 2-argument overload, returns 5
IncBy(2);
// calls 1-argument overload, returns Func<int, int>
IncBy(2)(3);
// calls 1-argument overload, which returns a Func<int, int>
// that Func then calls 2-argument overload, which returns 5
(You may still not be able to imagine why we would use this overload, but I’ll get to that, I promise.)
Let’s walk through the last line there, to try to make it clear what’s going on when we use the 1-argument overload.
When we call IncBy(2), it calls the 1-argument overload with an argument of 2. This substitutes 2 for x in the lambda, and returns the resulting lambda, which is now y => IncBy(2, y). We now pass 3 into that lambda. This substitutes 3 for y in the lambda, leaving us with a call to IncBy(2, 3). This is the 2-argument overload, so all the function madness ends here and we are back to the sane and simple world of adding two integers and getting an integer back.
Partial application is like the crazy overload
The two forms of IncBy in C# correspond to ‘full’ and partial application in F#. The big difference is that in C# we had to create the 1-argument overload explicitly, whereas in F# we didn’t: F# saw that we were trying to use the 1-argument overload, and automatically created it for us.
You can see that if we were overloading a lot of functions in this way, the F# automatic ‘overloads’ would save us a lot of repetitive boilerplate code. But that just invites the question, why on earth would you need these crazy function-returning overloads? Why does F# think they’re important enough that the compiler should go to the trouble of detecting them and working them out for you? Now that we’ve seen a bit of the how and the what, the next part will take a step back and try to show you a bit of the motivation.
March 18, 2011
F# party at the old Trask place
I’ve been posting a bunch of stuff about F# over at my employer’s site, so for those who are interested, here’s the links:
First class functions in F#, parts 1, 2, 3, 4 and of course the oh-yes-I-guess-I-should-have-mentioned part 0
Functions versus member methods in F#
February 23, 2011
Wellington .NET user group – CLR fundamentals
Thanks to everyone who came along to the CLR Fundamentals talk this evening – and sorry for overrunning so much! Despite going way over time I had to leave out a great deal of stuff. I don’t have slides to summarise it, but here are my notes on the areas I wanted to cover.
You can find more about these topics (and especially the ones I didn’t have time to talk about!) in:
- Jeffrey Richter, CLR Via C#
- Jim Miller and Susann Ragsdale, The Annotated CLI Standard (the un-annotated one is available online, and is much more recent… but cannot be used to squash spiders)
- Mario Hewardt, Advanced .NET Debugging
- Eric Lippert’s blog
- The BCL and CLR team blogs
Drop me a line if you want pointers on any particular area I mentioned.
One gentleman asked about ways of detecting which methods had been JITted as a way of identifying potential dead code. I forgot to mention it at the time, but the NCover utility not only does this, but even shows unexecuted code down to the line level.
Those of you who were interested in the JITter might be interested to look at the CLR Code Generation blog – it’s pretty quiet but there’s some useful info in the archives.
Finally, the diagrams I briefly showed came from this article – old, but still a good guide to the internals of CLR types and objects.
December 13, 2010
F# computation expressions for beginners, part 2: keeping it simple
When I started trying to get my head around F# computation expressions, I went looking for nice simple, basic examples. Unfortunately, even the simplest examples I could find were rather intimidating. Here’s my attempt to strip it back to the very basics.
Use case
The example I’m going to use is based on the null-safe member example from the previous instalment. However, F# tends to eschew both nulls and members in favour of options and functions, so instead of dealing with members that might be null, I’m going to deal with functions that return options.
So my computation expression will work just like normal F# control flow, except that if a function call returns None instead of Some, I’ll treat that as a failure and bail out on the rest of the expression.
Here’s an example of the kind of function I want to be able to use:
let tryDecr x n =
    if x > n then Some (x - n) else None
And here’s an example of how I want to be able to use it:
let maybeDecr x = maybe {
    let! y = tryDecr x 10
    let! z = tryDecr y 30
    let! t = tryDecr z 50
    return t
}
The idea is that if the first tryDecr call fails, the maybe block will stop it going on to the second tryDecr, and so on. I don’t need to clutter the code up with explicit if expressions or pattern matching: the computation expression will magically abort out of the block (and return None) the first time any function returns None.
let! – all the letting, but with more excitement
Notice that inside the maybe block I’m using let! for assignment instead of the normal let. This is how my computation expression gets to interfere in the control flow. The normal let doesn’t give the custom flow logic a chance to get involved. Inside a computation expression, I have to use let! instead when I want my custom logic to run. That is, code inside a computation expression block still has to opt into whatever weird behaviour the computation expression defines.
Once we’ve seen how to build a computation expression, it will be clear that there are other reasons we need to keep the standard assignment behaviour around. I’ll try to come back to this.
A workflow builder for maybe
What is the custom flow logic for maybe expressions? Informally it looks like this:
If the expression at hand returns None, then the value of the whole computation expression will be None; don’t bother evaluating the rest of the expressions.
If the expression at hand returns Some a, then carry out the rest of the computation expression on the value a.
(That “on the value a” bit in the second paragraph is deliberately nebulous for now.)
We need to tighten up this informal specification into code, but before we can do that we need to know where that code is going to sit. The answer is in a builder class. The methods you implement on the class depend on what constructs you need in your computation expressions. For overriding control flow in a let!, we need to implement a method on the builder class called Bind.
The signature of Bind is defined by F# in terms of the kind of data we want to be able to use in let! expressions. Basically it takes a value representing the result of the expression at hand – the right hand side of the let! – and a function representing the rest of the computation expression, and it has to return a value representing the result of the computation expression as a whole.
Implementing the custom flow logic for maybe
In the maybe example, the right hand side of the let! has to evaluate to an option value. And the rest of the computation expression is going to operate on the content of that option value, if it has any. And the result of the maybe block as a whole must be an option, because we have to be able to return None if any of the contained expressions evaluate to None.
So our Bind method has to take an option value and a function from non-option to option, and return an option:
Bind : 'a option -> ('a -> 'b option) -> 'b option
With F# forcing this signature on us, and our informal specification at hand, writing the implementation becomes almost trivial:
type MaybeBuilder() =
    member this.Bind(p, rest) =
        match p with
        | None -> None
        | Some a -> rest a
What is this saying?
If p is None, then Bind doesn’t evaluate the rest function, and returns None.
If p is Some a, then Bind returns the rest function applied to a.
But when F# calls Bind, p is the result of the expression at hand – the right hand side of the let! – and rest is a function comprising the rest of the computation expression, and whatever Bind returns becomes the result of the computation expression as a whole. Calling rest on a is exactly “carrying out the rest of the computation expression on a.” So this seems like a pretty exact translation of our informal specification.
Getting a value out of a maybe block
So now we can write a sequence of let! operations, and our Bind implementation will take care of conditionally running the rest of the computation. But eventually that ‘rest’ is going to have to come up with a value.
Here’s the ‘how to use it’ sample again:
let maybeDecr x = maybe {
    let! y = tryDecr x 10
    let! z = tryDecr y 30
    let! t = tryDecr z 50
    return t
}
Obviously, we need to implement the return keyword. To do this, we need to add another method to our builder class, the Return method.
What does Return look like? Well, at some point we know that Return is going to be the ‘rest of the computation.’ That means it’s got to be compatible with the rest argument to Bind. Which means Return has to be a function from non-option to option. If the computation got as far as Return, then it means Bind didn’t abort the computation early, which means the computation was successful and we want to get the value out. How do we get a value in an option? Using Some, of course!
So Return pretty much writes itself:
type MaybeBuilder() =
    member this.Return(x) = Some x
A builder object
It turns out that Bind and Return are all we need for a simple conditional chaining strategy. So all we need to do now is define that ‘maybe’ we’ve been blithely flinging around and hook it up to our builder class.
This turns out to be surprisingly easy: just declare maybe to be an instance of the builder class!
let maybe = new MaybeBuilder()
‘maybe’ isn’t a keyword: it’s just a variable.
Putting it all together
Here’s our conditional flow engine in full:
type MaybeBuilder() =
    member this.Return(x) = Some x
    member this.Bind(p, rest) =
        match p with
        | None -> None
        | Some a -> rest a

let maybe = new MaybeBuilder()
I’ll add a bit of logging to the tryDecr function so we can see how often it gets called, then use it in a maybe block:
let tryDecr x n =
    printfn "Conditionally decrementing %A by %A" x n
    if x > n then Some (x - n) else None

let maybeDecr x = maybe {
    let! y = tryDecr x 10
    let! z = tryDecr y 30
    let! t = tryDecr z 50
    return t
}
And let’s see what happens:
> maybeDecr 100;;
Conditionally decrementing 100 by 10
Conditionally decrementing 90 by 30
Conditionally decrementing 60 by 50
val it : int option = Some 10
> maybeDecr 30;;
Conditionally decrementing 30 by 10
Conditionally decrementing 20 by 30
val it : int option = None
> maybeDecr 5;;
Conditionally decrementing 5 by 10
val it : int option = None
In the first test, tryDecr has been successful each time, so normal control flow has been sustained, and maybeDecr has produced the correct result.
In the second and third tests, tryDecr has failed on the second and first attempt respectively (calculating z and y respectively). But instead of going on to attempt the remaining calculations, the maybe block has returned None immediately – without us having to write explicit code to check for failure. The block allows us to write top-to-bottom code and let the ‘maybe’ object take care of the more complex flow that is actually required.
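It can help to see roughly what the compiler does with the block. Each let! becomes a call to Bind, with the rest of the block passed in as the ‘rest’ function, and return becomes a call to Return. This is only a sketch of the translation, with the details simplified:

let maybeDecrExpanded x =
    maybe.Bind(tryDecr x 10, (fun y ->
        maybe.Bind(tryDecr y 30, (fun z ->
            maybe.Bind(tryDecr z 50, (fun t ->
                maybe.Return t))))))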
Conclusion
The maybe block is about as simple as a computation expression can get. I’ve tried to strip it back to the absolute basics, so that it’s simple enough for me to get my head around. Nevertheless, it’s not a toy (although I have illustrated it with a toy example): it does real work which could be used to simplify real code.
With this basic example at hand, we have a foundation for building more sophisticated and useful workflows, and for exploring what’s actually going on when F# encounters a computation expression.
December 12, 2010
F# computation expressions for beginners, part 1: what’s the problem?
One of F#’s most intriguing and baffling features is computation expressions, also known as workflows. Computation expressions are intriguing because they override the normal F# control flow, allowing user code to decide when and if to proceed to the next step. And they’re baffling for a number of reasons, but the first is probably, ‘why the heck would I want to override the normal F# control flow?’
To answer that, let’s look at a couple of examples.
Null-safe member calls
If I had a dollar for every time someone asked for a null-safe member operator in C#, I’d have $18.50. The idea is that I should be able to write a long chain of member calls in C#, but if one of the intermediate calls returns null then the whole expression should evaluate to null instead of throwing an exception.
string ppname = p.Partner.Puppy.Name;
// but what if p or p.Partner or p.Partner.Puppy is null?
A few people have suggested that a new operator, typically written .? or ?., be added to C# to handle this situation.
string ppname = p?.Partner?.Puppy?.Name;
And maybe these people will get lucky in 2015 or whenever, but what if you need to write some code before 2015? You have to do a bunch of explicit control flow:
string ppname = null;
if (p != null)
{
    if (p.Partner != null)
    {
        if (p.Partner.Puppy != null)
        {
            ppname = p.Partner.Puppy.Name;
        }
    }
}
This works fine, but (a) is boring to write and (b) obscures the core logic (the partner’s puppy’s name) in the mess of if statements.
Asynchronous programming
How do you access a remote or slow resource in C# or Visual Basic, without blocking? Through the joy of callbacks, of course.
private static void GetWebStuff1()
{
    WebClient client = new WebClient();
    client.DownloadStringCompleted += OnDownloadStringCompleted;
    client.DownloadStringAsync(new Uri("http://www.google.co.nz/"));
}

private static void OnDownloadStringCompleted(object sender,
    DownloadStringCompletedEventArgs e)
{
    Console.WriteLine(e.Result);
}
As has been observed by everyone and his dog, the problem with this is that it splits the logical flow (download this page and write it to the console) across multiple methods, making it hard to understand and reason about.
As with the null-safe chain, it’s possible to address this at the language level, and in fact C# 5.0 will do exactly that via the async and await keywords:
private static async void GetWebStuff2()
{
    WebClient client = new WebClient();
    string response = await
        client.DownloadStringTaskAsync(new Uri("http://www.google.co.nz/"));
    Console.WriteLine(response);
}
This will be jolly nice when it arrives, but once again, it puts developers at the mercy of the language designers. Even if the C# team do decide to support your control flow scenario, you’ll probably have to wait a few years before that support arrives. By which time your project has long since failed and you are living in a shopping trolley and yelling at clouds.
Computation expressions
On the surface, these two examples seem entirely unrelated. But both of them involve wanting to write code in a simple, linear, top-to-bottom or left-to-right style, but being foiled by the fact that you can’t proceed naively from one statement to the next. In the first example, we want to proceed to the next member call only if the previous one returned non-null; in the second, we want to proceed to the next statement only when the async statement has finished doing its work.
F# computation expressions are an attempt to handle this at a library level instead of a language level. Roughly speaking, computation expressions allow you to control if and when – and indeed how – control passes from one statement to the next. (They actually do more than this, but this will do for now.)
And, unlike C#, you can create your own kinds of computation expression to handle control flow idioms that are particular to your own projects.
Which brings us to the second baffling thing about F# computation expressions, which is that they do my head in. (Technical term.) Computation expressions are a bit like LINQ. They’re dead easy and incredibly convenient to use. But implementing your own kind of computation expression is a whole other matter. So that’s what I’ll be trying to get my head around in this series.
December 02, 2010
Reactive Extensions - Wellington .NET user group
Thanks to everyone who came along to undergo the "reactive functional brain inversion process." The dizziness you are now experiencing is normal and should fade over the next eleven to twelve months. In the meantime, here are the bits.
Introduction to the Reactive Extensions - slides
(The zip file of the demos includes the Rx binaries but I've found that you will probably need to either install Rx or fix up the reference paths. The original copy of the WQL demo, by Bart de Smet, is here.)
The Reactive Extensions home page includes several useful resource links. I'd particularly draw people's attention to Jeffrey van Gogh's blog which has been working through a detailed example of using Rx for asynchronous file reading, and don't forget to check out Bart de Smet's PDC presentation.