Inside F#

Brian's thoughts on F# and .NET

Monadic parser combinators, part seven

Posted by Brian on December 12, 2007

Today’s blog entry will be a little scattered, as there’s a handful of small topics I want to write about.  I was looking for an excuse to download and try out the trial version of VSTS, so of course I decided that monadic parser combinators were a good reason to try out the VSTS features for unit testing, code coverage, and performance profiling. 

Debugging monadic parser combinators

To start today off, I want to discuss what it’s like to use the debugger on the monadic parser combinator code.  Among the most useful features a debugger provides are the abilities to set breakpoints, see call stacks, and inspect the values of local variables.  All three of these features are somewhat hindered by the monadic style of writing parsers.  Let’s talk about each problem and possible mitigations.


Breakpoints

The only thing problematic about breakpoints is how they interact with lambdas whose source code is formatted on a single line.  When you write code like this:

s => f(s)

and press F9 on that line of code to set a breakpoint in VS, the breakpoint is set at the lambda creation point.  If instead you want to break when the function f is about to be called, it seems like either you have to be single-stepping through the code and hit F9 when you get inside the body of the lambda (which sets a breakpoint on the same "line", but with only the body of the lambda highlighted, rather than the whole lambda expression), or you must break up the code so that the body is a statement on a separate line like so:

s =>
{
    return f(s);
}

in which case you can press F9 on the first line to break at the lambda-creation point, or set a breakpoint on the return statement to break inside the body of the lambda.  Perhaps there is a way to tell VS that you want to set a breakpoint inside the lambda body when the whole thing is on one line, but if there is, I haven’t discovered it yet.  (I think there are similar issues arising from use of the ?: operator, and as we shall see, "line-oriented tools" are a little problematic when it comes to code coverage, too.)  To sum up, you may have to exert a little more effort/care to get breakpoints where you want them when using lambdas.

Call stacks

An unfortunate consequence of the monadic style of programming parsers is that practically every call stack during the course of the program is an unreadable pile of calls to Then(), Or(), and the lambdas nested therein.  Whereas in more typical styles of programming, a glance at a call stack provides immediate insight into "where you are" in the course of the computation, in the case of monadic parsers there’s no such luck.  I haven’t come up with any good way to deal with this issue; call stacks are just not very useful for debugging monadic parsers.

Local variables 

Things here start off dismally, but there is potential for salvation.  The representation type we have chosen for parsers (P<T>) has been a delegate type, and delegates are effectively opaque.  That is to say, when you inspect a local parser variable p in the debugger, it has no useful members that suggest what p actually is (e.g. is this a parser for expressions, for natural numbers, or for what?).  This is straightforward to overcome, however; we can alter the representation type like so:

public delegate Consumed<T, Token> PFunc<T, Token>(ParserState<Token> input);
public class P<T, Token>
{
    public readonly string Name;
    readonly PFunc<T, Token> func;
    public Consumed<T, Token> Run(ParserState<Token> input) { return this.func(input); }
    public P(PFunc<T, Token> f, string name) { this.func = f; this.Name = name; }
    public override string ToString() { return this.Name; }
}

Representing parsers as a class type, rather than a delegate type, allows us to decorate parsers with extra metadata that provides informative descriptions about parser objects in the debugger.  The Tag() function, which is used to provide useful runtime error messages for failed parses, can be re-leveraged to set high-level names for parsers.  The other core functions can be written to also supply useful names, as suggested by one example here:

public static P<T, Token> Or<T, Token>(this P<T, Token> p1, P<T, Token> p2)
{
    return new P<T, Token>(input => { /* usual logic */ },
                           string.Format("'{0}'.Or('{1}')", p1.Name, p2.Name));
}

so that in the end, code like this:

var p3 = nat.Or(paren);

yields a value for p3 that shows up as:

'natural number'.Or('parenthesized expression')

inside the debugger when inspecting the value of p3.  This drastically improves the debugging experience, as parsers are no longer completely opaque – you can now look at a given parser object and know what it should parse.
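Relatedly, if you would rather not override ToString() (which also affects non-debugger uses of the type), .NET’s DebuggerDisplay attribute achieves the same effect: it tells the Visual Studio Locals/Watch windows which expression to display for instances of a type.  A minimal sketch, using a simplified, non-generic stand-in for P<T,Token>:

```csharp
using System.Diagnostics;

// [DebuggerDisplay] controls what the Locals/Watch windows show for a value,
// without changing ToString() for everything else.  NamedParser is a
// simplified stand-in for P<T,Token> here.
[DebuggerDisplay("{Name}")]
public class NamedParser
{
    public readonly string Name;
    public NamedParser(string name) { this.Name = name; }
}
```

With this in place, a watch on a NamedParser shows the Name string directly, rather than the type name.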

Other pragmatics

I was talking to my friend Jeff about parser combinators, and whining about the poor debugging experience and how there is no context for "where you currently are in the parse", and he offered a great suggestion – keep the entire history as you’re parsing, and display where you currently are.  I chewed on this a little and came up with an effective history representation, as well as a (barely) adequate visualization of the history.  Here’s a picture of what it looks like to parse an expression that starts with "(611-2^3":


The idea is to have a tree view: each time we have a Tag()-ed parser, we start a new level of nesting, and each Then() or Or() adds a new child to the current level.  Since it’s a tree view, you can expand/collapse parts so as to drill down into or sum up the history of the parse as you like.  I don’t know enough about WinForms (yet) to make this pretty, but rather than the "starting at (line,col) ending at (line,col)" text, you could imagine a much nicer visualization that highlights the input in context: it could show the input string, and when you mouse over a successful portion of the parse, highlight that portion of the input: "(611-2^3…".  Anyway, there is a lot of potential here.  The representation is easy: you just add a "history" field to the ParserState<Token> type, and have the core combinators update the history as they go (being especially careful in Or() to "backtrack" on the input string, but thread the history forward, so failures are not forgotten).  Of course you’d want to add all this extra stuff to the library only in DEBUG mode, so that you’re not lugging around the entire history in a release build.  I’m not sharing the code for all this, but it is straightforward to implement.
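The history representation can be sketched as a small tree type.  Everything below is illustrative, including all the names, since the actual history code isn’t shared in this post:

```csharp
using System.Collections.Generic;

// Illustrative sketch only -- not the posted code.
// Each Tag()-ed parser opens a new nesting level; each Then()/Or() adds a
// child to the current level.
public class HistoryNode
{
    public string Label;                     // e.g. a Tag() name, or "Then"/"Or"
    public bool? Succeeded;                  // null while that sub-parse is still running
    public readonly List<HistoryNode> Children = new List<HistoryNode>();

    public HistoryNode AddChild(string label)
    {
        var child = new HistoryNode { Label = label };
        this.Children.Add(child);
        return child;
    }
}
```

A DEBUG-only field on ParserState<Token> would point at the current node; Or() would restore the input position when backtracking but deliberately leave the history alone, so failed alternatives remain visible in the tree.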

(This is also a good opportunity to take an aside to point out the utility of DebuggerVisualizers in Visual Studio.  If you haven’t even seen or used them, you absolutely must check them out.  They are a tremendously useful tool for debugging; I authored a couple of useful ones back when I was a developer on the WCF team which I’ll have to dig up and post in a separate blog entry.  Additionally, for PM-types, DebuggerVisualizers can effectively be used to provide a kind of "live" algorithm animation, which can be a great alternative to powerpoint slides for presenting some new code-oriented topic.)

A final pragmatic note about debugging these parsers – putting a single breakpoint in the SatShow() function is a great way to keep track of the progress of the parse.  In the most recent bit of code I posted, things were factored so that single function was the only place in the entire library that actually consumed a token of input and advanced down the token stream, making it a great location for a breakpoint to keep track of the running progress of the computation.

Unit testing and code coverage

I am a fan of test-driven development, as you may already know if you have read this previous blog entry of mine on the topic.  I saw that VSTS has a unit test framework and tools for automatically creating tests, as well as ways to measure code coverage and complexity, so of course I had to try it out.  Though I haven’t used these VSTS features enough yet to form a strong opinion about them, I am very glad to see a tool "in the box" that enables you to very easily instrument your code for coverage, run a set of unit tests, and then bring up data about coverage, either in a class/method table breakdown (showing percentage numbers), or by showing the code with ‘hit’ blocks highlighted in green and ‘miss’ blocks highlighted in red.  I love the red-and-green code coverage highlighting, as it makes it easy to quickly see if your unit tests are at least covering all the various branches of the code you are writing.

One interesting thing to note about coverage as it relates to monadic parser combinators is that block coverage data alone is not very useful.  Since parsers are functions, the combinators like Then and Or end up getting all the coverage that describes different paths through a parse.  For example, if part of your parser has "nat.Or(paren)", that line of code will get covered, but that just tells you that such a parser was constructed.  When the resultant parser eventually runs and makes a choice between parsing a natural number or a parenthesized expression, the corresponding if-then-else branch is the one inside the lambda inside the definition of the Or combinator.  And there is no easy way to tell one instance of a call to Or from another.  Whereas a traditionally structured parser may have two unique blocks in the source code that represent whether a portion of an expression was parsed as a natural number or a parenthesized expression (e.g. so there was a line of code you could put a breakpoint on to determine which alternative was taken at this choice point of the grammar you are parsing), there is no corresponding unique line of code in a monadic parser.  Instead, our expression parser is built using a big block of straight-line code, and the result is a giant lambda that repeatedly calls back into functions like Or() to make all of the decisions that control the algorithm.  As a result, you would need more discriminating coverage tools (ones that consider callers or the original constructors of lambdas) to discern the coverage of all the different choice points of a language grammar you are parsing.  And you’d need some kind of conditional breakpoint to break, for example, just on the Or() that chooses between "nat" and "paren" (but not on every other Or).  I haven’t thought about this much yet, but it’s very much a similar kind of issue as the call-stack problem I described earlier in this blog entry, and I wonder if there’s some clever solution to both.  It deserves more thought.

The take-home moral of the previous paragraph is to be wary of the coverage numbers.  The coverage metrics I got showed nearly 100% block coverage for the handful of examples I do in the Main() algorithm, but there are undoubtedly still many uncovered "branches" which would have shown up as un-covered code blocks in a traditionally structured parser.

One final note about the code coverage highlighting that VSTS provides is that it is "per line", which means that you need to be careful about separating lambda bodies out onto separate lines of code if you want to be able to see the lambda body coverage distinct from the coverage of the code creating the lambda.


Performance profiling

VSTS has tools for performance profiling, so of course I put them to use by profiling the code I posted last time.  Good performance analysis requires more effort than I was willing to put forth this morning, so here are a bunch of quick caveats regarding the data I’m sharing.  I just used the main program from the previously posted code as the code to profile.  It’s not necessarily a very representative sample: it runs on very short inputs, the whole program is pretty short and runs in about 1.5 seconds on my box, and it doesn’t appear to be CPU-bound.  But despite all that, I did a profile run to check out "where the time was spent" and see if there were any hot spots to identify.

Two things stood out.  First is that the top five "application exclusive" time users were all calls to delegate constructors (e.g. constructors of P<T,Token> or Func<X,Y,Z> objects).  This is, of course, partly because there are just so many lambdas being constructed, but it also makes me wonder about the performance of creating lambdas in general.  I haven’t tried rewriting "hot" lambdas as classes-with-invoke-methods that explicitly capture the environment in their constructors, to see if the perf changes at all, but it might be interesting to try.  (I doubt it would have an effect, though, as I imagine the C# compiler ends up emitting similar CIL code in either case.)  The second thing that stood out was how many objects were allocated.  There were about 140,000 calls to the object constructor in that run, and I knew those numbers were high as a result of using pure functional data structures (that is, data structures without any side effects).  So the obvious thing to do (which I wanted an excuse to do anyway) was to rewrite the core of the library to use effects and see if and how this affected perf.
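For reference, the rewrite in question looks roughly like this: the C# compiler already lowers a capturing lambda into a generated class whose fields hold the captured environment, so a hand-written version (illustrative names) is essentially what the compiler emits anyway:

```csharp
// The capturing lambda:
//     int n = 10;
//     Func<int, int> add = x => x + n;
// is lowered by the C# compiler into roughly this hand-written form:
class AddClosure
{
    public int n;                                 // the captured variable becomes a field
    public int Invoke(int x) { return x + n; }    // the lambda body becomes a method
}
```

So, for example, `new AddClosure { n = 10 }.Invoke(5)` yields 15, just as the lambda would, which is why hand-rewriting "hot" lambdas this way is unlikely to change the emitted CIL much.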

Rewriting the library core to use effects under the hood was surprisingly easy.  In addition to using mutable variables, I also changed some of the data structures away from IEnumerable<T>s to more concrete types.  Here is an example of a typical change:

// old, "pure" code
public ErrorInfo SetExpectation(string label)
{
    if (string.IsNullOrEmpty(label))
        return new ErrorInfo(this.Position, this.History, Enumerable.Empty<string>(), this.Message);
    return new ErrorInfo(this.Position, this.History, Parser.Cons(label, Enumerable.Empty<string>()), this.Message);
}
// new imperative code
public void SetExpectation(string label)
{
    // expectations is the List<string> field described below
    this.expectations.Clear();
    if (!string.IsNullOrEmpty(label))
        this.expectations.Add(label);
}
This is a method inside the ErrorInfo class that gets called by Tag() – it updates the list of expectations that correspond to error messages like "expected character ‘(‘ or natural number".  Whereas previously this function returned a new ErrorInfo object containing the new values, now it mutates the existing object, which now contains a List<string> to represent the expectations.

Of course, when you use effects, you need to be careful about aliasing.  If you mutate an object while some other object holds a reference to it, you are changing the view of the world for that other object, too.  Fortunately, inside the core of a monadically-structured parser library, you can impose simple, clear rules of object ownership.  The ParserState<Token> objects passed as input to each P<T,Token> delegate "belong" to the delegate, which means if you run p(input), then you are giving p permission to modify input.  The result of running a parser is a Consumed<T, Token> object which may contain references pointing back into the input.  Most of the parser is "single-threaded" in the sense that data has a single source and sink, and so the same named object is rarely used in two different expressions inside a function.  The main exception is Or(), which can pass the same input to both p1 and p2.  Since Or() only calls p2 when p1 doesn’t consume (which means p1 doesn’t mutate the input object), this is usually ok.  The troublesome case is Try(), which lies about whether it mutates the input.  As a result, Try() has to clone the input and restore it back to the previous value if it consumes and fails:

public static P<T, Token> Try<T, Token>(this P<T, Token> p)
{
    return input =>
    {
        ParserState<Token> copy = input.Clone();
        Consumed<T, Token> consumed = p(input);
        if (consumed.HasConsumedInput && !consumed.ParseResult.Succeeded)
        {
            input.ResetBackTo(copy);
            consumed.HasConsumedInput = false;
        }
        return consumed;
    };
}
Clone() and ResetBackTo() are new functions on the ParserState<Token> class that exist to enable Try() to backtrack to a previous state after a mutation.  You can see the full code at the end of this blog entry.
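Their shape is roughly the following sketch; the field name here is an assumption for illustration, and the full code linked at the end of this entry is authoritative:

```csharp
// Sketch only: the field name is an assumption, not the posted code.
public class ParserState<Token>
{
    public int Position;                 // current offset into the token stream

    public ParserState<Token> Clone()
    {
        // a shallow snapshot of the mutable state is enough for Try() to restore later
        return (ParserState<Token>)this.MemberwiseClone();
    }

    public void ResetBackTo(ParserState<Token> copy)
    {
        this.Position = copy.Position;   // undo any consumption since Clone()
    }
}
```

Note that a shallow copy suffices only because Try() restores the snapshot before anyone else observes the mutated state.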

Since these changes are all limited to the "core" of the library, we don’t need to make any changes to the rest of the library (e.g. combinators like Chainl1 and SepBy) or to client code (e.g. the expression parser).  The public interface and behavior remain the same.

Profiling this new program that used effects under the hood revealed a couple noteworthy things.  First, the number of allocated objects dropped dramatically, from about 140,000 objects in the original to about 34,000 in the new version.  Allocating fewer objects is almost always a good thing.  Furthermore, the "application exclusive" time still listed the same top five functions, which means these changes did not introduce new CPU hogs.  (I did note that some new functions like "List.Clear()" now appeared in the top 20 or so functions, but these newcomers were not significant contributors overall.)  In terms of wall-clock time, the program runs about 15% faster, dropping from about 1.4s to 1.2s for a release build on my box.  It’s hard to draw any firm conclusions from the meager perf analysis I did today.  But in the end, I was able to change the code so that it runs faster and allocates less, and I localized those changes to the core of the library without changing the main public interface.  So that makes me happy, as I think it’s suggestive of a general potential design and implementation approach: start things off effect-free, get a correct implementation with a nice interface, and then later, as necessary, go back and add side effects under the hood to make things more efficient.

One thing that might be interesting is to rewrite the same parsing program using a traditional recursive-descent structure, and see how it performs in comparison.  I doubt I’ll find the time and motivation to do that, though.

There’s clearly other potential low-hanging fruit to make some portions faster.  For example, representing the input as a character stream all the way through and peeling off individual characters one by one is pretty awful; buffering big blocks of characters under the hood and then using fast access to those buffers might make a big difference on large inputs.  But for the purposes of this blog, I am less concerned with specific points of practical performance, and more concerned with thinking about the big picture and ensuring that most such optimizations could be made without altering the overall structure and interface of the parsing library.
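The buffering idea would look something like this (an illustrative sketch, not part of the posted library):

```csharp
using System.IO;

// Illustrative sketch: block-buffered character access over a TextReader.
// One big Read() call per block replaces thousands of one-character reads.
public class BufferedCharSource
{
    readonly TextReader reader;
    readonly char[] buffer;
    int filled, next;

    public BufferedCharSource(TextReader reader) : this(reader, 4096) { }

    public BufferedCharSource(TextReader reader, int bufferSize)
    {
        this.reader = reader;
        this.buffer = new char[bufferSize];
    }

    // Returns the next character, or -1 at end of input.
    public int Read()
    {
        if (next == filled)
        {
            filled = reader.Read(buffer, 0, buffer.Length);  // refill in one large chunk
            next = 0;
            if (filled == 0)
                return -1;                                   // TextReader.Read returns 0 at EOF
        }
        return buffer[next++];
    }
}
```

The parser would then peel characters off this source rather than off the raw stream, amortizing the per-character cost over each block.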


Summary

Though today’s blog entry is a smattering of different topics, they are all related in that each discusses how monadic parser combinators differ from typical applications with respect to a specific concern.  I’ve discussed how the parsing library has an unusual appearance in the debugger, and how to get the debugging information you desire.  I’ve talked about how this code interacts with code coverage tools and needs special consideration to meaningfully understand the resultant coverage metrics.  And I’ve shown a little about how a pure functional program’s performance characteristics can be altered by introducing effects under the hood.  All of these topics demonstrate ways that this kind of code is relatively unique compared to typical applications and libraries you find in C#.

Full source code

You can find the code for the library-using-side-effects-under-the-hood here.

