(A quick reminder to update your feed readers to point at https://lorgonblog.wordpress.com/feed/ for my new blog home.)
One of the best features of Visual Studio is Intellisense, the auto-completion that suggests legal completions when you “dot into” an object, and brings up a list of identifier completions when you press a keyboard shortcut like Ctrl-Space. This kind of auto-completion is one of the most addictive productivity features: once you’ve used it, you can’t imagine how you ever lived without it. Nowadays I think nearly every IDE supports this feature in one way or another.
The Intellisense for F# that we shipped as part of VS2010 is pretty great; I use (read: “depend on”) it all the time. Nevertheless, there are a number of corner cases where the completion lists fall down, and pressing “dot” yields incomplete or incorrect information. On the F# product team, we refer to these cases as “bugs”. :)
I’ve recently been spending time fixing these Intellisense bugs. The associated product code is the F# Language Service; a “language service” in Visual Studio is the code that provides all the logic for editor support of a given programming language: features like syntax coloring, Intellisense, “Go To Definition”, etc.
So I start looking at these bugs and spelunking through the language service, and guess what – most of it is code I have either (1) never looked at or (2) completely forgotten. Before I can fix anything, I’ll need to understand how it all works!
Fortunately, much of this code was originally authored by my esteemed colleague Jomo Fisher. Among Jomo’s many admirable developer qualities: he is a TDD-er and avid unit tester. I had started doing a little TDD and unit testing on my own before joining the F# team, but then I joined F# and saw the foundation Jomo had laid for unit-testing the F# language service, and I started to really learn about unit testing. The Visual Studio architecture is all about interfaces and services, which means you can mock out all of your VS dependencies and create tests against your VS language service that don’t require Visual Studio to be running. And that’s what Jomo had already established when I first joined the team – a bunch of NUnit tests against the F# language service he was authoring.
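The pattern Jomo established can be sketched roughly like this (a Python sketch purely for illustration; the names here are hypothetical stand-ins, not the real Visual Studio or F# language service interfaces). The key idea is that when a component talks only to interfaces, a unit test can hand it an in-memory fake instead of the real host application:

```python
class MockTextBuffer:
    """A hypothetical stand-in for the editor's text-buffer service."""
    def __init__(self, lines):
        self._lines = lines

    def line_count(self):
        return len(self._lines)

    def line_text(self, i):
        return self._lines[i]

def count_nonblank_lines(buffer):
    """A tiny piece of 'language service' logic that depends only on
    the buffer interface, never on a running editor."""
    return sum(
        1 for i in range(buffer.line_count())
        if buffer.line_text(i).strip()
    )

# The unit test exercises the logic against the mock -- no IDE required.
buf = MockTextBuffer(["let x = 1", "", 'printfn "hi"'])
assert count_nonblank_lines(buf) == 2
```

The real tests mock out much richer editor services, of course, but the shape is the same: construct fake dependencies, drive the language service logic, and assert on the results, all without Visual Studio running.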
Fast-forward two-and-a-half years, to earlier this week when I wanted to fix an Intellisense bug. Step one, of course, is to author the tests. I had a few repros in hand of places where Intellisense fails (uncovered by some reliability tests written by Jack (a tester on F#) and inspired by Dmitry (another developer on F#)), so I authored a few small test cases to put the bugs under test. I’m now one-third of the way through the TDD mantra “red, green, refactor!” The next step is harder, though.
Now I need to try to fix the bug. Only I haven’t the faintest idea where to start, since I don’t know or recall any of this code. So I fire up the Visual Studio debugger and start stepping through the code. The debugger is a good tool for code understanding – when coupled with good unit tests, it is fantastic. Stepping through the code, setting interesting breakpoints, and inspecting call-stacks helped me understand how the various components and functions related to one another. Debugging individual unit tests (where each test is effectively a specification of a tiny feature) helped me identify which pieces of code were attached to which individual features. I probably spent about 4 hours meandering through perhaps some 7000 lines of code, learning how it fit together and locating likely places I would need to eventually make some fixes.
So I finally start making changes and try to make one of my newly-created red tests turn green. Make a change, run that test. Hurrah, it turns green! Now, run the other 600 language service tests. :) And of course, I’ve broken like 50 of those. Great, my unit tests are fulfilling one of their purposes – preventing me from regressing existing functionality. Once again I pop open the debugger and step through a test I just broke, to understand how what-I-just-changed interacted with this old test. Here, a good debugger really shines. I see execution going past some code I just changed, and because of my change I think it’s now going into this “else” branch, whereas previously I suspect it went down the “if” branch. Well, I could back out my change, re-compile, re-debug, and see if it does take the other branch and if that’s the key difference between red and green for this test. But hey, I’m in the Visual Studio debugger, so I can poke at the live program state however I like. So once I reach the “else”, I drag the little yellow arrow (the ‘next statement’ icon in the margin) up into the “if” to “Set Next Statement” as though the if condition were true rather than false, and then press F5 to keep going. And sure enough, now the test is green again. So I’ve quickly verified my hypothesis about “what changed” inside the debugger, without having to recompile the program. Alternatively, I could poke new values into variables in the “locals” window as another way to test hypotheses without actually changing and recompiling the program. It takes a few minutes to recompile and re-install these components, so it really is time-saving (as well as keeping you engaged/focused) to do this hypothesis-testing quickly inside the debugger without needing a recompile.
Ok, so now I think I grok how my change interacted with and broke existing tests, so I’m ready to try a different fix. I make a change, recompile, and while it’s compiling I speculate about which tests I expect to turn green and which I think will stay red. Re-run some tests and continue to hone my understanding. The test I’m looking at, incidentally, is when you type

[1..System.
you don’t get Intellisense, even though e.g.

[1..System.Int32.MaxValue]
is a legal eventual completion. Debugging… ok, I’ve definitely found a bug in a function called “QuickParse”, which parses the current line of text where you pressed dot and tries to find The.Qualified.Identifier that appears just to the left of the dot you’re pressing. It would typically return a list of dot-separated identifiers, like [“The”;”Qualified”;”Identifier”], but in the case above, it has something like [“1”;””;”System”]. QuickParse doesn’t know about the range operator (..) and so it thinks all those bits are part of a qualified identifier, with e.g. the empty identifier “” as the “namespace between the two dots” of the range operator, rather than understanding “System” as being the start of the current identifier. Indeed, if you do
[1 .. System.
with a space after the range operator, then you do get Intellisense again, since QuickParse handles that case correctly. Ok, so I can fix this. Eventually, after a number of iterations of this kind of work, I have my new tests turning green and only 4 existing tests red, and all of those involve Intellisense for Obsolete entities. I debug through those tests, find the special logic for Obsolete-handling, and just above that code I discover a comment that basically says “here we rely on the fact that we get an empty identifier in certain cases…” Aha! The “bug” in QuickParse is actually a “feature” used by this other code.
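To make the parsing behavior concrete, here is a toy model of the failure mode (a Python sketch, not the actual QuickParse code): walk left from the dot you pressed, collecting identifier characters and dots, then split on “.” to get the qualified identifier. Such a scan has no idea that “..” is the range operator, so it swallows both dots:

```python
def quick_parse(line_to_left_of_dot):
    """Naive sketch: scan backwards from the cursor while we see
    identifier characters or dots, then split the collected text on
    '.' to get the dot-separated identifier parts."""
    i = len(line_to_left_of_dot)
    while i > 0 and (line_to_left_of_dot[i - 1].isalnum()
                     or line_to_left_of_dot[i - 1] in "_."):
        i -= 1
    return line_to_left_of_dot[i:].split(".")

# The normal case: a dotted path yields its parts.
assert quick_parse("The.Qualified.Identifier") == ["The", "Qualified", "Identifier"]

# The buggy case: the '..' range operator reads as an empty
# identifier between two dots, just as described above.
assert quick_parse("[1..System") == ["1", "", "System"]

# With spaces around '..', the scan stops at the space, so only
# "System" is (correctly) collected.
assert quick_parse("[1 .. System") == ["System"]
```

This is only a model of the bug’s shape; the real fix has to preserve that empty-identifier behavior where the Obsolete-handling code depends on it.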
Now maybe if I had found that code/comment sooner, it would have saved me a couple hours of debugging. But it would have cost me all the code understanding I gained during the debugging session. I have a number of Intellisense reliability issues to fix, as well as some new feature work in the area ahead of me, so it is very much worthwhile for me to get a deep understanding of how everything here works, so that I have this all “cached into my brain” for my next few weeks of work.
Anyway, that “feature” of QuickParse and the nearby code is rather subtle and ugly to my eye, so now it’s time to…
In the course of my investigation, I’ve already found a few cleanups that don’t break any tests, and so I can move the code in the general direction I want it to go, even if I’m not all done fixing things yet. Our suite of unit tests gives me the confidence to make changes to this “code I’d never seen prior to a couple days ago”. (My own manual testing inside Visual Studio, getting a code review approved, and having the QA group poking at the product, all also inspire more confidence that regressions will be discovered, but the unit tests provide the most immediate feedback that things are ok.)
I don’t yet have a tidy end to this fixing-an-Intellisense-bug story, as I’ve now brought you up to date (I understand the issue, but haven’t yet fixed the bug and gotten all the tests green). But that’s ok, because this particular narrative is just a means to an end, a way to describe how unit tests and debugging lead to understanding of code. So let me wrap up.
When you have good unit tests, everything is better; if you’re already a TDD-er, then saying that is just preaching to the choir. And when you have a good debugger, using that tool is a great way to learn about an unfamiliar code base (stepping through code, setting breakpoints, looking at stacks, changing values of locals or the next statement to see how things react). Put those two together, and the effect is even more powerful. Unit tests + debugger = code understanding!