Fit and User Interfaces
July 23, 2005
I've only been at the Agile 2005 conference for three hours and already I'm meeting lots of interesting people and having good conversations.
This evening, I found myself closing out the opening ice-breaker reception with David Chelimsky (author of the .NET version of FitNesse), Rick Mugridge (co-author of the Fit book), and Jeff Patton (agile-usability list moderator).
Somehow (ha!) the conversation turned to Fit. Not just Fit, actually, but Fit's place in user interface design.
It was a very interesting conversation, one that sparked dozens of ideas. I'm writing about them now, rather than getting ready for bed as I ought to be, because I'm afraid that if I don't put them onto paper now, I'll lose them. I won't attempt to summarize David's, Rick's, or Jeff's points of view... it's all I can do right now to keep my own thoughts straight.
One of our main points of discussion was about how Fit meshes with user interface definition. I took a contrary point of view. I said that I didn't think Fit was very good for testing UIs. Naturally, this sparked some surprise.
I have to admit--I'm prone to taking contrary points of view. But there's always a reason for, and exceptions to, my position. In this case, I'm not saying, "Fit should never be used for UIs." I'm saying, "In many cases, there are better ways than Fit of accomplishing the same thing."
Please let me add that I'm exploring ideas here, not laying down the Truth with a capital 'T'.
So, anyway, what does Fit give you? I think it gives you three things:
- A communication mechanism--my number one preferred use of Fit
- Tests
- Documentation
In the case of user interfaces, perhaps all three of these things can be achieved more effectively in ways other than Fit. Let's explore that idea:
1. Communication. Business experts have trouble visualizing software from abstract definitions (like Fit tests). If you are exploring UI concepts with a business expert, whiteboard sketches, screen mock-ups, and paper prototypes will communicate much better. And only actual "we're done with what you asked for" software will convey it all.
So... if you're communicating with business experts about UI, Fit may not be the best tool for the job.
Except... I would want to provide Fit examples for complex UI interactions, although I would wonder whether the complexity is a symptom of an underlying business rule. If so, I might create Fit examples for the underlying business rule rather than for the UI behavior.
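To make that concrete, here's the kind of example I have in mind. The details are invented for illustration: suppose the "complex UI interaction" turns out to be a volume-discount rule. I'd rather express that rule as a Fit column fixture than script the screens. The business expert reads and edits the table (in an actual Fit page, this would be an HTML table):

    CalculateDiscount
    quantity | unit price | total()
    10       | 5.00       | 50.00
    100      | 5.00       | 450.00

The fixture behind it is a few lines of Java. (The DiscountCalculator class and its ten-percent-discount-at-100-units rule are both made up for this sketch.)

    import fit.ColumnFixture;

    // Fixture: maps the table's columns onto the rule under test.
    public class CalculateDiscount extends ColumnFixture {
        public int quantity;      // "quantity" input column
        public double unitPrice;  // "unit price" input column

        public double total() {   // "total()" expected-value column
            return new DiscountCalculator().total(quantity, unitPrice);
        }
    }

    // Stand-in for the production code; invented for this sketch.
    class DiscountCalculator {
        double total(int quantity, double unitPrice) {
            double discount = (quantity >= 100) ? 0.10 : 0.0;
            return quantity * unitPrice * (1 - discount);
        }
    }

Notice that there's no UI anywhere in that example. That's exactly the point.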
2. Tests. Automated tests aren't so good at finding unknown problems. Exploratory testing is better. When it comes to proving something about how part of the system works, xUnit tests are easier to write and maintain than Fit tests. And for most parafunctional testing--like scalability, integration, and reliability--other tools may be easier to use than Fit.
So... I would prefer exploratory testing when looking for defects. To prove to myself that I was writing the correct software, I would use test-driven development and xUnit--then regularly review the completed screens with my business expert. And I would ask our team's testers to use their expertise and specialized tools to handle parafunctional testing.
Except... when a defect was found after everybody thought a story was done, I would ask how it got into the system. I would naturally write an xUnit test for it. But I would also ask whether there was some sort of communication breakdown--perhaps the UI was complex enough to warrant some Fit examples--and look to see whether using Fit would have prevented the defect in the first place.
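For the record, here's the sort of xUnit test I'd write for such a defect: a small JUnit test, using the invented DiscountCalculator from the sketch above, pinned to an equally invented bug. Say the discount failed to kick in at exactly 100 units.

    import junit.framework.TestCase;

    // Regression test for a made-up boundary defect: the volume
    // discount failed to apply at exactly 100 units.
    public class DiscountCalculatorTest extends TestCase {
        public void testDiscountAppliesAtExactlyOneHundredUnits() {
            DiscountCalculator calc = new DiscountCalculator();
            // 100 units at $5.00 each, with a 10% volume discount
            assertEquals(450.00, calc.total(100, 5.00), 0.001);
        }
    }

A test like that proves the fix and guards against regression. Whether there should also be a Fit example is a communication question, not a coverage question.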
3. Documentation. If you need to document what your software does, I can think of no better way than Fit. Even a few Fit examples sprinkled in with a lot of prose are better than prose alone. However, in the case of a simple UI, I would ask why it needed to be documented. Isn't the UI itself an excellent description of how it looks and behaves?
So... I may not document my UI. I may rely on the completed UI to act as its own documentation.
Except... there may be legitimate and powerful reasons to document the UI. It may be complicated, or there may be important history behind decisions, or there may need to be a technical reference for users. All of those factors would influence my decision about whether the UI should be documented.
(By the way, when the UI has yet to be built, I'd rather use direct collaboration than a paper document. When I say "documentation" here, I mean "documenting what we have already finished," not "describing what we have yet to build." See point #1.)
We talked about a few other points that I want to capture. It's getting late, so I'll summarize. One is that lately I've been thinking that automated tests have two dimensions--a 'unit test' vs. 'integration test' dimension and a 'programmer test' vs. 'customer test' dimension. So we can have "programmer integration tests" and "customer unit tests." More and more, I'm coming to believe that unit tests are cheaper to create and maintain than integration tests, and that we should turn integration tests into unit tests wherever possible. (Except... we should keep the minimum necessary to make sure the software is integrated.)
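Here's a sketch of what I mean by that last bit, with invented details: suppose a shipping rule is currently verified by an end-to-end test that drives the checkout screen against a real database. Extract the rule into a plain object, and the test becomes a cheap programmer unit test.

    // ShippingRule.java -- the rule, extracted out of the checkout screen.
    // The class name and the $50 threshold are invented for illustration.
    public class ShippingRule {
        public boolean qualifiesForFreeShipping(double orderTotal) {
            return orderTotal >= 50.00;
        }
    }

    // ShippingRuleTest.java -- fast, no UI, no database.
    import junit.framework.TestCase;

    public class ShippingRuleTest extends TestCase {
        public void testFreeShippingStartsAtFiftyDollars() {
            ShippingRule rule = new ShippingRule();
            assertFalse(rule.qualifiesForFreeShipping(49.99));
            assertTrue(rule.qualifiesForFreeShipping(50.00));
        }
    }

A handful of true end-to-end tests would remain to prove everything is wired together; the rest of the verification moves down to this cheaper level.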
And the final point was a conversation about the right amount of testing. The question was, "Without Fit tests, how do you demonstrate to the customer that the software is tested?" My response was that I don't need to demonstrate that the software is tested--I need to demonstrate that it works properly. For that, I'll use external exploratory testing and end-user feedback to expose bugs. If these techniques find too many bugs, I'm not doing enough testing. If they don't, I'm doing the right amount, or even too much.
That leads to the question of "how many bugs are too many?" Assuming you're not generating design debt, that's a question of business tradeoffs, isn't it?
Thanks, David, Rick, and Jeff, for such a thought-provoking conversation. This kind of discussion is exactly why I came to this conference, and I'm thrilled to have had such a good one so soon.