A paper published in 2013 about Test Driven Development included the following diagram. Unfortunately, it gets some things wrong:
A tweet from Nat Pryce sparked discussion:
Grumpy request to academics: if you're going to publish ideas about how to improve TDD, get the original process right! pic.twitter.com/FaSU8CF6ol
— Nat Pryce (@natpryce) September 7, 2017
First, let me say I’m happy to see more studies on TDD. The thrust of this particular study is that TDD can be soft on negative tests. That is, maybe the code works for good data, but it’ll break on bad data.
TDD is a development discipline, so I’m all for learning more from traditional testing disciplines. I certainly don’t want to discourage folks from doing studies and writing papers.
But. Let’s first make sure we’re doing proper TDD, shall we? Otherwise any studies, especially studies about efficacy, may be flawed.
The center of this diagram is “Add a new Test Case” and “Execute all Test Cases.” Curiously, the diagram says that if all tests pass, you should try adding another test case.
But when you add a new test case in TDD, you expect it to fail. If a new test passes, surprise! That could mean one of several things: maybe the functionality already exists, maybe the test doesn’t exercise what you think it does, or maybe the test is broken in a way that keeps it from ever failing.
Actual process: Add new test case > Run tests > Test passes > OMG NO! THIS SHOULD DEFINITELY FAIL! *spend hours trying to make it fail* pic.twitter.com/mr1GnuxIFh
— Bonnie Aumann (@bonniea) September 7, 2017
The location of the “Code Refactoring” node is a major problem. In the diagram, it looks like refactoring occurs after you’ve finished creating your suite of tests.
But refactoring isn’t something you do at the end. It’s step 3 of the TDD Waltz. Each part of the waltz is an action with fast feedback.
I tell my students, “Refactoring is the secret sauce of TDD.” It’s what the tests empower: the ability to fearlessly change the design of the code. But it’s a continuous process, not one shot at the end.
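In miniature, the waltz might look like this (again Python/unittest as a stand-in; the initials function is purely illustrative). Each test began life failing (red), got a minimal change to pass (green), and then the implementation was reshaped while the tests stayed green (refactor):

```python
import unittest

def initials(name):
    # Green made this pass with a clunky index-juggling loop; the
    # refactor step (step 3 of the waltz) cleaned it up into this
    # comprehension -- behavior unchanged, tests still passing.
    return "".join(word[0].upper() for word in name.split())

class TestInitials(unittest.TestCase):
    # Each test here was written first and watched to fail (red)
    # before the code above existed.
    def test_single_name(self):
        self.assertEqual(initials("ada"), "A")

    def test_two_names(self):
        self.assertEqual(initials("ada lovelace"), "AL")
```

The point isn’t the toy function; it’s that the refactor happens inside every cycle, with the tests re-run immediately after.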
And where the diagram is careful to include “Execute all Test Cases” after “Add a new Test Case” and “Make minimal code changes,” it strangely omits that step after refactoring.
(And don’t forget to refactor test code, too.)
Like I said, I want to see more studies on TDD, not fewer. I applaud the focus of this particular paper: we should give thought to tests for robustness, not just basic functionality.
Part of having more robust tests is learning to question assumptions about input. You can see me do this at the end of my screencast on JSON parsing.
At the same time, robust code is important mainly for handling input from external sources. Making bulletproof code may be wasted effort. Context matters.
There’s also a question of improving our tooling.
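One low-tech form of such tooling is throwing randomized input at code and checking that it only ever fails in documented ways. This is a hand-rolled sketch (dedicated fuzzing and property-based testing tools do this far better), using Python’s json parser as the target:

```python
import json
import random
import string

def fuzz_json_parser(trials=1000, length=20):
    """Feed random printable strings to json.loads and check that each
    one is either parsed or rejected with the documented error --
    never anything else. A toy stand-in for real fuzzing tools."""
    random.seed(0)  # reproducible runs
    parsed = rejected = 0
    for _ in range(trials):
        junk = "".join(random.choice(string.printable)
                       for _ in range(length))
        try:
            json.loads(junk)
            parsed += 1
        except json.JSONDecodeError:
            # Rejecting bad input cleanly is the robust outcome.
            rejected += 1
    return parsed, rejected
```

Any exception other than the documented one would escape the loop and flag a robustness hole.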
So through discipline or tooling, we can make our code more robust.
But we can do this without sacrificing the basic principles of TDD.
Have you encountered malformed TDD? Have you TDD’d something that turned out not to be robust? Or can you share any information about new tools? Please leave a comment below.
I first experienced the joy of programming in junior high. But on the job, some of that joy was sucked away by seeing code my teammates were afraid to touch. Poor code led to fear, and fear led to our entire team being let go. I began searching for ways to improve code. I stumbled upon the first wiki, which was about Design Patterns, Extreme Programming, and Test Driven Development (TDD). I rediscovered joy on the job. I've now been doing TDD in Apple environments for 17 years. I'm committed to software crafting as a discipline, with the hope of raising you, my fellow programmers, to greater effectiveness and joy.