Cost of convergence

I was recently developing a significant new feature on an ASP.NET application. The client’s regular dev team is located in India, but for this new feature they wanted somebody whom they could work with face-to-face, and having me dedicated to the project (for a short time) seemed like an advantage.

We started off really well. I was able to get into TFS for version control, connect to the development database, and add a few basic pages to get started.

Then we talked about testing. That’s always a conundrum when implementing a deliberately short-term coding project. If I finish my work and hand it over to QA, I’m done already and thus off to the next project, which usually means the next client.

After that, if they find any defects, or even if they think there are defects, then what? I'm not there. Sure, I can return a phone call or work around my next assignment, but most of my projects are extra-chunky. I'm often working more than full time with a given client, so backtracking with a prior client is hard to fit in.

It’s not like law or some other kinds of consulting, where clients tend to coexist actively over a long period of time. Really, I’m either working on your thing or I’m not.


So I proposed a solution. In short, I broke up the functionality into four testable units that I nicknamed (in order) Diamond, Club, Heart, and Spade. Spade was to be the final deliverable.

The idea was for Sarah's team in the Indian office to pull code from version control first thing, run a build, call it "Diamond," and test the subset of functionality described by the Kanban cards that I'd already marked as done. On my side, I gave both Sarah and Robin, our customer representative, an idea of what to expect of "Diamond," "Club," and so on.

I chose card suits whimsically, because that's how I roll, but as with a lot of things I understood the deeper reason soon afterwards: there are only four of them! It subtly reinforced my commitment to getting this thing done in no more than four cycles. You can't just let the schedule slide and invent a fifth suit.

Here’s how it went.

Robin and I had a nice chat with Sarah, describing the incremental release concept. She seemed to understand the idea of increments and the reason for having them. (One major reason was simply that I wouldn’t be around after “Spade” was done; this was the best way of making sure we got feedback on everything up through “Heart.”) Sarah was a little skeptical of my having used only inline SQL in my C# code; up to then the shop standard had been to use stored procedures for everything. We agreed that migrating all that inline code to procedures would happen, just not in time for Diamond.
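To make the inline-SQL question concrete, here is a hedged sketch of the two styles we were weighing. The table, column, and procedure names are illustrative, not from the actual project; the point is only where the query text lives.

```csharp
using System.Data;
using System.Data.SqlClient;

public static class OrderLookup
{
    // Inline SQL: the query text lives in the C# code. Fast to write and
    // easy to version alongside the feature, which is why I used it.
    public static int CountOrdersInline(SqlConnection conn, int customerId)
    {
        using (var cmd = new SqlCommand(
            "SELECT COUNT(*) FROM Orders WHERE CustomerId = @customerId", conn))
        {
            cmd.Parameters.AddWithValue("@customerId", customerId);
            return (int)cmd.ExecuteScalar();
        }
    }

    // Shop standard: the query lives in a stored procedure in the database,
    // and the C# code only names it. Migrating to this form was deferred
    // until after the Diamond release.
    public static int CountOrdersProc(SqlConnection conn, int customerId)
    {
        using (var cmd = new SqlCommand("usp_CountOrdersByCustomer", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@customerId", customerId);
            return (int)cmd.ExecuteScalar();
        }
    }
}
```

The migration is mechanical, which is part of why deferring it past Diamond was a reasonable trade.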

So we agreed that I’d create the Diamond release on a certain day. Around 5:00 p.m. my time I did so, and sent an email saying to let me know right away if there were any problems getting started. I’m a night person, and their work day starts around my midnight, so I specifically promised to check my email around 1:00 a.m. to keep the process moving.

Okay, this is frustrating.

I got an email from Sarah at the end of her workday reporting some problem having to do with an installation script. Sigh. It turned out she was looking for what you might call "convergence" on this little incremental release.

That's one of the nuances of incremental delivery. You really can't afford to create the installer, finish the release notes, and tie up all those little loose ends, not for every increment. It takes a long time and the effort simply isn't worth it. Convergence on a small project like this one sets you back about a day each time, because you may have to back out or shelve recent changes in source control, perhaps regression-test your database changes, and generally make a "package" of your team's work to that point. Additionally, it breaks the momentum of whatever you'd been working on.

Communicating is hard when all you have is email and the occasional early-morning (for you) or late-afternoon (for them) phone call. I wonder how we could have talked through the convergence issue more productively. But rather than sticking to the plan for four mini-releases, I told the project owner that we'd fall back on the existing practice of finishing a project, in however agile a way it can be done, and basically throwing it over the wall for testing.

If there is rework to be done in light of testing results, so be it. We’ll figure something out.