Tuesday, May 31, 2005

Taxonomy of Tests: Everything Else

While the majority of your focus on an agile project is on either unit tests or acceptance tests, there are quite a few other types of useful tests that fall through the cracks. All of these types of tests can and should be part of your continuous integration strategy.

Developer Written Partial Integration Tests

Bigger than a unit test, but smaller in scope than an acceptance test. That's a big mouthful for a category, but I've never seen any two development teams use the same name for this broad category of tests. For multiple reasons, most of my team's unit tests are disconnected -- i.e. databases, web services, and the like are mocked or stubbed. I want to test each class in isolation, but the classes have to interact with the real services too. A secondary battery of developer tests that exercise partial code stacks is very advantageous in uncovering problems in the actual interaction between classes or subsystems. I've known some XP developers who actually emphasize these semi-integrated tests more than the disconnected unit tests. I like to keep the two types of tests in separate test assemblies, but that's just preference. These tests are hard to describe in the abstract, so here are a couple of ideas I've used or heard of --

  • Testing a view and controller or presenter together. I would still try to mock out the backend business services from the controller in order to focus on testing the integration of the view and controller. This is especially important in the new Model-View-Presenter pattern of UI development (there's a rough sketch of this after the list).
  • Testing from a UI controller down to the persistence layer.
  • Testing from the service layer down. SOA is surely overhyped and overused, but it presents some great opportunities for automated testing. Because the .Net SOAP serialization is a bit brittle, I always put automated tests in place for the web service proxy class communicating with a local web service. It's also good to test the service code without IIS being involved.
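
To make the first idea a little more concrete, here's a rough sketch of a presenter test with the backend service stubbed out by hand. All of the types (IOrderService, IOrderView, OrderPresenter) are made up for illustration; the point is that the test exercises the real presenter-to-view interaction while keeping the business service out of the picture.

```csharp
using NUnit.Framework;

// Hypothetical types, purely for illustration.
public interface IOrderService { decimal GetOrderTotal(int orderId); }
public interface IOrderView { string TotalText { get; set; } }

public class OrderPresenter
{
    private readonly IOrderView _view;
    private readonly IOrderService _service;

    public OrderPresenter(IOrderView view, IOrderService service)
    {
        _view = view;
        _service = service;
    }

    public void ShowOrder(int orderId)
    {
        // The presenter pulls from the service and pushes onto the view.
        _view.TotalText = _service.GetOrderTotal(orderId).ToString("C");
    }
}

// Hand-rolled stand-ins keep the real business service out of the test.
public class StubOrderService : IOrderService
{
    public decimal GetOrderTotal(int orderId) { return 42.50m; }
}

public class FakeOrderView : IOrderView
{
    public string TotalText { get; set; }
}

[TestFixture]
public class OrderPresenterTests
{
    [Test]
    public void Presenter_pushes_the_order_total_onto_the_view()
    {
        FakeOrderView view = new FakeOrderView();
        OrderPresenter presenter = new OrderPresenter(view, new StubOrderService());

        presenter.ShowOrder(5);

        Assert.AreEqual(42.50m.ToString("C"), view.TotalText);
    }
}
```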

Environment Tests

Environment tests verify that a build or test environment is fully connected and correctly configured. I helped do a presentation on continuous integration to the Austin Agile group last week and we spent quite a bit of time talking about this very topic. The gist of the slide was preventing lost testing time by uncovering disconnects or failures in the test environment at code migration time. Nothing is more aggravating than reported defects that turn out to be merely a mistake in code migration. I once spent a week getting grilled because a data interface wasn't working correctly. It turned out that the middleware team had shut down the webMethods server in the test environment and not told the development team. Putting aside the total incompetence of everybody involved (including me), had an automated environment test been in place to verify the connectivity to webMethods, a full week of testing would not have been wasted.

I had some periodic trouble on a project last year maintaining the various copies of the system configuration (database connections, file paths, web service locations, etc.) for the development, build, and test environments. To combat this problem in the future I've added some support to my StructureMap tool for doing environment checks and configuration validation inside an automated deployment. We often move code a lot faster in agile development, and that provides a lot of opportunity to screw things up. These little environment tests go a long way toward keeping project friction down.
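
Just as a sketch of what these look like, here are a couple of NUnit-style environment checks. The connection string and service URL are placeholders; the real values would come from whatever configuration the deployment uses. The only point is that these tests hit the actual environment instead of mocks.

```csharp
using System.Data.SqlClient;
using System.Net;
using NUnit.Framework;

[TestFixture]
[Category("Environment")]
public class EnvironmentTests
{
    [Test]
    public void Can_connect_to_the_test_database()
    {
        // Placeholder connection string -- substitute the environment's real configuration.
        using (SqlConnection connection = new SqlConnection(
            "Data Source=testserver;Initial Catalog=AppDb;Integrated Security=SSPI"))
        {
            connection.Open();  // throws if the database is down or misconfigured
        }
    }

    [Test]
    public void The_partner_web_service_is_reachable()
    {
        // Placeholder URL for a service the application depends on.
        HttpWebRequest request =
            (HttpWebRequest)WebRequest.Create("http://testserver/PartnerService/Service.asmx");
        using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
        {
            Assert.AreEqual(HttpStatusCode.OK, response.StatusCode);
        }
    }
}
```

Tagging them with a category makes it easy to run just the environment suite right after a code migration.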

I got the name for this off Nat Pryce's post here.

Smoke Tests

Smoke tests are simple, crude tests that exercise the system just to see if the code blows up. They don't provide that much benefit, but they're generally low effort as well. On my current project we're replacing a large VB6 component with a new C# equivalent. One of the things we did to create a safety "tripwire" effect was to run a huge number of historical input XML messages and configurations through the new C# engine just to find exceptions. We found and fixed several problems from edge cases and some flat-out bugs. It was definitely worth the effort.
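
A smoke test harness for that kind of replay can be as dumb as a loop that feeds every captured message through the engine and collects anything that throws. The folder path and the PricingEngine class below are invented for the sketch; the real engine would obviously be the component under test.

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using NUnit.Framework;

// Stand-in for the new C# engine under test (hypothetical).
public class PricingEngine
{
    public void Process(string xml) { /* parse and evaluate the message */ }
}

[TestFixture]
public class MessageReplaySmokeTests
{
    [Test]
    public void Replay_historical_messages_and_report_any_exceptions()
    {
        List<string> failures = new List<string>();

        // Placeholder folder full of captured production messages.
        foreach (string file in Directory.GetFiles(@"C:\smoke-data\messages", "*.xml"))
        {
            try
            {
                // Only checking that the engine doesn't blow up on real-world input.
                new PricingEngine().Process(File.ReadAllText(file));
            }
            catch (Exception ex)
            {
                failures.Add(Path.GetFileName(file) + ": " + ex.Message);
            }
        }

        Assert.AreEqual(0, failures.Count, string.Join(Environment.NewLine, failures.ToArray()));
    }
}
```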

Regression Tests

My last two projects have been more or less rewrites of existing legacy code. Oftentimes hairy legacy code isn't understood very well by the organization (hence the rewrite). Twice now I've had to jury-rig the legacy code to create repeatable automated tests that simply prove the new code creates the same output as the old code for known input. It's not entirely a good test because it has no connection to the intent of the code, but both times it found problems. The irritation level was high because the legacy code wasn't pretty or easy to get running, but finding and fixing otherwise unknown defects made it worth the effort. Automate the regression tests in the build if you can. If nothing else, there are always business processes and requirements buried in the code that aren't known or documented anywhere else.
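
This kind of test boils down to a golden-file comparison: capture the legacy system's output for a pile of known inputs once, then assert that the new code reproduces it exactly. The folder layout and the QuoteCalculator class here are made up for the sketch.

```csharp
using System.IO;
using NUnit.Framework;

// Stand-in for the rewritten component (hypothetical).
public class QuoteCalculator
{
    public string Calculate(string inputXml)
    {
        // real calculation logic lives here
        return inputXml;
    }
}

[TestFixture]
public class LegacyRegressionTests
{
    [Test]
    public void New_code_matches_the_captured_legacy_output()
    {
        // Each input file has a matching "golden" output file captured from the legacy system.
        foreach (string inputFile in Directory.GetFiles(@"C:\regression\inputs", "*.xml"))
        {
            string expectedFile = Path.Combine(@"C:\regression\expected", Path.GetFileName(inputFile));

            string expected = File.ReadAllText(expectedFile);
            string actual = new QuoteCalculator().Calculate(File.ReadAllText(inputFile));

            Assert.AreEqual(expected, actual, "Mismatch for " + Path.GetFileName(inputFile));
        }
    }
}
```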

"Build Time" Validations

There are always some aspects of the system that the compiler and maybe even the unit test library can't validate. Use the automated build as an opportunity to create "build time" checks for potential problems or to enforce coding standards. Here are some ideas of things to enforce in the build:
  • A compiler warning or "TODO" threshold. I've heard of people making a build fail for this (there's a small sketch of this check after the list).
  • A limit on ignored unit tests
  • SQL validator. You have to keep the SQL external to the code somehow for this to work, though.
  • O/R mapping. We're starting to adopt NHibernate for persistence, and one of the first steps will be validating the mapping in the build.
  • Automated code coverage, which is kind of a test of your tests.
  • Static code analysis
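
As an example of the first item, the TODO threshold can be enforced with nothing fancier than a test that scans the source tree during the build. The path and the threshold of 25 are placeholders; the same shape works for counting ignored tests or compiler warnings dumped to a log file.

```csharp
using System.IO;
using NUnit.Framework;

[TestFixture]
public class BuildTimeChecks
{
    [Test]
    public void Todo_count_stays_under_the_agreed_threshold()
    {
        const int threshold = 25;   // placeholder limit agreed on by the team
        int todoCount = 0;

        // Placeholder path to the source tree being built.
        foreach (string file in Directory.GetFiles(@"C:\projects\app\src", "*.cs", SearchOption.AllDirectories))
        {
            foreach (string line in File.ReadAllLines(file))
            {
                if (line.Contains("TODO")) todoCount++;
            }
        }

        Assert.LessOrEqual(todoCount, threshold, "Too many TODO's left in the code: " + todoCount);
    }
}
```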

There's probably more, but that's all I can think of. I'd love to hear other ideas.
