Tuesday, May 24, 2005

Taxonomy of Tests: Acceptance Tests

From Ward Cunningham's wiki: "A formal test conducted to determine whether or not a system satisfies its acceptance criteria and to enable the customer to determine whether or not to accept the system."

The acceptance test is exactly what it sounds like: it verifies that the implementation of a user story works as the customer requires. In a literal interpretation of XP theory, the customer writes the acceptance tests. In reality, the acceptance tests are going to be written by some combination of the business partner, testers, business analysts, or product managers. It really doesn't matter to me who writes them, as long as they are done in a timely manner with some degree of correctness. You'll never have the ideal XP customer, but that doesn't necessarily mean the project is hopeless.

It is a huge advantage when acceptance tests are approved and ready during coding. Whether requirements arrive as user stories, use cases, or a huge paper System Requirements Specification document, the best and most effective requirement is a cut-and-dried acceptance test. When the coders can (and do) easily execute the acceptance tests themselves, the bug counts will go down. The development team as a whole can use the acceptance tests as the “Red Bar” metric to watch during an iteration. A user story is “done” only when all the finalized acceptance tests pass. James Shore has a good post on acceptance tests as requirements here.
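To make the "acceptance test as requirement" idea concrete, here is a minimal sketch in Python. The user story, the discount rule, and `order_total` are all invented for illustration; the point is that the requirement is stated as executable checks the coders can run themselves during the iteration.

```python
# Hypothetical user story: "An order over $100 gets a 10% discount."
# The acceptance tests below ARE the requirement, in executable form.

def order_total(subtotal):
    """Toy stand-in for the real system under test."""
    return subtotal * 0.9 if subtotal > 100 else subtotal

def test_no_discount_at_or_below_threshold():
    # The boundary case is spelled out, not left to interpretation.
    assert order_total(100) == 100

def test_discount_above_threshold():
    assert order_total(200) == 180

if __name__ == "__main__":
    test_no_discount_at_or_below_threshold()
    test_discount_above_threshold()
    print("all acceptance tests pass")
```

The story is "done" exactly when these assertions pass, which is what makes them a usable "Red Bar" metric for the whole team.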

Developer/Tester Cooperation

Testing features during an iteration, as soon as the code dries, is a tremendous advantage. It's much, much simpler to fix bugs that were just coded, before the memory of the code fades and the bug is covered up by much more code.

My current project has not had acceptance tests written during iterations, and it has clearly hurt us in terms of wasted effort and misunderstood requirements. Never again.

Organizational issues between testing and development are not specific to agile development, but the desire for rapid iterations will expose any friction between requirements, code, and testing. Moving fast without sacrificing testing requires the ability to cycle code quickly between development and testing environments. Code that is easy to test helps quite a bit, too. Developers have to support the testers with the mechanics of testing. Both testing and development can benefit from a good Continuous Integration (CI) strategy. Testers should be involved in the CI effort. I feel very strongly that it is the developers’ responsibility to ensure the test environment is in a valid state. “BuildMaster” folks are a desirable luxury, but I always want visibility into the test server environment.

Automating Acceptance Tests

Acceptance tests are a good and necessary thing. As far as testing is concerned, an automated test that runs often and visibly is far more valuable than a slow manual test buried in a test plan. A common objection to iterative development is the need for so much regression testing. The best answer to this perfectly valid objection is to automate as much as you can.

Enterprise applications are rarely finished; they're abandoned when they are judged to be too expensive to change. Automating acceptance tests is a great way to extend an application's lifecycle by reducing the cost of regression testing. Also, to paraphrase a former colleague -- "If your tests aren't automated, how do you know they're passing?" Think on that one.

Who writes the tests is one decision. Who automates the tests, and how, is a different decision. Hopefully you're lucky enough to have testers with programming or scripting abilities. Even if you do have technical testers, plan on the developers assisting the testers with test automation. I'm not a big fan of FIT and its derivatives, but it is definitely a step in the right direction for acceptance testing (I love the concept, but I hate the implementation of NFit). The FIT model allows for writing acceptance tests in near-English tables, with some non-trivial work behind the scenes to complete the wiring for the custom FIT fixtures.
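The FIT model described above can be sketched in a few lines of Python. This is not FIT itself; `DivisionFixture` and `run_table` are invented names, and a real FIT fixture would be discovered and wired by the framework. The sketch just shows the shape of the idea: a near-English table of rows, a fixture that maps columns onto system behavior, and a pass/fail verdict per row.

```python
# A minimal FIT-style sketch (invented names, not the real FIT API).
# The table is the tester's artifact; the fixture is the developer's
# "non-trivial wiring" that connects the table to the system.

class DivisionFixture:
    """Fixture mapping table columns onto system behavior."""
    def quotient(self, numerator, denominator):
        return numerator / denominator

def run_table(fixture, rows):
    """Each row: (numerator, denominator, expected quotient)."""
    results = []
    for n, d, expected in rows:
        actual = fixture.quotient(n, d)
        results.append("pass" if actual == expected else f"fail (got {actual})")
    return results

table = [
    (10, 2, 5),
    (9, 3, 3),
    (7, 2, 3.5),
]
print(run_table(DivisionFixture(), table))  # → ['pass', 'pass', 'pass']
```

The division of labor is the point: testers own the table, developers own the fixture code behind it.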

Conventional wisdom says that acceptance tests are black box tests that exercise the system end-to-end. This doesn't necessarily have to be the case. I’ve seen both Brian Marick and Bret Pettichord write about using white box tests for acceptance tests, especially for user interfaces. Using Service Orientation principles to create a cohesive service layer can be a great aid for automated acceptance testing. I think (N)FIT chokes on user interfaces, but it is perfect for driving an isolated service point (note I didn’t necessarily say web service) in an automated test. These white box tests can be much easier to set up and execute. Some flavors of MVC architecture allow for slipping in a screen proxy in place of a real UI view (I've had mixed results with this). Again, it is not a “perfect” black box test, but it goes a long way towards validating the behavior of the system.
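The "screen proxy" idea above can be sketched like this. All of the names (`OrderService`, `FakeOrderView`, `OrderPresenter`) are invented for illustration, and a real MVC/MVP stack would be more involved; the point is that the test drives the presenter and service layer directly, with a recording fake slipped in where the real screen would be.

```python
# A sketch of a white-box acceptance test: no real UI, just a fake view
# recording what the presenter tells it to display. All names invented.

class OrderService:
    """The isolated service point the test actually exercises."""
    def place_order(self, item, quantity):
        if quantity <= 0:
            raise ValueError("quantity must be positive")
        return {"item": item, "quantity": quantity, "status": "placed"}

class FakeOrderView:
    """Screen proxy: stands in for the real view, records output."""
    def __init__(self):
        self.messages = []
    def show_confirmation(self, order):
        self.messages.append(f"placed {order['quantity']} x {order['item']}")
    def show_error(self, text):
        self.messages.append(f"error: {text}")

class OrderPresenter:
    """MVC-ish glue; identical whether the view is real or fake."""
    def __init__(self, view, service):
        self.view, self.service = view, service
    def submit(self, item, quantity):
        try:
            self.view.show_confirmation(self.service.place_order(item, quantity))
        except ValueError as e:
            self.view.show_error(str(e))

view = FakeOrderView()
OrderPresenter(view, OrderService()).submit("widget", 3)
print(view.messages)  # → ['placed 3 x widget']
```

It isn't a "perfect" black box test of the rendered screen, but it validates the behavior behind the screen, and it is far cheaper to set up and run than driving the real UI.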

Good testers can work with developers to write these automated white box tests. It does take pretty good communication and cooperation between coders and testers to take advantage of these opportunities for test automation. It is definitely advantageous for the testers to be familiar with some of the inner workings of the system. I definitely think agile development starts to blur the line between developers and testers, and that's not an entirely bad thing.

