Thursday, October 27, 2005
Acceptance Testing and Process
I've spent the last few days mostly working on acceptance tests. We didn't have a test harness in place, so one of the developers created one. I took it and used it to build one test suite (4 tests), just to get a feel for how it works, how our acceptance tests would be organized vs. our unit tests, and to see how fast it was. Additionally, there was a problem that did not show up in unit testing but did show up when we ran the system, so I needed an acceptance test to demonstrate the issue so I could go fix it.
Issue: Not writing acceptance tests up-front, or as we go.
In general, our testers do not write what XP would consider true acceptance tests - at least based on my understanding. They test the system, and they have tools that help them do that, but I don't believe the tests are automated. This is based upon how long it takes a system to go through "integration testing". For this project, we have adopted the motto: "It's not a test unless it is automated - unless it is checked into CM and anyone on the team can pull it, issue a command, and have it run." If I can't run your test, then it isn't really a test yet. This is a change from how we do things, and it has caused a bit of a problem from an XP view. Our iterations are one week, and until this week (iteration 5) we had no acceptance tests. We have unit tests, and we have been "finishing" stories based upon those unit tests alone. In hindsight, this very much violates the Running Tested Features principle. The reason we had no acceptance tests is that the tester wasn't exactly sure how to get them set up, and development was too busy writing functionality to help. Again, in hindsight, probably a bad thing. Development started helping out last week by building a tool to generate data, then the test harness, and early this week I wrote one of the first full acceptance tests. This gives us a model from which to work.
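To make the motto concrete - automated, checked in, runnable by anyone with one command - here is a minimal sketch of what such an acceptance test might look like. Everything here is invented for illustration: the "system under test" is just a stand-in subprocess that uppercases its input lines, and the test simply runs it end-to-end and checks the output.

```python
# Hypothetical sketch of a one-command acceptance test: run the system
# end-to-end with known input, assert on the observable output.
# The "system" here is a stand-in (a tiny inline script that uppercases
# each input line); a real suite would launch the actual application.
import subprocess
import sys

def run_system(input_lines):
    """Feed data to the (stand-in) system and capture its output lines."""
    result = subprocess.run(
        [sys.executable, "-c",
         "import sys\nfor line in sys.stdin: print(line.strip().upper())"],
        input="\n".join(input_lines) + "\n",
        capture_output=True, text=True, check=True)
    return result.stdout.splitlines()

def test_uppercases_every_record():
    assert run_system(["alpha", "beta"]) == ["ALPHA", "BETA"]

if __name__ == "__main__":
    test_uppercases_every_record()
    print("acceptance suite: PASS")
```

The point of the sketch is the shape, not the content: anyone who pulls the script from CM can run it with one command and get a pass/fail answer, with no manual steps in between.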
So we are doing better, and the number of acceptance tests written and passing is growing every day. This is a great thing. One problem is that our velocity is slipping (because we really aren't working on any new stories; we are just getting "completed" ones to pass the acceptance tests). At least we know why it is slipping - so that is good. But it also makes it a bit harder to give an "end date". When will all these stories be done? Well... they are "done", they just aren't "finished". If we had done this all at once - write the unit test, run the test, write the code until the test passes, then run the acceptance test - we'd not only have finished stories, but we'd know how long it takes to get them to a "finished" state, not just a "done" state.
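The test-first cycle described above can be sketched in a few lines. The story and the class here are invented for illustration - the point is only the sequence: the unit test exists (and fails) before the code does, and the code is written until the test passes.

```python
# A minimal sketch of the "write the unit test first" step of the cycle.
# Both the story ("an order can total its item prices") and the Order
# class are hypothetical examples, not the project's real code.
import unittest

class Order:
    def __init__(self, items):
        self.items = items  # list of (name, price) tuples

    def total(self):
        # Step 2: written only after the test below existed and failed.
        return sum(price for _, price in self.items)

class OrderUnitTest(unittest.TestCase):
    # Step 1: this test is written (and run, and seen to fail) first.
    def test_total_sums_item_prices(self):
        order = Order([("widget", 3.0), ("gadget", 2.5)])
        self.assertEqual(order.total(), 5.5)

if __name__ == "__main__":
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(OrderUnitTest)
    unittest.TextTestRunner().run(suite)
```

Once the unit test passes, the corresponding acceptance test runs against the whole system - only then does the story move from "done" to "finished".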
And yeah, I know - "done" means "running tested features"... but in our haste to get some code written, and somewhat ignoring the test person "because I'm sure he'll have some acceptance tests any day now", we went four weeks before we really had anything.
Learnings
- Do acceptance tests from the beginning. If this means you have to work on test harnesses, process, frameworks, etc. for the first week or so - do it. Even though you aren't getting any "stories" done, it will pay off in the long run, because when they are "done", they'll really be "done".
- Don't forget the WholeTeam. Test is part of the team. If any member of the team isn't making progress, then the whole team isn't making progress. The feeling that "development is cranking out stories and tests - we are just waiting on test to get some acceptance tests written" is NOT a whole-team attitude.
So - it was a good learning experience, even if it puts us in a somewhat "weird" spot right now. We are making good progress on getting acceptance tests written and passing, and the system overall seems to be working well. We know there are some issues, because manually running the system with particular data shows undesirable results; but we'll get those built into *automated* acceptance tests, run them to show they fail, and then work the system until they pass. Right now there doesn't appear to be anything majorly wrong - especially given that we had a few issues, and yesterday I was able to get an acceptance test finished which showed them, and fix both of them that same day. While this might seem like a long time from an XP point of view (not sure), it is pretty good for us: identify a problem *with an automated test* and resolve it in one day, confident that it hasn't broken anything else because all the unit tests still pass. It would be even better if all the acceptance tests were passing, but as noted, we don't have them all in place yet. Now that we've learned this, though, we won't make the same mistake again. Once these tests are written, we'll have them "all", and as we do more stories, we'll write the tests at the same time we are implementing the stories... which is what the books say to do, but sometimes you just have to make your mistakes and learn things the hard way. :)
