Thursday, October 27, 2005
Acceptance Tests and Speed
As noted in the previous post, our test group isn't really used to writing fully automated acceptance tests, as far as I know. Our system is made up of many processes: data comes into one process, is transformed, and moves through several other processes via IPC. Getting data from the start to the end takes some time. Not that it takes a *long* time for data to get processed, but bringing up the system, running a small data set through, then bringing the system down and checking the results - well, it takes a while.
Today I'm going to talk to the team about two things with respect to acceptance testing:
- How much of the system do we need to run in order to acceptance test a particular story, and
- The speed of the tests
These two things go hand in hand. We have a manager process that starts the other processes and manages them: it makes sure they come up in the right order, watches them and reports if they are having problems, restarts them, etc. This is good - but it takes time. I wrote an acceptance test that had to run parts of the system 4 times (because I'm testing four different test cases in the one test suite, and each requires a different configuration of the processes). It takes about 60 seconds to run. From my reading on the XP mailing list, that is "a long time".
Now, given that you've never written (or run) an automated test, and that you usually just manually change your configuration, get everything set, run the system, and then check all the results by hand - this is a *vast* improvement. But given that you are running in an XP environment, where all acceptance tests should be run multiple times a day, and you have 20+ tests that each take a minute - you are looking at 20 minutes before you get any feedback on how the latest change affected the system. Too long. And if the tests are going to grow over time, it will only get worse.
So - I have to see if I can reduce the time of my test. However, the reason I need to talk to the team is that the acceptance test the tester wrote is taking 60 seconds - and it just runs one set of processes, processing a small set of data. What my test does in about 15 seconds, this test is doing in 60. The reason is that the tester's test uses the manager to bring everything up and bring everything down. While this is the environment in which the system will run, is it really needed for every story, or could it be exercised just in a test for the story that says the system should be automatically managed (or something like that)?
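To make the idea concrete, here is roughly what I have in mind, sketched in Python rather than in our actual environment (the commands and names are invented stand-ins, not our real processes): a story-level test starts only the processes it exercises, directly, instead of asking the manager to bring the whole system up and down.

```python
# Sketch only: a story-level acceptance test that launches just the processes
# it needs, skipping the manager's full ordered startup and monitoring.
# The "sleep" commands are stand-ins for real pipeline processes.
import subprocess

def start_processes(commands):
    """Launch each command directly, without the manager."""
    return [subprocess.Popen(cmd) for cmd in commands]

def stop_processes(procs):
    """Tear the processes back down when the test is finished."""
    for p in procs:
        p.terminate()
        p.wait()

# A hypothetical "transform" story only exercises two stages, so start only those:
procs = start_processes([["sleep", "60"], ["sleep", "60"]])
try:
    pass  # ... feed the small data set through, then check the results ...
finally:
    stop_processes(procs)
```

The trade-off is deliberate: the managed startup still gets tested, but only by the story that is actually about automatic management.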
Again - when you are running one test and it takes a minute and spits out automated results, that's a great improvement over manual testing. But if we don't pay attention to speed now, it will bite us later. If we have 100 tests and it takes 2 hours to run them - well, that just isn't going to be good.
We'll see how it goes...
Acceptance Testing and Process
I've been spending the last few days mostly working on acceptance tests. We didn't have a test harness in place, so one of the developers created one. I took it and used it to build one test suite (4 tests), just to get a feel for how it works, how our acceptance tests will be organized vs. our unit tests, and to see how fast it is. Additionally, there was a problem that did not show up in unit testing but showed up when we ran the system, so I needed an acceptance test to expose the issue before I could go fix it.
Issue: Not writing acceptance tests up-front, or as we go.
In general, our testers do not write what XP would consider true acceptance tests - at least based on my understanding. They test the system, and they have tools that help them do that, but I don't believe the tests are automated. I base this on how long it takes the system to go through "integration testing". For this project, we have adopted the motto: "It's not a test unless it is automated - unless it is checked into CM and anyone on the team can pull it, issue a command, and have it run." If I can't run your test, then it isn't really a test yet. This is a change from how we do things and has caused a bit of a problem, from an XP view. Our iterations are one week, and until this week (iteration 5) we had no acceptance tests. We have unit tests, and we were "finishing" stories based upon those unit tests alone. In hindsight, this very much violates the Running Tested Features principle. The reason we had no acceptance tests is that the tester wasn't exactly sure how to get them set up, and development was too busy writing functionality to help. Again, in hindsight, probably a bad thing. Development started helping out last week by building a tool to generate data, and then the test harness, and early this week I wrote one of the first full acceptance tests. This gives us a model to work from.
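The "issue a command and have it run" part of the motto could be as small as a checked-in runner script. Here's a sketch in Python (the directory layout and the `at_*.sh` naming convention are assumptions for the sketch, not our actual setup):

```python
# Sketch of the "one command" motto: anyone pulls the tree from CM and runs
# this script, which finds every checked-in acceptance-test script and runs it.
# The root directory name and at_*.sh convention are invented for illustration.
import subprocess
from pathlib import Path

def run_all(root="acceptance_tests", pattern="at_*.sh"):
    """Run each test script under `root`; return the list of failing scripts."""
    failed = []
    scripts = sorted(Path(root).rglob(pattern))
    for script in scripts:
        if subprocess.run(["sh", str(script)]).returncode != 0:
            failed.append(str(script))
    print(f"{len(scripts) - len(failed)} passed, {len(failed)} failed")
    return failed

# Usage: run_all() right after pulling from CM; a non-empty result means red.
```

The point isn't the script itself - it's that the entry point is versioned alongside the tests, so "can anyone run it?" has a yes-or-no answer.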
So we are doing better, and the number of acceptance tests written and passing is growing every day. This is a great thing. A problem is that our velocity is slipping (because we really aren't working on any new stories; we are just getting "completed" ones to pass the acceptance tests). At least we know why it is slipping - so that is good. But it also makes it a bit harder to give an "end date". When will all these stories be done? Well... they are "done", they just aren't "finished". If we had done this all at once - write the unit test, run the test, write the code until the test passes, then run the acceptance test - we'd not only have finished stories, we'd know how long it takes to get them to a "finished" state, not just a "done" state.
And yeah, I know - "done" means "running tested features"... but in our haste to get some code written, and somewhat ignoring the test person "because I'm sure he'll have some acceptance tests any day now", we went 4 weeks before we really had anything.
Learnings
- Do acceptance tests from the beginning. If this means you have to work on test harnesses, process, frameworks, etc. for the first week or so - do it. Even though you aren't getting any "stories" done, it will pay off in the long run. Because when they are "done", they'll really be "done".
- Don't forget the WholeTeam. Test is part of the team. If any member of the team isn't making progress, then the whole team isn't making progress. The feeling that "development is cranking out stories/tests - we are just waiting on test to get some acceptance tests written" is NOT a whole-team attitude.
So - it was a good learning. It puts us in a somewhat "weird" spot right now. But we are making good progress on getting acceptance tests written and passing, and the system overall seems to be working well. We know there are some issues, because manually running the system with particular data shows undesirable results; but we'll get those built into *automated* acceptance tests, run them to show they fail, and then work the system until they pass. Right now there doesn't appear to be anything major wrong. Especially given that we had a few issues, and yesterday I was able to get an acceptance test finished which showed them, and fix both of them that day. While this might seem like a long time from an XP point of view (not sure), it is pretty good for us: identify a problem *with an automated test* and resolve it in one day, confident that it hasn't broken anything else because all the unit tests still pass. It would be even better if all the acceptance tests were passing, but as noted, we don't have them all in place yet. But now that we've learned this, we won't make the same mistake again. Once these tests are written we'll have them "all", and as we do more stories, we'll write the tests at the same time we are implementing the stories... which is what the books say to do, but sometimes you just have to make your mistakes and learn things the hard way. :)
Thursday, October 20, 2005
TDD really works
Test Driven Design, and automated unit tests, really work - not that this is any surprise to anyone who is actually using XP. But it's nice to see that what is claimed in theory works for me personally.
I've been building a number of low-level classes, following what I believe is the XP approach: design by intention, write a test, run it, code it, run it again until it works. All my code is "done" in that it passes all the tests. (No acceptance tests yet, because of our environment - an issue, I know, but we are working on it.) So... I'm not "running tested features" yet, but I am working stories and unit testing the code which implements them.
Yesterday another team member wrote some tests and then wrote code to pass those tests, plugging it into my code. My code is very independent, using stubs, etc., and it passes all the tests I've written for it. But when she tried to use it - it wasn't working.
"How can this be? It passes all the tests!"
I then looked at her data, which happens to be valid, but is more complex than the data I had tested with. It is a scenario that upon examination is totally valid, but not one I had planned for.
And this is where the beauty of XP comes in, imo.
Hmm... yeah, I hadn't planned on that, but I can see why it would be expected. How do I fix it? I'm not sure why it isn't working... but I wouldn't expect it to work, seeing as I wasn't thinking that way. So - how to fix it? There are a number of classes and quite a few interactions, so I'm not even sure where to start. As some XP authors say, "I get this bad feeling in the pit of my stomach" when I start thinking about just digging into the code and trying to figure out what went wrong.
Doh... the XP light bulb goes on. WRITE A TEST FOR IT. Of course!
So I write a test, and sure enough it fails (no big surprise). OK... now what? I still don't know how to fix it, and the test fails, which is what I expected... But the real reason I added the test is so that I will know when the code is actually fixed. Right. OK... now to start digging (but again, that feeling - I don't want to just dig around in the code). So I go to the method that is failing and look at it. It (like most XP methods) is made up of other methods, so that the code is expressive and short. Hmm... it looks fine, but it is obviously broken, so it must be one of the methods it is calling. Well, since they all have tests (YES! Good thing I followed the "test everything" rule and didn't just test the public interfaces), I can update each of their tests to cover the new scenario. Maybe not with all the data from the scenario, but, given the scenario, what would their parameters look like?
Hmm... maybe it's this first one. If it didn't work, that would surely hose things up. Write the new test, run it. It passes. XP bonus: instantly I know it isn't this method. I didn't spend tons of time reading the code, thinking it all through, trying to figure out whether it should work or not. I just tried it, and it was PROVEN that it worked. Excellent.
OK... maybe this next method. Write the test. It fails. Excellent. I'm on the right track now. At least this is where one problem is occurring (maybe there are others).
I look at this method (level 2, if you consider the method that is having trouble as level 1) and it makes a few calls. OK... let's just be recursive: let's write tests for its methods. I write a test for the first method, and it fails. This method is very simple, and I go take a look. Light bulb. Of course - there are 3 conditions this method has to handle, and I only considered two of them. I've got a less-than clause that should really be less-than-or-equal-to. I make that change, rerun all the tests for this class, and all of a sudden I've got a "green bar": 100% of my tests pass. This tells me two things (as obvious as this might seem): 1) I've fixed the low-level method that I just discovered was bad, and 2) fixing that fixed all my problems.
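For illustration, here is the shape of that bug, sketched in Python with invented names (our real code is CLIPS): a boundary check that handled two of the three conditions.

```python
# Hypothetical example of the kind of bug I fixed: the method must handle
# below, at, and above the boundary, but the original code used `<` where
# `<=` was needed, silently dropping the "exactly at the boundary" case.

def within_window(value, start, end):
    """Return True if value falls inside [start, end], inclusive."""
    return start <= value <= end   # fixed: was effectively `start <= value < end`

# The test that exposed it: data landing exactly on the boundary.
assert within_window(5, 0, 10)       # comfortably inside: always passed
assert within_window(10, 0, 10)      # on the boundary: failed before the fix
assert not within_window(11, 0, 10)  # outside: always passed
```

The new test stays in the suite, so this particular boundary can never regress silently.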
I commit the code, the other person pulls it, and now it works for her as well. I feel great.
What did this approach give me?
- Immediate, fast feedback
- It made "debugging" much easier - I didn't really have to debug (in the traditional sense); I just kept writing tests and following where they led me
- Given the scenario that is "broken", I can easily tell which methods are working, even ones I hadn't previously tested for it, as I write the new tests
- I've proven that the code will work for this new test scenario
- I'm confident my code will work
- I'm confident I didn't break anything else (because all the other tests are still passing)
- I feel good because I'm doing things the "right" way, and if I hit another issue in the future that I didn't think about and test for, I know I can easily write a few new tests and quickly find and fix the problem
Although we aren't "pair programming" yet - we are only 4 weeks into using the XP process, and we have things to learn, issues we are overcoming, etc. - the great thing is that we already see so much benefit. And to use an unscientific proof: it just "feels right". Every day as I write tests, then write code, and continue writing until the tests pass; and every night as I finish for the day, looking at all the tests I've passed, seeing how much closer we are getting to having something done (even if we take a few small steps back when something wasn't tested for and needs fixing) - I just leave work feeling so happy. One of the things I am is a "process guy" - one of the people who thinks about how we should write software, not just about writing software. And like I said, XP just "feels right". This is a blessing, and probably a little bit of a curse. I'm so happy the team I'm on is doing it, and I feel a bit sick that all the other teams aren't - especially when they are having problems that I know this would fix... But change comes slowly. First, my current team will prove it works and show successes, and then we'll slowly spread it out to the other teams.
It's been a good XP day. :)
Saturday, October 15, 2005
Unit testing
Unit testing just rawks. I'm having such a blast doing "this XP thing" with respect to TDD, coding by intention, and writing unit tests.
We have no official unit test tool (such as JUnit or CPPUnit) since we are writing in CLIPS. However, a couple of things about CLIPS are cool here: it is reflective, and you can extend the functions/methods of a class by adding them in another file. This makes it very easy to write a test harness framework (which I had previously done) and then start writing unit tests.
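As a sketch of the idea - in Python rather than CLIPS, with invented names - a reflective harness just discovers test functions by name and runs them, in the spirit of the xUnit tools:

```python
# Minimal reflective test harness, sketched in Python. Our real harness is
# written in CLIPS, which is similarly reflective; everything here is invented
# for illustration.
import sys

def run_suite(module):
    """Find every attribute whose name starts with 'test_' and call it."""
    passed, failed = 0, 0
    for name in sorted(dir(module)):
        if name.startswith("test_"):
            try:
                getattr(module, name)()
                passed += 1
            except AssertionError:
                failed += 1
                print(f"FAIL: {name}")
    print(f"{passed} passed, {failed} failed")
    return failed == 0

# Usage: define tests anywhere the harness can see (here, this module itself):
def test_addition():
    assert 1 + 1 == 2

def test_boundary():
    assert min(3, 3) == 3

run_suite(sys.modules[__name__])
```

Reflection is what keeps the setup cost near zero: adding a test is just adding a function with the right name prefix.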
In general I'm doing very little design up front. Based upon the iteration and who signed up for what task, I'll say something like, "OK, so I'm supposed to write a class to do X and Y, right?" and then, after we discuss it for a few minutes, I'll write a stub for the class's test code, setting up a directory (we keep all the configuration, run scripts, and test code for a particular suite in the same directory). I can get this set up in about 5 minutes or less - creating all the code I need for a test suite, then running it to show that it runs and fails the first test.
I then start writing my tests. "OK - the class needs an X method, and it needs to do Y." I write the test, and maybe as I'm doing that I realize I need a way to easily prove it worked - so maybe I need a few more methods. I add those into the class, stubbed, along with X. And I just keep going, building helper classes (which then get their own test suites, which I can quickly set up and start testing).
The other day I started in the morning thinking I was going to write class A, but as I went to test it I realized that I really needed to write B first, as a helper class for A. By the end of the day I hadn't made much progress on A (although its test was still in place), but I had gotten B fully completed, along with 27 tests proving everything it needed to do. The next day I could look at A and see that, with B complete, there was much less to do in A, and I started testing and working on it.
The thing that makes unit testing rawk, to me, at this point in the life of this new project, isn't even all the cool long-term stuff like being able to easily modify code, or being able to fix a defect or add a feature with the hard and fast knowledge that you haven't broken anything. What is really cool is the other things XP talks about: feedback and progress. The ability to quickly write code and get feedback. To think of some needed functionality, write a test for it, code it within minutes, and see it work. To have that feeling of accomplishment. Not just "I wrote some code today and I think it will probably work and I'm making some progress", but "I added 5 new methods to the class today and they are passing 25 unit tests, both positive and negative." To see that test number climb from 1 to 3 to 7 to 10 throughout the day. It's just a very cool thing... and so fast and immediate.
User Stories
(I originally wrote this a few weeks ago - but accidentally deleted it before saving; so here I go again. I usually don't write as well the second time - but we'll see. ;) )
We use a customer proxy: not the actual customer, but someone who represents the user. At MCI this person is a BA (Business Analyst). Their job is to act as a liaison between the customer and development - turning "customer language" into "developer language", writing the requirements, dealing with issues, etc.
For this project the BA is acting as the XP customer proxy. This has been a huge change for the BA, but one she is mostly enjoying. The hardest part, from my perspective, is the fact that we don't have a requirements document - and that we don't need to know everything up front. As the pink book says, "Don't worry about whether or not you have all the requirements... you don't." This exposes the fallacy in the waterfall notion of "we'll get all the requirements defined, in concrete, as a contract, then we'll go write the software to fulfill them."
It has been interesting to watch the concept of having user stories, which are a promise of later conversation, take hold.
Development is pushing this, perhaps to the extreme - no pun intended. The BA says things like, "Before you guys can start on this, I need to find out all the details from the customer," and we are constantly coming back with, "No, you don't. If you do, great. But if not, it won't hold us up, because we aren't to the point where that matters yet." It even goes past that to, "Well - we'll make an assumption and code it one way, and because we are so *extreme*, *fast*, and have *unit tests* - it won't be hard to change if we have to." This is such a foreign concept. She keeps shaking her head and saying things like, "I keep forgetting... I don't need to know every little detail up front." In the past, development wouldn't start anything until everything was known.
The other day something changed (I forget what - which in itself is kind of telling; the change was so minor that I don't even remember what it was) - and she said, "Do I have to create a change control for that?" A change control is what happens when something changes: a document is issued which discusses the change, how the current requirements document will be updated, and the impact to development and all the other groups (test, QA, etc.), and assessments have to be gathered from all of them. We said, "Change control? No, just change the story. We haven't started it yet, so if you change it before we get to it, no big deal. And even if we had already done it, the change is minor and we'd just update the code and rerun the test."
While this is very different from what the BA is used to, and hard for her to internalize at times, I believe she is loving it. She occasionally shakes her head and says things like, "This is so much easier", "It's great that I don't have to know it all before you guys even start", and "It's awesome that I can ask for a change before the release goes out and it isn't a big deal." This is a BA who really wants what is best for her customer, and who frequently gets teased as the "Change Control Queen" - because as a 3-month project drags on, she is always coming up with new things she'd like to get "squeezed into the release". With XP, she can add things at any time (as long as she removes something else) - and she is totally cool with it.
Anyway - this kind of turned into more than just a User Story discussion. To get back on track - we think we are doing OK with the stories. They *may* be a little too low-level, they may be too developer-oriented. But they seem to be working for us, and I'm sure we'll learn as we go.
