Thursday, October 27, 2005

Acceptance Tests and Speed

As noted in the previous post, our test group isn't really used to writing fully automated acceptance tests, as far as I know. Our system is made up of many processes, where the data comes into one process, is transformed, and moves through several other processes via IPC. Getting data from the start to the end can take some time. Not that it takes a *long* time for data to get processed, but to bring up the system, run a small data set through, and then bring the system down and check the results - well, it takes a while.

Today I'm going to talk to the team about two things with respect to acceptance testing.
These two things go hand in hand. We have a manager process that starts the other processes and manages them. It makes sure they come up in the right order, watches them and reports if they are having problems, restarts them, etc. This is good - but it takes time. I wrote an acceptance test that had to run parts of the system 4 times (because I'm testing four different test cases in the one test suite, which requires different configurations of the processes). It takes about 60 seconds to run. From my reading on the XP mailing list - that is "a long time".

Now, given that you've never written (or run) an automated test, and given that you usually just manually change your configuration, get everything set, run the system, and then check all the results by hand - this is a *vast* improvement. But given that you are running in an XP environment, where all acceptance tests should be run multiple times a day, and you have 20+ tests, and they all take a minute - you are looking at 20 minutes before you get any feedback on how the latest change affected the system. Too long. And if the tests are going to grow in number over time, it will only get worse.

So - I have to see if I can reduce the time of my test. However, the reason I need to talk to the team is that the acceptance test the tester wrote is taking 60 seconds - and it just runs one set of processes, processing a small set of data. So what my test does in about 15 seconds, this test is doing in 60. The reason is that the tester's test uses the manager to bring everything up and bring everything down. While this is the environment in which the system will run, is it really needed for every story, or can this just be exercised in a test for the story that says the system should be automatically managed (or something like that)?

Again - when you are running one test and it takes a minute and spits out automated results, that's a great improvement over manual tests. But if we don't pay attention to speed now, it will bite us later. If we have 100 tests and it takes 2 hours to run them - well that just isn't going to be good.

We'll see how it goes...

Acceptance Testing and Process

I've been spending the last few days mostly working on acceptance tests. We don't have a test harness in place, so one of the developers created one. I took it and used it to build one test suite (4 tests), just to get a feel for how it works, how our acceptance tests would be organized versus our unit tests, and how fast it was. Additionally, there was a problem that did not show up in unit testing but showed up when we ran the system, so I needed an acceptance test to expose the issue so I could go fix it.

Issue: Not writing acceptance tests up-front, or as we go.

In general, our testers do not write what XP would consider true acceptance tests - at least based on my understanding. They test the system and they have tools that help them do that, but I don't believe the tests are automated. This is based upon how long it takes a system to go through "integration testing". For this project, we have adopted the motto: "It's not a test unless it is automated. Unless it is checked into CM and anyone on the team can pull it, issue a command, and have it run." If I can't run your test, then it isn't really a test yet. This is a change from how we do things and has caused a bit of a problem, from an XP view. Our iterations are one week, and until this week (iteration 5) we have had no acceptance tests. We have unit tests, and we are "finishing" stories based upon those unit tests. In hindsight, this very much violates the Running Tested Features principle. The reason we have no acceptance tests is that the tester wasn't exactly sure how to get them set up, and development was too busy writing functionality to help. Again, in hindsight, probably a bad thing. Development started helping out last week by building a tool to generate data, and then the test harness, and early this week I wrote one of the first full acceptance tests. This gives us a model from which to work.

So we are doing better, and the number of acceptance tests written and passing is growing every day. This is a great thing. A problem is that our velocity is slipping (because we really aren't working on any new stories; we are just getting "completed" ones to pass the acceptance tests). At least we know why it is slipping - so that is good. But it also makes it a bit harder to give an "end date". When will all these stories be done? Well... they are "done", they just aren't "finished". If we had done this all at once - write the unit test, run the test, write the code until the test passes, then run the acceptance test - we'd not only have finished stories, but we'd know how long it takes to get them to a "finished" state, not just a "done" state.

And yeah, I know - "done" means "running tested features"... but in our haste to get some code written, and somewhat ignoring the test person "because I'm sure he'll have some acceptance tests any day now" - we went 4 weeks before we really had anything.

Learnings

So - it was a good learning. It puts us in a somewhat "weird" spot right now. But we are making good progress on getting acceptance tests written and passing, and the system overall seems to be working well. We know there are some issues, because manually running the system with particular data shows undesirable results; but we'll get those built into *automated* acceptance tests, run them to show they fail, and then start working the system until they pass. Right now there doesn't appear to be anything major wrong. Especially given that we had a few issues, and yesterday I was able to get an acceptance test finished which showed the issues and fix both of them that day. While this might seem like a long time from an XP point of view (not sure) - it is pretty good for us. Identify a problem *with an automated test* and resolve it in one day, confident that it hasn't broken anything else because all the unit tests still work. It would be even better if all the acceptance tests were passing, but as noted, we don't have them all in place yet. But now that we've learned this, we won't make the same mistake again. Once these tests are written, we'll have them "all", and as we do more stories, we'll write the tests at the same time we are implementing the stories... which is what the books say to do, but sometimes you just have to make your mistakes and learn things the hard way. :)

Thursday, October 20, 2005

TDD really works

Test-Driven Development, and automated unit tests, really work - not that that is any surprise to anyone who is actually using XP. But it's nice to see that what is claimed in theory works for me personally.

I've been building a number of low-level classes. I've built them following what I believe is the XP approach: design by intention, build a test, run it, code it, run it again until it works. All my code is "done" - in that it is passing all the tests. (No acceptance tests yet because of our environment - an issue, I know, but we are working on it.) So... I'm not "running tested features" yet, but I am working stories and unit testing the code which implements them.

Yesterday another team member wrote some tests and then wrote code to pass that test, plugging it into my code. My code is very independent, using stubs, etc. It passes all the tests I've written for it. But when she tried to use it - it wasn't working.

"How can this be? It passes all the tests!"

I then looked at her data, which happens to be valid, but is more complex than the data I had tested with. It is a scenario that upon examination is totally valid, but not one I had planned for.

And this is where the beauty of XP comes in, imo.

Hmm... yeah, I hadn't planned on that, but I can see why it would be expected. How do I fix it? I'm not sure why it isn't working... but I wouldn't expect it to work, seeing as how I wasn't thinking in that way. So - how to fix it? There are a number of classes and quite a few interactions, so I'm not even sure where to start. As some XP authors say, I get "this bad feeling in the pit of my stomach" when I start thinking about just digging into the code and trying to figure out what went wrong.

Doh... the XP light bulb goes on. WRITE A TEST FOR IT. Of course!

So I write a test, and sure enough it fails (no big surprise). OK... now what? I still don't know how to fix it, and the test fails, which is what I expected... But the real reason I added the test is so that I will know when the code is actually fixed. Right. OK... now to start digging (but again, that feeling - I don't want to just dig around in the code). So, I go to the method that is failing, and I look at it. It (like most XP methods) is made up of other methods, so that the code is expressive and short. Hmm... it looks fine, but it is obviously broken, so it must be one of the methods it is calling. Well, since they all have tests (YES! Good thing I followed the "test everything" rule, and didn't just test the public interfaces) - I can update each of their tests to cover the new scenario. Maybe not all the data from the scenario, but given the scenario, what would their parameters look like?

Hmm... maybe it's this first one. If it didn't work, that would surely hose things up. Write the new test, run it. It passes. XP bonus: instantly I know it isn't this method. I didn't spend tons of time reading the code, thinking it all through, trying to figure out whether it should work or not. I just tried it, and it was PROVEN that it worked. Excellent.

OK... maybe this next method. Write the test. It fails. Excellent. I'm on the right track now. At least this is where one problem is occurring (maybe there are others).

I look at this method (level 2, if you consider the method that is having trouble as level 1) and it makes a few calls. OK... let's just be recursive. Let's write tests for its methods. I write a test for the first method, and it fails. This method is very simple, so I go take a look. Light bulb. Of course: there are 3 conditions that this method has to handle, and I only considered two of them. I've got a less-than clause, and it should really be less-than-or-equal-to. I make that change, rerun all the tests for this class, and all of a sudden I've got a "green bar". 100% of my tests are passing. This tells me two things (as obvious as they might seem): 1) I've fixed the low-level method that I just discovered was bad, and 2) fixing that fixed all my problems.
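The bug itself was the classic boundary case. Here's a hypothetical sketch of its shape - made-up names in Python, not the real CLIPS code - showing the fix and the test that pinned it down:

```python
# Made-up names; the real code is CLIPS. The helper rejected the boundary
# value because it used strict less-than where less-than-or-equal was needed.

def fits_in_window_buggy(value, limit):
    return value < limit      # bug: rejects value == limit

def fits_in_window(value, limit):
    return value <= limit     # fix: the boundary value is valid too

# The test written *before* the fix, pinning the overlooked third condition:
def test_boundary():
    assert fits_in_window(5, 10)        # below the limit - always worked
    assert not fits_in_window(11, 10)   # above the limit - always worked
    assert fits_in_window(10, 10)       # at the limit - the missed case
```

The point isn't the one-character change; it's that the failing test told me exactly which method was wrong, and the green bar afterward told me the fix was complete.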

I commit the code, the other person pulls it, and now it works for her as well. I feel great.

Although we aren't "pair programming" yet, we are only 4 weeks into using the XP process, and we have things we need to learn, issues we are overcoming, etc. - the great thing is that we already see so much benefit. And to use an unscientific proof - it just "feels right". Every day as I write tests, then write the code, and continue writing until the tests pass; and every night as I finish for the day, looking at all the tests I've passed, seeing how much closer we are getting to having something done (even if we take a few small steps back when something wasn't tested for and needs fixing) - I just leave work feeling so happy. One of the things I am is a "process guy" - one of the people that thinks about how we should write software, not just writing software. And like I said, XP just "feels right". This is a blessing, and probably a little bit of a curse. I'm so happy the team I'm on is doing it, and I feel kind of sick that all the other teams aren't - especially when they are having problems that I know this would fix. But change comes slowly. First, my current team will prove it works and show successes, and then we'll slowly spread it out to the other teams.

It's been a good XP day. :)

Saturday, October 15, 2005

Unit testing

Unit testing just rawks. I'm having such a blast doing "this XP thing" with respect to TDD, coding by intention, and writing unit tests.

We have no official unit test tool (such as JUnit or CPPUnit, etc.) since we are writing in CLIPS. However, there are a couple of cool things about CLIPS. One is that it is reflective, and another is that you can extend the functions/methods of a class by adding them in another file. This makes it very easy to write a test harness framework (which I had previously done) and then start writing unit tests.

In general I'm doing very little design up front. Based upon the iteration and who signed up for what task, I'll say something like, "OK. So I'm supposed to write a class to do X and Y, right?" and then, after we discuss this for a few minutes, I'll write a stub for the test code for the class, setting up a directory (we keep all our configuration, run scripts, and test code for a particular suite in the same directory). I can get this set up in about 5 minutes or less - creating all the code I need for a test suite and then running it to show that it runs and fails the first test.

I then start writing my tests. "OK - the class needs an X method, and it needs to do Y." I write the test, and maybe as I'm doing that I realize that I need a way to easily prove it worked - so maybe I need a few more methods. I add those into the class, stubbed - along with X. And I just keep going, building helper classes (which then get their own test suites, which I can quickly set up and start testing).
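The stub-first rhythm looks something like this - sketched in Python with made-up names, since the real code is CLIPS. Phase 1: stub the method just so the new test can run (and fail). Phase 2: fill it in until the test passes.

```python
# Made-up class; the real code is CLIPS.

class ParserStub:
    """Phase 1: a stub written just so the new test can run (and fail)."""
    def parse(self, line):
        raise NotImplementedError

class Parser:
    """Phase 2: the same method, filled in until the test passes."""
    def parse(self, line):
        key, _, value = line.partition("=")
        return key.strip(), value.strip()

def test_parse_splits_on_equals():
    assert Parser().parse("mode = fast") == ("mode", "fast")
```

The stub buys you a running (failing) test immediately, which is what keeps the feedback loop tight while the real behavior is still taking shape.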

The other day I started in the morning thinking I was going to write class A, but as I went to test it I realized that I really needed to write B first - as a helper class for A. By the end of the day I hadn't made much progress on A (although the test was still in place for it), but I had gotten B fully completed, along with 27 tests that proved everything it needed to do. The next day I could look at A and see that, with B complete, there was much less to do in A, and I started testing and working on it.

The thing that makes unit testing rawk, to me, at this point in the life of this new project, isn't even all the cool long-term stuff like being able to easily modify code, or being able to fix a defect or add a feature with the hard and fast knowledge that you haven't broken anything. What is really cool is the other things that XP talks about: feedback and progress. The ability to quickly write code and have feedback. To think of a needed piece of functionality, write a test for it, code it within minutes, and see it work. To have that feeling of accomplishment. Not just "I wrote some code today and I think it will probably work and I'm making some progress", but knowing that "I added 5 new methods to the class today and they are passing 25 unit tests, doing both positive and negative tests." To see that test number climb from 1 to 3 to 7 to 10, etc. throughout the day. It's just a very cool thing... and so fast and immediate.

User Stories

(I originally wrote this a few weeks ago - but accidentally deleted it before saving it; so here I go again. I usually don't write as well the second time - but we'll see. ;) )

We use a customer proxy. Not the actual customer, but someone who represents the user. In MCI this person is a BA (Business Analyst). Their job is to act as a liaison between the customer and development - turning "customer language" into "developer language", writing the requirements, dealing with issues, etc.

For this project the BA is acting as the XP customer proxy. This has been a huge change for the BA, but one she is mostly enjoying. The hardest part, from my perspective, is the fact that we don't have a requirements document - and that we don't need to know everything up front. As the pink book says, "Don't worry about whether or not you have all the requirements... you don't." This shows the fallacy in the waterfall thinking of "we'll get all the requirements defined, in concrete, as a contract, then we'll go write the software to fulfill them."

It has been interesting to watch the concept of having user stories, which are a promise of later conversation, take hold.

Development is pushing this, perhaps to the extreme - no pun intended. The BA says things like, "Before you guys can start on a story, I need to find out from the customer all the details", and we are constantly coming back with, "No, you don't. If you do, great. But if not, it won't hold us up, because we aren't to the point where that matters yet." And it even goes past that to, "Well - we'll make an assumption and code it one way, and because we are so *extreme*, *fast*, having *unit tests*, etc. - it won't be hard for us to change it if we have to." This is such a foreign concept. She keeps shaking her head and saying things like, "I keep forgetting... I don't need to know every little detail up-front." In the past, development wouldn't start anything until everything was known.

The other day something changed (I forget what - which in itself is kind of telling; the change was so minor that I don't even remember what it was) - and she said, "Do I have to create a change control for that?" A change control is where something changes and a document is issued which describes the change, how the current requirements document will be updated, and the impact to development and all the other groups (test, QA, etc.), and assessments have to be gathered from all those groups. We said, "Change control? No, just change the story. We haven't started it yet, so if you change it before we get to it, no big deal. And even if we had already done it, the change is minor and we'd just update the code and rerun the test."

While this is very different from what the BA is used to, and hard for her to internalize at times, I believe she is loving it. She occasionally shakes her head and says things like, "This is so much easier", "It's great that I don't have to know it all before you guys even start things", and "It's awesome that I can ask for a change before the release goes out and it isn't a big deal." This is a BA who really wants what is best for her customer, and frequently gets teased that she is the "Change Control Queen" - because as a 3-month project drags on, she is always coming up with new things that she'd like to get "squeezed into the release". With XP, she has the ability to add things at any time (as long as she removes something else) - and she is totally cool with that.

Anyway - this kind of turned into more than just a User Story discussion. To get back on track - we think we are doing OK with the stories. They *may* be a little too low-level, they may be too developer-oriented. But they seem to be working for us, and I'm sure we'll learn as we go.

Friday, September 23, 2005

40 Hour Work Week, Energetic Work, Sit Together and Points

We view the items in the title as all having something to do with each other, in our particular context. Currently many people work "off-line". I frequently start at 5:00 in the morning from home, and don't go into the office until 10:00. Others may work late at night. It is our belief that this is an anti-pattern, in that it violates "Sit Together" and "Pair Programming". (Yes, we won't be pairing - but we'll be "group programming" of a sort.)

We also all have other projects we are responsible for. XP talks about not time-slicing - but we don't think we can get away from it - and we've all done it for so long that we *think* it will be fine. What we have done is come up with our "team" hours. During that time, we'll Sit Together, and that is the ONLY time we'll really work on this project. (We may build some test harnesses, etc. "off-line" - but all building of tests, code, stories, etc. will be done "on-line".) So mostly we'll work on our other projects off-line.

We have picked 23 hours a week where we will work on-line. We believe that this will provide Energetic Work - those are the hours when we'll be focused on this project, making good progress, running tested features, measuring velocity over weekly iterations, etc. We won't violate the 23 hours a week - or not violate it much (an hour or two here or there, maybe). That will be our "40 hours a week". We will never work late (for us), never work the weekend, etc.

In addition to never working off-line, we will only do work if at least 2 of the 3 programmers are present. This will be the case almost all the time - so we shouldn't lose much time to only one person being present. If one person is out, the other two will bring them up to speed when they return.

In looking at our 23 hours a week of "energized work" - we decided we can produce 46 hours of work (3 programmers x 23 hours = 69 person-hours, and 2/3 of that is 46), rounded down to 45 so it divides evenly by 3. We'll use this for estimating. Yes, Ron suggests starting with 1/3, but we are going with 2/3 of the hours of the team. We think we can sustain that. We might be wrong. We'll then translate that into 45 points per iteration, with each person doing 15 points. One point per hour is perhaps pretty fine-grained - but some of our stories are pretty small. That is the nature of how we do things, and we believe it will work for us. We'll see.
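Spelled out as arithmetic (just the numbers from above, nothing new):

```python
programmers = 3
team_hours_per_week = 23                                  # shared "on-line" hours

raw_capacity = programmers * team_hours_per_week          # 69 person-hours
estimate = raw_capacity * 2 // 3                          # 2/3 of capacity: 46
points_per_iteration = estimate - estimate % programmers  # rounded down to 45
points_per_person = points_per_iteration // programmers   # 15 each
```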

One of the great things about XP (we believe) is incremental design. And we feel (based on reading, the email list, etc.) that we can apply it to XP itself. So many of our conversations contain phrases like, "Well, we don't know exactly how we'll do this yet, but we have an idea, we'll start in that direction, and we'll refine as needed."

I love the line from the pink book.

What if we don't have all the stories defined?
Don't worry -- you don't.


What if we don't have the process totally defined yet... don't worry, we don't.

Practices

Following is the list of XP practices, taken from XPE2e. Note to self: I should also go look at the 12 points in James Shore's The Extreme Programmers' Oath. It turns out we'll be following almost all of them. Where not, there will be comments. If there are no comments, then we are going to follow that practice.

Primary:

Corollary:

As noted previously, management had requested that the team (3 programmers, 1 customer proxy, 1 tester) get together and discuss all the primary practices and pick which ones we'd use; note any issues we couldn't resolve on the team; and then invite management in to discuss which practices we had picked and get their help resolving the issues.

We walked through the above and discussed them all - what we thought they meant to us, how we thought they applied, etc. And as you can see, we picked almost all the practices (primary and corollary) - and we had no issues. There was complete agreement on all of them.

It took us most of the day - and it was a very successful start.

Tomorrow - we start with stories. Cool. :)

Wednesday, September 21, 2005

We are starting!

So - we're starting with "true" XP. By true I mean we aren't just going to take some concepts that we've heard of, like TDD, and try to apply them in our current process; we are actually going to "try this XP thing" out. How did we get here?

We are starting a new implementation. It has to be done "quickly" and we want to get a minimal amount of functionality out and then build upon that. This is the perfect XP kind of thing. Let the customer pick what they want first, build it, and then let them pick some more. Quick development, quick feedback.

So - building on what I wrote in "Purpose" - we went and talked to the customer last week. They have many features they'd like us to do - eventually; but only one they have to have now. I talked to them about doing iterations and short releases, and they totally got it and are all for it. Unlike many of our current customers, who pretty much know what they want, in the two-day talk we had with our new customers the features changed back and forth. This is perfect for XP's "embrace change" philosophy. So - we're going to "try this XP thing".

Because I'm totally energized about this, my approach is "we should just do this". My old manager had a different (more XP philosophy) thought. He asked to borrow XPE2e on the plane ride out. He said, "You should let the senior manager read it on the way back", which I did, and he read it. So when we got back together on Monday, 3 of us had read the book and had some common ground in understanding and terminology. We then put together the team, and my old manager (very wise guy that he is) said, "I know you are raring to go - but I think we should buy a few more books, let everyone on the team read them, and then we'll all have a common understanding." Excellent point. I want to start NOW - but a day or two isn't going to kill us.

So, that is what we are doing. Monday night I bought 2 more books, Tuesday and Tuesday night folks read them, and today (Wednesday) we are getting together to discuss which practices we are going to follow and how we will do it. As has been suggested, if you are in an organization that is used to a process, don't just scrap it all and follow every XP practice. Maybe pick some of the primary ones, then add the corollary ones, etc.

Following my "purpose" - here is a summary of how we got to our current state, and what our next steps will be:

  1. I did some XP reading on the mailing list, went to some cosAgile meetings, and then read XPE2e and the pink book
  2. My management read XPE2e
  3. Team members read XPE2e
  4. We are going to meet (just "the team" - no management) and discuss what practices we'd like to use in this first release
  5. After we decide that we'll meet with management and discuss issues, concerns, get buy-off
  6. Then we'll start doing it


And reasons I believe this will work great for the current system we are trying to build:

Again - the main keys at this stage:

I'll report back within the next few days with what practices we've decided on, and how things are going!

Saturday, September 10, 2005

BDUF, little waterfalls, user stories, and on-site customers

There have been a few discussions on the XP mailing list about BDUF (Big Design Up-Front) vs Agile/XP methods, and also the concept of "little waterfalls". Are you really doing iterative/XP-like development, or have you just taken your big waterfall (2 months of writing requirements, 3 months of doing design, 2 months of coding, 1 month of integration testing) and broken it down into iterations where every iteration is a little waterfall - 1 month of writing requirements; many "tasks" where each one is 2 days of design, 2 days of coding, 2 days of testing; and then at the end 1 month of integration testing? Are you still doing waterfalls, just smaller ones?

On Implementation we currently do little waterfalls. How much depends on the team, but we are obviously doing it. This was highlighted by the fact that some members recently wanted to go back to doing very detailed designs before entering the "coding" phase. We no longer use Word documents - we now use the Wiki. This has given us some benefit in that everyone can see everyone else's comments in real time; we can be more collaborative, because the author/reviewer can have on-line conversations and everyone can see them; etc. However, in this context, what is happening is people are pretty much writing all of their code on the wiki (or in the actual source and then copy/pasting it to the wiki) but not compiling it or running it. Using an XP term - they are getting no feedback. Using another XP term - this is a waste of productivity. In any case, this is what is happening. The "design" is then reviewed, comments are made on standards, style, approaches, etc., and refactoring happens. But again - with no machine feedback. The feedback is all at the human level. Once that is all approved, the person copies the "design" back into their code (or maybe they've been updating the code all along and copying it into their "design") and then they start the "coding" phase.

At that point they are running the code and, for the most part, manually verifying that they have the results they want. This is very much a waterfall approach. In any case - that wasn't what I really wanted to talk about - but it lays out the context of "here is where we are now". Another part of this (and what I wanted to discuss) is "requirements gathering" and "writing requirements". Again - we take a traditional waterfall-type approach. We get all the requirements up front, and have them as detailed as possible. Requirements may change or have to be clarified (and the fact that we acknowledge that is a good thing), and ideally (in our current process) those changes get documented. Everything has to be documented, in detail. The XP thought on this is that it is a waste of time. I'm reading Extreme Programming Installed right now, and it talks about the programmer directly talking to the on-site customer and resolving many of these little issues - the ones that aren't discussed in the user story (which is a pretty high-level statement of some feature that should be implemented - my words). We don't have an on-site customer, but we have an on-site customer representative (the Business Analyst (BA)). The XP thought is that a developer going over and clarifying something with the BA is a "good thing". It is better than email, IRC, using the wiki, etc. - because it is human communication, faster, etc. We currently tend to believe this is a "bad thing". And the following from the book sums this up very well:
"You might also be concerned that this important information about the requirements will get lost somewhere, but remember that you will have specified acceptance tests, and they will surely cover [the requirements that were being discussed]."

At this point a light-bulb goes on for me. I'm trying to reconcile how cool I think XP is, and how much I like change and how the whole idea of going to User Stories, etc. appeals to me; but at the same time - at work - I'm pushing for "You need to make sure you get the requirements specification updated" - "You need to make sure and add that requirement to the wiki", etc. And this is exactly it - the proven fear (i.e., this has happened over and over again and led to bad things happening) that the information will get lost. That we'll be reviewing the design or code and say "That doesn't meet the requirement" and the programmer will say "Oh - yeah, I talked to the BA and that isn't the real requirement" or a tester will be testing the system and have the same experience.

XP, in part, addresses this by the statement above. "..., but remember that you will have specified acceptance tests,..." And that is why we have the fear. Because we DO NOT have specified acceptance tests. We do a lot of testing and we do a great job of testing... but it is more a waterfall approach to testing. We have unit tests and regression tests. And we have tests written by the testers that are based on the requirement documents (and thus the need for everything to be documented). But we don't have what I believe XP is talking about. We don't have the developers/testers and BA sit down and write the acceptance tests together. And we definitely don't have any form of TDD where the discussion that appears in the book (or happens in hallway conversations all the time at work) drives new tests to be written - RIGHT THEN. The book is right. If, as soon as that conversation happened, the people involved sat down together and wrote up the new tests - and then let everyone know those tests existed - and those tests were part of the test suite that automatically runs whenever testing is done - then yes, we wouldn't have to go update the requirements documents, because the tests meet that need.

And the beauty of this XP practice is that it is so much more efficient. It uses human communication to resolve/clarify - which is faster. It uses an acceptance test to document it (which, given the right framework, etc., is faster than having the BA document it, the developer review it, etc. etc.) and that acceptance test has a dual purpose. Not only does it document the requirement, but it is then used to prove that the system meets the requirement. In our current methodology that is two tasks. The requirement statement documents the requirement - and then some time later (days for sure, but more likely weeks) a tester writes a test to prove that it works. Based on their understanding of what the text says - not of what the actual human said.
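To make this concrete, here is a minimal sketch of what "capture the hallway conversation as a test, right then" might look like. Everything here is hypothetical - the rule (that zero-duration records should be silently dropped rather than flagged as errors), the `process_records` function, and the record format are all invented for illustration; the real system is C++ processes, not Python. The point is only that the clarified requirement becomes executable the moment it is agreed on:

```python
# Hypothetical scenario: a hallway conversation with the BA clarifies that
# zero-duration records should be silently dropped - NOT flagged as errors,
# as the written requirement implied. Rather than updating a document, we
# encode the clarification as an acceptance test immediately.

def process_records(records):
    """Toy stand-in for the real pipeline: keep only records with a
    positive duration, passing them through unchanged."""
    return [r for r in records if r["duration"] > 0]

def test_zero_duration_records_are_dropped():
    records = [
        {"id": 1, "duration": 120},
        {"id": 2, "duration": 0},   # the clarified case: drop, don't error
        {"id": 3, "duration": 45},
    ]
    result = process_records(records)
    # The requirement now lives here: zero-duration records vanish,
    # everything else survives in order.
    assert [r["id"] for r in result] == [1, 3]

test_zero_duration_records_are_dropped()
print("acceptance test passed")
```

Once a test like this is in the suite that runs automatically, it serves both purposes at once: it documents the clarified requirement, and it proves the system honors it on every run.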

The challenge, imo, is to get people who are used to working in this methodology to change. Because most likely they are going to think it is less precise (it isn't written down in a document) and that it takes more time (because it is so easy to just write it up, vs having to create a test). The truth of the matter though, is that the test is more precise - the system will either do or not do exactly what it is testing - and that it will take less time, because there is so much wasted time when requirements are misunderstood and code is redone, tests are redone, information is lost, etc.

So - as is said over and over in the XP books and discussions - you can do some of XP (and most likely you'll have to adopt it a piece at a time), but the more practices you do, the more "bang for the buck" you'll get. Here is a great occurrence of that. You can do user stories on their own - but where they really work is when they are combined with acceptance tests - and acceptance tests that are written in collaboration with the user and written as the user story is being worked, not after the fact.

Bottom line: We currently have a problem that our requirements are not always precise enough and sometimes hallway conversation information is lost. The solution is NOT to write more detailed requirements. The solution is to use User Stories COUPLED with developing Acceptance Tests - not only upfront, but any time more information is learned; Acceptance Tests which reflect that newly acquired information.

more on Purpose

So - thinking about it some more, I decided that I don't want to just post - "Here is what we are trying, and here is why". That wouldn't really describe the journey well enough, would it? It would kind of be like "Here are the places we stopped on the trip, and where we ended up" - but maybe not all the conversations that helped get us there.

I'm doing quite a bit of reading right now: xp books, the xp mailing list, and articles written by some of the xp authors. Mostly those that are pointed to by discussions in the mailing lists. I'm just soaking it up. Great stuff.

But as I'm doing this, I'm also thinking about potential problems I see in how we do things. Things that could possibly change. Things that are already close and could easily be tweaked - things that are far away. So, I'll discuss some of those as well.

Monday, September 05, 2005

Purpose

I have quite a few blogs - my main one being WorshipJunky. So - why another one? This one (like all my secondary blogs) is to blog some specific aspect of my life - whether it be recipes I've written or experimented with, my mission trip to Costa Rica, or my short life as Jarill.

Quite some time ago I heard of Agile programming and XP. I originally stumbled across the first Wiki a number of years ago. Based on that we (the Sheriff Infrastructure team at MCI) started using the wiki (and also IRC along with logs) as a way to do more collaborative development. We were already very collaborative and pretty iterative, but in mostly a mini-waterfall sort of way. With the wiki, we became more collaborative, because instead of a "write a 100 page document", "review it for a week", "discuss it for a day" and then "update it for a week" sort of cycle, we went to a "write it for a few days (or maybe even 30 minutes)", "review it for 10 minutes", "discuss it for 5 minutes" and "update it for 5 minutes" sort of cycle. This let us iterate much quicker. The great thing about an entire team stating that we are "done enough" and can move on is that the author doesn't spend tons of time in a vacuum trying to decide how much to add to the design. Anyway, I digress (as normal).

So - we were doing this "wiki thing" and discussing things on IRC - because some of us work very different hours than everyone else - and we saw improvement. Then we picked up on TDD (Test Driven Development) and we started applying that. I don't remember how we first came upon it, but from my point of view it was probably a number of things. This was the beginning of us actually implementing some "Test First" methodologies.

So - we were doing some short (for us) iterations, doing some TDD and being very collaborative and iterative. To us, this felt like we were doing "Agile" and "XP" - although we knew that to someone who was truly doing it, we might seem like we were a long way away. But at least we were moving towards it.

Recently, I went to a talk hosted by cosAgile in which Ron Jeffries spoke. That was sort of a turning point. It did a couple of things:

Luckily, at this point in time, I also got the opportunity to work on a new project. The group I'm in (Sheriff) builds a system (which I've discussed on my main blog here and there) that is primarily composed of two layers. The Infrastructure (which provides a common framework, transport layer, etc. - written in C++) and customer specific Implementations (which build upon that framework and are primarily written in CLIPS). About a year ago I transitioned from being the Infrastructure Architect to being the Chief Architect over both the Infrastructure and the Implementation. This has met with some success (mostly technical) and some failures (mostly process and people based) as I try and move some "Agile/XP like" practices into the implementation teams. Many of my failures could probably have been avoided if I had actually read the XP books first, and taken some advice on how to do this... but oh well. Live and learn. In any case, I'm at the point where I might actually be working on a new implementation.

This is very good from an XP point of view - because architects should actually *write* code. They should sign up for tasks, etc. This is also good from my own point of view (held before I read it in an XP book) - because recently it has become very apparent that I need to "get in the trenches" to actually experience some of the issues. And also to prove that some of the techniques I'm pushing/suggesting (pushing is probably a very bad XP word) will actually work. Saying that we can do TDD is one thing. Using TDD on a successful implementation and then having that as an example is a much better way. As are many of the techniques.

So - this implementation is currently in the "we need to talk to the customer and agree this is the right thing to do" phase. But we have high hopes it is. Assuming we can achieve some benefit for the customer, then we'll probably do a new implementation and I'll probably be on the team. This will be brand new, from scratch (although we have a lot of code, experience, patterns, etc. in other implementations). So some framework will be set - BUT we might be able to approach this in a whole new way, using a new development process.

To me - this seems like the perfect place to "get in the trenches" and also to try and bring in some new XP practices. We probably won't do them all. We might not use an exact practice, but might try a principle, etc. But I believe it will be a step forward. And in thinking about this - I decided to start this blog, for a few reasons:

So - having said all that, the purpose of this blog is to journal my journey of "Starting down the road to XP". Why we pick the principles/practices we pick. Why and how we carry some of those out. Up-front, how we think those will work and hopefully in hindsight whether or not we were right. What we learned, etc.

And a final note - this may be a bit premature. I'm not sure of the implementation or that I'll be on it yet... But I'm pretty sure. And so I'm approaching this somewhat iteratively. Here is where I think I'm going and how I want to get there... So let me put these thoughts down now. If it turns out I don't go that direction, nothing but a little time lost. If I do go that way, then I'm already a bit ahead (because I've already started this blog) and it always helps me to write things down as I think of them, rather than after the fact.
