Thursday, 17 September 2009

Integration Tests are not enough - Revisited

A while ago I wrote that Integration Tests are not enough (Part I). I think that J. B. Rainsberger does the subject much more justice than I did in his latest session (Agile 2009) titled:

Integration Tests Are a Scam.

During his talk he also mentioned, as a side note, that what takes time and experience to master when doing TDD is the ability to "listen" to your tests and understand what they are telling you. I couldn't agree more, and I wonder how we can help speed up this process.

Any thoughts?

Friday, 11 September 2009

YAGNI - It all Connects

Everything in software connects. Here is a common thread that connects the following agile principles:

YAGNI - You Ain't Gonna Need It.

YAGNI refers to the fact that in software development it's usually useless to try to think too far ahead, since most of the time we just don't know whether we are going to need something or not. As Ron Jeffries puts it:

"Always implement things when you actually need them, never when you just foresee that you need them."

KISS - Keep It Simple (Stupid)

Which is the same as saying do "the simplest thing that could possibly work". We use this phrase when we want to emphasize that simplicity in software usually has much more value than we initially think.

There is no one perfect design

I'm not sure this is specific to agile, but it has been gnawing at me for quite some time. Too many times I've been in design sessions where people argued deeply about whose design was best, when clearly they were all good enough. While there is such a thing as good and bad design, mostly it's a matter of context. Every good design aims to make a certain type of change easy; however, by doing so it often makes other types of changes harder.

So how do these three connect?

If we accept that there is no one perfect design and that each design is best suited to a given type of change, and we also accept that we lack the gift of foresight, then most of the time we fall under the YAGNI principle. We are left with the conclusion that we must do "the simplest thing that could possibly work". Even if at that point in time the resulting design doesn't look great, that's OK. If and when we need to apply changes, we will know exactly what type of change is required, and only then shall we evolve the design (by refactoring) to cleanly accommodate that type of change (which is most likely to happen again).
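To make this concrete, here is a tiny, hypothetical sketch (the function names and the "requirement" are made up for illustration): the first version is the simplest thing that could possibly work, and only when a real new need arrives do we refactor to make that specific kind of change easy.

    # Version 1 - the simplest thing that could possibly work:
    # the only real requirement today is a total in USD.
    def order_total(prices):
        """Sum line-item prices; all amounts are assumed to be USD."""
        return sum(prices)


    # Later, a real requirement arrives: totals in other currencies.
    # Only now do we refactor to make *that* kind of change easy,
    # instead of guessing at an abstraction up front.
    def order_total_in(prices, currency, rates):
        """Sum line-item USD prices and convert using the given rate table."""
        return sum(prices) * rates[currency]


    if __name__ == "__main__":
        prices = [10.0, 25.0, 3.5]
        print(order_total(prices))                          # 38.5
        print(order_total_in(prices, "EUR", {"EUR": 0.9}))  # converted total

The point is not the code itself but the order of events: the currency abstraction shows up only when a concrete need for it exists.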

(I would like to thank Uncle Bob, whose talk about SOLID design at NDC helped me realize how these principles close the circle.)

Sunday, 6 September 2009

Practical Scrum Course

At the end of the first day of my Practical Scrum course, I asked two questions:

1) What was the most surprising thing you have heard today?

2) What is the most controversial thing you have heard today?

Here's what I got:

The most surprising

1) That even for the simplest "project" it takes 2-3 tries to reach a good way of doing things. (In response to an exercise we conducted)

2) That the waterfall methodology is an example of a flawed model. (Taken from Wikipedia: Royce was presenting this model as an example of a flawed, non-working model (Royce 1970).)

3) That so far the Scrum process is not that far from what we are actually doing.

4) That plans don't have to be too detailed, and paperwork may actually be less important than we think.

The most controversial

1) The claim that we are not good at estimations.

2) That we can start development without planning everything initially.

3) That we can work (most of the time) in cycles of 3-4 weeks and give real business value to the customer at the end of each cycle.

Agile Testing Days

I'll be going to the Agile Testing Days conference in October and will be giving a session on "Quality and Short Release Cycle". It's my first time speaking at such a big conference alongside leading figures in our industry, so I'm very excited and quite nervous.

On a similar note, the latest issue of the "Testing Experience" journal has been released and includes an article of mine (page 16). The issue is all about Agile Testing, includes some very interesting articles, and can be downloaded here.

Let me know what you think.

Thursday, 3 September 2009

Branching is evil

Almost a year ago I blogged about "Source Branches". Over the past year this has become an ongoing discussion I've repeated a few times; in fact, I believe that post was my most commented one ever. Today Martin Fowler posted about the differences between:

  1. Simple Feature Branch - each feature is developed on its own branch and merged when finished.
  2. Continuous Integration (CI) - no branching.
  3. Promiscuous Integration (PI) - same as feature branching, with the twist that each feature directly merges with other features that might affect it.

He also explained the risks and benefits of each approach, and I really liked that post.

What I would add to that post are the human aspects that each approach brings.

Feature Branch - The easy way out

In my eyes feature branching is choosing the path of least resistance. It's an approach born of fear ("I'm going to break the system before I'll make it work again"), taken when wanting to work in isolation without the need to constantly communicate with the rest of the team. In some cases I've seen the actual branch created with ill intent in mind.

Promiscuous Integration - the middle ground?

PI, I think, is slightly better off. Being the middle way, it does encourage communication (to a degree), while keeping some of the apparent benefits of working in isolation. But the more I think of it, the more I doubt that this can actually work for a long period of time. It just seems too much of a hassle. I have a strong feeling that in practice there's a good chance this will revert into a "main trunk" for bug fixes with an ongoing branch for new development (i.e. classical branching), or into simpler feature branching.

Continuous Integration - Encourages quality

The continuous integration path is the one I like best. The thing I like most about it is that working on a single stream of development forces the team to sustain high quality standards. For this to work, all committed code MUST not break the system. In fact, when committing into a single line of development, one must be sure that even if the work is half done it will have no negative effect on the system (and in most cases can be easily taken out). And quality, as you all know, is the only way to go fast.
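One common way (though certainly not the only one) to commit half-done work into a single line of development without any negative effect on the system is a simple feature toggle. A minimal sketch, with made-up toggle and function names:

    # Minimal feature-toggle sketch: half-done work is committed to the
    # main line but stays dormant until the toggle is switched on.
    FEATURE_TOGGLES = {
        "new_pricing_engine": False,  # still under development - off by default
    }

    def is_enabled(feature):
        return FEATURE_TOGGLES.get(feature, False)

    def legacy_pricing(order):
        return sum(order["items"])     # existing, known-good behaviour

    def new_pricing(order):
        raise NotImplementedError("work in progress")

    def calculate_price(order):
        if is_enabled("new_pricing_engine"):
            return new_pricing(order)  # half-done path, never reached yet
        return legacy_pricing(order)

    if __name__ == "__main__":
        print(calculate_price({"items": [10, 20]}))  # 30, via the legacy path

The unfinished path stays off by default, and if it turns out to be a dead end it can easily be taken out by deleting the toggle and the dormant branch.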

Disclaimer - there are business contexts in which branches CAN'T be avoided, and yes, modern version control systems make this a lesser issue, but to me branching is always an inferior solution to a bad situation.

Running all tests in solution using MSTest

There aren't many times I truly have something bad to say about MS. It's not that I'm a great fan, but I admit that in most cases they do get the job done (it just takes a couple of tries, though).

However, there is one thing that always ticks me off when working with MS products, and that is their ability to create products which don't work well with anything besides other MS products. Normally I'm not into conspiracies and would have written it off as simple mistakes, but I guess I just have too much respect for MS developers.

And the thing is that this always happens in the really annoying small things, when you least expect it, but when encountered it really makes you go *#*#*!*!$*!*#!.

And the story goes like this:

At one of my clients I'm helping set up the build server to run all the unit tests the development team is going to write (which is why I was brought in in the first place). After some initial work configuring CC.Net (which didn't take long at all), I wanted to add the unit test execution part, so I went searching for a way to execute all the tests in a given solution. Shouldn't be too hard, right?

Well WRONG!

After touring the web for some time, it appears that MSTest has, how unconventionally, left this somewhat trivial ability out of its tool set (they didn't forget to put a button in the IDE for doing so). It's easy enough to do if you use TFS as your build server solution, but using anything else means one needs to feed the MSTest command line all the various test DLLs explicitly and MANUALLY!

So here are some other options:

1) Taken from Stack Overflow (towards the end) - a way to semi-automate this using MSBuild, based on some naming conventions.

2) The MSTest command line can use a .vsmdi file as its execution input, so one can simply create a list with all existing tests and use that. However, one then needs to maintain that all-knowing list.

3) Write a small utility which, based on the data inside the .sln file, produces a list of all the test DLLs - shouldn't be too hard, but who would like to do that? (A rough sketch of such a utility follows this list.)

And the last thing (which I think I will choose in the end):

4) Kind of brute force, but if MSTest can only receive one DLL as input, let's give it one. I'm not talking about actually writing all the unit tests inside a single DLL; I'm thinking about using ILMerge to consolidate all the DLLs in a given output directory (production and tests) into one giant DLL and feeding it to MSTest. (I just hope it will be able to process that DLL.)
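For what it's worth, here is a rough sketch of what option 3 might look like. It relies on two assumptions that may not hold for your solution - test projects are named with a "Tests" suffix, and all projects build into one shared output directory - so adjust both to your own conventions:

    # Rough sketch: read the .sln, pick the test projects by a naming
    # convention, and emit an MSTest command line with one
    # /testcontainer argument per test DLL.
    import re
    import sys

    # Matches the project name in lines such as:
    # Project("{GUID}") = "My.Tests", "My.Tests\My.Tests.csproj", "{GUID}"
    PROJECT_RE = re.compile(r'Project\("\{[^}]+\}"\)\s*=\s*"([^"]+)"')

    def test_projects(sln_path):
        with open(sln_path) as sln:
            names = PROJECT_RE.findall(sln.read())
        return [name for name in names if name.endswith("Tests")]

    def mstest_command(sln_path, output_dir=r"build\Debug"):
        containers = ['/testcontainer:%s\\%s.dll' % (output_dir, name)
                      for name in test_projects(sln_path)]
        return "MSTest.exe " + " ".join(containers)

    if __name__ == "__main__":
        print(mstest_command(sys.argv[1]))

Run it with the path to your .sln file and pipe the resulting command line into your CC.Net task (or just print it to see which test containers were picked up).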

Really Really ANNOYING!!!!!

 