Sunday, 29 November 2009

Unit Testing Singletons

Much has been said about the singleton pattern; a short visit to Google shows that it's mainly about why not to use it. However, for some reason, in all projects I have seen to date I have found quite a few usages of this pattern. In fact, in most places this pattern is extensively overused. I'm guessing this is mainly due to the ease of implementation, and the fact that in most systems one can find several classes which are only instantiated once (i.e. we only need a single instance in the system).

This post is NOT about whether singletons are good or bad; for me that's not an interesting question. Realizing that most people use them anyway, my goal in this post is just to show some useful tips on how to actually handle them during tests.

But first, let's look at a common implementation of the singleton (taken from Wikipedia):

public sealed class Singleton
{
    private static readonly Singleton instance = new Singleton();

    static Singleton()
    {
    }

    private Singleton()
    {
    }

    public static Singleton Instance
    {
        get { return instance; }
    }
}
This implementation poses two testing problems that need to be addressed:

  1. The private constructor - When writing tests, our main goal is to make them independent. Therefore we prefer that each test case work on a different instance, to avoid any chance of one test affecting another.
  2. The static Instance property - it's a valid assumption that much of the code will access the singleton through Instance, and it being static makes it harder to inject a fake in place of the singleton object.

There are several approaches for bypassing these issues:

1) Use reflection - either to create a new instance for each test, or to clear the created instance at the end of the test (a good summary can be found here).

2) Decouple the business logic from the creation logic - and test the business logic separately. An IoC container is a common technique for doing this, but a simple factory will do just as well.

3) Expose some special method for testing, allowing the test to tweak the internal field - a simple setter sometimes goes a long way.

And I'm sure there are more ways to skin this cat.
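As an illustration of the first approach, here is a rough sketch of a reflection-based reset. The field name "instance" matches the Wikipedia implementation above; note that writing to a readonly static field via reflection worked on the .NET Framework of the time, though newer runtimes may block it:

```csharp
using System;
using System.Reflection;

public static class SingletonTestHelper
{
    // Replace the private static 'instance' field with a brand new
    // instance, so each test starts from a clean slate.
    public static void ResetSingleton()
    {
        FieldInfo field = typeof(Singleton).GetField(
            "instance", BindingFlags.Static | BindingFlags.NonPublic);

        // Activator can invoke the private constructor for us
        // (the 'true' argument allows non-public constructors).
        var fresh = (Singleton)Activator.CreateInstance(typeof(Singleton), true);

        field.SetValue(null, fresh);
    }
}
```

Calling this helper from a test cleanup method keeps the tests independent of each other.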

I, however, prefer to leverage my tools as much as I can. Specifically, let's see how the Isolator (yeah, big surprise) can help me here.

Scenario one - Faking a Singleton Behavior

You need to test a class which uses the singleton, and during the test you need the singleton to behave in a manner which is very hard to simulate using the real production code of the singleton. You would like to use a mock, but it's hard to inject one, since the static Instance property is hard to fake using conventional approaches. In this case you can use the Isolator to set a fake behavior and mock the static Instance property. Here is an example:

public void FakingSingleton()
{
    // Create a fake instance
    var fakeSingleton = Isolate.Fake.Instance<Singleton>();
    // Set the faked behavior
    Isolate.WhenCalled(() => fakeSingleton.SomeMethod()).WillReturn(7);
    // Inject the fake into your production code
    Isolate.WhenCalled(() => Singleton.Instance).WillReturn(fakeSingleton);
    // Execute the test
    var actual = Singleton.Instance.SomeMethod();
    Assert.AreEqual(7, actual);
}


Scenario two - Testing the singleton class

Using the same technique, we would like to test the singleton's internal logic. However, since the constructor is private, it's harder to create a new instance for each test. Again the Isolator can be used:

public void SomeMethodTest()
{
    // Just a cool way to create a regular instance instead of using reflection.
    var fakeSingleton = Isolate.Fake.Instance<Singleton>(Members.CallOriginal);

    // Use this if you want your test to look similar to production code
    Isolate.WhenCalled(() => Singleton.Instance).WillReturn(fakeSingleton);

    var actual = Singleton.Instance.SomeMethod();
}

While all these examples are written for the .NET platform, one can achieve basically the same in Java using PowerMock, and even (to some extent) in C++ using MockitNow.

Sunday, 15 November 2009

Is Typemock Isolator Evil - Round N+1

Every few months the argument for and against the advanced abilities of the Isolator mocking framework bursts out again. This time it started with this post: Test driven design – Willed vs. Forced Designs.

Disclaimer: I used to work for Typemock and was in charge of the development of the framework for a long time, so I claim no kind of objectivity.

The argument for and against usually circles around the following:

On one side, people claim that the Isolator "breaks" the language barriers and, by allowing any kind of design to be implemented, will end up helping to build a poorly designed system.

On the other side, by allowing the freedom to design at will, the Isolator shifts the responsibility back to the developer, allowing him to choose the "best" design as he sees fit.

Here are some points I want to comment on:

If you need Isolator you have a bad design

Actually, that one is in most cases true. However, the plain claim "you have a bad design" also holds for too many cases on its own. Yes, I'm saying that most systems out there are poorly designed. I'm also saying that most software projects out there will fail. What I don't like about the initial claim is the conclusion that usually follows:

If you need the Isolator when you have a poor design, using the Isolator will leave you with a poor design.

That statement is plain wrong: you will end up with a poor design unless you learn how to design better. The effect the Isolator (or any tool for that matter) will have on a team's design skills is minimal at best. Pragmatically speaking, if you are working on a legacy system the Isolator is probably the better choice no matter what. If you are working on a new project and you are just starting out with TDD, most likely using the Isolator will increase the chances you'll be able to stick with it (adding the need to relearn design at this stage is so much harder). And if you're working on a new project and you do know how to TDD, then you should just know better and be able to safely use the tool. If you don't, then you really do have a problem.

Isolator breaks the natural barriers of the language

Yes it does. But the statement "Statics are the death of testability", which actually means "don't use static methods", adds barriers to the language that are not there. So what's the better approach? Again, I don't know. It really depends on your personal preferences and has nothing to do with what is the best design.

Actually, there's no such thing as the best design. Every design must always evolve to fit the system's needs.

Show me a concrete example

Example number 1:

Many systems have a dependency on the time of day. The naive way to approach that is to use DateTime.Now, but oops, that one can't be faked (it's static), making it a real pain to test. So experienced programmers introduce an ITime interface wrapping the system time, and then add some concrete code to make it work, all for the sake of testability. Here's an example, taken from the Testify project, of how to implement this:

public static class SystemClock {

    private static DateTime? fixedTime;

    public static DateTime Now {
        get {
            if (fixedTime.HasValue)
                return fixedTime.Value;
            return DateTime.Now;
        }
    }

    internal static void Set(DateTime value) {
        fixedTime = value;
    }

    internal static void Reset() {
        fixedTime = null;
    }
}

Please enlighten me as to why this is better/simpler than just using DateTime.Now.
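For comparison, with the Isolator the static property can be faked directly, no wrapper class needed. This is a sketch using the same WhenCalled API shown earlier, assuming the Isolator version in use supports faking DateTime.Now:

```csharp
public void FakingSystemTime()
{
    var someTime = new DateTime(2009, 11, 29);

    // Fake the static property directly instead of wrapping it.
    Isolate.WhenCalled(() => DateTime.Now).WillReturn(someTime);

    // Production code that reads DateTime.Now now sees the fixed time.
    Assert.AreEqual(someTime, DateTime.Now);
}
```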

Example number 2:

I have a class A that depends on class B. Class B is the ONLY concrete class implementing the interface ISomeInterface (and most likely this will hold true forever). Class A is the only place (for now, and most likely forever) ISomeInterface is used. Yes, this is a simple example, but since I'm inventing it, I get to set the rules.

There are several ways to approach this:

1. Use some IoC container to instantiate B and pass it to A - a valid OO way with many benefits. However, in my concrete example (and yes, I'm the one setting the rules) I don't have a true need for a full-fledged IoC yet, and this very simple scenario won't be the one causing me to start using such a big hammer.

2. Use a factory - a simpler, much more lightweight version of the IoC container idea. But still, at this point in time (and yes, I get to make the rules), under the specified circumstances it falls under the YAGNI directive. If the situation changes (i.e. more places use ISomeInterface, or more "kinds" of ISomeInterface evolve) a factory will be introduced, but for now I don't really need that.

3. Instantiate B somewhere and pass it into A (the simplest form of DI) - a very simple solution which is valid under this scenario. The downsides are that it exposes some inner working details of A, which may not be a good idea, AND most likely it only shifts the problem someplace else.

4. Just use a new statement somewhere inside A (let's say the constructor) - I like this way. It's the simplest thing that can possibly work, and since I'm not tied to the "design for testability" chains, I can do that. And just before I get chopped: yes, taken outside the context of this simple concrete example, this is most likely not a good strategy.
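To make option 4 concrete, here is a minimal sketch of the invented classes A and B (the names come from the example above; SomeMethod and DoStuff are hypothetical members added for illustration):

```csharp
public interface ISomeInterface
{
    int SomeMethod();
}

public class B : ISomeInterface
{
    public int SomeMethod() { return 42; }
}

public class A
{
    private readonly ISomeInterface worker;

    public A()
    {
        // The simplest thing that could possibly work: just new it up.
        // A tool like the Isolator can still swap this instance under test.
        worker = new B();
    }

    public int DoStuff() { return worker.SomeMethod(); }
}
```

The interface is still there, so if a second implementation ever shows up, refactoring toward a factory or DI is straightforward.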

I want to make the choices. I'm the professional trained to make them. I won't stand being TOLD how to do my job, not by a person, and especially not by a tool.

Thursday, 29 October 2009

Gathering worthwhile feedback

Tobias Mayer has got me (and I guess several others) thinking about the common feedback process used by trainers. While I think there's much to learn from the general-purpose form handed out at the end of a training, clearly doing it only at the end of a training session means you can't use it for improving during a lengthy course.

Therefore I use two different techniques for gathering feedback during a course.

Course Retrospect

This is actually quite a simple and effective technique. Towards the end of the second day, usually after I cover the sprint retrospective meeting, I conduct a short "retrospective" meeting with the aim of improving the actual course. My current favorite way is borrowed from Alistair Cockburn: we try to list out (together, in an open discussion) the things which are:

  1. going well for us so far,
  2. causing issues and difficulties,
  3. things that we want to try to do differently on the next day.

That way, not only do I gather feedback that I can use to improve, we also get the chance to practice and witness the power of sprint retrospective meetings.

Guided Questions

The second technique I'm using was inspired by Roy Osherove, and it's a little more subtle. The basic idea is to ask a few (usually 2) open-ended questions regarding the passing day. For a concrete example read these 3 posts: day 1, day 2 & day 3.

And this works really well. First, asking these questions at the end of the day makes people go over and reflect on what they have learnt during the day. It takes 5-10 minutes, in which I see them going through the written material trying to fish out an answer; just for that I think it's a great practice. Second, I really learn a lot about how they grasp Scrum: what the main difficulties are, what is seen as beneficial, what they think will be easy, and so on. And last, in most cases the answers show me how well I did as a trainer. For example, on the first day I got an answer saying that "Documentation is not needed." That was not the idea I was trying to convey (in fact I clearly stated that "it's not that documents are not needed..." at least twice), therefore I need to improve on that (clearing that point up again on the following day).

Both techniques are simple and quite fast (taking between 5-15 minutes), and they result in good feedback. Combining them with the general-purpose feedback form has really worked great for me so far.

What have we learnt today? (last day)

At the end of the last day of the course I asked two different questions:

  1. What will be the easiest thing to implement at your company?
  2. What is probably the hardest thing to implement at your company?

Here are the answers I've got (as is):

The easiest thing
  1. Pair Programming
  2. usage of Task Board (General purpose)
  3. Stand up meetings
  4. Creation of product backlog
  5. Agile estimation Techniques
  6. using Pair Programming to help bring a new team member up to speed
The Hardest thing
  1. Automated unit tests/TDD
  2. Division into Sprints (especially short sprints)
  3. Locating people for scrum roles and creating a self contained team
  4. XP
  5. Building a true self contained team.
  6. Pair Programming

Again, almost no overlap between the answers (other than a few saying TDD/AUT is hard). I really enjoyed this cycle of the course; the people attending were really great and I would like to thank them for their feedback.

If anyone is interested in attending, next time will be toward the end of December. Just leave a comment.

Tuesday, 27 October 2009

What have we learnt today? (day 2)

On the second day I’ve added two more questions to the list:

  1. What will be the most valuable thing to you?
  2. What is the most useless thing you have heard so far?

Here are the answers I've got (again not edited):

The Most Surprising
  1. A typical programmer manages 4-5 hours of ideal work per day.
  2. The actual Scrum Master role in a Scrum process.
  3. No hard delivery target (just an estimation), we commit
  4. Priority Poker.
  5. using Agile has brought companies to a defect rate of 1 per quarter.
  6. that sprints can be 1 week long including planning.
  7. that people can be involved/do tasks which are not in their main expertise area. (a QA helping a developer to estimate).
The most controversial
  1. the SM doesn’t need to be a team leader.
  2. The option for terminating a sprint.
  3. a team which managed himself for a long period of time.
  4. Vertical planning Vs Horizontal Planning.
  5. a team leader should not be the Scrum Master.
  6. sprint planning should take into consideration expertise of the various team members.
  7. that the team allocate its own tasks.
  8. no defined time for system analysis. no allocated time between sprints for turning the story into a requirement document.
  9. That SM has no authority over the team.
The Most valuable
  1. Techniques for estimations, task allocation within the team.
  2. splitting the work into very short tasks.
  3. Priority Poker
  4. Vertical Planning
  5. Group estimations
  6. not using buffers.
  7. the fixed sprint content that need to be formally approved at the end of the sprint.
The Most Useless
  1. A manager should “keep” a private buffer and should be completely transparent with his team.
  2. not allow people to multitask.
  3. too much QA in the process (i don’t have enough QA people)
  4. The Claim that “The Plan is worthless, Planning is important”
  5. Team leader as the SM of a different team.
  6. picking SM’s which are not team leaders.
  7. that the team leader is not involved in the process of estimations which should be left to the team only.

What amazes me time and again is that there is almost no overlap between the different answers.

Is Scrum good for me?

The second question I get from most people (after "Does it actually work?") is: is the Scrum process good for me?

Mike Cohn has shown a great technique for evaluating which of the processes out there is most suitable for your project. You can find the details here.

Sunday, 25 October 2009

What have we learnt today?

Today I've started another round of the Practical Scrum Course. At the end of the first day I seek some feedback in the form of two questions:

1) What was the most surprising thing you have heard today?

2) What is the most controversial thing you have heard today?

Here are the answers I've got (not edited):

The Most Surprising

  1. No true need for comprehensive documentation.
  2. Estimations are done in units which does not represent time.
  3. "Done" should be defined, sounds trivial but when one think of it ...
  4. Agile can works for large groups.
  5. Waterfall was presented as an example for a flawed process.
  6. Scrum can be an appropriate process for most projects (90%)
  7. Agile and Scrum, are established and well defined methodologies.
  8. No Clear definition of who is the PO

The most controversial

  1. Scrum can work for "Real Time" projects.
  2. One of the goal of an Agile Process is to have Fun.
  3. Velocity can't be compared between different teams.
  4. Short iterations results in intensive management, in which we repeat many steps which lead to big overheads. (this is in response to the idea that short iterations are the bets way to develop software)
  5. Stories should/can be represented by one sentence in the "customer language".
  6. a process can work without "Requirements Documents"
  7. Documentation is not needed.

Clearly there are some things I need to go over and refine again. However, I also think that we are making progress.

Sunday, 18 October 2009

More Information about C.R.A.P

More details about the usage of C.R.A.P and what C.R.A.P is all about can be found in the following links:

  1. Metrics of Moment
  2. Clean Code and Battle Scarred Architecture
  3. Alberto Savoia talks shop about C.R.A.P.


Measuring code quality is a tricky business at best. More than a few attempts have been made over the years, yet a single solution has yet to emerge. The C.R.A.P metric was born a few years ago; it tries to combine code complexity with code coverage to indicate (to some extent) the quality of a given piece of code. There's much more to say about the C.R.A.P metric, which I'll leave for the experts. I will say, however, that more information can be gathered on the Crap4J website.

Over the last few weeks I have been working on creating the Crap4Net project. The idea is to bring the C.R.A.P metric into the world of .NET development by supporting the various tools used in this world.

I am happy to say I managed to wrap this effort into an open source project which was published today over at CodePlex, you can view the project home page here.

For now, the tool is able to take the XML reports generated for code coverage and for cyclomatic complexity and generate a single XML file containing the C.R.A.P data. Nothing too fancy, but for starters this should do. The tool currently supports coverage data generated by PartCover or by MSTest's internal coverage tool; cyclomatic complexity data is generated using the Code Metrics add-in for Reflector. What was important to me is the ability to take this tool and plug it into a continuous build cycle, less so the visibility of the data.

Future plans span two main directions. The first is to include support for other coverage tools (if you have any preferences please leave me a comment) and, if needed, other formats of cyclomatic complexity reports. The second is to add some fancier reporting, which will include a basic HTML summary and maybe even some more stats on the underlying code.

So if you ever wondered how crappy your code is, don't wonder - just download it and get an answer.

Tuesday, 13 October 2009

Iteration Zero – using Testify

One approach when starting out on a new project is to dedicate the first sprint/iteration to building the framework for the development process, that is, building in all the stuff which will be required during the lifetime of the product: things like setting up the build server, preparing the installer, and establishing the framework for unit and acceptance testing. In short, all the technical details an agile development environment expects to just be there.

Today I managed to get into Mike Scott's session, in which he demonstrated his very nice tool called “Testify”.

I don't want to go into all the details, since it appears that just today Mike finished open-sourcing the tool (it can be found here). I will however say that during the short session (about an hour), Mike managed to create a new project which contained:

  1. Some source code.
  2. Executable unit tests (using NUnit)
  3. Executable acceptance tests (using FIT)
  4. A source control repository (Subversion based)
  5. A continuous build server (using CC.Net)

Since I have done this kind of thing a couple of times, I was really, really impressed.

Monday, 12 October 2009

Agile Testing Days - ATDD

I went to Berlin for the Agile Testing Days conference. I've joined Elisabeth Hendrickson's tutorial about ATDD, a full-day tutorial about the basic concepts and techniques of doing ATDD. For me this is a great opportunity to augment my knowledge of best practices, tools and ideas for improving my ATDD skills.

In the first part, what really stood out for me is the time we spent on the various silos existing in a typical development environment. Specifically, what I liked best is the emphasis Elisabeth puts on the need to break up those silos, focusing on establishing only a clear separation between the role of deciding the “What” and the role of building the “How”. She quoted Tobias Mayer, who in this post defines these roles as “The What Voice” and “The How Tribe”.

Thursday, 17 September 2009

Integration Tests are not enough - Revisited

A while ago I wrote that Integration Tests are not enough (Part I). I think that J. B. Rainsberger does the subject much more justice than I did in his latest session (Agile 2009), titled:

Integration Tests are a Scam.

During his talk he also mentioned, as a side note, that what takes time and experience to master when doing TDD is the ability to "listen" to your tests and understand what they are telling you. I couldn't agree more; I wonder how we can help speed up this process.

Any thoughts?

Friday, 11 September 2009

YAGNI - It all Connects

Everything in software connects. Here's a common thread that connects the following agile principles:

YAGNI - You Ain't Gonna Need It.

YAGNI refers to the fact that in software development it's usually useless to try and think ahead, since most of the time we just don't really know whether we are going to need something or not. As Ron Jeffries puts it:

"Always implement things when you actually need them, never when you just foresee that you need them."

KISS - Keep It Simple (Stupid)

Which is the same as saying do "The simplest thing that could possibly work". We use this phrase when we want to emphasize that simplicity in software usually has much more value than we initially think.

There is no one perfect design

I'm not sure this is specific to agile, but this one has been gnawing at me for quite some time. Too many times I've been in design sessions where people were deeply arguing about whose design was the best, when clearly they were all good enough. While there is such a thing as good and bad design, mostly it's a matter of context. Every good design is aimed at making a certain type of change easy. However, by doing so, it often makes other types of changes harder.

So how do these three connect?

If we accept that there is no one perfect design and each design is best suited to a given type of change, and we also accept that we lack the gift of foresight, so that most of the time we fall under the YAGNI principle, then we are left with the conclusion that we must do "The simplest thing that could possibly work". Even if at that point in time the resulting design doesn't look great, that's OK. If and when we need to apply changes, we will know exactly what type of change is required, and only then shall we evolve the design (by refactoring) to cleanly accommodate that type of change (which is most likely to happen again).

(I would like to thank Uncle Bob, whose talk about SOLID design at NDC helped me realize how these principles close the circle.)

Sunday, 6 September 2009

Practical Scrum Course

At the end of the first day of my Practical Scrum course I asked 2 questions:

1) What was the most surprising thing you have heard today?

2) What is the most controversial thing you have heard today?

Here's what I've got:

The most surprising

1) That even for the simplest "project" it takes 2-3 tries to reach a good way of doing things. (In response to an exercise we conducted)

2) That the waterfall methodology is an example of a flawed model. (taken from wikipedia: Royce was presenting this model as an example of a flawed, non-working model (Royce 1970).)

3) That so far the Scrum process is not that far from what we are actually doing.

4) That plans don't have to be too detailed, and paperwork may actually be less important then we think.

The most controversial

1) The claim that we are not good at estimations.

2) That we can start development without planning everything initially

3) That we can work (most of the times) in cycles of 3-4 weeks and give real business value to the customer at the end of each cycle.

Agile Testing Days

I'll be going to the Agile Testing Days conference in October and will be giving a session on "Quality and Short Release Cycles". It's my first time speaking at such a big conference alongside leading figures in our industry, so I'm very excited and quite nervous.

On a similar note, the latest issue of the "Testing Experience" journal has been released and includes an article of mine (page 16). The issue is all about agile testing, includes some very interesting articles, and can be downloaded here.

Let me know what you think.

Thursday, 3 September 2009

Branching is evil

Almost a year ago I blogged about "Source Branches". Over the past year this has somewhat become an ongoing discussion I have repeated a few times; in fact I believe that post was my most commented one ever. Today Martin Fowler posted about the differences between:

  1. Simple Feature Branch - each feature is developed on its own branch and merged when finished.
  2. Continuous Integration (CI) - no branching
  3. Promiscuous Integration (PI)- same as feature branching with the twist that each feature directly merges with other features that might affect it.

He also explained the risks and benefits of each approach, and I really liked that post.

What I would add to that post is the human aspects that each approach brings.

Feature Branch - The easy way out

In my eyes, feature branching is choosing the path of least resistance. It's an approach born in fear ("I'm going to break the system before I'll make it work again"), taken when wanting to work in isolation without the need to constantly communicate with the rest of the team. In some cases I've seen, the actual branch was born with ill intent in mind.

Promiscuous Integration - the middle ground?

PI, I think, is slightly better off. Being the middle way, it does encourage communication (to a degree), while keeping some of the apparent benefits of working in isolation. But the more I think of it, the more I doubt this can actually work for a long period of time; it just seems too much of a hassle. I have a strong feeling that in practice there's a good chance this will revert into a "main trunk" for bug fixes with an ongoing branch for new development (i.e. classical branching), or into simpler feature branching.

Continuous Integration - Encourage quality

The continuous integration path is the one I like best. The thing I like most about it is that working on a single stream of development forces the team to sustain high quality standards. For this to work, all committed code MUST not break the system. In fact, when committing into a single line of development one must be sure that even if the work is half done it will have no negative effect on the system (and in most cases could be easily taken out). And quality, as you all know, is the only way to go fast.

Disclaimer - There are business contexts in which branches CAN'T be avoided, and yes, modern version control systems make this a lesser issue. But to me branching is always an inferior solution to a bad situation.

Running all tests in solution using MSTest

There aren't many times I truly have something bad to say about MS. It's not that I'm a great fan, but I admit that in most cases they do get the job done (it just takes a couple of tries, though).

However, there is one thing that always ticks me off when working with MS products, and that is their ability to create products which don't work well with anything besides other MS products. Normally I'm not into conspiracies and would have written it off as simple mistakes, but I'm guessing I just have too much respect for MS developers.

And the thing is, this always happens with the really annoying small things, when you least expect it, but when encountered they really make you go *#*#*!*!$*!*#!.

And the story goes like this:

At one of my clients I'm helping to set up the build server to run all the unit tests the development team is going to write (which is why I was brought in in the first place). After some initial work configuring CC.Net (which didn't take long at all), I wanted to add the unit test execution part, so I went to search for a way to execute all tests in a given solution. Shouldn't be too hard, right?


After touring the web for some time, it appears that MSTest has, how unconventionally, left this somewhat trivial ability out of its tool set (they didn't forget to put a button in the IDE for doing so, though). It's easy enough if you use TFS as your build server solution, but using anything else means that one needs to feed the MSTest command line with all the various test dlls explicitly and MANUALLY!

So here are some other options:

1) Taken from Stack Overflow (towards the end) - a way to semi-automate this using MSBuild, based on some naming conventions.

2) The MSTest command line can take a .vsmdi test list as its execution input, so one can simply create a list with all existing tests and use that. However, now one needs to maintain that all-knowing list.

3) Write a small utility which, based on the data inside the .sln file, produces a list of all test dlls - shouldn't be too hard, but who would like to do that?
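Such a utility might look roughly like this - a hypothetical sketch, assuming test projects follow a "*Tests" naming convention and all assemblies end up in one output directory:

```csharp
using System.IO;
using System.Linq;
using System.Text.RegularExpressions;

public static class TestContainerLister
{
    // Scan the .sln for project names ending in "Tests" and build the
    // /testcontainer arguments the MSTest command line expects.
    public static string BuildMsTestArgs(string slnPath, string binDir)
    {
        var projectLine = new Regex("Project\\(.+\\) = \"(?<name>[^\"]+)\"");

        var testDlls = File.ReadAllLines(slnPath)
            .Select(line => projectLine.Match(line))
            .Where(match => match.Success)
            .Select(match => match.Groups["name"].Value)
            .Where(name => name.EndsWith("Tests"))
            .Select(name => Path.Combine(binDir, name + ".dll"));

        return string.Join(" ",
            testDlls.Select(dll => "/testcontainer:" + dll).ToArray());
    }
}
```

The resulting string can then be appended to the MSTest invocation in the CC.Net build script.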

And the last option (which I think I will choose in the end):

4) Kind of brute force, but if MSTest can only receive one dll as input, let's give it one. I'm not talking about actually writing all the unit tests inside a single DLL; I'm thinking about using ILMerge to consolidate all DLLs in a given output directory (production and tests) into one giant DLL and feeding that into MSTest. (I just hope it will be able to process that dll.)

Really Really ANNOYING!!!!!

Sunday, 26 July 2009

Scaling the PO team

The second session was planned to be another session with Alistair Cockburn; however, he chose to ignore the official session title and instead conducted an interactive activity demonstrating the power of iterative development and the effectiveness of an adaptive process.

After that I joined a session focusing on the role of the PO in a Scrum process and how to scale this position without damaging the entire process.

The role of the PO in a Scrum team is to represent the client. The PO is responsible for maximizing the ROI, i.e. he's the one responsible for generating $$$. Specifically, the PO is in charge of grooming the product backlog, which includes, among other things, writing user stories, prioritizing them, and keeping track of what is getting built. The important thing to remember is that this is an active role: stories are not invented from thin air, they are discovered by talking with users, customers and sometimes even the developers. A good PO will actively manage the backlog: he will keep priorities straight, refine stories as new knowledge is gained, remove less important stories and add new ones.

A key point of being a PO which is sometimes missed is that the PO should consider himself part of the team. When a PO works in disconnection from the team, things will usually not go smoothly. For example, while in the Scrum "rule book" the PO approves stories during the demo at the end of the sprint, effective POs do not wait for it; they actively review and approve progress during the sprint. After all, why wait?

Scaling the PO

But sometimes one person is not enough to effectively cover all the responsibilities of the PO role, and the big question is how to scale that role. Like anything else, there is no single answer on how to correctly scale the PO role; each method has its own strengths and its own risks. The key to success, like anything else in life, is to always manage the risks and be on the lookout when things are not going as they should.

One way of scaling the PO is to divide the role by area of responsibility, i.e. one person (or more) will work closely with the customer and one (or more) will work closely with the team. Another option is to split the work more naturally according to the product's technical components. But no matter how the work is split, it's crucial to first establish good communication channels between the people functioning as POs. Normally a PO team will need a much higher level of cooperation than other teams to succeed.

Another tip for scaling the PO role is to adhere to agile values, i.e. always inspect how the PO team functions and improve. It is also possible to use a modified Scrum process to manage the work inside the PO team.

When not to Scale

But the most important tip of this session was when to choose not to scale. Sometimes organizations scale the PO not because they actually need to, but because there's an underlying dysfunction which was not recognized. For example, I've seen a PO who spent so much time on customer relations that he didn't have time to help the development team. Naturally the team complained and suggested adding more personnel to the PO role. Only after sitting with the PO and raising the team's issues did we realize that the PO did have time, but he didn't fully understand what was required of him.

So before scaling, take a good look; maybe scaling is not the answer. Maybe, just maybe, there's a better alternative.

Thursday, 16 July 2009


ScrumBan

After Alistair's lecture the convention divided into 4 different tracks. I chose to attend the session titled ScrumBan, given by Yuval Yeret (a coach at AgileSparks). In this talk Yuval introduced some of the ideas behind Kanban and how they fit inside a Scrum process.

So what is Kanban

I'm not going to explain too much here; info on that can be found here and here. I would just say that Kanban is what guided Toyota when designing its lean manufacturing line, and it very much resembles a Scrum process with two major differences:

  1. It's a flow-based rather than an iteration-based kind of process.
  2. Being a flow-based process, the amount of Work In Progress (WIP) allowed is limited.

When to use Kanban

Kanban should be used in situations where one can't plan effectively, even for the very short cycles found in Scrum. This usually happens in specific kinds of projects; one example would be projects which are mainly about support and maintenance. In these, priorities really do change on a daily basis, making planning useless.

However, many projects do have specific stages in which it's really hard to plan ahead, and naturally they adopt a more Kanban-style way of working. This happens a lot towards a release date, during the hardening phase. In this phase most of the work is about solving issues found as fast as possible, and there is not much use in doing extensive planning ahead.

When not to use Kanban

Kanban is sometimes adopted to hide a Scrum smell. Sometimes an organization finds itself incapable of planning and adopts a Kanban way of working. However, the inability to plan can sometimes be a real dysfunction that needs solving. Maybe the PO is not doing a proper job in managing priorities, maybe an external force like the CEO is injecting too much noise… Adopting Kanban in this state will only hide the problem (at least for some time). In these contexts Kanban can be viewed as a ScrumBut.

The effects of doing Kanban

The most visible effect of Kanban is that from time to time team members will be blocked. Since the amount of WIP is very limited, there will be cases in which a team member is not allowed to pick up new work and needs to find himself something to do. The first option this member has is to help others, and indeed, one of the goals of a Kanban process is to generate teamwork in order to solve bottlenecks in the process.

But there are cases in which one can't help others; this is the time in which a member works on improving the process, whether by trying to understand why the bottlenecks happen and resolving them, or by doing any kind of work to improve general productivity (paying down technical debt, optimizing some development process, cleaning up…).

Another effect Kanban has is that regular measurements like velocity and burn-down charts lose their meaning. You can't draw a burn-down chart when not working in time boxes, and the same goes for velocity. Instead, in a Kanban process one uses a modified burn-up chart to measure the WIP at any given time (where the goal is to minimize it) and the "lead time", which is the time it takes for a task/feature to be finished (naturally the goal is to shorten it).
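To make those two measurements concrete, here is a minimal sketch of how lead time and WIP could be computed from task timestamps. This is my own illustration, not something from the talk, and the type and member names are made up.

```csharp
using System;
using System.Linq;

// A hypothetical Kanban task carrying the timestamps needed for flow metrics.
class KanbanTask
{
    public DateTime Started;
    public DateTime Finished;
    public TimeSpan LeadTime => Finished - Started;
}

static class FlowMetrics
{
    // Average lead time over finished tasks; the goal is to shorten it.
    public static TimeSpan AverageLeadTime(KanbanTask[] done) =>
        TimeSpan.FromTicks((long)done.Average(t => t.LeadTime.Ticks));

    // WIP at a given moment: tasks already started but not yet finished.
    public static int WipAt(KanbanTask[] all, DateTime moment) =>
        all.Count(t => t.Started <= moment && t.Finished > moment);
}
```

Plotting WipAt over time gives exactly the kind of modified burn-up chart described above.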

To summarize: Kanban is an agile process which resembles Scrum but takes it a little further by adopting a flow-based kind of process. It was successfully adopted by Toyota, and it can be adapted to enhance a Scrum process or to replace it at specific stages of the product life cycle.

Next: Scaling the PO Role

Opening talk – Alistair Cockburn

The opening talk's title was "Effective software development in the 21st century". As promised by the title, this session was packed with too many great insights, advice and ideas for me to be able to repeat them all. But some stuff did stick, and I thought I should share.

Agile has grown beyond small organizations

Yes, this simple fact still comes as a surprise to people outside agile circles. However, one must remember that originally the Agile methodologies (mainly XP and Scrum) were created for such small teams, a fact that some Agilists tend to forget. What is important is that current methodologies, as used today, have grown, changed and adapted to all kinds of projects, and they are still growing. For example, the trend in which more engineering practices are picked up by organizations doing Scrum might soon force the powers that be to finally include some of them in the "book of Scrum". Yes, who knows what the future of Scrum might be.

Software development as a cooperative game

A great metaphor; the full details can be found here. In short, when looking at development as such a game we can say that:

  1. Writing code can be viewed as a finite cooperative game with set targets, goals and an end.
  2. Developing a product can be viewed as an infinite game in which the only true goal is to actually play the game hopefully as long as possible.
  3. The process that we choose is the actual “Strategy” by which we choose to play the game.

So where does that leave us?

With two (actually more) important observations. First, like any other complex game, you don't find yourself in the same state twice, and a single winning strategy does not exist. In fact, when talking about strategies, the terms right and wrong do not apply; strategies are either weak or strong. What's more important is that one can and should pick different strategies for different situations. A process which works for a startup company probably won't work as well for a big team. Second, the difference in nature between writing code and developing a product might explain a very basic development dilemma. On one hand we want to 'win' the game of writing code, which has specific goals and a known end date. On the other hand we must remember that from a product POV the game is (hopefully) infinite, and we always want to be prepared for the next 'round'. Meaning that it's not enough to write working software satisfying current needs; the quality of the written code must be kept good enough to support future development. And those two needs sometimes pull us in different directions.

Developing Software is a Craft

It's not the first time I've heard this; there's an ongoing, growing movement of developers calling on all of us to start treating writing software as a "profession". Alistair, however, summed it up quite well:

“A developer must relearn how to program every few years”

And indeed, software development techniques are continuously changing. When I started out, my first programming job involved writing code in C/functional C++ for an embedded system. A couple of years later I had to relearn how to program when I switched to developing in object-oriented C++. A few years passed and I was introduced to Agile, which came with practices like TDD and pair programming, so again I had to relearn the skills of writing software. And lastly, a couple of years ago I started developing under .NET, which again made me relearn much of my skills. Bottom line: I feel all this relearning is the only way to keep up with the profession of writing software.

Decisions as unit of inventory

The last part of the session discussed the idea that software development is mostly about making decisions, and that the speed at which we transfer those decisions between minds is what actually dictates the productivity of a development team. The best way to speed up development is by locating and removing all the barriers which slow down this flow of ideas. For example, the reason distributed teams are less efficient than collocated teams is that they communicate using inferior channels, which of course reduces the efficiency with which ideas can be transferred.

The nice thing about using this analogy is that the entire development cycle can now be reduced to a manufacturing line, and all the appropriate theoretical knowledge can be drawn upon. Specifically, by using decisions as the unit of inventory we can map the stations along the development "line" and try to locate bottlenecks by recognizing the places in which decisions are being delayed.


There was much more going on in the session; in fact, I'm sure that in order to grok it all I'll need to listen to it at least a few more times. I do hope, however, that I managed to convey at least some of that knowledge.

Next post: ScrumBan (how to use Kanban in Scrum)

Wednesday, 15 July 2009

Second Israeli Scrum Gathering

Well, the second gathering has ended and it was great. While the number of people attending seemed a little lower than the first time (I estimate there were about 150-200), content-wise this day was way better. I think that this time most of the sessions were aimed at a more advanced audience, assuming at least basic knowledge of what Scrum and Agile are.

Another improvement was that this time the keynote speaker was physically present. While it was nice hearing Ken Schwaber through a live feed, it didn’t have the same effect as having Alistair Cockburn in the same room. And hearing what Alistair has to say is fascinating (more on that will follow).

Like last time, the day opened with a lecture given by the keynote speaker, followed by breaking up into 3 tracks lasting until the end of the day, when we gathered for a closing panel discussion. All in all I was able to attend 5 sessions, all of them immensely interesting. I just wish someone would find a solution enabling a person to attend two lectures at the same time.

Tuesday, 14 July 2009

Off to Second Israel Scrum Gathering

The second Israeli Scrum User Group gathering is happening today. Last time was really good and I expect this time to be even better. The keynote speaker of the day is Dr. Alistair Cockburn, and the rest of the agenda looks even more promising.

So if you don't have any special plans, I'm sure it's not too late to come.

Tuesday, 23 June 2009

Tweaking Outlook to work With GMail

I have not yet managed to give up on email clients. I know that the "new generation" long ago switched to using a web client to manage mail, but I still like the user experience that Outlook gives me (along with its calendar).

The trouble is that working on multiple machines requires some tweaking to make it comfortable.

Downloading emails to multiple machines

The trouble starts from using the POP3 protocol. In general, that protocol is aimed at single-machine use: when you download email to a given client, even if the messages are left on the server, they are not downloaded to another machine. One possibility is setting up GMail to use IMAP, but I personally don't like that kind of setup.

Luckily there's a way to work around this:

Using POP on multiple clients or mobile devices

The trick is to change the user account name to start with "recent:" and to leave the messages on the server. In this mode the client downloads all new messages received in the last 30 days, regardless of whether they have been downloaded on a different machine.

Warning: when first setting this up, Outlook will download all messages from the last 30 days AGAIN.

However, there is an annoying side effect: after I set this up, Outlook started downloading all outgoing email to the inbox as well.

Setting a rule to delete Sent mail

A simple rule should have solved this side effect; however, Outlook rules don't really support complex logic. My first attempt at defining a rule to help me was:

"delete all incoming mails sent by me and not addressed to me" (yes I have the annoying habit of sending mails to myself of things I like to remember). but outlook does not support the "not" operation.

After several attempts at combining rules I did manage to define the rule I needed:

from <me> move to <trash> except where my name is on the to box or my name is in the cc box
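To see why this combination works, the rule can be read as plain boolean logic: the "except where" clause supplies the NOT that Outlook's rule editor lacks. A sketch (the method and its parameters are my own illustration, not any Outlook API):

```csharp
using System;
using System.Linq;

static class SentMailRule
{
    // "from <me> move to <trash> except where my name is in the To or Cc box"
    // is equivalent to: delete when I'm the sender AND I'm in neither To nor Cc.
    public static bool ShouldDelete(string from, string[] to, string[] cc, string me) =>
        from == me && !(to.Contains(me) || cc.Contains(me));
}
```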

So there you have it: in order to use GMail POP3 access on multiple machines one needs to:

  1. Add "recent:" to the user account name
  2. Setup a rule to delete all sent mails

Sunday, 21 June 2009

Versioning Scheme

There are various schemes for handling version numbers, but so far I hadn't really encountered anything meaningful in any of them. Several days ago I came across a scheme which I really liked.

The scheme is a variant of Change Significance and is specifically aimed at API libraries. That is, while I'm sure it can be extended to regular products, its basic semantics are aimed at API libraries.

And the semantics go like this (using a four-number sequence):

  • a change in the first number means the API has changed in a breaking manner.
  • a change in the second number means there were additions to the API.
  • a change in the third number means the internal behavior has changed (without affecting the API).
  • the last number is a sequential build number.

Let's take NUnit as an example. If NUnit adopted this scheme, then when upgrading from, let's say, 2.4.0 to 3.0.0, I would expect that some of my existing tests would need to be updated to reflect the changes. When upgrading from 2.4.0 to 2.5.0, I would expect to see some new abilities reflected in new APIs (but my existing tests should still stay valid). And last, when updating from 2.5.0 to 2.5.1, there would be some internal changes in behavior that shouldn't affect me at all.
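The semantics above can be captured in a few lines. This sketch uses .NET's built-in Version type (Major.Minor.Build); the class name and classification strings are my own:

```csharp
using System;

static class VersionSemantics
{
    // Classify what a consumer of the API should expect when upgrading,
    // checking the numbers from most to least significant.
    public static string ExpectedImpact(Version from, Version to)
    {
        if (to.Major != from.Major) return "breaking API changes";
        if (to.Minor != from.Minor) return "API additions only";
        if (to.Build != from.Build) return "internal behavior changes";
        return "new build of the same code";
    }
}
```

With the NUnit example, 2.4.0 to 3.0.0 classifies as breaking, 2.4.0 to 2.5.0 as additions only, and 2.5.0 to 2.5.1 as internal behavior changes.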

I really liked this one when I heard it, what do you think?

Tuesday, 16 June 2009

TDD Problems for Practice

Every time I watch or hear a TDD lecture, I see the same examples. While it's hard to go very deep in a 30-60 minute lecture, the commonly used examples don't really reflect real-life scenarios. How many of us, in our daily jobs, are coding a calculator?

When I'm asked (which happens 90% of the time) if I have more "real life" examples, I redirect people to the TDD Problem Site:

The aim of this site is to contain a growing collection of software problems well-suited for the TDD-beginner and apprentice to learn Test-Driven Development through problem solving.

And the really cool thing about those problems is that they follow these rules:

  • they are real-world, not just toys
  • they are targeted towards learning TDD (that is: they are small and easy enough to work out in say half a day)
  • they don't involve any of the harder-to-test application development areas: GUI, database or file I/O. (since those topics are considered too hard for the TDD-beginner)
  • they have been solved by a TDD-practitioner previously, proving their appropriateness for this site

If you want to practice doing TDD and feel that it's too hard to do on your production system, these examples are a great place to start practicing real-life TDD.

Sunday, 7 June 2009

Myth Busted - NOT

In a recent post Scott Ambler starts by claiming

Agile Myth:High Quality Costs Less than Low Quality - Busted! (at scale)

Later on, when reading the actual content, I learnt that he refers to very specific cases:

For example, in situations where the regulatory compliance scaling factor is applicable, particularly regulations around protecting human life (i.e. the FDA's CFR 21 Part 11), you find that some of the URPS requirements require a greater investment in quality which can increase overall development cost and time.


This is particularly true when you need to start meeting 4-nines requirements (i.e. the system needs to be available 99.99% of the time) let alone 5-nines requirements or more. The cost of thorough testing and inspection can rise substantially in these sorts of situations.

In my opinion he went a little off charts with his claim.

First, what exactly is the "Myth", so to speak? Is it simply "High quality costs less"?

Well, actually it's a little more subtle than that. What the agile community has found again and again (as Scott mentions) is that it costs less to work in a high-quality mode when you need to reach and sustain acceptable quality. After all, quality is infectious. In general it costs less to produce crappy systems, but mostly those just fail when the quality needs catch up with them.

But back to the examples.

I don't have experience with life-critical systems. However, is there a possibility that what actually costs is the regulations themselves and not the underlying quality? Is there a way to reach the necessary life-safety quality without following those costly regulations (at lower cost)? I don't know. What I do know is that the FDA regulations are costly to implement and originate in a time before the Agile paradigm shift.

Highly available (HA) systems, on the other hand, I do understand. In fact, I was in charge of developing an HA solution for a big billing company, and here Scott's argument falls short.

Reaching 4 and especially 5 nines has little to do with the quality of the developed system. In order to get to that level of availability you must have an integrated solution, and there lies the cost of five-nines systems.
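A quick back-of-the-envelope calculation shows how tight these budgets are (my own arithmetic, using a 365.25-day year):

```csharp
using System;

static class AvailabilityBudget
{
    // Minutes of downtime allowed per year for a given availability level,
    // e.g. 0.9999 for four nines, 0.99999 for five nines.
    public static double DowntimeMinutesPerYear(double availability) =>
        (1.0 - availability) * 365.25 * 24 * 60;
}
```

Four nines leaves roughly 53 minutes of downtime per year; five nines leaves barely 5 minutes, which is not the kind of target you hit with code quality alone.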

So what myth has been busted?

Yes, there are cases in which specific aspects of quality will drive costs up. But taking a very specific example and generalizing it to "Busted (at scale)" is an obvious mistake in my book.

Wednesday, 3 June 2009

Evolving Design - Its a must!

A couple of days ago I read Gil's post about "Good Design Is Not Objective", in it he writes

So are there "good" or "bad" designs? Every person has an opinion. Opinions change over time. Problems and technologies change over time, and a design can help or interfere with the project, but you'll know that in hindsight.

Yesterday, at one of my clients, a simple question turned into an interesting discussion. One of the software architects asked me if I had ever seen (and what do I think of) a system which uses a global variable repository: one that every part of the system can use at will to store data, and which is used as a way of communicating between various parts of the system. Apparently that kind of approach is used in their system and he wanted to know what I would say.

After thinking a little, I said that the only system which seemed similar to me was the Windows registry. I said that I had never encountered such a system in a project I was working on (at least not on the scale he was talking about). I also said that in my past, each time I used a global variable in a similar manner I regretted it. And last, that it really sounds like a very "old-fashioned" approach to design, something that kind of contradicts object-oriented design.

The guy told me that the part in question is very old and was written some 15 years ago when the company was at its startup stage. It appears that the entire system evolved on top of it, which made it very hard to replace; in fact, the cost was so high that each time someone considered it, they decided to leave it as is.

Now I think this is a very interesting case that demonstrates how our continual search for the Best Design is just stupid.

There is no such thing as the Best Design!

Like most things, design also carries a context. In this case, the mechanism they have was (as the guy admitted) the quickest and cheapest way of reaching a product (one which people would buy). At that time (15 years ago), the technology (they were writing in C) and software engineering approaches were in line with such a mechanism, and it was a reasonable design at that point in time.

Is it still so today? Absolutely not!

The company is no longer a startup; what was good at that time is really bad today. The needs have changed over time, the technology has changed over time, and the product has changed over time.

As Gil said it:

Time Changes everything

The Best Design today probably won't be the Best Design tomorrow. A developer accepting that fact can focus on producing a "good enough" design and evolving it as needed over time.

In my opinion, that was the actual mistake made. It's not that the design was wrong; it's the fact that they assumed it wouldn't need to change. That assumption made the cost of actually changing it high, resulting in the unsuitable design they have today.

Tuesday, 5 May 2009

How NOT to write Code Examples

While working on one of my pet projects, I needed to extend the Visual Studio IDE. Before starting out I wanted to get a better understanding of the IDE extensibility model, so I allocated some time to do a spike in order to get a feeling for the basic capabilities and the effort that would be involved. Usually when doing spikes I like to dirty my hands as soon as possible, so I thought the best way would be to create my own sample add-in and play with it.

So the first order of business was to create my own add-in project. That was fairly easy to accomplish since there's a built-in wizard for creating an add-in project (it hides under Other Project Types -> Extensibility):


The next step for me was to try and add a custom menu command, to be able to insert some custom logic and check out its behavior. It didn't take me long to find the following article on MSDN: How to: Expose an Add-in on the Tools Menu (Visual C#).

After reading the initial description:

When you create an add-in by using the Add-In Wizard and select the option to display it as a command, the command is on the Tools menu by default. If you skip this option when you create the add-in, however, you can simply run the Add-In Wizard again, check that option, and then copy your existing code to the new add-in.

If doing this is not possible, though, the following procedure produces the same result.

I was encouraged. This was exactly what I needed.

So I went ahead and followed the instruction titled:

To add a menu command to an existing add-in

The first step was to copy-paste a given piece of code:

1. Replace or change the add-in's Connect class and OnConnection() procedure code to the following:

(actual code can be found in the article itself)

Followed by copy pasting 2 new methods:

2. Add the following two required procedures, QueryStatus and Exec

(actual code can be found in the article itself)

and that's it.

Easily enough I did those things and went on to compile and test it. (A nice thing about the add-in wizard is that it takes care of deploying the add-in, and it puts in the debug command line what you need in order to safely open a new IDE instance with the updated add-in.)

did it work?

Of course not (otherwise I wouldn't be writing this post would I?)

Actually, it was far worse than not working at all. When trying to figure out what was wrong, I got this irritating inconsistent behavior that didn't point me in any way to what was causing it. At first I figured I had done something wrong, so I went ahead and did everything again (from scratch). Naturally that didn't help. Then I went and tried other examples from the web for the OnConnection method, but still nothing.

To make a long story short, after several frustrating hours debugging, I took the needed break and went back to reread the MSDN article. This time I paid extra attention to all parts. At the end of the article there was the following paragraph:

Each time you implement IDTCommandTarget, you must add these two procedures. A quick way to do it is to select IDTCommandTarget in the Class Name drop-down box at the top-left corner of the editor. Select each procedure in turn from the Method Name drop-down box in the top-right corner. This creates the necessary empty procedures with the correct parameters to which you can then add code.

The Exec procedure is called when a user clicks your menu command, so insert code there that you want to execute at that time.

Don't get me wrong, I had read the entire article twice before, but on the third time it hit me: the Connect class needed to implement the IDTCommandTarget interface. And since the wizard which created the class didn't do so, and nowhere in the article was it mentioned explicitly, I didn't add it myself.

Lesson Learnt?

Well the obvious one would be

when everything else fails read the f***** manual

but that would be a cliché.

The actual message I'm trying to send out here is meant for all those writing API documentation, articles and other code examples.

Please take the extra step and try your own examples before publishing them. Not on your personal machine; try them out on a clean system where you mimic (as closely as possible) the steps that will be taken by those using the examples. Several hours would have been spared me (and probably others as well) if that had been done in this case.

Monday, 4 May 2009

The future of scrum

Lately I've heard rumors about SCRUM starting to define the developer role inside the Scrum team. Yesterday, in the 3rd Israeli Scrum forum, those rumors were confirmed when Danko announced that the Scrum Alliance is contemplating adding a Scrum Developer certification course. When I asked for details, he said that it's just the beginning, but the Scrum Alliance is now starting to discuss the various engineering practices and decide which of them to adopt as inherent parts of SCRUM.

Its about time.

A good Development process

One of the things that always bothered me about SCRUM is the idea that one can define a DEVELOPMENT process without actually going into how software is developed. This led me to describe SCRUM as a management process which is very suitable for managing software development projects, not as a full development process. Therefore, when I consult for companies trying to adopt Scrum, I always make it a point to introduce them to some of the engineering practices included in the XP process (mainly TDD, CI, pair programming and refactoring). Others (Dave Rooney, Martin Fowler, Ron Jeffries and more) have also expressed their concerns that SCRUM is not enough, and that in order to sustain the benefits and become a hyper-productive team one must adopt some engineering practices to enhance the SCRUM process.

Are XP and SCRUM merging?

Although it is very early in the process, I do believe we are finally starting to witness one of the most anticipated moves in the agile community: the evolution of the SCRUM and XP methodologies into a unified methodology that will encompass all aspects of software development. I'm guessing this is only natural following the joining of key figures from the XP community to the Scrum Alliance.

The birth of ScrumP

For me there was never a place for two sister methodologies in the software world. In fact, most of the places I've seen have chosen to develop using an XP/SCRUM hybrid methodology. Those places which did not do so mainly felt it was too risky to take it all on at once, and thought it better to focus on specific aspects of the process to begin with, adding more as they got better.

So lets welcome the newborn baby and help him make his first steps in the world. I hope that when he grows up he will help us all do a better job.

Friday, 3 April 2009

ALT .NET II - Wanted Board

For those who might have missed the chance to write up the details:

ALt .net 002

Israel ALT .NET II was a Blast

Like the first one, I had a great time at the second gathering of the Israeli ALT.NET group. No matter how I look at it, the "Open Space" format along with the people involved made this gathering one of the most enjoyable conventions I've been to in a long time (actually, since the previous one).

For those of you who have missed it

Here is the agenda of the day:

ALt .net 003

My only regret is that we didn't have more time to put in more sessions.

Tuesday, 24 March 2009

ScrumBut - No Release Planning

"Were doing Scrum-But ..."

The ScrumBut phenomenon has started to appear more and more as Scrum has taken hold in the industry. I want to add one ScrumBut I was in charge of, and it went like this:

We were doing scrum but: We had no release planning.

At the time we were undergoing so many directional shifts that each time we tried to flesh out a release plan, it changed drastically, causing any planning effort to fail. Also, our acting PO wasn't involved enough, a fact which also contributed to the difficulty of achieving a stable enough release plan. So we went on and continued without it.

And actually it wasn't a complete disaster: we achieved a very short sprint length of 1 week, we added various engineering practices like TDD, CI and pair programming, and we managed to maintain a constant development rhythm with good quality.

However, we were missing several big things; in fact, we completely gave up on any attempt to see and track the bigger goals. I wasn't able to predict anything beyond the end of the current sprint, and the developers found it difficult to understand the "bigger" picture. Eli described the problem:

So if the management has a goal: Close Sale with Customer [put huge customer], and to do this the R&D must implement the ABC feature. We might reach the end of the month without that feature being implemented and management won’t know about it.


The thing is that what we were doing was seriously lacking, and since we changed the process and failed to do it properly, we got what can only be described as the logical result.

When you don't plan at the higher level (release), you can't project beyond the micro level (sprint). You don't measure overall progress, and you lose the ability to manage at that high level. Naturally, you can't be sure when things will be done or when things are not going according to plan.

At Typemock the solution for this was to adopt an "integrity"-based management approach, and so far it seems to be working for them. However, what Eli reports as "The differences between Scrum and integrity" is actually based on something which is Scrum-based but too lacking to represent actual Scrum done properly.

Wednesday, 18 March 2009

Open Houses

So many announcements today.

Anyway, next week I'll be giving two open houses:

The first will be at Sela University on 23/3 and will deal with the trouble of planning and estimation in the software world. (As far as I know most of the places are taken, but if you're interested leave me a comment and I'll see what can be done.)

The second one will happen in Haifa on 25/3 and will show why Design for Testability is not a prerequisite for doing TDD. (The slides for this can be found here.)

ALT.NET Israel - 2nd Gathering

It's been some time since our previous encounter, so we thought we might enjoy another meeting.

Anyway, we will meet on

April 2nd and 3rd, at Sela University.

Details are on the Israeli user group's site (Hebrew).

I would like to thank my associates at Sela for making this possible.

Signup link will be published shortly.

Sunday, 15 March 2009

NUnit Extensions - Adding Logic at Run Start

Most of the time using NUnit is very straightforward. Recently, however, I needed to add some functionality to make life a little bit easier. In this post I'll show how to add some setup code that will be executed once at the start of the test run.

Why is this useful? Actually, I don't have a clear answer. Here are some reasons I came up with:

  1. Run some special setup code which is just too costly to run at every setup (even if only once per class setup).
  2. Transparently enforce some logic on all tests (i.e. without the need for other programmers to explicitly add that logic).
  3. Add some special behavior to the test runner.


NUnit Event Listeners

NUnit has several extension points that allow adding almost any kind of logic to the framework. Ben Hall posted a great article some time back describing the various possibilities. For my needs (which are described here) I have been using the Event Listener extension point. Here's the class implementing the event listener:

[NUnitAddinAttribute(Type = ExtensionType.Core, Name = "My Add In", Description = "A Demo for Run Setup Code")]
public class CTestingAddin : IAddin, EventListener
{
    #region IAddin Members

    public bool Install(IExtensionHost host)
    {
        IExtensionPoint listeners = host.GetExtensionPoint("EventListeners");
        if (listeners == null)
            return false;

        // without this call the listener is never registered
        listeners.Install(this);
        return true;
    }

    #endregion

    #region EventListener Members

    public void RunStarted(string name, int testCount)
    {
        // Do set up logic here - runs once at the start of the test run
    }

    public void RunFinished(Exception exception)
    { }

    public void RunFinished(TestResult result)
    { }

    public void SuiteFinished(TestSuiteResult result)
    { }

    public void SuiteStarted(TestName testName)
    { }

    public void TestFinished(TestCaseResult result)
    { }

    public void TestOutput(TestOutput testOutput)
    { }

    public void TestStarted(TestName testName)
    { }

    public void UnhandledException(Exception exception)
    { }

    #endregion
}


Deployment and execution

That's all the code needed, and it can be wrapped as a simple class library (dll). The dll will need to reference nunit.core and nunit.core.interfaces. To deploy, just copy the dll to the NUnit bin\addins directory. To verify it was properly deployed, open the NUnit GUI and check the Tools menu under Add-ins to see that the add-in is there.

I will finish with two tips that will make it easier to run and debug the code.

  1. It's worthwhile to add a post-build event that will deploy the add-in into the NUnit directory at the end of a successful build. To do that, open the add-in project properties and add the following to the Post-build Event Command Line: copy $(TargetPath) <NUnit Directory>\bin\addins.
  2. A convenient way to debug the add-in is to set the NUnit GUI as the "Start action" of the add-in project. This enables putting breakpoints in the code normally. (Another option is to use the Attach To Process mechanism Ben has suggested.)

Wednesday, 4 March 2009

Handling Bottlenecks in a Scrum Team

During yesterday's open house I was asked an interesting question.

Some context first. The organization has chosen to adopt the Scrum methodology (good for them). As a Scrum team member, she is in charge of writing the automation for system tests using a module inside the QC product (can anyone guess what's wrong here?). The issue is that she can't keep up. The team has 5 developers who write code and only her to write test automation (now do you see it?). So she was asking what the "agile" solution to this is.

I don't know if this is The agile answer, but for me, when a team member can't keep up and doesn't manage to finish their tasks, they need help.

No agile process, or in fact any other development process, will change that. If a given person has too much work, then no matter whether they work on a Scrum team, an XP team or a waterfallish team, they have TOO MUCH work. Without help they won't finish on time. If no one on the team has the time to help, it's a good indication (to me) that the entire team is overloaded. Face it, there is no magic answer here. No process I know of will take the time needed to complete a given piece of work and make it shorter. The only thing we can expect from a process is to focus the effort invested in the right direction, help eliminate wasted effort, and establish an environment in which the team can become better. Until that happens, be realistic. Commit only to the amount of work that you can currently handle.

Another issue I saw was in the exact role definition. Part of the problem (in my eyes) is that this specific team was still thinking in terms of well defined roles and responsibilities. Here are the people who write code, here are the people who write tests, here is the manager, and so on.

In order to become more effective, we strive for versatility. That is, we want each person in a team to be able to do every task involved in building the software. Yes, we accept the fact that people still have different expertise, and that's good. However, when a team member gets behind schedule we want any other team member to be able to come in and help. My guess (and clearly I can be mistaken) was that in this specific team she was the only one trained for that kind of work, so no one could actually help her. At least not at this point in time.

What to do?

First, gather the team and raise the issue (this can be done during the retrospective meeting). Accept the fact that at this point in time you can achieve only a given amount of work (which is less than you thought). Face the facts, and as a team inform management and commit to less. It's not much help if coding tasks can be finished faster than they can be tested; development is not done after the coding phase. Try to resist the urge to take that kind of work (which currently is the bottleneck for the entire development cycle) and move it outside (to a different sprint or a different team). Also make sure that programmers won't rush ahead and take on more coding work because it is easier for them to do. Make sure that no extra work is picked up until committed work is actually done. And I mean done done, as in coded, tested, integrated and everything else that falls under your definition of done.

Also, make sure to increase capacity: either by hiring more people (not likely at this point in time in the current economy), by bringing in help from somewhere else inside the organization, or by training other team members. Yes, this will be painful and will probably require a serious effort in making the team leave its comfort zone. But if your goal is to establish a self-contained team, this must be done.

Tuesday, 3 March 2009

Collection Asserts in MSTest (and some more)

Sometimes I can miss the obvious.

I've been working with the MSTest framework for over a year now, and up until today I completely missed the fact that besides the Assert class, the MSTest framework contains two other assert classes:

  1. StringAssert - used for string specific asserts like StartsWith, EndsWith, and even supports regular expression matching.
  2. CollectionAssert - used for collection operations like Contains, AllItemsAreNotNull and, the most useful, AreEquivalent.

Naturally NUnit also has equivalent classes (which of course I missed as well).
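To give a feel for the two classes, here is a small sketch of how they can be used. The values and collections are made up for the example; the assert methods themselves are from the MSTest framework (Microsoft.VisualStudio.TestTools.UnitTesting):

```csharp
using System.Text.RegularExpressions;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class AssertClassesDemo
{
    [TestMethod]
    public void StringAndCollectionAsserts()
    {
        // StringAssert - string specific checks
        StringAssert.StartsWith("Hello World", "Hello");
        StringAssert.EndsWith("Hello World", "World");
        StringAssert.Matches("Build 1234", new Regex(@"\d+"));

        // CollectionAssert - collection specific checks
        int[] actual = { 3, 1, 2 };
        CollectionAssert.Contains(actual, 2);
        CollectionAssert.AllItemsAreNotNull(new object[] { "a", "b" });

        // AreEquivalent - same elements, order does not matter
        CollectionAssert.AreEquivalent(new[] { 1, 2, 3 }, actual);
    }
}
```

Note that AreEquivalent ignores ordering, which is exactly what you usually want when comparing query results to an expected set.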

Now the thing that really ticks me off is that I was asked this specific question twice in the last month and gave the wrong answer. Only today, after I allocated 2 minutes to actually google it, did I find out about it.

Lesson learnt - for all consultants out there: don't get overconfident about your knowledge, even if it's in your specific area of expertise. Always assume that you are mistaken and double check your answer (especially if your answer starts with "You can't...").

Monday, 2 March 2009

The Fifth Value – Respect (cont.)

Some time ago I touched on the subject of respect in the software development business. Yesterday I had the chance to talk with one of the people working at a client of mine. Besides holding the respectable position of VP QA (the company is of medium size), the guy also allocates one day a week to maintain an IT business he once had, which currently has a single client. After talking about how he ended up in this kind of arrangement, an obvious question for me was why he keeps on doing it. Isn't the fuss of handling a single customer too big to be worth the time?

The answer I got was:

Working for that client puts me in perspective. Once a week I get the chance to function as a plain technician. This gives me the opportunity to get on my knees and work with the bits and bytes.

For me this is the essence of humility (which is a cornerstone of respect). Although the guy has attained "big boss" status, he understands that there's more to it than just management, and that in order for him to excel at his work he needs to get as close as he can to the daily work the people working for him do.

I salute you!

On the other extreme, a previous boss of mine once told me that he was advised (and agreed) to treat every hour of his work as costing $2000 (no, this was not the guy's salary). The reasoning behind this was that, being so high on the food chain, he should not waste his time.

For me this kind of attitude puts you at the opposite end of the scale. Saying such a thing comes as close as you can (without actually saying it) to "I'm worth more than you are", or more simply, "I'm better than you are". This is just being rude.

I wonder how that guy would take advice that goes: "Unless you generate $2000 profit for the company per hour, you're not doing your job."

Sunday, 1 March 2009

Sela Open House

Last week I gave a lecture at Sela about Design for Testability. To all those who attended, I want to thank you for coming.

For all those who missed it, you might be able to catch it this week (I'll be giving it again at Sela on Tuesday), or if you happen to be located in the northern part of Israel, it will be repeated nearer to you on the 25/3.

For everyone else you can get the slides here.

Monday, 16 February 2009

Quality is infectious

In his recent post, Quality-Speed Tradeoff — You're kidding yourself, Ron Jeffries explained how high quality is mandatory in order to achieve a high delivery rate.

There are many reasons for quality issues in a product: inherent technical complexity, lack of skill, or just plain laziness. However, an overlooked factor that really hurts quality is pure pressure. When the pressure to deliver rises above a certain level (which varies between people and teams), the team starts to cut corners. When pressed, developers will deliver. However, we are not magic workers. When pressed we usually start by working longer and harder, but when that's not enough, the only option left is to compromise current quality in the hope that part of the work can be delayed until after the delivery. The mistake involved is the belief that after the release there will be time to fill in the blanks, so to speak. Naturally, there is never enough time. In the agile community we have even given those cut corners a glorified name; we call the work we should have done "Technical Debt".

Bad Quality is Infectious

The real problem is that bad quality is infectious by nature. It can start with skipping a needed refactoring, or with not writing unit tests for a given component (assuming we will do it later), but very soon islands of low quality code start to form. As time goes by these islands have a tendency to grow. In a large enough code base they hide themselves for quite some time. They are always discovered at an inconvenient moment, when they affect work being done but there is no time to actually fix them, so another workaround is coded, and again quality suffers just a little more. So they grow and grow until bigger problems start to show: suddenly the escaped defect rate has significantly increased, and adding new features becomes harder and takes longer. And quality continues to deteriorate until something is decided to be done about it. The problem is that by the time we reach this stage, the cost of fixing the issue is high.

Good quality is also infectious

The good news is that good quality is also infectious. Most programmers (at least those I know) really want to do a good job and produce high quality code, so it doesn't take much effort to keep us on the high quality road. I have seen cases where a single team member, by not compromising on quality, caused the rest of the team to follow, even at points when pressure was high.



So the moral of the story is very simple: quality for me is mainly a state of mind. The moment I accepted the fact that by trading quality I always, but always, LOSE (and started constantly reminding myself of this), I found it easier to achieve better quality. The upside is that I saw how this kind of approach infects others as well.

Sunday, 8 February 2009

WPF Testing - Part I

Automating GUI testing is always a tricky business. In fact, it's so hard that people tend to write it off as too hard to bother with. I believe, however, that this observation is not correct.

During one of my consulting gigs I started to instruct a new team on the concepts of unit tests, and since they are working on a real product they actually have some GUI involved (imagine that). If that's not enough, they are using WPF as their framework of choice. I, on the other hand, can't claim to be a GUI expert (to say the least), and my hands-on experience with actual WPF code is, well, close to nothing.

I couldn't leave it at that, so I invested some time in actually trying to figure out how one can write tests for a WPF based GUI. This post (which hopefully will be followed by some more) will demonstrate my findings on the matter.

The first scenario I wanted to test was how to trigger an event using code instead of a person clicking on the GUI. For that (and probably for the following examples as well) I constructed a small application that looks like this:


As you can see, for now I have a big empty window with a single button and some text fields, but that will be enough. In order to achieve what I'm doing I'll be using the Isolator framework, and I will try to keep my design as simple as I can. (I know there are many patterns out there that probably reflect a much better design scheme, but I really want to address the pressing need of the common programmer and not resort to fancy design patterns at this point.)

Firing a mocked Event

As I said, first I want to explore how to test an event handler wired to a button, i.e. to simulate a user pressing a button and check that the correct logic is executed. For the purpose of this demo I assume that my GUI is backed by a business logic layer that handles the actual business logic, and since that should be tested separately, all I want for now is to make sure that the proper logic from that layer is called.

So here's the implementation of the business logic:

internal class BusinessLogic
{
    public void GetResult()
    {
        throw new NotImplementedException("no logic yet");
    }
}
As you can see, nothing much in there. In fact, since I'm not interested in the business layer at this point, the implementation is not yet ready and all I have is an empty method that throws.

And here is the window source code:

public partial class HurtFundsMainWindow : Window
{
    public HurtFundsMainWindow()
    {
        InitializeComponent();
    }

    private void GetResults_Click(object sender, RoutedEventArgs e)
    {
        // delegate all the work to the business layer
        BusinessLogic logic = new BusinessLogic();
        logic.GetResult();
    }
}

Again, at this point I don't have much, only a simple event handler for the button click action that delegates all logic into the business layer. Note that I'm not using any dependency injection pattern and I'm not working against an interface at this point. (My application is so simple that I really don't see the need for doing that. I assume that when the application evolves, the need will make me evolve the design as well.)

And here is the actual test code:

public void Clicking()
{
    //Create the fake business logic and inject it into production code
    BusinessLogic fakeLogic = Isolate.Fake.Instance<BusinessLogic>();
    Isolate.Swap.NextInstance<BusinessLogic>().With(fakeLogic);

    //Fake the next Button created and replace its event wiring.
    //(This uses the older, pre-AAA Isolator API as noted below;
    //exact calls may differ between Isolator versions.)
    Mock mockButton = MockManager.Mock(typeof(Button));
    MockedEvent handle = mockButton.ExpectAddEvent("Click");

    //create the window
    HurtFundsMainWindow wind = new HurtFundsMainWindow();

    //fire the mock event
    handle.Fire(wind, new RoutedEventArgs());

    //verify the GUI called the proper business logic
    Isolate.Verify.WasCalledWithAnyArguments(
        () => fakeLogic.GetResult());
}

Not very trivial, so let's see what goes on here.

The first step in the test is the creation of the fake business logic: I create the fakeLogic and, using the Swap ability, I ask the Isolator framework to replace the next created instance of BusinessLogic with the fakeLogic (check this for further details).

The second step does the same for the button that will be created. After that I also fake the actual event wiring done by WPF and get my hands on a faked handle, which I later use to fire the event. (Note that the event handling is done using an older variant of the Isolator API. I hope that in the near future the proper API will be added to the AAA syntax as well.)

Next comes the actual execution. I first create the window, and then I use the fake handle to trigger the event.

At the end of the test I verify that the GUI actually calls the proper logic method (in this case the GetResult() method).

That's all for now. While this is only a basic scenario, the principles shown here can be adapted to handle much of the WPF event system. If you have any questions, feel free to leave a comment.
