Wednesday, 31 December 2008

Agile Planning - Prioritizing Features

I attended Giora Morein's Agile boot camp course yesterday, and during the day Giora (while talking about Agile planning) showed the following diagram:

[Diagram: features laid out in a 2x2 matrix of Risk (Low/High) against Value (Low/High)]

He then asked the class what they thought the best order would be for tackling the given features. Several answers were given, and all of them started with the "Low Risk - High Value" category.

I can't say I was too surprised by this. People, especially developers, have a tendency to pick the low-hanging fruit first. Their natural reaction is to fear risk and go for the path of least resistance (I've touched on this subject in a previous post: Cost of Change – The Fear Factor).

However, the "responsible" answer would be to start with the "High Risk - High Value" category and only then tackle the "Low Risk - High Value" features. The reasoning is that since the features in the "High Risk - High Value" category have HIGH value, they will eventually need to be done, so it is better to tackle and remove the risks involved in them as early as possible. Removal of risk is healthy for a project. One must try to remove as much risk as possible as early as possible; if a really nasty surprise is lurking that will cause the project to fail, we want it to fail fast, to minimize the amount of money wasted on the failing project.

BTW, after finishing all the "High Value" features, a good strategy would be to proceed with the "Low Risk - Low Value" category, and to find something better to do than the "High Risk - Low Value" features.

Design For Testability - Lecture

On 14/1 I'll be giving a lecture for the Nes Tziona user group. The lecture, titled Design for Testability is a fraud - TDD is easy!, will deal with the myth that user code must be designed in a specific way in order to make it testable. The complete lecture abstract with more info can be found here.

Wednesday, 17 December 2008

Code snippets for Isolator

I've worked a little with the AAA syntax of the Isolator, and after writing a couple of tests I felt the need to add some snippets to make things go faster. So here are a couple I found useful.

(In order to use them, just copy each XML block into a .snippet file and use the Snippet Manager to import it.)

The first one is for creating a fake instance:

<?xml version="1.0" encoding="utf-8"?>
<CodeSnippets xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet">
  <CodeSnippet Format="1.0.0">
    <Header>
      <SnippetTypes>
        <SnippetType>Expansion</SnippetType>
      </SnippetTypes>
      <Title>Typemock Fake</Title>
      <Shortcut>tf</Shortcut>
      <Description>Create a fake instance</Description>
      <Author>Lior Friedman</Author>
    </Header>
    <Snippet>
      <Declarations>
        <Object Editable="true">
          <ID>Class</ID>
          <ToolTip></ToolTip>
          <Default>T</Default>
          <Function></Function>
        </Object>
      </Declarations>
      <Code Language="csharp"><![CDATA[$Class$ fake = Isolate.Fake.Instance<$Class$>();]]></Code>
    </Snippet>
  </CodeSnippet>
</CodeSnippets>

The second one is for setting the behavior of a method call:

<?xml version="1.0" encoding="utf-8"?>
<CodeSnippets xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet">
  <CodeSnippet Format="1.0.0">
    <Header>
      <SnippetTypes>
        <SnippetType>Expansion</SnippetType>
      </SnippetTypes>
      <Title>Typemock When Called</Title>
      <Shortcut>twc</Shortcut>
      <Description>Set the behavior of a method call</Description>
      <Author>Lior Friedman</Author>
    </Header>
    <Snippet>
      <Declarations>
        <Object Editable="true">
          <ID>Method</ID>
          <ToolTip></ToolTip>
          <Default>fake.DoStuff()</Default>
          <Function></Function>
        </Object>
        <Object Editable="true">
          <ID>Action</ID>
          <ToolTip></ToolTip>
          <Default>WillReturn</Default>
          <Function></Function>
        </Object>
        <Object Editable="true">
          <ID>Value</ID>
          <ToolTip></ToolTip>
          <Default>0</Default>
          <Function></Function>
        </Object>
      </Declarations>
      <Code Language="csharp"><![CDATA[Isolate.WhenCalled(() => $Method$).$Action$($Value$);]]></Code>
    </Snippet>
  </CodeSnippet>
</CodeSnippets>

And the last one is for verification:

<?xml version="1.0" encoding="utf-8"?>
<CodeSnippets xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet">
  <CodeSnippet Format="1.0.0">
    <Header>
      <SnippetTypes>
        <SnippetType>Expansion</SnippetType>
      </SnippetTypes>
      <Title>Typemock Verify</Title>
      <Shortcut>tv</Shortcut>
      <Description>Verify a call was made</Description>
      <Author>Lior Friedman</Author>
    </Header>
    <Snippet>
      <Declarations>
        <Object Editable="true">
          <ID>Method</ID>
          <ToolTip></ToolTip>
          <Default>fake.DoStuff()</Default>
          <Function></Function>
        </Object>
      </Declarations>
      <Code Language="csharp"><![CDATA[Isolate.Verify.WasCalledWithAnyArguments(() => $Method$);]]></Code>
    </Snippet>
  </CodeSnippet>
</CodeSnippets>
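
For illustration, here is roughly what a test using all three expansions ends up looking like once the placeholders are filled in. The Order class and its Total method are made up for this sketch; the Isolate calls are just the ones produced by the snippets above.

[TestMethod]
public void Total_WhenFaked_ReturnsTheValueWeSet()
{
    // "tf" expansion - create a fake instance
    Order fake = Isolate.Fake.Instance<Order>();

    // "twc" expansion - set the behavior of a method call
    Isolate.WhenCalled(() => fake.Total()).WillReturn(42);

    // exercise the (hypothetical) code under test
    int result = fake.Total();

    // "tv" expansion - verify the call was made
    Isolate.Verify.WasCalledWithAnyArguments(() => fake.Total());
    Assert.AreEqual(42, result);
}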

Tuesday, 16 December 2008

New Mocking Frameworks (C++ and Java)

Over the last couple of months (maybe half a year) there has been a big discussion about whether TDD/AUT is here to stay or is just a passing fashion.

I can't predict the future, and as much as I believe in these concepts, only time will tell. However, since I'm paranoid by nature, I'm always on the lookout for indications that TDD is here to stay. A strong indication for me is the creation of new tools that help and simplify the writing of unit tests.

I don't know if this is pure coincidence or if someone up there is orchestrating all of this, but during the last week I was introduced to not one but two new tools out there. Both of them are new mocking frameworks.

The first one is a mocking framework for C++ released by none other than Google (named GoogleMock). This framework complements the unit testing framework Google released not too long ago (GoogleTest).

The second is a really cool mocking framework released for the Java language - PowerMock. Like JMockit and the .NET counterpart Isolator, this framework is based on the principle of interception (achieved by bytecode manipulation/CLR code injection) and allows mocking any (and I mean ANY) kind of method, including static, private and final methods.

For me this has been a productive week for tool developers.

Sunday, 14 December 2008

Windows 7 - Testing Aid.

Automating GUI testing still represents a big challenge for us. In fact, I haven't seen a good, affordable solution that helps fully automate it. And where there is no automation, what we're left with are manual tests.

Sometimes, however, help comes from the places you least expect. During a preview session for the upcoming Israeli Developer Academy III, Alon demonstrated some of the new features that will be included in the new Windows 7 OS. What caught my eye was a little application Microsoft has included as part of the OS called the "Problem Steps Recorder". This application focuses on helping users produce detailed reports about the steps they took which resulted in a defect they wish to report.

Now, what is the connection?

Simple: part of the problem with manual tests is that someone has to sit down and write the test scenarios. The person needs to write down, step by step, the things he is doing and their expected outcomes. This, however, takes effort and time.

The Problem Steps Recorder can help with this. I have only caught a glimpse of the tool's output, but basically it's a zipped HTML file that describes the user's actions and provides screen captures of the output. One can also add specific info to describe the various steps being taken. So instead of writing this down manually, you just need to turn it on, perform the actual test, and you get everything in a very nice format which can be shared by everyone.

Thursday, 11 December 2008

If it's too hard - do it more often.

During a panel at the latest Scrum gathering, a question was raised regarding how to deliver a potentially shippable product each sprint when it takes 3-5 days of stabilizing the product to get it to release level.

It's a relatively common question with various answers; however, Scott Ambler gave it a gem of an answer (in my opinion at least). His answer went something in the spirit of:

If it takes you that long, then you must do it more often; in fact, keep on doing it until you learn how to reduce the time needed for stabilization.

This somewhat relates to a previous post of mine, Cost of Change – The Fear Factor. Investing the effort to fix pains we have learnt to live with is always a low priority. In fact, the bigger the pain, the harder it is to fix, so it slips down the list. After all, if it were easy, we would have done it already.

So Scott gave a good way out. Basically, he advises amplifying the pain! If it's a scratch, cut it with a knife; if it's a twisted leg, break it completely. Make it hurt so badly that ignoring it is no longer an option, and the only course of action will be to fix it. A drastic approach, but one that works.

Sunday, 7 December 2008

Distributed Card Meeting

I'm a big fan of old low-tech tools when it comes to meetings and information sharing. There's nothing more informative than a plain whiteboard and nothing more useful than a bunch of sticky notes when trying to plan something.

There are times when these tools are not optimal, usually when not all the participants are physically present. In such cases, try using CardMeeting, a "whiteboard sharing" application that imitates the behavior of a board and sticky notes on a virtual wall that can be seen from distributed places.

On the upside, the setup is very easy, the use is very intuitive, and it does keep things very simple. Also, the data can be exported into an Excel sheet for future use.

Monday, 1 December 2008

Scrum Master Certification

I'm proud to announce that I have finished my formal Scrum Master training and I'm now a full-fledged Scrum Master. It's a big deal for me: being in the job of helping others integrate agile practices into their development process, the training really helped me sort things out and put them in the right perspective.

Here are some pictures from the training session.

Wednesday, 26 November 2008

CPPing in .NET

I've talked about the tendency of developers to use ready-made solutions.

The idea is simple enough: there is no need to reinvent the wheel. For almost every problem a developer encounters, someone else has already coded a solution.

When taken to the extreme, we end up with a development technique that can be termed "Copy Paste Programming" (CPP for short).

Like anything in life, there's a risk involved in this. In many cases code produced using this technique is less well understood than code developed from scratch, and in most cases (that I have seen) the ready-made solution solves a slightly different problem than the one it is applied to. Which in the end leaves us with a piece of code that almost works, but we don't know why.

It's not that I'm against using other people's code. In many cases it's the most effective and fastest way to get the job done. However, when doing so, I take the time to carefully understand the piece of code I'm CPPing into my .NET application.

Capability of a Developer

Sometimes in life you join a discussion expecting the least, but surprisingly enough profound wisdom can be found in the least expected places.

In a five-minute talk I heard a very wise man give a very good analysis of what capability is.

It translates to something like this:

A person's capability is a combination of two elements:

  1. The "body", which dictates the true ability/talent of the person.
  2. The "spirit", which dictates the will of the person.

Taken to our domain, a developer's capability is a combination of his technical knowledge (talent) and his motivation (will). When I want to increase a developer's capabilities, I can either try to help him increase his technical know-how, or try to help him on the motivation side. For best effect I try both.

Some coaches I have encountered make the mistake of assuming that will on its own is enough (did someone mention "The Secret"?). I'm sorry, but no matter how much I "will" it, I CAN'T write code with absolutely no bugs. Maybe it's because they lack the capability to increase true ability in a person?

Monday, 24 November 2008

Outlook 2007 Hanging

Yesterday I got into a battle with my Outlook (2007). It all started after I received a meeting invitation from a friend.

Harmless enough, isn't it? WRONG!!!

When I first tried opening it, Outlook just froze on me. Being used to software crashes, I terminated the program like I do every time. However, for some reason that stubborn message kept on freezing Outlook each time I tried to open it, and if that's not enough, trying to delete it also made Outlook freeze.

I spent a couple of hours trying various things, but every operation I tried on that message froze Outlook. I got so desperate that I even tried looking on the net for PST manipulation tools that would enable me to hack in and make the message go away. (BTW, I didn't find any.)

In the end, however, I got lucky. Of all things, marking the message as junk got it moved to the Junk folder, and from there I could delete it once and for all.

So if Outlook freezes on a specific message each time you try to open it, and you can't delete it either, try marking it as junk first.

That's what I call annoying!

Tuesday, 18 November 2008

Documentation in an Agile Process

One of the more common misconceptions about agile is to think that it means no documentation. In the original Agile Manifesto we can find that we value:

Working software over comprehensive documentation

For some reason people mistook this to mean that documentation has no place in an Agile process.

In my opinion this is very far from the truth. What I do agree with is that the common form of documentation, i.e. text documents, which is what most people mean when they say documentation, is probably not the best way to get the job done.

Why do we write documents?

I recently had a nice discussion with Eyal Katz about the need for documents. Eyal listed the following three reasons:

  1. To make the writer really think about what he is writing. For example, by writing a design document the designer is forced to think about and formalize his design properly.
  2. Documents can later serve as a means of sharing knowledge.
  3. To store knowledge (so we won't forget).

Are documents cost effective?

Well, in my opinion the answer is clearly no (in most cases):

I myself think better against a whiteboard. For me, sitting in front of a computer screen is just not flexible enough. When I do a design session using a whiteboard, I usually complete in an hour what can then take me 3-4 days to write down. So clearly, for me, writing a text document is not the most efficient way to think, and I'm sure other people have even better ways.

But how can other people read/understand what I sketched on the whiteboard? Well, they can't. But generally it doesn't take me more than 20-30 minutes to explain it. So instead of spending 3-4 days writing it down, I can personally teach it (and trust me, it's much more effective that way) to 60-80 people in one-on-one sessions. So how many people really read most of the documents we write?

But what will happen a year from now if I don't write it down? Well, the real problem is that without maintenance, documents get outdated. Wherever I go, I see that knowledge stored in text documents is almost always either partial, old or just plain wrong. For some reason, software companies don't do a good enough job of making sure that their documents reflect reality.

No documentation in an Agile process?

On the contrary, there is a lot of documentation, just not a lot of the regular kind (i.e. text documents). Some examples:

  • Product & iteration backlogs serve as a good place to specify requirements.
  • Unit tests serve as good low-level design documents.
  • Burn-down charts serve as great progress reports.

When taking into account all forms of documentation, an agile project will usually have a lot more documentation than it seems at first sight, and in most cases that documentation will be more accurate and effective.

Uninstall Word?

All this does not mean that good old text documents don't have their place, just that from my experience they are widely misused. Knowledge stored on paper (or in files) is far less important than knowledge stored where it counts: in people's heads. A true Agile project will first make sure knowledge gets there before it goes into a text document, and not vice versa.

Wednesday, 12 November 2008

Cost of Change – The Fear Factor

Oren has blogged about how to reduce the cost of change. He claims, and I agree, that

Beyond anything else, it is the will of the team to accept the pain of making the change and actually doing this

When accepting this mindset, it's usually a lot easier to find ways of reducing the cost of change, i.e. making it easier to change.

However, I feel there's another factor in the equation, and that is the developer's inherent fear of the unknown. Time and time again I've seen programmers (and have done so myself) start implementing a change by attacking its easier parts first, leaving the really hard, unknown issues to the end.

When adding new functionality to an existing system, for example, there are two main steps:

  1. Implementing the new functionality.
  2. Integrating it into the given system.

In most cases the harder part is the second one; however, 90% of the time a programmer's natural instinct is to start out by coding the new functionality.

As far as I can tell, the reason is that in most cases the new functionality is a fairly well-defined problem, while the effect of the integration part is a big unknown, and most people shy away from uncertainties, leaving them to the end. If we add to this the general "embedded" dislike of changing existing code (if it works, don't fix it), we get the natural tendency to start with the well-understood, isolated new piece of code.

The problem is that in most cases the hard parts are, well, hard. They are the parts that lead to the real problems, which take time to solve. Starting out with the easier parts of a problem creates a false sense of progress, which may bite later on when facing the real difficulties in the implementation.

Here's an example of such a thing happening. In that case we had worked on a feature for quite some time, but were only able to complete it after starting all over again and attacking the hard integration issues first.

So don't make the natural Mistake of letting fear of the unknown take over. Accept the fact that the hard problems are there and must be solved, and start by solving them first. When that's done, the easier parts will easily fall into place.

Test Types – Are we confused yet?

Recently I've encountered several places discussing the different types of tests. Here are some terms I've seen:

1) Unit tests – the most widely used term, which I'm afraid is still open to interpretation. Up until now I haven't seen a good definition of what a "unit" is. Is it a single class? Is it a single component? Where does the line between different units lie?

2) Integration tests – again kind of vague, but mostly it means a test which encompasses several "units", sometimes the entire system.

3) Developer tests – this is a broader term which means tests written by developers. I myself don't see much value in using it, since it gives no indication of the tests' nature. (Also, what are non-developer tests? How do we refer to manual tests done by developers? To UAT written by a developer?)

4) Manual/Automated tests - distinguishes how the tests are executed.

5) User Acceptance Tests (UAT) (also referred to as verification tests) - tests whose main goal is to "convince" the user that the system behaves as it should.

Are we confused yet? Here's a little something I found on the net.

[Image: a chart of test types found on the net]

The point is that such wide terminology tends to lead to a lot of confusion and misunderstanding. Before starting any sort of discussion, my advice would be to make sure that all parties share the same meaning and terminology. I've spent too much time arguing only to find that the person I was arguing with simply used a different meaning than I did.

Tuesday, 11 November 2008

The Future of Unit testing

I've recently watched the PDC 2008 panel session on the future of unit testing. The feeling I took from this session is that automated testing is here to stay.

What gave me the indication that this is true is the fact that although the panel tried to focus on unit-level testing, the audience tended to shift the discussion into other areas. Most of the audience's questions dealt with the more complex scenarios, going beyond the unit level, that they encounter in real life.

For me this is a good indication that automated unit testing has taken hold. I'm not sure my interpretation isn't Mistaken, but it seems to me that the available tooling for writing unit tests has matured enough (yes, even if you don't like using mock frameworks) to the point where people are now trying to leverage it in other testing areas.

As the panel mostly agreed, there is still a shortage of good tools for doing integration/user acceptance testing, and if one listened closely enough, it sounds like at least some progress is being made in those areas.

Another point that most of the panel agreed upon is that unit testing is not enough; in the end there's still a gap between the user requirements level and the unit level that must be bridged by other sorts of testing (integration tests/UAT).

If you want some more info on the session but don't have the time to watch it fully, try reading this post by Eli or this one by Andrew.

Sunday, 9 November 2008

How to know which unit test calls a specific method

From time to time one faces the situation in which one of the unit tests fails inconsistently. The usual scenario is that when run alone the test passes, but when run as part of the entire suite the test blows up.

In a lot of cases this results from some leftover state from previous tests. Here is an example of such a case (taken from the Typemock Isolator support forums - AAA tests pass individually, fail in suite). The reported error in this case indicates that a mock set up in a given test was accessed in some other test, which kind of confused the given test.

Solving such an issue usually involves a lengthy search-and-destroy in which most of the work is finding out which test is the one interfering with the given test.

So here's some code that should save most of the legwork. I'm using one of the more powerful features of the Isolator framework – the custom decorator, which allows adding interception points before and after a given method call.

Here’s the code:

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Reflection;
using System.Text;
// plus the Typemock Isolator namespace that provides DecoratorAttribute

public class TraceTestAttribute : DecoratorAttribute
{
    // I'm using a dictionary of lists to store, for each method,
    // all the tests in which the code was called
    static IDictionary<MethodBase, List<string>> _TestNames
        = new Dictionary<MethodBase, List<string>>();

    // builds a printable string out of the entire dictionary
    public static string GetData()
    {
        StringBuilder output = new StringBuilder();
        foreach (KeyValuePair<MethodBase, List<string>> entry in _TestNames)
        {
            output.Append(entry.Key.Name + ": ");
            foreach (string testName in entry.Value)
            {
                output.Append(testName + " ");
            }
            output.Append(Environment.NewLine);
        }
        return output.ToString();
    }

    // this is where all the work gets done
    public override object Execute()
    {
        List<string> data;
        if (!_TestNames.TryGetValue(OriginalMethod, out data))
        {
            data = new List<string>();
            _TestNames[OriginalMethod] = data;
        }
        data.Add(GetTestName());
        return null;
    }

    // going over the stack to locate the test name;
    // we use reflection to find the method decorated
    // with some sort of [Test*] attribute
    private string GetTestName()
    {
        StackTrace trace = new StackTrace();
        foreach (StackFrame frame in trace.GetFrames())
        {
            foreach (object attribute in frame.GetMethod().GetCustomAttributes(false))
            {
                if (attribute.GetType().Name.StartsWith("Test"))
                {
                    return frame.GetMethod().Name;
                }
            }
        }
        return "not from test";
    }

    // this instructs the framework to apply
    // the attribute to ALL methods (and not just tests)
    protected override bool DecorateMethodWhenAttributeIsClass(MethodBase methodBase)
    {
        return true;
    }
}
As you can see this is quite trivial: when the attribute is applied to a given method, each time the method gets called, the attribute's Execute method is invoked before the call. In that call I go over the stack, searching for the test method in which we are running, and store it in a big dictionary keyed by the method we are tracking.

In the end this gives me the ability to stop the execution at any point and easily see, for a specific method, the table of all the tests during whose execution it was called (directly or indirectly).

(For good measure I've added a GetData method which builds a string of the entire dictionary.)
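
For illustration, here is a usage sketch. SomeClass, Calculate and the reporting test below are all made up for the example; the idea is simply that the attribute goes on the method you want to track, and GetData can be dumped at any convenient point (a breakpoint, or a last test):

public class SomeClass
{
    // decorate the method whose calling tests we want to track
    [TraceTest]
    public int Calculate(int value)
    {
        return value * 2;
    }
}

[TestClass]
public class TraceReportTests
{
    [TestMethod]
    public void DumpTraceReport()
    {
        // prints, for each tracked method, the tests during which it was called
        Console.WriteLine(TraceTestAttribute.GetData());
    }
}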

I hope you find this one useful; leave a comment if you do or if you have a related question.

A warning: currently the Isolator contains a defect that will cause an exception to be thrown when the method carrying this attribute is mocked. I'm sure this problem will be fixed very shortly by the Isolator team.

Thursday, 6 November 2008

The Fifth Value – Respect

There's a lot to be said about respect, but for some reason this value is not stressed enough. I think that much of what is troubling the software development world today can be traced back to this value. Let's face it: most developers are arrogant bastards (and yes, I'm a developer too).

Taken from Kent's book:

Every person whose life is touched by software development has equal value as a human being. No one is intrinsically worth more than anyone else. For software development to simultaneously improve in humanity and productivity, the contributions of each person on the team need to be respected. I am important and so are you.

Let's take a few examples I've encountered (OK, OK, I've said those):

  • "Our customers don't know what they want. I'll tell you what they want."
  • "Our test team doesn't have a clue about what they are doing or how to test the system."
  • "I don't trust our system deployment team to properly configure the system at the customer site."

And the list goes on.

The problem is that this kind of attitude, even if it's backed up with real hard facts, leads to a basic lack of respect, and this will affect the project's outcome.

I try to treat everyone the same (with partial success, I admit). When facing incompetence, I first try to see if it can be fixed (in many cases what looks like incompetence is the result of a simple lack of knowledge), and if it can't, I will look for a replacement. Most projects I've worked on don't have the luxury of carrying the less capable.

Wednesday, 5 November 2008

The Funny side of Metrics

From time to time I stumble upon a great post. The following post (yes, it's an old one):

Is it Wise To Aim for 100% NTF?

not only made me laugh, it really made me think.

Lesson learnt – Using humor to mask Wisdom is really effective.

Monday, 3 November 2008

The power of combining Unit tests with Integration Tests (conclusion)

Here and here I explained why I think that unit tests alone or integration tests alone do not do a good enough job of assuring the quality of the product. However, combining the two test levels, i.e. investing some of the effort in integration tests and some in unit tests, is what I call a winning solution.

Investing effort at both levels allows one to benefit from both worlds.

On one side, integration tests will help in:

  • formalizing the user requirements.
  • making sure the system is working end to end.
  • testing non-functional requirements.

On the other side, unit tests will help in:

  • driving the design of the system to a better one.
  • giving good coverage of all parts of the system.
  • eliminating defects that get shipped with the system.

I have been using this approach in a TDD style, in which the development process of a new feature (user story) starts by writing a few (failing) integration tests, followed by writing some more (failing) unit tests, and only then starting to implement. This kind of process has worked for us.
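
As a rough sketch of that flow (the shop, stock service and test names here are all invented for illustration, not taken from a real project), the first tests written for a new user story might look something like this, all failing until the implementation catches up:

// 1. a (failing) integration test describing the user story end to end
[TestMethod]
public void PlacingAnOrder_ReservesStockAndConfirmsTheOrder()
{
    var shop = new OnlineShop();               // hypothetical system entry point
    var confirmation = shop.PlaceOrder("book-42", 1);
    Assert.IsTrue(confirmation.Succeeded);
}

// 2. a few (failing) unit tests for the parts we expect to build
[TestMethod]
public void StockService_Reserve_MarksTheItemAsReserved()
{
    var stock = new StockService();            // hypothetical unit
    stock.Reserve("book-42", 1);
    Assert.IsTrue(stock.IsReserved("book-42"));
}

// 3. only now do we start implementing OnlineShop and StockService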

The million dollar question is of course "how much to invest at each level?", and the real answer is: I don't know. I think it really depends on the specifics of the project and the system under test. In some cases it makes more sense to invest more time at the unit level, while in others it's better to spend more effort at the integration level. I really think the best guide is to use common sense and see how things work out.

Tuesday, 21 October 2008

Unit tests are not enough (Part 2)

In a previous post I explained why integration tests alone will not be enough to create a high-quality product. However, assuming that unit tests on their own will be enough is also a Mistake I don't intend to repeat.

There is great value in adding integration tests (and specifically in writing them before coding is started) that is not gained by writing unit tests.

Desired Behavior

Unit tests test the actual behavior of a single small component. A good set of unit tests will assure that a unit does exactly what its programmer intended it to do. However, when a programmer has not fully understood the DESIRED behavior, his misunderstanding will not be caught by any unit test HE writes. Writing some integration tests will not only make sure that all units behave as desired, it will also do wonders for the programmer's understanding of the desired behavior.

Real Life

Unit tests are usually executed in a very sterile environment. One of the more common practices when writing unit tests is to isolate the tested unit from all external dependencies by using stubs or mocks. However, stubs have a weird habit of behaving exactly as they are told by the developer, which reflects HIS understanding of how the real component behaves. From time to time this understanding does not reflect the ACTUAL behavior of the component. One can't be sure until the component is tested against the real thing.
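
A tiny example of what I mean (the PriceService and DiscountCalculator classes are invented for this sketch; the Isolate calls are the same ones used elsewhere in this blog): the stub below happily returns whatever we tell it, and if the real service behaves differently, no unit test will ever notice.

[TestMethod]
public void Discount_IsAppliedToThePriceWeAssumed()
{
    // we TELL the stub that the price is 100...
    PriceService fakePrices = Isolate.Fake.Instance<PriceService>();
    Isolate.WhenCalled(() => fakePrices.GetPrice("book-42")).WillReturn(100m);

    // ...so the unit under test is only ever exercised against OUR assumption.
    // If the real PriceService returns prices in cents, or throws for unknown
    // items, this test will still happily pass.
    var calculator = new DiscountCalculator(fakePrices);
    Assert.AreEqual(90m, calculator.PriceAfterDiscount("book-42", 0.10m));
}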

Non-Functional Requirements

By nature, unit tests usually focus on functional requirements. It is usually much harder to unit test non-functional requirements. For example, not only is it hard to test performance at the unit level, in most cases it is kind of pointless. It's very hard to translate the performance needs of the system into a single testable criterion for a given component. Testing system performance makes much more sense at the system level.


Actually, the fact that unit tests are not enough is more or less understood by everyone I've encountered. However, after doing TDD for a long period of time it's very easy to forget that simple fact. I have been caught (and have caught others) more than once falling into the false assurance that a passing suite of unit tests gives.

Sunday, 19 October 2008

Integration Tests are not enough (Part I)

In the last year or so I have learnt the hard way that integration tests, too, are not enough.

Throughout this post I'm using the term "integration tests" to refer to a more specific kind of test, also known as User Acceptance Tests (UAT).

There is value in writing unit tests (and specifically doing TDD) that is not gained by writing integration tests.

Design

Integration tests are design agnostic. The main goal of integration tests is to prove that the system is working correctly from the USER's point of view. At most, when these tests are done TDD style, they will help the APIs of the system look better. If we want to leverage tests to drive the system's technical design, the tests must be aware of the different parts which make up that design (units) and explicitly test each of them (unit tests).

Coverage

Software testing is an exponential problem by nature. As a system gains complexity, the number of test cases needed to cover all possible paths and behaviors grows exponentially. Therefore it's not really possible to cover everything with integration tests. A feasible approach is "divide and conquer": the system is cut into several parts (units), and each part is tested separately (unit tests). To those tests we add specific tests that make sure all the parts play well together (integration tests).

Simplicity

Software systems are complex by nature. A large part of a system's complexity is actually caused by the need to handle all the things that can go wrong. I think I can fairly state that in most cases the number of tests written to make sure the system doesn't break in extreme cases is larger than the number of tests covering normal system behavior. The thing is, when thinking as a user, it's really hard to look at the system as a whole and figure out all the things that can go wrong. It's hard enough for us to think about what the user actually needs, so trying at the same time to look at the system and find all the corner cases is really hard. It's generally a lot easier (for most developers) to look at each part of the system (unit) on its own, analyze all possible failures, and write tests (unit tests) that make sure those failures are handled properly.


To share my past experience: I've fallen into the Mistake of assuming that integration tests would be enough. After neglecting testing at the unit level for long enough, I observed an increase in our incoming defect rate, and the most annoying fact about all those incoming defects was that most of them would have been caught by proper testing at the unit level.

Monday, 13 October 2008

Numbers Talk - Real Agile Case Studies

I've recently found out about the existence of the Agile Bibliography Wiki.

I believe this site is a must for all those interested in understanding the real measured effect various agile practices can have on projects.

I admit though that after diving into it for a couple of hours there's too much for me to digest at once. Therefore I will need some time to see what hidden gems I can find there.

The most interesting thing in this wiki (for me) is the existence of case studies which examine the effect that specific agile practices, such as TDD and pair programming, have. This gives me a good starting point in trying to show organizations that even the early stages of adopting agile methodologies can have a good ROI.

In any case, I would like to thank George Dinwiddie for pointing me to this site.

Sunday, 12 October 2008

Self Distracting Code

Here’s a code segment for you:

#define OPDEF( id, s, pop, push, args, type, l, OpCode1, OpCode2, ctrl ) id,
typedef enum enumOpcode
{
#include "opcode.def"
CEE_COUNT,/* number of instructions and macros pre-defined */
} OPCODE;
#undef OPDEF

Yes, I'm back to old hard-core C++ coding, just in case anyone was missing it.


Before I start, let me assure you that this piece of code actually works, and after understanding what it does, I admit it does so quite cleverly.

BUT:

  1. It took 3 experienced programmers, sharing about 15 years of C++ coding between them, 15 minutes to fully understand what is going on here.
  2. This segment is using one of the nastier tricks in the book – putting an “#include” statement in the middle of the file to achieve code replacement.
  3. At first look this code seems to be doing nothing – we define a macro that is undefined almost immediately without being used???
  4. In order to fully understand what is going on, one must open the "opcode.def" file and see its contents.

The real Mistake here (in my opinion) is that this code was not written with readability in mind. In short, don't code like this. As clever as this looks, it's really a pain to maintain.

And if you really decide to be so clever, don't assume that the next guy is as clever as you are; put in some comments to help the poor guy coming after you (which in this case was me).

(BTW, inside the opcode.def file the macro is used to define a bunch of other things, which stay defined even though the original definition is removed.)

AUT in C++

It's been some time since I left the C++ field in favor of .NET development, and recently I have taken some time to do some catching up. After focusing so intensely on developing a mocking framework, it was only natural for me to start by looking at what has changed in the area of AUT/TDD in C++.

The bad news is that, as far as I can tell, C++ AUT/TDD tools are still far behind those found in .NET and Java. However, there are some new tools on the block that seem to be heading in the right direction, so hopefully there is some change brewing.

Disclaimer: I am only starting to evaluate the changes and there is a good chance that I am missing some key change here.

Therefore, for the sake of my own personal knowledge, I thought it would be nice to go over the tools I have found and see if I can figure out how far they have progressed.

So I'm going to dedicate some posts to those tools I have managed to locate. For the sake of the experiment I limited myself to the tools I managed to locate fairly easily, assuming that if there are more tools that didn't show up, most likely others will miss them as well. However, if anyone out there knows of a tool which is worth mentioning here, just leave me a comment. In the end, I would really like to cover all the tools out there.

I’ll start out by reviewing the following testing frameworks:

  • cppunit
  • cxxunit
  • cppunitlite
  • unittest++

which will be followed by reviewing Mocking Frameworks:

  • mockpp
  • mockitnow
  • amop

Monday, 6 October 2008

Using Ready made solutions

Here's a paradox for you:

Before starting to work with .NET technology, I was a hard-core C++ programmer. During that time the general approach around me was to discourage the use of external source components. The general attitude was always "we can do it better".

This ingrained reflex was, and still is, one of the hardest things for me to overcome. When I started to work with .NET, the developers around me almost always tried first to find a ready-made solution before resorting to coding it themselves. Here's a nice post summarizing this approach: Search, Ask and only then Code.

I find both approaches somewhat funny:

C++ - if we are such great programmers, humility would suggest that other programmers can do just as well, so their code should be just as good as ours. Why not use it?

.NET - if we can't write good enough code, and we are definitely no more stupid than the average developer, then most likely other people's code is just as bad.

In short, taken to the extreme both approaches are bad. In today's world there is so much info out there that it's a shame not to look and see how things get done and avoid common mistakes made by others. However, relying too heavily on other people's solutions without fully understanding them (which in a lot of cases is exactly what happens) will in the end take more time to maintain and will prevent your own skills (as a programmer) from evolving.

Programming is a combination of knowledge and skill. Possessing just one won't be enough in the long run.

Sunday, 5 October 2008

Merge in SVN

At the end of the day there will be times in which branching occurs, resulting in the need to merge changes from one code line to another. Merging in SVN is a little different from what I expected, resulting in me making Mistakes from time to time.

Merge in SVN is a ternary operation

I, for one, was used to treating merge as a binary operation: you take one source file and merge it into another, resulting in the combined changes from both. In SVN, however, the merge operation always involves three factors. In fact, I would say a better name for merging in SVN would be "diff and apply". When merging, we take a given revision on the code line we want to merge from, compare it to some previous revision on that line, and apply the differences to a different code line, which is the target of the merge operation.

Example

1. Let's say we have a trunk and a branch, and we want to merge the last set of changes committed on the branch into the trunk.

The merge operation involves comparing the head revision of the branch to its previous revision (to extract the latest committed changes only) and then applying them to a local folder containing the head revision of the trunk.

2. Let's say we want to merge a complete branch back into the main trunk.

In this merge operation we take the head revision of the branch, compare it to the revision at which we branched off the main code line (this gives us all the changes made on the branch in relation to the main trunk), and then apply those changes to a local folder containing the head revision of the trunk.

Merge using TortoiseSVN

The good news is that the latest version of the TortoiseSVN client has a greatly improved merging GUI. The new TortoiseSVN includes a much simpler interface for doing the common merge operations. (The new version also has a mechanism for tracking merge operations to avoid duplicate merges, but since I haven't really used it, I'm not sure how useful it is.)

Friday, 3 October 2008

Source Branches

I hate branches. I think that branches are a bad solution for a stinking situation.

Now, before I get jumped on, I'm aware that there are cases in which branches are a perfectly legitimate solution to a given situation. Still, my advice would be to look at what is causing the need for the branch; in my experience there is an underlying problem that the branch will not solve.

At the last Alt.NET gathering I had a very nice discussion with Ken Egozi regarding branches. In that discussion Ken and I represented the two opposite approaches. Ken, on one hand, uses Git and relies heavily on its great branching ability, to the point (if I understood him correctly) that he treats each source change as a different branch. I, on the other hand, use SVN and try to avoid branches as much as I can.

During the last year I have used branches three times. In all of those cases I came to realize that the branching was just a mechanism that allowed me to ignore the real issues, and it only caused us to waste time and effort. Here are the stories behind those branches:

Branching to sustain Quality

I think this is the most common use of a branch. During our development we reached a stage in which we felt that, between solving defects and trying to add new features, we couldn't keep our code quality high enough, preventing us from releasing at our desired rate (about once every 2 weeks). So we branched. Like many other projects I've seen, we planned to use the branch as our release base on which we would solve defects, leaving us to implement new features on the main trunk.

Quite simple, right?

Well, not really. After trying that for a while we found out that we were always merging all defect fixes from branch to trunk (as expected) AS WELL as all changes from trunk to branch!!!

When facing reality, each new feature we worked on was important enough that when it was finished, the business side opted to merge it into the branch so we could release it as soon as possible. The result, of course, was that the trunk and the branch were practically identical. We just wasted time merging between the two code lines without addressing the real quality issues.

Branching To fend off a feature

Just before a trip, our "customer" insisted on implementing a couple of features at the last moment for "demonstration" purposes. Those features were quite controversial and we really didn't want to implement them as they were, especially on such short notice (in fact we had quite an argument about how they should look). However, since the "customer" was quite insistent, we chose to implement them on a branch, hoping to discard the branch at the end of the trip. We really wanted to sit down and make a proper decision on those features.

Naturally, in the end, those features were merged as-is immediately after the trip, and all attempts to resist that merge failed.

Branching To avoid completing a feature

My latest experience with a branch was caused by me leaving. As part of my final days I was charged with completing a task I had already started, a task which is just the first step in a series of improvements that are long overdue. When the task was finished, since no one actually knew how things would progress from there, it was decided to put the changes on a branch; putting them on a branch should allow picking them up later.

I myself am very much in doubt. I would say that most likely this branch will continue to exist and will never be used again. A branch in this case is just an excuse for putting the work aside while keeping a false sense of confidence that it can be picked up again. In fact, unless those changes are continued quite soon, they will be so much harder to pick up in the future that they will never be completed.

These were my experiences with branching. In retrospect, all these branches were just Mistakes on my part. In the first case, instead of dealing with the causes of the quality decline, I tried to shove them aside onto a branch. In the second, instead of accepting the decision, I tried using a branch to fend it off. And in the last, instead of admitting reality, again a branch was used.

I would very much like to hear about your experience in using branches. Is the dislike I have towards them just my own, or do people out there share it?

Thursday, 2 October 2008

Decoupling Testing from Design

A discussion has recently sparked up about whether or not testing practices can be decoupled from design practices. Roy, in his latest post "Unit Testing decoupled from Design == Adoption", has argued that one of the factors which prevents the masses from adopting AUT is the strong perception that AUT and design are strongly coupled, a perception which in his opinion is wrong. Udi, on the other hand, argues in "Unit Testing for Developers and Managers" that testing practices and design go hand in hand.

So can Testing be decoupled from Design?
Well, my answer is: yes, but probably not.

The Context

To clarify this cryptic answer, I first need to put those practices in context. As I see it, both design and testing are just means to reach an end. The ultimate goal is producing class A quality software: software which brings real business value to its user/customer. Most customers are completely indifferent to the system's internal design, and are usually not that interested in how it was tested. They just want a working, reliable system which answers their needs.

Decoupling Testing from Design

Like Roy, I believe that in a practical sense design and testing can be done independently. Too often have I heard managers and programmers alike claim that they can't adopt AUT/TDD because that will lead to a major system rewrite (oops, I meant redesign), and currently there is no time for that. This argument is just plain wrong (in today's world). Using the proper toolset, the need to redesign the system can be decoupled from the technical problem of writing unit tests.

So my answer is yes, one can decouple the technical need for a redesign from the ability to write unit tests.
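
For example (a sketch only: LegacyTaxTable and Order are invented stand-ins for the kind of static, hard-wired dependency that usually triggers the "we must redesign first" argument), an interception-based tool such as the Isolator can fake the dependency in place, without introducing interfaces or injection up front. The ability to fake static methods is the one mentioned in the mocking-frameworks post above; the exact call shape here is my assumption based on the AAA syntax used elsewhere in this blog:

[TestMethod]
public void Total_UsesTheConfiguredTaxRate_WithoutAnyRedesign()
{
    // fake a static call on a class we cannot (yet) change
    Isolate.WhenCalled(() => LegacyTaxTable.GetRate("IL")).WillReturn(0.155m);

    var order = new Order(100m, "IL");
    Assert.AreEqual(115.5m, order.TotalWithTax());
}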

Testing and Design Correlation

However, putting design and testing in the above context leads me to believe that both are just ways to achieve quality in a software system. In that respect, they can't be truly decoupled. It's like trying to decouple reading and writing skills from basic math skills: technically they can be decoupled, but in the end, if you want to get somewhere, you'll need to learn them both, no matter how hard that might be.

In fact, my experience has shown me that both practices are mutually beneficial. It's easier and safer to redesign a system when you have a comprehensive set of tests as a safety net, and it's much easier to write effective unit tests on a well-designed system. Still, one can redesign a system without automated tests (that's how most people do it), and I have yet to see a system, no matter how bad its design, that wouldn't have a strong ROI for adding some test automation.

TDD

However, many times in this discussion I get the feeling that we are missing the real essence here, and the essence is the message that the TDD approach brings. I have always felt (like many others) that TDD's true innovation lies in it being a design practice. Unit testing and automation concepts were here long before TDD, but only recently have these practices taken hold. In fact, today trying to separate them in a discussion is, well, kind of pointless. Too often I have referred to AUT and people heard and understood TDD. There is something more to TDD than just writing tests.

I think that Michael has captured the essence of TDD in his post: "The Flawed Theory Behind Unit Testing":

Unit testing does not improve quality just by catching errors at the unit level. And, integration testing does not improve quality just by catching errors at the integration level. The truth is more subtle than that. Quality is a function of thought and reflection - precise thought and reflection. That’s the magic. Techniques which reinforce that discipline invariably increase quality.

And that's how TDD captures me. What makes TDD tick for me is the strong coupling between design and testing that it brings into my daily process. For me, doing TDD is not just about writing unit tests. TDD is about investing time in thinking about how the system should look, writing tests to reflect those thoughts, and refactoring the code to make it so.

So, in the end, follow Ian's advice:

Make it work, then make it right.

Sunday, 28 September 2008

MSTest – ExpectedException and exception message.

Since this is my first blog post, I'll start out small.

Recently we migrated our unit tests from NUnit to MSTest. Although by and large the transition was quite smooth, one of the more annoying issues is the different behavior of the ExpectedException attribute.

An intended design or an oversight by MS?

I would (wishfully) think the latter is the case. But either way, I would say it's a Mistake.

In any case, since MSTest does not support asserting on the exception message, we had to revise some of the tests. The revision itself is quite simple: just wrap the test code with a try-catch clause. Something like:

[TestMethod]
public void SomeTest()
{
    try
    {
        // The test code:
        // ...
        // ...
        Assert.Fail("We should not get here");
    }
    catch (SomeExceptionType ex)
    {
        Assert.AreEqual(
            "Expected Message",
            ex.Message);
    }
}

However, after doing this for a couple of tests, we managed to come up with a slightly better pattern:

[TestMethod]
[ExpectedException(typeof(SomeExceptionType))]
public void SomeTest()
{
    try
    {
        // The test code:
        // ...
        // ...
    }
    catch (SomeExceptionType ex)
    {
        Assert.AreEqual(
            "Expected Message",
            ex.Message);
        throw;
    }
}

Although the difference is quite minor, we still found the second version much more expressive and readable.
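
For what it's worth, the same pattern can also be pulled into a small reusable helper; the ExceptionAssert class below is only an illustrative sketch (not part of MSTest), but it keeps the individual tests a little flatter:

public static class ExceptionAssert
{
    // runs the action and asserts that it throws TException with the given message
    public static void Throws<TException>(Action action, string expectedMessage)
        where TException : Exception
    {
        try
        {
            action();
        }
        catch (TException ex)
        {
            Assert.AreEqual(expectedMessage, ex.Message);
            return;
        }
        Assert.Fail("Expected an exception of type " + typeof(TException).Name);
    }
}

// usage:
// ExceptionAssert.Throws<SomeExceptionType>(
//     () => objectUnderTest.DoSomething(),
//     "Expected Message");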

To conclude, there's a lesson to be learnt here: when trying to replace a widely used framework, it's best to make sure that the transition will be as easy as possible (unless you are MS and can get away with it).

Saturday, 27 September 2008

First Post (Introduction)

Hi, my name is Lior Friedman and I'm a software developer.
This blog is all about me trying to recapture and share my past (and future) mistakes, in the hope that putting them on paper will help me remember and avoid repeating them in the future. I will try to focus on my own personal mistakes, but I’m sure that from time to time I will share those I have seen made around me.

I've been developing software in various fields and technologies for more than 10 years now (wow, time does fly), and I've made more than my fair share of errors along the way. The bigger ones are what got me to look into agile methodologies, and so far I have found them much more appealing and effective than the rest.

So do expect a good share of philosophy here; I will, however, always try to relate everything to real-world experience. Also, being a technical guy, I'll try to put in technical issues as much as I can, in the hope that someone out there will find this interesting.

This being an experiment, I really hope that you will share your thoughts and comments with me. Feel free to use the comments for anything you wish me or others to know. I would really like to know if someone out there finds this interesting or not.

 