Tuesday, 21 October 2008

Unit tests are not enough (Part 2)

In a previous post I've explained why integration tests alone will not be enough to create a high quality product. However, assuming that unit tests on their own will be enough is also a Mistake I don't intend to repeat.

There is great value in adding integration tests (and specifically in writing them before coding is started) that is not gained by writing unit tests.

Desired Behavior

Unit tests test the actual behavior of a single small component. A good set of unit tests will assure that a unit does exactly what its programmer intended it to do. However, when a programmer does not fully understand the DESIRED behavior, his misunderstanding will not be caught by any unit test HE writes. Writing some integration tests will not only make sure that all units behave as desired, it will also do wonders for the programmer's understanding of the desired behavior.
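As a minimal, hypothetical illustration (the discount rule, the ApplyDiscount function and its numbers are all made up), a unit test written by the same programmer only pins down what he believed the rule to be, while a test written from the desired behavior would expose the misunderstanding:

#include <cassert>

// The requirement said "10% off orders of 100 or more", but the
// programmer understood it as "orders of MORE than 100".
int ApplyDiscount(int total)
{
    return total > 100 ? total - total / 10 : total;  // boundary bug
}

int main()
{
    // A unit test written by the same programmer encodes the same
    // misunderstanding, so it happily passes.
    assert(ApplyDiscount(100) == 100);

    // A test derived from the DESIRED behavior (the spec) would expect
    // ApplyDiscount(100) == 90, fail, and expose the misunderstanding.
    return 0;
}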

Real Life

Unit tests are usually executed in a very sterile environment. One of the more common practices when writing unit tests is to isolate the tested unit from all external dependencies by using stubs or mocks. However, stubs have a weird habit of behaving exactly as they are told by the developer, which reflects HIS understanding of how the real component behaves. From time to time this understanding does not reflect the ACTUAL behavior of the component. One can't be sure until the unit is tested against the real thing.
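Here is a small, hypothetical sketch of the problem (the IUserStore interface, the stub and the UserExists unit are all invented for illustration): the stub faithfully returns whatever the developer told it to, which may not match the real component at all.

#include <cassert>
#include <string>

// The external dependency the unit under test talks to.
struct IUserStore
{
    virtual ~IUserStore() = default;
    virtual std::string FindUser(int id) = 0;
};

// The developer ASSUMED a missing user yields an empty string,
// so that is exactly what the stub was told to do.
struct StubUserStore : IUserStore
{
    std::string FindUser(int) override { return ""; }
};

// The unit under test relies on that assumption.
bool UserExists(IUserStore& store, int id)
{
    return !store.FindUser(id).empty();
}

int main()
{
    StubUserStore stub;
    assert(!UserExists(stub, 42));  // passes against the stub...

    // ...but if the REAL store throws, or returns "<unknown>", for a
    // missing user, only a test against the real thing will catch it.
    return 0;
}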

Non-Functional Requirements

By nature, unit tests usually focus on functional requirements. It is usually much harder to unit test non-functional requirements. For example, not only is it hard to test performance at the unit level, in most cases it is kind of pointless. It's very hard to translate the performance needs of the system into a single testable criterion for a given component. Testing performance makes much more sense at the system level.
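To make that concrete, here is a hedged sketch of what a system-level performance check might look like; the ProcessOrder entry point and the 200 ms budget are made up for illustration. A budget like this is meaningful for the whole flow, but splitting it across individual units would be guesswork.

#include <cassert>
#include <chrono>

// Hypothetical end-to-end entry point; in a real test this would drive
// the deployed system, here it is just a stand-in so the sketch compiles.
void ProcessOrder()
{
    // ... the real end-to-end call goes here ...
}

int main()
{
    using namespace std::chrono;

    const auto start = steady_clock::now();
    ProcessOrder();  // exercise the whole flow, real dependencies included
    const auto elapsed = duration_cast<milliseconds>(steady_clock::now() - start);

    // "An order is processed within 200 ms" only makes sense at this level.
    assert(elapsed.count() < 200);
    return 0;
}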


Actually, the fact that unit tests are not enough is more or less understood by everyone I've encountered. However, after doing TDD for a long period of time it's very easy to forget that simple fact. I was caught (and caught others) more than once falling into the false assurance that a passing suite of unit tests gives.

Sunday, 19 October 2008

Integration Tests are not enough (Part I)

In the last year or so I have learnt the hard way that integration tests, too, are not enough.

Throughout this post I'm using the term "integration tests" to refer to a more specific kind of test, also known as User Acceptance Tests (UAT).

There is value in writing unit tests (and specifically doing TDD) that is not gained by writing integration tests.

Design

Integration tests are design agnostic. The main goal of integration tests is to prove that the system is working correctly, from the USER's point of view. At most, when these tests are done TDD style, they will help the APIs of the system look better. If we want to leverage tests to drive the system's technical design, the tests must be aware of the different parts which make up the system design (units) and explicitly test each of them (unit tests).

Coverage

Software testing is an exponential problem by nature. As a system gains complexity, the number of test cases needed to cover all possible paths and behaviors grows exponentially. Therefore it's not really possible to cover everything with integration tests. A feasible approach is "divide and conquer": the system is cut into several parts (units), and each part is tested separately (unit tests). To those tests we add specific tests that make sure all the parts play well together (integration tests).

Simplicity

Software systems are complex by nature. A large part of a system's complexity is actually caused by the need to handle all the things that can go wrong. I think I can fairly state that in most cases the number of tests aimed at making sure the system doesn't break in extreme cases is larger than the number of tests covering normal system behavior. The thing is, when thinking as a user, it's really hard to look at the system as a whole and figure out all the things that can go wrong. It's hard enough for us to think about what the user actually needs, so trying at the same time to look at the system and find all the corner cases is really hard. It's generally a lot easier (for most developers) to look at each part of the system (units) on its own, analyze all possible failures, and write tests (unit tests) that make sure these failures are handled properly.
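A tiny, hypothetical example of the kind of corner cases that are easy to see at the unit level and easy to miss when thinking about the whole system (the ParsePercent unit is made up for illustration):

#include <cassert>
#include <stdexcept>
#include <string>

// A small unit: parse a percentage value out of user input.
int ParsePercent(const std::string& text)
{
    if (text.empty())
        throw std::invalid_argument("empty input");
    const int value = std::stoi(text);   // throws on non-numeric input
    if (value < 0 || value > 100)
        throw std::out_of_range("percent must be between 0 and 100");
    return value;
}

int main()
{
    // The "normal behavior" case a user-level scenario would cover anyway.
    assert(ParsePercent("42") == 42);

    // Corner cases that are obvious when staring at this one unit, yet
    // rarely show up when writing scenarios from the user's point of view.
    try { ParsePercent("");    assert(false); } catch (const std::invalid_argument&) {}
    try { ParsePercent("150"); assert(false); } catch (const std::out_of_range&) {}
    return 0;
}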


To share my past experience, I've fallen into the Mistake of assuming that integration tests will be enough. After neglecting testing at the unit level for long enough, I observed an increase in our incoming defect rate. And the most annoying fact about all those incoming defects was that most of them would have been caught by proper testing at the unit level.

Monday, 13 October 2008

Numbers Talk - Real Agile Case Studies

I've recently found out about the existence of the Agile Bibliography Wiki.

I believe this site is a must for all those interested in understanding the real measured effect various agile practices can have on projects.

I admit though that after diving into it for a couple of hours there's too much for me to digest at once. Therefore I will need some time to see what hidden gems I can find there.

The most interesting thing in this WIKI (for me) is the existence of case studies which examine the effect specific agile practices, such as TDD and pair programming, have. This gives me a good starting point in trying to show organizations that even early stages in adopting agile methodologies can have a good ROI.

In any case, I would like to thank George Dinwiddie for pointing me to this site.

Sunday, 12 October 2008

Self Distracting Code

Here’s a code segment for you:

#define OPDEF( id, s, pop, push, args, type, l, OpCode1, OpCode2, ctrl ) id,
typedef enum enumOpcode
{
#include "opcode.def"
CEE_COUNT,/* number of instructions and macros pre-defined */
} OPCODE;
#undef OPDEF

Yes, I'm back to old hard-core C++ coding, just in case anyone is missing it.


Before I start, let me assure you that this piece of code actually works, and after understanding what it does, I admit it does so quite cleverly.

BUT:

  1. It took 3 experienced programmers, sharing about 15 years of C++ coding between them, 15 minutes to fully understand what is going on here.
  2. This segment is using one of the nastier tricks in the book – putting an “#include” statement in the middle of the file to achieve code replacement.
  3. At first look this code seems to be doing nothing – we define a macro that is undefined almost immediately without being used???
  4. In order to fully understand what is going on, one must open and read the content of the “opcode.def” file.

The real Mistake here (in my opinion) is that this code was not written with readability in mind. In short, don't code like this. As clever as this looks, it's really a pain to maintain.

And if you really decide to be so clever, don't assume that the next guy is as clever as you are. Put some comments to help the poor guy coming after you (which in this case was me).

(BTW, inside the opcode.def file the defined macro is used to define a bunch of other stuff, which stays defined even though the original macro is removed.)
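To make the trick a bit more approachable, here is a minimal self-contained sketch of the idiom (commonly known as the "X-macro" pattern). The opcode names and columns below are made up, and the list is kept in a macro rather than in a separate .def file just so the sketch compiles on its own; the real opcode.def has many more columns and entries.

// Illustrative stand-in for opcode.def: a list of OPDEF(...) rows.
// In the real code this list lives in a separate file that gets
// #include'd wherever it is needed.
#define OPCODE_LIST \
    OPDEF(CEE_NOP,   "nop",   0, 0) \
    OPDEF(CEE_BREAK, "break", 0, 0) \
    OPDEF(CEE_RET,   "ret",   1, 0)

// First expansion: keep only the identifier column to build the enum.
#define OPDEF(id, name, pop, push) id,
typedef enum enumOpcode
{
    OPCODE_LIST
    CEE_COUNT  /* number of opcodes listed above */
} OPCODE;
#undef OPDEF

// Second expansion: reuse the same list, this time keeping only the
// name column, to build a parallel lookup table.
#define OPDEF(id, name, pop, push) name,
static const char* const OpcodeNames[] = { OPCODE_LIST };
#undef OPDEF

The payoff of the trick is that the opcode list is written once and reused to generate several definitions that can never drift out of sync; the price, as argued above, is readability.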

AUT in C++

It's been some time since I left the C++ field in favor of .NET development, and recently I have taken some time to do some catching up. After focusing so intensely on developing a mocking framework, it was only natural for me to start by looking at what has changed in the area of AUT/TDD in C++.

The bad news is that as far as I can tell, C++ AUT/TDD tools are still far behind those found in .NET and Java. However, there are some new tools on the block that seem to be heading in the right direction, so hopefully there is some change brewing.

Disclaimer: I am only starting to evaluate the changes and there is a good chance that I am missing some key change here.

Therefore, for the sake of my own personal knowledge, I thought it would be nice to go over the tools I have found and see if I can figure out how far they have progressed.

So I'm going to dedicate some posts to the tools I have managed to locate. For the sake of the experiment I limited myself to those tools I managed to locate fairly easily, assuming that if there are tools that didn't show up, most likely others will miss them as well. However, if anyone out there knows of a tool worth mentioning here, just leave me a comment. In the end, I would really like to cover all the tools out there.

I’ll start out by reviewing the following testing frameworks:

  • cppunit
  • cxxunit
  • cppunitlite
  • unittest++

which will be followed by a review of mocking frameworks:

  • mockpp
  • mockitnow
  • amop

Monday, 6 October 2008

Using Ready-made Solutions

Here's a paradox for you:

Before starting to work with .NET technology, I was a hard-core C++ programmer. During that time the general approach around me discouraged the use of external source components. The general attitude was always "we can do it better".

This ingrained reflex was, and still is, one of the hardest things for me to overcome. When starting to work with .NET, the developers around me almost always tried first to find ready-made solutions before resorting to coding themselves. Here's a nice post summarizing this approach: Search, Ask and only then Code.

I find both approaches somewhat funny:

C++ - if we are such great programmers, humility would suggest that other programmers can do just as well, so their code would be just as good as ours. Why not use it?

.NET - if we can't write good enough code, and we are definitely not more stupid than the average developer, then most likely other people's code would be just as bad.

In short, taken to the extreme, both approaches are bad. In today's world there is so much info out there that it's a shame not to look and see how things get done and avoid common mistakes made by others. However, relying too heavily on other people's solutions without fully understanding them (which in a lot of cases is exactly what happens) will in the end take more time to maintain and will prevent your own skills (as a programmer) from evolving.

Programming is a combination of knowledge and skill. Possessing just one won't be enough in the long run.

Sunday, 5 October 2008

Merge in SVN

At the end of the day there will be times in which branching will occur, resulting in the need to merge changes from one code line to another. Merging in an SVN system is a little different from what I expected, resulting in me making Mistakes from time to time.

Merge in SVN is a ternary operation

I for one was used to treating merge as a binary operation: you take one source file and merge it into another, resulting in the combined changes from them both. In SVN, however, the merge operation always involves three factors. In fact, I would say that a better name for merging in SVN would be “diff and apply”. When merging, we take a given revision on the code line we want to merge from, compare it to some previous revision on that line, and apply the differences to a different code line that is the target of the merge operation.

Example

1. Let's say we have a trunk and a branch, and we want to merge the last set of changes committed on the branch into the trunk.

The merge operation involves comparing the head revision of the branch to its previous revision (to extract the latest committed changes only) and then applying them to a local folder containing the head revision of the trunk.

2. Let's say we want to merge a complete branch back into the main trunk.

In this merge operation we take the head revision of the branch, compare it to the revision at which we branched off the main code line (this gives us all the changes made on the branch in relation to the main trunk), and apply the changes to a local folder containing the head revision of the trunk.
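A hedged sketch of how both merges might look with the command-line client; the repository URL, revision numbers and working-copy path are made up for illustration:

# Example 1: pull only the latest branch commit (say r120) into a trunk
# working copy - diff r119:r120 on the branch, apply the result locally.
svn merge -r 119:120 http://svn.example.com/repos/branches/my-branch trunk-wc

# Example 2: merge the whole branch back - diff from the revision the
# branch was created at (say r100) up to its head, apply to the trunk.
svn merge -r 100:HEAD http://svn.example.com/repos/branches/my-branch trunk-wc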

Merge using TortoiseSVN

The good news is that the latest version of the TortoiseSVN client has greatly improved its merging GUI. The new TortoiseSVN now includes a much simpler interface for doing common merge operations. (The new version also has a mechanism for tracking merge operations to avoid duplicate merges, but since I've not really used it I'm not sure how useful it is.)

Friday, 3 October 2008

Source Branches

I hate branches. I think that branches are a bad solution for a stinking situation.

Now, before I get jumped, I'm aware that there are cases in which branches are a perfectly legitimate solution to a given situation. Still, my advice would be to look at what is causing the need for the branch; in my experience there is an underlying problem that the branch will not solve.

At the last Alt .NET gathering I had a very nice discussion with Ken Egozi regarding branches. In that discussion Ken and I represented two opposite approaches. Ken, on one hand, uses Git and relies heavily on its great branching ability, to the point (if I understood him correctly) that he treats each source change as a different branch. I, on the other hand, use SVN and try to avoid branches as much as I can.

During the last year I have used branches three times. In all of those cases I came to realize that the branch was just a mechanism that allowed me to ignore the real issues, and it only caused us to waste time and effort. Here are the stories behind those branches:

Branching to sustain Quality

I think this is the most common use of a branch. During our development we reached a stage at which we felt that, between solving defects and trying to add new features, we couldn't keep our code quality high enough, preventing us from releasing at our desired rate (about once every 2 weeks). So we branched. Like many other projects I've seen, we planned to use the branch as our release base on which we would solve defects, leaving us to implement new features on the main trunk.

Quite simple, right?

Well, not really. After trying that for a while we found out that we were always merging all defect fixes from branch to trunk (as expected) AS WELL as all changes from trunk to branch!!!

When facing reality, each new feature we worked on was important enough that when it was finished the business side opted to merge it into the branch so we could release it as soon as possible. The result, of course, was that the trunk and branch were practically identical. We just wasted time merging between the two code lines without addressing the real quality issues.

Branching to fend off a feature

Just before a trip, our “customer” insisted on implementing a couple of features at the last moment for “demonstration” purposes. Those features were quite controversial and we really didn't want to implement them as they were, especially on such short notice (in fact we had quite an argument about how they should look). However, since the “customer” was quite insistent, we chose to implement them on a branch, hoping to discard the branch at the end of the trip. We really wanted to sit down and make a proper decision on those features.

Naturally, in the end, those features were merged as-is immediately after the trip, and all attempts to resist that merge failed.

Branching to avoid completing a feature

My latest experience with a branch was caused by me leaving. As part of my final days I was charged with completing a task I had already started, a task which is just the first step in a series of improvements that are long overdue. When the task was finished, since no one actually knew how things would progress from there, it was decided to put the changes into a branch; putting them on a branch should allow picking them up later.

I myself am very much in doubt. I would say that most likely this branch will continue to exist and will never be used again. A branch in this case is just an excuse for putting the work aside while keeping a false sense of confidence that it can be picked up again. In fact, unless those changes are continued quite soon, they will be much harder to pick up in the future, so they will probably never be completed.

These were my experiences with branching. In retrospect, all these branches were just Mistakes on my part. In the first case, instead of dealing with the causes of the quality decline, I tried to shove them aside into a branch. In the second, instead of accepting the decision, I tried using a branch to fend it off. And in the last, instead of admitting reality, again a branch was used.

I would very much like to hear about your experience in using branches. Is the dislike I have towards them just my own, or do people out there share it?

Thursday, 2 October 2008

Decoupling Testing from Design

A discussion has recently sparked up about whether or not testing practices can be decoupled from design practices. Roy, in his latest post "Unit Testing decoupled from Design == Adoption", has argued that one of the factors preventing the masses from adopting AUT is the strong perception that AUT and design are strongly coupled, a perception which in his opinion is wrong. Udi, on the other hand, argues in "Unit Testing for Developers and Managers" that testing practices and design go hand in hand.

So can Testing be decoupled from Design?
Well my answer is: Yes but probably not.

The Context

To clarify this cryptic answer, I first need to put those practices in context. As I see it, both design and testing are just means to an end. The ultimate goal is producing class A quality software, software which brings real business value to its user/customer. Most customers are completely indifferent to the system's internal design, and are usually not that interested in how it was tested. They just want a working, reliable system which answers their needs.

Decoupling Testing from Design

Like Roy, I also believe that in a practical sense design and testing can be done independently. Too often have I heard managers and programmers alike claiming that they can't adopt AUT/TDD because that will lead to a major system rewrite (oops, I meant redesign) and currently there is no time for that. This argument is just plain wrong (in today's world). Using the proper toolset, the need to redesign the system can be decoupled from the technical problem of writing unit tests.

So my answer is yes, one can decouple the technical need for redesign from the ability to write unit tests.

Testing and Design Correlation

However, putting design and testing in the above context leads me to believe that both are just ways to achieve quality in a software system. In that respect they can't be truly decoupled. It's like trying to decouple reading and writing skills from basic math skills: technically they can be decoupled, but in the end, if you want to get somewhere, you'll need to learn them both, no matter how hard that might be.

In fact, my experience has shown me that both practices are mutually beneficial. It's easier and safer to redesign a system when you have a comprehensive set of tests as a safety net, and it's much easier to write effective unit tests for a well-designed system. Still, one can redesign a system without automated tests (that's how most people do it), and I have yet to see a system, no matter how bad its design, that wouldn't have a strong ROI for adding some test automation.

TDD

However, many times in this discussion I get the feeling that we are missing the real essence here, and the essence is the message that the TDD approach brings. I have always felt (like many others) that TDD's true innovation lies in its being a design practice. Unit testing and automation concepts were here long before TDD, but only recently have these practices taken hold. In fact, today, trying to separate them in a discussion is, well, kind of pointless. Too often I have referred to AUT and people heard and understood TDD. There is something more to TDD than just writing tests.

I think that Michael has captured the essence of TDD in his post: "The Flawed Theory Behind Unit Testing":

Unit testing does not improve quality just by catching errors at the unit level. And, integration testing does not improve quality just by catching errors at the integration level. The truth is more subtle than that. Quality is a function of thought and reflection - precise thought and reflection. That’s the magic. Techniques which reinforce that discipline invariably increase quality.

And that's how TDD captures me: what makes TDD tick for me is the strong coupling between design and testing that it brings into my daily process. For me, doing TDD is not just about writing unit tests. TDD is about investing time in thinking about how the system should look, writing tests to reflect those thoughts, and refactoring the code to make it so.

So, in the end, follow Ian's advice:

Make it work, then make it right.

 