Showing posts with label Integration Tests. Show all posts

Wednesday, 13 June 2012

On Various Testing Levels

[image: Agile Testing Quadrants]

One of my very first posts (dated end of 2008) dealt with the premise that one test level is not enough to reach a good quality product. I argued on the one hand that integration tests are not enough, but on the other that unit tests are not enough either. Only the power of combining both approaches will truly help make your product successful.

Has anything changed since then?

Since then I have had the chance to meet a lot of great people who taught me a lot about quality and specifically about testing. I met people like Lisa Crispin, who exposed me to the Agile Testing Quadrants, Michael Bolton, who taught me the difference between tests and checks, and Elisabeth Hendrickson, who really showed me that being a good tester takes a lot more than I imagined. These people also helped me refine and improve my understanding.
However, I still think that relying on a single level of testing is not an effective strategy.

Checked, Accepted and Explored

I think that I first heard this from Elisabeth Hendrickson:
Every feature (story) developed should be at least "Checked", "Accepted" and "Explored". Otherwise it is not Done.

Checked

For the most part I use the unit test level for this. Checked, for me, means that the code is doing what I think it does: it exhibits no unexpected logical errors, and the thing I intended it to do when I coded it is indeed what it does.
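As a sketch of what "checked" might look like at the unit level, here is a small example. The function and its behavior are entirely hypothetical, just something concrete to check against:

```python
import unittest

def parse_price(text):
    """Parse a price string like '$12.50' into whole cents. (Hypothetical example.)"""
    cleaned = text.strip().lstrip("$")
    dollars, _, cents = cleaned.partition(".")
    return int(dollars) * 100 + int(cents or 0)

class ParsePriceChecks(unittest.TestCase):
    # Each check pins down one thing I *intended* the code to do.
    def test_plain_dollars(self):
        self.assertEqual(parse_price("$12"), 1200)

    def test_dollars_and_cents(self):
        self.assertEqual(parse_price("$12.50"), 1250)

    def test_whitespace_is_ignored(self):
        self.assertEqual(parse_price("  $3.05 "), 305)

if __name__ == "__main__":
    unittest.main()
```

Notice these checks say nothing about whether parsing prices this way is what the user actually wanted; that question belongs to the next level.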

Accepted

For this I usually try to get as close as I can to end to end (E2E), excluding the GUI. Accepted, for me, means that the system behaves in a useful way; that is, the user (or other person) who "ordered" the feature is indeed satisfied that the system behaves in what they accept as a good way.
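A sketch of what such a just-below-the-GUI acceptance test might look like; all the names here (InMemoryCatalog, OrderService) are hypothetical stand-ins for real system entry points:

```python
# The test exercises the system through the same service the UI would call,
# phrased in the customer's terms rather than the code's.

class InMemoryCatalog:
    def __init__(self, prices):
        self._prices = prices

    def price_of(self, item):
        return self._prices[item]

class OrderService:
    def __init__(self, catalog):
        self._catalog = catalog

    def total(self, items):
        return sum(self._catalog.price_of(i) for i in items)

def test_customer_is_charged_the_sum_of_item_prices():
    # Acceptance criterion as the customer phrased it:
    # "the total is the sum of the prices of everything in the basket"
    catalog = InMemoryCatalog({"book": 700, "pen": 150})
    service = OrderService(catalog)
    assert service.total(["book", "pen", "pen"]) == 1000
```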

Explored

For this I use manual testing. Explored means that a person was involved who played around with the system and tried to judge the general quality of the solution. Usually there are specific questions to answer in this phase, but sometimes it takes the form of free-style testing. The goal of this stage is to use the brain power of a tester to see what else can be improved and whether or not the solution is indeed satisfactory.

Is that ALL?

Definitely not!
This is the bare minimum. In many contexts this is not enough: sometimes the system is just too big and needs testing at other levels, there are definitely non-functional requirements that need to be addressed, and much more.
However, this is a very good place to start. In some projects you can definitely achieve good enough results with only this, and if not, starting here will give you a strong foundation to base the rest of your test efforts on.

The Twist

But what if we're not in an agile context? Well, the principles stay the same. I believe that using these three simple aspects even when you are not developing an end-to-end feature will do you good. Even when you are working in a team focusing on a horizontal layer of the product, it's just a matter of aligning your definition of what the system under test is. Each piece of functionality you deliver still needs to be checked, accepted and explored; it's just that the accepted part takes a twist.
Since you are now working on part of the system, most likely your "customer" is not an end user but a fellow developer who needs to either integrate with your work or simply use it. However, if you treat him like your customer, he can still define his acceptance criteria. Most likely the resulting tests will be at the component (or subsystem) level and will not encompass the entire system, but either way the thing that you deliver will be checked, accepted and explored.

Thursday, 17 September 2009

Integration Tests are not enough - Revisited

A while ago I wrote that Integration Tests are not enough (Part I). I think that J. B. Rainsberger does the subject much more justice than I did in his latest session (Agile 2009), titled:

Integration Tests Are a Scam.

During his talk he also mentioned, as a side note, that what takes time and experience to master when doing TDD is the ability to "listen" to your tests and understand what they are telling you. I couldn't agree more; I wonder how we can help speed up this process.

Any thoughts?

Sunday, 14 December 2008

Windows 7 - Testing Aid.

Automating GUI testing still represents a big challenge for us. In fact, I haven't seen a good, affordable solution that helps in fully automating it, and where there is no automation, what we're left with is manual tests.

Sometimes, however, help comes from places you least expect. During a preview session for the upcoming Israeli Developer Academy III, Alon demonstrated some of the new features that will be included in the new Windows 7 OS. What caught my eye was a little application that Microsoft has included as part of the OS, called "Problem Steps Recorder". This application focuses on helping users produce detailed reports about the steps they took that resulted in a defect they wish to report.

Now what is the connection?

Simple: part of the problem with manual tests is that someone has to sit down and write the test scenarios down. That person needs to write down, step by step, the things he is doing and their expected outcome. This, however, takes effort and time.

The "Problem Steps Recorder" can help with this. I have only got a glimpse of the tool's output, but basically it's a zipped HTML file that describes the user's actions and gives screen captures of the output. One can also add some specific info to describe the various steps taken. So instead of writing this manually, all you need to do is turn it on, perform the actual test, and you get everything in a very nice format which can be shared with everyone.

Wednesday, 12 November 2008

Test Types – Are we confused yet?

Recently I’ve encountered several places which discussed the different types of tests. Here are some terms I’ve seen:

1) Unit tests – the most widely used term, which I’m afraid is still open to interpretation. Up until now I haven’t seen a good definition of what a “unit” is. Is it a single class? Is it a single component? Where lies the line between different units?

2) Integration tests – again kind of vague, but mostly it means a test which encompasses several “units”, sometimes the entire system.

3) Developer tests – this is a broader term which means tests written by developers. I myself don’t see much value in using it, since it doesn’t give any indication of the tests’ nature. (Also, what are non-developer tests? How do we refer to manual tests done by developers? How do we refer to UAT written by a developer?)

4) Manual/Automated tests – distinguishes how the tests are executed.

5) User Acceptance Tests (UAT) (also referred to as Verification tests) – tests whose main goal is to “convince” the user that the system behaves as it should.

Are we confused yet? Here’s a little something I found on the net.

[image: test types]

The point is that such wide terminology tends to lead to much confusion and misunderstanding. Before starting any sort of discussion, my advice would be to make sure that all parties share the same meaning and terminology. I’ve spent too much time arguing only to find that the person I was arguing with was just using a different meaning than me.

Tuesday, 11 November 2008

The Future of Unit testing

I’ve recently watched the PDC 2008 panel Session on the future of unit testing. The feeling I took from this session is that Automated testing is here to stay.

What gave me the indication that this is true is the fact that although the panel tried to focus on unit-level testing, the audience tended to shift the discussion into other zones. Most of the audience’s questions dealt with more complex scenarios they encounter in real life, going beyond the unit level.

For me this is a good indication that automated unit testing has taken hold. I’m not sure my interpretation isn’t mistaken, but it seems to me that the available tooling for writing unit tests has matured enough (yes, even if you don’t like using mock frameworks) to the point where people are now trying to leverage it in other testing areas.

As the panel mostly agreed, there is still a shortage of good tools for doing integration/user acceptance testing, and if one listened closely enough, it looks like at least some progress is being made in those areas.

Another point that most of the panel agreed upon is that unit testing is not enough; in the end there’s still a gap between the user requirements level and the unit level that must be bridged by other sorts of testing (integration tests/UAT).

If you want to get some more info on the session but don’t have the time to watch it fully, try reading this post by Eli or this one by Andrew.

Monday, 3 November 2008

The power of combining Unit tests with Integration Tests (conclusion)

Here and here I explained why I think that unit tests alone or integration tests alone do not do a good enough job of assuring the quality of the product, and that combining the two test levels, i.e. investing some of the effort in integration tests and some in unit tests, is what I call a winning solution.

Investing effort at both levels allows one to benefit from both worlds.

On one side, integration tests will help in:

  • formalizing the user requirements;
  • making sure that the system is working end to end;
  • testing non-functional requirements.

On the other side, unit tests will help in:

  • driving the design of the system to a better one;
  • giving good coverage of all parts of the system;
  • eliminating defects that would get shipped with the system.

I have been using this approach in a TDD style, in which the development process of a new feature (user story) starts out by writing a few (failing) integration tests, followed by writing some more (failing) unit tests, and only then starting to implement. This kind of process has worked for us.
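The outside-in rhythm described above can be sketched as follows. Everything here (the login story, App, hash_password, verify) is a hypothetical example, not code from any real project:

```python
import hashlib

# Step 3 comes last in practice: these implementations are written only
# after both levels of failing tests below exist.

def hash_password(password):
    return hashlib.sha256(password.encode()).hexdigest()

def verify(stored_hash, password):
    return stored_hash == hash_password(password)

class App:
    def __init__(self):
        self._users = {}

    def register(self, name, password):
        self._users[name] = hash_password(password)

    def log_in(self, name, password):
        stored = self._users.get(name)
        return stored is not None and verify(stored, password)

# Step 1 (written first, initially failing): an integration test for the story.
def test_registered_user_can_log_in():
    app = App()
    app.register("dana", "s3cret")
    assert app.log_in("dana", "s3cret")
    assert not app.log_in("dana", "wrong")

# Step 2 (also written before implementing): unit tests for the parts.
def test_password_hash_round_trips():
    assert verify(hash_password("s3cret"), "s3cret")
    assert not verify(hash_password("s3cret"), "other")
```

The integration test frames what "done" means for the story; the unit tests drive out the pieces that make it pass.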

The million dollar question is of course “how much do we invest at each level?”, and the honest answer is I don’t know. I think it really depends on the specifics of the project and the system under test. In some cases it makes more sense to invest more time at the unit level, while in others it’s better to spend more effort at the integration level. I really think that the best guide is to use common sense and see how things work out.

Tuesday, 21 October 2008

Unit tests are not enough (Part 2)

In a previous post I've explained why integration tests alone will not be enough to create a high-quality product. However, assuming that unit tests on their own will be enough is also a mistake I don’t intend to repeat.

There is great value in adding integration tests (and specifically in writing them before coding is started) that is not gained by writing unit tests.

Desired Behavior

Unit tests test the actual behavior of a single small component. A good set of unit tests will assure that a unit does exactly what its programmer intended it to do. However, when a programmer has not fully understood the DESIRED behavior, his misunderstanding will not be caught by any unit test HE writes. Writing some integration tests will not only make sure that all units behave as desired, it will also do wonders for the programmer's understanding of the desired behavior.
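A small sketch of this gap, using a hypothetical rounding requirement. Suppose the customer asked for "commercial rounding", where 2.5 rounds up to 3, but the programmer reached for Python's built-in round(), which rounds half-way values to the even neighbour:

```python
import math

def round_amount(x):
    # The programmer's implementation: Python's round() uses banker's
    # rounding, so round(2.5) is 2, not 3.
    return round(x)

# The unit test encodes the SAME misunderstanding, so it happily passes:
assert round_amount(2.5) == 2

# An acceptance test written from the customer's own wording would instead
# demand the half-up rule, and would fail against round_amount:
def commercial_round(x):
    return math.floor(x + 0.5)

assert commercial_round(2.5) == 3
assert round_amount(2.5) != commercial_round(2.5)
```

The unit tests are internally consistent and green; only a test written from the customer's point of view exposes that the wrong thing was built.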

Real Life

Unit tests are usually executed in a very sterile environment. One of the more common practices when writing unit tests is to isolate the tested unit from all external dependencies by using stubs or mocks. However, stubs have a weird habit of behaving exactly as they are told by the developer, which reflects HIS understanding of how the real component behaves. From time to time this understanding does not reflect the ACTUAL behavior of the component. One can't be sure until the component is tested against the real thing.
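Here is a sketch of that trap; the tax service and its contract are hypothetical. The real component returns the tax rate as a fraction, while the stub's author believed it returns whole percent:

```python
class RealTaxService:
    def rate(self, region):
        # The real component returns a fraction: 0.25 means 25%.
        return {"IL": 0.25}.get(region, 0.0)

class StubTaxService:
    def rate(self, region):
        # The stub encodes its author's belief: whole percent.
        return 25 if region == "IL" else 0

def price_with_tax(net, tax_service, region):
    return net * (1 + tax_service.rate(region))

# A unit test written against the stub passes -- and enshrines the mistake:
assert price_with_tax(100, StubTaxService(), "IL") == 2600   # 100 * (1 + 25)

# Only a test against the real component exposes the mismatch:
assert price_with_tax(100, RealTaxService(), "IL") == 125.0  # 100 * 1.25
```

The stub obediently confirmed the wrong contract; the integration test is what forces the two understandings to meet.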

Non-Functional Requirements

By nature, unit tests usually focus on functional requirements; it is usually much harder to unit test non-functional requirements. For example, not only is it hard to test performance at the unit level, in most cases it is kind of pointless. It's very hard to translate the performance needs of the system into a single testable criterion for a given component. Testing system performance makes much more sense at the system level.


Actually, the fact that unit tests are not enough is more or less understood by everyone I've encountered. However, after doing TDD for a long period of time it's very easy to forget that simple fact. I was caught (and caught others) more than once falling into the false assurance that a passing suite of unit tests gives.

Sunday, 19 October 2008

Integration Tests are not enough (Part I)

In the last year or so I have learnt the hard way that integration tests too are not enough.

Throughout this post I'm using the term "integration tests" to refer to a more specific kind of tests, also known as User Acceptance Tests (UAT).

There is value in writing unit tests (and specifically doing TDD) that is not gained by writing integration tests.

Design

Integration tests are design agnostic. The main goal of integration tests is to prove that the system is working correctly from the USER's point of view. At most, when these tests are done TDD-style, they will help the APIs of the system look better. If we want to leverage tests to drive the system's technical design, the tests must be aware of the different parts which make up the system design (units) and explicitly test each of them (unit tests).

Coverage

Software testing is an exponential problem by nature. As a system gains complexity, the number of test cases needed to cover all possible paths and behaviors grows exponentially. Therefore it's not really possible to cover everything with integration tests. A feasible approach is "divide and conquer": the system is cut into several parts (units), and each part is tested separately (unit tests). To those tests we add specific tests that make sure all the parts play well together (integration tests).
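The arithmetic behind "divide and conquer" can be illustrated with a toy example. The component and case counts are made up purely for illustration:

```python
# Suppose a system has 4 components, each with 6 interesting cases.
components, cases = 4, 6

# Covering every combination of behaviors with end-to-end integration
# tests grows multiplicatively:
end_to_end = cases ** components          # 6^4 = 1296 tests

# Covering each component in isolation grows additively, and we add a
# handful of integration tests to show the parts talk to each other:
unit_level = components * cases           # 4 * 6 = 24 tests
integration_glue = 10                     # illustrative number

print(end_to_end)                         # 1296
print(unit_level + integration_glue)      # 34
```

The gap only widens as the system grows: each new component multiplies the end-to-end count but merely adds to the unit count.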

Simplicity

Software systems are complex by nature. A large part of a system's complexity is actually caused by the need to handle all the things that can go wrong. I can fairly state that in most cases the number of tests written to make sure the system doesn't break in extreme cases is larger than the number of tests covering normal system behavior. The thing is, when thinking as a user, it's really hard to look at the system as a whole and figure out all the things that can go wrong. It's hard enough for us to think about what the user actually needs, so trying at the same time to look at the system and find all the corner cases is really hard. It's generally a lot easier (for most developers) to look at each part of the system (units) on its own, analyze all possible failures, and write tests (unit tests) that make sure these failures are handled properly.


To share my past experience, I fell into the mistake of assuming that integration tests would be enough. After neglecting testing at the unit level for long enough, I observed an increase in our incoming defect rate, and the most annoying fact was that most of those incoming defects would have been caught by proper testing at the unit level.

 