Over the last couple of weeks I was helping a developer tackle some nasty issues with automated tests that kept failing for no apparent reason. The tests in question were not unit tests, nor were they end-to-end tests. Instead they sat somewhere at the component level, involving several units of code, a service or two, some threading, and communication with another process.
After watching the struggle continue for a couple of hours, I started to get the “writing tests shouldn’t be that hard” alarm buzzing. So I took a step back, realigned my thinking glasses (yes, I think using my eyes), and looked closely again. What I now saw was:
- The amount of time we were investing in this suggested a negative ROI.
- The mocking framework was being used to an extreme, taken to a place no mocking framework should go (see the sketch after this list).
- The tests sat somewhere in the middle and looked like a hybrid creature. On one hand they looked and smelled like focused unit tests, when in fact they were integration tests.
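To give a feel for the second point: here is a minimal, entirely hypothetical sketch (in JUnit 4 and Mockito, since the post neither shows the real code nor names the frameworks involved) of a test where mocking has taken over. Every collaborator is a mock, and the assertions merely replay the implementation as interaction checks.

```java
// Hypothetical sketch only - the real test code is not shown in the post.
import static org.mockito.Mockito.*;

import org.junit.Test;

public class OverMockedProcessorTest {

    // Tiny stand-in collaborators, defined here just to keep the sketch self-contained.
    interface Repository { String load(long id); }
    interface Channel { void send(String message); }

    static class Processor {
        private final Repository repository;
        private final Channel channel;

        Processor(Repository repository, Channel channel) {
            this.repository = repository;
            this.channel = channel;
        }

        void process(long id) {
            channel.send(repository.load(id));
        }
    }

    @Test
    public void processSendsLoadedRecord() {
        // Every collaborator is a mock, including the channel to the other process.
        Repository repository = mock(Repository.class);
        Channel channel = mock(Channel.class);
        when(repository.load(42L)).thenReturn("record-42");

        new Processor(repository, channel).process(42L);

        // The "assertions" simply restate the implementation as interaction checks,
        // so the test proves little about real behaviour and breaks on refactoring.
        verify(repository).load(42L);
        verify(channel).send("record-42");
        verifyNoMoreInteractions(repository, channel);
    }
}
```

A test like this never exercises the real service, the threading, or the communication with the other process it was supposedly written for, and it breaks on every refactoring, which is roughly where the negative ROI comes from.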
Today I met with the powers that be and got to hear the complete story. As it happens, these tests were actually quite old and had been written early on, when the team was just starting out. For one reason or another the group of tests was tagged with the Ignore attribute, causing them to be excluded from test execution. Since then the team realized they were going the wrong way with those tests, adjusted their approach, and what they have now is proving to be quite effective. The poor developer I worked with was the one who drew the short straw on fixing these old tests and bringing them up to date. The problem is that the Ignore tag had gone undetected for quite some time.
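As a rough illustration of how that happens (the post doesn't say which framework was in use, so this sketch uses JUnit 4's @Ignore; NUnit and MSTest have equivalent Ignore attributes): a test gets excluded "temporarily" and then stays excluded, with only a skipped count in the runner's output to give it away.

```java
// Hypothetical sketch only - the post does not name the test framework.
import static org.junit.Assert.assertTrue;

import org.junit.Ignore;
import org.junit.Test;

public class LegacyComponentTest {

    // Tagged Ignore "temporarily" long ago. Runners report it only as skipped,
    // which is easy to miss in a large suite, so the exclusion can go unnoticed
    // for a long time while the test quietly rots.
    @Ignore("Unstable - depends on the other process being up")
    @Test
    public void componentRoundTrip() {
        assertTrue(new LegacyComponent().roundTripSucceeded());
    }

    // Stand-in for the real component, defined only to keep the sketch compilable.
    static class LegacyComponent {
        boolean roundTripSucceeded() {
            return true;
        }
    }
}
```

Most runners and CI servers do report ignored tests in the skipped count, so watching that number is a cheap way to notice this kind of thing before years go by.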
So what can we learn from this:
- If you invest the time to write tests, you should make the effort to execute them.
- Tests that are not worth executing have no reason to take up source control space; they should just be deleted. (In our case, writing the tests from scratch would have been faster than fixing them, which you always find out only after you have finished doing both.)
- Like production code, tests should be maintained. The moment you stop doing so they start to rot, and the effort involved in fixing them seems to grow exponentially.
The title of this post was taken from the “Way of Testivus”. I’ll be talking more about these things at the next IDNDUG meeting and, more specifically, at the upcoming SIGIST conference.
4 comments:
Good thinking. I felt, though, that you are not in favour of component-level testing, which, IMHO, is a legitimate level in the testing pyramid. Component testing, with appropriate mocking of the rest of the system, allows you to separate the teams' areas of influence.
Component tests are definitely legitimate.
Personally I start my test investment at the unit and acceptance levels. I would test at the component level in bigger systems, as a means to make tests run faster, or when the teams themselves are built around components.
Ilya,
here is the long answer:
http://imistaken.blogspot.co.il/2012/06/on-various-testing-levels.html
:) I guess this is how blog posts are born :)