Monday, 20 December 2010

ALT.NET – 3rd Tool Evening

Another meeting of the Israeli ALT.NET group took place today. The session was divided into 5 parts of 20-30 minutes, each describing a specific tool.

Raven DB

The first tool was presented by Oren Eini (aka Ayende), who demonstrated Raven DB, a NoSQL database. I won't say much about the tool; there's more than enough info to be found here. The session, however, was hilarious. During Oren's talk the technical devil decided to unleash havoc. It started when the projector began to flicker unexplainably, causing the viewers a mild headache. Trying to reconnect the laptop and change the cable didn't work, until at last a new projector was brought. However, at these offices the projector screen is wired to fold when the main projector is shut down, which of course was not a good thing, so we ended up with this:

(Photo: DSC03506)

If you're wondering what you're seeing: one projector is projecting a big empty DOS window, on top of which a second projector is showing the real image.

(In parallel, Ayende also tried to install Raven DB on the main room PC; however, he decided to let it go when he found that the machine didn't have the .NET Framework installed.)

I'm just glad that my laptop wasn't having those issues.

Crap4Net (and some custom MSbuild tasks)

In the second part I showed the Crap4Net tool. In short, CRAP is a metric that combines code complexity and unit test coverage into a single number. The idea is to give some indication of where you, as a developer, should spend your time improving/cleaning your system. The custom MSBuild tasks part is just a side effect, but these tasks are aimed at people using the MSTest framework, improving the integration with the build server. They include several useful tasks, such as running all tests in a solution, converting the MSTest coverage report into XML (plus some XSLs that will produce nice HTMLs), and more.
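As a rough illustration (this is not Crap4Net's actual code), the CRAP formula popularized by the original Crap4J project combines the two measures like this:

```python
def crap_score(complexity: int, coverage: float) -> float:
    """CRAP1 metric as defined by the Crap4J project.

    complexity: cyclomatic complexity of the method.
    coverage: fraction of the method covered by tests (0.0 to 1.0).
    """
    return complexity ** 2 * (1.0 - coverage) ** 3 + complexity

# A fully covered method stays cheap regardless of complexity:
print(crap_score(20, 1.0))   # 20.0
# The same method with no coverage at all:
print(crap_score(20, 0.0))   # 420.0
```

A high score therefore flags complex, poorly tested methods, which is exactly where the tool suggests spending cleanup effort.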

Advanced Resharper

Next came Igal, who explained some advanced abilities of ReSharper. Since I'm a CodeRush user it was less relevant for me, but towards the end Igal showed that ReSharper has a debug mode that can be activated inside the VS IDE and contains lots of hidden features. For more info, go to Igal's blog post.

Rapid-Dev

In the next part, Dror demoed the Rapid-Dev tool we have been working on. For those who are interested, the tool can be found here, and its dedicated blog here.

NHibernate – Getting started

In the last session, Ariel demoed in 15-20 minutes how to get up and running with a trivial save/load application in NHibernate. A wonderful talk for those just starting to work with NHibernate. More details (with code) here.

All in all, a great evening for me.

Wednesday, 8 December 2010

Rapid-Dev - Release Update

A new version of Rapid-Dev is available for download. The exact details can be found here, but to summarize, there are 2 main changes:

  1. Added support for VS 2010.
  2. Added a "Reset" ability to instruct the Add-in to execute ALL tests again.

We also fixed a major bug that caused the add-in to crash during load time, along with some other improvements to stability and capability.

I would like to thank everyone who invested the time to check and evaluate the add-in, and specifically those who cooperated with us and helped us improve. Your effort is really appreciated, especially in light of the fact that things were not working for you at all.

Monday, 29 November 2010

So what is an Alpha Version?

Many software products I've used in recent years come with strange titles like Alpha/Beta/Release Candidate (RC)… but never once have I seen a definition of what those terms mean.

What's the real difference between an RC and a final release? What should I expect of a beta version? Should I even try to use an alpha version?

When it came time to release Rapid-Dev, we knew that it was not fully ready, so we needed to do some thinking about the exact term to use. On one side, the product is clearly not complete (check here for details). But is it in the alpha stage or the beta stage? How can I tell the difference? Should we wait and make it more stable, or are we waiting too long, risking investing time and money in the wrong directions?

For me the most important things are that the product delivers value to my users and that they are fully aware of its state (transparency).

Alpha Version – My interpretation

When we came to title the release, we did so in light of what we needed to achieve at this point. For me, getting user feedback as early as possible is critical; I want to know how best to help our users, even if I run the risk of exposing a version that is not 100% stable.

Therefore, we released as soon as the version met the following criteria:

  1. It provides enough core functionality to actually be of help.
  2. It demonstrates what exactly we want to achieve and how we are approaching it.
  3. It is stable enough (hopefully) not to annoy too many people.

And that's what I call an Alpha.

It's all about gathering feedback; that's the main purpose of an alpha version. It might not be (and probably isn't) as usable as it could be, and it's probably not as stable as it should be. However, it is a good base on which I can start the discussion with my users.

So when you try an alpha version, please invest five minutes of your time to let the developers know what you think, otherwise the main point of an alpha is missed.

Monday, 22 November 2010

Rapid-Dev - A Continuous Testing Add-in for VS2008

Over the last year I have been working on a Visual Studio add-in with the goal of minimizing the time developers wait for test execution. I'm happy to say that the add-in has reached the state in which we would like to share it with other people and get feedback on how they find it.

To get more information about it, you can go to:

The site has a short demo of how the add-in fits into the VS development environment (and of course a download link), while the blog will be used to share the story of the product's development.

Tuesday, 9 November 2010

How not to Organize an Event

I've been running this blog for 2 years now, and so far I have kept my ranting to a bare minimum. Today I'll make an exception (hopefully the last one).

I got a disturbing call a couple of days ago, telling me that unfortunately there are no more places at the Haifa Agile Tour event and "we're sorry, but please don't come".

In itself, not a big deal; I've missed more than one event due to space limitations. But getting this call just TWO days before the event (which I registered for a few months ago), AFTER I got a confirmation email saying "please come, and here is where you park", is just plain wrong.

When you organize an event, it's OK to:

  1. Limit the number of people that can register.
  2. Register everyone and later confirm only some of them.
  3. Even delay the event until almost the last minute while you do your best to arrange a bigger place.

Emailing people to say "we managed to get a bigger venue, please come", only to call them this late in the game apologizing with no clear reason, is NOT.

Doing so is disrespectful and insulting. It clearly states that some criterion other than first come, first served was applied. Really the last thing I would expect at an event sponsored by the Agile Tour organization.

So I contacted the organizing firm, and I got the following answer: no, nothing unexpected happened at the last moment; they simply ran out of space, and:

Since this conference is more relevant to people from the industry than it is to consultants, sadly we had to unregister consultants

That is, only "developers" were let in. Agile "coaches" and others (non-potential customers) were left out, since clearly this is less relevant for them.

So here are a couple of questions for the Ignite people:

Is that how you regularly do business?

Do you really think it's wise to tick off people (specifically the kind that make a living by connecting with people) just in order to attract a couple more potential customers?

If you were refusing people, why was the registration still open while you were doing so?

And my blog title is IMistaken...

Thursday, 16 September 2010

Things That Works – Time Management

Gil has posted two articles (Write down everything, bucket list; Juggling multiple tasks) about techniques he uses for time management that work for him. Since I liked his title so much, and since this is a problem I'm struggling with as well, I thought I'd add my own experience.

So first, let's describe my context (a solution is not relevant without the context in which it works/fails). Being an agile coach means that a large chunk of my time is spent with a few regular clients, plus some small gigs that pop up, and of course the stuff related to marketing, sales, and logistics. I'm also doing some academic research and have a couple of pet projects I try to advance.

Attempt 1

I started out using plain old memory power, and to tell the truth, most of the time that worked for me. However, over time, the number of things I juggle caused me to miss things which should not have been forgotten. So I understood I needed to back up my memory.

Attempt 2

The next step was to write everything down. I started out using physical notes; however, since most of the time I'm not in the same place, I found the need to carry the notes with me too cumbersome.

Attempt 3

Next, I rearranged everything into one big list (using Excel), applying the "product backlog" technique taken from agile methodologies (well, that's a surprise). And to a large extent that improved things a lot. Using a backlog allowed me to write down the big things I wanted to accomplish ("stories") and to break them into specific things I needed to do ("tasks"). It also improved prioritization, making sure that the important things got done first.

Lately, however, I recognized that a single unified backlog is, well, too simple. When examining my work week, I discovered that on most days my work was dedicated to a single main thing (with some small interruptions for other things). I also noticed that if I group the backlog along the lines of my work days, I can treat many of the things as their own separate projects, and I don't really need to prioritize between them. I just need to decide how much effort to allocate to each project, i.e., decide on which day I'm where.

Attempt 4

So finally my current scheme for handling things has molded into this:

For each “project” I do, which currently are:

  1. One per client.
  2. One per pet project.
  3. Research.
  4. Personal.
  5. Marketing.
  6. Logistics.

I keep a separate to-do list contained in a one-page plain text file. In all cases (and I think this is crucial) I keep the list less than a page long. Each list is sorted by priority, and every new item gets inserted into its proper place according to project and priority. When I finish tasks, I just delete them.

I no longer distinguish between "stories" and "tasks", since I found that after splitting the list into separate files, one page is enough to hold all the tasks.

At the start of each month I make an initial plan of how to split my days, which gets revisited at the end of each week (and any other time I feel the need).

At the start of every day, when I get to the place where that day's project is done (a client, home, …), I can just focus on that one specific project, from which I decide the goals for that day.

And the rest is still left to "memory power".

Summary

So far my last attempt is working OK for me. On the good side, I've managed to organize my week into complete days and, to some extent, decrease the amount of context switching. I'm consciously limiting myself, and when I can, I don't split a day between two or more projects. Within a day I keep my focus, and so far I've managed to minimize the misses.

On the down side, I think that overall my task list has grown and I'm starting to feel the pressure. I know that's probably because I'm taking on too much, and I think that happens because I've lost, to some extent, the overall view. I need to find a way to improve on that (or find a way to add more hours to the day).

Wednesday, 15 September 2010

Software Craftsmanship – Meeting

Today I gave a session on how to write better code at the Israeli Software Craftsmanship group. While I didn't manage to cover everything I planned to (I underestimated the time needed), I still managed to have a great time.

For those who missed the session, and for those who wish to get a second glimpse at the slides, they can be downloaded here.

Sunday, 15 August 2010

Running all Tests in Solution

MSTest is a nice testing framework. However, it can be annoying from time to time. A while ago I blogged about this framework's inability to execute all tests in a given .sln file.
Recently I had some time, and I decided it was about time I did something productive about this (instead of my usual complaining), so I sat down to write a simple MSBuild task that runs all the tests in a solution.
Here is an example of a build script that uses this task:
<UsingTask
 TaskName="RunAllTestsInSolution"
 AssemblyFile="RunAllTestsInSolution.dll" />

<PropertyGroup >
 <MSTestLocation>C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE</MSTestLocation>
</PropertyGroup>

<Target Name="UnitTestsWithCoverage">
 <RunAllTestsInSolution
   MSTest ="$(MSTestLocation)\mstest.exe"
   SolutionFile="Example.sln"
   Parmaters ="/resultsfile:TestResults.trx" />
</Target>
As you can see, you just tell it which MSTest version to use, the solution file containing the tests you want to execute, and any additional parameters if needed.
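I don't know how the task is implemented internally, but the core trick, enumerating the projects listed in a .sln file so their test assemblies can be handed to mstest.exe, can be sketched roughly like this (the regex and the sample solution text are illustrative only):

```python
import re

# Project lines in a .sln file look like:
# Project("{type-guid}") = "Name", "relative\path.csproj", "{project-guid}"
PROJECT_RE = re.compile(r'Project\("\{[^}]+\}"\)\s*=\s*"([^"]+)",\s*"([^"]+\.csproj)"')

def projects_in_solution(sln_text: str):
    """Return (name, relative-path) pairs for every C# project in the .sln text."""
    return PROJECT_RE.findall(sln_text)

sample = '''
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "Example", "Example\\Example.csproj", "{AAA}"
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "Example.Tests", "Tests\\Example.Tests.csproj", "{BBB}"
'''
print(projects_in_solution(sample))
```

From there, a task would build each project's output path and pass it to mstest.exe via /testcontainer.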

You can download the source code from here.

*UPDATE*
There is a CodePlex project containing updated source code along with a couple of other useful MSBuild tasks. Here's the link.

Friday, 6 August 2010

ALT .NET – Take 4

At the start of next month, the ALT.NET group will be holding another unconference. This will take place on the first weekend of September.

If you want to come, find all the details and register using this link. (The exact place is yet to be decided.)

Tracking Actual Hours On Tasks

A couple of weeks ago, Oren Ellenbogen posted "Why tracking actual hours is so imperative?".

Earlier this week I had a discussion with a VP of R&D on exactly the same issue.

For me, tracking hours on tasks is, well, useless.

But here are some of the things they said about why it is important:

  • When people know they are being tracked, they become more committed and dedicated to their tasks.
  • We need to report on people's performance to higher management, and this kind of information is crucial to understanding how people perform.
  • We track times in order to verify how much is done on sprint-related tasks and how much time is invested in other things.
  • We track time in order to make sure that people are working enough hours.
  • It's important to correlate story points/ideal days to calendar time.
  • In order to improve our estimations, we need to track how good they were.
  • We can use this data to approach sprint planning in a more formal/mathematical way, resulting in better sprint planning.

These are all great reasons, but here are some things to remember:

  • Tracking this information takes time and effort. In fact, the discussion started with the VP saying he wants to tweak his ALM system in order to support a specific related scenario, which to me served as a warning that they might be investing too much time in this.
  • Tracking this data sometimes shifts the discussion from what was finished to how much time it took to finish it. The difference between the two might seem minor; however, for me, becoming a more productive team is all about focusing on finishing more things.
  • Agile and Scrum are all about the team. We would like to establish a performing team, and in order to do that we must focus on the team as a whole. Breaking down the team and tracking at the individual level is counterproductive to this. At the company it was becoming quite visible that everyone was being really careful that everything they worked on was credited to them. While this is not wrong, it's more important that the entire team finishes more things than that a specific person gets a "higher score".
  • Estimation is not an exact science. To this date we have not managed to nail it. I've seen many teams become better at sprint planning when they opted to forsake the exact math and started planning in a more flexible way. It still amazes me, but almost every time I see people get hung up on the math, they just get it wrong. I believe that most people have a great grasp of what they can actually do once you take the numbers out of the equation.

But this in itself is not a reason not to track actual hours. The benefits are still really important, right?

Well, yes, they are. However, in a Scrum process all the benefits are already there; you just need to know where to look for them:

  • One of the things I hear many times is that when doing Scrum, people get the feeling they are being micromanaged. The daily meeting, in which you tell the entire team what you have done, feels a little too much. I agree. The daily meeting is all the supervision people need; you don't need to count actual hours in order to let people know that everything they do is monitored. If you really want to motivate your team, just hang the sprint burndown chart above their heads. That's more than enough for most people.
  • Do we really need to report at a personal level? Does your CEO really care who did what and how much time they invested in everything? In most cases the answer is no. You (as the team manager) might feel you need this information, but higher management just wants to know that you trust your team and that the team is doing well. Don't let them meddle with your people; nothing good will come out of it.
  • This one is a little tricky: you really want to know how much effort is invested in unrelated things, right? Again, wrong. In most cases (and in this specific one) there is nothing you can do about those things; they had to be done, and you just needed to do them no matter what. What you do want to know is how much time and capacity you had for the related things, and that can usually be judged quite accurately just by looking at what the team did accomplish. In fact, this specific team did try to extrapolate from this data how much time to allocate to "non-sprint" work, but found they just couldn't do it.
  • Making sure people work enough hours? Is that really an actual goal anymore? If that's really important to you, just use your workplace's clock system and extract the exact time people were at work.
  • Relating estimation units to calendar units is quite important; however, that is achieved by tracking the team's velocity. There is no need to duplicate the effort.
  • Trying to improve estimation is the only thing I think might be worth doing. However, I think that tracking just the tasks whose estimates were significantly wrong should be enough, and I expect a team to be able to do that in a sprint retrospective without the need to track this formally.
  • Don't use math when you plan; as I said, it just doesn't work as well.
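To illustrate the velocity point in the list above (with made-up numbers, not data from any real team), relating story points to calendar time needs nothing more than an average over past sprints:

```python
import math

# Story points completed in each past sprint (example numbers only).
completed_points = [18, 22, 17, 21]

# Velocity: average points the team actually finishes per sprint.
velocity = sum(completed_points) / len(completed_points)

# Forecast: how many sprints would a 100-point backlog take?
backlog = 100
sprints_needed = math.ceil(backlog / velocity)
print(velocity, sprints_needed)
```

No per-task hour tracking is involved; the team's completed work per sprint is the only input.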

Summary

I see too much emphasis put on post-mortems in a process. It is important to inspect and adapt, but for most teams, especially teams which are new to agile and Scrum, the things you need to improve are much, much bigger than this. This is just sticking to old habits and old processes, in which, since planning was done at a personal level, this kind of information was really important. Channel your energy into how to get more things done instead of why you haven't done well last time. Finding productive ways to improve is more important than understanding why we were wrong in the past.

When you transition to agile, just take a leap of faith and see if you can do without this.

Trust me you can.

Wednesday, 14 July 2010

Unlocker is Back

One of the most useful development tools I've used in the past is Unlocker. This amazing tool is a life saver when you are developing against the file system, where usage may cause all kinds of file handles to be left hanging.

However, about a couple of years ago (I think), when I was forced to switch to a 64-bit Vista machine, I stopped using it since it didn't support that platform.

You can imagine my happiness when I found out this week, at the ALT.NET tool evening, that they now have a version that does support my configuration.

Who said that conventions are a waste of time?

Wednesday, 7 July 2010

ALT .NET – Second Tool Night

The Israeli ALT.NET group is having its second tool night next Monday.

WHEN?

on Mon. 12/07, 18:00-21:00

WHERE?

Sears offices,

Ekershtein building A,

HaMenofim 9 St,

Herzeliyya Pituach

CONTENT?

The format is for 4-6 sessions, by different presenters.

  1. Test Lint - Roy Osherove
  2. Code Rush & Resharper - Uri Lavy
  3. Process Explorer - Ariel Raunstien
  4. NDepend - Dror Helper
  5. IronRuby - Shay Friedman
  6. And if there will be time I'll demo the Testify Wizard

If you want to register and tell us you're coming, go here.

Hope to see you all there

Thursday, 17 June 2010

TDD .NET Course

Next month I'll be giving a 3-day course on TDD. The course will start on 12/7 and take place at the Sela house. If you want to join, or know people who might be interested, just leave me a comment. For more details about what will be covered, refer to the course syllabus here.

Thursday, 27 May 2010

Importance of prioritization

Knowing how to prioritize is a necessary skill.

When you open Windows Live Update, which claims to have 1 more

important update selected

and you see that the update name is

Games for Windows - Live V3.3

you kind of think that someone is missing something.


Design is contextual

I once wrote about the fact that there is no perfect, absolute design approach/strategy. To me, design always depends on the context of the system it is developed for, and it should evolve along with the system.

Furthermore, I believe design decisions should also be based on the context of the people involved, i.e., the development team. Meaning that a good design for a given system in one team might not be appropriate for a different team working on the SAME system.

To understand the previous statement, we first need to consider the main goal of software design: not what makes a good design, but what we would expect a good design to do for us, or why design is important. For me, a good system design is one that will minimize both the system's maintenance cost and the cost of developing new capabilities. And if we can agree on this definition, I would assert that it is only natural that the people working on the system must be considered in any design decision. Some people find a high level of abstraction easy to work with, but other people prefer a more procedural approach and find the same level of abstraction hard to understand. Even when I consider myself, I know that during different phases of my professional life my taste in design has changed, and what I find works for me these days does not resemble what I found easy to understand a few years ago (and only part of this can be attributed to better design capabilities).

When choosing a design, always balance your decisions between your design capabilities and the need to apply good design principles. Sometimes going one way because someone smart said it should be like this (even if that person is really smart) will not benefit you unless you understand why it is good.

In the end, no matter which way you go, you will need to evolve the design along with the product and along with the team applying and using that design.

Wednesday, 26 May 2010

Israel Craftsmanship – First meeting

Software craftsmanship is something that naturally speaks to me, so I couldn't miss the chance to meet other people sharing the same passion. Today the first meeting of the Israeli Craftsmanship group, arranged by Uri Lavy, was held at the offices of PicScout.

The meeting itself was more of a get-to-know gathering, and Uri gave a short summary of what software craftsmanship is, why it is important, and what the focus of the group will be. All in all, it was a very nice overview session which didn't go very deep but held many promises of more to come.

The nice thing about it was the surprisingly large number of participants. The seating space allocated by PicScout was completely filled, and people spilled in all directions. It seems that future sessions will require more space.

Monday, 24 May 2010

Starting TDD – Conclusion

How do I start TDD? is a question I've been asked too many times. Unlike many, I don't believe that in order to start you need the entire team's approval. In this series I've described a few situations one might find oneself in and offered some ideas on how to approach them. Although each developer has their own context, I hope you can find some resemblance in the described cases and find the ideas useful. Since this is what I'm doing for a living these days, I would love to hear about your way down the TDD road, specifically how you started and what the initial experience was like.

Last, in a previous post I was asked by Daniel the following:

So if a single developer on a team is driving the design of his code by TDD, do you really think it would work? The code is probably going to be different in design, hence others will probably have problems reading it, since it could be a bit of a paradigm shift. Also, if the TDD developer has a similar skill level to the rest of the members, the implementation time of each feature will probably be longer for his items, even counting the total time including corrections after the QA people. Do you really think things like this will go unnoticed?

Daniel, I don't know if it will work for you, but I have seen it work several times. The differences in design do tend to invoke a design discussion, which I have found to be a good discussion in which testable design is always defensible; in fact, I encourage these discussions, since they are a tool for winning people over. Regarding your second question, I think that a reasonable developer (in many scenarios) can make the overhead nearly invisible, and I would also expect that it won't take too long for that developer's skills to rise above those of his fellow programmers. However, I'm a believer in honesty and visibility. I wouldn't try to hide what I was doing from anyone, and when challenged I would answer that this is the way I know how to code, and point to the low defect rate that results from doing things this way.

Sunday, 23 May 2010

Starting TDD (Part 6)

If you are really lucky, you will find yourself joining a team which already practices TDD:

New developer joining a team doing TDD

You are just starting a new job (maybe at a new company). You have heard all about how the team is doing Agile (Scrum/XP) and how the tests are written before the code (TDD). In fact, you are nervously waiting to start doing all those wonderful things.

Take a breath.

Joining such a team is not an easy switch. If this is your first time doing TDD (and Agile), it is going to be kind of a shock; working in an agile team is different in many aspects, and TDD is just one of them. It can take some time to make the needed adjustments. I myself have not fully experienced this, but having helped others pass through the transition, I have learned that it can be a frustrating process, especially if you are not a new developer.

The best way to speed up the process is to make sure to pair a lot, i.e., don't work alone! Always ask to work along with somebody else on the team. Even if the team is not practicing pair programming on a regular basis, insist that during your initial phase you would like to sit with others and learn how to do things properly. Pairing is a great way to gain the needed knowledge and expertise (and the fastest way I know of).

Starting TDD (Part 5)

With some luck, one day the place you are working at will decide it wants to fully adopt TDD. On that day you might find yourself in the following scenario:

A lone developer in charge of implementing TDD

The team/manager have considered TDD; they chose to evaluate it and decided that it's a good idea. The decision was made, and in fact most of the team is quite happy with the change. However, since it's a busy time, they want to approach the subject with some care and have chosen you to start the ball rolling. That is, you are expected to lead the transition for your team, first by doing it yourself and then by helping the rest adopt the methodology.

The steps in this scenario are very similar to the ones described in Part 4; the focus, however, is to gather as much real hands-on experience as fast as possible.

Sometimes this kind of experience can be bought. If possible, bring in an outside consultant to help you with the transition. If not, try to make sure the next hire has hands-on experience.

If these options are not available, arrange for the whole team to attend some relevant gatherings/conventions. Go with the team to TDD courses. And in any case, when following the steps described in Part 4, make sure to share with and include other team members in the process. And last, practice a lot of pair-programming TDD sessions.

Remember, the idea is to share with and include the rest of the team even if they are not actively practicing TDD yet. A successful TDD adoption is largely a matter of mood and spirit, not only a technical problem.

But always remember: the key factor is to gather as much hands-on experience as possible and to have the patience to build upon that experience.

Sunday, 16 May 2010

Showing GoogleTests/JUnit Results in CC.Net

At a place I'm working at, we needed to integrate the results of tests running under the C++ GoogleTest framework into the team build machine, which is running CC.Net.

It appears that GoogleTest can output its results in the standard JUnit XML format; however, I couldn't locate a proper XSL that would transform the results into a nice report on the CC.Net dashboard.

The result is an XSL file that will create a nice-looking report from the GoogleTest output. To use it:

  1. Instruct GoogleTest (using the --gtest_output flag) to write its results to an XML file.
  2. Update the ccnet.config file to merge the resulting file into the CC.Net build report.
  3. Put the supplied XSL into the webdashboard/xsl directory of the CC.Net installation. Download the XSL from this link.
  4. Update the dashboard.config file to use the given XSL.
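For steps 2 and 4 above, the relevant configuration looks roughly like this (the file paths and the XSL file name are placeholders; step 1 is just a command-line flag, e.g. mytests.exe --gtest_output=xml:test-results.xml):

```xml
<!-- ccnet.config: merge the GoogleTest XML output into the build report (step 2) -->
<publishers>
  <merge>
    <files>
      <file>C:\Builds\MyProject\test-results.xml</file>
    </files>
  </merge>
  <xmllogger />
</publishers>

<!-- dashboard.config: register the downloaded XSL so the dashboard renders it (step 4) -->
<buildPlugins>
  <buildReportBuildPlugin>
    <xslFileNames>
      <xslFile>xsl\gtest-report.xsl</xslFile>
    </xslFileNames>
  </buildReportBuildPlugin>
</buildPlugins>
```

Note that the merge publisher must run before the xmllogger so the merged results end up in the build log that the dashboard transforms.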

Hope this will save somebody some time.

Thursday, 13 May 2010

Starting TDD (Part 4)

In some cases, the idea of TDD is seen as promising by the organization. Let's consider the following:

A Lone Developer in charge of evaluating TDD

You talked with your manager, and he thinks TDD might be a very good idea; in fact, he has given you a green light to evaluate the approach. Most likely, other team members are also in favor of checking this out, and all are waiting for your conclusions.

In this context the plan of action is very similar to the previous scenario. The difference here is that you are expected to do a more thorough job and to show results in a given timeframe.

Therefore, you still need to pass through the learning phase, and you also need to set up a CI server and pick out the tools before starting. Time-boxing the learning phase is a good idea: while you still need to gain the knowledge, you would like to do it as fast as possible, so pick the training solution that works fastest for you (for many, a TDD course does a decent job in a short timeframe). On the other hand, you would like to invest more time comparing and evaluating several options for your tool set (which includes the CI product); at the end you will find it easier to justify your choices if you can show that alternatives were considered.

The next step is picking a good place to start actually writing tests. Since you are evaluating the approach, you probably have the option of picking where you want to start, and the idea is to pick (for the first experiments) a place that you feel comfortable working on; a place which will not be too hard. Try locating a small, decoupled section that is logic-oriented. At this stage, don't try to tackle the most difficult place; there will be time for that as well. First gain some hands-on experience with TDD.

Like the initial stages, you would like to time box this first effort. When you finish, don't forget to share your findings (and your difficulties). Sharing along the way will help you when the evaluation is over and a decision needs to be made.

Depending on the time allocated, you might have a chance to tackle a "tougher" piece with unit tests. If you are working on a legacy system, now is a good time to approach the issues such a system presents. You will need to gain more knowledge on refactoring, and reading Michael Feathers' book is a very good idea at this point. Going through legacy code is harder and takes time. The trick to succeeding with TDD is to have plenty of patience, and this becomes evident when working on legacy code. Getting help is also a good idea. If possible, try involving other team members via pairing sessions. Pairing on the harder challenges is a great experience in itself and will also shorten the time needed to resolve the issues. As a side effect, you introduce TDD to others on your team.

In the end, always remember that you have a specific goal: to understand why TDD will be good for you and your team, and to show how it can be adopted by the rest of the team. So don't wait for the time to run out; regularly reflect on what the experience has taught you, what the benefits were, and what the challenges are. During the evaluation, communicate with the rest of the team and draw them into the discussion. In the end you will probably need their weight to make the organization commit to TDD.

Wednesday, 12 May 2010

Starting TDD (Part 3)

In the previous post I talked about how, even in the most hostile environment, one can do things that will help on the road to becoming a better programmer. Fortunately, the scenario described in the previous post is quite rare, and in most cases things are not as bad.

The following is probably a more reasonable scenario

A lone Developer in an indifferent organization

You discussed TDD with your boss, and while he is not against it, he doesn't see enough value in it. Most likely his reaction went along the lines of:

Yes, automated unit tests are great but …

On the other hand, you generally get enough freedom as long as you get the job done. Other team members also think TDD is an interesting concept, but again, no one has yet made a move.

Start Learning

If you find yourself in this situation (which I believe is quite common), there's much you can do. As always, start out by learning: books, blogs, lectures, videos... anything will do. There is even a slight chance you can persuade your company to spend some money and send you to an actual TDD course.

Setting up a CI server

The next step is setting up a Continuous Integration server. Yes, I'm serious. If you are going to practice TDD, you should have a full-blown CI server at your disposal. Setting one up takes somewhere between 30 minutes and a few hours. The benefits are huge, and down the line it will definitely help you convince others (most managers just love the visibility a CI server gives).

BTW, you don't need a dedicated machine for this. If you can't locate a spare machine (and any machine will do), just set it up on your own development station.
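For a .NET shop, CruiseControl.NET (the same CC.NET used elsewhere on this blog) is one easy way to get started. Below is a rough sketch of a minimal ccnet.config; the project name, paths and file names are placeholders I made up, so consult the CC.NET documentation for your actual setup:

```xml
<cruisecontrol>
  <project name="MyProject">
    <!-- poll for changes and build on every commit -->
    <triggers>
      <intervalTrigger seconds="60" />
    </triggers>
    <tasks>
      <!-- compile the solution -->
      <msbuild>
        <executable>C:\Windows\Microsoft.NET\Framework\v3.5\MSBuild.exe</executable>
        <workingDirectory>C:\Projects\MyProject</workingDirectory>
        <projectFile>MyProject.sln</projectFile>
        <buildArgs>/p:Configuration=Release</buildArgs>
      </msbuild>
      <!-- run the unit tests with whatever runner you picked -->
      <exec>
        <executable>C:\Program Files\NUnit\bin\nunit-console.exe</executable>
        <buildArgs>MyProject.Tests.dll</buildArgs>
      </exec>
    </tasks>
  </project>
</cruisecontrol>
```

Point the exec task at the test runner of your toolset; the dashboard will then show a red/green status for every commit, which is exactly the visibility managers love.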

Picking out the toolset

The next step is to pick the actual toolset for TDD. The most important thing here is not to let this slow you down. While there are several options out there for every technology, for the most part the differences between the tools will have no effect at this point. Later on you might want to change tools, but even if that happens the actual cost of switching is usually minimal, and you will have a much better understanding by then. Just pick the tools you feel will be the fastest to learn. Don't let the tools get in the way.

Practice TDD.

Now comes the fun part: just start doing TDD. Pick up your next programming task and allocate some time for doing it the TDD way. If possible, try to pick a standalone piece of code; if not, never mind. Don't worry about making mistakes. Don't worry if your tests don't seem like unit tests at all, and don't worry if you are not following the TDD rules to the letter. You are just learning. And remember, in most cases it can take a couple of hours, maybe even half a day, to write the first test. Things will become easier as you gain more practice.

You are not alone

Starting out with TDD is hard. Without help it's even harder. The fact that you are alone in your organization doing TDD doesn't mean you shouldn't have any. Reach out to the mailing lists. Luckily, the web is filled with very nice people who like to help. It might take some time, but you will get the answers you need. The real trick to succeeding in TDD is just to have patience. It takes time to master a new skill.

Share your accomplishments

Hopefully, after a few weeks or months you will be able to demonstrate the real benefits of TDD. If you stick with it, you will overcome the initial issues and prepare the ground for the team to follow. You will have real experience, and it will be much easier to convince others of the value of TDD.

But even if not, if you got to this point you are already a better programmer.

Monday, 10 May 2010

Starting TDD (Part 2)

In my previous post I explained that you don't need the entire team in order to start TDD. So let's look at some options for the lone programmer.

The question of where to start depends on the context you find yourself in. For example, let's consider the following:

A lone programmer in a hostile organization

You are a team member (or a team lead); you tried introducing TDD to the team, but they feel it's stupid. In fact, they are strongly against it. If that's not enough, your manager also thinks TDD is a total waste of time and is closely monitoring you to make sure you are focusing on writing production code only. And worst of all, the organization is a strict one: no tool can be used without going through the proper channels.

Yes, this is as bad as it can get, and there is not much you can do (other than maybe start refreshing your CV). But you can still start with the learning part. TDD takes some knowledge to do properly; it's not enough to just decide and jump ahead. There are practices to follow, tools to be learned and techniques to be adopted. I have never encountered an organization that will block a determined programmer from acquiring knowledge, even if that knowledge is not directly related to the current work.

Here are some places where TDD knowledge can be gathered:

Books:

  1. The Art Of Unit Testing - Roy Osherove.
  2. Test Driven Development: By Example – Kent Beck.
  3. Working Effectively with Legacy Code – Michael Feathers.
  4. Clean Code: A Handbook of Agile Software Craftsmanship – Robert C. Martin.

Mailing Lists:

  1. TDD on Yahoo Groups
  2. Extreme Programming on yahoo

There are also numerous conferences and blogs dealing with TDD and other agile aspects; some cover agile development in general, while others are more specific to a single topic (many conference sessions are recorded and available online).

It might also be a good idea to brush up on proper design principles and refactoring techniques as well. Even if you won't be able to practice TDD, writing clean, properly designed code is always a good idea, and I have yet to meet someone who objects to that.

Starting TDD (part 1)

One of the most common questions I encounter is how to start with TDD. Usually the question is phrased as: how do I get my team to pick up TDD?

There are many answers to this, and in fact it really depends on the specific context (more on this later), but one thing I do know:

You don’t need the entire team

A popular mistake is to think that a lone developer can't do TDD and that the entire team needs to do it together. This contradicts the fact that TDD is a solitary practice, i.e. you can do it all by yourself. Think about it: don't you write code alone? TDD is a development methodology; it's an exercise in producing high-quality code. In that respect it's no different from doing low-level design or writing clean code.

Do you need the rest of your team to design properly in order to produce good design (of your parts)?

Naturally, the effect a single developer has is limited and probably won't go beyond his own code; however, doing TDD, even alone, will make his code better (and most likely finished faster). Of course it's better to have the entire team do TDD. It's usually easier and will increase the quality of the entire product, but there is no reason to wait for everyone else to realize that, is there?

So where do we start?

Like always, the answer is: "It depends…"

But more on this later.

Thursday, 6 May 2010

Using CollectionAssert of MSTest

Here is a nice trick to apply when using CollectionAssert.AreEquivalent in MSTest. The signature of the method is:

(ICollection expected, ICollection actual)

When using it in your code, switch the actual and expected: pass your execution result as the first argument and your expected collection as the second. Doing this will give a better, more informative error message during failures.

Let's look at an example.

I have a test which ends with this:

var actual = CrapAnalyzer.CreateCrapReport("", "");

var expected = new Dictionary<string, double>();
expected["MyClass.Method1"] = 1.34;
expected["MyClass.Method2"] = 1.22;
expected["MyClass.Method3"] = 5;

CollectionAssert.AreEquivalent(
    expected as ICollection,
    actual as ICollection);

When run as is, I get the following:

CollectionAssert.AreEquivalent failed. The expected collection contains 1 occurrence(s) of <[MyClass.Method3, 5]>. The actual collection contains 0 occurrence(s).

When I switch the collections, I get:

CollectionAssert.AreEquivalent failed. The expected collection contains 1 occurrence(s) of <[MyClass.Method3, 1.12]>. The actual collection contains 0 occurrence(s).

While at first glance both seem the same, the first one gives me no information that is not already written in the test code. The second error, on the other hand, actually gives a good description of what happened. Reading it, I can clearly see that the result value for Method3 was 1.12 instead of the 5 expected by the test code. Now I know the actual difference, and there's a good chance I can understand what went wrong without opening a debugger.

Wednesday, 28 April 2010

The things alcohol make you do

A consultant and a developer went to a bar.....

And after a few drinks they got into a stupid discussion about how bad programmers are.

One said that most programmers will look at a class, and if they think they will only need a single instance of that class in their system, they will instinctively make that class a Singleton.

The other would not accept that this could be.

So who do you think is right, and who is clearly mistaken?

And while you're at it, can you tell who said what?


Managing a one man project

Ken Egozi asked me to post an answer I gave on the Alt.NET mailing list, so here goes.

The thread started with someone asking for an online Gantt service and later evolved into how to manage a project without a Gantt chart. The specific project is quite small (about 2 months total) and will be done by a single person. However, the client is pressured and wants to be sure it can be done by the specified target date.

Here's what I wrote:

If I were you I would start by:

Write down, as a list, all the things that need to be done (from the point of view of the client): not the technical 'how', just a list of things that define 'what' needs to be done.

After you have your list, sort it by priority and roughly estimate how long each item is going to take you.

When you finish, take a step back and check that everything makes sense, i.e. that you can complete it in the given time.

The next step is taking the first month or so and diving into details. Plan out your next 4 weeks in detail, trying to figure out exactly which technical steps are needed to complete every item on your list. (In most cases this level of detail does not interest the client; it is used by you to track progress and verify that you are on the right track.) When you finish, define your first milestone to contain all the items from the initial list you know you'll be able to finish in the first 2 weeks (make sure not to over-commit).

At this point make sure everything still adds up, i.e. if for some reason, after going into details, things look like they are going to take longer, reflect that back in the content of your project.

Now you can go to your client and:

  1. Agree with him on a delivery point/milestone every 2 weeks.
  2. Show him your commitment for the first 2 weeks.
  3. Show him a rough layout of the rest of the plan.
  4. Agree with him that after 2 weeks you will review progress and update the plan as needed.

If you want, I can email you an Excel backlog template I'm using to track and manage my pet projects. It's built for this kind of management and might help you put things in context.

Lior

What I actually described here is a very simplified Scrum process. In my opinion it will work in this context as is, mainly because this is a one-man project (and a relatively short one). This simple version will most likely not be great for a bigger team.

I was also asked to post the Excel template I would use for such a thing. So here's a link to a zipped file containing simple templates for both a product backlog and an iteration backlog (either of which can be used in this specific context).

Monday, 26 April 2010

What Have we Learnt Today?

As usual during the Practical Scrum course I give, at the end of the first day I gather some feedback in the form of two questions:

  1. What was the most surprising thing you have heard today?
  2. What is the most controversial thing you have heard today?

Here are the (unedited) answers I got this time:

The Most Surprising

  1. How much client expectation can effect estimations
  2. That in some projects a client can get a new version every couple of weeks
  3. New approach to estimations.
  4. That in 4 weeks we should finish X features (development, testing and documentation) and make them ready to be shipped.
  5. Its important to leave room for “exciters” features.

The most controversial

  1. Less documentation if at all.
  2. Priority Poker technique
  3. Everyone using a waterfall approach is following a “flawed" model
  4. That “loaded” requirements document can cause estimation to increase
  5. There is not enough consideration for the unexpected.

JustMock – New Mocking Framework

The world of unit testing in .NET has recently become just a little more interesting. This month Telerik announced the beta of its new JustMock mocking framework.

The interesting thing about this framework is that, for the first time, a new tool is actually trying to compete with the power of TypeMock's Isolator framework.

JustMock, like Isolator, uses the CLR profiler API to intercept method calls, allowing mocking of virtually any kind of class or method in your code, including static methods, private methods, sealed classes and more.

At first glance the JustMock API looks very similar to the AAA syntax of the Isolator tool. What I did notice, however, are two main differences:

  1. It appears that JustMock has two modes of operation: a standard mode, in which the profiler is not enabled, allowing mocking only of 'overridable' methods, and an advanced mode in which the profiler is used and every feature of the tool is turned on. On the upside, disabling the profiler should yield better performance; however, once you decide to use any of the advanced methods (even in a single test), the profiler is turned on for everything. I suspect that the actual reason for this lies in the old Design for Testability debate, in which many state that using such a powerful tool might lead to a degraded design. Having two modes of operation allows everyone to choose whether to permit advanced usage of the tool or to enforce a more rigid design strategy.
  2. A quick comparison of the beta version's abilities to those of the Isolator framework shows that at this point JustMock is in its initial stages and is lacking many of Isolator's capabilities. However, according to its documentation, the much-desired ability to mock classes from mscorlib.dll is fully supported. This means that we can finally mock out things like the file system, the system clock, all kinds of collections and more.

What is left to be seen is how long it will take Telerik to close the gap between their beta version and a mature release which can actually compete with the other mocking frameworks out there. And, of course, what the pricing of this framework will be (no, it's not going to be open source or free).

For my part, I'm very happy to see more tool options out there, believing that the resulting competition will benefit us all. I also plan on diving deeper into JustMock to get a better understanding of its actual usage and limitations.

Sunday, 18 April 2010

Resetting Visual studio

Lately I've been working with some Visual Studio add-ins and plug-ins. I'm testing a few new ones and even playing around learning how to develop one of my own. Doing so caused me to break the IDE, leaving me in all sorts of weird situations. For example, one time I messed it up so badly I couldn't even create a new .NET project any more. For some reason the IDE insisted that I install some language tools (or something alike).

In the past this sent me into a reinstall process, which in most cases does solve the problem but takes about an hour or so.

Lately I found that, in most cases, issuing a reset command solves the problem as well. So when having problems with the VS IDE, try the following before reinstalling:

Reset the Visual Studio IDE

using the command: devenv /ResetSettings

Note: it might be worth backing up your current settings, to avoid having to set everything up again after the reset.

Thank you Steve for this post.

Reset a specific Add-in

using the command: devenv /resetAddin AddinName.Connect

Note: this should reset the add-in to its original state, which should make it work again.

Thank you Roy for this post.

More command line switches can be found here

Wednesday, 14 April 2010

Don't be a Smart Ass Coder - Exercise in Refactoring

Here's a piece of code I saw the other day.

The code is real, given after massive renaming to protect the guilty.

for ( int j = 0; j < 2; j++ )
{
    if ( j == 0 )
        pObject = Repository::GetInstance()->GetObjectByHandle(
            pParent->GetPartAHandle());
    else
        pObject = Repository::GetInstance()->GetObjectByHandle(
            pParent->GetPartBAHandle());

    if ( pObject )
    {
        ObjectGeometry objectGeometry = pObject->GetGeometry();

        for ( UINT i = 0; i < objectGeometry.NumRectangles(); i++ )
        {
            Rectangle* pRect = Repository::GetInstance()->GetRectanglesByHandle(
                pObject->GetRectnagleHandle(i, 0));
            if ( pRect )
                m_Repository.DiscardRect(pRect->GetHandle());
        }
        m_Repository.DiscardSpread(pObject->GetHandle());
    }
}

First reaction was WTF !@$^@%@%^

Second reaction was: come on, you're not serious.

After things really sunk in, it was "Smart Ass".

Don't be a Smart Ass coder

Really, don't! Showing how smart you are through code serves no purpose other than boosting your own ego. The immediate effect, however, is that it takes other people just a bit longer to read your code. Yes, you can probably deal with that, but show some respect for other people's time. Why make me read such a thing? Can't it be made simpler?

Of course it can, so let's do that:

Step 1

This chunk of the method in fact cleans some stuff in pObject, so let's state that:

for ( int j = 0; j < 2; j++ )
{
    if ( j == 0 )
        pObject = Repository::GetInstance()->GetObjectByHandle(
            pParent->GetPartAHandle());
    else
        pObject = Repository::GetInstance()->GetObjectByHandle(
            pParent->GetPartBAHandle());

    CleanObject(pObject);
}

void CleanObject(Object pObject)
{
    if ( pObject )
    {
        ObjectGeometry objectGeometry = pObject->GetGeometry();

        for ( UINT i = 0; i < objectGeometry.NumRectangles(); i++ )
        {
            Rectangle* pRect = Repository::GetInstance()->GetRectanglesByHandle(
                pObject->GetRectnagleHandle(i, 0));
            if ( pRect )
                m_Repository.DiscardRect(pRect->GetHandle());
        }
        m_Repository.DiscardSpread(pObject->GetHandle());
    }
}

Step 2

Do we really need a for loop with this weird if statement? Not really; in fact, using the loop index for branching is probably a code smell.

Also, who loops until 2?

Use the oldest counting scheme (1, 2, many), and only when reaching "many" use a for loop.

So getting rid of that, we get to:

pObject = Repository::GetInstance()->GetObjectByHandle(
    pParent->GetPartAHandle());
CleanObject(pObject);

pObject = Repository::GetInstance()->GetObjectByHandle(
    pParent->GetPartBAHandle());
CleanObject(pObject);

Step 3

The resulting CleanObject is doing two things:

  1. Clean all the rectangles inside object.
  2. Clean up the object itself.

Let's separate those:

void CleanObject(Object pObject)
{
    if ( pObject )
    {
        CleanRectnagles(pObject);

        m_Repository.DiscardObject(pObject->GetHandle());
    }
}

void CleanRectnagles(Object &pObject)
{
    ObjectGeometry objectGeometry = pObject->GetGeometry();

    for ( UINT i = 0; i < objectGeometry.NumRectangles(); i++ )
    {
        Rectangle* pRect = Repository::GetInstance()->GetRectanglesByHandle(
            pObject->GetRectnagleHandle(i, 0));
        if ( pRect )
            m_Repository.DiscardRect(pRect->GetHandle());
    }
}

Step 4

For good measure, let's simplify CleanRectnagles even further:

void CleanRectnagles(Object &pObject)
{
    ObjectGeometry objectGeometry = pObject->GetGeometry();

    for ( UINT i = 0; i < objectGeometry.NumRectangles(); i++ )
    {
        CleanRectnagle(pObject, i);
    }
}

void CleanRectnagle(Object &pObject, UINT &rectIndex)
{
    Rectangle* pRect = Repository::GetInstance()->GetRectanglesByHandle(
        pObject->GetRectnagleHandle(rectIndex, 0));
    if ( pRect )
        m_Repository.DiscardRect(pRect->GetHandle());
}

Final Result

And we end up with:

pObject = Repository::GetInstance()->GetObjectByHandle(
    pParent->GetPartAHandle());
CleanObject(pObject);

pObject = Repository::GetInstance()->GetObjectByHandle(
    pParent->GetPartBAHandle());
CleanObject(pObject);

void CleanObject(Object pObject)
{
    if ( pObject )
    {
        CleanRectnagles(pObject);
        m_Repository.DiscardObject(pObject->GetHandle());
    }
}

void CleanRectnagles(Object &pObject)
{
    ObjectGeometry objectGeometry = pObject->GetGeometry();
    for ( UINT i = 0; i < objectGeometry.NumRectangles(); i++ )
    {
        CleanRectnagle(pObject, i);
    }
}

void CleanRectnagle(Object &pObject, UINT &rectIndex)
{
    Rectangle* pRect = Repository::GetInstance()->GetRectanglesByHandle(
        pObject->GetRectnagleHandle(rectIndex, 0));
    if ( pRect )
        m_Repository.DiscardRect(pRect->GetHandle());
}

Food For Thought

It still doesn't look clean enough. Looking closely, one can easily spot misplaced operations: CleanRectnagles really looks like it belongs to the Object class. The same holds true for CleanObject and CleanRectnagle. Was that so obvious in the original code?

CleanRectnagle could use further simplification, resulting in a clean method which belongs to the Rectangle class.

etc, etc...

Summary

Take a look back at the initial piece of code. Yes, it's a little shorter than the resulting code, but how long would it take you to understand it?

Remember: when writing code, the goal is not to show how smart you are. The goal is to write simple, straightforward code which is easily understandable by all the programmers who read it later.

Wednesday, 24 February 2010

Customer Support

A couple of days ago I had a good conversation with a friend of mine regarding customer support. Before going into the actual discussion, I want to say that I'm a strong believer in actively engaging your customers/users at all levels. Specifically, one of the competitive advantages any organization can develop is having great support. In fact, there is no excuse for not having great support. You don't have to be a big company, you don't have to use any fancy management system, and you don't need to hire special experts. All you need is the will to establish great support and the persistence to actually go and do it.

Establishing great support, in my experience, has one of the biggest ROIs in the organization.

But back to the discussion. Specifically, what we talked about is how far they will go in exposing their support database to their users. Currently, while they do have great support, from a user's perspective, once a call has been acknowledged as a defect or a needed feature, there is no visibility into its progress.

In most cases, either you have stumbled on a critical thing, resulting in an almost instantaneous fix (sometimes in a matter of hours), or you have stumbled on something less important which goes onto an arbitrary, hidden to-do list. As a user, there is no middle ground. The things that go onto that future to-do list fall into a big black hole, and there is no telling how long it will take to actually get a fix.

On the other hand, when I asked why they do not publish the above-mentioned to-do list, he told me that he currently fears that seeing the entire picture might scare off some potential and even existing users.

To that I said not likely!

Granted, I'm sure some will be scared off if the list is big enough. However, as in any kind of relationship, being honest is probably the best long-term strategy. Hiding the truth from your users has a big chance of backfiring. As has been demonstrated on several occasions (and having been on that end myself, I can testify), you can't hide forever behind a hidden to-do list. Eventually users will start noticing that some defects go down that drain never to return, and if enough of them do, you will have to deal with a damaged reputation, which is even less fun. Hearing again and again that "this will be fixed/added in a future version", without a commitment to a time frame, becomes annoying unless you actually see those fixes arrive in a reasonable time. And you never, but never, want to have annoyed users.

Personally, I prefer to hear the truth. If you decided that the issue is not that important, just let me know. If it's that important to me, maybe I'll be willing to pay for it. If not, just help me work around it. I'm a grown-up, and I expect to be treated as one.

Of course, we also discussed some middle ground. One option was to allow all users to track only their own open cases, which probably won't change the fact that these things are not fixed, but will give a better indication of the actual status of things.

Naturally, in the long run the only true solution is to make the product so good, and the number of defects so low, that this becomes a non-issue. However, that's probably a whole different story.

So share with me: how are you handling support at your workplace? What more can be done to mitigate the problem and find a good compromise?

Wednesday, 17 February 2010

Flexible Planning

I sometimes have trouble explaining to people the basic concept behind the agile way of thought regarding project planning and management. Recently I came across this gem which, as far as I can tell, is unrelated to software, but in my eyes does a marvelous job of explaining these ideas.

Just to make sure I understood, I'll recap his main points.

There are two ways of planning and managing a project:

The classical way – set your goals as the results you want to achieve; from those, go backward and derive the steps needed to achieve them; from these, decide on your plan of action, and then just try to follow that plan.

Flexible planning – goes a little differently. You still start out by setting the goals as desired results, but instead of working backward from them, you start by examining where you are and then pick a single action that will bring you closer to these goals. When you finish that action, you evaluate your position again, and again pick a single action that will advance you toward your goal. Hopefully, given enough time, you reach your goal.

These two approaches are completely different. The first, when applied to software projects, results in a Gantt chart depicting your plan and in trying to force the organization to follow that plan. In some places management will even put in several checkpoints and allow the plan to change (which, when facing reality, is in most cases a must). The second approach is, in my opinion, embodied in all agile processes. It's an inspect-and-adapt process: you take one step, inspect your new state, and adapt your next actions.

The funniest thing about this is that I used to work at a company which was in all aspects attempting to be, and was, an "agile" company. However, in one of the higher management meetings, I was told to plan my work and my team's goals using the first approach. Sadly, only today can I recognize the contradiction.

Experience of a beginner

One of the things I really like about being a consultant is that I get to experience the first-time reactions of the people I consult to the new practices. Sometimes these reactions are suspicion, fear and doubt. But in some (happy) cases, when I'm able to bypass these, I see beginners take these practices and leverage them in ways I would only expect to see much farther down the line.

So what am I talking about?

Every programmer is given, from time to time, a task to change or fix a section of code he is unfamiliar with. Over time I've noticed that there seem to be two basic approaches to this situation:

Let's read and understand

One approach taken by many when facing new code is to treat it as a textbook to be understood. First the programmer will collect every piece of information he can get his hands on, specifically anything related to design. If possible, he will even ask for tutoring sessions with someone more knowledgeable. When that's done, he will continue to investigate the source by reading it until he reaches the level of understanding he feels comfortable with. ONLY then will he actually start changing the code to perform his given task.

Let's feel the code

This is a different kind of programmer. (Once I believed I was unique in doing this, but over time I saw that many do just the same.) When approaching new code, this programmer will start out by diving in with an editor and immediately "refactoring" the code to his liking. In fact, refactoring is a bit of an overstatement; in most cases it's just pure formatting and does not involve any real changes to the code. I think what the programmer is doing is trying to "feel" the code through the keyboard. Recently I spotted the same behavior in my kids when given a new toy: they start out by physically feeling the toy, and only once they feel comfortable do they actually start the "playing" phase. When dealing with software, the closest one can get to this is by applying an editor to the source text. Over time I understood this is a very powerful technique, but I also came to recognize its proper context, i.e. pure learning. Today I tend to throw away the result and start over when I'm done.

Using tests to Grok new code

After doing TDD for some time, you develop a new technique for handling new code, and this is what I saw one of the people here doing (although his experience with actual TDD was minimal at best). I think it's the "feel it" approach taken one step further: instead of reading the code or blindly hacking at it, he used tests to drive out the learning process. That person was given a task on a part of the code he was unfamiliar with, so the first thing he did was write some tests for that code (the organization is new to TDD, so most of the code is not yet covered) in order to understand what was going on.

Doing that helped him uncover several strange behaviors, which made him realize that something was not right there (later on, one of them even turned out to be a hidden bug). As an added bonus, the tests he wrote were included in the team's suite of unit tests.

For me, seeing that was a really nice surprise. I recall it took me quite a while to adopt that approach, and I only started using it on a regular basis after several months of actual TDD experience.

So what's the moral of the story?

1. Always watch beginners to see how they adopt new habits. There is much to be learned there.

2. When dealing with new code, try approaching it with tests.
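In the testing literature this style is often called writing "characterization tests": you assert whatever the code actually does, not what a spec says it should do, and let any surprises teach you (or expose bugs). Here is a minimal sketch of the idea in Python; the blog's context is .NET, but the technique is language-agnostic, and `normalize_price` is a made-up stand-in for whatever unfamiliar code you are exploring.

```python
# Hypothetical legacy function we want to understand; in practice this
# would be the unfamiliar code under study, not something we wrote.
def normalize_price(value, discount):
    price = value - value * discount
    return round(price, 2)

# Characterization tests: each one pins down an *observed* behavior
# rather than a written spec, turning exploration into a safety net.
def test_plain_discount():
    assert normalize_price(100.0, 0.25) == 75.0

def test_zero_discount_returns_input():
    assert normalize_price(80.0, 0.0) == 80.0

def test_negative_discount_raises_price():
    # Surprising? Then we just learned something (or found a hidden bug).
    assert normalize_price(100.0, -0.1) == 110.0

# Run the tests directly; normally a test runner would pick these up.
test_plain_discount()
test_zero_discount_returns_input()
test_negative_discount_raises_price()
```

Once the tests pass, they document the code's current behavior and can join the team's regular suite, exactly as happened in the story above.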

Thursday, 28 January 2010

Quick Unit Video

After several tries I finally managed to upload the video from last night's session. Here is the link.

(BTW, the lecture is in Hebrew.)

Wednesday, 27 January 2010

Alt.NET Israel - First tool night

altdotnet

Just came back from the first Alt.NET tool night, and it was fun. All in all it was quite long: we had 6 short sessions (15-20 minutes each) along with a short break, so it ended at about 21:30.

The sessions themselves were quite interesting and covered a wide variety of tools. The good news is that the entire event was filmed by Ohad Israeli, and I'm sure he will upload it in a few days (probably tomorrow).

I did, however, bring my own camcorder and managed to film the session about Quick Unit.

For those of you who are not aware, Quick Unit is a new test authoring tool which speeds up the process of creating unit tests while giving guidelines on how to write them properly. Unlike other test generation tools, the basic idea is not to try to generate many test cases that cover the entire code, but rather to act as a designer for tests, much like a standard UI designer.

Anyway, I've downloaded the tool and will try it out properly in the next couple of weeks to see how it helps me, so be prepared for a follow-up. In the meanwhile I'm uploading the film to the web and will publish a link first thing tomorrow morning.

Saturday, 23 January 2010

The one true way

In a recent post Uncle Bob talks about BDD and compares it to a table-based style of test definition. I don't want to talk about that.

I do, however, want to comment on the following quote:

My issue is not with the tools. My issue is with the idea that BDD is the only true way, and that all tests should be expressed in GWT format forever and ever amen.

Let's revise this statement. In fact, let's just remove the part where BDD is mentioned, and we get:

My issue is not with the tools. My issue is with the idea that _____ is the only true way

How many times have you been in a discussion which went along these lines? How many blogs have you seen advocating a "single and best" solution?

I myself have seen this kind of thing too many times. I've seen it in the context of design, testing, code conventions, SCMs, coding languages, methodologies, you name it.

ENOUGH!

In this short and simple sentence, Uncle Bob captured what I believe too many people tend to forget: every problem, and I mean every problem, has a context, and that context dictates whether a specific tool/solution/approach is good or bad. In our profession there are no simple answers and no straightforward solutions; silver bullets do not exist.

What distinguishes a true professional from the rest is the ability to use many techniques and approaches, picking from them the correct solution for a given problem.

Beware of people who preach "Mine is the only true way."

Wednesday, 13 January 2010

Open House – Estimations&Planning – Take 2

It appears that many have found this topic interesting, so we will be conducting this open house one more time. It will take place on Monday, February 1st, at the Sela offices. This event requires registration, so if you are interested just leave me a comment.

If you wish to review the slides they can be found here.
