Tuesday, 23 June 2009

Tweaking Outlook to Work with GMail

I have not yet managed to give up on email clients. I know the "new generation" switched long ago and manages mail through a web client, but I still like the user experience that Outlook gives me (along with its calendar).

The trouble is that working on multiple machines requires some tweaking to make it comfortable.

Downloading emails to multiple machines

The trouble starts with the POP3 protocol. In general, that protocol is aimed at single-machine use: once messages have been downloaded to a given client, even if they are left on the server, they will not be downloaded again by another machine. One possibility is setting up GMail to use IMAP, but I personally don't like that kind of setup.

Luckily there's a way to work around this:

Using POP on multiple clients or mobile devices

The trick is to change the account user name to start with "recent:" and to leave the messages on the server. In this mode the client downloads all messages received in the last 30 days, regardless of whether they have already been downloaded on a different machine.
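
For illustration, the account setup then looks roughly like this (pop.gmail.com and smtp.gmail.com are GMail's standard server settings; "yourname" is of course a placeholder):

    Incoming mail server (POP3):  pop.gmail.com, port 995, SSL
    Outgoing mail server (SMTP):  smtp.gmail.com, port 587, TLS
    User name:                    recent:yourname@gmail.com
    "Leave a copy of messages on the server": checked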

Warning: when first setting this up, Outlook will download AGAIN all messages from the last 30 days.

However, there is an annoying side effect: after I set this up, Outlook started downloading all outgoing email to the inbox as well.

Setting a rule to delete Sent mail

A simple rule should have solved this side effect; however, Outlook rules don't really support complex logic. My first attempt at a rule to help me was:

"delete all incoming mails sent by me and not addressed to me" (yes I have the annoying habit of sending mails to myself of things I like to remember). but outlook does not support the "not" operation.

After several attempts at combining rules I did manage to define the rule I needed:

from <me> move to <trash>, except where my name is in the To box or my name is in the Cc box
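
In boolean terms, the "except" clause is how Outlook expresses the missing "not". A minimal sketch of the equivalent logic (plain Python, with hypothetical message fields of my own invention) looks like this:

    ME = "me@example.com"  # placeholder for my own address

    def should_delete(message):
        """The rule above: from <me>, move to <trash>,
        except where my name is in the To box or the Cc box."""
        is_exception = ME in message["to"] or ME in message["cc"]
        return message["from"] == ME and not is_exception

So "A except B" ends up meaning exactly "A and not B", which is the operation the rule editor wouldn't let me write directly.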

So there you have it. In order to use GMail POP3 access on multiple machines one needs to:

  1. Prefix the account user name with "recent:" (leaving messages on the server)
  2. Set up a rule to delete the copies of sent mail that show up in the inbox

Sunday, 21 June 2009

Versioning Scheme

There are various schemes for handling version numbers, but so far I hadn't really encountered anything meaningful in any of them. Several days ago I came across a scheme which I really liked.

The scheme is a variant of Change Significance and is specifically aimed at API libraries; while I'm sure it can be extended to regular products, its basic semantics are aimed at APIs and libraries.

The semantics go like this (using a four-number sequence):

  • a change in the first number means the API has changed in a breaking manner
  • a change in the second number means there were additions to the API (existing calls are unaffected)
  • a change in the third number means the internal behavior has changed without affecting the API
  • the last number is a sequential build number

Take NUnit, for example, and suppose it adopted this scheme. When upgrading from, let's say, 2.4.0 to 3.0.0, I would expect some of my existing tests to need updating to reflect the changes. When upgrading from 2.4.0 to 2.5.0, I would expect to see some new abilities reflected in new APIs (but my existing tests should still stay valid). And last, when updating from 2.5.0 to 2.5.1, there would only be some internal changes in behavior that shouldn't affect me at all.
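
To make those semantics concrete, here is a small sketch (my own illustration in Python, not part of the original scheme) that classifies what an upgrade means for a client of the library:

    def upgrade_impact(old, new):
        """Classify an upgrade under the scheme; versions are
        (breaking, additions, internal, build) tuples."""
        if new[0] != old[0]:
            return "breaking - existing calls may need updating"
        if new[1] != old[1]:
            return "additions - new APIs, existing calls still valid"
        if new[2] != old[2]:
            return "internal - behavior changes, API untouched"
        return "build - no visible change"

    print(upgrade_impact((2, 4, 0, 0), (3, 0, 0, 0)))  # breaking
    print(upgrade_impact((2, 4, 0, 0), (2, 5, 0, 0)))  # additions
    print(upgrade_impact((2, 5, 0, 0), (2, 5, 1, 0)))  # internal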

I really liked this one when I heard it. What do you think?

Tuesday, 16 June 2009

TDD Problems for Practice

Every time I watch or hear a TDD lecture, I see the same examples. While it's hard to go very deep in a 30-60 minute lecture, the commonly used examples don't really reflect a real-life scenario. How many of us, in our daily job, are coding a calculator?

When I'm asked (which happens 90% of the time) if I have a more "real life" example, I point to the TDD Problem Site:

The aim of this site is to contain a growing collection of software problems well-suited for the TDD-beginner and apprentice to learn Test-Driven Development through problem solving.

The really cool thing about those problems is that they follow these rules:

  • they are real-world, not just toys
  • they are targeted towards learning TDD (that is: they are small and easy enough to work out in say half a day)
  • they don't involve any of the harder-to-test application development areas: GUI, database or file I/O. (since those topics are considered too hard for the TDD-beginner)
  • they have been solved by a TDD-practitioner previously, proving their appropriateness for this site

If you want to practice TDD and feel that it's too hard to do on your production system, these examples are a great place to start practicing real-life TDD.

Sunday, 7 June 2009

Myth Busted - NOT

In a recent post, Scott Ambler starts by claiming:

Agile Myth: High Quality Costs Less than Low Quality - Busted! (at scale)

Later on, when reading the actual content, I learnt that he refers to very specific cases:

For example, in situations where the regulatory compliance scaling factor is applicable, particularly regulations around protecting human life (i.e. the FDA's CFR 21 Part 11), you find that some of the URPS requirements require a greater investment in quality which can increase overall development cost and time.

and

This is particularly true when you need to start meeting 4-nines requirements (i.e. the system needs to be available 99.99% of the time) let alone 5-nines requirements or more. The cost of thorough testing and inspection can rise substantially in these sorts of situations.

In my opinion he went a little off the charts with his claim.

First, what exactly is the "myth", so to speak? Is it a simple "high quality costs less"?

Well, actually it's a little more subtle than that. What the agile community has found again and again (as Scott mentions) is that it costs less to work in a high-quality mode when you need to reach and sustain acceptable quality. After all, quality is infectious. In general it costs less to produce crappy systems, but mostly those just fail when quality needs catch up.

But back to the examples.

I don't have experience with life-critical systems. However, is there a possibility that what actually costs is the regulations themselves and not the underlying quality? Is there a way to reach the necessary life-safety quality without following those costly regulations (at lower cost)? I don't know. What I do know is that the FDA regulations are costly to implement and originate in a time before the Agile paradigm shift.

High-availability (HA) systems, on the other hand, I do understand. In fact, I was in charge of developing an HA solution for a big billing company. And here Scott's argument falls short.

Reaching 4 and especially 5 nines has nothing to do with the quality of the developed system. In order to get to that level of availability you must have an integrated solution, and there lies the cost of 5-nines systems.
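
Just to put numbers on those nines (simple back-of-the-envelope arithmetic), here is what each level actually allows in downtime per year:

    # Allowed downtime per year for a given availability level.
    MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

    for nines, availability in [(3, 0.999), (4, 0.9999), (5, 0.99999)]:
        downtime = (1 - availability) * MINUTES_PER_YEAR
        print(f"{nines} nines: ~{downtime:.1f} minutes of downtime per year")

    # 3 nines: ~525.6 minutes (~8.8 hours)
    # 4 nines: ~52.6 minutes
    # 5 nines: ~5.3 minutes

No amount of testing keeps a crashed server inside 5.3 minutes of downtime a year; only redundancy and failover, i.e. an integrated solution, does.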

So what myth has been busted?

Yes, there are cases in which specific aspects of quality will drive costs up. But taking very specific examples and generalizing them to "Busted (at scale)" is an obvious mistake in my book.

Wednesday, 3 June 2009

Evolving Design - It's a must!

A couple of days ago I read Gil's post "Good Design Is Not Objective", in which he writes:

So are there "good" or "bad" designs? Every person has an opinion. Opinions change over time. Problems and technologies change over time, and a design can help or interfere with the project, but you'll know that in hindsight.

Yesterday, at one of my clients, a simple question turned into a related discussion. One of the software architects asked me whether I had ever seen (and what I think of) a system which uses a global variable repository: one that every part of the system can use at will to store data, and which serves as a way of communicating between the various parts of the system. Apparently that kind of approach is used in their system, and he wanted to know what I would say.
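
For readers who haven't met this pattern, here is a minimal sketch of such a repository (my own Python illustration; their actual system is far larger). The hidden cost is that any part of the system can read or overwrite any key at any time:

    # A global, untyped key/value store shared by the whole system.
    GLOBALS = {}

    # Somewhere in one module...
    def process_order(order_id):
        GLOBALS["last_order"] = order_id  # implicit message to whoever reads it

    # ...and in a completely unrelated module.
    def build_report():
        # Depends on process_order having run first; nothing in the
        # function signatures documents that dependency.
        return f"report for order {GLOBALS.get('last_order')}"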

After thinking a little, I said that the only similar system that came to mind was the Windows registry, and that I had never encountered such a mechanism in a project I worked on (at least not on the scale he was talking about). I also said that in the past, each time I used a global variable in a similar manner, I regretted it. And last, that it sounds like a very "old fashioned" approach to design, something that rather contradicts object-oriented design.

He told me that the part in question is very old and was written some 15 years ago, when the company was at its startup stage. It appears that the entire system evolved on top of it, which made it very hard to replace; in fact, the cost was so high that each time someone considered replacing it they decided to leave it as is.

Now I think this is a very interesting case that demonstrates how our continual search for the Best Design is just stupid.

There is no such thing as the Best Design!

Like most things, design carries a context. In this case the mechanism they have was (as the architect admitted) the quickest and cheapest way of reaching a product (one which people would buy). At that time (15 years ago) the technology (they were writing in C) and the software engineering approaches of the day were in line with such a mechanism. It was a reasonable design at that point in time.

Is it still so today? Absolutely not!

The company is no longer a startup, and what was good then is really bad today. The needs have changed over time, the technology has changed over time, and the product has changed over time.

As Gil put it:

Time Changes everything

The best design today probably won't be the best design tomorrow. A developer who accepts that fact can focus on producing a "good enough" design and evolving it as needed over time.

In my opinion, that was the actual mistake made. It's not that the design was wrong; it's that they assumed it wouldn't need to change. That assumption made the cost of actually changing it high, resulting in the unsuitable design they have today.

 