The QPC paradox

I think that almost everybody in the software industry would agree that most of the software that is produced today does not have the quality that it should. Let’s assume that this is a fact: the quality of software, in general, is deficient.

How can we improve the quality of the software that we produce? Many experts have proposed a myriad of tools, techniques, methodologies, approaches and paradigms that promised to deliver better software, that is, software of higher quality. Supposedly, this was to be achieved because the proposed tools, techniques, etc. were more powerful than the current ones. The argument goes like this: if the results that you are obtaining are of poor quality, use more powerful tools. Better tools will allow you to create the same things, but of higher quality. And it makes sense, doesn’t it?

However, the average quality of software has not improved for decades. I would love to insert a quotation here to back this up, but I can’t remember where I saw it. I promise. Somebody (presumably an expert) recently said that the average quality of software has remained constant for decades. How come, if we have more powerful tools than ever?

Because we use the extra power to raise the complexity of the software rather than to improve its quality.

The promise says that better tools should allow us to create the same things, but of higher quality. But we are not attempting to create the same things; we are attempting to create more complex things. We are, in fact, channeling the extra power that modern tools, techniques and methodologies give us towards raising complexity rather than quality. But why?

Because we accept poor quality.

We are used to software that works most of the time but fails occasionally. Word or Photoshop usually work fine, but every so often they hang or ruin a few hours of your work, and Microsoft or Adobe don’t get sued for that. Perhaps they don’t even receive a complaint. We are more demanding with life-critical systems, but we also admit occasional failure. And we definitely admit clumsy user interfaces, incompatible file formats, unresponsive applications, excessive cost. Everybody in the business knows that software is the most complex thing ever built by humans, and that it is impossible to guarantee that a non-trivial piece of software is free of defects. Therefore, we are not too surprised when software fails.

So, we are used to our current level of quality. We would like to raise it, of course, and most of us try hard to do it. Whenever we get hold of a tool, technique or methodology that gives us extra power, we immediately find ourselves facing a decision: shall I use this extra power to improve quality, or shall I use it to increase complexity, to dare further, to venture where nobody has ventured yet? More complex, newer, more innovative products sell, and we are systematically tempted to route power into the complexity black hole. As long as quality does not decrease, we find this acceptable. We could say that, in order to keep a constant level of quality, we are burning more and more power day after day. Power that is turned into complexity.

I like complexity. I admire highly complex systems such as modern operating systems, database engines or the web. They work fine most of the time and I could not imagine how I could live without them. I can’t wait to install Windows Vista on my new laptop as soon as it arrives, and enjoy the even more complex stuff that I will find with it. Our changing society needs software of ever increasing complexity, like it or not. So we keep burning power.

But, remember, we are paying the price of not improving quality. I call this the quality-power-complexity paradox: we want to increase quality, so we come up with more powerful tools, which we then use to increase complexity without changing the quality. So we keep wanting to increase quality.

This happens in other aspects of life too. When I was a kid my mum used to give me 25 pesetas every week as pocket money. I could buy some stuff with that money, but usually it didn’t last until the end of the week. I needed more money. So one day I went to mum and asked if I could have an extra “duro” per week and make it 30 pesetas. Mum agreed, and I immediately thought “fine, if 25 pesetas are good for five days, 30 pesetas should be OK for the whole week”. Naive me. The first week I spent my 30 pesetas in five days. The same happened the next week. And so on. I was routing the extra money to a higher rate of spending, rather than to a longer period of sustained wealth. My wealth period stayed constant, so the sensation of needing more money remained the same. I felt just like the Red Queen!

Could we just split power 50/50, using half of it to improve complexity and the other half to raise the quality bar? Is this a feasible idea?
