A little while ago, I was asked "What makes software good?", which was followed up by "How do you end up with good software?".  I thought that they were excellent questions, and I will give my answers below.  I don't claim to have the answer, just an answer. I'll try to limit esprit de l'escalier / Treppenwitz, partly because people much cleverer than me have written many books about both questions.  So the answers here will be pretty much the answers I gave, to avoid trying to write my own book.

The reason why I thought the questions were good was that they're important, open-ended, and the answers show people's priorities.  As ever, I'm interested in your answers, so please leave them in the comments.

[Image: a copy of The Hitchhiker's Guide to the Galaxy, showing Deep Thought giving the answer to life, the universe and everything.]
I don't claim to have the answer to life, the universe and everything (in this article or elsewhere), but those questions were very good ones, and certainly better than "What do you get when you multiply 6 by 9?".

What makes software good?

I would say that software is good if:

  • It meets user needs,
  • It's commercially viable,
  • It's of known quality,
  • It meets the needs of programmers and operations people who work on it.

The last one is something I've improved since I was first asked – it's a fuller but simpler version of what I originally said.

The different answers aren't independent – for instance, it's much easier for software to be commercially viable if it meets user needs.  But I feel that they each say something important, which is why I list them all separately.  Also, note that this assumes it's commercial software, rather than a hobby project.  Hobby projects are fantastic, but don't have the same constraints, motivators etc. as commercial software does and so deserve their own discussion.

The thing I like about the question is that it doesn't drill down into any specific kind of software, for instance distributed software, embedded medical software, or games.  It's deliberately general, looking for answers that always apply.  (The specific way you apply the answer to a particular bit of software will vary from project to project.)

I think the first one is fairly self-explanatory (although it's very important, and has lots to unpack).  My main comment on the second one is that writing software has paid my mortgage and so on for many years, and I'd like for this to continue. I'll go into the other two briefly below.

Known quality

The third point above is less obvious, and has to do with how my understanding of testing has developed, particularly through reading things by Gerald Weinberg and Michael Bolton.  Testing isn't about proving that there are no bugs (because that's impossible for any real-world piece of software).  It's also not about saying whether software is ready to release or not.  It's about finding useful information, where "useful" is strongly related to the first two points – user needs and commercial viability.

If testing finds a bug in a part of the software that definitely won't be used for a while (for instance, it's for only one customer, who isn't ready for that part yet), and people are confident that it can be fixed in that time, then it might be the right decision to release the software with that bug.  (It might be the right decision to release with that bug in other situations too.)  Testers are not managers – managers are managers.  It's managers who have the responsibility to make the important decisions such as whether software should be released or not.  (They might, for instance, choose to delegate some or all of that decision-making to a bit of code, e.g. that blocks release if some automatically-checked property is too bad, but that bit of software is merely acting as a tool on behalf of management.)

So, testing should find useful information that acts as an input to management decisions.  Not knowing the software's quality is very risky (see below).
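
To make the "delegate to a bit of code" idea concrete, here's a minimal sketch of a release gate.  Everything in it – the metric names, the thresholds, the metrics.json file – is a hypothetical assumption for illustration, not a real pipeline's configuration:

```python
# A minimal sketch of a release gate. The thresholds belong to management;
# this script merely applies them. All names and numbers here are
# hypothetical assumptions, not taken from a real project.
import json
import sys

# Thresholds chosen by management, not by the tool.
MAX_OPEN_SEVERE_BUGS = 0
MIN_CHECK_PASS_RATE = 0.98

def release_allowed(metrics: dict) -> bool:
    """Return True if the automatically-checked properties are acceptable."""
    return (
        metrics["open_severe_bugs"] <= MAX_OPEN_SEVERE_BUGS
        and metrics["check_pass_rate"] >= MIN_CHECK_PASS_RATE
    )

if __name__ == "__main__":
    # Assume an earlier pipeline stage wrote metrics.json, e.g.
    # {"open_severe_bugs": 0, "check_pass_rate": 0.995}
    with open("metrics.json") as f:
        metrics = json.load(f)
    if not release_allowed(metrics):
        print("Release blocked: quality thresholds not met", file=sys.stderr)
        sys.exit(1)  # non-zero exit code stops the pipeline
    print("Release gate passed")
```

The design point is that the code decides nothing by itself: management chose the thresholds, and the script just applies them and reports back via its exit code.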

Meeting development and operations needs

Is the code easy to understand? Is it easy to change safely?  Is there a lot of technical debt?  Is it poorly structured, so that you need to change 17 different things to effect 1 change (see the sketch below)?  Is it easy to collaborate on, or do you, for example, often hit lots of merge conflicts?
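
As a tiny, hypothetical illustration of the "17 different things" problem: if a business rule (here, an assumed VAT rate) is written out wherever it's used, one change to the rule means many edits; give it a single home and it means one edit.

```python
# Hypothetical illustration of the "change many things to effect one change"
# problem, using an assumed VAT rate as the business rule.

# Before: the rate is written out wherever it's used, so a rate change
# means hunting down every copy of 1.20.
def price_with_vat_scattered(net: float) -> float:
    return net * 1.20

def invoice_total_scattered(lines: list[float]) -> float:
    return sum(line * 1.20 for line in lines)

# After: the rate has a single home, so a rate change is one edit.
VAT_RATE = 0.20

def price_with_vat(net: float) -> float:
    return net * (1 + VAT_RATE)

def invoice_total(lines: list[float]) -> float:
    return sum(price_with_vat(line) for line in lines)
```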

Is it easy to know what went wrong, and to know what to do about it?  Is it easy to deploy and roll back?

How do you get good software?

I expect I could produce a bigger answer if I tried, but my answer was roughly this: get good feedback, as quickly as possible, from customers and users, and manage the many kinds of risk involved.

Customers (the people who buy or otherwise choose to use your software) and users (the people who, err… use your software) aren't always the same people, which is why I mention both of them.

There are all kinds of risks, introduced by many different things.  Starting from the beginning, there are risks to do with requirements, such as:

  • A requirement was scrambled after it was received (e.g. "not" was added or removed)
  • The person giving the requirements forgot about special cases (I want X, except when there's an R in the month, in which case I want Y)
  • The person giving the requirements didn't realise that the problem they were trying to solve was actually more fundamental than the one they described, and the fundamental problem needs a different solution from the superficial one.
  • And many more, including risks from other sources...

Feedback helps more if it carries useful information (is "good") and if it arrives as soon as possible after the creation of the thing that ultimately triggered it.  This is why waterfall can be so risky – the feedback most likely to tell you the requirements are wrong comes from the finished software, which can arrive months after the requirements were written.

If a wireframe or some other prototype were created to test out the requirements, the feedback would arrive much more quickly, and would still carry much useful information.  It would have less information than the finished software, but often the extra speed far outweighs the reduced information.  Alternatively, you can commit to requirements only in small chunks (e.g. user stories), so that the finished software arrives much more quickly (because only that small chunk is being worked on, rather than the whole thing).

This also extends into things like automated build and deployment pipelines, so that there's minimal friction in getting to the finished software, and automated checking, e.g. for regressions.
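
As one small, hedged example of what "automated checking for regressions" can look like, here's a pytest-style check.  The function under test, parse_quantity, is invented for illustration – the point is that once a bug is fixed, a check like this keeps it fixed on every build:

```python
# A hypothetical regression check in pytest style. parse_quantity is an
# invented function; the point is that once a bug is fixed, a check like
# this keeps it fixed on every build.
import pytest

def parse_quantity(text: str) -> int:
    """Parse a user-supplied quantity, treating blank input as zero."""
    text = text.strip()
    return int(text) if text else 0

@pytest.mark.parametrize(
    "text, expected",
    [
        ("3", 3),
        ("  7 ", 7),
        ("", 0),  # the old bug: blank input used to raise ValueError
    ],
)
def test_parse_quantity(text: str, expected: int) -> None:
    assert parse_quantity(text) == expected
```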

But there are other risk-management strategies that don't depend on agile / lean practices.  These are things like building a prototype of the integration with a new dependency, to make sure that the interface is as expected and that the dependency does what we expect.
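
Here's a sketch of that kind of integration prototype, assuming a hypothetical HTTP dependency (the URL and response fields are made up).  The goal is a throwaway script that fails loudly if the interface isn't what we expected, before we build on top of it:

```python
# A throwaway prototype to check a new dependency's interface before we
# commit to it. The endpoint and fields are hypothetical assumptions.
import json
import urllib.request

def check_rates_api() -> None:
    # Assume the dependency's docs claim it returns {"base": ..., "rates": {...}}.
    with urllib.request.urlopen("https://api.example.com/v1/rates") as resp:
        assert resp.status == 200, f"unexpected status {resp.status}"
        payload = json.load(resp)

    # Check only the shape we plan to rely on, not every detail.
    assert "base" in payload, "missing 'base' field"
    assert isinstance(payload.get("rates"), dict), "'rates' should be a mapping"

if __name__ == "__main__":
    check_rates_api()
    print("Dependency interface looks as expected")
```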

Summary

I could go on, but I've already allowed myself more words to expand on my original answers than I expected.  I don't think that there's one correct answer to these questions, but it's worth thinking about them to see what you think is important.  As I said, please contribute your thoughts if you feel you can.