Thursday, November 01, 2007

Building Bridges without Engineering

One of the themes of my "Software Engineering" & Polyglot Programming keynote is the comparison between traditional engineering and "software" engineering. The genesis for this part of the talk came from the essay What is Software Design? by Jack Reeves, published in the C++ Journal in 1992 (reprinted here), a fissile meme that Glenn Vanderburg tossed into the middle of a newsgroup conversation about that very topic. Even though the essay is quite old, it is every bit as pertinent today as when it was written. The update that Glenn and I have given this topic is the addition of testing, which gives us professional tools for designing software.

We don't have the kinds of mathematical approaches that other engineering disciplines do. For example, we can't perform structural analysis on a class hierarchy to see how resilient to change it will be in a year. It could be that those types of approaches will simply never exist for software: much of the ability of "regular" engineers to do analysis comes from economies of scale. The Golden Gate Bridge contains over a million rivets, and you can bet that the civil engineers who designed it know the structural characteristics of those rivets. But because those million parts are identical, you can ultimately treat them as a single derived value. If you tried to build a bridge the way we build software, with a million unique parts, any kind of analysis would take too long because you can't take advantage of that scale.

Or it may just be that software will always resist traditional engineering kinds of analysis. We'll know in a few thousand years, when we've been building software as long as we've been building bridges. We're currently at the stage bridge builders were at when they built a bridge, ran a heavy cart across it, and it collapsed: "Well, that wasn't a very good bridge. Let's try again." There was a massive push toward component-based development a few years ago, but it has largely fallen by the wayside for everything except simple cases like user interface components. The IBM San Francisco project tried to create business components and found (to the non-surprise of software developers everywhere) that you can't build generic business components because there are far too many nuances.

Manufacturing is the one advantage we have over traditional engineers: it is easy and cheap to manufacture software parts. So why not take advantage of that ability, manufacture our software parts both as atomic, small pieces and as larger interactive pieces, and then test them to make sure they do what we think they do? That's what unit, functional, integration, and user acceptance testing are. Testing is the engineering rigor of software development.
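To make the "manufacture a small part, then test it" idea concrete, here is a minimal sketch using Python's built-in unittest module; the Invoice class and its tax rule are invented for the example, not something from the talk.

    import unittest

    class Invoice:
        """A tiny, hypothetical software 'part' to manufacture and test."""
        def __init__(self, subtotal, tax_rate):
            self.subtotal = subtotal
            self.tax_rate = tax_rate

        def total(self):
            # Round to cents so the behavior under test is unambiguous.
            return round(self.subtotal * (1 + self.tax_rate), 2)

    class InvoiceTest(unittest.TestCase):
        # A unit test checks one atomic part in isolation; functional and
        # integration tests would exercise the larger interactive pieces.
        def test_total_includes_tax(self):
            self.assertEqual(Invoice(100.00, 0.08).total(), 108.00)

        def test_zero_subtotal_gives_zero_total(self):
            self.assertEqual(Invoice(0.00, 0.08).total(), 0.00)

    if __name__ == "__main__":
        unittest.main()

Because "manufacturing" a part like this costs nearly nothing, its tests can be rerun on every build, which is where the rigor comes from.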

Here's the interesting part. If you told an engineer that you needed a large bridge, and that you needed it so quickly that he wouldn't have time to apply any of the best practices of bridge building (e.g., structural analysis), he would refuse. In fact, he would be liable for the bad things that would happen if he were foolish enough to proceed. We have none of that liability in the software world.

Responsible software developers test, just as responsible engineers use the tools of their trade to create robust, well designed artifacts. But we still have too much stuff that is untestable, along with pressure to write code that isn't tested because testing takes time. One of my litmus tests for deciding how to spend my time looking at new things (frameworks, languages, user interface approaches) is the question "is it testable?" If the answer is no (or even "not yet"), then I know that I needn't bother looking at it. It is professionally irresponsible to write code without tests, so I won't do it.
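As an illustration of what that litmus test looks like in code, here is a hedged sketch in Python; the checkout and payment-gateway classes are hypothetical, invented only to contrast a hard-wired dependency with one a test can substitute.

    # Hard to test: the live collaborator is created inside the class, so
    # any test of the checkout logic would also hit the real service.
    class LivePaymentGateway:
        def charge(self, amount):
            raise RuntimeError("would call a real payment service")

    class UntestableCheckout:
        def pay(self, amount):
            return LivePaymentGateway().charge(amount)

    # Testable: the collaborator is passed in, so a test can substitute a
    # fake and verify the checkout logic in isolation.
    class Checkout:
        def __init__(self, gateway):
            self.gateway = gateway

        def pay(self, amount):
            if amount <= 0:
                raise ValueError("amount must be positive")
            return self.gateway.charge(amount)

    class FakeGateway:
        def __init__(self):
            self.charged = []

        def charge(self, amount):
            self.charged.append(amount)
            return "ok"

    def test_checkout_charges_the_gateway():
        gateway = FakeGateway()
        assert Checkout(gateway).pay(25) == "ok"
        assert gateway.charged == [25]

    if __name__ == "__main__":
        test_checkout_charges_the_gateway()
        print("checkout test passed")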

6 comments:

Sammy Larbi said...

Neal, I understand and agree with your point, but there's one thing that bothers me (I'm just being nit-picky, I suppose).

When you say "We'll know in a few thousand years, when we've been building software as long as we've been building bridges," you are completely ignoring how far humanity is on the technological curve (which is basically increasing exponentially) compared to where we were when we started building bridges.

I don't think it should take us quite as long to mature as bridge building did, taking into account all the advantages we have over bridge builders thousands of years ago, or even just 50 years ago. =) I could be wrong, of course.

Dean Wampler said...

I like your point about millions of rivets being the same vs. the uniqueness of software "pieces". Most people, even most developers, really don't know just how complex software is, especially from the HW up. How many points of variation does a piece of software have?

IntenseFI said...

Could you elaborate about what you mean by "manufacturing" as pertaining to software? I always think of the manufacturing step of software being the compilation of source code (and you could include copying the executable). That is indeed cheap--the cost is near zero. However, you seem to mean something else, and I'm not sure that something else is quite so cheap.

phil varner said...

I took a course in grad school that was basically, "How do we make software into a true engineering discipline?". One of the books was Henry Petroski's "Design Paradigms: Case Histories of Error and Judgment in Engineering", from which we drew many parallels from early engineering history to current software reality, the bridge example being one of the ones that stuck most firmly in my head.

The main difference with software is that it is different every time by definition. If it weren't, you could just buy it. While each bridge is slightly different, they're mostly variations on proven designs. Most bridges also don't have many features other than "doesn't fall down". When novelly-designed bridges (or rockets) are built, they tend to have failure rates similar to software's.

Also, it _is_ possible to build software with the same rigor with which you build a bridge; it just takes a lot longer. Modeling languages like PVS and Z and "safe" languages like SPARK/Ada go a long way. This is how software with the same safety criticality as a bridge should be built, and sometimes is. But for most applications, the market prefers "feature rich and works mostly" to "feature poor and formally proven correct".

So, I think the answer to "can we build software like we build bridges" is Yes, but the answer to "do we want to" is No.

Neal Ford said...

As the Reeves essay says, manufacturing is indeed just compilation and deployment. And, yes, it is cheap indeed.

Peter M said...

I don't believe all software is different. I believe all software is the same. It just depends on the level of abstraction (talking about database-driven applications, anyway). Jackson's book on Problem Frames lays the theoretical foundation for this view.

Quote: "Testing is the engineering rigor of software development." This is a great quote. The biggest problem with software projects is the requirements. They were so hard that the Agile community came up with the dodgy approach of making them up as you go. That won't work. I'm on a big Agile project now and the requirements are all over the place. Is the resulting code fit for purpose? Who can say. The only way to intercept a runaway train is through user acceptance testing. It's like a retrofit that bypasses requirements. It's an emergency procedure. It's not what I'd choose to do, but if the code is half good (by some miracle and skill) then the user tests are really just testable requirements. Makes sense. So, 'yes': if you can't test it, don't do it.

P. Merrick www.masterstoryteller.co.uk