Sunday, May 18, 2008

“The required techniques of effective reasoning are pretty formal, but as long as programming is done by people that don’t master them, the software crisis will remain with us and will be considered an incurable disease.  And you know what incurable diseases do: they invite the quacks and charlatans in, who in this case take the form of Software Engineering Gurus.”

EWD1305. Answers to questions from students of Software Engineering, Edsger W. Dijkstra, 2000.

A very insightful, but somewhat harsh observation by Professor Doctor Dijkstra.  Also consider:

“No, I’m afraid that Computing Science has suffered from the popularity of the Internet.  It has attracted an increasing – not to say: overwhelming! – number of students with very little scientific inclination and in research has only strengthened the prevailing (and somewhat vulgar) obsession with speed and capacity.”  Again from EWD1305.

As an aside, Dijkstra (some of you may have heard of Dijkstra’s algorithm) made a number of fundamental contributions to the area of programming languages and received the Turing Award in 1972.  You can read his EWD notes online, and they make for fascinating reading.

Why did I quote Dijkstra?  Well, I tend to agree with his view and, as described in my previous post, I don’t think we, as Software Engineers, know how to perform Software Engineering.  In fact, I don’t even think Software Engineering really exists in our world today – and in some respects, we seem to be moving farther away from it instead of getting closer.  That is to say, our predilection for programming languages blinds us to what our real focus as Software Engineers should be.

Let me be more specific.  When I say Software Engineering, I am picking specifically on that huge black hole called software design – i.e. we don’t know how to “design” software.  We sure know how to program the heck out of it, with our humongous list of programming languages, but what techniques and tools do we have for “designing” the software to be programmed?  How do we model our design?  How do we prove (i.e. verify) our design is correct?  How do we simulate our design without coding it?  Ponder.

Designing software is all about designing “abstractions.”  Software is built on abstractions.  Programming languages are all about implementing abstractions.  But where do those abstractions come from and how do we describe or model (i.e. design) those abstractions?
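To make that gap concrete, here is a minimal sketch in Python (my own illustration, not from any of the sources quoted here).  The abstraction is the pair of properties in the comment; the class is merely one implementation of it, and the language gives us no way to state, model, or verify the properties themselves:

```python
from collections import deque

class BoundedQueue:
    """One *implementation* of the abstraction "bounded FIFO queue".

    The abstraction itself is the pair of properties below, not this code:
      1. at most `capacity` elements are ever held
      2. elements leave in the order they arrived (FIFO)
    The programming language lets us write the class, but offers no way
    to express or check the two properties as a design.
    """

    def __init__(self, capacity: int):
        self.capacity = capacity
        self._items = deque()

    def put(self, item) -> None:
        if len(self._items) >= self.capacity:
            raise OverflowError("queue is full")   # property 1
        self._items.append(item)

    def get(self):
        return self._items.popleft()               # property 2: oldest first
```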

Let’s look at some approaches to designing software “proper.”  Formal methods is an approach to software design.  From Wikipedia, “In computer science and software engineering, formal methods are mathematically-based techniques for the specification, development and verification of software and hardware systems.  The use of formal methods for software and hardware design is motivated by the expectation that, as in other engineering disciplines, performing appropriate mathematical analyses can contribute to the reliability and robustness of a design.  However, the high cost of using formal methods means that they are usually only used in the development of high-integrity systems, where safety or security is important.”

“Hoare logic (also known as Floyd–Hoare logic) is a formal system developed by the British computer scientist C. A. R. Hoare, and subsequently refined by Hoare and other researchers.  It was published in Hoare’s 1969 paper ‘An axiomatic basis for computer programming’.  The purpose of the system is to provide a set of logical rules in order to reason about the correctness of computer programs with the rigor of mathematical logic.”
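As a toy illustration (my own, and only a crude runtime approximation of the logic), a Hoare triple {P} C {Q} reads: if precondition P holds before command C executes, then postcondition Q holds afterwards.  Encoded as Python assertions:

```python
def increment(x: int) -> int:
    # Hoare triple:  {x == n}  x := x + 1  {x == n + 1}
    n = x               # "ghost" variable capturing the initial value
    assert x == n       # precondition P (trivially true here)
    x = x + 1           # command C
    assert x == n + 1   # postcondition Q
    return x
```

The crucial difference: the assertions only check the one execution that actually runs, while Hoare logic proves Q for every execution satisfying P, without running anything at all.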

Another approach is program derivation, “In computer science, program derivation is the derivation of a program from its specification, by mathematical means.”
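For example (my sketch, in the spirit of the derivations Dijkstra championed), integer division can be derived from the specification q·b + r = a ∧ 0 ≤ r < b by taking the first conjunct as the loop invariant and the negation of r < b as the loop guard:

```python
def divmod_derived(a: int, b: int):
    """Derived from the spec:  q * b + r == a  and  0 <= r < b
    (assuming a >= 0 and b > 0)."""
    assert a >= 0 and b > 0
    q, r = 0, a                # invariant established: 0 * b + a == a
    while r >= b:              # guard: negation of the conjunct `r < b`
        q, r = q + 1, r - b    # invariant preserved: (q+1)*b + (r-b) == q*b + r
        assert q * b + r == a
    assert q * b + r == a and 0 <= r < b   # full postcondition
    return q, r
```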

“Model checking is the process of checking whether a given structure is a model of a given logical formula.  The concept is general and applies to all kinds of logics and suitable structures.  A simple model-checking problem is testing whether a given formula in the propositional logic is satisfied by a given structure.”  As an aside, you will note that the Unified Modeling Language (UML) is not on the list of model checkers.  Interestingly enough, a UML diagram cannot be checked (i.e. verified) for correctness, so what good is UML?  OK, I am being a bit facetious, but for the most part I have found such quips to be like Dilbertisms – satirical truths.
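Back to the definition: that “simple model-checking problem” fits in a few lines of Python (a toy sketch of my own).  A structure is a truth assignment, a formula is a predicate over it, checking is evaluation – with exhaustive enumeration thrown in to hunt for counterexamples:

```python
from itertools import product

def models(structure: dict, formula) -> bool:
    """Is the given structure (a truth assignment) a model of the formula?"""
    return formula(structure)

# Formula: (p or q) and not (p and q)   -- exclusive or
xor = lambda s: (s["p"] or s["q"]) and not (s["p"] and s["q"])

print(models({"p": True, "q": False}, xor))   # True:  this structure is a model
print(models({"p": True, "q": True}, xor))    # False: this one is not

# Exhaustive check over all 4 structures: which assignments falsify the formula?
structures = [dict(zip(("p", "q"), v)) for v in product([False, True], repeat=2)]
print([s for s in structures if not models(s, xor)])
# [{'p': False, 'q': False}, {'p': True, 'q': True}]
```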

One approach that is familiar to me is Design by Contract: “Design by Contract (DbC), or Programming by Contract, is an approach to designing computer software.  It prescribes that software designers should define precise verifiable interface specifications for software components based upon the theory of abstract data types and the conceptual metaphor of a business contract.  The metaphor comes from business life, where a ‘client’ and a ‘supplier’ agree on a ‘contract.’”
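The contract metaphor is easy to make executable.  A minimal sketch (my own decorator; Eiffel builds this support into the language itself): the client must satisfy the precondition, and in return the supplier guarantees the postcondition:

```python
def contract(pre, post):
    """Attach a precondition (the client's obligation) and a
    postcondition (the supplier's guarantee) to a function."""
    def wrap(fn):
        def checked(*args):
            assert pre(*args), "client broke the contract (precondition)"
            result = fn(*args)
            assert post(result, *args), "supplier broke the contract (postcondition)"
            return result
        return checked
    return wrap

@contract(pre=lambda x: x >= 0,                      # client: non-negative input
          post=lambda r, x: abs(r * r - x) < 1e-6)   # supplier: r is a square root
def sqrt_newton(x: float) -> float:
    r = x or 1.0
    while abs(r * r - x) >= 1e-9:
        r = (r + x / r) / 2    # Newton's iteration
    return r
```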

However, while a lot of companies talk about Design by Contract, very few, at least in my experience, actually practice it – and rarely at the level required for it to be beneficial.  Further, while it is a “clear metaphor to guide the design process,” the metaphor in and of itself gives us no way to simulate the design of a software system or “prove” (i.e. verify) the correctness of a software system before writing any code.

So how do we design software in a way that meets both criteria: modeling the software and simulating the running of the software to verify its correctness?  After much research, I came across one approach that makes the most sense to me, for reasons I will explain shortly.  First, an introduction to Alloy and the Alloy Analyzer from the Software Design Group at MIT.

“Alloy is a structural modelling language based on first-order logic, for expressing complex structural constraints and behaviour. The Alloy Analyzer is a constraint solver that provides fully automatic simulation and checking.   Our philosophy is to couple lightweight design and specification notations with powerful tools.”
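To give a flavor of what “fully automatic simulation and checking” buys you – this is a Python imitation of the idea, not Alloy itself – state a structural claim, then exhaustively search every structure within a small scope for a counterexample, which is essentially the Alloy Analyzer’s small-scope strategy:

```python
from itertools import product

DOMAIN = range(3)   # "small scope": every binary relation on 3 atoms

def relations(domain):
    """Generate all binary relations on the domain (2^9 = 512 of them)."""
    pairs = [(a, b) for a in domain for b in domain]
    for bits in product([False, True], repeat=len(pairs)):
        yield {p for p, keep in zip(pairs, bits) if keep}

def transitive(r):
    return all((a, c) in r for (a, b) in r for (b2, c) in r if b == b2)

def irreflexive(r):
    return all(a != b for (a, b) in r)

def asymmetric(r):
    return all((b, a) not in r for (a, b) in r)

# Claim to check: every transitive, irreflexive relation is asymmetric.
counterexample = next(
    (r for r in relations(DOMAIN)
     if transitive(r) and irreflexive(r) and not asymmetric(r)),
    None)
print(counterexample)   # None: no counterexample exists within this scope
```

The Alloy Analyzer does the same kind of search symbolically, compiling the constraints down to a SAT solver rather than enumerating structures one by one, which is what lets it handle scopes far beyond a toy like this.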

There is a book on Alloy and the Alloy Analyzer called “Software Abstractions,” by Daniel Jackson.  You can also download a few chapters of the book.  In the preface, I was particularly impressed with these two paragraphs:

“The experience of exploring a software model with an automatic analyzer is at once thrilling and humiliating. Most modellers have had the benefit of review by colleagues; it’s a sure way to find flaws and catch omissions. Few modellers, however, have had the experience of subjecting their models to continual, automatic review. Building a model incrementally with an analyzer, simulating and checking as you go along, is a very different experience from using pencil and paper alone. The first reaction tends to be amazement: modeling is much more fun when you get instant, visual feedback. When you simulate a partial model, you see examples immediately that suggest new constraints to be added.

“Then the sense of humiliation sets in, as you discover that there’s almost nothing you can do right. What you write down doesn’t mean exactly what you think it means. And when it does, it doesn’t have the consequences you expected. Automatic analysis tools are far more ruthless than human reviewers. I now cringe at the thought of all the models I wrote (and even published) that were never analyzed, as I know how error-ridden they must be. Slowly but surely the tool teaches you to make fewer and fewer errors. Your sense of confidence in your modeling ability (and in your models!) grows.”

Let me step back in time for a moment to illustrate why these two paragraphs are of particular interest to me.  Back in the ’80s, in the electronics field, I went through this type of “industrialization” first hand with electronic circuit design.  At first we drafted our circuit designs on real blueprint paper, then built some sketchy prototypes (anyone remember breadboards?), designed our tests, implemented test harnesses (i.e. scopes, analyzers, generators, etc.) and tested the design “after” it was implemented.  Note that it could take a number of manual iterations or cycles to get the design right as well.

I would say this pretty much sums up the approach we use to design software today: “sketch-up” some designs, start coding right away, spend an inordinate amount of time “refactoring” and developing test after test after test, and then, in some cases, discover in the end that all of the tests simply validate and verify that the software design is so flawed relative to the original specification (read: sketch-up) that we have to start over again.  Whether the process is called TDD, Agile, iterative, waterfall, whatever, in the end it really does not matter: the process itself is flawed because it completely misunderstands the role of software design, and therefore the result can only be the sorry state of software in our industry.  But I digress.

Then the electronics design world was revolutionized when SPICE (Simulation Program with Integrated Circuit Emphasis) arrived, making it possible to simulate a circuit design before ever building it.