Friday, 28 November 2008
In our software development business, we are always asked for estimates on:
- How long is the project going to take?
- How much will it cost?
There is a simple-to-use online estimation tool at: http://www.cms4site.ru/utility.php?utility=cocomoii
It takes the lines of code and hourly rate as input:
Clicking the go button immediately gives you an effort and cost estimate:
You can fiddle with team skills and project complexity and see the result.
I know some people and organizations are averse to counting lines of code, but after 17 years in this business, and having tried several different estimation methods and models, I have to admit that using COCOMO II as an estimation model has proven in practice, at least for me, to be the most accurate way of estimating how long a project will take and how much it will cost.
You can read the extensive COCOMO II estimation model manual at:
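For the curious, the COCOMO II.2000 Post-Architecture equations behind tools like this one can be sketched in a few lines of Python. This is a simplified sketch: the five scale factors and the effort-multiplier cost drivers are taken as plain numbers rather than looked up from the model's rating tables, and the hourly rate parameter is my own addition for the cost figure.

```python
def cocomo2_estimate(ksloc, scale_factors, effort_multipliers, rate_per_hour=100.0):
    """Rough COCOMO II Post-Architecture estimate.

    ksloc:              size in thousands of source lines of code
    scale_factors:      the five SF ratings (e.g. nominal values sum to ~19)
    effort_multipliers: EM cost-driver ratings (1.0 = nominal)
    """
    A, B = 2.94, 0.91          # COCOMO II.2000 calibration constants
    C, D = 3.67, 0.28
    E = B + 0.01 * sum(scale_factors)
    person_months = A * (ksloc ** E)
    for em in effort_multipliers:
        person_months *= em
    # Schedule equation: calendar months to deliver
    months = C * person_months ** (D + 0.2 * (E - B))
    # COCOMO defines a person-month as 152 working hours
    cost = person_months * 152 * rate_per_hour
    return person_months, months, cost
```

Feeding it 10 KSLOC with all-nominal ratings gives roughly the same ballpark numbers the online tool produces, which is a good sanity check on any estimate you get back.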
Wednesday, 29 October 2008
I have been coding web applications in Visual Studio since Visual InterDev was introduced in 1997. Over that time, I have seen a wild array of error messages, but yesterday, while debugging an ecommerce web application using Commerce Server, I got this interesting message that I had never seen before:
Object is in a zombie state? I wonder what that really means?
Clicking Yes produced this dialog box:
Huh? How can an object be a “zombie” and how can debugging be stopped, but not yet complete? I know it is Halloween, but... oooooo pretty scary, huh kids!
Sunday, 19 October 2008
Get Off My Cloud
Dare Obasanjo wrote about Cloud Computing and Vendor Lock In, commenting on Tim Bray’s Get In the Cloud, in which Tim described a couple of issues. One of them, called a tech issue, was: “The small problem is that we haven’t quite figured out the architectural sweet spot for cloud platforms.” I would say we have not figured out the architectural sweet spot for “any” platform, cloud or otherwise.
It seems that every API we program to, whether cloud-based or not, is still a custom, one-off API. It has been that way since the dawn of software programming and it continues that way, even now that we are in the clouds.
What am I talking about? I know people cringe at analogies to software development, but this one contains the single point I am making. Do you know what an 8-pin DIP is? DIP stands for dual in-line package, and an 8-pin DIP is one “standard” package (or interface) for integrated circuits (ICs). The keyword being “standard.” There are millions of different types of 8-pin DIP ICs out in our world today, used in virtually anything electronic. Much like our accompanying software, however, there is one crucial distinguishing difference between the two. Can you guess what it is?
Those millions of IC types can fit into “one” package or, put another way, be exposed through “one” interface, and that is what the 8-pin DIP is: an interface. Through that one interface type or package, I can access millions of features and functions. Further, by using multiples of those packages/interfaces, I can make almost anything I want, from a motherboard, to a robot servo controller, to circuitry that makes my Blackberry a Crackberry (oops, that’s just me, the carbon unit), to circuitry that guides a rocket to the moon.
How come we have (still) not figured this out in our software world, where we continue to hand-craft one-off interfaces that are seemingly tied to the implementation, even though we don’t think they are (i.e. the vendor lock-in described in Tim’s article)? Brad Cox seemed to have that figured out in his concept of a Software IC years ago, and further in his book on Superdistribution. A man before his time, I would say.
What’s my point? My point is that there is going to be an inevitable conclusion at some point in time where “how” you interface with any given piece of software functionality is going to be more important than the functionality itself. Know what I mean?
Imagine you woke up this morning and you could access all software libraries through an operations based API like, Query, Create, Update, Delete, Copy and Move. That was it. That was your “complete” API and all software libraries, components, features and functions exposed that API where the general pattern is create a request comprising one or more operations on specified items. Submit the request to the API service for processing. Interpret the result returned by the service. That’s it.
Remember I am just talking about the interface exposed by the software library and not its internal representation. In other words, a standard API used by everyone. How would that change the way we would consume libraries (or components or services or features or functions) to develop software today?
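As a thought experiment only, here is a minimal Python sketch of what such a uniform, operations-based surface might look like. Every name here – Operation, Request, LibraryService – is hypothetical, invented for illustration; the point is that the consumer only ever learns one interface, regardless of what sits behind it.

```python
from dataclasses import dataclass, field

@dataclass
class Operation:
    verb: str                 # one of: Query, Create, Update, Delete, Copy, Move
    item: str                 # identifier of the item the operation targets
    payload: dict = field(default_factory=dict)

@dataclass
class Request:
    operations: list          # one or more Operations, processed in order

class LibraryService:
    """Any library, component, or service exposes exactly this one surface."""
    def __init__(self):
        self._store = {}      # stand-in for whatever the implementation really does

    def submit(self, request):
        """Process a request and return one result per operation."""
        results = []
        for op in request.operations:
            if op.verb == "Create":
                self._store[op.item] = op.payload
                results.append(("ok", op.item))
            elif op.verb == "Query":
                results.append(("ok", self._store.get(op.item)))
            elif op.verb == "Delete":
                self._store.pop(op.item, None)
                results.append(("ok", op.item))
            else:
                results.append(("unsupported", op.verb))
        return results
```

The pattern is exactly the one described above: build a request of operations, submit it to the service, interpret the result. Swap the implementation and the consumer never notices – that is the 8-pin DIP property.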
Don’t get me wrong, I am not saying we are not heading in this direction; we are. There are several examples of this today, but it is not as ubiquitous, universal, or all-encompassing as it could be. As a .NET developer using the .NET framework class library of some 30,000 types, I can tell you it is not that way today. In fact, it does not matter what framework class library I use – to be clear, I am not picking on Microsoft. I am sure that at the PDC, with some 38 sessions on cloud services and the announcement of .NET 4.0, we will see some interesting developments; I am just hoping they are in the direction of a standardized interface for cloud services.
Maybe it is just wishful thinking that I will see in my lifetime a standard interface or package, like the 8-pin DIP, for software libraries, where through one “standard” interface I can access thousands of library functions. Then again, software industrialization is occurring at a dizzying pace and I can’t help but feel that it is just around the corner. “History in the making” is the phrase, I believe, and I hope to be a part of it.
Sunday, 21 September 2008
Global System Builder Promo
- don't forget to click on the bottom right of the player
Sunday, 03 August 2008
Recently, I have been focusing on updating my web-based DLR IDE which is one of five components that make up a larger project called Global System Builder.
In a previous post, I mentioned that I would like to develop a DLR IDE in Silverlight, but I am still having a hard time figuring out Silverlight, especially trying to debug web-based Silverlight applications.
· Live syntax highlighting for several languages, including the ones I am interested in with the DLR: Python, Ruby, and Managed JScript
· Multiple-document support with tabs
· AJAX load and save functions
· Line numbering
· Search and replace (with regexp)
· Unlimited undo and redo
· Auto-indenting – great Python support!
· Font resizing
· Toolbar customization – add your own commands via plug-ins
· Full screen capability
· Create new syntax or language files
· Multi-browser support
· Multiple instances on the web page (if required)
Really quite amazing, and it has been working excellently for me. Great job Christophe!
I can load and save files to the disk in a project hierarchy. I can create new files and then save to disk. If you open a file and start editing it, you will notice that an asterisk is added to the filename tab to tell you that the file has changed.
It looks like there are two items left for me to implement, each one with their own set of difficulties:
I prefer to GUE (Go Ugly Early) to try and mitigate risks, so it is going to be Adventures in Debugging. But before I do, Intellisense will be tricky as well, since Intellisense has quite an array of features. My goal is to implement only “List Members” and “Parameter Info.”
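For what it’s worth, since a DLR console hosts live Python objects, a first cut at those two Intellisense features can lean on plain introspection. This is only a sketch of the idea, not how Visual Studio or my IDE actually implements it:

```python
import inspect

def list_members(obj, prefix=""):
    """'List Members': public names on obj that start with what the user typed."""
    return sorted(n for n in dir(obj)
                  if n.startswith(prefix) and not n.startswith("_"))

def parameter_info(func):
    """'Parameter Info': a printable signature string for the tooltip."""
    return func.__name__ + str(inspect.signature(func))
```

Completing `"".up` would offer `upper`, and hovering a user-defined function would show its parameters and defaults. The hard part in a real IDE is doing this on code that has not been executed yet, which is exactly why dynamic-language tooling is tricky.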
Back to debugging and, without getting too philosophical, I can’t imagine any “real” IDE that does not have debugging support, especially if you are working on any size of code base. I view it the same as driving at night with no headlights – you are bound to hit something hard sooner rather than later. So what choices do I have for debugging with the DLR and my custom IDE? As far as I can tell, there appears to be one choice, and that is MDbg.
You can download MDbg and compile it. You will need to add this trick (see answer at bottom) to get the help to work on the command line.
Mike Stall is the MDbg guru and he has a great MDbg link fest to get you started. I also really liked Jan Stranik’s Introduction to the Managed CLR Debugger as it shows how you can easily write an extension using the debugger API. You can also download the videos.
So, how to start? I started off by running MDbg and then issuing the command “load gui” to load a Windows GUI extension that makes it easier to use than the command line. This extension provides an excellent example of how you can get a debug window to step through code, show locals, etc. Here is the code that I want to debug in my editor:
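(The original screenshot is not reproduced here; as a hypothetical stand-in, picture a script this trivial:)

```python
# test.py - a trivial script of the sort being stepped through
# (hypothetical stand-in; the actual code was shown in a screenshot)
def add(a, b):
    return a + b

print(add(1, 2))
```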
Dead simple, but a place to start. On the web server, I have a C# Windows Forms application that hosts an IronPython 1.1.1 engine. I also have IP 2 with the DLR prototyped, but I am just trying it out on IP 1 for now. The application hosting the IP engine is called GSBService1.exe, compiled with debug symbols. I can run MDbg and launch the app in the debugger:
The debugger attaches to the running process and breaks into the code at the main entry point:
You can type help to get a list of commands available that allows you to do all sorts of things such as setting/clearing breakpoints, stepping in and out of code, etc. In addition, from the tools menu, you can open other windows such as viewing the Callstack, Locals, Modules, Threads and QuickWatch.
One of the commands is “x” which lists the loaded modules. You can see which modules are loaded in my application. You can also specify a particular module and display specific functions by pattern matching. Here is the help usage:
Note that I am looking for the function in the module gsbservice1 that executes my .py file – I know the function contains “ParseInteractive”, so I use the x command again to find it using pattern matching. Then I can set a breakpoint with the command b ~1. I then issue the go command (g) and now I am running and all set to debug at my breakpoint.
I go back to my web-based code editor and hit the command Start Debugging and immediately drop into the debugger at my specified breakpoint:
Here I am stopped at my breakpoint and I am starting to step over (command o) the code. In fact, I stepped over the code until the Python code was compiled by the IronPython engine. So how come I could not step into the IronPython code?
If you look closely, you will see that I used the x command again to see what modules are loaded and, lo and behold, there is a module called “debuggableSnippets2.dll”. What’s in there? Using the x command on the module, you can see my compiled .py code, with the class name and methods that correspond to the source that I entered in the code editor. Searching on debuggableSnippets points to this source:
// IronPython has two units of compilation:
// 1. Snippets - These are small pieces of code compiled for these cases:
// a. Interactive console expressions and statements
// b. exec statement, eval() built-in function
// c. Types generated by NewTypeMaker for instances of Python types
// d. others ...
// All snippets are created in the snippetAssembly. These are created using GenerateSnippet.
// 2. Modules - Modules are compiled in one shot when they are imported.
// Every Python module is generated into its own assembly. These are created using GenerateModule.
// OutputGenerator manages both units of compilation.
Makes sense. It’s nice that I can step through my C# code that hosts the IP engine, but what I am really interested in is stepping through the Python code.
I see that others have been able to do it, and others here as well. But I am not sure exactly “how” they are doing it. Maybe someone will enlighten me.
Note in the forum that Mike Stall indicates that there may be some issues with MDbg and that perhaps an updated version is forthcoming. That was over 6 months ago. I really hope the MSFT DLR/IronPython teams keep tool support in mind and provide an updated version of MDbg. Maybe Michael Foord, Douglas Blank, Ben Hall, and Stefan Dobrev can help lobby Harry Pierson to keep MDbg current and able to work with the DLR.
We are all building DLR IDEs of various sorts (IDE Developer and DLR IDE on Codeplex) and MDbg seems the most likely candidate as a debugger – for those that want debugger support. Also, I suspect that this is a precursor to people wanting to create their own languages on top of the DLR, so you would think debugging tool support would be important.
In the meantime, if I can only figure out how to get MDbg to debug IronPython code… And I may have to wait for Beta 4 to try it out on IronPython 2.0.
Wednesday, 30 July 2008
Dell’s Fashionable PCs – Yours Is Not Here
I have poked marketing fun at Microsoft’s Dinosaur Ads and Oracle’s “hot pluggable” EAI platform, but Dell just beat them with, “Get the mobile, fashion-forward student laptop.”
“This personalized laptop reflects your sense of style and keep you connected to fun, friends and assignments, no matter where the school day takes you.”
Wow, check it out man, FREE COLOUR!
I know that after being in this industry for 17 years, I have a little bit of the cynic in me, of the Dilbert kind, but honestly, Dell is not only marketing to an ever-younger audience (reminds me of Camel cigarette ads for kids), but it is to the point of trying to make computers as hip as skateboards (psst, hey Dell, never going to happen!). Note the “Street” version above. I wonder if, had I gotten one of those, I would have developed a bad-boy street attitude. Oh wait a minute, most of my co-workers already think that about me...
"More You. Inside and Out. Personalize your Dell STUDIO with ‘Designed for Dell” accessories – the brands you trust customized to match the colour, fit and style of your system.”
Never would I believe such branding could be applied to a... computer? Now for one moment, I will admit that I was always attracted to Alienware computers as being a closet gamer, plus they are cool – and the marketing and branding is slick. But I will always associate Dell with business computers – that’s their brand to me. Why would they jeopardise their business brand to go after the skateboard market? Share value? Pfffttt!
“A cool campus accessory that is ready to move.” Honestly Dell, just what is this marketing message supposed to convey? That a computer is a cool campus accessory for women? That it is the new purse? And what about that locker... Show me one student that has a pink fur-lined shelf for her books. Even my five-year-old daughter feels pink fur is on the outs. What is that picture of? Of her and her Mom when she was little, or her and her daughter, or?? This is so wrong. My wife says, “Who are they kidding? Computers are supposed to be tools to help people and now it has become a fashion statement – an image-conscious thing. F*&! – there is no stopping these marketing people.” OK, that was a quote from my lovely wife when I showed her this. She said a lot more, but none that I can repeat here. It is embarrassing to me, being in the computer industry, to be associated with this. Good thing I don’t have any Dell computers.
“Make Your Dorm Room The Centre Of Fun”
“Whether they’re an aspiring botanist or a fan of film noir, this PC will bring inspiration and entertainment to their dorm room for a fantastic price.”
Oh man, I can tell you that when I was taking computer courses in college, my dorm room was the center of fun and inspiration, but there were no computers in it.
“Handles Whatever Your World Throws At It.”
Dell, what happened to your brand? I picked up a Globe and Mail on Monday and you had this flyer in it. It has changed my view of Dell forever – you have lost all credibility with me. How can I ask my business customers to take your brand seriously when you are trying to be all hip and designer-like for a younger generation? Worse yet, the ads are seemingly designed by someone in marketing who has no clue about that demographic. That is aside from being pretty money-grubbing, going after an ever-younger audience – pretty soon we will see Dell ads for grade school kids in summer camp...
Someone else, from the fashion industry, wonders about the same thing, but in reverse: “Why Would Dell Hold a Fashion Show.” I can only hope that this new low in computer marketing is just a total oversight on Dell’s behalf and they will say it is an experiment gone awry and turn back to what they do best – building practical home and business computers for the masses. But somehow I doubt it. With all of this advertising comes the sunk cost of designs and tooling to produce all of these free-colour laptops.
Tuesday, 22 July 2008
I was reading an interesting post on Ted Leung’s blog called “IDEs and Dynamic Languages.” It is interesting to me for a number of reasons. One is how a text editor can be considered an IDE, even though Ted does say that automatic syntax verification and code completion are certainly beyond a text editor.
One thing that did surprise me was that there was no discussion of debuggers as part of an IDE. How can people code using a text editor without a debugger? I guess I was (totally) spoiled back in the VB6 days (yes, I will admit it), when I could step through code and, when I bumped into an error, back up the debugger a few statements, make my correction, and keep on stepping through. I have never been as productive since! Know what I mean? Yes, I know this says nothing about design, but an IDE is a tool for using a programming language, yes? So how come we (as in developers) have so few tools – or is that so little choice of tools?
According to a recent analyst report, 97% of developers on the .NET framework use Visual Studio, and over 70% use Eclipse or Eclipse-based IDEs for Java. As much as I love Visual Studio, being a .NET developer, I have no control over what I want in an IDE. Worse yet, the emerging new dynamic languages – IronRuby, IronPython, and Managed JScript – have almost no tool support at all in Visual Studio. While there have been some announcements, and articles and some tooling, it is like bolt-ons to Visual Studio and still yet to come.
As a .NET developer, specifically a .NET web developer, I would like to use something other than Visual Studio to develop web applications using a dynamic language. My wish list is for something lightweight and web-based so that I can explore using, an interactive interpreter and a simple code editor just using a web browser. Maybe something like this:
Several months have passed since I released this and I have been working on a new version that targets the IronPython 2 and the DLR. In that time, other folks have started to develop similar web-based interactive consoles and code editors. One example is Jim Hugunin’s DLR (interactive) Console which you can download from the Silverlight Dynamic Languages SDK. I should point out that this is a modified version. Also note that it uses Silverlight:
Wow, I am impressed!
Here is another web based shell (that you cannot download) from Oleg Tkachenko:
Also Michael Foord has a Silverlight based Python in the Browser:
Jimmy Schementi has an IronRuby console:
Of course there are other implementations, but they are not web-based. Nonetheless, Joe Sox’s IronTextBox works very well:
And Stefan Dobrev's DLR Pad:
And Ben Hall has just released his IronEditor:
So what’s my point? I think all of these projects are great, and kudos to the people that built them. It takes a lot of time and effort above and beyond just regular work hours. I have been there myself; my hat is off to you folks! But there are 8 versions of the interactive console and a few versions of a basic code editor. I know it may be a dream, but it would be great to collaborate with these people, write out a simple set of requirements for what a great DLR console and code editor would be, and then, as a virtual team, implement it.
After all, to a large degree, it is how well a language is supported from a tools perspective that will really determine its rate of adoption. And right now, the tools (or IDE) experience for dynamic languages on .NET is severely lacking, to the point of several people independently developing their own tooling. In this post I only pointed out a handful of these tools, and I know there are others, but I was really targeting web-based IDEs. Maybe that is an opportunity? Or is it a pipe dream?
Thursday, 26 June 2008
In my best Sam Kinison voice, “ah ahhh ahhhhhhhhhh!!!!” I can’t take it anymore. I am re-installing Office 2003 and forgetting about Office 2007. Why? It’s the ribbon man! For all of the usability design, I find it unusable. No offense to Jensen Harris or Microsoft, but for me, the consumer of the product, and after trying it for over a year, I just can't get used to it.
First, full disclosure, I am not a “usability designer” or a Microsoft “hater." In fact, I have been making a living as a software architect/programmer type on the Microsoft stack since 1991 and have been fairly happy with the platform (I love VS2008!) – except for the ribbon. But I digress.
The “ribbon.” Jensen says one of the reasons it was invented was because people could not find the new features when they were added to the product. Then he goes on to say that there are over 250 menu items and over 30 toolbars in Word 2003, which resulted in this satirical view:
Now, fair enough, but I would suggest that if a “word processing” application has 250+ menu items and over 30 toolbars, then “Toto, we’re not in Kansas anymore.” Meaning, this is no longer a word processing application.
Honestly, Word should have been “refactored” into perhaps multiple products or features split into a desktop publishing application or a whole other suite of applications. But instead, the UX team went through an honorable and noble design process of solving the wrong problem. Kudos to you Jensen, but I just can’t do it anymore. Every time I look at the ribbon, my brain freezes - I have to think, which means bad usability design.
Why? It boils down to simple math. When I see the Word 2003 menu, I see this:
Ok, I see 9 “objects.” Notice no toolbars. That’s right, simple is better… right? Ok when I get crazy, and add a toolbar, I see:
Even then, it is 19 objects on the toolbar and another 9 objects for the menus. But what do I really use?
Yeah, that’s right, 13 objects in total! That’s it. The bullets, numbering, and indent/outdent are merely conveniences for me. Note that one complaint already is that these are 2 separate toolbars and there is no way for me to put them on one row; even though there is lots of horizontal space, I am forced to use up two vertical rows. That ain’t usability.
Oh yeah, and not showing full menus on the pull-down – who designed that? Yes, I know what you thought, and I know the “fix,” but honestly, it does not work. Give me the full menu every time so I do not have to click twice. In my mind, usability is all about minimizing the choices a user has to make and minimizing the number of mouse clicks to make those choices. If you have too many choices, maybe you are trying to solve the wrong problem?
Here is my default Word 2007 "Ribbon":
There are, count them, over 60 possible choices or selections to make. And that is the problem. Too many visible choices! My poor brain needs to parse and process each item to see if it matches what I want to do. Whereas before, I had a pretty good idea that in one of the 9 menus in Word 2003 I would be able to locate and narrow down the “decision tree” to find what I am looking for. In fact, I got really good at it in Word 2003 and did not have to “think” about it. And that’s the point of good usability design – no think time. In Word 2007 I have 5 times as many visible choices per “ribbon” x 8 menus, which means exposing ~480 visible objects to the user, which is way too many! In my mind, this is a classic case of solving the wrong problem – i.e. if a “word processor” has 480 objects, commands, menu items, whatever the heck you want to call them, then it is no longer, by far, a word processing application. Something is really wrong here.
Oh, and some hidden UI gems. When I first fired up Word 2007, I was trying desperately to find the “options” menu item, which has always been under Tools/Options – for like 10 years it’s been there; if it ain’t broke... After several minutes of hunting, I had to ask one of my co-workers: where the heck is the Options option? It is hidden at the bottom of the “magic” Microsoft Office Button. I say magic because a) who knew it was a button? and b) why the heck is it there? I might as well be playing a pinball game for all the pretty widgets!
Funny that there is a “Locations of Word 2003 commands in Word 2007” article... What does that say about the user experience? OK, I will admit to being totally programmed by the “File” menu approach, but so is the rest of the world, and the vast majority of applications in the world (meaning everything but Office 2007) also operate that way, so what up? As mentioned before, I believe the wrong problem is being solved.
As a related aside, it took me forever to find on the IE7 toolbar where the “find on this page” menu item was. Have a look at the screenshot below. Where would you look?
My first instinct (decision) was to look under the “Page” menu/toolbar for "find on this page" menu item:
Nope, not there. Other related page menu items are there, but not my find on this page menu item. So then of course I looked under each menu, in random desperation, and still no go. WTH? I had to search on the internet to find the “find on this page” menu item and lo and behold it is hidden away here:
Again, I feel the wrong problem is being solved here. We have a menu called Page and if you wanted to find something on the “Page” you would look under the “Page” menu, yes? I know I live and breathe software for a living, but I just don’t get how this is usable. Again, I am not trying to pick on MS, but as someone that uses MS tools daily, there are items that come up that defy any sort of logic. And that can be said for any software products and services company.
What’s my point? While there is a lot of hype around usability and the user experience, it does no good to be solving the wrong problem. Rule #1 in software development, usability or not: make sure the right problem is being solved. And if the software industry moves towards adopting the “ribbon” as a standard user experience widget, I think I will take early retirement!
Sunday, 18 May 2008
“The required techniques of effective reasoning are pretty formal, but as long as programming is done by people that don’t master them, the software crisis will remain with us and will be considered an incurable disease. And you know what incurable diseases do: they invite the quacks and charlatans in, who in this case take the form of Software Engineering Gurus.”
EWD1305. Answers to questions from students of Software Engineering, Edsger W. Dijkstra, 2000.
A very insightful, but somewhat harsh observation by Professor Doctor Dijkstra. Also consider:
“No, I’m afraid that Computing Science has suffered from the popularity of the Internet. It has attracted an increasing – not to say: overwhelming! – number of students with very little scientific inclination and in research has only strengthened the prevailing (and somewhat vulgar) obsession with speed and capacity.” Again from EWD1305.
As an aside, Dijkstra – some of you may have heard of Dijkstra’s algorithm – made a number of fundamental contributions to the area of programming languages and received the Turing Award in 1972. You can read his diary of notes online, which makes for fascinating reading.
Why did I quote Dijkstra? Well, I tend to agree with his view and as described in my previous post, I don’t think we, as Software Engineers, know how to perform Software Engineering. In fact, I don’t even think Software Engineering really exists in our world today – and in some respects, we seem to be moving farther away from it instead of getting closer. That is to say our predilection for programming languages blinds us to what our real focus as Software Engineers should be.
Let me be more succinct. When I say Software Engineering, I am picking specifically on that huge black hole called software design – i.e. we don’t know how to “design” software. We sure know how to program the heck out of it, with our humungous list of programming languages, but what techniques and tools do we have for “designing” the software to be programmed? How do we model our design? How do we prove (i.e. verify) our design is correct? How do we simulate our design without coding it? Ponder.
Designing software is all about designing “abstractions.” Software is built on abstractions. Programming languages are all about implementing abstractions. But where do those abstractions come from and how do we describe or model (i.e. design) those abstractions?
Let’s look at some approaches to designing software “proper.” Formal methods is an approach to software design. From Wikipedia, “In computer science and software engineering, formal methods are mathematically-based techniques for the specification, development and verification of software and hardware systems. The use of formal methods for software and hardware design is motivated by the expectation that, as in other engineering disciplines, performing appropriate mathematical analyses can contribute to the reliability and robustness of a design. However, the high cost of using formal methods means that they are usually only used in the development of high-integrity systems, where safety or security is important.”
“Hoare logic (also known as Floyd–Hoare logic) is a formal system developed by the British computer scientist C. A. R. Hoare, and subsequently refined by Hoare and other researchers. It was published in Hoare's 1969 paper "An axiomatic basis for computer programming". The purpose of the system is to provide a set of logical rules in order to reason about the correctness of computer programs with the rigor of mathematical logic."
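To make the quoted definition concrete, here is the shape of a Hoare triple along with one instance of the assignment axiom – a standard textbook example, not taken from Hoare’s paper:

```latex
% A Hoare triple: if precondition P holds and command C terminates,
% then postcondition Q holds afterwards.
\{P\}\; C \;\{Q\}

% The assignment axiom \{Q[E/x]\}\; x := E \;\{Q\}, instantiated for
% Q = (x > 0) and E = x + 1 (over the integers, x+1 > 0 iff x >= 0):
\{x \geq 0\}\; x := x + 1 \;\{x > 0\}
```

The entire program is reasoned about by chaining such triples together, which is precisely the kind of “design you can verify before coding” this post is after.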
Another approach is program derivation, “In computer science, program derivation is the derivation of a program from its specification, by mathematical means.”
“Model checking is the process of checking whether a given structure is a model of a given logical formula. The concept is general and applies to all kinds of logics and suitable structures. A simple model-checking problem is testing whether a given formula in the propositional logic is satisfied by a given structure.” As an aside, you will note that the Unified Modeling Language (UML) is not on the list of model checkers. Interestingly enough, a UML diagram cannot be checked (i.e. verified) for correctness, so what good is UML? OK, I am being a bit facetious, but for the most part, I have found this similar to Dilbertisms – they are satirical truths.
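The propositional case in that quote is small enough to brute-force, which makes a nice illustration of what “is this structure a model of this formula?” actually means. A sketch in Python, with formulas represented as plain callables (an illustrative choice, not how real model checkers work):

```python
from itertools import product

def satisfies(structure, formula):
    """Check whether a structure (a truth assignment) is a model of the formula.

    structure: dict mapping variable names to booleans
    formula:   a callable over keyword arguments, e.g. lambda p, q: p or q
    """
    return formula(**structure)

def models(variables, formula):
    """Enumerate every structure over the variables that satisfies the formula."""
    return [dict(zip(variables, values))
            for values in product([False, True], repeat=len(variables))
            if formula(**dict(zip(variables, values)))]
```

Real model checkers earn their keep on temporal logics and state spaces far too large to enumerate, but the core question being answered is the same one this toy answers.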
One approach that is familiar to me is Design by Contract: “Design by Contract (DbC) or Programming by Contract is an approach to designing computer software. It prescribes that software designers should define precise verifiable interface specifications for software components based upon the theory of abstract data types and the conceptual metaphor of a business contract. The metaphor comes from business life, where a ‘client’ and a ‘supplier’ agree on a ‘contract.’”
However, while a lot of companies talk about Design by Contract, very few, at least in my experience, actually practice it, particularly at the level required for it to be beneficial. Further, while it is a “clear metaphor to guide the design process,” that in and of itself does not make it possible to simulate the design of a software system or “prove” (i.e. verify) its correctness before writing any code.
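For readers who have not seen it practiced, a lightweight flavour of Design by Contract can be sketched in Python with a decorator. This is a sketch under my own (hypothetical) names; Eiffel builds pre- and postconditions directly into the language:

```python
import functools

def contract(pre=None, post=None):
    """Attach a precondition (checked on the arguments) and a postcondition
    (checked on the result) to a function - lightweight Design by Contract."""
    def decorate(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if pre is not None:
                assert pre(*args, **kwargs), "precondition violated"
            result = func(*args, **kwargs)
            if post is not None:
                assert post(result), "postcondition violated"
            return result
        return wrapper
    return decorate

# The "contract" between client and supplier, stated up front:
@contract(pre=lambda balance, amount: amount > 0 and amount <= balance,
          post=lambda new_balance: new_balance >= 0)
def withdraw(balance, amount):
    return balance - amount
```

Note that this only checks the contract at runtime, on the inputs that happen to occur – which is exactly the gap the paragraph above is pointing at: it does not let you simulate or prove the design before any code runs.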
So how do we design software in a way that lets us model the software and simulate its running to verify its correctness? After much research, I came across one approach that seems to make the most sense to me, and I will explain why shortly. First, an introduction to Alloy and the Alloy Analyzer from the Software Design Group at MIT.
“Alloy is a structural modelling language based on first-order logic, for expressing complex structural constraints and behaviour. The Alloy Analyzer is a constraint solver that provides fully automatic simulation and checking. Our philosophy is to couple lightweight design and specification notations with powerful tools.”
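To give a rough feel for what “fully automatic simulation and checking” means, here is a toy analogue in Python (my own analogy, not Alloy itself, and the address-book model is a hypothetical example): state a design claim about an operation, then exhaustively check it over every instance within a small scope.

```python
from itertools import product

# Toy "small scope" check, in the spirit of the Alloy Analyzer:
# exhaustively test a design claim over all small instances.
NAMES = ["n0", "n1"]
ADDRS = ["a0", "a1"]

def add(book, name, addr):
    """Add or overwrite a binding; the operation under scrutiny."""
    new_book = dict(book)
    new_book[name] = addr
    return new_book

def check_add_works(scope_names, scope_addrs):
    """Claim: after add(book, n, a), looking up n always yields a."""
    # Enumerate every book: each name is either unbound (None) or
    # bound to some address within the scope.
    for bindings in product([None] + scope_addrs, repeat=len(scope_names)):
        book = {n: a for n, a in zip(scope_names, bindings) if a is not None}
        for n, a in product(scope_names, scope_addrs):
            if add(book, n, a).get(n) != a:
                return (book, n, a)  # counterexample found
    return None  # no counterexample within this scope

print(check_add_works(NAMES, ADDRS))  # None: the claim holds in this scope
```

The Alloy Analyzer does this with a constraint solver over first-order logic rather than brute-force Python loops, but the payoff is the same: a counterexample, or confidence that none exists within the chosen scope.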
Daniel Jackson has written a book on Alloy and the Alloy Analyzer called “Software Abstractions.” You can also download a few chapters of the book. In the preface, I was particularly impressed with these two paragraphs:
“The experience of exploring a software model with an automatic analyzer is at once thrilling and humiliating. Most modellers have had the benefit of review by colleagues; it’s a sure way to find flaws and catch omissions. Few modellers, however, have had the experience of subjecting their models to continual, automatic review. Building a model incrementally with an analyzer, simulating and checking as you go along, is a very different experience from using pencil and paper alone. The first reaction tends to be amazement: modeling is much more fun when you get instant, visual feedback. When you simulate a partial model, you see examples immediately that suggest new constraints to be added.
Then the sense of humiliation sets in, as you discover that there’s almost nothing you can do right. What you write down doesn’t mean exactly what you think it means. And when it does, it doesn’t have the consequences you expected. Automatic analysis tools are far more ruthless than human reviewers. I now cringe at the thought of all the models I wrote (and even published) that were never analyzed, as I know how error-ridden they must be. Slowly but surely the tool teaches you to make fewer and fewer errors. Your sense of confidence in your modeling ability (and in your models!) grows.”
Let me step back in time for a moment to illustrate why these two paragraphs are of particular interest to me. Back in the 80s, in the electronics field, I went through this type of “industrialization” first hand with electronic circuit design. At first we drafted our circuit designs on real blueprint paper, then built some sketchy prototypes (anyone remember breadboards?), designed our tests, implemented test harnesses (i.e. scopes, analyzers, generators, etc.) and tested the design “after” it was implemented. Note that it often took a number of manual iterations or cycles to get the design right as well.
I would say this pretty much sums up the same approach we use to design software today: “sketch up” some designs, start coding right away, spend an inordinate amount of time “refactoring” and developing test after test after test, and then in the end, in some cases, figure out that, guess what? All of the tests simply validate and verify that the software design is so flawed relative to the original specification (read: sketch-up) that we have to start over again. Whether the process is called TDD, Agile, iterative, waterfall, whatever, in the end it really does not matter, as the process itself is flawed: it completely misunderstands the role of software design, and therefore the result can only be the sorry state of software in our industry. But I digress.
Then the electronics design world was revolutionized when SPICE (Simulation Program with Integrated Circuit Emphasis) came along. It ran on a computer and not only allowed you to design your electronic circuits, but also simulated the implemented design (i.e. an instance): you hooked up your “virtual” test instruments, in software, and completely emulated the circuit design without ever having to breadboard the circuit. I personally lived through that era, and it is one of the main reasons why, after spending the 17 years since designing and programming software, I have come to the humble realization that there must be a better way.
The industrialization part of SPICE meant that we were moving from a completely manual process, in which we could only verify the circuit design after the fact, to dramatically reducing the cycle time to design and verify the design of an electronic circuit without ever expending any tooling or material costs whatsoever. Further, you could do numerous design iterations basically for free, plus apply thousands (billions!) of test cases to the simulated design that we would never be able to achieve manually. This was modern day industrialization in real practice. We certainly can’t do design iterations in our software world for free today.
From Software Abstractions, I love the lines, “then the sense of humiliation sets in, as you discover that there’s almost nothing you can do right. What you write down doesn’t mean exactly what you think it means. And when it does, it doesn’t have the consequences you expected. Automatic analysis tools are far more ruthless than human reviewers.”
This is exactly what happened to us electronic circuit designers when we first started using SPICE. We thought we really could design analog circuits, and as it turned out, even some of our basic design assumptions were completely flawed. We struggled at first to understand what the heck we were doing so wrong. Then, after careful analysis, tuning and testing of the design models, we started seeing the errors of our ways, and our designs became more exact, precise and tolerant of numerous error conditions. It was truly a humbling experience.
And that’s the point of this post. In my opinion, we in the software design community could use some humbling. I am pretty sure that most of our designs, regardless of the programming language used, are majorly flawed. We, meaning so-called Software Engineers, find this out after being in the field for several years, hacking (or is that tooling?) away in our favourite programming languages without any real verifiable proof that our design is the right one, or even correct, or would even work, before we started coding.
Maybe we are all in denial. Maybe I am over-generalizing, but having been designing and programming software for many years, I feel I need to reset and look at software design from a much more formal perspective, hence this intro to Alloy and the Alloy Analyzer. Both the technique and the tools embody what I personally experienced in another industry that underwent nothing short of a revolution in the way electronic circuits are designed (i.e. industrialization). One could say today that the electronics design industry has been fully industrialized.
Software industrialization will occur one day; history will repeat itself, and what happened in the electronics design world (and other engineering disciplines) will also happen in the software design world. In fact, it is already happening today. Software design techniques and tools like Alloy and the Alloy Analyzer are making it possible to design software and verify that design before the software is actually implemented (i.e. coded). And that is what I call the industrialization of software.
I ordered the Software Abstractions book, downloaded the tools and tutorials and will report back my findings sometime in the future. Needless to say, this is the first time, in a long time, I have become excited again about being in the software development industry.
Saturday, 03 May 2008
The truth is... and as much as it breaks my heart to say this professionally, there is no such thing as Software Engineering.
How can I make this claim? I can tell you that I have been employed in the software development industry since 1991, that I graduated from a two-year post-graduate program in Software Engineering Management (now incorporated into the Masters of Software Engineering at the University of Calgary), that I have worked on dozens of small and large commercial projects in North America, including my own open source project, and that I have read dozens of wonderful textbooks on the very subject of software engineering, and still none of that might convince you I am telling the truth.
So how about I make a claim that I defy anyone to dispute. I claim that there is no one on the face of this planet today who can predict the outcome (i.e. schedule, cost, resources, quality, etc.) of ANY software project of any significant size (say, >10K SLOC) with any sort of accuracy (I will be generous and say +25%, as we all know it will never be a negative percentage).
If you happen to get lucky and actually hit the target once, I make the further claim that your odds are far better in Vegas than they are in the software world for repeatability – that is, repeatability of success, not failure, which I would argue we are very good at.
Big deal, you say. Vast amounts of documentation have been written on the subject of Software Engineering and “how to” avoid mistakes and failures. For every ailment, we have a remedy we can trot out to counter it. But we still make (the same) mistakes. In fact, just the other day, a very popular web site was down for two days, which kicked off this type of tirade on my blog again. In fact, "I am too old for this shit".
I love Steve McConnell’s books, as I do Robert Glass’s “Facts and Fallacies of Software Engineering.” It contains 55 facts and 10 fallacies. Anyone who has been “in the biz” for any length of time is likely to have made some or all of the mistakes the book describes. I can admit I fall into the category of having made all of those mistakes and then some. And it is not just me.
What gives, then? Why is developing successful software so impossible to predict and repeat? Oh sure, all of those books (and experience) outline and recommend treatments, but I sometimes feel we are no closer than the founders of the term software engineering were. Of course, there are people who point out that software development is as much art as it is engineering (remember, I said it wasn’t engineering, even though that breaks my heart to say, given what I have written on this blog in the past) and that Software Engineering is a completely non-deterministic activity, but hey, so is life.
Stepping back a bit, I would say that our world of software development is unindustrialized compared to other industrialized “engineering” disciplines such as civil, chemical, electrical, electronics and mechanical engineering. This is simply a matter of where we sit on the timeline, which we can do nothing about other than wait it out.
Well, I am not a patient guy and would like to do something about this. One thought that has occurred to me, which I believe is of direct relevance to Software Engineering, is: how do we “visualize” software? Specifically, how do we visualize the “design” of software, and how do we visualize the “verification” of the built software?
The closest answer I have seen is this:
Ok, so a little tongue in cheek, but ain’t that the truth about Software Engineering? The title of this blog post did not lie.
Note that other engineering disciplines all have ways of “visualizing” the design of their domain. Even designing DNA strands and application-specific integrated circuits (ASICs) means dealing with 100+ million objects. What is truly ironic is that most of these disciplines use computer software (CAD) to model and verify their designs.
What’s my point? I believe that we are thinking about “visualizing” software the wrong way. The CASE tools of the early 90s are proof of this, and I would say that UML will soon follow the same fate. Why? Because, in my opinion, UML stickmen and the other funny looking symbols do not visualize the design of software, nor the verification of software – it is simply “Lost in Translation.”
How can we design software on any scale if we can’t model it or visualize it with 100% fidelity? The answer is that we can’t. The closest we can get to modelling or visualizing software is using the same text editor we have been using for over 50 years and using our favourite programming language to design the software. Ergo – the source code is the design. Certainly a favourite topic of Jack Reeves. Btw, I happen to concur, which is also the fundamental problem. Visualizing 10,000 lines of source code in any programming language is (far and away) beyond any of us mere mortals, who are good for about 7 ± 2 objects at any given moment in time.
It is on this basis that I feel we are way (WAY!) off the target path for Software Engineering and the root cause of why there is no such thing today as software engineering. We have very poor approximations of Software Engineering, still using, comparatively speaking, the equivalent of the Stone Age hammer and chisel. Yet it is the year 2008 – what’s wrong with our industry?
How should we visualize software, both the design and (mathematical) verification (i.e. proof) of the runtime executable? How will visualizing software improve Software Engineering? How can we make the industrialization of software happen? Subject of my next post.
© Copyright 2008 Mitch Barnett - Software Industrialization is the computerization of software design and function.