# Wednesday, 30 July 2008
Dell’s Fashionable PCs – Yours Is Not Here

I have poked marketing fun at Microsoft’s Dinosaur Ads and Oracle’s “hot pluggable” EAI platform, but Dell just beat them both with, “Get the mobile, fashion-forward student laptop.”

“This personalized laptop reflects your sense of style and keep you connected to fun, friends and assignments, no matter where the school day takes you.”

Wow, check it out man, FREE COLOUR!

I know that after being in this industry for 17 years, I have a little bit of the cynic in me, of the Dilbert kind, but honestly, Dell is not only marketing to an ever-younger audience (reminds me of Camel cigarette ads for kids), it is to the point of trying to make computers as hip as skateboards (psst, hey Dell, never going to happen!).  Note the “Street” version above.  I wonder, if I got one of those, whether I would develop a bad-boy street attitude.  Oh wait a minute, most of my co-workers already think that about me...


“More You. Inside and Out. Personalize your Dell STUDIO with ‘Designed for Dell’ accessories – the brands you trust customized to match the colour, fit and style of your system.”

Never would I believe such branding could be applied to a... computer?  Now for one moment, I will admit that I was always attracted to Alienware computers as being a closet gamer, plus they are cool – and the marketing and branding is slick.  But I will always associate Dell with business computers – that’s their brand to me.  Why would they jeopardise their business brand to go after the skateboard market?  Share value?  Pfffttt!


“A cool campus accessory that is ready to move.”  Honestly Dell, just what is this marketing message supposed to convey?  That a computer is a cool campus accessory for women?  That it is the new purse?  And what about that locker...  Show me one student who has a pink fur-lined shelf for her books.  Even my five-year-old daughter feels pink fur is on the outs.  And what is that picture of?  Of her and her Mom when she was little, or her and her daughter, or...?  This is so wrong.  My wife says, “Who are they kidding?  Computers are supposed to be tools to help people and now they have become a fashion statement – an image-conscious thing.  F*&!  – there is no stopping these marketing people.”  OK, that was a quote from my lovely wife when I showed her this.  She said a lot more, but none that I can repeat here :-) It is embarrassing to me, being in the computer industry, to be associated with this.  Good thing I don’t have any Dell computers.


 “Make Your Dorm Room The Centre Of Fun”

“Whether they’re an aspiring botanist or a fan of film noir, this PC will bring inspiration and entertainment to their dorm room for a fantastic price.”

Oh man, I can tell you that when I was taking computer courses in college, my dorm room was the center of fun and inspiration, but there were no computers in it ;-)

“Handles Whatever Your World Throws At It.”

Dell, what happened to your brand?  I picked up a Globe and Mail on Monday and you had this flyer in it.  It has changed my view of Dell forever – you have lost all credibility with me.  How can I ask my business customers to take your brand seriously when you are trying to be all hip and designer-like for a younger generation?  Worse yet, the ads seem to be designed by someone in marketing who has no clue about that demographic.  That’s aside from it being pretty money-grubbing to go after an ever-younger audience – pretty soon we will see Dell ads for grade-school kids in summer camp...

Someone else, from the fashion industry, wonders about the same thing, but in reverse: “Why Would Dell Hold a Fashion Show?”  I can only hope that this new low in computer marketing is just a total oversight on Dell’s behalf, and that they will say it was an experiment gone awry and turn back to what they do best – building practical home and business computers for the masses.  But somehow I doubt it.  With all of this advertising comes the sunk cost of designs and tooling to produce all of these free-colour laptops.

Wednesday, 30 July 2008 13:16:14 (Pacific Daylight Time, UTC-07:00)  #    Comments [0]
# Tuesday, 22 July 2008

I was reading an interesting post at Ted Leung’s blog called “IDE’s and Dynamic Languages”.  It is interesting to me for a number of reasons.  One is how a text editor can be considered an IDE, even though Ted does say that automatic syntax verification and code completion are certainly beyond a text editor.


One thing that did surprise me was that there was no discussion of debuggers as part of an IDE.  How can people code using a text editor without a debugger?  I guess I was (totally) spoiled back in the VB6 days (yes, I will admit it), when I could step through code and, when I bumped into an error, back up the debugger a few statements, make my correction and keep on stepping through.  I have never been as productive since!  Know what I mean?  Yes, I know this says nothing about design, but an IDE is a tool for using a programming language, yes?  So how come we (as in developers) have so few tools – or is that so little choice of tools?
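For what it’s worth, the stepping a debugger offers is built on a trace hook that most runtimes expose.  A minimal sketch in Python (used here purely as a stand-in language) shows how `sys.settrace` observes every executed line – the raw material any step-through debugger is built from:

```python
import sys

trace_log = []  # every "step" the hook sees: (function name, line number)

def tracer(frame, event, arg):
    # the interpreter calls this on every event; 'line' events are
    # exactly the per-statement stepping a debugger exposes to you
    if event == "line":
        trace_log.append((frame.f_code.co_name, frame.f_lineno))
    return tracer  # keep tracing inside this frame

def demo():
    total = 0
    for i in range(3):
        total += i
    return total

sys.settrace(tracer)
result = demo()
sys.settrace(None)

print(result)  # 3 – and trace_log now holds every line demo() executed
```

A real debugger adds breakpoints, variable inspection and (in VB6’s case) edit-and-continue on top of this hook, but the hook is where it all starts.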


According to a recent analyst report, 97% of developers on the .NET Framework use Visual Studio, and over 70% use Eclipse or Eclipse-based IDEs for Java.  As much as I love Visual Studio, being a .NET developer I have little control over what I get in an IDE.  Worse yet, the emerging dynamic languages – IronRuby, IronPython and Managed JScript – have almost no tool support at all in Visual Studio.  While there have been some announcements, articles and some tooling, it amounts to bolt-ons to Visual Studio, with much still yet to come.


As a .NET developer, specifically a .NET web developer, I would like to use something other than Visual Studio to develop web applications using a dynamic language.  My wish list is for something lightweight and web-based, so that I can explore using an interactive interpreter and a simple code editor from just a web browser.  Maybe something like this:




And this:



This was my first version of a web-based, “very” lightweight IDE that used a JavaScript-based interactive console and a JavaScript-based code editor.  You can download the code from CodePlex.
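The heart of such a console is actually tiny: keep an interpreter alive per user session, feed it lines as they arrive from the browser, and ship the captured output back.  A minimal sketch in Python (my original used JavaScript and ASP.NET; the class and method names here are just illustrative) using the standard library’s `code.InteractiveInterpreter`:

```python
import code
import contextlib
import io

class WebConsoleSession:
    """One user's interactive session; state survives between requests."""

    def __init__(self):
        self.interp = code.InteractiveInterpreter()
        self.pending = []  # lines of a not-yet-complete statement

    def push(self, line):
        """Feed one line; return (captured output, needs_more_input)."""
        self.pending.append(line)
        source = "\n".join(self.pending)
        out = io.StringIO()
        with contextlib.redirect_stdout(out):
            # runsource returns True while the statement is incomplete
            more = self.interp.runsource(source)
        if not more:
            self.pending = []  # statement finished (or errored); reset
        return out.getvalue(), more

# what two browser round-trips would look like
session = WebConsoleSession()
session.push("x = 2 + 2")
output, _ = session.push("print(x)")
print(output)  # 4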


Several months have passed since I released this, and I have been working on a new version that targets IronPython 2 and the DLR.  In that time, other folks have started to develop similar web-based interactive consoles and code editors.  One example is Jim Hugunin’s DLR (interactive) console, which you can download from the Silverlight Dynamic Languages SDK.  I should point out that this is a modified version.  Also note that it uses Silverlight:



As an old ASP.NET/AJAX/JavaScript/HTML/CSS developer, I am quite excited about Silverlight for a number of reasons.  I am hoping the adoption rate and the tooling for this technology increase over time.  Silverlight development also suffers from not being fully integrated into Visual Studio, but I should be fair, as it is also beta.  Trying to run a Silverlight “web” application on a web server and have it interact with other languages is tough at best.  But the rich UI experience is really quite nice compared to old-school ASP.NET forms apps.  So even six months after releasing my lightweight dynamic languages development environment, I am still torn between “tried and true” JavaScript and the new Silverlight.


I thought about how I would implement IntelliSense in a JavaScript console and cringed at the thought of how to actually implement it.  I could see using a few of the controls from the AJAX Control Toolkit, but it would be quite the effort.  Not only does the DLR console support IntelliSense, but so does Daniel Eloff's interactive console:



Wow, I am impressed!
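To be fair, the core of member completion is not deep – wiring it into a browser UI is the hard part.  A toy sketch in Python (the function is mine, not from any of these projects): reflect over the object named by the text before the dot and filter its attributes by prefix:

```python
import importlib

def complete(fragment):
    """Suggest attribute completions for console text like 'math.sq'."""
    module_name, dot, prefix = fragment.rpartition(".")
    if not dot:
        return []  # a real console would fall back to in-scope names here
    try:
        obj = importlib.import_module(module_name)
    except ImportError:
        return []  # unknown name: nothing sensible to suggest
    return sorted(name for name in dir(obj) if name.startswith(prefix))

print(complete("math.sq"))  # ['sqrt']
```

A real IntelliSense engine also resolves chained expressions and local variables without evaluating them, which is where the genuine effort goes.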


Here is another web based shell (that you cannot download) from Oleg Tkachenko:



Also Michael Foord has a Silverlight based Python in the Browser:



Jimmy Schementi has an IronRuby console:




Of course there are other implementations, but they are not web-based.  Nonetheless, Joe Sox’s IronTextBox works very well:



And Stefan Dobrev's DLR Pad:



And Ben Hall has just released his IronEditor:



So what’s my point?  I think all of these projects are great, and kudos to the people who built them.  It takes a lot of time and effort above and beyond regular work hours.  I have been there myself – my hat’s off to you folks!  But there are eight versions of the interactive console and a few versions of a basic code editor.  I know it may be a dream, but it would be great to collaborate with these people and write out a simple set of requirements for what a great DLR console and code editor would be.  And then, as a virtual team, implement it.


After all, to a large degree, it is how well a language is supported from a tools perspective that will really determine its rate of adoption.  And right now, the tools (or IDE) experience for dynamic languages on .NET is severely lacking, to the point where several people are independently developing their own tooling.  In this post I only pointed out a handful of these tools, and I know there are others, but I was really targeting web-based IDEs.  Maybe that is an opportunity?  Or is it a pipe dream?


I am also wondering how, in a code editor, one could hook up a debugger to actually step through the code, regardless of what dynamic language (or static language, or JavaScript, or Silverlight) is being used.  Hopefully I won’t have to wait until PDC 2008 to see what the next step is from MSFT.  Who knows, maybe there is enough interest to develop a web-based IDE for dynamic languages on .NET.

Tuesday, 22 July 2008 16:35:28 (Pacific Daylight Time, UTC-07:00)  #    Comments [2]
# Thursday, 26 June 2008

In my best Sam Kinison voice, “ah ahhh ahhhhhhhhhh!!!!”  I can’t take it anymore.  I am re-installing Office 2003 and forgetting about Office 2007.  Why?  It’s the ribbon, man!  For all of its usability design, I find it unusable.  No offense to Jensen Harris or Microsoft, but for me, the consumer of the product, and after trying it for over a year, I just can’t get used to it.


First, full disclosure, I am not a “usability designer” or a Microsoft “hater."  In fact, I have been making a living as a software architect/programmer type on the Microsoft stack since 1991 and have been fairly happy with the platform (I love VS2008!) – except for the ribbon.  But I digress.


The “ribbon.”  Jensen says one of the reasons it was invented was because people could not find the new features when they were added to the product.  Then he goes on to say that there are over 250 menu items and over 30 toolbars in Word 2003, which resulted in this satirical view:




Now, fair enough, but I would suggest that if a “word processing” application has 250+ menu items and over 30 toolbars, then “Toto, we’re not in Kansas anymore.”  Meaning, this is no longer a word processing application.


Honestly, Word should have been “refactored” into perhaps multiple products or features split into a desktop publishing application or a whole other suite of applications.  But instead, the UX team went through an honorable and noble design process of solving the wrong problem.  Kudos to you Jensen, but I just can’t do it anymore.  Every time I look at the ribbon, my brain freezes - I have to think, which means bad usability design.


Why?  It boils down to simple math.  When I see the Word 2003 menu, I see this:



OK, I see 9 “objects.”  Notice: no toolbars.  That’s right, simple is better… right?  OK, when I get crazy and add a toolbar, I see:



Even then, it is 19 objects on the toolbar and another 9 objects for the menus.  But what do I really use? 



Yah, that’s right, 13 objects in total!  That’s it.  The bullets, numbering and indent/outdent are merely conveniences for me.  One complaint already: these are two separate toolbars and there is no way for me to put them on one row, so even though there is lots of horizontal space, I am forced to use up two vertical rows.  That ain’t usability.


2nd complaint:



Oh yeah, non-full menus on the pull-down – who designed that?  Yes, I know what you thought, and I know the “fix”, but honestly, it does not work.  Give me the full menu every time so I do not have to click twice.  In my mind, usability is all about minimizing the choices a user has to make and minimizing the number of mouse clicks to make those choices.  If you have too many choices, maybe you are trying to solve the wrong problem?


Here is my default Word 2007 "Ribbon":



There are, count them, over 60 possible choices or selections to make.  And that is the problem.  Too many visible choices!  My poor brain needs to parse and process each item to see if it matches what I want to do.  Whereas before, I had a pretty good idea that in one of the 9 menus in Word 2003 I would be able to locate and narrow down the “decision tree” to find what I was looking for.  In fact, I got really good at it in Word 2003 and did not have to “think” about it.  And that’s the point of good usability design – no think time.  In Word 2007 I have five times as many visible choices per “ribbon”, times 8 ribbon tabs, which means exposing ~480 visible objects to the user – way too many!  In my mind, this is a classic case of solving the wrong problem – i.e. if a “word processor” has 480 objects, commands, menu items, whatever the heck you want to call them, then it is no longer, by any stretch, a word processing application.  Something is really wrong here.
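The “think time” complaint can even be roughed out with a little arithmetic.  Hick’s law, a standard HCI rule of thumb, models decision time among n equally likely choices as growing with log2(n + 1); the constant below is purely illustrative, not a measured value:

```python
import math

def decision_time(n_choices, b=0.2):
    """Hick's law: T = b * log2(n + 1), b an empirical constant (seconds)."""
    return b * math.log2(n_choices + 1)

# rough comparison of visible-choice counts discussed above
for label, n in [("9 menus", 9), ("menus + toolbar", 28), ("one ribbon tab", 60)]:
    print(f"{label}: {n} choices -> ~{decision_time(n):.2f} s to decide")
```

The law assumes practiced users and ordered choices, so take the numbers with salt – but the shape of the curve is the point: more visible options, more scanning before every action.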


Oh, and some hidden UI gems.  When I first fired up Word 2007, I was trying desperately to find the “options” menu item, which has always been under Tools/Options – for like 10 years it’s been there; if it ain’t broke...  After several minutes of hunting, I had to ask one of my co-workers: where the heck is the Options option?  It is hidden at the bottom of the “magic” Microsoft Office Button.  I say magic because a) who knew it was a button? and b) why the heck is it there?  I might as well be playing a pinball game for all the pretty widgets!



Funny that there is a “Locations of Word 2003 commands in Word 2007” article...  What does that say about the user experience?  OK, I will admit to being totally programmed by the “File” menu approach, but so is the rest of the world, and the vast majority of applications in the world (meaning everything but Office 2007) also operate that way, so what up?  As mentioned before, I believe the wrong problem is being solved.


As a related aside, it took me forever to find the “find on this page” menu item on the IE7 toolbar.  Have a look at the screenshot below.  Where would you look?



My first instinct (decision) was to look under the “Page” menu/toolbar for "find on this page" menu item:




Nope, not there.  Other related page menu items are there, but not my “find on this page” menu item.  So then of course I looked under each menu, in random desperation, and still no go.  WTH?  I had to search the internet to find the “find on this page” menu item, and lo and behold, it is hidden away here:



Again, I feel the wrong problem is being solved here.  We have a menu called Page and if you wanted to find something on the “Page” you would look under the “Page” menu, yes?  I know I live and breathe software for a living, but I just don’t get how this is usable.  Again, I am not trying to pick on MS, but as someone that uses MS tools daily, there are items that come up that defy any sort of logic.  And that can be said for any software products and services company.


What’s my point?  While there is a lot of hype around usability and the user experience, it does no good to be solving the wrong problem.  Rule #1 in software development, usability or not: make sure the right problem is being solved.  And if the software industry moves towards adopting the “ribbon” as a standard user-experience widget, I think I will take early retirement!

Thursday, 26 June 2008 17:02:02 (Pacific Daylight Time, UTC-07:00)  #    Comments [0]
# Sunday, 18 May 2008

“The required techniques of effective reasoning are pretty formal, but as long as programming is done by people that don’t master them, the software crisis will remain with us and will be considered an incurable disease.  And you know what incurable diseases do: they invite the quacks and charlatans in, who in this case take the form of Software Engineering Gurus.”

EWD1305. Answers to questions from students of Software Engineering, Edsger W. Dijkstra, 2000.

A very insightful, but somewhat harsh observation by Professor Doctor Dijkstra.  Also consider:

“No, I’m afraid that Computing Science has suffered from the popularity of the Internet.  It has attracted an increasing – not to say: overwhelming! – number of students with very little scientific inclination and in research has only strengthened the prevailing (and somewhat vulgar) obsession with speed and capacity.”  Again from EWD1305.

As an aside, Dijkstra – some of you may have heard of Dijkstra’s algorithm – made a number of fundamental contributions to the area of programming languages and received the Turing Award in 1972.  You can read his notes online, which make for fascinating reading.

Why did I quote Dijkstra?  Well, I tend to agree with his view and as described in my previous post, I don’t think we, as Software Engineers, know how to perform Software Engineering.  In fact, I don’t even think Software Engineering really exists in our world today – and in some respects, we seem to be moving farther away from it instead of getting closer.  That is to say our predilection for programming languages blinds us to what our real focus as Software Engineers should be.

Let me be more succinct.  When I say Software Engineering, I am picking specifically on that huge black hole called software design – i.e. we don’t know how to “design” software.  We sure know how to program the heck out of it, with our humungous list of programming languages, but what techniques and tools do we have for “designing” the software to be programmed?  How do we model our design?  How do we prove (i.e. verify) our design is correct?  How do we simulate our design without coding it?  Ponder.

Designing software is all about designing “abstractions.”  Software is built on abstractions.  Programming languages are all about implementing abstractions.  But where do those abstractions come from and how do we describe or model (i.e. design) those abstractions?

Let’s look at some approaches to designing software “proper.”  Formal methods is an approach to software design.  From Wikipedia, “In computer science and software engineering, formal methods are mathematically-based techniques for the specification, development and verification of software and hardware systems.  The use of formal methods for software and hardware design is motivated by the expectation that, as in other engineering disciplines, performing appropriate mathematical analyses can contribute to the reliability and robustness of a design.  However, the high cost of using formal methods means that they are usually only used in the development of high-integrity systems, where safety or security is important.”

“Hoare logic (also known as Floyd–Hoare logic) is a formal system developed by the British computer scientist C. A. R. Hoare, and subsequently refined by Hoare and other researchers. It was published in Hoare's 1969 paper "An axiomatic basis for computer programming". The purpose of the system is to provide a set of logical rules in order to reason about the correctness of computer programs with the rigor of mathematical logic."  
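A Hoare triple {P} C {Q} reads: if precondition P holds before command C runs, postcondition Q holds afterwards.  Real Hoare logic proves this deductively, but the shape of the idea can be illustrated by brute-force checking a triple over a range of states (a sketch; the helper function is mine, not from Hoare):

```python
def check_triple(pre, command, post, states):
    """Test {pre} command {post} over the given states.
    Not a proof - just bounded evidence in the spirit of the rule."""
    for state in states:
        if pre(state):
            after = command(dict(state))  # copy: commands mutate state
            if not post(state, after):
                return state  # counterexample: the triple is wrong
    return None

# {x >= 1}  x := x + 1  {x >= 2}
pre = lambda s: s["x"] >= 1
command = lambda s: {**s, "x": s["x"] + 1}
post = lambda before, after: after["x"] >= 2

print(check_triple(pre, command, post, [{"x": i} for i in range(-5, 50)]))  # None
```

The deductive system earns its keep precisely because it covers *all* states, not just the ones we thought to enumerate.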

Another approach is program derivation, “In computer science, program derivation is the derivation of a program from its specification, by mathematical means.”

“Model checking is the process of checking whether a given structure is a model of a given logical formula.  The concept is general and applies to all kinds of logics and suitable structures.  A simple model-checking problem is testing whether a given formula in propositional logic is satisfied by a given structure.”  As an aside, you will note that the Unified Modeling Language (UML) is not on the list of model checkers.  Interestingly enough, a UML diagram cannot be checked (i.e. verified) for correctness, so what good is UML?  OK, I am being a bit facetious, but for the most part I have found this to be like Dilbertisms – satirical truths.
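That “simple model-checking problem” is concrete enough to code up.  A tiny brute-force propositional model checker (a sketch, with formulas represented as Python functions over named Boolean variables):

```python
from itertools import product

def models(formula, variables):
    """Yield every structure (truth assignment) that satisfies formula."""
    for values in product([False, True], repeat=len(variables)):
        structure = dict(zip(variables, values))
        if formula(**structure):  # "is this structure a model?"
            yield structure

# which structures satisfy (p -> q) and p ?
implies_and_p = lambda p, q: ((not p) or q) and p
print(list(models(implies_and_p, ["p", "q"])))  # [{'p': True, 'q': True}]
```

Real model checkers handle temporal logics and enormous state spaces with clever encodings, but the question they answer is exactly this one.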

One approach that is familiar to me is Design by Contract: “Design by Contract, DbC or Programming by Contract is an approach to designing computer software.  It prescribes that software designers should define precise, verifiable interface specifications for software components, based upon the theory of abstract data types and the conceptual metaphor of a business contract.”  The metaphor comes from business life, where a “client” and a “supplier” agree on a “contract.”

However, while a lot of companies talk about Design by Contract, very few, at least in my experience, actually practice it, particularly at the level required for it to be beneficial.  Further, while it is a “clear metaphor to guide the design process,” the metaphor in and of itself makes it hard to simulate the design of a software system or “prove” (i.e. verify) its correctness before writing any code.
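To make the metaphor concrete, here is what an executable contract looks like when reduced to plain assertions (a minimal sketch in Python; Eiffel, where DbC originates, gives this first-class syntax):

```python
class Account:
    """The 'contract': preconditions are the client's obligations,
    postconditions and the invariant are the supplier's."""

    def __init__(self, balance=0):
        assert balance >= 0                        # precondition
        self.balance = balance

    def withdraw(self, amount):
        assert 0 < amount <= self.balance          # precondition
        before = self.balance
        self.balance -= amount
        assert self.balance == before - amount     # postcondition
        assert self.balance >= 0                   # class invariant
        return amount

account = Account(100)
account.withdraw(40)
print(account.balance)  # 60
```

Note what this buys you and what it doesn’t: a violated contract fails loudly at run time, but only on the executions you actually exercise – which is exactly the gap between DbC and verifying a design up front.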

So how do we design software in a way that lets us model it and simulate running it to verify its correctness?  After much research, I came across one approach that makes the most sense to me, for reasons I will explain shortly.  First, an introduction to Alloy and the Alloy Analyzer from the Software Design Group at MIT.

“Alloy is a structural modelling language based on first-order logic, for expressing complex structural constraints and behaviour. The Alloy Analyzer is a constraint solver that provides fully automatic simulation and checking.   Our philosophy is to couple lightweight design and specification notations with powerful tools.”

There is a book on Alloy and the Alloy Analyzer called “Software Abstractions” by Daniel Jackson.  You can also download a few chapters of the book.  In the preface, I was particularly impressed with these two paragraphs:


“The experience of exploring a software model with an automatic analyzer is at once thrilling and humiliating. Most modellers have had the benefit of review by colleagues; it’s a sure way to find flaws and catch omissions. Few modellers, however, have had the experience of subjecting their models to continual, automatic review. Building a model incrementally with an analyzer, simulating and checking as you go along, is a very different experience from using pencil and paper alone. The first reaction tends to be amazement: modeling is much more fun when you get instant, visual feedback. When you simulate a partial model, you see examples immediately that suggest new constraints to be added.


Then the sense of humiliation sets in, as you discover that there’s almost nothing you can do right. What you write down doesn’t mean exactly what you think it means. And when it does, it doesn’t have the consequences you expected.  Automatic analysis tools are far more ruthless than human reviewers.  I now cringe at the thought of all the models I wrote (and even published) that were never analyzed, as I know how error-ridden they must be. Slowly but surely the tool teaches you to make fewer and fewer errors. Your sense of confidence in your modeling ability (and in your models!) grows.”


Let me step back in time for a moment to illustrate to you why these two paragraphs are of particular interest to me.  Back in the 80's, in the electronics field, I went through this type of “industrialization” first hand with electronics circuit design.  At first we drafted our circuit designs on real blueprint paper, we then built some sketchy prototypes (anyone remember breadboards?), designed our tests and implemented test harnesses (i.e. scopes, analyzers, generators, etc.) and tested the design “after” it was implemented.  Note that it may take a number of manual iterations or cycles to get the design right as well. 


I would say this pretty much sums up the approach we use to design software today: “sketch up” some designs, start coding right away, spend an inordinate amount of time “refactoring” and developing test after test after test, and then in the end, in some cases, figure out that, guess what, all of the tests simply validate and verify that the software design is so flawed relative to the original specification (read: sketch-up) that we have to start over again.  Whether the process is called TDD, Agile, iterative or waterfall, in the end it really does not matter, as the process itself is flawed: it completely misunderstands the role of software design, and therefore the result can only be the sorry state of software in our industry.  But I digress.


Then the electronics design world was revolutionized when SPICE (Simulation Program with Integrated Circuit Emphasis) came along.  It ran on a computer that not only allowed you to design your electronic circuits, but also simulated the implemented design (i.e. an instance): you hooked up your “virtual” test instruments, in software, and completely emulated the circuit design without ever having to breadboard it.  I personally lived that era, and it is one of the main reasons why, after spending 17 years designing and programming software since, I have come to the humble realization that there must be a better way.


The industrialization part of SPICE meant that we moved from a completely manual process, in which we could only verify the circuit design after the fact, to dramatically reduced cycle times for designing and verifying an electronic circuit, without expending any tooling or material costs whatsoever.  Further, you could do numerous design iterations basically for free, plus apply thousands (billions!) of test cases to the simulated design – something we could never achieve manually.  This was modern-day industrialization in real practice.  We certainly can’t do design iterations in our software world for free today.
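This simulate-and-check loop is exactly what draws me to Alloy: its analyzer exhaustively checks a model within a small bound, on the “small-scope hypothesis” that most design flaws already show up in small instances.  The workflow can be imitated crudely in plain Python (a sketch, not Alloy): state a claimed property of the design, then search every small case for a counterexample:

```python
from itertools import product

def find_counterexample(claim, alphabet, max_len):
    """Exhaustively check a claimed property over every sequence up to
    max_len - bounded checking in the spirit of the Alloy Analyzer."""
    for n in range(max_len + 1):
        for seq in product(alphabet, repeat=n):
            if not claim(list(seq)):
                return list(seq)  # the design is flawed; here is proof
    return None  # no counterexample within this scope

# a deliberately wrong "design" claim: reversing a list sorts it
claim = lambda xs: sorted(xs) == list(reversed(xs))
print(find_counterexample(claim, [0, 1], 3))  # [0, 1]
```

Like SPICE, the tool hands you the smallest failing case instantly, for free, before anything is built – and like SPICE, it is humbling how quickly it does so.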


From Software Abstractions, I love the lines, “then the sense of humiliation sets in, as you discover that there’s almost nothing you can do right. What you write down doesn’t mean exactly what you think it means. And when it does, it doesn’t have the consequences you expected.  Automatic analysis tools are far more ruthless than human reviewers.”  


This is exactly what happened to us electronic circuit designers when we first started using SPICE.  We thought we really could design analog circuits and, as it turned out, even some of our basic design assumptions were completely flawed.  We struggled at first to understand how what we were doing could be so wrong.  Then, after careful analysis, tuning and testing of the design models, we started seeing the errors of our ways, and our designs became more exact, precise and tolerant of numerous error conditions.  It was truly a humbling experience.


And that’s the point of this post.  In my opinion, we in the software design community could use some humbling.  I am pretty sure that most of our designs, regardless of the programming language used, are majorly flawed.  We, meaning so-called Software Engineers, find this out after being in the field for several years, hacking (or is that tooling?) away in our favourite programming languages without any real, verifiable proof that our design is the right one, or even correct, or would even work, before we started coding.


Maybe we are all in denial.  Maybe I am over-generalizing, but having been designing and programming software for many years, I feel I need to reset and look at software design from a much more formal perspective – hence this intro to Alloy and the Alloy Analyzer.  Both the technique and the tools embody what I personally experienced in another industry that underwent nothing short of a revolution in the way electronic circuits are designed (i.e. industrialization).  One could say that today the electronics design industry has been fully industrialized.


Software industrialization will occur one day; history will repeat itself: what happened in the electronics design world (and other engineering disciplines) will also happen in the software design world.  In fact, it is already happening today.  Software design techniques and tools like Alloy and the Alloy Analyzer are making it possible to design software and verify the design before the software is actually implemented (i.e. coded).  And that is what I call the industrialization of software.


I ordered the Software Abstractions book, downloaded the tools and tutorials and will report back my findings sometime in the future.  Needless to say, this is the first time, in a long time, I have become excited again about being in the software development industry.

Sunday, 18 May 2008 20:21:14 (Pacific Daylight Time, UTC-07:00)  #    Comments [0]
# Saturday, 03 May 2008

The truth is... and as much as it breaks my heart to say this professionally, there is no such thing as Software Engineering.

How can I make this claim?  I can tell you that I have been employed in the software development industry since 1991; that I graduated from a two-year post-graduate program in Software Engineering Management, which is now incorporated into a Masters of Software Engineering at the University of Calgary; that I have worked on dozens of small and large commercial projects in North America, including my own open source project; and that I have read dozens of wonderful textbooks on the very subject of software engineering – and it still might not convince you I am telling the truth.

So how about I make a claim that I defy anyone to dispute.  I claim that there is no one on the face of this planet today who can predict the outcome (i.e. schedule, cost, resources, quality, etc.) of ANY software project of any significant size (say, >10K SLOC) with any sort of accuracy (I will be generous and say +25%, as we all know it will never be a negative percentage).

If you happen to get lucky and actually hit the target once, I make the further claim that your odds are far better in Vegas than they are in the software world for repeatability – that is, repeatability of success, not failure, which I would argue we are very good at.

Big deal, you say.  Vast amounts of documentation have been written on the subject of Software Engineering and “how to” avoid mistakes and failures.  In fact, for every ailment, we have a remedy we can trot out to counter it.  But we still make (the same) mistakes.  In fact, just the other day, a very popular web site was down for two days, which has kicked off this type of tirade on my blog again.  In fact, “I am too old for this shit.”

I love Steve McConnell’s books, as I do Robert Glass’s, “.”  It contains 55 facts and 10 fallacies.  Anyone who has been “in the biz” for any length of time is likely to have made some or all of the mistakes listed in the book.  I can admit I fall into the category of having made all of those mistakes and then some.  And it is not just me.

What gives, then?  Why is developing successful software so impossible to predict and repeat?  Oh sure, all of those books (and experience) outline and recommend treatments, but I sometimes feel we are no closer than the founders of the term “software engineering.”  Of course, there are people who point out that software development is as much art as it is engineering (remember, I said it wasn’t engineering – even though that breaks my heart to say, given what I have written on this blog in the past) and that Software Engineering is a completely non-deterministic activity – but hey, so is life.

Stepping back a bit, I would say that our world of software development is unindustrialised compared to other industrialised “engineering” disciplines such as civil, chemical, electrical, electronics and mechanical engineering.  This is simply based on a timeline, which we can do nothing about other than wait it out.

Well, I am not a patient guy and would like to do something about this.  One thought that has occurred to me, which I believe is of direct relevance to Software Engineering, is: how do we “visualize” software?  Specifically, how do we visualize the “design” of software, and how do we visualize the “verification” of the built software?

The closest answer I have seen is this:


Ok, so a little tongue in cheek, but ain’t that the truth about Software Engineering?  The title of this blog post did not lie :-)

Note that other engineering disciplines all have ways of “visualizing” the designs of their domain.  Even designs of DNA strands and application-specific integrated circuits (ASICs) deal with 100+ million objects.  What is truly ironic about this is that most of these disciplines use computer software (CAD) to model and verify their designs.

What’s my point?  I believe that we are thinking about “visualizing” software the wrong way.  CASE tools in the early ’90s are proof of this, and I would say that UML is soon to follow the same fate.  Why?  Because, in my opinion, UML’s stickmen and other funny-looking symbols do not visualize the design of software, nor the verification of software – it is simply “Lost in Translation.”

How can we design software on any scale if we can’t model it or visualize it with 100% fidelity?  The answer is that we can’t.  The closest we can get to modelling or visualizing software is using the same text editor we have been using for over 50 years and our favourite programming language – ergo, the source code is the design.  Certainly a favourite topic of Jack Reeves.  Btw, I happen to concur, which is also the fundamental problem.  Visualizing 10,000 lines of source code in any programming language is (far and away) beyond any of us mere mortals, who are good for about 7 ± 2 objects at any given moment in time.

It is on this basis that I feel we are way (WAY!) off the target path for Software Engineering, and it is the root cause of why there is no such thing today as software engineering.  We have very poor approximations of Software Engineering, still using, comparatively speaking, the equivalent of the Stone Age hammer and chisel.  Yet it is the year 2008 – what’s wrong with our industry?

How should we visualize software, both the design and (mathematical) verification (i.e. proof) of the runtime executable?  How will visualizing software improve Software Engineering?  How can we make the industrialization of software happen?  Subject of my next post.

Saturday, 03 May 2008 14:57:29 (Pacific Daylight Time, UTC-07:00)  #    Comments [2]
# Sunday, 27 April 2008

OK, I promised I would be good.  At the beginning of this year, I made myself a promise not to be a cynical “you know what” when it comes to the world of software development.  Lord knows I have seen my fair share of Dilbertesque wackyisms over the last 17 years in this biz.  Heck, I even contributed back to the industry with my own open source project.  I felt it was giving back to the software world that has been so kind to me, but then again maybe it is some form of redemption.  I don’t know.  Maybe that is why I have had only one post so far this year.


But… there are still some things that defy any sort of rational, logical explanation whatsoever.  For example, I am a frequent flyer (~25,000 miles per quarter) and so I keep pretty close track of those miles in my Aeroplan account.  Last week, I got back from a business trip to LA and Medford, Oregon.  Oregon is truly a beautiful place, but not as beautiful as the Sunshine Coast, where I live – but I am biased.


So, as usual on Friday, I pop over to the Aeroplan site and am greeted with the following web page.





Now, I know from time to time there are outages, and maybe one would last an hour or so, ’cause how serious could it be?  Pop in a new drive, slide a new web server into the farm, a switch failed, a power supply failed – but hey, even those would not actually cause an outage, ’cause as any data center system engineer knows, these are all “hot swappable” in the year 2008.  Well, it is Sunday at 9pm and the outage is still in effect.  Now what in the heck would cause an outage of a fairly heavily trafficked web site for over two days?  I can’t imagine.


Note the site says, “Following a routine maintenance procedure, we experienced a system failure.”  A system failure?  Folks, this is a catastrophe of biblical proportions in web terms for the year 2008.  A rip in the space-time continuum has occurred over there at Aeroplan.  No halfway decent web site goes down for two days.  Even my own rinky-dinky web log here, which I host on an ancient Pentium III 600 MHz piece of you know what, has not seen 2 days of outage – ever.  Even when my power supply blew up and I had to shoe-horn the ol’ motherboard into a brand new case, the outage only took 6 hours.


So what the heck could have happened?  I can only guess that somehow the entire data center melted down, there is no backup of the source code, and somewhere someone, or a group of people, are madly waiting until Monday morning to get into a bank’s safety deposit box for the latest tape backup.  Like I just can’t imagine what it could be.  Can you?


What will be really surprising to me is if Monday morning comes and Aeroplan is still not up.  If that is the case, I should call Aeroplan – I will host their site for a mere $10,000 per month (cheap!) and I guarantee that their site will never be down for longer than 24 hours, or they can have all of their money back.  How about that, Aeroplan, do we have a deal?  Oh, by the way, I might upgrade my PIII to host it on :-)


But seriously, man, you are giving us professionals in this biz a bad name for not having your sh*t together.  There is absolutely no excuse whatsoever, in the year 2008, for your main business, i.e. your web site, to go down for 2 days!  Either you do not know what the heck you are doing or you just don't care, Mr. Aeroplan – in either case, if you did not "own" my aero miles, I would switch to a new provider in a flash for such blatant shoddiness.


Updated Monday April 28 - Aeroplan manages to get their web site back up and running with little explanation.

However, I still see that my Aeroplan miles I collected from the previous week have not been updated on the site.  We will see if my Aeroplan Miles were affected or not...

Sunday, 27 April 2008 20:43:52 (Pacific Daylight Time, UTC-07:00)  #    Comments [0]
# Wednesday, 30 January 2008

Official website: http://globalsystembuilder.com   Download source from CodePlex: http://www.codeplex.com/gsb

Wednesday, 30 January 2008 11:48:57 (Pacific Standard Time, UTC-08:00)  #    Comments [0]
# Friday, 04 January 2008

I am working on a new user interface for my web-based IDE, called Global System Builder.  It will use a 3D map-based user interface.  So how does a 3D map-based user interface work with an Integrated Development Environment?  You will see in the following posts.  First step is the evaluation.


I lived in Calgary, Alberta, Canada for quite a few years and know the downtown area quite well.  One of its famous landmarks is the Calgary Tower. So I wanted to see how the two major (free) players compared to the real thing (above).

Check this out: Google Earth on the left and Virtual Earth 3D on the right.  Hmmm… interesting.  Google Earth’s picture is a drawing.  Now, I have played with SketchUp and it is incredibly powerful for modelling in 3D – including the interior of a building.  However, that is not the case with the Calgary Tower here.  You may want to look at Google’s 3D Warehouse, as they have a nice inventory of 3D objects.

Now, Virtual Earth’s image on the right is interesting, as the software used by MS to gather and make the 3D images got messed up a bit on the Calgary Tower.  However, if you look past that and compare the background buildings in the images, Microsoft’s sure looks more realistic.

Microsoft has also acquired a 3D modeling tool like Google’s SketchUp called 3DVIA.  It appears to have most of the same capabilities as SketchUp, but I need to spend more time with it.  And similarly to Google’s warehouse, Microsoft has a way to place your models on Virtual Earth.

So how do the technologies compare?  Well, Microsoft has opted for a web-based approach using a JavaScript control that you can play with live and see the source.  Google Earth is a thick-client exe, and its SDK is actually a COM API with a C# .NET wrapper that you can get – though you will need a Subversion client like TortoiseSVN in order to get the source.

Having developed against both, I must say that the Google thick client is much smoother and quicker than Microsoft’s web-based client.  Meaning that when you zoom in, rotate, pan, etc., and play around with the 3D features, Google Earth’s movements are much smoother, and it seems to download and render much quicker than Virtual Earth.  Now, this could be my internet connection speed to the data centers, the number of hops or… a whole bunch of things, but it still leaves me with this impression.
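Rather than rely on impressions, one could time it.  Below is a minimal harness sketch – the two `fetch_*` functions are hypothetical stand-ins (simulated delays, not real Google Earth or Virtual Earth calls) that you would replace with actual tile or scene requests to each service.

```python
import random
import statistics
import time

def time_operation(op, trials=20):
    """Run a callable repeatedly and summarize its latency in milliseconds."""
    samples = []
    for _ in range(trials):
        start = time.perf_counter()
        op()
        samples.append((time.perf_counter() - start) * 1000.0)
    samples.sort()
    return {"median": statistics.median(samples), "worst": samples[-1]}

# Hypothetical stand-ins -- replace with real requests to each service.
def fetch_google_scene():
    time.sleep(random.uniform(0.001, 0.003))  # simulated download + render

def fetch_virtual_earth_scene():
    time.sleep(random.uniform(0.002, 0.006))  # simulated download + render

if __name__ == "__main__":
    for name, op in (("Google Earth", fetch_google_scene),
                     ("Virtual Earth", fetch_virtual_earth_scene)):
        stats = time_operation(op)
        print(f"{name}: median {stats['median']:.1f} ms, "
              f"worst {stats['worst']:.1f} ms")
```

Running it from a couple of locations (home, office) would help separate “my connection and hop count” from a genuine difference between the services.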

Other than that, there is a very specific feature in one product that is not in the other which will make the decision for me.  Wonder what it is?

Finally, a company called 3Dconnexion has developed a 3D mouse called SpaceNavigator that can fly around and through these 3D worlds and models.  I have one on order and can’t wait to try it out!  If you want to see what it can do, have a look at this interesting “fly by:”  


I did look at NASA’s World Wind and it is really impressive, but I have not had enough time to evaluate whether it meets my needs or not.  One immediate shortcoming is the inability to draw and import a 3D model, or so it seems…

Still trying to figure out how a 3D map-based user interface will work with a programmer's IDE?  Stay tuned.

Friday, 04 January 2008 15:57:04 (Pacific Standard Time, UTC-08:00)  #    Comments [0]
# Wednesday, 26 September 2007

In Part 1, I briefly introduced the guide to the Software Engineering Body of Knowledge (SWEBOK) that was published in 2004: “The IEEE Computer Society establishes for the first time a baseline for the body of knowledge for the field of software engineering…”  We will come back to the SWEBOK in a future post, as this post is about how to qualify for “professional” league play.


In Part 1, I discussed software engineering as being a team sport.  This is nothing new as far as I am concerned, but I am still amazed when a multi-million dollar software development project is assigned to, and executed by, a newly assembled team.  This team has never played together, and quite likely consists of everyone from shinny players to NHL stars and everything in-between.  Now imagine that this team is somehow “classified” as an amateur Junior “B” league hockey team and their very first game is to play a real NHL professional hockey team.  What is the likelihood of the B team winning?  Don’t forget, the B team has not played together as a team yet.  Their first game is their first time playing together – and it is against an NHL pro team.  Did I mention that just before game play, the team figures out that they are missing their centre, who also happens to be the captain of the team?  Again, what is the likelihood of success?


Of course, this hockey scenario would never happen in real life, but it certainly happens in our software development world, where teams (assembled or not) get assigned software development projects that are way beyond their software engineering capabilities to even have a chance at successful delivery.  I see it every day.  And to varying degrees, I have seen it over the past 16 years of being in the software development business.  I have been fortunate, as some of the dev teams I have worked on are at “professional” league play, where all team members have 10+ years of software development engineering experience.  Aside from work being fun on these teams, our success rate for the design and delivery of software on time and on budget was very high.  Our motto was: give us the biggest, baddest dev project you got – c’mon, bring it on!


However, most teams I have worked on have been a mix of shinny players to pros.  Our success rate was highly variable.  And even more variable (chance has better odds) if the model is assembling players from the “resource pool.”  Some of the shinny players have aspirations of becoming pros and take to the work, listening to the guidance of the pros.  Other shinny players are finding out for the first time what it means to try out in the pro league and are questioning whether they are going to pursue their career in software development.  It’s a hard road, being unindustrialized and all.


There are shinny players (and pro posers) that try real hard but simply don’t have the knowledge or skills (aptitude?) to play even at the shinny level, for whatever reason.  This is especially difficult to deal with, if one or more of this type of player happens to be on your team.  It is the equivalent of carrying a hockey player on your back in the game because s/he can’t skate.  Never happens in the pro hockey league, but amazingly often in our software development world.  If our software development world was more “visible”, you would see in every organization that develops software of any sort, some people wandering around carrying a person on their back.  It would be kind of amusing to see, unless of course you were the one carrying… or being carried.


That is the irony of what we do as software developers.  It is nowhere near as “visible” as a team (or even individual) sport where everyone can visibly see exactly what you are doing.  And even if people could see what you are doing, it still may not matter, because to most people, other than software developers, software engineering is a complete mystery. 


So how does one find out what level of league play you are at?  One problem in our software development world is that we do not have levels of team play.  We are not that mature (i.e. industrialised) yet.  Well, some would say that the CMMI has levels of play, and I would agree, but I am talking about the individual level here first; then we will discuss what that means when you qualify to be on a team (or even the remote possibility of finding a team that has the same knowledge and skill level you do).  Another way to determine your knowledge and skill level is through certifications.  There are several ways to get certified.  Some popular ones are the Certified Software Development Professional and the Open Group's IT Architect Certification Program.  Other certifications are directly related to vendors’ products, such as Microsoft’s Certified Architect and Sun's Certified Enterprise Architect.


For this series of articles, I am looking at level of play as being a “professional software engineer.”  I firmly believe that software development is an engineering discipline, and it appears that there is only one association in the province I live in that recognizes this: the Association of Professional Engineers and Geoscientists of BC (APEGBC).  The designation is called Professional Engineer, or P.Eng.  “The P.Eng. designation is a professional license, allowing you to practice engineering in the province or territory where it was granted.  Only engineers licensed with APEGBC have a legal right to practice engineering in British Columbia.”  Of course, this may be slightly different depending on your geographic location.


Note how this is entirely different from any other certification – it is a professional license giving you a legal right to practice software engineering.  The term Professional Engineer and the phrase “practice of professional engineering” are legally defined and protected both in Canada and the US.  “The earmark that distinguishes a licensed/registered Professional Engineer is the authority to sign and seal or "stamp" engineering documents (reports, drawings, and calculations) for a study, estimate, design or analysis, thus taking legal responsibility for it.”


Ponder this for a while: what would it mean in your software development job right now if you were a professional software engineer and were legally responsible for the software that you designed?  How would that change your job today?  I know it would change mine.


All of this is not news to practicing electrical, electronic and all of the various other engineering disciplines, as this has been standard practice for years, even decades.  Yet it is seemingly all new to us software professionals.


Let’s take a look at the requirements for applying for P.Eng., specifically related to software engineering.  These two documents are: "2004 SOFTWARE ENGINEERING SYLLABUS and Checklist for Self-Evaluation” and “APEGBC Guide for the Presentation and Evaluation of Software Engineering Knowledge and Experience.”    


From the Software Engineering Syllabus:  “Nineteen engineering disciplines are included in the Examination Syllabi issued by the Canadian Engineering Qualifications Board (CEQB) of the Canadian Council of Professional Engineers (CCPE). Each discipline examination syllabus is divided into two examination categories, compulsory and elective. Candidates will be assigned examinations based on an assessment of their academic background. Examinations from discipline syllabi other than those specific to the candidate’s discipline may be assigned at the discretion of the constituent Association/Ordre. Before writing the discipline examinations, candidates must have passed, or have been exempted from, the Basic Studies Examinations.”


That’s right – exams.  While I have 16 years of professional software experience, I may end up having to write exams.  I wrote 17 exams in a postgraduate Software Engineering Management program, so what’s a few more?  I say bring it on.  Oh yeah, did I mention I am applying to become a professional software engineer?  I want to work on teams near or at “professional” league play.  Practice what you preach, eh?  I will be documenting the process and my progress through this blog.


Open up the Software Engineering Syllabus PDF and have a look for yourself.  Can you take your existing education and map it to the basic studies?  How about Digital Logic Circuits?  Or how about discrete mathematics – remember your binomial theorem?  Now look at Group A.  Unless you are a very recent Comp Sci grad, it is unlikely you took Software Process – so perhaps exams for everyone!  Who’s with me?!


While I am having a little fun here, no matter what, it is going to be work.  And that’s the point: you don’t become a professional engineer overnight.


In addition to the educational requirements, have a look at the Presentation and Evaluation of Software Engineering Knowledge and Experience PDF.  You need a minimum of 4 years of satisfactory software engineering experience in these 6 main software engineering capabilities:


  1. Software Requirements
  2. Software Design and Construction
  3. Software Process Engineering
  4. Software Quality
  5. Software Assets Management
  6. Management of Software Projects

Some capabilities are further decomposed into sub-capabilities.  To each capability area can be associated:


  • the fundamental knowledge or theory, indirectly pointing to the model academic curriculum of Software Engineering,
  • the applicable industry standards,
  • the recognized industry practices and tools,
  • and a level of mandatory or optional experience on real projects. 

The document then defines these capabilities and sub-capabilities as to what they mean.  Finally, the document provides a suggested presentation of experience and an example of how to lay out your projects.  It seems straightforward enough, but when you sit down and actually go through it – remembering the projects you worked on, what capabilities were used, and referencing them to the sub-capabilities – it can take a while.


While I won’t go through all of the details of the other requirements, which you can read in this 25-page application guide, two other items stand out: one is writing the Professional Practice Exam (in addition to attending the Law and Ethics Seminar), and the other is references.


The Professional Practice Exam: before being granted registration as a Professional Engineer, you must pass the Professional Practice Examination.  It is a 3-hour examination consisting of a 2-hour multiple-choice section and a 1-hour essay question.  The examination tests your knowledge of Canadian professional practice, law and ethics.  There is a (large) reference book that you need to study in order to prepare for the exam.


The reference form is interesting in the sense that the Association requires that referees be Professional Engineers with first-hand working knowledge of the applicant’s work, and that the applicant must have been under suitable supervision throughout the qualifying period.  I don’t know what your experience has been, but in my 16 years of being in the software development business, I have only worked with two professional engineers.


One more specific aspect I would like to point out is the APEGBC Code of Ethics.  Its purpose is to give general statements of the principles of ethical conduct so that Professional Engineers may fulfill their duty to the public, to the profession and to their fellow members.  There are 10 specific tenets, and while I understand and appreciate each one, there is one in particular that is very apt to the state of software engineering today:


(8) present clearly to employers and clients the possible consequences if professional decisions or judgments are overruled or disregarded;


You know what I mean.


This sums up the application process for becoming a professional software engineer.  As you can see, it is a considerable effort and can take 3 to 6 months to acquire your license.  However, the main benefit is that it tells employers that they can depend on your proven skills and professionalism.  The main benefit for me is to use it as a qualifier for any new software development team I join in the future.  My goal is to work on teams that are at the “NHL” level of play.


In Part 3, we are going to dive into the SWEBOK for an explanation of what the guide to the Software Engineering Body of Knowledge is and how, through this knowledge and practice, we as software professionals can assist in the industrialization of software.

Wednesday, 26 September 2007 20:09:21 (Pacific Daylight Time, UTC-07:00)  #    Comments [3]
# Thursday, 13 September 2007

“In spite of millions of software professionals worldwide and the ubiquitous presence of software in our society, software engineering has only recently reached the status of a legitimate engineering discipline and a recognized profession.”


Software Engineering Body of Knowledge (SWEBOK) 2004.



“Software industrialization will occur when software engineering reaches critical mass in our classrooms and workplace worldwide as standard operating procedure for the development, operation and maintenance of software.”


Mitch Barnett 2007.


I had the good fortune to have been taught software engineering by a few folks from Motorola University early in my career (1994 – 96).  One of the instructors, Karl Williams, said to us on our first day of class, “we have made 30 years of software engineering mistakes which makes for a good body of knowledge to learn from.”  He wasn’t kidding.  A lot of interesting stories were told over those two years, each of which had an alternate ending once software engineering was applied.


Over 16 years, I have worked in many different software organizations: some categorized as start-ups, others as multi-nationals like Eastman Kodak and Motorola, and a few in-between.  I have performed virtually every role imaginable in the software development industry: Business Analyst, Systems Analyst, Designer, Programmer, Developer, Tester, Software/Systems/Technical Architect, Project/Program/Product Manager, QA Manager, Team Lead, Director, Consultant, Contractor, and even President of my own software development company with a staff of 25 people.  These roles encompassed several practice areas, including product development, R&D, maintenance and professional services.


Why am I telling you this?  Well, you might consider me to be “one” representative sample in the world of software engineering because of the varied roles, practice areas and industries that I have been in.  For example, in one of the large corporations I worked in, when a multi-million dollar software project got cancelled, for whatever reason, it did not really impact you.  However, if you happen to be the owner of the company, like I have been, and a software project of any size gets cancelled, it directly affects you, right in the pocket book.


I wrote this series on software engineering to show people in our industry what the real world practice of software engineering is, how it might be pragmatically applied in the context of your day-to-day work, and, if possible in your geographical area, how to become a licensed professional software engineer.  Whether you are a seasoned pro or a newbie, hopefully this series will shed some light on what real world software engineering is from an “in the trenches” perspective.


One interesting aspect of real world software engineering is trying to determine the knowledge and skill of the team you are on or maybe joining in the near future if you are looking for work.  While there are various “maturity models” to assist in the evaluation process, currently only a small percentage of organizations use these models and even fewer have been assessed.


Did I mention that software engineering is a team sport?  Sure you can get a lot done as “the loner”, and I have been contracted like that on occasion. I am also a team of one on my pet open source project.  However, in most cases you already are, or will be part of a team.  From a software engineering perspective, how mature is the team?  How would you tell?  I mean for sure.  And why would you want to know?


Software engineering knowledge and skill levels are what I am talking about.  Software engineering is primarily a team sport, so let’s use a sports team analogy, and since I am from Canada, it’s got to be hockey.  Real “amateur” hockey may be termed “pick-up” hockey or, as we call it in Canada, “shinny.”  This is where the temperature has dropped so low that the snow on the streets literally turns to ice – perfect for hockey.  All you need are what look like hockey sticks, a makeshift goal and a sponge puck.  I can tell you that the sponge puck starts out nice and soft at room temperature, but turns into a real hard puck after a few minutes of play.  The first lesson you learn as a potential hockey star is to wear “safety” equipment for the next game.


Pick-up hockey is completely ad-hoc with minimal rules and constraints.  At the other end of the spectrum, where the knowledge and skill level is near maximum maturity level, is the NHL professional hockey team.  The team members have been playing their roles (read: positions) for many years and have knowledge and skills that represent the top of their profession.


How does one become a professional hockey player?  It usually starts with that game of shinny, and over the years you progress through various leagues/levels until at some point in your growth it is decided that you want to become a professional hockey player.  Oh yes, I know a little bit about talent (having lived through the Gretzky years in Edmonton), luck of the draw, the level of competition and the sacrifices required.  The point being, it is pretty “obvious” when you join a hockey team what knowledge and skill level the team is at.  And if it isn't, it becomes pretty apparent on the ice in about 10 minutes whether you are on the right team or not – from both your and the team's perspective.


In the world of software engineering, it is not obvious at all if you are on the right team or not and may take a few months or longer to figure that out.  Why?  Unlike play on the ice where anyone can see you skate, stick handle and score, it is very difficult for someone to observe you think, problem solve, design, code and deliver quality software on time, on budget.  This goes for your teammates as well as the evaluation of… you.


Software engineering is much more complicated than hockey for many different reasons, one of them being that the playing field is not the self-contained physical world of the hockey rink.  The number of “objects,” from a complexity point of view, is very limited in hockey – in fact, not much more complex than when you first started out playing shinny, other than the realization that you should wear safety gear, a small rule book and a hockey rink.


The world of software engineering is quite a bit more complex and variable, particularly in the knowledge and skills of the team players.  It is likely that your team has a shinny player or two, an NHL player and probably a mix of everything in-between.  Without levels or leagues to progress through, it is more than likely – probably a fact – that the team is comprised of members at all different levels of software engineering knowledge and skill.  Why is this not a good thing?


This knowledge and skills mix is further exacerbated by the popularity of the “resource pools” that organizations favor these days.  The idea is to assemble a team for whatever project comes up next with resources that are available from the pool.  As any coach will tell you, at any level of league play, you cannot assemble a team overnight and expect them to win their first game or play like a team that has been playing together for seasons.  This approach just compounds the problem of a mixed-skill-level team by throwing them together and expecting a successful delivery of the software project on their very first try.  We have no equivalent of “try outs” or training camps.


And that’s a big issue.  People like Watts Humphrey, who was instrumental in developing the Software Engineering Institute's Capability Maturity Model, realized this and developed the Personal Software Process.  Before you can play on a team, there is some expectation that your knowledge and skill levels are at a certain point where team play can actually occur with some certainty of what the outcome is going to be.  I have a lot of respect for Watts.


So how do we assess our software engineering knowledge and skills?  In part, that is what the guide to the SWEBOK is about.  It identifies the knowledge areas and provides guidance to book references that cover those knowledge areas for what is accepted as general practice in software engineering.


The other assessment part is to compare our knowledge and skills to what is required to become a licensed professional engineer (P.Eng.).  We will do this first, before we look at the SWEBOK in detail.


I see licensing professional software engineers as a crucial step towards software industrialization.  I base this on my “in the trenches” experience in the electronics world prior to moving into the software development world.  The professional electronics engineers I worked with had a very similar “body of knowledge” – for electronics engineering – that had to be understood and practiced for 4 years in order to even qualify to become a P.Eng.


Most importantly, the process of developing an R&D electronics product and preparing it for mass production, which I participated in for two years a long, long time ago, was simply standard operating procedure.  There was never any question or argument as to what the deliverables were at each phase of the project, who had to sign them off, how the product got certified, why we designed it this way, etc.  Comparatively speaking, that’s what the guide to the Software Engineering Body of Knowledge is all about for our software industry: a normative standard operating procedure for the development, operation and maintenance of software.  Yes, it is an emerging discipline; yes, it has limitations; but a good place to start, don’t you think?


In Part 2, we are going to look at the requirements in British Columbia for becoming a licensed software engineer.  We will use these requirements to assess the knowledge and skill level to uncover any gaps that might need to be filled in the way of exams or experience.  If you want to have a look ahead, you can review the basic requirements for becoming licensed as a P. Eng., and the specific educational requirements and experience requirements for becoming a professional software engineer.


PS. Part 2 posted


PPS.  Happy Unofficial Programmer's Day!

Thursday, 13 September 2007 21:16:31 (Pacific Daylight Time, UTC-07:00)  #    Comments [0]
© Copyright 2009 Mitch Barnett - Software Industrialization is the computerization of software design and function.
