# Thursday, 17 November 2005
> "Our human civilization rests on the power of abstraction, insofar as the volume of specific cases handled by an abstraction is typically much greater than the overhead of the abstraction mechanism itself." (Charles Simonyi)

In part 1 of raising the level of abstraction, I discussed a computer-assisted business process management tool that gives Business Analysts the ability, through a modeling tool, to completely define an application integration solution.  This modeling tool sits on top of Microsoft's BizTalk Server integration server product. 
The modeling tool is actually a Domain Specific Language (DSL), which is used to configure a software factory schema for a particular application integration scenario.  This software factory schema is then used as input to a software factory template, which configures Microsoft's Visual Studio IDE to automatically generate (with minimal coding) and build the application integration solution.  Raising the level of abstraction this way increases programmer productivity through systematic reuse of pre-built components; for the customer, it shortens time to market, improves product quality, and introduces predictability and repeatability to an otherwise CHAOS process.  I call this software industrialization.
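To make the DSL-to-schema step concrete, here is a minimal sketch. The textual syntax, field names, and `parse_integration_dsl` function are all invented for illustration (the real tool is a visual modeler producing a proprietary schema); this just shows the shape of the idea: a small domain-specific description is parsed into a machine-readable schema that downstream tooling can consume.

```python
# Hypothetical sketch: a tiny text DSL describing an integration scenario is
# parsed into a "software factory schema" (here, just a dict). The syntax
# 'route <Name>: <Source> -> <Target>' is invented for this example.

def parse_integration_dsl(source: str) -> dict:
    """Parse route declarations into a schema dict."""
    schema = {"routes": []}
    for line in source.strip().splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        head, _, rest = line.partition(":")
        name = head.replace("route", "").strip()
        src, _, dst = rest.partition("->")
        schema["routes"].append(
            {"name": name, "source": src.strip(), "target": dst.strip()}
        )
    return schema

model = """
# Orders flow from the CRM into the ERP system
route Orders: CRM -> ERP
route Invoices: ERP -> Accounting
"""
schema = parse_integration_dsl(model)
print(schema["routes"][0])  # {'name': 'Orders', 'source': 'CRM', 'target': 'ERP'}
```

The point is that the analyst works at the level of routes and systems, never at the level of code; the schema is what feeds the factory template.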
What's my point?  DSLs are a key enabler for raising the level of abstraction, yet they are not well known in the software development world.  In my observation as someone who has been in the software development industry for 15 years, we typically still code (by hand) at a very low level.  My intent here is to raise programmers' awareness of DSLs, to help raise the level of abstraction for producing quality software products through programming with models.  To that end, here are some links to facilitate this awareness.
I have a lot of respect for Martin Fowler; he has written many great books.  Martin has also written an excellent paper called "Language Workbenches: The Killer-App for Domain Specific Languages?".  I would suggest it as a starting point for any programmer to grok what DSLs are and why we, as programmers, should be using them.
In Visual Studio 2005, Microsoft has added a DSL Toolkit (SDK), which is the same infrastructure used for their Visual Class Designer and Distributed Designers.  For the DSL Toolkit, there are samples that can be downloaded, along with an online virtual lab that lets you try it out for yourself at no cost.  Alan Cameron Wills has an excellent site of DSL FAQs (thanks Alan!).
As a final thought on why we should be raising the level of abstraction:
> The value of an abstraction increases with its specificity to some problem domain.
Thursday, 17 November 2005 00:21:54 (Pacific Standard Time, UTC-08:00)
# Wednesday, 24 August 2005
Before we continue with on-demand application generators, let's look at something completely different, but related: software size and effort estimation (read: cost estimation).
There are several formal methods for estimating software size, complexity, effort, and cost: Function Point Analysis, lines of code, tools like COCOMO and SLIM, and the most common technique, Wideband Delphi estimating, in which senior programmers, architects, and the like base their estimates on real-world experience while following a formal estimating process.
Why do we estimate?  Every customer I have come across asks, "How much?", shortly followed by, "And when can I get it?", and finally, "Can you make it a fixed-price estimate?"
It's like getting a building constructed: you need a blueprint to estimate a price for the construction, whether the blueprint is custom designed or "off the shelf".  Either way, the customer had to spend money acquiring the blueprint.  The blueprint establishes the scope: is it a house the customer wants, or does it turn out that an apartment complex or a skyscraper is what the customer really needs?  Constructing software is no different: you need a blueprint, and the blueprint is what we require in order to provide an accurate cost estimate.
If we had the use case scenarios written down at this stage, it would be much easier to come up with an accurate estimate.  However, the customer typically does not understand that the level of detail we need is far greater than what the customer has supplied.  Almost always.  The level of detail comes down to documented use case scenarios, usually a dozen to two dozen on smaller projects.  The quantifiable output is written documentation averaging 3 pages per use case scenario, which in our example means roughly 15 use cases x 3 pages each = 45 pages of use case documentation.  Based on 15 years in the biz, I can count on one hand the times I actually got enough detailed, documented use case scenarios from the customer to come up with an accurate cost estimate as-is.  For people who know me: I usually get requirements written on a cocktail napkin.  Funny, but not enough detail to be sure.
Geri Schneider and Jason Winters wrote an excellent book on working with use cases that includes a cost estimation model based on the number of use cases and a complexity rating for each one.  The model outputs effort estimates in person-days, with upper and lower bounds, broken down by project phase and iterations of phases.  You can then apply labor rates to get a cost, apply a contingency factor, and out comes the final price estimate.
You can calibrate the use case estimation model over time by comparing estimates to actuals.  You can adjust estimating factors for skill sets and culture, plus calibrate for the toolsets employed.  What's great about it is that it works!  My team and I have used it successfully on a number of projects of varying size and complexity.  After several iterations of comparing actuals to estimates, we had our estimation model calibrated to within a 10 to 20% differential between estimate and actual.
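The skeleton of such a model is easy to sketch. To be clear, the complexity weights, bounds, and calibration factor below are invented placeholders, not the book's published numbers; a real model would use the published weights plus your own calibration history.

```python
# Sketch of a use-case-driven effort/cost estimator. All constants here are
# illustrative placeholders, not the published model's values.

COMPLEXITY_WEIGHTS = {"simple": 5, "average": 10, "complex": 15}  # person-days

def estimate(use_cases, rate_per_day, contingency=0.15, calibration=1.0):
    """use_cases: list of complexity ratings, e.g. ['simple', 'complex'].
    calibration: multiplier tuned over time from estimate-vs-actual history."""
    effort = sum(COMPLEXITY_WEIGHTS[c] for c in use_cases) * calibration
    lower, upper = effort * 0.8, effort * 1.25      # rough bounds (placeholder)
    cost = effort * rate_per_day
    price = cost * (1 + contingency)                # final price estimate
    return {"effort_days": effort, "bounds": (lower, upper), "price": price}

# The 15-use-case example from above, with a hypothetical daily rate:
est = estimate(["simple"] * 5 + ["average"] * 7 + ["complex"] * 3,
               rate_per_day=800)
print(est["effort_days"])  # 140.0 person-days
```

Calibration is just a matter of nudging the multiplier (and the weights) each time you compare a finished project's actuals to its estimate.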
How does this tie in with on-demand application generators?  Next post.
Wednesday, 24 August 2005 04:06:06 (Pacific Daylight Time, UTC-07:00)
# Wednesday, 03 August 2005
David Frankel wrote an excellent article called "Software Industrialization and the New IT" that describes what software abstractions are and why we want to raise the level.  First we started with 1s and 0s, which is ultimately what the computer understands, but as David points out, there has to be a better way.  There was: assembly language (ha ha), then third-generation languages (3GLs), and then Model Driven Architecture (MDA), which represents the highest level of abstraction today.  As described in David's article, we continue to push the envelope with the next level of abstraction, Computer-Assisted Business Process Management (CA-BPM); Barry Varga and I have invented one such tool for the application integration world.
Ultimately, we want a drawing/modeling tool that allows a business analyst, or anyone who is not necessarily a programmer, to draw and describe the software to be built.  Since we are talking about a virtual world here, the construction process (i.e. writing the code) can be fully automated using code generators that know how to read the output format of the drawing tool and generate the solution to run on a target business process engine or server platform, thereby turning our incredibly labor-intensive and error-prone software development process into one that is predictable and repeatable.
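The generator half of that pipeline can be sketched in a few lines. The model format, the `generate_route_stub` function, and the emitted `send_to` helper are all hypothetical names invented for this example; the real output format would be whatever the drawing tool produces, and the real generator would target the process engine's artifacts.

```python
# Toy code generator: reads the modeling tool's output (represented here as
# a plain dict) and emits source text for a message-routing stub.
# Everything here is illustrative, not the actual tool's format.

def generate_route_stub(route: dict) -> str:
    """Emit a handler that forwards messages from source to target."""
    return (
        f"def handle_{route['name'].lower()}(message):\n"
        f"    # generated: forward {route['source']} -> {route['target']}\n"
        f"    return send_to('{route['target']}', message)\n"
    )

model = {"routes": [{"name": "Orders", "source": "CRM", "target": "ERP"}]}
code = "\n".join(generate_route_stub(r) for r in model["routes"])
print(code)
```

Because the generator, not a programmer, writes this code, every route stub comes out the same way every time: that is where the predictability and repeatability come from.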
Why don't these CA-BPM tools exist today?  There are several reasons, as discussed in earlier posts, but mostly it comes down to the newness of our industry compared to other engineering disciplines.  I would like to introduce you to a CA-BPM tool that can fully describe size and complexity in the application integration domain. 
The need for application integration arose from executing business processes across software applications, because no one application can do it all (unless it is SAP, right George? :-)  For example, when you order your computer from Dell, several business processes are executed: placing the order, the credit card transaction, procurement, inventory and order management, scheduling, back-orders, assembly, test, burn-in and, finally, delivery.  I may be missing a few steps, but you get the idea.  It may come as a surprise that a dozen computer applications/programs could be used in this end-to-end business process, with each application communicating data to one or more of the others.  Or at least trying to communicate data, which is where all the integration issues are.
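A minimal sketch of what the engine in the middle does, under the assumption that each application exposes one step of the process (the step names below are invented, not Dell's actual systems):

```python
# Each "application" is modeled as a function that receives the order
# message, does its work, and hands the message on. The engine just walks
# the process definition. Step names are illustrative only.

def place_order(order):    order["status"] = "placed";    return order
def charge_card(order):    order["paid"] = True;          return order
def schedule_build(order): order["status"] = "scheduled"; return order
def ship(order):           order["status"] = "shipped";   return order

def orchestrate(message, steps):
    """Run a business process: hand the message from one app to the next."""
    for step in steps:
        message = step(message)  # real middleware adds queues, retries, maps
    return message

result = orchestrate({"item": "laptop"},
                     [place_order, charge_card, schedule_build, ship])
print(result["status"])  # shipped
```

In a real integration, each arrow between steps is where message formats, transports, and failure handling have to be reconciled, which is exactly where the complexity lives.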
Application integration is considerably more complex, abstract, and error prone than straight application development.  The middleware engine controls message flow (i.e. data) and orchestrates a business process (or multiple processes) between these applications.  Application integration development is very difficult to understand: it is akin to trying to connect wires in a wiring closet that has thousands of color-coded wires, yet there is no wiring diagram with numbers or labels or any obvious meaning to the color codes.  This leads to a process of discovery that is long, painful, expensive, mostly trial and error, and with an end result that may fall below even the 20% success rate discussed in the CHAOS Study.
However, over time you eventually figure it out, and if you do it enough times you get good at it; the more you do, the more you see the patterns in how it is done and how to do it better each time.  This is how our invention evolved: after 25 application integration projects over a period of 4 years, Barry Varga and I arrived at BRIDGEWERX.  And by all accounts, it appears we were first to market with this invention.
Tomorrow we reveal our design pattern to show how we can model any application integration scenario that scales to any size or complexity.
Wednesday, 03 August 2005 04:06:06 (Pacific Daylight Time, UTC-07:00)
# Thursday, 28 July 2005
Why do most people today view software as a commodity product when nothing could be further from the truth?  As a software developer type, I could blame it on marketing, but that is not the whole story. 
Software is, to most people, completely intangible.  That is to say, nebulous.  The laws of physics seemingly don't apply to our world of software.  Software does not have a physical shape or form that one can readily see or touch, other than on a computer screen.  And even on the computer screen, you do not see the actual size or complexity of the software, because it isn't all on one screen.  In fact, you have no idea how many screens there are.  And even if you knew how many screens there were, that still gives you no indication of the size and complexity of the software program.  But once the computer is turned off, where did the software go?  Poof!  It's just like being at a magic show (and for some vendors of software, it truly is a magic show :-)
Hundreds of billions of dollars are spent on software development, and it permeates almost everyone's daily life, yet it mostly does not occupy a physical space like, for example, the Empire State Building.  The Empire State Building cost $43 million to design and construct, required 3,600 people and 7 million person-hours of effort, and took one and a half years to complete.  Most people can understand why once they see the size and complexity of the Empire State Building. 
However, in the software world, we are routinely asked to build structures of equivalent size and complexity with nothing more than a few hundred thousand dollars (at best) and a handful of people whose programming skills vary wildly (a topic for a future post), and, guess what, can you deliver that software to us next week?  Ridiculous, yet it happens all the time.  It is still happening today.  The status quo continues.
Until we have a way of describing software size and complexity in the form of an architectural drawing or structural blueprint that people can understand, we will continue to perpetuate software development as a massively labor-intensive, non-predictable, non-repeatable, error-prone process, and we will remain in the pre-industrialization world forever.
Next week we will look at a new way of describing size and complexity using a modeling tool that produces architectural blueprints (and code-generated solutions, as it turns out). 
However, tomorrow is Friday and time for Stupid Computer Tricks!
Thursday, 28 July 2005 04:06:06 (Pacific Daylight Time, UTC-07:00)
© Copyright 2010 Mitch Barnett - Software Industrialization is the computerization of software design and function.
