Monday, 18 May 2009
Unprecedented – A Two Year Software Guarantee!
As a software development professional of 18 years, I have been waiting for this day for a long time. Honestly, I thought it would never happen in my lifetime, but it did, and in a form I never would have guessed:
“The future of games development has been called into question after the EU Commission suggested developers provide a two year guarantee.”
Two year guarantee on… games? I would have thought that medical software, aviation/guidance software, or any other software application/system that could put people in harm's way would have been the first industries to institute guarantees against defective software.
Never did it cross my mind that it would be the gaming industry, but then again, after thinking about it, it makes perfect sense. Video games are a multi-billion dollar business targeted at mass consumers, a.k.a. the public. So when lots of people’s games don’t work, something has to give.
The key clause is, "two year guarantee on all games in the event that a bug or glitch is encountered preventing you from completing the game and/or ruining the experience."
As the article states:
“At present, retailers are not obliged to give a refund on a video game that has a bug or glitch that prevents a user completing a game. If the proposals become law, this could change as users would have the right "to get a product that works with fair commercial conditions".”
Note it says proposal, but I say great, it is about time! Why is it that only in the software industry do people and corporations spend trillions of dollars on a product, none of which is guaranteed? No, I don't mean delivery guarantees or warranty periods, I mean outright quality guarantees.
For me, as someone who develops software, I could not be happier. This marks the day that there is enough critical mass in the public to demand better quality software products. Really it means that software will need more engineering, as in software engineering.
“Software engineering is the application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software, and the study of these approaches; that is, the application of engineering to software.”
At age 50, I have had two careers. The first was in electronics engineering, where the application of a systematic, disciplined, quantifiable approach was never even questioned or talked about; it was simply expected, standard operating procedure, institutionalized. But when I began my software career in 1991, I was very surprised to find that in the software world there were debates going on, still today, over whether software development is engineering or art or science or…
Fortunately today, many universities are offering undergraduate degrees in software engineering and have defined the difference between computer science and software engineering.
This means that younger generations will be more exposed to software engineering rather than us “old guys” where there was no such thing called software engineering when we went to university.
I also believe that software engineers should be licensed as Professional Engineers (P.Eng. in Canada). In fact, in British Columbia, Alberta and Ontario, you can be licensed as a professional software engineer.
You can read an excellent paper, "Licensing Software Engineers," by Philippe Kruchten. From the article: “The only purpose of licensing software engineers is to protect the public.” I totally agree. Now, Philippe suggests that licensing software engineers who design and develop software that can kill is protecting the public. I agree with that as well. But I would take a bit of a broader definition, one that not only includes software that kills, but also protects the public from financial impact. That basically covers most business software, like ecommerce, banking, trading, insurance and any application that deals with money, and even extends to gaming software, which is what this article began with, and the proposed two year warranty.
Perhaps a little reflection is required to let what I am saying fully sink in. For any software development project that has any impact on the public, financial or otherwise, the software engineer responsible for “sealing” the blueprints should be a licensed software engineer. A licensed professional software engineer is bound by professional practice and code of ethics. That means that a licensed professional software engineer is legally responsible for the design of the software. Think about that. Aside from the responsibility, the way software is designed and engineered is going to change in our world. I can only be grateful for that and like I said early on, it is about time. Today is a red letter day!
Monday, 11 May 2009
Real Software Engineering
Over the last half dozen years or so, I have noticed a trend in our software engineering world toward collecting less and less data about what we do and how well we are doing it. This is contrary to, let's say, professional sports, where data collection (i.e. stats) is a well-known practice and pastime, and on the rise, whereas in our professional software engineering world it appears to be on the decline. I wonder why?
A history lesson first. A fantastic short paper called, “Lessons learned from 25 years of process improvement: The Rise and Fall of the NASA Software Engineering Laboratory” provides some real insight into a problem that now seems to be rippling through our industry en masse. I would say the Software Engineering Laboratory (SEL) was ahead of its time and foresaw some possible outcomes that may be coming true for some (most?) organizations in the corporate world.
The SEL collected software engineering metrics on process improvement from 1976 to 1991 and beyond. I am not going to summarize the lessons here; you can read the excellent paper for that, and if you were/are ever part of a real software engineering organization, none of the lessons learned should come as a surprise. But I think the real lesson to be learned from this paper, and from the amazing contributions to software engineering that came out of the SEL, is this: very few organizations are actually practicing real software engineering process improvement. In fact, I would suggest that the latest trends in software development lifecycles, like Agile, XP, Scrum, et al., are actually going the other way.
Allow me to explain where I am coming from; here is one of the lessons learned from the SEL: “Establishing a baseline of an organization’s products, processes, and goals is critical to any improvement program.” In most other industries, this is a given, including professional sports. Heck, even in non-professional sports, like your college track and field events, you establish your baseline for the 100 yard/meter sprint and constantly try to improve on it, as a simple example. Yet in the software engineering world, where is our baseline? We have embarrassing baselines, like spending millions and billions of dollars (how embarrassing, in my home country!) on software that is of poor quality or doesn’t even work. Maybe that is why we don’t measure ourselves?
We have “new” software development lifecycle methodologies that promote collecting no metrics on how well we are developing software. It’s astounding and disturbing, not only to me but to the businesses that are forking over millions of dollars for software development projects that seem to have nothing more than an ad hoc process of going wherever the project meanders, no matter how long or at what cost, without even validating whether the right problem is being solved in the first place.
How do I know this? I have been in the software engineering business for 18 years and had the privilege of experiencing many a software shop, ranging from multinationals like Eastman Kodak and Motorola to midsize companies and small start-ups, including my own, in several different industry verticals and locations in North America. I had the good fortune of being taught Software Engineering Management, a 2 year post graduate program from the University of Calgary, taught by Motorola University, where each of the instructors had many years of experience and stories about software engineering projects, both failures and successes and, most importantly, what the differences were/are. I have kept in touch with past employees/friends in the industry to hear interesting anecdotes over a virtual beer. And finally, I spend a lot of time researching and reading about software engineering in general.
And my conclusion is that we as an industry are (way) off track and seem to be heading further off track with the invention and adoption of new methodologies that are not based on (any) actual factual experience/metrics/historical data like the 25 years of solid software engineering experience that the SEL has produced.
Why do I believe that? I see it every day, in the several companies I come in contact with through my job. I don’t need any further proof than that. Even the simple metric of “establishing a baseline” of anything in your organization should be proof enough for you. What data have you collected for the software engineering projects you have undertaken, in any of the companies you have worked for? I bet it is a small percentage, if any at all.
Why is that? I think the SEL folks hit it on the head with one of their lessons learned: “It is difficult to make an engineering organization aware of the importance of software engineering to their mission.” I would substitute “any organization” for engineering organization. Software is everywhere and in everything. It is almost as ubiquitous as the telephone or power, yet we have no way to measure its quality, its effectiveness or how productive we are at designing and constructing it. In fact, I would say that any given organization is not even aware of how important a role software plays in their very own organization.
And we software engineers, developers, programmers, coders, craftsmen, or whatever we believe ourselves to be (I believe that software design and construction is an engineering discipline) are our own worst enemies. Behold, I give you programming.reddit.com, where almost 100,000 subscribers argue over which is the best programming language, ever, every day, year after year. Or CodingHorror.com, with 100,000+ subscribers, whose blog name, light-hearted or not, sends the wrong message to the masses that coding results in a horror show. Or even Hacker News… How would any organization looking to spend millions of dollars on software engineering to solve a serious business problem take any one of these seriously? Or would they just assume that the whole lot of us is represented by these blogs and news feeds? No wonder we have credibility problems. No wonder we don't collect stats!
So what do we do about it? Just like anything, it takes a lot of hard work and some critical mass for an organization to realize that software may be a big part of their business, and that maybe they should seriously consider looking at how software could be engineered for their business based on real software engineering principles and not on the latest and greatest fad of the month. I don’t know about you, but if I were to spend a million dollars on any software engineering project, I would want to be sure that it was on budget, on time, had some level of quality in it and actually solved the right problem. Establishing a baseline of what you are doing and how well you are doing it is the start of real software engineering.
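To make "establishing a baseline" concrete, here is a minimal sketch of what the very first step could look like: roll up a few historical project records into a productivity and defect-density baseline. The record fields and all the numbers are hypothetical, invented purely for illustration; the point is only that the arithmetic is trivial once you bother to collect the data.

```python
# Hypothetical historical project records. The fields and figures are
# made up for illustration; they are not from any real organization.
past_projects = [
    {"kloc": 12.0, "person_months": 30,  "defects_found": 96},
    {"kloc": 45.0, "person_months": 140, "defects_found": 520},
    {"kloc": 8.5,  "person_months": 18,  "defects_found": 51},
]

def baseline(projects):
    """Compute an organization-wide baseline: average productivity
    (KLOC per person-month) and defect density (defects per KLOC)."""
    total_kloc = sum(p["kloc"] for p in projects)
    total_pm = sum(p["person_months"] for p in projects)
    total_defects = sum(p["defects_found"] for p in projects)
    return {
        "kloc_per_pm": total_kloc / total_pm,
        "defects_per_kloc": total_defects / total_kloc,
    }

stats = baseline(past_projects)
print(stats)
```

Once numbers like these exist, every new project can be compared against them, and "are we improving?" becomes a question with an answer instead of an opinion.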
Tuesday, 14 April 2009
Ola Bini wrote an interesting article called, “Languages Should Die.” I was hoping that this was really the case, but actually it was the opposite. I believe it is this type of thinking (i.e. language proliferation) that has our software development industry in trouble:
“No existing language will die in a long time. But we need new ideas. New fresh ways of looking at things. And new languages that incorporate what other languages do right. That’s how we will finally get rid of the things we don’t like in our current languages.
You should be ready to abandon your favorite language for something better.”
Define better. Define do right. I feel we reached good enough long ago with C or Smalltalk. Heck, Smalltalk is one of the simplest languages there is: the entire language syntax can fit on a postcard. What more do we want? Oh sure, there are some small technical issues, as with all languages, but technical deficiencies are not what did in Smalltalk; it was marketing. But I digress.
Wikipedia says there are over 560 programming languages. Sure, they are not all in use today, but how many programming languages do we really need? How many written and/or spoken languages (i.e. English, Spanish, German, etc.) do you know well? At an expert level? How many people do you know who are expert level in 2 or more spoken and/or written languages? How about 5 or more? How long do you think it would take you to learn 5 spoken/written languages at an expert level? Apply your CoCoMo model to that!
And the same goes for AJAX frameworks like jQuery, MooTools, Prototype, etc.
It is no wonder to me that we cannot get any real work done, as we are all busy learning existing or new languages, frameworks, etc. It is no wonder to me that business is getting pissed off at developers in general, because everything takes too long, is too error prone and, worse, does not meet their requirements.
Ok, my point is not to be some old stodgy dinosaur (yeah, I just turned 50, but I have a 6 year old daughter who can play Lego Star Wars better than I can). I am not trying to stifle innovation, but let’s be smart about what we are trying to innovate. Here is an example from my neck of the woods. I have spent a bunch of time learning C# since 2000 and kept up with the versions. Recently, I went to PDC and checked out all of the new C# 4.0 features. One of them introduces a dynamic type to what is otherwise a very statically typed language. I used (Iron)Python for my dynamic needs, and now some of that is in C#. Is that good, bad, better, worse or what? Damned if I know; it is yet another thing I have to figure out as a software engineer. Oh, and I really liked F#, a brand new functional language. So how many days, months or years do I need to invest in learning F# to become an expert? And most importantly, why would I do that? Other than the coolness factor, admittedly really cool to me personally, what possible business value or real world application does it have that would cause me to use it? Answer: none.
See what I mean? As Ola says,
“So - you should go out and fork your favorite language. Or your least favorite language. Fix the things that you don’t like with it. Modify it for a specific domain. Remove some of the cruft. Create a new small language. It doesn’t have to be big. It doesn’t have to do everything - one of the really cool things about functional languages is that they are often have a very small core. Use a similar approach. Hack around. See what happens.”
Give me one sound business reason why this would be a good thing to do. You want a simpler language? We have that already; see the Smalltalk example above. You want a better or a “right” programming language? You had better have some real definitions of what those mean, have identified real shortcomings (not just some syntactic sugar) in the other mainstream languages, and your proposed improvement had better be something like 10X, or why do it?
I feel that we already have (more than) enough programming languages to choose from, let alone the frameworks and batteries that come with them. We software developers/programmers/engineers seem to be our own worst enemies, adding more and more complexity to a domain that is already complex enough. What are we really doing to reduce the complexity instead of adding more? Adding another language like F# to my skill set (which I would love to learn personally) has absolutely no business value for me or my customers in my world of ecommerce web applications.
In my software engineering world, I am looking at every angle to reduce complexity. It is simply a matter of numbers: less is better. If I can reduce the number of programming languages, frameworks, integration points, executables, assemblies, etc., the solution becomes simpler, costs less, takes less time to deliver, and is easier to change and maintain, and therefore represents the best business value to the purchaser of the custom developed software.
However, these days it feels like I am in a minority in our software industry, as we proliferate everything, including software development methodologies, to the point of insanity. I am concerned that this is happening to programming languages (and everything else in our software development industry) as well. Just like the financial market situation we are in now, I wonder when the reckoning will come for our industry. For all of our brain power, we seem to be following the same path. What is it going to take to divert it? When will the pendulum swing the other way, to favor economic sense (i.e. proven software engineering principles) instead of the crazy proliferation of anything in the name of continuous improvement?
As Brad Cox says in “Planning the Software Industrial Revolution”: “I only wish that I were as confident that the changes will come quickly or that we, the current software development community, will be the ones who make it happen. Or will we stay busy at our terminals, filing away at software like gunsmiths at iron bars, and leave it to our consumers to find a solution that leaves us sitting there?”
Wednesday, 08 April 2009
I may be getting old, but I don’t think I am out of touch. My sanity as a software development professional seems to be tested daily by our industry’s predilection for Silver Bullets. The latest, it seems, is Scrumban. My wife can’t stop laughing when I say it to her.
I have not read the book, so I have no opinion on it. I have read some of the author’s blog material. Seriously, I mean no disrespect to Corey Ladas at all, but when I read the material, I can’t help but think that it was written by someone with an MBA. When I read this article, I think I get what he is saying, but I swear it is written in a different language than the software engineering world I live in.
Regardless of what marketing terms are used, the reality is that software development is always this: understanding the requirements, creating a design, implementing the design, and testing the design and implementation to ensure it meets the requirements. Requirements, Design, Code and Test. It always has been and always will be, no matter what other fancy (marketing) words are used to describe it.
With respect to very large software projects, I understand the label Feature Crews, but to me, this is nothing more than classic functional decomposition at work, but with a new (fancy marketing) label.
The questions that the Agile, Scrum, Lean, give-it-a-name practitioners never seem to answer are the two most asked by the people who pay for software development projects: “How much is it going to cost, and how long is it going to take?” There seems to be no best practice in any of these methodologies for answering these fundamental questions, as the methodologies focus on very limited scopes of the project rather than the entire project. And quite frankly, that is my main beef with these Silver Bullets, because as Fred Brooks postulated 23 years ago, there is no such thing as a Silver Bullet in software development. In my opinion and professional experience, he is right.
Software development is an incredibly complex, massively labor intensive effort whose primary work product is lines of source code. People who write code day in and day out know this to be the truth. There is no hiding from it. We grind it out as best we know how, and love it. So when I am asked how much a software development project is going to cost and how long it is going to take, I apply a tried and true approach to answering what is a very, very difficult question. This is why you see software estimation models like the Constructive Cost Model, or CoCoMo for short. You can read the gory details of the model here. PDF alert! It is no surprise that counting lines of source code (or equivalent lines of source code) is the way to answer the tough questions of how long and how much. Dr. Barry Boehm had it figured out years ago. When I took Software Engineering as a post graduate course at the University of Calgary, this is what we were taught, and it has been consistent with what I have found in the field. So yes, I use CoCoMo to answer the ugly, gnarly questions. And there are several automated tools that implement CoCoMo’s model, even ones online.
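For a taste of how simple the core of the model is, here is the Basic COCOMO form sketched in a few lines of Python. The coefficients are Boehm's published values for the three project modes; the 32 KLOC project size is just an example figure, and a real estimate would use the fuller Intermediate/COCOMO II models with cost drivers.

```python
# Basic COCOMO (Boehm, 1981): effort = a * KLOC^b person-months,
# schedule = c * effort^d calendar months, by project mode.
COEFFICIENTS = {
    "organic":      (2.4, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded":     (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode="organic"):
    """Return (effort in person-months, schedule in calendar months)
    for a project of `kloc` thousand delivered lines of code."""
    a, b, c, d = COEFFICIENTS[mode]
    effort = a * kloc ** b
    schedule = c * effort ** d
    return effort, schedule

# Example: a 32 KLOC organic-mode project (illustrative size only).
effort, months = basic_cocomo(32, "organic")
print(f"{effort:.1f} person-months over {months:.1f} months")
```

The answer a model like this gives is only as good as your line-of-code estimate and your historical calibration, which is exactly why the baseline data discussed above matters so much.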
So what has happened to our software development industry that we need to keep reinventing the wheel in the name of continuous improvement? I think it is indicative of anything that is really tough to do. Everyone is looking for an easy answer or the next big thing. But as most things in life, the answer is already figured out. You just need to look in the right place, listen to wisdom and apply some common sense. Congratulations, you just made pro software engineer!
What’s my point? My point is twofold. Our software development industry is being run by marketing people and has gone insane. Ok, I half jest. I know over time the pendulum will swing back to the basic fundamentals of what software development really is. My other point is for the young aspiring software engineers. Kids, look at the real software engineering techniques that hold up to the test of time. These are the gems. This is what is real in our industry. CoCoMo is real, it works, and it is based on facts and historical proof. If you were to read only one book on software engineering, read The Mythical Man-Month, as it embodies what is really happening in our software development world, regardless of when the book was written. There is a reason why Fred Brooks earned the Turing Award. Your assignment is to understand why.
Monday, 09 February 2009
Man, it seems that every few years there is a new software development methodology that comes out and while not purported as the 2nd coming, it certainly has all of the fanfare of the latest and greatest.
Let’s see: there has been CMMI, TDD, Agile, Scrum, Lean, Waterfall, Continuous Integration, Spiral, Extreme Programming, RAD, MDD, YAGNI, Cowboy Programming, and the list goes on. I would like to add to the list an oldie but a goodie: Brute Force Development, or BFD.
I would suggest that BFD is the most widely practiced software development methodology in the world. In fact, I would claim that the majority of organizations and people use this methodology daily and have been since the inception of software development.
How do I know this? In the real world of software development, where the size and complexity of even the smallest projects (e.g. >5,000 lines of code) exceed the allocated budget and timeline, almost everyone resorts to brute force development in the end. Why? Because we have to. How else can we do it? When was the last time, as a professional programmer, that you actually finished your project/product on time and on budget? Did you do it working 40 hours per week? Honestly?
Typically we start out with the best intentions, but as the schedule starts to slip and the budget disappears at the rate that a 426 Hemi goes through a tank of gas, we drop into BFD mode. We try to do the impossible. Extra hours are burnt, features are slashed, quality goes out the window, and we brute force our way toward the impossible schedule.
Now, I am not complaining. This is just an observation having worked in many different shops, large and small, including my own start-up. We end up with BFD in the end.
Heck, even the guru himself (for whom I have nothing but respect), in his own post, "Building a Fort – Lessons in Software Estimation," made some pretty interesting slip ups. My favorite was: “I dropped a little piece of my laser level down the side of one of the footing holes, between the concrete form and the dirt, after I'd poured the concrete.” Oh Steve, think of all the things we have dropped in the software!
I will go out on a limb and say I have yet to see any evidence that we, as software architects, developers, estimators, etc., are actually getting any better at this. I have been doing this for 18 years professionally, and maybe I am dreaming, but it seemed simpler years ago. Not just that the requirements were simpler, but even from a technology standpoint. What I mean is that the output of software vendors who produce tools, programming languages and applications has grown (seemingly) exponentially during this time frame, and any solution I am involved in has way more moving parts. A lot of these moving parts are new, and unforeseen issues crop up well into the development cycle, where one vendor's library interfaces don’t seem to match what the documentation says, for example. And then we brute force it, to make it work.
Part of this is tongue in cheek, as there is another meaning to BFD that can be applied to programming methodologies. You can get a hint by looking at the blog category this was filed under. I sincerely don’t mean any disrespect to the authors and believers of these software development methodologies, but sometimes the “marketing messages” can be a little much and even downright embarrassing. For example, try explaining to your significant other what Agile and Scrum mean. What do you think the “business folks” are thinking when you explain it to them? Do they even care? I would hazard a guess that all they care about is how much it is going to cost and when they can start using the software. Btw, they are also thinking: it had better do what I want it to do for this amount of money... or else...
So the moment that things don’t go as planned, BFD kicks in. Whether you know it or not.
In 1994, before most of these methodologies and marketing names came into effect, I had the good fortune of taking a 2 year post grad course in Software Engineering Management at the University of Calgary in Alberta, Canada. It was taught by Motorola University, and one of the instructors, with 30 years' experience, had some awesome stories on how “yer doin it wrong.” The funny thing was, while we learned a great deal about software engineering (that’s the last time I write 17 exams in 2 years!), what we learned most was common sense and communication. In other words, how to tell your customer (ahem, the one paying your rate, salary, contract, or whatever) that we can’t write 100,000 lines of code in 2 weeks. The real methodology here, folks, is just called common sense.
I don’t think much has changed since then, as we are always fighting that battle. Developing software for any decent sized project (>5,000 lines of code) is really, really hard, maximally labor intensive and fraught with… well, you name it.
I can hear the Agile folks saying that their methodology is the one that mitigates this risk. While that may be partially true, how do you answer the top two questions asked by the customer: how long will it take, and how much will it cost? And our requirements list is just that, a one pager with bulleted high level feature items, where some of the bullets have two words explaining the requirement. Oh yeah, and at a fixed price. Ready to sign up? In the end, in order to make that deadline or not burn through your fixed cost, it's BFD, man. That’s the reality. And btw, could you not have come up with a better name? I mean, did you not know that Agile is Dead?
So what’s my point? Well, aside from having some fun with the BFD acronym, as with most things, there is some truth in it for sure. We have all done it, yes? I am sure that anyone who has written code for any length of time has done BFD. Which makes my newly minted TLA marketing buzzword an instant leader in the world of software development methodologies!
All kidding aside, maybe it is time to step back and look at some of the basics of any software development project. The very first, I would think, is answering the two most basic questions of any software development project: how long and how much. Do you have a predictable and repeatable way of doing that? How accurate is it? If you don’t, then you are likely to be doing BFD even before you write a single line of code, and none of those fancy software development methodologies will help you one bit. Know what I mean?
Remember, keep the rubber side down!
Tuesday, 23 December 2008
Goes to zoomii
In my last post, I asked the question, “Show Me a Good User Interface Design.” I asked simply because, given the length of time I have been in the software development business, sometimes you get so close to your work that you can’t see the forest for the trees. And I was certainly feeling that way. It is further exacerbated by the fact that the more you look at user interface designs, the less objective you become. I needed a fresh perspective.
Thankfully, Breton commented on my blog about zoomii and as soon as I saw it, I knew I was back in business. As I commented to Breton, zoomii reminded me of a user interface design I was involved with back in 1994 working for a company that developed banking applications using the Smalltalk programming language (oh, how I miss that programming language and environment, but that is another story). We developed a visual (read: physical) representation of a banker’s desk (by banker’s role) where there were forms and files in desk drawers that could be pulled out, a calculator on the desk, calendar, etc., you get the idea.
What I remember most about the user interface design was when we rolled it out to the bank employees: we had a person sit down in front of the interface, and she just started using it with no training! She said, “Oh, I get it, it’s just like my desk,” and proceeded to pull out one of the drawers, flip through the file folders and click on a form to be filled out. We were initially astonished by this, as we had developed a training course for the software, but the bank personnel did not need any training! It was amazing; pretty soon everyone wanted to try it out and was using it immediately, with no help from us.
Now the paradigm is back and even better. And what is that paradigm? I am sure there is a formal way of saying it that is probably in Jef Raskin’s book, The Humane Interface, but for me it is dead simple: it is a model of the physical world, designed and implemented in software as a zooming user interface.
Chris Thiessen, the one and only developer of zoomii, states on his web site: “…spending afternoons wandering the shelves. Happening across great books I didn't even know existed. But it's an experience I never found online.” It is so true. I used to frequent a computer bookstore in Calgary, Alberta that carried only computer books on its shelves. I would spend 2 to 4 hours on Saturday afternoons just browsing the shelves, picking up books, flipping through them and leaving the store with a pile of books. Over a few years, I accumulated over 80 books this way. Unfortunately, the bookstore succumbed to competition from the big box book store chains that moved into Calgary and is no longer. I sure miss it!
But it is back in zoomii! Let’s look at some of the user interface design elements. First of all, if you click on Categories, you get this view:
Note that as you hover the mouse cursor over the left hand categories, the right hand representation of the category on the book shelves is highlighted (white in this case). Brilliant! Very quickly, I could easily discern where the Computer book “shelves” were, so when I went back to the shelves, I could easily navigate (pan by holding the left mouse button down, zoom in and out using the mouse wheel) to the computer books and get the view below in a matter of seconds, with minimal mouse clicks/movements.
Then I can zoom into a shelf that is sorted by the author’s last name:
Notice the little left and right arrows on either side of the Computers category name. If you click on one of these, you navigate to the next shelf on the left or right.
When you find the book you want, you can click on it and get a detailed description of the book, plus the ability to look at what reviewers of the book had to say about it. Essentially, for the reviews, it takes you to Amazon.com. If you read zoomii's frequently asked questions, Chris tells us that he uses Amazon’s Associates Web Services to interact with Amazon’s data, refers the sales to them and gets a cut. Brilliant business model.
Chris also states that he stocks his shelves with the top ranked 25,000 books, and that if he used subcategories it could go to 100,000 or more. Chris is considering an optional view that would shelve Amazon’s entire book inventory. I hope he does, as I would be very interested in that – more books to browse!
I love the simple navigation of the site:
The basic navigation fits into the classic 7 ± 2 paradigm that our short term memory can handle, a far cry from the 60+ clickable items of navigation you get with "Office 2007 and the Killer Ribbon".
Clicking on the home button, produces this view:
Creating an account is dead simple, with no confusing user name versus email address. Simple, yet complete – something we truly lack in our software industry.
Another feature is the unlimited undo. Clicking on the back button in the browser basically retraces all of the steps (zooms, clicks, everything!) that you performed from the time you entered the site.
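The effect is the same as keeping a plain history stack of view states: every zoom, pan, or click pushes a state, and "back" pops one. A minimal sketch of the idea (illustrative only, not zoomii's implementation):

```python
# Hypothetical history stack -- the state values here are just
# placeholder strings standing in for real view states.

class NavigationHistory:
    def __init__(self, initial_state):
        self._stack = [initial_state]

    def record(self, state):
        # Called on every zoom, pan, or click.
        self._stack.append(state)

    def back(self):
        # Retrace one step; never pop the initial state.
        if len(self._stack) > 1:
            self._stack.pop()
        return self._stack[-1]

    @property
    def current(self):
        return self._stack[-1]
```

In a browser, the same effect typically comes from pushing each view state into the URL fragment so the native back button walks the stack one step at a time.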
The search capability is my only small quibble with the site. It is not as exact as I would like, but that is not Chris’ design; rather, the Amazon web service returns too many books that seemingly don’t match the search criteria. For example, when I type in “Expert F#”, I get 29 matches back, when really only 2 should have shown up (ideally it would have been one).
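If the web service itself can't be made stricter, one workaround is a client-side post-filter that keeps only titles actually containing the query phrase. A purely hypothetical sketch – the dict shape and the sample titles are made up for illustration:

```python
def filter_matches(query, results):
    """Keep only results whose title contains the query as a phrase,
    case-insensitively. `results` is a list of dicts with a 'title' key."""
    q = query.lower()
    return [r for r in results if q in r["title"].lower()]

books = [
    {"title": "Expert F# 2.0"},
    {"title": "Expert F#"},
    {"title": "Programming C#"},  # a false positive from the service
]
print([b["title"] for b in filter_matches("Expert F#", books)])
# -> ['Expert F# 2.0', 'Expert F#']
```

This trades recall for precision, which is usually the right trade when the user typed an exact title.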
I am sure I am missing other features, and I will certainly peruse Chris’ zoomii.js more to understand how he put it all together, but for now I am enjoying myself simply using the interface, more than almost any other web based interface I have used or designed on my own.
As someone who develops ecommerce applications at work, I can see this user interface design making online shopping at any “brick and mortar” store a lot of fun – exactly what the general public would want and enjoy, IMO. I can just imagine browsing through the “shelves” at BestBuy or Costco or FutureShop or… I could see how it would be applied in a B2B supply chain scenario as well, for electronic parts or manufacturing for example. It would seem the applicability is universal.
Kudos to you Chris for designing a top notch user interface that brings the joy back for someone that used to browse computer books for fun many years ago. I am even more impressed that you did this all by yourself!
Which really raises the question for our software development industry: are we all doing it wrong and Chris is doing it right?
Wednesday, 17 December 2008
I mean it, honestly! I have been programming for so long, have built so many apps, and have looked at and used hundreds of (web and desktop) apps, that I no longer know what a good user interface design looks like anymore. I have lost my objectivity.
Think of it this way. As a user, I have been using Outlook and Outlook Web Access (OWA) daily for I don’t know how many years, yet it (read: both) “feels” like a good user interface design to me. Maybe I have been “programmed” by using it so much that I just feel that way. Which is maybe why I dislike the ribbon, but that’s another story.
As a side comment, Jeff Atwood at Coding Horror wrote an article called, “Avoiding The Uncanny Valley of User Interface” saying, or at least I think it says from a summary point of view, that web applications should not try to mimic desktop applications. Clearly, I don’t get it. I believe the whole point to OWA is to indeed mimic the desktop (i.e. Outlook) as closely as possible so that I, the user, don’t have to “think” about the differences between the two email applications. In fact, I think I would be quite pissed, as a user, that the desktop email application I use everyday and the web mail application were so different that I would have to then learn two different ways of working with email, calendars, tasks, folders, etc. Don’t make me “think” about the differences, I just want to “use” the applications.
I also read most of the comments from Jeff’s blog, and a few made sense to me (see below), whereas I think most were offering up personal opinions which, while valid, don’t change the fact that my job, as a programmer, is to give my clients what they want. Oh sure, I have my own ideas, like many programmers, but after spending 17 years in the trenches talking to customers, it is clear to me that they are paying me money to get what they want – even if they don’t know what they want. “Do you get me, sweetheart?”
I really like Shane’s comment:
Yeah, that "Google Maps" web app thing that acts like a desktop app, that'll never work. People only want web apps that act like web apps damn it! And all the cool
AJAX features in Facebook/MySpace/Gmail, no one really wants those. They would prefer to wait for the entire page to refresh every time they make any kind of change. Or wait for a huge page to download just to make sure the page contains every bit of data that they could possibly want. Because they would prefer their "web apps" to act just like the crappy 1st generation web apps that were around before they even knew what the Internet was. Pfft.
Shane on December 17, 2008 05:55 AM
Or how about Daath’s comment – sounds like a Pragmatic Programmer to me:
For a software developer - yes.
For an average user - no. An average user simply doesn't care if it's real, doesn't search for differences, and for him/her it would be the best if the web application and the desktop application would behave the same, because it would be easier to get used to.
For a developer a software is close and personal, like another person for everyone. That's why it feels unnatural for him/her to use web apps that try to mimic desktop apps' behavior. But for an average user, web and desktop apps are nothing but utilities, things that have to be used in order to achieve a goal. They don't care, it's nothing personal for them. (Sorry for breaking your heart, you just have to realize this and move on with your life. :))
Daath on December 17, 2008 08:08 AM
And finally, Brian says it all:
What are you guys talking about! The purpose and intent of Ajax and RIA technologies is to enable web UI designers with the ability to do things that would be considered "closer" to desktop application operations than "traditional" web applications.
That’s because "traditional" stateless web application user interfaces sucked. What are web application expectation anyway? Type stuff... submit(postback)...wait...view result. I say rock on web ui designers! Give me drag-drop, give me background updating panels (event-driven updates). I am still waiting for some of those other crusty old desktop features like great undo/redo functionality and the ability to paste an image directly into an email body, but as they keep working the technology I am sure it's not far off.
For users, web apps accel because of collaboration and accessibility. Users dump outlook for web-based email so they can read their mail from home, work, school, wherever. Certainly not because Outlook's desktop user interface sucks. Users have had to trade off rich interaction for those benefits. Today, that trade off isn’t any where near as bad as it once was. Today many web apps simply rock. And that’s because of the energy and effort by many folks to bring rich (or desktop-like) interactions to the web. So let’s dispense with the noise that this is a bad thing.
Brian H on December 17, 2008 08:09 AM
Hallelujah, I say. I just don’t understand why Jeff would write such a blog post. Which brings me to my point, rather than hypothetically avoiding the uncanny valley, I want to hear about and see pictures of “good” user interface design and most importantly, why they are good. I mean convincingly good.
I work for an ecommerce company that designs and builds large scale ecommerce web applications. When it comes to good user interface design, our customers are predominantly always asking for two things:
- Can our customers easily find the product(s) they want with the minimal amount of mouse clicks? This is all about navigation and search.
- Can our customers easily buy the product(s) they want with the minimal amount of mouse clicks? This is all about conversion, i.e. turning “browsers into buyers” as quickly as possible.
The point I am making is that in my ecommerce web app world, there is a purpose to the user interface design, the key word being design. The ecommerce web app is “designed” to fit the requirements of the intended end users. We, as programmers, use a lot of interesting technologies, like AJAX, to fulfill these requirements, but the end user couldn’t care less what technologies or techniques are used to fulfill them.
Bad user interface design, to me, is the opposite of fulfilling these requirements in the context of what I do for a living. And if I put my customer hat on while shopping online, if I can’t find what I am looking for easily, I give up. As a programmer, I give up even sooner: when a web application is loaded with so many links, 5 navigation bars, and tons of “stickers”, I already know it is going to be really painful to find what I want, so I just give up – sooner. The consequence is that the ecommerce site just lost my revenue, which hits the bottom line. Everything else after that is irrelevant.
Another reason why I think Outlook for the desktop and Outlook Web Access for the web are so successful is because if you use the desktop app and then have to use OWA, the learning curve is almost zero, so it is instantly usable and adopted by the “user.” I.e. I can find what I want easily and I can transact my email easily – no think time differences between the two apps.
I can give another example of good user interface design coming from my 5 year old daughter, who plays web based games on the internet and locally on my computer (i.e. “desktop” games). She already knows that it is either going to be the arrow keys or the WASD keys to move around in the game. I only had to show her once and it is consistent with the many games she plays. Adult note – yes, I (or more correctly, my wife) limit the time and type of games she can play on the computer. Heck, she even knows how to hit the green “play” button in Visual Studio to run some of the XNA games I am working on, but that is another story.
One criterion that makes for a good user interface design, even if it was arbitrarily chosen, is consistency. Take the ubiquitous “File” menu: it is the same in the Outlook desktop application and OWA, plus many, many other desktop applications that share the File, Edit, etc. paradigm. It is a bit strange to me that not many web applications take this same approach. Why not? And I mean from a user perspective, not a programming one.
Of course, Microsoft broke this paradigm with the ribbon, but I think they were actually trying to solve a different problem than good user interface design; see Office 2007 and the Killer Ribbon.
So what makes a good user interface design? I am genuinely asking. Like I say, I am so close to it that I feel I have lost objectivity. Not completely, mind you, but I have a sense that our industry has moved off the mark somewhat, and after reading Jeff’s post, and some of the comments, there is a group of software folks that seem to have forgotten the basic “aesthetics versus function” argument. E.g. if the iPod were not so easy to use from a user perspective, it would not matter how cool it looked. Some may see it differently, but my ecommerce customers see it the same way. If the ecommerce site is “flashy” but no one can find or buy anything easily, then it is nothing more than eye candy, as opposed to an ecommerce business producing revenue.
So what makes a good user interface design? Show me one (links, pictures, descriptions, all good) and tell me why. Thanks!
Tuesday, 09 December 2008
This post is to answer several of the comments posted to my blog, Hacker News, Programming Reddit, and unClog - in no particular order. I tried to answer most if not all questions, but if I missed yours, it was not on purpose.
At the end of this post are my own answers to the productivity lost questions I have posed.
PJ says, “A bad craftsman blames the tool.” So PJ, if the tool is broken or is in need of serious repair, would a good craftsman ignore it?
Arun says, “…I believe you need to think outside the .Net world. …What's new is not always complex. The productivity in modern languages have actually improved dramatically. People release new features all the time. But then, you need to stop using the old tools.” Arun, I make my living as a .NET programmer and I would not call Visual Studio 2010, .NET 4.0 and C# 4.0 “old tools”, in fact they are so new they have not been released yet, except if you attended PDC.
Alecks says, “At age 17 I sit in programming class trying to find my path through the maze of the .NET framework, thinking to myself 'okay, so I can do it . . . but how does it work?” Alecks, hang in there buddy, one step at a time. Pick a few .NET classes (like the collection or array classes for example) and play with them to see how they work. Look at the MSDN examples or search online. Step through the examples in the debugger and inspect the variables, program flow, etc. Once you see how one or two of these classes work, then you can expand to others. One step at a time and you will be fine. Keep at it!
Sridhar says, “…try Zoho Creator creator.zoho.com which tries to provide on the web the ease of use you get in a VB6 environment.” Thanks Sridhar, I will give it a try, looks interesting.
Jeff says, “…I think it should be as easy to build apps for the web as it is for the desktop…” Bingo! Jeff was able to summarize my two posts in basically one sentence. Good job Jeff! Unfortunately, the reality is that we may be years away from making this happen.
Andrew says, “We know that code re-use is a wonderful thing, and its just a real surprise to hear someone argue otherwise. … Lastly, I'm not sure quite why you've got some of the highly passionate responses above, for sure its an opinion that differs from most people I've met, but you're not too far of death threat territory in some cases :P” Andrew, what makes you think that I am against code reuse? Where did I say that? Quite the contrary: almost 80% of my open source project is predicated on code reuse; I would never have been able to complete it otherwise. With respect to the passionate responses – well, I don’t think I would call them passionate…
Roger says, “As a developer you have to keep getting better tools, and get better at using those tools. …When I job interview developers, one of my questions is … if the candidate cares about their own productivity and actively seeks out new tools and techniques.” Say Roger, I think you missed the part where I am using the latest tools and editors, including a pre-release of Visual Studio 2010. I think your brain may have frozen when you saw VB6 in my post – don’t worry, I think most of us have the same problem with VB6.
Daniel Lyons says, “It's not just you. I don't know what's going on with these other guys but I feel what you're saying just as strongly as you do and I'm only 27. We're reaching levels of complexity where what we've been doing just doesn't work anymore. Especially with the web.” Daniel, nice quote, “where what we’ve been doing just doesn’t work anymore.” I say amen to that. And as I have said before, it is getting worse, not better, in my opinion.
Roe says, “You raise some good points, but the conclusion is weird: "to me, productivity is everything. Productivity. It is as simple as that." To me, delivering something of value to users is everything.” Hey Roe, I think you missed my point. It is already a given that what is delivered is going to have value to the “users”, and if not, you have bigger problems than productivity. If you can’t deliver to your customers in a timely manner, then no matter how much value the product has, it is worth nothing when the project is canceled because it is way over time and budget.
Mark says, “You should try ruby. Or python.” Yup, I have and in my article I state that I have tried so many languages over the last 17+ years, that I can’t keep track of them. I have been using IronPython and IronRuby, well, because I am a .NET guy. And in fact, the open source project I developed embeds IronRuby and IronPython as the scripting engine to program remote computers in real-time, but that’s another discussion.
Sixbit says, “…I think I've rediscovered your VB 6 experience of 1991. When I got into iPhone programming this year, and got the hang of using Interface Builder to rapidly prototype things, I really got a feeling that, this is how simple things should be in other frameworks, and have rediscovered my love of programming again.” Wow, sixbit, that sounds cool. I will have a look at that. I wonder how that extends, if at all, to general purpose web development, probably not…
Phillip Zedalis says, “I did not comment on your last post because I was still contemplating it - but yes I heard your point loud and clear. I also over time have felt a great discomfort with the growing 'complexity' of the libraries surrounding Visual Studio.” Phillip, wait till you get a load of Visual Studio 2010 and .NET 4.0 – more and more and more. I am not sure how it is any better. I used to be an early adopter of the Visual Studio environment, right back to Visual InterDev in 97 – anyone remember that environment? Now I just cringe at the extracurricular effort required to figure out all of the new things that have been added to the IDE and framework. I can’t even face going to any more PDCs, which, as a passionate programmer, were my favorite conferences; it is just too much. By the way, fantastic site Phillip!
Joseph Gutierrez says, “It makes you feel less a craftsman. More like a father at Christmas with the bicycle instructions, trying to put the damn thing together. I've started learning Lisp for this simple reason. Less imperative and more declarative programming. Stay with it. An increase in velocity is what you're looking for. Take a look at altdotnet news groups on yahoogroups. VB6 with OCX and ActiveX damn good things, but it seems to have dropped out of the mainstream.” Joseph, thanks for your comments, absolutely true on the “bicycle instructions”, problem is that each part is from a different manufacturer and some sizes are in metric and others in imperial. I hear you on Lisp – have been there, and may even try IronScheme, but still cannot see how I could build a web app using this technology. Btw, I love your snowflakes on your blog!
Buford Twain says, “The next big breakthrough in software development will come when someone makes things radically simpler for us poor programmers.” Buford, I agree, I just hope I get to see that in my lifetime...
Itay Maman says, “When you need two or more frameworks in the same application you have a problem: each one is imposing its own structure, and the two structures are usually in a conflict with each other.” Itay, you are too right, in my own application I have several frameworks and components and the fight is to structurally fit them together even though each one on its own has the functionality I want.
Mark Jones says, “Very interesting. As a hobbyist assembly-language programmer whom has taken courses in the latest Java, C, and VB disciplines, I could not agree with you more Mitch.” Bless your heart Mark! Assembly language was the last thing I thought I would see on my blog about web applications, but I totally get where you are coming from. I sure am tired of “syntax code”, particularly now that Microsoft has fully embraced XAML, it just keeps getting more and more verbose and making me less and less productive.
Evrim Ulu says, "We've faced similar problems in past, and the only solution we've found is to rewrite everything from scratch." Wow Evrim, this is pretty cool. I will indeed look at this in more detail. I also happen to agree with your approach by the way. It sure appears the only solution is to indeed rewrite it all from scratch. I totally get that, thanks!
“1. Why does GSB exist?
2. What is wrong with the normal way of using hosting services (e.g. FTP pre-tested application from localhost development to QA to production)?
3. How does this compare to the new Amazon EC2 or Windows Azure?
4. Does GSB actually have the minimum number of components to (a) minimize maintenance and issues?
5. How does Silverlight 2.0 with IronPython running in the browser change GSB? And
Forgot to mention, here is an idea for you..
Wow, great questions mycall, let me try and answer them. GSB exists because I wanted to program many remote computers in real-time, using dynamic languages, from a web browser. I may have 30 or 40 web apps that need maintenance around the world. This tool allows me to geocode the locations in the map program so that I have some physical frame of reference, and keeps all of the IP addresses, notes, etc. contained in one location. I can then remote desktop in and launch the “service component” of GSB, which contains the DLR and the IronPython and IronRuby interpreters that I access through my web browser via WCF web services, in both a REPL console window and a code editor. This allows me to make simple changes to code, scripts, etc. much more easily than a) trying to hook up Visual Studio remotely to 30+ web sites, which does not make you productive, or b) to answer your question 2, maintaining 40 sites the “traditional” way in localhost and promoting them via FTP, etc. – again, not too productive. And it is highly experimental, as mentioned here.
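The embedded-interpreter part of that design is easy to sketch. GSB actually hosts IronPython/IronRuby behind WCF services; the stand-in below is not GSB's code, just a rough sketch using plain Python's stdlib `code` module to show the idea of a persistent interpreter whose output is captured and shipped back to the browser:

```python
import code
import contextlib
import io

class RemoteRepl:
    """Illustrative stand-in for GSB's service component: a persistent
    interpreter session whose output can be returned over a remote channel."""

    def __init__(self):
        self._interp = code.InteractiveInterpreter()

    def run(self, source):
        # Capture stdout so the result can be shipped back to the caller
        # instead of printing on the server's console.
        buf = io.StringIO()
        with contextlib.redirect_stdout(buf):
            self._interp.runsource(source)
        return buf.getvalue()

repl = RemoteRepl()
repl.run("x = 6 * 7")                   # state persists between calls
print(repl.run("print(x)"), end="")     # prints 42
```

In GSB's case the `run` method would sit behind a WCF web service endpoint, and the browser-side console and code editor would post source strings to it.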
With respect to Amazon EC2 and Windows Azure, this is something completely different. With GSB, my IDE is “in the cloud”, whereas with Azure and EC2 the applications are “in the cloud”.
Yes, I do have the minimum number of components (I don’t need the Agent SDK, it is just used for the Robby the Robot demo), and almost all of the components are open source that I have reused in my application. My code is the glue code and the programming for the overall functionality, along with the ability to “reflect” assemblies on the remote computers so I can look at the custom class types and members I have at my disposal.
With respect to Silverlight and IronPython in the browser, the IronPython execution is actually local and not on the remote computer. I won’t get into the other productivity issues of trying to develop and debug a Silverlight application as that is another whole can of worms.
Finally, to answer about Michael Foord’s Silverlight in the browser, have a look at this web-based IDE round up.
Pfedor says on Hacker News, “The article is interesting, but I can't help but think that large part of the guy's problem is that he's not using Unix.” I am wondering how that would make my application any simpler or me more productive, please do tell.
On Hacker News in general about VB6 – I think the point is being missed. I was very productive in developing desktop applications in VB6 but there is no equivalent for web applications.
Thomasmallen on Hacker News says, “His core argument is wrong. Frameworks don't exist to make programmers' lives easier. Our job difficulty is essentially fixed: we will push ourselves as hard as we can. What frameworks do is increase the scope of what we can accomplish within that level of effort.” I disagree, and I think you missed the point. There is so much language and framework being used that it is a humungous job to stitch anything together easily, at least in my .NET web application world. That’s the point.
Gregk on Programming Reddit says, “We can't re-invent the wheel all the time. There is a tendency for over-engineering a little these days. But that is avoidable. In general, I can say we have better tools than 10 years ago but we are also more demanding and tackling more complex tasks.” Again, I think my point may have been missed. I am not trying to reinvent the wheel at all or over engineer anything; I am trying to solve a business problem in the most productive way possible. Like I said before, VB6 allowed me to solve a business problem simply, and was a better desktop tool almost 10 years ago than the web development tools we have today to solve the same problem, except on the web.
Vhold on Programming Reddit says, “Every now and then this sort of thing is said, and there is always a very significant missing piece. The software people are writing now does a lot more, and a lot more reliably (on average) than in the past.” Hmm, does a lot more, says who? I think we are solving the same problems over and over again, but it is getting harder to do, as our tools and frameworks are not making it any easier – they keep increasing in size and complexity, yet our business problems are the same. An ecommerce application 10 years ago is “basically” the same as it is today. Sure, there is lots of extra stuff, but the bottom line is to add stuff to the cart and get it transacted, or at least that is what my ecommerce customers really care about, and they care about doing it faster and cheaper than 10 years ago. That’s the issue.
Recursive on Programming Reddit says, “No one's stopping you from doing pure programming.
Write your own web server in a procedural language, and then write your own procedural web app on top of it. The big scary complicated frameworks will be waiting when you're done banging your head on the desk and crying.” Dude, who said anything about me wanting to do pure programming? Show me in my post where it says I want to do that.
If I had a prize to give out, Ken would get it! My one page web application is not simple. It was a trick question. My one page web app was “designed” to be “simple” to use, it was not simple to design and build. I had to use a lot of languages, frameworks and components that were never really designed to work together and that was the effort involved to reach my goal.
My point in all of this, this being a blog about software industrialization, is that we, the software development industry, have not caught up to developing web applications the way we can develop desktop applications. I would say that had I developed my open source web app as a desktop app, it would have taken me 10x less time (probably more like 100x), meaning I would have been at least 10x more productive, and that is my point. I hope the future is not too far away where we can develop sophisticated web apps with the same speed and ease as we can develop desktop apps today, regardless of choice of languages or frameworks.
Friday, 05 December 2008
Wow! Was I surprised to see the number of page loads reported in StatCounter for my little blog article from the night before. My PII 600 MHz, 384 MB RAM web server, circa 2001, was barely able to keep its CPU from melting down.
Double wow on some of the comments. It seems there is a split between those who got the article and others – well, giving the benefit of the doubt, maybe I was not as clear as I could have been with what I was trying to communicate.
However, having looked at some of the Hacker News comments and Programming Reddit comments, it is clear that some people did not seem to get the point I was trying to make.
Let me try again using a concrete example this time. Let’s use my open source web application I wrote as the example because you are free to download the source code and examine it to your heart’s content to make up your own minds as to what my beef is, if you so desire.
My web app is really simple; in fact, the application consists of exactly one page! Mind you, there is a lot going on, as you will see, but still, it is one page. For now, who cares what it does; let’s look at the list of technologies, frameworks, languages, components, etc., used (and reused) in this “one page” web application:
ASP.NET 2.0 framework classes
AJAX (.NET 3.5 FCL)
ASP.NET AJAX Control Toolkit (Tab and Hover controls)
.NET 3.5 Windows Communication Foundation (WCF) framework classes
Visual Studio 2008 IDE
MDbg CLR Managed Debugger (mdbg) Sample application and API framework
.NET 2.0 Winforms framework classes
Microsoft Agent SDK (API framework)
Microsoft Remote Desktop Web Connection Active X component
Google Earth application
Prototype.js open source AJAX framework
IronPython 2.0B2 and Dynamic Language Runtime (DLR) with IronRuby support
IronPythonTextbox – open source IronPython rich client text box and interpreter
Color List Box – open source WinForms modified List Control
RealWorldGrid – open source ASP.NET modified GridView control
For those that did read the first post in this series, do you now see what I mean? No? Let me further illuminate with the following ten points:
Did I say I hated frameworks? No
Did I say I wanted to reinvent the wheel? No.
Did I say I wanted to do pure programming? No.
Did I say I wanted to code from scratch and roll my own framework? No.
Did I say I did not want to use frameworks or reuse code? No.
Did I say that I wanted to return to my beloved VB 6 as my “ideal development model?” No, in fact, I have not used VB6 since 2000 when I converted to C# .NET (I was an early adopter).
Did I say I was sick of programming? No, first sentence in previous post – I love to write code!
Yes, I have tried several programming languages, O/S’s, IDE’s, etc., over the last 17 years that I have been programming professionally – this is not about programming languages, other than that they are a part of a much larger issue.
Am I a hobbyist programmer? No, and not to sound defensive, I am a professionally trained software engineer, with a 2 year post grad in Software Engineering Management and 17 years professional experience. I have worked for Motorola (CMM Level 5), Kodak (CMM Level 3) and several other shops of various sizes and industries, plus ran my own successful custom software development company for four years with a staff of 25 people. Currently I work for one of the best custom software development companies, producing enterprise ecommerce solutions for some of the largest retailers and supply chains in the world – the odds are that you have already used our ecommerce software and don’t even know it.
You want to interview me? (for those that asked) First, pass my test by downloading my open source code and explaining to me why my software design is good or bad, and the reasons for either. Then we will talk.
Enough tomfoolery. What I said was that web application frameworks, components and tools are like 10 years behind the tools we are used to using to develop desktop applications. In fact, I stated that it is getting worse, not better. I used VB6 in a simple two tier application scenario as the example. Now let’s compare to my web app.
Look at how much “stuff” has to be used to make my “one page” web application work. Imagine developing this as a desktop application. You could easily cut out 50% of the stuff used in the web application and still have the same functionality in the desktop application, not the least of which would be a working WYSIWYG design editor.
This is what my one page web app looks like rendered in IE 7.
This is what my one page web app looks like rendered in Visual Studio 2008 design time mode.
And no, this is not just a Visual Studio problem, I have run into this with a variety of other web development tools.
My point is illustrated by the number of items listed to make a one page web application and the design time view of the web application. Way too much stuff, and I can’t even see what I am doing! How is this helping me be more productive? It isn’t, and that is my point. Our frameworks and tools for developing web applications are getting worse, not better. Vendors keep adding more and more stuff, yet we can’t even get an editor that shows a proper design time view of what we are trying to build.
How would an engineer design an automobile (or house or electronics or cell phone or game console or name your item) in AutoCAD with a design time view like the one for my web application above? It would not only be totally unacceptable, it would be ludicrous! So why do we, the software development industry, accept it? And worse yet, why do people defend it in some of the comments I have read? That is what makes no sense to me at all. Am I the only one with the sunglasses from They Live? I hope not, because if I remember the movie correctly, it ends badly.
Updated: A Programmer's Dilemma - Productivity Lost - Answers
Thursday, 04 December 2008
I love to write code. I am 49 years old and have been programming off and on from 1977 to 1990, and as a full-time professional since 1991. I hesitate to even guess how many programming languages I have used over that time frame. Since I love to program, not only was I using several different programming languages at various jobs, I was also experimenting with several others after work.
I don’t do as much “on the job” programming, as my role has been “Software Architect” for several years now, but I still do a fair share and even more so in my spare time. For example, I released an open source project that I have been working on for the last two years called Global System Builder. It was supposed to be fun, but that is the crux of the issue I am having – it was mostly a lot of really hard work. Not that it was technically difficult, but something seemingly very simple turned out to be extremely hard to do.
Let me digress a moment to illustrate a point. As the domain name of this blog indicates, I was progressing through programming languages as the level of abstraction was being raised over time. Meaning, in modern times, not worrying about memory management, à la garbage collection in languages like C#, enjoying the REPL feel of dynamic programming languages (e.g. IronPython and IronRuby), and marvelling at the power of functional languages (e.g. F#), and then...
I came to the realization that as time marches on, rather than programming becoming easier (read: simpler), it is actually becoming more complex. Today, writing even something simple seems to take a monumental effort – seemingly more configuration effort than programming – and with so many moving parts, it is fraught with errors that are not compile-time related. I felt this not only in the professional software development world that I live in, but even on the open source project I was working on in my spare time for fun. It was supposed to be fun, but instead it was a design exercise at every turn: figuring out which of the programming languages, frameworks, components, and widgets was the lesser evil, since none of them did what I wanted them to do. As you will see, this is the irony.
Sure, we use all sorts of frameworks today that supposedly make our programming lives easier. The one I am most familiar with is the .NET Framework. At over 11,000 types and over 100,000 members, I am overwhelmed by its sheer size and complexity. I can barely wrap my head around 7 ± 2 items, let alone something several orders of magnitude larger. I spend more time looking at MSDN documentation, trying to figure out how some type works and which members I can use, than actually writing code.
The argument is that we become more productive because “it is taken care of in the framework.” My experience, and that of others, tends to disagree from a practical perspective, to say nothing of the simple math. Our brains are not designed to juggle thousands of types, so we spend a great deal of time searching: looking up docs, figuring out what to use from a design perspective, looking at the samples from an implementation perspective, looking at how others have used it – only to find that while it is close to what I want, it does not really meet my requirements. Fine, close enough, and with a few overrides, no problem. But when you get into low-level design and implementation, you run into hard-stop limitations, and then you a) try to find ways around the limitations, or b) go back to the drawing board, or c) think about not programming anymore.
And so ends the digression. The point is made by Charles Petzold in “Does Visual Studio Rot the Mind?”, under the subtitle “The Pure Pleasures of Pure Coding,” where he states, “...but there’s no APIs, there’s no classes, there’s no properties, there’s no forms, there’s no controls, there’s no event handlers, and there’s definitely no Visual Studio. It’s just me and the code, and for awhile, I feel like a real programmer again.”
At last we arrive at why I am possibly deciding not to code anymore. There is so much in the way and there is so much “of it”, that doing anything simple has become an incredibly complex task and doing anything complex takes teams of software folks to deliver due to the sheer size and complexity of our own programming environments, let alone trying to solve the problem domain we have been given.
Here comes the “I remember a time when...” Love it or hate it, Visual Basic 6 (or the Delphi equivalent at the time) was the most productive programming language and toolset I have ever worked with (Digitalk Smalltalk takes second place) for building business applications in the 17 years I have been doing software development. Why?
I used to do storyboards right in the VB6 IDE, asking the business guys what they wanted: “so you need a form, with a list box – and what should be in the list box? And when you clicked OK, now what...” And on it went. In a couple of days to a week, we basically had the front end of the application prototyped. Since it was back in the “two tier” days, all we had to do was hook it up to the database. And if we got really “fancy” (or had the luxury), we added a third tier of business logic written as VB6 classes, not in the stored procs, and after a bunch of testing, report writing, etc., boom! Off it goes into production.
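To give a flavour of how little code that hookup took, here is a minimal two-tier sketch in classic VB6 with ADO. This is an illustrative reconstruction, not code from any real app of mine – the form, control, table, and connection-string names (lstCustomers, Customers, the SQLOLEDB data source) are all hypothetical:

```vb
' Hypothetical two-tier VB6 sketch: fill a list box from the database.
' Assumes a form with a ListBox named lstCustomers and a reference to
' Microsoft ActiveX Data Objects (ADO).

Private Sub Form_Load()
    Dim cn As ADODB.Connection
    Dim rs As ADODB.Recordset

    ' Tier 2: open a connection straight to the database.
    Set cn = New ADODB.Connection
    cn.Open "Provider=SQLOLEDB;Data Source=.;Initial Catalog=Orders;" & _
            "Integrated Security=SSPI"

    ' Fetch the rows and pour them into the UI (tier 1).
    Set rs = cn.Execute("SELECT CustomerName FROM Customers ORDER BY CustomerName")
    Do While Not rs.EOF
        lstCustomers.AddItem rs!CustomerName
        rs.MoveNext
    Loop

    rs.Close
    cn.Close
End Sub
```

The form itself was drawn entirely in the WYSIWYG designer; the only hand-written code was this handful of lines wiring the controls to the database – which is exactly the productivity I am talking about.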
There were a lot of happy people back then. The business users were happy because they got to sit in and basically design the user interface with me while figuring out the app. And then, weeks later, it was delivered and did exactly what they wanted it to do. I was happy, along with my team, as we got a buzz from being so productive and delivering what the users wanted. And the tools, language and database were simple enough back then not to get in anyone’s way. Everyone was happy! So what happened?
One part of it is that web applications happened. Even in 2008, Visual Studio ASP.NET cannot give you a WYSIWYG view of your web application. Yet VB1 was able to give us WYSIWYG back in 1991! What the heck? Not to pick on just Microsoft: I would say the majority of vendors’ web development tools are a decade or so behind their “traditional” tools for building desktop applications. Further, vendors push more and more features, more and more “layers,” which means more moving parts, which means more complexity, which means making it harder and harder for the programmer to use. It reminds me of the infamous air-conditioning “duct” scene in the movie Brazil.
Our tools and applications have become so large, so numerous and so complex that they are literally making us less productive. And to me, productivity is everything. Productivity. It is as simple as that.
Updated: A Programmer's Dilemma - Productivity Lost - Part II
© Copyright 2009 Mitch Barnett - Software Industrialization is the computerization of software design and function.