The Cost of Everything

31. December 2011 05:51 by Uriah  //  Comments (0)

I've been thinking a lot lately about web pages that do too much in a single page, what I call the "everything page," and what it really costs to develop and maintain one. It's a topic Trent and I have also been discussing in terms of web development as building sites that are collections of pages (e.g. your typical shopping cart checkout process) versus web applications that function more like a traditional application, where a main window is represented by a single page (e.g. the Gmail interface).

It seems that in the majority of web applications I have worked on, the pattern is usually several pages of modest complexity and then one or two pages that are real doozies. Oftentimes the complex pages don't start out that way, but as features get added, things have a tendency to compound. Whether they start out that way or end up that way, I'd argue that getting a handle on that complexity, and finding a way to reduce or manage it, is critical to keeping down the cost of development and future support.

So how do we deal with these monster everything pages to make sure they don't cost us an arm and a leg?

One approach is to go to lengths to make sure that your page is developed in a modular way. Whether they are web parts, user controls, partial views, or whatever your paradigm might call them, having some way to divide the problem space mentally is going to save a lot of trouble in the future. If you can remove dependencies between the parts of your monster page, it might take a little more effort initially, but it will pay off when your page can be maintained without hours of study beforehand. There are varying levels of this approach: you could have each chunk of functionality include all of its necessary JavaScript and styles, and even submit independently, or you could define communication channels between your subsections.
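
To make the "communication channels" idea concrete, here's a minimal sketch of a page-level publish/subscribe bus in plain JavaScript. All of the names (`createBus`, the `item:added` topic, the cart/catalog sections) are invented for illustration; they aren't from any particular framework.

```javascript
// A hypothetical page-level event bus; names here are illustrative only.
function createBus() {
  var handlers = {};
  return {
    subscribe: function (topic, fn) {
      (handlers[topic] = handlers[topic] || []).push(fn);
    },
    publish: function (topic, data) {
      (handlers[topic] || []).forEach(function (fn) { fn(data); });
    }
  };
}

// Two independent page sections that only talk through the bus,
// so neither needs to know the other's internals.
var bus = createBus();
var received = [];

// "Cart" section: reacts to items being added anywhere on the page.
bus.subscribe('item:added', function (item) {
  received.push(item.name);
});

// "Catalog" section: announces an add without ever referencing the cart.
bus.publish('item:added', { name: 'widget' });

console.log(received); // → ['widget']
```

The point of the indirection is that either section can be rewritten, or removed, without touching the other, as long as the topic contract stays the same.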

The other approach I would mention is related, but it is often a harder sell: consider how you can break that page apart into multiple pages. A product owner or business stakeholder might assume that having everything happen on a single page is just as easy to build as a multiple-step process (with all that "annoying clicking" to navigate). What is the real cost of cramming a ton of functionality into an everything page versus spreading it out into smaller pieces, and do the stakeholders know? For example, if you have 4 pages that each take 5 hours to build separately, would you consider combining them into a single page if it instead took 40 hours and all future maintenance took twice as long? Depending on the UX demand, the answer might be yes, but I don't think this cost gets factored into the decision often enough.

As a side note, we should be aware of the cost of an iterative approach to web development. Iterative development is my preferred approach, and in most cases it proves quite valuable. It depends, though, on design and tools that support it: having modular code that is easy to change and refactor is key, as is having quality automated testing so that you can find regressions quickly. Both of those qualities are necessary to confidently make drastic changes, but both are often missing from web UI development. They can be achieved to some extent by using frameworks and applying established patterns, but it still feels too cumbersome and fragile to me. Unit testing of JavaScript doesn't seem to add enough value for the effort. Maybe I'm not trying the right tools, or maybe the messy interactions with the DOM are just too much to deal with easily (anyone have a different experience?). Iterative development in this case can lead to a lot of hacks when developers are afraid to change code because they lack a complete comprehension of the page and its interactions. Both of the solutions above help to address this tendency by keeping things simpler and easier to change.
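
One hedged sketch of what testable, modular page code can look like: keep the logic in plain functions that never touch the DOM, so it can be exercised with a bare assertion, and confine the DOM work to a thin rendering layer. The pricing rules and names below are invented purely for illustration.

```javascript
// Pure pricing logic, kept out of the DOM so it can be unit tested directly.
// The shape of `items` ({ price, qty }) is a made-up example.
function cartTotal(items) {
  return items.reduce(function (sum, item) {
    return sum + item.price * item.qty;
  }, 0);
}

// A thin, separate layer does the DOM work; only this small part is
// hard to test, and it contains no business logic worth testing.
function renderTotal(el, items) {
  el.textContent = '$' + cartTotal(items).toFixed(2);
}

console.log(cartTotal([{ price: 2.5, qty: 2 }, { price: 1, qty: 1 }])); // → 6
```

Testing `cartTotal` needs no browser, no DOM stubs, and no framework, which is most of the battle with JavaScript unit testing.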

The most important thing you can do, though, is to recognize right away when you have an everything page in the works, so that you can start thinking about ways to manage the complexity and keep the true cost of the page under control.

Q: What did the Zen Buddhist say to the hot dog vendor?

A: Make me one with everything.

Taddeini's Law

3. December 2011 05:31 by Uriah in General  //  Tags:   //   Comments (6)

I was talking with my friend Andre Taddeini recently about the state of the industry, and he made an interesting observation: the amount of bad code being written might be increasing in proportion to the demand for programmers. I thought it was interesting enough to talk about here.

The reasoning is that, as the demand for programmers goes up, the aggregate quality of developers goes down, because the supply of capable programmers is somewhat upwardly inflexible due to the effort required to become an expert. Since the supply of expert developers can't keep up with a rapid increase in demand, the expected quality of a hire necessarily drops. And as the overall quality of programmers drops, the quality of the code they write should drop as well.

There is an implied relation between experience and quality, but it could equally be that the acceptable quality of inexperienced developers is also lowered simply by demand. Either way, I accept that experience doesn't necessitate skill; I'm only talking in the aggregate. Another assumption is that we are holding the expected quality of code constant over time. This might not be true if constant improvement in frameworks and tools lessens the need for programming skill. There is certainly some truth to that assertion, but, by observation, improvements in tooling and framework support only offset the increased expectations of application complexity. A final assumption is that the demand for developers overall is roughly equivalent to the demand for expert developers.

So Taddeini's Law is:

Q = E / D

where Q is the quality of code produced, E is the population of experienced developers, and D is the demand for experienced developers.
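
As a toy illustration of the formula (with made-up numbers, since E and D have no agreed units): hold the supply of experienced developers fixed while demand doubles, and the expected quality halves.

```javascript
// Toy reading of Taddeini's Law, Q = E / D, with invented figures.
function codeQuality(experiencedDevs, demand) {
  return experiencedDevs / demand;
}

// Supply stays flat at 1000 while demand doubles from 2000 to 4000:
var before = codeQuality(1000, 2000); // Q = 0.5
var after  = codeQuality(1000, 4000); // Q = 0.25

console.log(before, after); // quality halves when demand doubles
```

Only the ratio matters here; the absolute numbers are arbitrary.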

It's really a pretty trivial observation, and I'm probably aggrandizing it by calling it a law, but I think it has some interesting implications for the state of our industry. If it's accepted as true, the implication is that the current increase in demand for developers is creating an influx of bad code for our industry to grapple with in years to come. Did the same thing happen during the dotcom boom? As inexpertly written code rots, will organizations be paying an increased "maintenance tax" in the future? Should larger organizations plan for this in some way? And finally, is there something we can do to lessen the impact of under-qualified people in the industry overall?

Find the evidence you want

28. November 2011 14:36 by Uriah in General  //  Tags:   //   Comments (0)

I recently discovered a fascinating podcast called Hardcore History hosted by Dan Carlin. While bouncing around through the backlogs, I came across an episode with historian and author Gwynne Dyer, who had a bevy of interesting things to say, but one in particular struck me as relevant to the software development process.

Mr. Dyer was talking about whether World War I had been inevitable. Some people think that the events that led to the first World War were merely a pretext and that the war would have happened even if Archduke Ferdinand had not been assassinated; it just may have happened a year or two later. I’m not going to compare software development to a world war (although it’s a bit tempting sometimes, I just can’t trivialize a real actual war while I sit in a Herman Miller Aeron). Instead, I’ll say that what jumped out at me was his analysis of how historians approached that claim.

He said that historians looked back on the evidence and found the particular things they wanted in order to support their claims. This is, on the surface, a very simple statement, but the implications are far-reaching. In general, people look at the evidence and find ways to make it support the viewpoint they already hold. Couple this with findings that throw into question whether facts actually help us change our minds, and you should start to be worried.

Let me illustrate with a story from my development past:

Imagine a project that is in as much trouble as a project can be. It’s behind schedule, over budget, and the original scope has been shredded beyond recognition. How did it get there? What went wrong? It depends on who you ask.

The testers say there wasn’t enough testing.  The developers say there wasn’t good project planning.  The project managers say the developers went off course.  Management says people didn’t work hard enough.  Analysts say there wasn’t enough analysis done.

Who was right?

In this case, it doesn’t matter (it was me). The important point is that everyone found the evidence that supported their particular viewpoint, and anything else was ignored or rationalized away. Would it have been possible to figure out where things really went wrong? Probably, if there had been a frank discussion and analysis, but that is far harder to do and requires fighting our tendencies.

It seems so easy to spot this when other people do it, yet we seem blind to it in ourselves. Can you challenge a conclusion you came to recently?

