2005.08.15 03:08 PM

Ouch

Just finished a conference call with a group of very smart folks who work for a client of mine. The meeting concerned a framework I wrote for the client back in 2001 that allows my client's functional staff (as opposed to their technical staff) to assemble and asynchronously execute complex, data-heavy, multi-layered, abstract, function-oriented formulas in Excel using a controlling run-time engine. The engine and the Excel formats it expects were (at the time) the culmination of years of thinking about this particular problem space, and I've always been pleased with the way it turned out and the relatively short time it took to write. As best I can tell, it has been working without any serious issues since then and has even been adopted for use in some other functional areas.

Of course, as with any interpreter or compiler, my engine is only a small part of the whole picture. The real work is found in the stuff that folks build and feed it, and it's in this stuff that things tend to go wrong. There are a number of reasons for this. For one thing, their stuff has to solve real problems. For another, their stuff tends to have a substantially larger footprint than the engine itself, which means there's far more to debug and test. Finally, their stuff is defined and coded at a higher level of abstraction by functional staff who know only a little about how the engine will actually interpret it.

One way to reduce the potential for trouble in these situations is to provide implementers with design-time tools that hide or reduce the complexity of the implementation process so they can concentrate on the problem rather than the engine's expectations (let them cook the sausage without also having to make it). Unfortunately, the engine's original schedule and budget didn't allow for creating any tools, so the first full implementation was done entirely by hand. This was sort of like building a complete .NET WinForms application without the forms designer. Not impossible, but very, very hard.

Since that first implementation, various folks have over the years added rudimentary design-time tools to the engine, and that's what today's call was about. They wanted to talk about fleshing out these tools to improve the design-time lives of their implementers. I've never seen these tools and so couldn't really contribute to the discussion regarding their construction or utility, but there was something that bothered me about their explanation of how the tools worked. To explain what that was, though, I have to back up a little and explain some things about the engine's original design.

The engine was specifically built to separate the data, presentation, and calculation portions of each implementation. There were a number of reasons for adopting this approach. The first was logistics. The folks responsible for the first implementation were located in different offices, and so it was beneficial to be able to separate the work for simultaneous development. The second was timing. In order to meet our deadlines, implementation work had to begin before the engine was coded. This was accomplished by defining the engine's format standards first (think interface contracts) and providing them to the implementers as targets for their work. As long as they hit the interface targets, I could slide completed parts of the engine in under them without interruption. Finally, and probably most importantly, the separation was intended to reduce coupling between the layers to improve the chances of reusing portions of the calculations between implementations and to ease debugging and testing.
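If it helps to see the shape of that, here's a minimal sketch of the contract idea in Python. This is strictly illustrative: the real contracts were Excel format standards, not code, and every name below is invented.

```python
from typing import Any, Mapping, Protocol

# Hypothetical contract: the engine only ever sees this interface, so the
# calculation layers could be built against it before the engine existed.
class CalculationLayer(Protocol):
    def inputs(self) -> list[str]:
        """Names of the data fields this layer consumes."""
        ...

    def evaluate(self, data: Mapping[str, Any]) -> Mapping[str, Any]:
        """Compute results from the supplied fields."""
        ...

def run_engine(layer: CalculationLayer, data_layer: Mapping[str, Any]) -> Mapping[str, Any]:
    # The engine fetches only the declared inputs and hands back the results;
    # it knows nothing about how the calculation layer is built internally.
    subset = {name: data_layer[name] for name in layer.inputs()}
    return layer.evaluate(subset)
```

As long as an implementation hit that kind of target, the pieces on either side of it could change independently.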

During the engine's first implementation, the first and second reasons achieved their purposes. However, the last one, reduced coupling, kind of got hijacked. In an effort to get a grip on the constantly evolving data schema, which contained thousands and thousands of data elements, the chief functional implementer got into the habit of synchronizing the names of function parameters in the calculation layer with the names of the data fields in the data layer that would (most likely) fill and/or consume them at run-time. In other words, the implementer was writing functions with parameter names that matched the names of data fields, instead of relying on formulas to map data into and out of the function parameters. He compounded the problem by shunning cohesion in favor of functions that took hundreds of parameters.
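To put that habit in code terms (a hypothetical Python rendering with made-up field names; the real thing lived in Excel function definitions and formulas), it amounted to something like this:

```python
# Anti-pattern sketch: the function's parameter names are copied verbatim
# from the data layer's field names, so binding happens implicitly by name
# matching instead of through an explicit mapping formula.
def net_amount(POLICY_BASE_RATE, POLICY_TERM_MONTHS, REGION_LOAD_FACTOR):
    return POLICY_BASE_RATE * POLICY_TERM_MONTHS * REGION_LOAD_FACTOR

record = {
    "POLICY_BASE_RATE": 1.25,
    "POLICY_TERM_MONTHS": 12,
    "REGION_LOAD_FACTOR": 1.1,
}

# Because the names match, the call "just works" -- and every rename in the
# constantly evolving schema now ripples into every function signature.
result = net_amount(**record)
```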

Now, I don't fault the implementer for having done this. He's one of the smartest fellows I've ever met and a real joy to work with (he's one of those rare smarties who can walk that fine line between the functional and technical camps). But, in the end, he's not a programmer, so things like tight coupling and low cohesion don't raise red flags for him. And, inasmuch as he was working without the benefit of any design-time tools, he really needed some way to tie together the logic and the constantly evolving data; I suppose matching names seemed the obvious thing to do. To his credit, he knew there was something distasteful about this, but only in the sense that it forced him to maintain a time-consuming (and often error-prone) synchronization effort on top of all the other work of implementation, like writing functions and the formulas that were supposed to map data into their parameters (which, in his model, of course, seemed terribly redundant). Finally, I'll note that his use of large functions with lots of parameters was mostly the byproduct of implementing logic already found in other existing Excel workbooks without investing in the functional decomposition needed to bust them into smaller pieces, all in an effort to save time. Fair enough.

Fast forward again to the meeting. What I figured out was bothering me about the existing tools they were discussing was that they apparently incorporated this implementer's tightly coupled model into their operation. In other words, instead of being tools that promoted good layer separation by offering formula authors easily digested pick lists of data fields and function parameters (and their descriptions) with which to map one to the other, they codified the flow of field names from the data layer down through the formulas and right into function parameter names. Oh nuts.

When I finally figured this out, I spoke up, explained my discomfort with this approach, offered up some history on the value of loose coupling (and high cohesion), and described why implementations were originally separated this way in the engine to promote Good Things™. To lend some non-technical weight to my position, I recounted my experience successfully implementing a chunky piece of the original implementation's functional work using these very techniques. (I was originally given this piece of functional work because of time constraints, and because it had a unique data-enumeration requirement. The thinking was that it would be easier for me to implement it using the engine I wrote than it would be for a functional person to implement it with no engine documentation and no enumeration samples.)

Implementing this functional piece allowed me to see, up close and personal, just how terrible the implementation process was without the aid of design-time tools. And, while I couldn't talk them into investing in lots of new tools, I did make numerous run-time improvements as a result of the experience, including adding an interactive debugger with logging and run-time data querying. But I always thought that the most important part of my foray into functional implementation work was that it let me create a fine working example of a best-practice implementation: fully decoupled, highly cohesive functions (not only reusable, but actually reused within my formulas) derived from an actual decomposition of the requirements, and independently named function parameters joined to data fields by clear mappings in the formulas. It was, I thought, a thing of beauty. It worked, was easy to trace and debug, and satisfied the requirements I was given.
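For contrast with the earlier sketch, here's the same hypothetical calculation done the decoupled way, with a Python dictionary standing in for what were really mapping formulas in my implementation:

```python
# Cohesive function with independently named parameters; it knows nothing
# about the data layer's field names.
def net_amount(rate, months, load):
    return rate * months * load

# The binding lives in one explicit, easily traced place (in the real
# implementation, formulas in the calculation layer). A schema rename
# touches only this mapping, never the function.
field_map = {
    "rate": "POLICY_BASE_RATE",
    "months": "POLICY_TERM_MONTHS",
    "load": "REGION_LOAD_FACTOR",
}

record = {
    "POLICY_BASE_RATE": 1.25,
    "POLICY_TERM_MONTHS": 12,
    "REGION_LOAD_FACTOR": 1.1,
}

result = net_amount(**{param: record[field] for param, field in field_map.items()})
```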

However, following my long-winded explanation, I was informed by the functional lead on the call that they had a slightly different term for my perfect implementation:

"Black hole"

Ouch. I don't think he meant to be mean (anyhow, I'm a consultant, so who cares), and he wasn't suggesting that my functional development adventure wasn't everything I imagined. It's just that what I'd done was so far removed from the typical functional implementer’s experience that everyone was afraid to go near it for fear of breaking it. Rather than serving as an exemplar effort, it was effectively off limits. It was a black hole.

So, is there a lesson here? Yes, there is.

