Friday, August 29, 2008

TOGAF and Evaluating Architectures

The old adage is that you always have an enterprise architecture even if you never designed one -- the point being to encourage an organization to spend the time to design one deliberately. This is all well and good, but given an ongoing enterprise, what's the best way to determine what enterprise architecture you have, where you want it to go, and, most importantly, how to get there?

For various reasons I've started looking at this issue again and have just refamiliarized myself with TOGAF (The Open Group's Architecture Framework). I had forgotten how much I liked it: it is pragmatic, highly tailorable, and focused on open cross-organizational solutions. I'm not going to do a detailed analysis of TOGAF vs. other frameworks -- it's not really my interest, as I'm definitely in satisficing mode here. What follows is my (long) elevator pitch of the TOGAF take-home message.

The top-level graphic of the TOGAF process captures the flavor pretty well:

[Image: blog__togaf_dev_cycle.jpg, the TOGAF Architecture Development Cycle]

The preliminary phase is key but very easy to overlook. TOGAF suggests that this phase consists of defining the overall objectives and scope:


  • Define Objectives
    • Assure that everyone who will be involved in or benefit from this approach is committed to the success of the architectural process
    • Define the architecture principles that will inform the constraints on any architecture work
    • Define the "architecture footprint" for the organization: the people responsible for performing architecture work, where they are located, and their responsibilities
  • Define Scope and Assumptions
    • The business units that are involved
    • The level of detail to be defined
    • The specific architecture domains to be covered (Business, Data, Applications, Technology)
    • The time horizon that should be addressed by the architecture

What I find attractive about this whole approach is its focus on getting buy-in from the key players in the organization, defining their roles, and developing a shared set of expectations around what's going to be done as part of the architecture effort. The preliminary steps give the team some initial criteria for driving the architectural vision, but then TOGAF immediately requires them to ground it in the needs of the business users. I find this grounding critical; the business often thinks that architecture efforts are worthless, and often it is right, because the architectural model hasn't been grounded in the business process. Note: in this case "business process" and "user's scientific process" are equivalent.

The key to the success of an architecture effort is to address current pain points as they will be reflected in the business processes that will be in place when the architecture rolls out. Sorry if the tense of the last sentence was a bit torqued. What I mean is that the architecture needs to hit a mark to support business operations as they will be in the future, not as they are now, and that some of the problems that are being experienced now will only be exacerbated by these planned changes.

The dialog with the Business Unit leader sounds something like:

We're planning to do more collaborations in the future, but with our current collaborations we have a terrible time registering new users and tracking responses to our questions about the data. However, if we put an architecture in place which uses our new authentication mechanism that supports OpenId it will radically simplify the process of adding new users.


In addition, if we use vendor X's implementation of the Life Sciences Industry Architecture, queries will be automatically tracked.


Our ability to handle more collaborations on the back end is increased as our new system allows us to share extra capacity across multiple business units, thereby sharing the cost of reserve capacity to meet any unanticipated surges in demand.



Such a "political/operational" model for rolling out an architectural analysis implies that everyone who contributes to the effort should get something out of it (this is a goal, but the closer you can come to meeting the goal, the more self-organizing the system becomes).

As you proceed around the TOGAF loop, you pick and choose what makes sense given the decisions made previously (which, of course, you are always free to revisit), analyzed to the level of depth that is appropriate.

Think of TOGAF as providing a (partial) checklist of processes to use and things to consider that help you reach the end state of Boundaryless Information Flow™.

[Image: blog__Brokerage_applications.jpg, brokerage applications example]


I think of it as being similar in spirit to the way the Software Engineering Institute's Risk Management Taxonomy provides a comprehensive checklist of things to consider when undertaking a project -- it keeps you from forgetting something that would be obvious in retrospect.

An aside:

My favorite quote from one of their pages:

Another company is developing a flight control system. During system integration testing the flight control system becomes unstable because processing of the control function is not quick enough during a specific maneuver sequence.

The instability of the system is not a risk since the event is a certainty - it is a problem.



Thursday, August 14, 2008

Optimization: Premature and otherwise

My post on structuring database tables made me think again about when (and how) to optimize one's code/design. The caveats about premature optimization are well known and well considered, as are the reasons for not following them slavishly.

In my mind the core questions involve "what are we trying to optimize" and "who cares".

My (obvious?) claim is that we should only spend time optimizing things that have impact upon the high level goals for the project. The reasons for performing an optimization should be articulated and evaluated in this framework. Driving optimizations by focusing on the top level goals sounds obvious but the tradeoffs are difficult to make in practice.

There are legitimate tensions about the proper framework for the analysis. That is, it is all well and good to speak of "strategic business goals," etc., but if the product is unusable, the long-term strategy doesn't matter. A strategic focus simply assures that long-term impacts are also evaluated when an optimization is considered, e.g., an optimization that greatly increases execution efficiency may or may not be appropriate if it increases the complexity of product installation and setup.

For example, the "narrow table" approach that I've been advocating is designed to support deep changes in the business processes and science over the life of the system, without necessitating deep changes in the data model. The "strategic horizon" for such a project is > 10 years (that is, a total system life of > 10 years), with the expectation that the fundamental data model reflected in the narrow tables will be relatively stable during that time. Even in a rapid prototyping environment with one or more iterations shipping each quarter, the essence of the data model should be fairly stable, since fundamental changes to the data model induce data migration efforts that distract from product improvements.

The question is: are there inefficiencies caused by the narrow table approach that will make it overly difficult to achieve a usable product in the short term? My current intuition is to go with the narrow table approach and let either materialized views (which, to my knowledge, are most easily obtained in Oracle), special database jobs, or a data grid provide the optimizations.
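To make the narrow-table idea concrete, here is a minimal sketch using SQLite. The table and attribute names (`measurement`, `ph`, `temp_c`) are hypothetical, and a plain view stands in for the materialized view or database job that a production system would use; the point is only that new attributes require new rows, not schema changes, while the view still presents the familiar wide layout:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Narrow (entity-attribute-value) table: adding a new kind of
# measurement later means inserting rows, not altering the schema.
cur.execute("""
    CREATE TABLE measurement (
        sample_id INTEGER,
        attribute TEXT,
        value     REAL
    )
""")
cur.executemany(
    "INSERT INTO measurement VALUES (?, ?, ?)",
    [(1, "ph", 7.4), (1, "temp_c", 36.6),
     (2, "ph", 6.9), (2, "temp_c", 37.1)],
)

# The view plays the role of the materialized view / database job:
# it pivots the narrow rows into the wide shape consumers expect.
cur.execute("""
    CREATE VIEW measurement_wide AS
    SELECT sample_id,
           MAX(CASE WHEN attribute = 'ph'     THEN value END) AS ph,
           MAX(CASE WHEN attribute = 'temp_c' THEN value END) AS temp_c
    FROM measurement
    GROUP BY sample_id
""")

rows = cur.execute(
    "SELECT * FROM measurement_wide ORDER BY sample_id"
).fetchall()
print(rows)  # [(1, 7.4, 36.6), (2, 6.9, 37.1)]
```

In Oracle the view would be materialized (and refreshed on a schedule or on commit), so readers pay the pivot cost rarely rather than on every query.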

That said, one of my personal rules of optimization (based on some experience) is that your intuitions are almost always wrong -- if something is taking too long, it is usually worthwhile to benchmark it even if you "know the cause of the problem" (unless the putative fix takes substantially less time than doing the benchmark). The corollary: if you're worried about something taking too long and think you "should be OK," set up a test suite at scale to validate your intuitions going forward, so that you can monitor performance as development proceeds.
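A cheap way to act on this rule is to measure the competing implementations directly rather than argue from intuition. This hypothetical sketch uses Python's `timeit` to compare two ways of doing the same job; the functions are stand-ins for whatever code is suspected of being slow:

```python
import timeit

# Two candidate implementations of the same task. Intuition may
# favor one; only the measurement settles it.
def join_concat(parts):
    return "".join(parts)

def plus_concat(parts):
    out = ""
    for p in parts:
        out += p
    return out

parts = ["x"] * 10_000

# number=200 keeps the run short while still averaging out noise.
t_join = timeit.timeit(lambda: join_concat(parts), number=200)
t_plus = timeit.timeit(lambda: plus_concat(parts), number=200)
print(f"join: {t_join:.4f}s  plus: {t_plus:.4f}s")
```

Wrapping a benchmark like this in the regular test suite (with a generous threshold) turns it into the at-scale monitor the corollary asks for: if performance regresses as development proceeds, a test fails instead of a user complaining.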

This principle holds even if the bottleneck is development time: you need to determine whether the problem is the language, the developer, the user/developer interaction, or the developer's manager (e.g., providing more interrupt-driven activity than the developer can handle). As always, measurement is the key.