Sunday, January 13, 2008

The Unintended User?

For years now I have been in meetings and listened to talks from experts who describe something called the "Unintended User Problem." This term, like many in the SOA world, is bandied about with little to no real definition. This is very convenient for the speaker, as they sound like they know what they are talking about, and it means many things to many people in the audience, so no one tries to make waves by asking a silly question like "What do you mean, exactly, when you say 'Unintended' user?" Although, even if asked that question, I'm sure an adept and erudite presenter would respond with something like "Well, it depends." That kind of talk has now pushed me to the point of being physically ill when I hear a non-answer like that.

At any rate, I'm not sure what people are talking about with the term "Unintended User". After some thought about what the definition of "Unintended User" might be, I think people mean: The Unanticipated User

Some definitions may be in order:

The Unanticipated User:
A user that you simply did not expect to show up and use your service.
Of course, you may have certainly intended this user to show up and consume your service. You just don't have quite enough horsepower to support them. This problem is usually solved by adequately scaling your service. Usually, people never quite get all the funding that they ask for and end up deploying on "not quite enough" hardware to support the eventual load. Nonetheless, the basic solutions include running the service in a cluster or "virtualizing" the service; ergo, making the service appropriately scalable.

The Unknown User:
A user that shows up to use the service but is not authorized, regardless of whether you planned enough capacity for them.

This problem is solved by dynamically changing security profiles and access control.
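To make "dynamically changing security profiles and access control" a bit more concrete, here is a minimal sketch in Python. All the names and the policy structure are my own hypothetical illustration, not any particular product's API; the point is simply that a caller is checked against a policy that can be updated at runtime, so an Unknown User can be turned away (or later granted access) without redeploying the service itself.

```python
# Minimal sketch: a dynamically updatable access-control check.
# All names here are hypothetical; a real deployment would use the
# policy store and identity mechanism of its own infrastructure.

class AccessPolicy:
    def __init__(self):
        # consumer id -> set of operations that consumer may invoke
        self._grants = {}

    def grant(self, consumer_id, operation):
        """Authorize a consumer for an operation (can happen at runtime)."""
        self._grants.setdefault(consumer_id, set()).add(operation)

    def revoke(self, consumer_id, operation):
        """Remove an authorization without touching the service itself."""
        self._grants.get(consumer_id, set()).discard(operation)

    def is_allowed(self, consumer_id, operation):
        return operation in self._grants.get(consumer_id, set())


def handle_request(policy, consumer_id, operation, payload):
    """Front-end check: Unknown Users are rejected before the service runs."""
    if not policy.is_allowed(consumer_id, operation):
        return {"status": 403, "reason": f"{consumer_id} not authorized for {operation}"}
    return {"status": 200, "result": f"ran {operation} for {consumer_id}"}


if __name__ == "__main__":
    policy = AccessPolicy()
    policy.grant("known-program-A", "getTrackData")
    print(handle_request(policy, "known-program-A", "getTrackData", {}))    # allowed
    print(handle_request(policy, "unknown-program-X", "getTrackData", {}))  # rejected
```

In a real SOA deployment this kind of check would typically live in the infrastructure (a gateway, ESB or policy server) rather than in application code, which is exactly what lets the security profile change dynamically.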

The Unintended User (a.k.a. the Demanding User)
A user that is probably known, and authorized, but does not want to use your service the way it was intended to be used.

This is what I believe is truly meant by the phrase "The Unintended User." It's important to note that the notion of "unintended" should be a mutually understood characteristic of the consumer, shared by both the consumer and producer organizations. It is not that there is some misconception about the consumer caused by short-sightedness on the part of the service provider. The service provider should know what the hell they are doing when they expose a service and be able to explain that to potential consumers. If there are users out there that want to use that service, but in a way that is inconsistent with how the service provider intended, then the user is, by definition, "unintended."

In this case the service provider may create a new service just for that consumer, to align with the consumer's idea of "intended" usage. I would hope that the service provider would require the consumer to "pay a premium" to the service producer for that "new" service.
Or, the demanding (a.k.a. unintended) consumer will just have to adjust their expectations and use the provided service as the service provider organization intended it to be used.

If the service provider somehow missed the mark with respect to what consumers really wanted, then shame on the service provider. If that is the case, then the provider should take its medicine, re-evaluate the needs of the consumer and try again.

Tuesday, October 23, 2007

On Change and SOA

There is a quote with which I begin most of my SOA presentations, especially those to a largely non-technical audience. I do not go into the deep philosophical ramifications of this quote, but I hope that it leaves the seed of an idea in my audience's heads that will help them better understand the value of SOA in the long run. It's a not-so-famous quote from a very famous person. A quote that is consistently forgotten in the shadow of the popular, shall we say, "Machiavellian" interpretation of The Theory of Evolution by Charles Darwin.

It is not the strongest of the species that survives, nor the most intelligent that survives. It is the one that is the most adaptable to change.

Charles Darwin


So, even Darwin himself takes issue with the popular conception of "Darwinism" that only the strong survive. For anyone that has gone through a significant change, it's certainly not easy. But what is the definition of change that Darwin is driving at? Is it more along the lines of a "catastrophic" change, where the world changes in an instant and only those in the best position to adapt can survive? Or is it the inability to catch up and adapt to a multitude of small changes along the way, over a lifetime or perhaps over several generations, that makes the difference?

Change is a funny subject, and certainly with respect to SOA. People have different ideas about the nature of change and how it applies to technology, their business and their overall world-view. However, since a major value proposition of SOA is "agility", it's important to define this to some degree. When we speak of agility we largely mean the ability to rearrange how distributed systems are integrated and how business processes are created and run. This is not the only definition, but it's the one, in my humble opinion, most relevant to SOA. This agility is important as it allows corporations and government entities to react quickly to market/competitive changes, new legislation or world events. And, of course, this agility is a major component in the Return-On-Investment (ROI) calculations of SOA.

From my experience, when the buyers of SOA think of change, they think of changing all the time. That is to say, change happens often, perhaps every day, and it's imperative to react accordingly or doom and gloom is the unfortunate result of sluggish action. Further, the users of SOA think that when change occurs, it's usually imperative to do something about it as soon as possible. The speed at which you are able to react to changes in your environment (and any context will do here: commercial, economic, competitive, military, legal, etc.) is just as important as how often change occurs. In fact, change might manifest itself as the realization that you've made a mistake in tactics or strategy and you need to fix it quickly. Obviously, if you can't react to change faster than it happens, then you'll be hopelessly left behind. For example, if the competitive landscape changes every 3 months, but you can only make substantive changes to how you do business in a 5 month window, then you're in for a bumpy ride and perhaps ultimate doom. Therefore, I think that the fear factor of the rate of change dominates the thought process of decision makers when contemplating the subject. But we should remember that there are two aspects to change that we need to balance: A) how often change occurs and B) the speed at which we can react to the change. It's my position that B is more important than A when thinking about SOA.

This SOA stuff is still not quite fully proven. And we've all seen silver bullets come and go, and they have never quite lived up to their hype. Now we're talking about ESB software and BPEL that will magically mitigate your enterprise integration woes and do it in minutes instead of months. As it turns out, what we're talking about is far more than just the technology and network plumbing that connects our enterprise applications and business processes. We're talking about the fundamental nature of business and complex distributed systems engineering. All of the SOA tools that are available on the market today simply do not do distributed systems engineering and design for you. Nor do they analyze the business implications of the way you're integrating your components, either internally or with outside business partners. The business problems you face when change happens seem, to me at least, to be the "long pole in the tent" with respect to agility. It used to be painfully true that the IT department and all the consultants they could possibly hire could not effect a change to the IT infrastructure fast enough to satisfy the business owners. That is changing. Not quite as fast as the SOA vendors might lead you to believe, but it's changing nonetheless. Soon months-to-minutes will be possible. Give it a couple more years before that kind of technology is really ready for primetime.

When you decide to make changes to react to some event or events in your world, it’s not how often those changes occur that is the issue. It’s how fast you can react to these changes when they do occur. That is what SOA agility is meant to convey. Very soon, it will be time to think about how your business decision making process will be able to keep up, not how fast your IT infrastructure can change.

Friday, October 19, 2007

I thought this was how The Matrix worked

I was browsing the internet today and came across something that I just had to comment on.

The link in question was referenced by Slashdot.

Reducing Lag Time in Online Games

Predictions from a neural network could reduce characters' jerky movements.


This just floored me.

Here are my thoughts:

If you think about it, the computer-generated world made famous in the movie "The Matrix" is simply a big massively multiplayer online game (MMOG), if you will. The distributed computing problems of "latency jitter" have been around since one computer first talked to another across a network. The Matrix would have suffered from the same problems. There are a couple of things that really jumped out at me when I saw the movie and that made the possible story line, as I saw it, really interesting.



  • Ultra-super-fast reaction in The Matrix:   Aside from the jumping from building to building and flying business (which is cool, but not a particularly interesting technical problem), the ability to move and react in ultra-fast time frames got my attention. I have been building distributed systems and various types of simulators (and a computer game is really just a type of simulation) for a long time. That "latency jitter" is a big problem, and its causes are more complex than just network distance and packet size. One way to get around it, of course, is to predict actions in the latency window so that action/reaction pairs have parity in "game time" ... as if the action/reaction were in the real world. So, the story revolves around this "Neo" dude who magically has the ability to react in "game time" faster than anyone else. "Hmmmmm," I say. Where does this lead?



  • Jacking into the Matrix from "broadcast depth":   This really got my juices flowing. "Broadcast depth?" They were connecting to "The Matrix" network through a wireless connection; using a radio. And trying to do it quietly. I know a little bit about trying to stay undetected while using radios and about the nature of wireless networks. Good wireless network connections are "loud" or "bright". Kinda like having a big, bright light bulb on in the middle of a dark room. You're gonna get noticed. High-bandwidth connections to The Matrix while you are trying to hide from the Sentinels would be a big trick. Either way, the ultra-fast reaction time of Neo gets shakier with their WiMax-Max-Max hovercraft connection. You're gonna have piles of dropped packets and other radio interference from those crazy-cool-looking-lightning-spewing "hover pods" on the Nebuchadnezzar. Trying to stay "radio quiet" would exacerbate the problem even more, as the radio could not broadcast with much energy (loudness), and that broadcast energy directly limits your effective bandwidth and the smoothness of your connection.



  • The Oracle:   This chick can predict the future!!! So there is a "program/being" that is a Matrix bigwig (the "Mother of the Matrix" no less) that can predict the future!!! Now we're getting somewhere. Neo's gotta go see this "future predicting" entity because he is "The One." And Neo can do all this crazy-fast-dodging-bullets and wicked-awesome-zippy kung-fu action!!! Maybe, just maybe, Neo has some ultra-special prediction algorithm that lets him act/react faster than anything else ... even Agents. Or, maybe The Matrix itself has allowed Neo to use its special "Neuro-Reckoning" module (which would likely be more interesting).



  • There are Real-world robots and Matrix "programs"?:   This one slightly confuses me. So in the Matrix there are "Machine-world programs" that seem to live in it. For example: The Agents, The Merovingian, the very hot Persephone, all henchmen and S&M club-goers, The Architect, The Oracle and let's not forget that "family" of programs that were escaping on the train (not sure I quite fully understand that). Even these Matrix programs have the same engineering problems that "jacked-in" humans would have regarding "latency jitter." And having played online "continuous simulation games" for at least 20 years now, even over dial-up connections, I'm sure you'd all agree that if you were experiencing "latency jitter" walking around in your everyday life, you'd want to do something about it if you could. Perhaps adding a little "Neuro-Reckoning" would work here to smooth things out. So there are also these "real-world" physical robots as well. Maybe they have problems processing real-world events in a consistent manner as well.



  • Human brains are ... neural networks:   Coincidence, you say? I think not! Things might be coming together now. That whole "we use humans as batteries" story always sounded like a load of crap to me. Now, imagine that the "machines" really needed to use humans as surrogate "Neuro-Reckoning" processors, much the same way as we have graphics and physics co-processors today. There are, in fact, neural network processor boards available for computers (http://www.accurate-automation.com/Products/NNP/nnp.html), and there have been for years. It seems we have never really had a great grasp on how the mind really works (read "How the Mind Works" by Steven Pinker). There are some thoughts going around that describe the brain as a continual "future predictor". See the TED talk by Jeff Hawkins (http://www.ted.com/index.php/talks/view/id/125). If that is the case, then much of the brain's machinery is built to do "Neuro-Reckoning" of various forms, from the split-second type needed to walk and talk and work in the physical world to the longer-term "action/consequence" things we think about, which is largely the domain of classic AI (if I get good grades in college, I'll get a good job, make good money and attract hot women). Perhaps THIS is why the machines needed the people. Perhaps the machines needed the "Neuro-Reckoning" processors that are human brains.

    The "human battery" thing seems way too hokey for me and it doesn't make sense WRT the movie dialog either. In the exchange between Neo and The Architect:

    Neo: You won't let it happen, you can't. You need human beings to survive.

    Architect: There are levels of survival we are prepared to accept.

    The whole dialog can be found here.

    So, if all the humans die, the machines go on, just with really jittery interaction with their world. Painful and frustrating, but survivable indeed. Obviously they had this problem before and they understand it. Why else would they enslave the humans to be co-processors?

    Therefore the real function of humans is as "Neuro-Reckoning" processors. So, the next question is: why destroy Zion and let Neo live and repopulate Zion all over again? At the risk of reading way too much into the dialog of a movie, let's look at the dialog one more time. Just before the "level of survival" comment, The Architect says:

    Architect: But, rest assured, this will be the sixth time we have destroyed it. And we have become exceedingly efficient at it. The function of the One is now to return to the source allowing a temporary dissemination of the code you carry reinserting the prime program after which you will be required to select from the matrix 23 individuals, 16 female 7 male, to rebuild Zion.

    What the heck does "temporary dissemination of the code you carry" mean? I wondered that. The only possible explanation of "code you carry" I can think of, because Neo is really a live human, is DNA. The "dissemination of the code" likely means making babies and spreading his genetic code. Why this? I think it might mean that the machines recycle the human population and seed it with the genetic code of the individual who has the best innate "Neuro-Reckoning Processor," based on predictive speed and accuracy. Neo was chosen because of an accident of his genetics. An accident of how his brain worked. His DNA contains the "prime program". Returning to "the source", I believe, means going to a special place in the physical world where he will deposit a portion of his DNA (or maybe a part of his brain) which will then be processed and injected into the next generation of human co-processors.

  • The machine world knew about him, nurtured him and then allowed Morpheus to come and get him as a precursor to destroying and repopulating the humans in a massive breeding effort ("this will be the sixth time we have destroyed it"). This was to create, through a pure genetic algorithm, the best "Neuro-Reckoning" processor they could get, to make their "online" experience as smooth as possible.

    Holy shit. That's much cooler than whatever other story line The Wachowskis were trying to follow (I totally did not get the ending of Matrix Revolutions ... maybe I'm just dumb). Maybe the end of the third movie was an indication that the "machine" entities thought it might be better to live in harmony with their "creator" race, or begin to blend with them, than to continue to subjugate humans. Maybe. Sounds like a couple new Matrix movies are in order here. I'll have my people call the Wachowskis' people and we'll do lunch.

    Ok ... so why don't the machines just build neural network based "Neuro-Reckoning" co-processors and dispose of those pesky humans? Good question. Perhaps there is something special about how human brains work that the machines could not figure out. Perhaps they tried and failed.

    The Architect: The first matrix I designed was quite naturally perfect; it was a work of art, flawless, sublime. A triumph equaled only by its monumental failure.

    The "dead reckoning" algorithms and their variants just didn't work or the neural networks they tried made the latency jitter worse instead of better. There are some theories that there is something special in how our neurons are made that allow for something called Quantum Computing which would allow for hyper-speed computations that would be quite useful for things like "fast complex predictive algorithms." But that's just a hypothesis. (See http://www.iscid.org/arewespiritualmachines-chat.php)


    If we figure this stuff out, then it will have a massive effect on how armed conflict occurs. If we can help fix the latency problem, then we could have real-world battles run by "remote control." Autonomous Unmanned Aerial/Ground Vehicles (UAVs/UGVs) are not that smart. I'd rather have people run them. The F-35 may well be the last manned fighter to be designed and built. But how do you fly a remote-controlled jet at Mach 3 pulling 14Gs with a 1 second latency? Not well, actually. Not something you want to do with real-world consequences. Frankly, I would prefer a war of robots run by remote control rather than "intelligent autonomous" robots. I would like more control over things with big guns. I'm just like that.
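Since this post leans so heavily on "latency jitter" and prediction, here is the promised minimal sketch, in Python, of the plain dead-reckoning idea: purely illustrative, not taken from the article referenced above. A neural-network predictor would simply replace the constant-velocity extrapolation with a learned one.

```python
# Minimal sketch of dead reckoning: extrapolate a remote entity's position
# between (possibly late, jittery) network updates. Purely illustrative.

class DeadReckoner:
    def __init__(self):
        self.last_pos = (0.0, 0.0)   # last reported (x, y)
        self.last_vel = (0.0, 0.0)   # last reported velocity (units/sec)
        self.last_time = 0.0         # game time of the last update

    def on_network_update(self, pos, vel, timestamp):
        """Called whenever a (possibly late) state update finally arrives."""
        self.last_pos = pos
        self.last_vel = vel
        self.last_time = timestamp

    def predict(self, now):
        """Estimate where the entity is *now*, despite the latency window."""
        dt = now - self.last_time
        return (self.last_pos[0] + self.last_vel[0] * dt,
                self.last_pos[1] + self.last_vel[1] * dt)


if __name__ == "__main__":
    dr = DeadReckoner()
    dr.on_network_update(pos=(10.0, 5.0), vel=(2.0, 0.0), timestamp=1.00)
    # 250 ms later, no new packet has arrived; render the predicted position
    # so the character keeps moving smoothly instead of jerking.
    print(dr.predict(now=1.25))  # -> (10.5, 5.0)
```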

It's been a while

It's been a while since I've posted. I had to take a hiatus from public postings while some possible career changes were underway. That's just about to end, and it looks like I'm clear to go back to posting my thoughts on various themes, the main one being Federal SOA.

It's good to be back in action.

Monday, May 21, 2007

Federal SOA Watershed gets published in GCN

The first of my blog postings to get published was the Federal SOA Watershed article.

I had to edit it extensively, but the final product came out well.

The link to the GCN website is:

http://www.gcn.com/online/vol1_no1/42609-1.html

Oil and Water post published in Government Procurement Magazine

One of my posts, "Oil and Water," was rewritten and published in Government Procurement Magazine.

The article can be found online at the following address: http://www.govpro.com/Issue/Article/52712/Issue

The editors took some liberties with the wording of the article that I was not all that thrilled with, but all in all it seems like a good article.

Peter Bostrom, the Federal CTO of BEA and former Federal CTO of Tibco, played editor on this article.

Here comes NCES SOAF

The draft RFP for the flagship NetCentric program has been on the streets for a couple of weeks now. There has been some significant interest; however, folks are not quite sure what to make of it. Is DISA just throwing in the towel by putting out an RFP that is doomed to failure? Good question. And the fact that the RFP reads like "deja vu all over again" ... nothing new here ... is not a good sign.

But it just might take the SI community stepping up and trying to be creative on this one.

Either way, it's going to be interesting.

Wednesday, November 29, 2006

Oil and Water: Incentive System for Government Programs and Systems Integrators

When contemplating the technologies and usage models for something like NCES, one has to consider two things. First, why would someone want to utilize or consume a shared service? While this seems an almost silly question on the surface, in the world of government procurement practice, and specifically in the realm of DoD program management practice, there is real doubt, and some explanation needed, as to why many might not want to use shared services. We'll ignore for the moment that the express goals and directives from the Secretary of Defense clearly declare Interoperability, Agility, Visibility and Transformation as the tenets and mechanisms for future US forces and their capabilities. Second, why would someone want to create a service that is then published and shared via a shared services infrastructure? Again, this seems a silly question, but a serious one nonetheless. The question at hand is: What is the incentive to either produce or consume a service? This author cannot find a compelling reason why anyone would take advantage of a shared services infrastructure given the current way business is done between the government and Systems Integrators. We'll take each side of the question separately, and we'll construct a use case that illustrates how a shared services incentive system might work.

We'll start with the case of taking an existing capability, service-enabling it with standards (e.g., WebServices) and then registering it to be used as GFE or GOTS by other programs. In all likelihood, this service will be developed by one of the many Systems Integration firms that do business with the government. A primary aspect of their business model is to leverage past performance on a program in order to acquire new development contracts where they basically get to build nearly the same thing yet again. Which raises the question: if the capability they just built as part of some traditionally operated contract vehicle is now generic, service-enabled and generally available as GOTS to the rest of the DoD (or even the entire government), then what incentive does a Systems Integrator have to build it if they are unable to leverage that direct experience and charge dollars for hours to build a very similar service for someone else? It's a long-winded question to be sure, but one that needs to be addressed in the business model for a shared services world. This author could imagine the situation where Systems Integrators would go out of their way to make sure that they did not build a service that was capable of being used in a generic fashion on a shared services infrastructure. It would help them protect their current business model.

Now imagine the same scenario with a slight twist. Imagine that this fictional Systems Integrator builds this fictional service and that the service is registered as a shared service via NCES. The service is then consumed by a number of other applications and programs, providing greater efficiencies in time and cost. But imagine that the Systems Integrator that originally built the service now receives continued revenue based on the usage of that service. This assumes that mechanisms are somehow in place to allow a charge model for consumers of shared services. It is beyond the scope of this document to delve into how the government might craft such a "charge for the usage of shared services" model for service consumers. Further imagine that the revenue gained from the consumers also benefits the original PEO that commissioned the shared service in the first place. (A rough sketch of what such usage metering might look like appears below.)
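As promised, here is a rough sketch, in Python, of what such usage metering and chargeback might look like. The per-call rate and the revenue split are invented purely for illustration; nothing here reflects an actual NCES or government charge model.

```python
# Hypothetical sketch: metering shared-service usage and splitting revenue
# between the builder (Systems Integrator) and the sponsoring PEO.
# The per-call rate and the split percentages are invented for illustration.

RATE_PER_CALL = 0.002          # dollars per invocation (made up)
BUILDER_SHARE = 0.60           # fraction to the Systems Integrator (made up)
SPONSOR_SHARE = 0.40           # fraction to the commissioning PEO (made up)

def monthly_chargeback(call_counts_by_consumer):
    """Given {consumer_program: number_of_calls}, compute charges and shares."""
    invoices = {c: n * RATE_PER_CALL for c, n in call_counts_by_consumer.items()}
    total = sum(invoices.values())
    return {
        "invoices": invoices,
        "builder_revenue": round(total * BUILDER_SHARE, 2),
        "sponsor_revenue": round(total * SPONSOR_SHARE, 2),
    }

if __name__ == "__main__":
    usage = {"C2-app": 1_200_000, "logistics-app": 300_000}
    print(monthly_chargeback(usage))
    # Both the builder and the original PEO see revenue grow as consumption
    # grows, which is the incentive this post is arguing for.
```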

The question of why a program would use a shared service is somewhat more complicated. It largely revolves around trust and control. If a C2, ISR or perhaps even a weapon fire control system utilizes some shared service to perform a critical task, it places an enormous amount of trust in that service, the infrastructure that provisions that service and the people that built it. It is not trite to say that many lives, soldier and civilian, young and old, are in harm's way and at high risk with some of these systems. The financial and budgetary reasons why a shared service might need to be used are not nearly compelling enough for the PM of a C2 application to utilize some shared service.

The most common and compelling reason why an application or program would want to utilize a shared service is that they cannot get the information otherwise. Examples of this include intelligence information, logistics information, as well as troop and equipment readiness data. What makes a shared services infrastructure such a compelling value proposition, then, is twofold. Firstly, the PM for the consuming application does not have to go out to each of the data sources and individually negotiate the details of how the integration between the two systems will operate. All they have to do is negotiate and encode the SLA and QoS with NCES. Secondly, there is a certain level of distrust between the consumer and the producer of information, as there is no prior indication that the producer of the service will be able to deliver to the level of quality that is demanded by the consumer, and for very good reason, as was discussed earlier; the service producer is not necessarily an expert, nor does it have all of the appropriate infrastructure on hand to satisfy the consumer. This is where the NCES shared services infrastructure is strategically important. NCES should be able to provide the appropriate level of assurance and trust that the services it brokers will satisfy the consumer's SLAs, and if they do not, then NCES should have the appropriate detection, governance and recourse in place to help the consuming application complete its mission.

This type of incentive system is a critical success factor. The ability to provide trust, adjudication, consistency and the appropriate business model will accomplish far more than any technology base, standards or set of products alone can provide.

Tuesday, October 24, 2006

Making it easier for Organizations to Publish Services

Publishing a service is far more complex an exercise than simply registering a WSDL in a UDDI registry. The technical problems that have to be addressed by a service provider are non-trivial. Equally important are the political, financial and governance issues that surround providing a service to a community like the DoD. The Federal Government, and the DoD with its NCES program, in implementing a shared-services infrastructure, need to make it easy for a service to be published on the Global Information Grid (GIG).

The technical tasks involved are only one aspect of service production. These generally include: registering a service description in the service registry; making sure that the application infrastructure, such as the application server hardware, network, etc., is scalable and up to the task of handling the kinds of loads expected of a DoD-wide service; and keeping track of service quality metrics and detecting and reporting violations of Service Level Agreements (SLAs). However, these are only a small part of the kinds of operations that should be required from a shared-services infrastructure.
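As a small illustration of the "keeping track of service quality metrics" piece, here is a minimal sketch in Python that records response times for a service and flags an SLA violation when the rolling average drifts past an agreed limit. The threshold and window size are assumptions for the example, not anything specified by NCES.

```python
# Minimal sketch: track response times and flag SLA violations.
# The SLA threshold and window size are illustrative assumptions.

from collections import deque

class SlaMonitor:
    def __init__(self, max_avg_response_ms=500.0, window=100):
        self.max_avg_response_ms = max_avg_response_ms  # agreed SLA limit
        self.samples = deque(maxlen=window)             # rolling window of timings

    def record(self, response_ms):
        """Record one invocation's response time and report SLA status."""
        self.samples.append(response_ms)
        avg = sum(self.samples) / len(self.samples)
        return {"violation": avg > self.max_avg_response_ms, "rolling_avg_ms": avg}

if __name__ == "__main__":
    monitor = SlaMonitor(max_avg_response_ms=500.0, window=5)
    for timing in [200, 350, 800, 900, 950]:
        print(monitor.record(timing))
    # The later readings push the rolling average past 500 ms, so the monitor
    # reports a violation that the infrastructure could then escalate.
```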

Setting up the mechanisms for a competitive services marketplace is a key element in giving service providers a chance to get their service used by potential service consumers. The following is a representative (and certainly not exhaustive) list of possible service infrastructure capabilities. Some of these topics will be covered in more detail in later posts on this blog.


  • Provide the means for suppliers of services to advertise their service. Simply relying on the pull model of existing service registries is not sufficient for service providers. There needs to be a well-known and standard push channel for service providers to advertise their service(s).

  • Provide the inherent capability for hosted services to automatically scale to meet demand. This should be based on how well Service Level Agreements are being kept. This capability will assure service providers that their service will be able to continue to satisfy the contractual obligation for service, while not burdening the service supplier PEO with this non-trivial requirement. It should be noted here that this type of capability will most likely be a critical success factor in the overall success of a Shared Service Infrastructure program. Having this capability inherent will greatly increase the trust factor of both consumers and producers of brokered services.

  • Provide real-time and historical Quality of Service reports regarding brokered services. This, in conjunction with the associated service SLAs, will give service providers the ability to compete on a quantitative basis, whether consumers are making a choice between two or more possible services or are shopping for a replacement because the service they are using is failing.



  • Another easily overlooked characteristic of the shared-services infrastructure is one that helps protect producers of services from becoming victims of abusive consumers. Service providers live and die by how well they satisfy their SLAs and Quality of Service objectives. For example, assume there is a consuming system that somehow utilizes a service in a manner not consistent with how it should be consumed (such as an inadvertent over-zealous invocation pattern … sort of a programming bug that causes a denial-of-service attack on a shared service). The infrastructure needs to provide a mechanism that helps protect the shared service and its all-important service metrics (a rough sketch of this kind of protection appears after this list). In short, service providers need to trust that shared services infrastructures (such as NCES) will provide a fair place to conduct business and provide technical and procedural mechanisms (a.k.a. governance) to help mitigate the risk of publishing a service.
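As noted in the last bullet above, here is a rough sketch, in Python, of the kind of per-consumer throttling that could protect a shared service from an accidentally abusive caller. The quota numbers are made up, and a real broker would enforce this in the infrastructure layer rather than in application code.

```python
# Minimal sketch: a per-consumer token-bucket limiter that protects a shared
# service from an accidentally abusive invocation pattern. Quotas are made up.

import time

class ConsumerRateLimiter:
    def __init__(self, calls_per_second=50, burst=100):
        self.rate = calls_per_second
        self.burst = burst
        self.buckets = {}   # consumer id -> (tokens, last_refill_time)

    def allow(self, consumer_id, now=None):
        """Return True if this call is within the consumer's quota."""
        now = time.monotonic() if now is None else now
        tokens, last = self.buckets.get(consumer_id, (self.burst, now))
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens < 1.0:
            return False            # over quota: reject, protect the SLA metrics
        self.buckets[consumer_id] = (tokens - 1.0, now)
        return True

if __name__ == "__main__":
    limiter = ConsumerRateLimiter(calls_per_second=2, burst=3)
    results = [limiter.allow("buggy-consumer", now=0.0) for _ in range(5)]
    print(results)  # the first 3 calls pass, the runaway burst is throttled
```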

The linchpin of the entire infrastructure effort is to provide a set of services that allow consumers and producers of services to work through a single trusted environment. The infrastructure must provide a consistent language for governance; for example, service level agreements, violation notifications, as well as processes and policies for non-compliant providers and consumers to regain compliance. It must include a path for consumers to switch services if they feel the need. It must provide providers of services a set of mechanisms by which incentives for the service provider can be realized. This speaks directly to the question: "How can we maximize the flexibility and reusability of core services?" The core services will only be used if and when NCES can provide a consistent and trusted shared services environment.

Service Infrastructure Value: Making SOA Easier

Service-Oriented Architecture seems like a simple idea. Produce a bunch of services, publish them through some type of service infrastructure for governance, and consumers of those services will then realize the obvious benefits of SOA. It seems pretty straightforward. There are lots of articles out there that tell you how to build shared services utilizing various protocols such as WebServices, REST, etc. There are many technologies you can use to consume this newly created plethora of shared services. The important part is that the service protocols are interoperable. As for the service infrastructure, there seems to be no end to the catalog of enterprise-class software products you can buy that will provide you all the services governance you could possibly need. At least this is the promise. The reality is that there are various bumps along the road to SOA nirvana. Certainly there are some technological gaps in the whole SOA story, but those gaps will be filled and the technology will work. The problems I'm talking about are a bit squishier. There are business, organizational, political and financial issues that need to be addressed.

Here's the idea: shared services are published via a service broker through one or more interoperable interfaces that will be connected to and invoked by some consumer. There are some practical issues that surround this. The practical issues that I would like to focus on in the next several posts include what I call the "friction" involved in consuming or producing services through a shared services infrastructure. The ideas here are targeted more toward those who are contemplating standing up a central service infrastructure or brokering services within their organization. Remember that simply because you have a service infrastructure, it doesn't mean that service consumers and producers will come flocking. There is a level of responsibility on the service infrastructure provider that can significantly reduce the "friction" involved in getting producers and consumers to come together. As it turns out, it's not quite as natural as you might think.

Wednesday, October 11, 2006

The Federal SOA Watershed.

The Federal Government finds itself at a watershed. The old practices of procurement to satisfy government requirements are being squeezed by a strong need for agility, visibility of information and our ever-decreasing ability to pay to "reinvent the wheel." There is an old saying that goes like this: "It's not that you have what you want, but do you want what you have." When it comes to looking at how to create new systems and processes to handle new requirements, government agencies won't be asking how to build or buy something new, but asking what exists now that can be reused and shared among programs to get the best value for government dollars spent. And the time for this is now, not 5 or 10 years from now; so we'd better learn to want what we have.

With world events being what they are, there is enormous pressure on federal budgets and organizations to not only do more with less, but to share what they have so that their systems, organizations and people have the greatest value. Service-Oriented Architectures (SOA) and Service Infrastructure software enable the Federal Government to take the critical steps toward sharing information and processes among agencies. This article provides an overview of what a federal agency needs to think about organizationally, politically and technologically with regard to SOA technologies and practices.

There has been much hype about SOA in the last few years; however, the thing to understand about SOA technology is that it represents a significant leap in the maturity of distributed systems. Sharing resources is the foundation of a sound SOA strategy. These shared resources should be available to consumers with as little effort as possible. Organizations should expose their shared systems via completely standards-driven interfaces that do not require specialized software or hardware to be purchased by those wishing to use the shared service. The most important category of SOA software with respect to strategic SOA initiatives is SOA Infrastructure software such as Enterprise Service Buses (ESBs), Service Enablement Platforms and Data Service Platforms. The reason infrastructure software is so important to consider is that it provides the technology platform that enables proactive sharing of services while mitigating many of the risks and issues that have killed other distributed systems or data/service sharing efforts.

Strategic SOA presents some significant challenges to the organization of any federal agency. Most significantly, shared services represent a completely new way of doing business. Instead of a "need to know" doctrine, where agencies do not share information unless there is a specific need from some specific other agency, the doctrine now is better characterized as a "need to share" doctrine. Agencies need to proactively begin sharing data that is, or may be, useful to other agencies. The organizational preparation that an agency needs to accomplish this reversal of doctrine should not be underestimated. Part of that preparation begins with selecting the correct SOA Infrastructure software, but even more vital to this transition is partnering with a Systems Integrator that offers extensive consulting practices covering organizational areas such as cost, budgeting, governance and business strategy.

The politics of SOA will most likely be quite simple. Government agencies shall share data proactively, actively encourage other organizations to utilize their shared services and utilize the shared services of others to the greatest extent possible, or face budget cuts and poor performance reviews. The technology is there and is ready to be utilized today to get maximum value from the federal government's existing assets. The "need to share" doctrine can be realized, and the momentum toward a shared service infrastructure across the government is growing and doesn't look like it's going to stop any time soon.

Why write about SOA for the Federal Government?

I suppose it might have been simpler to write about SOA in the abstract without framing it in some specific context. As I work in and around the US Federal Government, I have some knowledge and insight as to how the government (Civilian, Military and Intelligence community) does business. Armed with that knowledge and my position as Federal SOA Architect for BEA, it's clear to me that the benefit of SOA in this space is simply enormous.

Of significant note is the sheer size and mass of the US Federal Government. In the world of Enterprise Architecture and distributed engineering, I would be hard pressed to find an organization larger, more complicated in scope or gravity, or one that can compare politically. The military alone is a vast organization that will benefit in ways I think even it does not quite understand. The Civilian side of the federal government, with its effort to accomplish its "line-of-business" consolidation, dwarfs even the largest of private companies in the scope and complexities involved. The Intelligence Community has an obvious need to share information that can only be realized through SOA principles and a shared service infrastructure.

In all, I think it's clear that the Federal Government crystallizes and clarifies the needs, challenges and solutions for which SOA was invented. This blog was meant as a place to describe the thoughts, challenges and great ideas that I've run across during my time practicing SOA in the Federal space.