Sunday, January 13, 2008
The Unintended User?
At any rate, I'm not sure what people are talking about with the term "Unintended User". After some thought about what the definition of the "Unintended User" might be, I think people really mean: The Unanticipated User
Some definitions may be in order:
The Unanticipated User:
A user that you simply did not expect to show up and use your service.
Of course, you may well have intended this user to show up and consume your service. You just don't have quite enough horsepower to support them. This problem is usually solved by adequately scaling your service. Usually, people never quite get all the funding that they ask for and end up deploying on "not quite enough" hardware to support the eventual load. Nonetheless, the basic solutions include running the service in a cluster or virtualizing the service; ergo, make the service appropriately scalable.
The Unknown User:
A user that shows up to use the service but is not authorized, regardless of whether you planned enough capacity for them.
This problem is solved by dynamically changing security profiles and access control.
The Unintended User (a.k.a. the Demanding User)
A user that is probably known, and authorized, but does not want to use your service the way it was intended to be used.
This is what I believe is truly meant by the phrase "The Unintended User." It's important to note that the notion of "unintended" should be a characteristic of the consumer that is mutually understood by both the consumer and producer organizations. It's not that there is some misconception about the consumer that was caused by short-sightedness on the part of the service provider. The service provider should know what the hell they are doing when they expose a service and be able to explain that to potential consumers. If there are users out there that want to use that service, but in a way that is inconsistent with how the service provider intended, then the user is, by definition, "unintended."
In this case the service provider may create a new service just for that consumer, to align with the consumer's idea of "intended" usage. I would hope that the service provider would require the consumer to "pay a premium" to the service's producer for that "new" service.
Or, the demanding (a.k.a. unintended) consumer will just have to adjust their expectations and use the provided services as the service provider organization intended them to be used.
If the service provider somehow missed the mark with respect to what consumers really wanted, then shame on the service provider. If that is the case, then the provider should take its medicine, re-evaluate the needs of the consumer and try again.
Tuesday, October 23, 2007
On Change and SOA
It is not the strongest of the species that survives, nor the most intelligent that survives. It is the one that is the most adaptable to change.
Charles Darwin
So, even Darwin himself took issue with the popular conception of "Darwinism" that only the strong survive. For anyone that has gone through a significant change, it's certainly not easy. But what is the definition of change that Darwin is driving at? Is it more along the lines of a "catastrophic" change where the world changes in an instant and only those in the best position to adapt can survive? Or is it the inability to catch up and adapt to a multitude of small changes over a lifetime, or perhaps over several generations, that makes the difference?
Change is a funny subject, certainly with respect to SOA. People have different ideas about the nature of change and how it applies to technology, their business and their overall world-view. However, since a major value proposition of SOA is "agility", it's important to define this to some degree. When we speak of agility we largely mean the ability to rearrange how distributed systems are integrated and how business processes are created and run. This is not the only definition, but it's the one, in my humble opinion, most relevant to SOA. This agility is important as it allows corporations and government entities to react quickly to market/competitive changes, new legislation or world events. And, of course, this agility is a major component in the Return-On-Investment (ROI) calculations of SOA.
From my experience, when the buyers of SOA think of change, they think of changing all the time. That is to say, change happens often, perhaps every day, and it's imperative to react accordingly or doom and gloom is the unfortunate result of sluggish action. Further, the users of SOA think that when change occurs, it's usually imperative to do something about it as soon as possible. The speed at which you are able to react to changes in your environment (and any context will do here: commercial, economic, competitive, military, legal, etc.) is just as important as how often change occurs. In fact, change might manifest itself as the realization that you've made a mistake in tactics or strategy and you need to fix it quickly. Obviously, if you can't react to change faster than it happens, then you'll be hopelessly left behind. For example, if the competitive landscape changes every 3 months, but you can only make substantive changes to how you do business in a 5 month window, then you're in for a bumpy ride and perhaps ultimate doom. Therefore, I think that the fear factor of the rate of change dominates the thought process of decision makers when contemplating the subject. But we should remember that there are two aspects to change that we need to balance: A) how often change occurs and B) the speed at which we can react to the change. It's my position that B is more important than A when thinking about SOA.
This SOA stuff is still not quite fully proven. And we've all seen silver bullets come and go; they have never quite lived up to their hype. Now we're talking about ESB software and BPEL that will magically mitigate your enterprise integration woes and do it in minutes instead of months. As it turns out, what we're talking about is far more than just the technology and network plumbing that connects our enterprise applications and business processes. We're talking about the fundamental nature of business and complex distributed systems engineering. All of the SOA tools that are available on the market today simply do not do distributed systems engineering and design for you. Nor do they analyze the business implications of the way you're integrating your components either internally or with outside business partners. The business problems you face when change happens seem, to me at least, to be the "long pole in the tent" with respect to agility. It used to be painfully true that the IT department and all the consultants they could possibly hire could not effect a change to the IT infrastructure fast enough to satisfy the business owners. That is changing. Not changing quite as fast as the SOA vendors might lead you to believe, but it's changing nonetheless. Soon months to minutes will be possible. Give it a couple more years before that kind of technology is really ready for primetime.
When you decide to make changes to react to some event or events in your world, it’s not how often those changes occur that is the issue. It’s how fast you can react to these changes when they do occur. That is what SOA agility is meant to convey. Very soon, it will be time to think about how your business decision making process will be able to keep up, not how fast your IT infrastructure can change.
Friday, October 19, 2007
I thought this was how The Matrix worked
The link in question was referenced by Slashdot.
Reducing Lag Time in Online Games
Predictions from a neural network could reduce characters' jerky movements.
This just floored me.
Here are my thoughts:
If you think about it, the computer-generated world made famous in the movie "The Matrix" is really just a big massively multiplayer online game (MMOG), if you will. The distributed computing problems of "latency jitter" have been around since one computer first talked to another across a network. The Matrix would have suffered from the same problems. There are a couple of things that really jumped out at me when I saw the movie that made the possible story line, as I saw it, really interesting.
The "human battery" thing seems way too hokey for me and it doesn't make sense WRT the movie dialog either. In the exchange between Neo and The Architect:
Neo: You won't let it happen, you can't. You need human beings to survive.
Architect: There are levels of survival we are prepared to accept.
The whole dialog can be found here.
So, if all the humans die, the machines go on, just with really jittery interaction with their world. Painful and frustrating, but survivable indeed. Obviously they had this problem before and they understand it. Why else would they enslave the humans to be co-processors?
Therefore the real function of humans is as "Neuro-Reckoning" processors. So, the next question is: why destroy Zion and let Neo live and repopulate Zion all over again? At the risk of reading way too much into the dialog of a movie, let's look at the dialog one more time. Just before the "level of survival" comment, The Architect says:
Architect: But, rest assured, this will be the sixth time we have destroyed it. And we have become exceedingly efficient at it. The function of the One is now to return to the source allowing a temporary dissemination of the code you carry reinserting the prime program after which you will be required to select from the matrix 23 individuals, 16 female 7 male, to rebuild Zion.
What the heck does "temporary dissemination of the code you carry" mean? I wondered that. The only possible explanation of "code you carry" I can think of, because Neo is really a live human, is DNA. The "dissemination of the code" likely means making babies and spreading his genetic code. Why this? I think it might mean that the machines recycle the human population and seed it with the genetic code of the individual who has the best innate "Neuro-Reckoning Processor" based on predictive speed and accuracy. Neo was chosen because of an accident of his genetics. An accident of how his brain worked. His DNA contains the "prime program". Returning to "the source", I believe, means going to a special place in the physical world where he will deposit a portion of his DNA (or maybe a part of his brain) which will then be processed and injected into the next generation of human co-processors.
Holy shit. That's much cooler than whatever other story line The Wachowskis were trying to follow (I totally did not get the ending of Matrix Revolutions ... maybe I'm just dumb). Maybe the end of the third movie was an indication that the "machine" entities thought it might be better to live in harmony with their "creator" race, or begin to blend with them, than continue to subjugate humans. Maybe. Sounds like a couple new Matrix movies are in order here. I'll have my people call the Wachowskis' people and we'll do lunch.
Ok ... so why don't the machines just build neural network based "Neuro-Reckoning" co-processors and dispose of those pesky humans? Good question. Perhaps there is something special about how human brains work that the machines could not figure out. Perhaps they tried and failed.
The Architect: The first matrix I designed was quite naturally perfect; it was a work of art, flawless, sublime. A triumph equaled only by its monumental failure.
The "dead reckoning" algorithms and their variants just didn't work, or the neural networks they tried made the latency jitter worse instead of better. There are some theories that there is something special in how our neurons are made that allows for something called Quantum Computing, which would allow for hyper-speed computations that would be quite useful for things like "fast complex predictive algorithms." But that's just a hypothesis. (See http://www.iscid.org/arewespiritualmachines-chat.php)
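For readers who haven't run into it, here is a minimal sketch of classic dead reckoning as used in networked games: the client extrapolates a remote entity's position from its last known position and velocity until the next state update arrives. The class and method names are mine, purely for illustration; as I understand the article, the neural-network approach would replace this simple constant-velocity guess with a learned prediction.

```java
// Minimal dead-reckoning sketch: extrapolate a remote player's position
// from the last state update received over the network. Names are
// illustrative only, not from any particular game engine.
public class DeadReckoning {

    // Last state received from the server.
    static class EntityState {
        double x, y;        // position at the time of the update
        double vx, vy;      // velocity at the time of the update
        double timestamp;   // when the update was generated (seconds)

        EntityState(double x, double y, double vx, double vy, double timestamp) {
            this.x = x; this.y = y; this.vx = vx; this.vy = vy; this.timestamp = timestamp;
        }
    }

    // Predict where the entity is "now", assuming constant velocity since
    // the last update. Fancier variants add acceleration terms or blend
    // smoothly toward the corrected position when a new update arrives.
    static double[] predict(EntityState last, double now) {
        double dt = now - last.timestamp;   // how stale the update is
        return new double[] {
            last.x + last.vx * dt,
            last.y + last.vy * dt
        };
    }

    public static void main(String[] args) {
        // Update generated 150 ms ago: position (10, 5), moving +2 units/s in x.
        EntityState last = new EntityState(10.0, 5.0, 2.0, 0.0, 0.00);
        double[] guess = predict(last, 0.15);
        System.out.printf("Predicted position: (%.2f, %.2f)%n", guess[0], guess[1]);
    }
}
```

The higher the latency jitter, the staler the update and the bigger the correction when the truth finally arrives, which is exactly the jerkiness the prediction work is trying to smooth out.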
If we figure this stuff out, then it will have a massive effect on how armed conflict occurs. If we can help fix the latency problem, then we could have real-world battles run by "remote control." Autonomous Unmanned Aerial/Ground Vehicles (UAVs/UGVs) are not that smart. I'd rather have people run them. The F-35 was the last manned fighter to be designed and built. But how do you fly a remote-controlled jet at Mach 3 pulling 14Gs with a 1 second latency? Not well, actually. Not something you want to do with real-world consequences. Frankly, I would prefer a war of robots run by remote control rather than "intelligent autonomous" robots. I would like more control over things with big guns. I'm just like that.
It's been a while
It's good to be back in action.
Monday, May 21, 2007
Federal SOA Watershed gets published in GCN
I had to edit it extensively but the final product came out well.
The link to the GCN website is:
http://www.gcn.com/online/vol1_no1/42609-1.html
Oil and Water post published in Government Procurement Magazine
The article can be found online at the following address: http://www.govpro.com/Issue/Article/52712/Issue
The editors took some liberties with the wording of the article that I was not all that thrilled with, but all in all it seems a good article.
Peter Bostrom, the Federal CTO of BEA and former Federal CTO of Tibco, played editor on this article.
Here comes NCES SOAF
But it just might need the SI community to step up and try to be creative on this one.
Either way it's going to be interesting.
Wednesday, November 29, 2006
Oil and Water: Incentive System for Government Programs and Systems Integrators
We’ll start with the case of taking an existing capability, standards-based service-enabling it (e.g. with WebServices) and then registering it to be used as GFE or GOTS by other programs. In all likelihood, this service will be developed by one of many Systems Integration firms that do business with the government. A primary aspect of their business model is to leverage past performance on a program in order to acquire new development contracts where they basically get to build nearly the same thing yet again. Which begs the question: If the capability they just built as part of some traditionally operated contract vehicle is now generic, service-enabled and generally available as GOTS to the rest of the DoD (or even the entire government) then what incentive does a System Integrator have to build it if they are unable to leverage that direct experience and charge dollars for hours to build a very similar service for someone else? It’s a long-winded question to be sure, but one that needs to be addressed in the business model for a shared services world. This author could imagine the situation where System Integrators would go out of their way to make sure that they did not build a service that was capable of being used in a generic fashion on a shared services infrastructure. It would help them protect their current business model.
Now imagine the same scenario with a slight twist. Imagine that this fictional System Integrator builds this fictional service and the service is registered as a shared service via NCES. This service is then consumed by a number of other applications and programs, providing greater efficiencies in time and cost. But imagine that the System Integrator that originally built the service now receives continued revenue based on the usage of that service. This assumes that somehow there are mechanisms in place to allow a charge model for consumers of shared services. It is beyond the scope of this document to delve into the subject of how the government might craft such a "charge for the usage of shared services" model for the service consumers. Further imagine that the revenue gained from the consumers also benefits the original PEO that commissioned the shared service in the first place.
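To make the "charge for usage" idea a bit more concrete, here is one way such metering might be sketched: the broker counts each invocation per consuming program and rolls the counts up into a chargeback figure that could flow back to the provider and its PEO. The class names, consumer names and the flat per-call rate are all hypothetical; this is not a description of any actual NCES mechanism.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of per-consumer usage metering for a shared service.
// Names and the flat per-call rate are illustrative, not any real NCES API.
public class UsageMeter {

    private final Map<String, Long> callsByConsumer = new HashMap<>();
    private final double ratePerCall; // dollars charged per invocation

    public UsageMeter(double ratePerCall) {
        this.ratePerCall = ratePerCall;
    }

    // Called by the broker every time a consumer invokes the shared service.
    public void recordInvocation(String consumerId) {
        callsByConsumer.merge(consumerId, 1L, Long::sum);
    }

    // Roll the counters up into a chargeback figure for one consumer.
    public double chargeFor(String consumerId) {
        return callsByConsumer.getOrDefault(consumerId, 0L) * ratePerCall;
    }

    public static void main(String[] args) {
        UsageMeter meter = new UsageMeter(0.002); // made-up rate per call
        meter.recordInvocation("LogisticsApp");
        meter.recordInvocation("LogisticsApp");
        meter.recordInvocation("C2Dashboard");
        System.out.println("LogisticsApp owes: $" + meter.chargeFor("LogisticsApp"));
    }
}
```

Whether the charge is per call, per SLA tier or per subscription is a business-model decision; the point of the sketch is only that usage has to be attributed to a consumer before any revenue can flow back to the producer.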
The question of why a program would use a shared service is somewhat more complicated. It largely revolves around trust and control. If a C2, ISR or perhaps even a weapon fire control system utilizes some shared service to perform a critical task, it places an enormous amount of trust in that service, the infrastructure that provisions that service and the people that built it. It is not trite to say that many lives, soldier and civilian, young and old, are in harm's way and at high risk with some of these systems. The financial and budgetary reasons why a shared service might need to be used are not nearly compelling enough for the PM of a C2 application to utilize some shared service.
The most common and compelling reason why an application or program would want to utilize a shared service is that they cannot get the information otherwise. Examples of this include intelligence information, logistics information, as well as troop and equipment readiness data. What makes a shared services infrastructure such a compelling value proposition, then, is twofold. Firstly, the PM for the consuming application does not have to go out to each of the data sources and individually negotiate the details of how the integration between the two systems will operate. All they have to do is negotiate and encode the SLA and QoS with NCES. Secondly, there is a certain level of distrust between the consumer and the producer of information, as there is no prior indication that the producer of the service will be able to deliver to the level of quality that is demanded by the consumer, and for very good reason, as was discussed earlier; the service producer is not necessarily an expert, nor does it have all of the appropriate infrastructure on hand to satisfy the consumer. This is where the NCES shared services infrastructure is strategically important. NCES should be able to provide the appropriate level of assurance and trust that services it brokers will be able to satisfy the consumers' SLAs, and if they do not, then NCES should have the appropriate detection, governance and recourse in place to help the consuming application complete its mission.
This type of incentive system is a critical success factor. The ability to provide trust, adjudication, consistency and the appropriate business model will accomplish far more than any technology base, standards or set of products can provide.
Tuesday, October 24, 2006
Making it easier for Organizations to Publish Services
Publishing a service is far more complex an exercise than simply registering a WSDL in a UDDI registry. The technical problems that have to be addressed by a service provider are non-trivial. Equally important are the political, financial and governance issues that surround providing a service to a community like the DoD. The Federal Government, the DoD and its NCES program, in implementing a shared-services infrastructure, need to make it easy for a service to be published on the Global Information Grid (GIG).
The technical tasks involved are only one aspect of service production. These generally include: registering a service description in the service registry; making sure that the application infrastructure such as the application server hardware, network, etc. is scalable and up to the task of handling the kinds of loads that are expected of a DoD-wide service; and keeping track of service quality metrics and detecting and reporting violations of Service Level Agreements (SLAs). However, these are only a small part of the kinds of operations that should be required from a shared-services infrastructure.
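As a rough illustration of the "keep track of service quality metrics and detect SLA violations" task, the sketch below records observed response times and flags any invocation that exceeds a negotiated latency ceiling. The threshold value and the class names are assumptions made for the example only.

```java
import java.util.ArrayList;
import java.util.List;

// Rough sketch of SLA monitoring: record response times and flag any
// invocation that exceeds a negotiated latency ceiling. Threshold and
// names are assumptions for illustration.
public class SlaMonitor {

    private final long maxLatencyMillis;        // ceiling negotiated in the SLA
    private final List<Long> samples = new ArrayList<>();

    public SlaMonitor(long maxLatencyMillis) {
        this.maxLatencyMillis = maxLatencyMillis;
    }

    // Record one observed invocation; report a violation if it breached the SLA.
    public void record(long latencyMillis) {
        samples.add(latencyMillis);
        if (latencyMillis > maxLatencyMillis) {
            System.out.println("SLA violation: " + latencyMillis + " ms > "
                    + maxLatencyMillis + " ms");
        }
    }

    // Simple aggregate metric the provider might report back to the registry.
    public double averageLatency() {
        return samples.stream().mapToLong(Long::longValue).average().orElse(0.0);
    }

    public static void main(String[] args) {
        SlaMonitor monitor = new SlaMonitor(500); // 500 ms ceiling, made up
        monitor.record(120);
        monitor.record(730);  // triggers the violation report
        System.out.println("Average latency: " + monitor.averageLatency() + " ms");
    }
}
```

In a real shared-services environment this kind of monitoring would sit in the infrastructure, not in each provider's code, so that both sides trust the same measurements.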
Setting up the mechanisms for a competitive services marketplace is a key element in giving service providers a chance to get their services used by potential service consumers. The following is a representative (and certainly not exhaustive) list of possible service infrastructure capabilities. Some of these topics will be covered in more detail in later posts on this blog.
Another easily overlooked characteristic of the shared-services infrastructure is one that helps protect producers of services from becoming victims of abusive consumers. Service providers live and die by how well they satisfy their SLAs and Quality of Service objectives. For example, assume there is a consuming system that somehow utilizes a service in a manner not consistent with how it should be consumed (such as an inadvertent over-zealous invocation pattern … sort of a programming bug that causes a denial-of-service attack on a shared service). The infrastructure needs to provide a mechanism that helps protect the shared service and its all-important service metrics. In short, service providers need to trust that the shared services infrastructures (such as NCES) will provide a fair place to conduct business and provide technical and procedural mechanisms (a.k.a. governance) to help mitigate the risk of publishing a service.
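One concrete way an infrastructure could shield a provider from the "accidental denial-of-service" scenario described above is a per-consumer rate limit enforced in front of the service. The token-bucket sketch below is a generic illustration with made-up limits, not a feature of any particular product.

```java
// Token-bucket sketch of a per-consumer rate limit that a shared-services
// infrastructure could enforce in front of a provider. Limits are made up.
public class TokenBucket {

    private final double capacity;       // maximum burst size
    private final double refillPerSec;   // sustained allowed rate
    private double tokens;
    private long lastRefillNanos;

    public TokenBucket(double capacity, double refillPerSec) {
        this.capacity = capacity;
        this.refillPerSec = refillPerSec;
        this.tokens = capacity;
        this.lastRefillNanos = System.nanoTime();
    }

    // Returns true if the call may proceed, false if the consumer should be throttled.
    public synchronized boolean tryAcquire() {
        long now = System.nanoTime();
        double elapsedSec = (now - lastRefillNanos) / 1_000_000_000.0;
        tokens = Math.min(capacity, tokens + elapsedSec * refillPerSec);
        lastRefillNanos = now;
        if (tokens >= 1.0) {
            tokens -= 1.0;
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        // Allow bursts of 5 calls, refilling at 2 calls per second (illustrative).
        TokenBucket bucket = new TokenBucket(5, 2);
        for (int i = 0; i < 8; i++) {
            System.out.println("call " + i + " allowed? " + bucket.tryAcquire());
        }
    }
}
```

Throttling a runaway consumer this way protects the provider's SLA metrics without the provider having to absorb the blame for someone else's bug.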
The lynchpin in the entire infrastructure effort is to provide a set of services that allow consumers and producers of services to work through a single trusted environment. The infrastructure must provide a consistent language for governance; for example, service level agreements, violation notifications, as well as processes and policies for non-compliant providers and consumers to regain compliance. It must include a path for consumers to switch services if they feel the need. It must provide providers of services a set of mechanisms by which incentives for the service provider can be realized. This speaks directly to your question: "How can we maximize the flexibility and reusability of core services?" The core services will only be used if and when NCES can provide a consistent and trusted shared services environment.
Service Infrastructure Value: Making SOA Easier
Here's the idea: shared services are published through a service broker via one or more interoperable interfaces that will be connected to and invoked by some consumer. There are some practical issues that surround this. The practical issues that I would like to focus on in the next several posts include what I call the "friction" involved in consuming or producing services through a shared services infrastructure. The ideas here are targeted more toward those that are contemplating standing up a central service infrastructure or brokering services within their organization. Remember that simply because you have a service infrastructure, it doesn't mean that service consumers and producers will come flocking. There is a level of responsibility with the service infrastructure provider that can significantly reduce the "friction" involved in getting producers and consumers to come together. As it turns out, it's not quite as natural as you might think.
Wednesday, October 11, 2006
The Federal SOA Watershed.
With world events being what they are, there is enormous pressure on federal budgets and organizations to not only do more with less, but also to share what they have so that their systems, organizations and people deliver the greatest value. Service-Oriented Architectures (SOA) and Service Infrastructure software enable the Federal Government to take the critical steps toward sharing information and processes among agencies. This article provides an overview of what a federal agency needs to think about organizationally, politically and technologically with regard to SOA technologies and practices.
There has been much hype about SOA in the last few years; however, the thing to understand about SOA technology is that it represents a significant leap in the maturity of distributed systems. Sharing resources is the foundation of a sound SOA strategy. These shared resources should be available to consumers with as little effort as possible. Organizations should expose their shared systems via completely standards-driven interfaces that do not require specialized software or hardware to be purchased by those wishing to use the shared service. The most important category of SOA software with respect to strategic SOA initiatives is SOA Infrastructure software such as Enterprise Service Bus (ESB), Service Enablement Platforms and Data Service Platforms. The reason infrastructure software is so important to consider is that it provides the technology platform that enables proactive sharing of services while mitigating many of the risks and issues that have killed other distributed systems or data/service sharing efforts.
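As a small illustration of what a "completely standards-driven interface" looked like in practice at the time, the sketch below publishes a trivial SOAP/WSDL endpoint using the JAX-WS API that shipped with Java SE of that era; any standards-compliant client can consume it from the generated WSDL alone. The service name and operation are invented for the example.

```java
import javax.jws.WebMethod;
import javax.jws.WebService;
import javax.xml.ws.Endpoint;

// Minimal JAX-WS sketch of a standards-driven (SOAP/WSDL) service interface.
// The service name and operation are invented for illustration; consumers
// need only the WSDL, not any vendor-specific software.
@WebService
public class ReadinessService {

    @WebMethod
    public String getReadinessStatus(String unitId) {
        // A real service would query an authoritative data source here.
        return "Unit " + unitId + ": READY";
    }

    public static void main(String[] args) {
        // Publishing the endpoint also exposes the WSDL at ...?wsdl
        Endpoint.publish("http://localhost:8080/readiness", new ReadinessService());
        System.out.println("Service published at http://localhost:8080/readiness?wsdl");
    }
}
```

The point is not the particular toolkit but that the contract (the WSDL) is the only thing a consumer needs, which is what makes proactive sharing across agencies feasible.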
Strategic SOA presents some significant challenges to the organization of any federal agency. Most significantly, shared services represent a completely new way of doing business. Instead of a "need to know" doctrine, where agencies do not share information unless there is a specific need from some specific other agency, the doctrine now is better characterized as "need to share." Agencies need to proactively begin sharing data that is or may be useful to other agencies. The organizational preparation that an agency needs to accomplish this reversal of doctrine should not be underestimated. Part of that preparation begins with selecting the correct SOA Infrastructure software, but even more vital to this transition is partnering with a System Integrator that offers extensive consulting practices covering many organizational areas such as cost, budgeting, governance and business strategy.
The politics of SOA will most likely be quite simple. Government agencies shall share data proactively, actively encourage other organizations to utilize their shared services and utilize the shared services of others to the greatest extent possible, or face budget cuts and poor performance reviews. The technology is there and is ready to be utilized today to get maximum value from the federal government's existing assets. The "need to share" doctrine can be realized, and the momentum toward a shared service infrastructure across the government is growing and doesn't look like it's going to stop any time soon.
Why write about SOA for the Federal Government?
Of significant note is the sheer size and mass of the US Federal Government. In the world of Enterprise Architecture and distributed systems engineering, I would be hard pressed to find an organization that is larger, more complicated in scope or gravity, or comparable politically. The military alone is a vast organization that will benefit in ways I think even it does not quite understand. The civilian side of the federal government, with its effort to accomplish its "line-of-business" consolidation, dwarfs even the largest of private companies in the scope and complexities involved. The Intelligence Community has an obvious need to share information that can only be realized through SOA principles and a shared service infrastructure.
In all, I think it's clear that the Federal Government crystallizes and clarifies the needs, challenges and solutions for which SOA was invented. This blog was meant as a place to describe the thoughts, challenges and great ideas that I've run across during my time practicing SOA in the Federal space.