How do you write an RFP when one vendor controls the interface?
Because the public internet is a network of networks, it relies on open standards. Each network operates independently and makes its own choices about what to implement, yet all of the networks can work together as one apparently seamless global network because they use the same standards.
This general pattern of independent implementation of open standards is an important part of what has made the public internet a success. But two developments are underway that may threaten that model. These days, we may be facing a “post-standards” world.
The pressure for lock-in
One part of the pressure on open standards comes from the push toward vendor lock-in. Device manufacturers work hard to keep users attached to their devices. Without jailbreaking, it is essentially impossible to install an application on an Apple iOS device without using the App Store, and Android systems are nearly as controlled.
Meanwhile, apps draw users away from general-purpose programs like web browsers, giving them instead an application purpose-built for a particular task. Often, users are only vaguely aware that they are using the network at all when interacting with the application, even though many applications are essentially special-purpose browsers that use the same hypertext transfer protocol (http or https) that web browsers do. What is different about apps is how they use http.
The current paradigm in web services is mostly REpresentational State Transfer, or REST. REST relies on http, but the Application Programming Interfaces (APIs) built using REST are themselves often not open. Purpose-built apps are needed because they are sometimes the only clients that know how to interpret the custom REST API.
Other times, the API is published so that outside developers can build atop the service. But because these APIs are not open standards, there is always one organization that controls them. That organization changes both the API and the rules for its use according to its own commercial needs. Services built on top of such APIs are therefore vulnerable to changes in a way that services built atop open standards are not.
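To make the vulnerability concrete, here is a minimal sketch in Python. The payload shapes are entirely hypothetical: every field name and version here stands in for something a single vendor defines unilaterally. When the vendor reshapes the response, a client hard-coded against the old shape simply breaks.

```python
import json

# Hypothetical response bodies from one vendor's custom REST API.
# Field names, nesting, and versioning are all assumptions for
# illustration: the vendor alone defines them and may change them.
V2_RESPONSE = json.dumps({"items": [{"text": "hello"}, {"text": "world"}]})
V3_RESPONSE = json.dumps({"entries": [{"body": "hello"}]})  # vendor renamed the fields

def parse_timeline(raw: str) -> list[str]:
    """Parse the v2 timeline format; the client hard-codes this shape."""
    payload = json.loads(raw)
    return [item["text"] for item in payload["items"]]

print(parse_timeline(V2_RESPONSE))        # the v2 shape parses fine

try:
    parse_timeline(V3_RESPONSE)           # the reshaped API breaks the client
except KeyError as err:
    print(f"client broke on missing field {err}")
```

Against an open standard, the v2 document would remain a fixed reference both sides could hold to; against a vendor-controlled API, the client's only recourse is to chase the change.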
“The code is the standard”
At the same time that vendors are working for lock-in, many (particularly in the free software world) are abandoning standards development in favor of shipping code.
The Internet Engineering Task Force (IETF) famously uses the slogan, “rough consensus and running code.” But its standards activities depend on both elements. In recent years, some projects have taken to relying exclusively on living documents in repositories (particularly on GitHub), or to treating a project’s shipping code as the “standard”. Both approaches may cause long-run problems.
The difficulty with “living standards” is that there is no stable thing to refer to. People making purchasing decisions need stable documents to cite in RFPs, and vendors will not agree to deliver a moving target. While living standards permit rapid iteration, they do not provide the kind of stability that makes contract writing or enforcement possible.
The difficulty with using shipping code as a “standard” is subtler but in some ways more dangerous. A shipping-code standard means, in effect, that there is no standard at all: the software works as programmed. There is no way to measure whether it is doing what it is supposed to do. There is also no way for an alternative implementation to compete with it, because any incompatibilities are always a bug in the alternative.
Unlike some of the earlier posts in this blog, this one has no clear answer to offer for this set of issues. The winner-takes-all nature of internet service offerings means that vendors will continue to try to lock people in. The fashions for living standards and for shipping-code-as-standard both come from the rapid pace of change in technology, and until standards development organizations change their working methods those fashions will doubtless persist. But some mitigations are available.
First, keep the potential for interoperability in mind as much as possible when making service selections. When buying a service or selecting software, make your possible migration strategy one of the criteria you evaluate. Of course, don’t be unrealistic about this: giving up flexibility in some parts of the system for performance or convenience can be a good idea. But don’t accept lock-in by accident, because you are then accepting a hidden future cost. Also, keep on top of your vendors’ changing policies: just because you selected for standards conformance in the first place does not mean that conformance will remain.
Second, if you write RFPs or evaluate responses, specify particular versions of specifications whenever possible. Stable documents are best, but even pinning a particular version of a specification will protect your contract language in the event the specification changes in a way that is bad for your service. At the same time, for living standards, require your vendors to keep up with the evolution, and do the same in your own services; otherwise, you could find yourself or your vendor conforming to a standard that nobody else uses any more, and you will be left behind. Internet software and systems are more plastic than ever, and we will have to adapt to that reality so that we keep building better networks.