APIs Win Because They’re Easy

I went out to Jimmy John’s the other night to get a sandwich with a good buddy of mine. I ordered a standard ham-and-cheese sandwich, and my buddy got an Italian hoagie with banana peppers. They were quite tasty! I guess we were in the store, conducting this transaction, for less than 5 minutes. Pretty fast. And easy.

Easy wins. Given the choice, people pick the easier option, and they’re willing to pay more, or receive less, to get it. “Fast food” is cheap, but it isn’t the cheapest; often it’s even less expensive to buy ingredients at the grocery store and prepare a meal yourself.

We paid about $15 USD for two sandwiches. (We didn’t order drinks because soda is toxic sludge and doesn’t belong in the human diet.) Now, I could have gone to the grocery across the parking lot, purchased a pound of ham, a loaf of bread, and some lettuce and tomato for about $20, and had plenty of ingredients to make 5 sandwiches or more. Instead, we paid for the convenience, and we were happy to do it.

But fast food does not always deliver the best nutrition. It’s easy, and it might even be “tasty”, but sometimes it’s not good. (And yes, I know that the film “Super Size Me” was criticized for being less than 100% truthful, and less than scientific in its analysis.) In a free economy, for every potential purchase, the consumer decides whether the combination of ease, quality, and price of the thing on offer represents a good deal. Generally, “easy” is worth good money.

And that makes sense. People have things to do, and for some people, preparing food is not high on the priority list. Likewise, people will pay $50 to get the oil changed in their car, even though they could do it themselves with about $28 worth of supplies. People pay for convenience when maintaining their cars.

And the same is true when managing and operating information systems. People will often sacrifice quality in order to gain some ease. Or they may pay more for greater ease. That’s often the right choice.

The adoption of Windows Server in the enterprise, starting in 1996 or so, was a perfect example of that. People had so-called “Enterprise Grade” Unix-based options available. But Windows was simple to deploy, and easy to configure for basic file sharing, printer sharing, and database access. Who can argue with those choices? Sure, there were drawbacks, but even so, the preference for easier solutions continues.

As another example, these days, for interconnecting disparate information systems, REST APIs are much, much more popular than SOAP. There are numerous indicators of this trend, but one of them is the Google Insights for Search data:

(Can you guess which line in the above is “REST” and which represents the search volume on “SOAP”?) The popularity of REST may seem curious, to law-and-order IT professionals. SOAP provides a means to explicitly state the contract for interconnecting systems, in the WSDL document. And WSDL provides a way to rigorously specify the type of the data being sent and received, including the shape of the XML data, the XML namespaces, and so on. SOAP also provides a way to sign documents, as a way to provide non-repudiation. In the REST API model, none of that is available. You’d have to do it all yourself.
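To make the contrast concrete, here’s a sketch of the same hypothetical “look up a word” operation expressed both ways. The host, paths, element names, and namespace are invented for illustration, not taken from any real service:

```python
# The same hypothetical dictionary-lookup request, expressed two ways.
# Host, paths, and XML element names here are invented for illustration.

# REST style: the operation and its argument live in the URL itself.
rest_request = (
    "GET /definitions?word=hoagie HTTP/1.1\r\n"
    "Host: api.example.com\r\n"
    "\r\n"
)

# SOAP style: an XML envelope, POSTed to a single endpoint. The operation
# name, argument types, and namespaces would be pinned down in a WSDL.
soap_request = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetDefinition xmlns="http://example.com/dictionary">
      <Word>hoagie</Word>
    </GetDefinition>
  </soap:Body>
</soap:Envelope>"""
```

The SOAP version buys you a machine-checkable contract; the REST version buys you something you can type into a browser. The market has been voting for the latter.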

But apparently, people don’t want all that capability. Or rather, they may want it, but they want ease of development more. When given the choice between greater capability and more strictures (SOAP), and a simpler, easier way to connect (REST APIs), people have been choosing the easy path, and the trend is strengthening.

APIs are winning because they’re easy.

Hardware is Dead! Tablets will Explode!

Jay Goldberg, writing for VentureBeat, reports that he purchased a 7″ no-name touchscreen tablet, with 4GB of RAM, Wi-Fi, and Android Ice Cream Sandwich, for $45 without haggling in Shenzhen, China. A revelation, he says.

  • Hardware margins are under siege. Making money on hardware is not a long-term defensible position. Companies that hope to make money need to market an “experience”. iOS is one such “experience”.
  • The number of different types of tablets will explode, and the number of actual tablets will explode. Derivative special-purpose devices based on tablet hardware will also explode. Touch-screens on your fridge, that sort of thing.

By the way: Hardware is Dead! Tablets will explode! It all sounds so apocalyptic. Why do we use such terms when discussing the technology business? It’s almost like we’re trying to scare ourselves.

This latter prediction – that the number of computing devices in the tablet form-factor will explode – isn’t really new. Business Insider made the same prediction a month ago.

Similarly, Fortune magazine ran a headline in February of this year about the coming explosion in tablets. In April, Forrester Research predicted the explosion, too, though by estimating 760M tablets in use by 2016, Forrester appears to have actually underestimated the trend.

Mr Goldberg seems willing to make the easy predictions, echoing all the people who came before him. He also doesn’t offer any deeper insight. The rapid growth in the popularity of tablet-based hardware may be the interesting headline, but to me, the implications are much broader.

  • a huge rise in the demand for apps. I am not one who imagines that the touchscreen in the door of your mom’s fridge needs access to an App Store. There is no need for a general-purpose computing experience embedded in refrigerator doors. On the other hand, a computer in a refrigerator needs to run an app, a very specific app. So the number of apps will explode.
  • Specialized apps must be created by specialized developers. Extrapolate from the refrigerator into all the other specialized embedded systems, for all the other specialized user experiences. The demand for talented application developers will also explode.
  • The complement to apps and developers of course is cloud APIs, compute, and storage. Expect huge demand in all of these pieces, in direct correlation to the number of tablets sold.

But, I would say that, wouldn’t I? I work for the leading API Management company. True enough – I am biased. But I had this view before beginning my job here. I knew the need for apps and storage and cloud compute was exploding. I am an investor, though not one with a particularly large store of liquid assets. What I invest is my time, and I chose to work for Apigee because I believe it’s a good investment of my valuable time.

How not to do APIs; and …My New Job

Having access to a quick way to get dictionary lookups of English words while using the computer has always been useful to me. I don’t like to pay the “garish dancing images” ad tax that comes with using commercial dictionary sites like merriam-webster.com; or worse, the page-load-takes-10-seconds tax. I respect Merriam-Webster and appreciate all they’ve done for the English language over the years, but that doesn’t mean I am willing to put up with visual torture today. No, in fact I do not want to download a free e-book, and I do not want to sign up for online courses at Capella University. I just want a definition.

Where’s the API?

I’m a closet hacker, so I figured there had to be an API out there that allowed me to build something that would  do dictionary lookups from … whatever application I was currently using.

And looking back, this is an ongoing obsession of mine, apparently. In my job with Microsoft about 10 years ago, I spent a good deal of my time writing documents using Microsoft Word. I had the same itch then, and for whatever reason I was not satisfied with the built-in dictionary. So I built an Office Research Plugin, using the Office Research Services SDK (bet you never heard of that), that would screen-scrape the Merriam-Webster website and allow MS-Word to display definitions in a “smart pane”. The guts of it were ugly, and brittle: every time M-W changed its page layout, the service would break. But it worked, and it displayed nicely in Word and other MS-Office programs.

The nice thing was that Microsoft published an interface that any research service could comply with, which would allow the service to “plug in” to the Office UI. The idea behind Office Research Services was similar to the idea behind Java Portlets, or Sharepoint WebParts, or even Vista Gadgets, though it was realized differently from any of those. In fact the Office Research Services model was very weird, in that it was a SOAP service, yet the payload was an encoded XML string. It wasn’t even xsd:any. The Office team failed on the data model there.

It was also somewhat novel in that the client, in this case the MS-Office program, specified the interface over which the client and service would interact. Typically the service is the anchor point of development, and the developer of the service therefore has the prerogative to specify the service interface. For SOAP this specification was provided formally in WSDL; these days the way to connect to a REST service can be specified in WADL or plain old documentation. The Office Research Service was different in that the client specified the interface that research services needed to comply with. To me, this made perfect sense: rather than one service and many types of clients connecting to it, the Office Research Service model called for one client (Office) connecting to myriad different research services. It was logically sensible that the client got to specify the interface. But I had some discussions with people who just could not accept that. I also posted a technical description showing how to build a service that complied with the contract specified by Office.

The basic model worked well enough, once the surprise of the-client-wears-the-pants-around-here wore off. But the encoded XML string was an ugly part of the design. Also, like the Portlet model, it mixed the idea of a service with UI; the service would actually return UI formatting instructions. This made sense for the specific case of an Office service, but it was wrong in the general sense. Also, there was no directory of services, no way to discover services, to play with them or try them out; registration was done by the service itself. There are some lessons here in “How not to do APIs”.

Fast Forward 9 Years

We’ve come a long way. Or have we? Getting back to the dictionary service, Wordnik has a nice dictionary API, and it’s free! For reasonable loads, anyway, or until the good people at Wordnik change their minds, I guess.

It seemed that it would be easy enough to use. The API is clear; they provide a developer portal with examples of how to use it, as well as a working try-it-out console where you can specify the URLs to tickle and see the return messages. All very good for development.

But… there are some glaring problems. The Wordnik service requires registration, at which point the developer receives an API key. That in itself is standard practice. But, confoundingly, I could not find a single document describing how to use this API key in a request, nor a single programming example showing how to use it. I couldn’t even find a statement explicitly stating that authentication was required. While the use of an API key is pretty standard, the way to pass the API key is not: Google does it one way, other services do it another. The simplest way to document the authentication model is to provide a short paragraph and show a few examples. Wordnik didn’t do this, and still doesn’t. I learned how to use the Wordnik service by googling and finding hints from 2009 on someone else’s site. That should not be acceptable for any organization offering a programmable service.
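For the record, the two conventions you most commonly see are a key in the query string and a key in a request header. A minimal sketch of both; the parameter name, header name, and host below vary by provider and are assumptions here, not Wordnik’s documented interface:

```python
from urllib.parse import urlencode

API_KEY = "YOUR_API_KEY"  # placeholder; use the key issued at registration
word = "sandwich"

# Convention 1: API key as a query-string parameter.
# (Many services use a parameter named something like "api_key".)
url_with_key = (
    "http://api.example.com/word.json/" + word
    + "?" + urlencode({"api_key": API_KEY})
)

# Convention 2: API key as a request header, e.g. an "X-Api-Key"-style header.
headers = {"X-Api-Key": API_KEY}
```

Either way, one short paragraph plus one example like this in a provider’s docs would save every new developer an afternoon of googling.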

In the end I built the thing – it’s an emacs plugin if you must know, and it uses url.el and json.el to connect to Wordnik and display the definition of the word at point. But it took much longer, and involved much more frustration than was necessary. This is not something that API providers want to repeat.


The benefits of  interconnecting disparate piece-parts have long been tantalizingly obvious.  You could point to v0.9 of the SOAP spec in 1999 as a key point in that particular history, but IBM made a good business of selling integration middleware (MQ they called it) for at least a decade before that.  Even so, only in the past few years have we seen the state of the art develop to enable this on an internet scale.  We have cheap, easy-to-use server frameworks; elastic hosting infrastructure; XML and JSON data formats; agreement on pricing models; an explosion in mobile clients.

Technologies and business needs are aligning, and this opens up new opportunities for integration. To take advantage of these opportunities, companies need the right tools, architectural guidance, and solid add-on building blocks.

I’ve recently joined Apigee, the API Company, to help companies exploit those opportunities.  I’ll say more about my job in the future. Right now, I am very excited to be in this place.


Azure gets a well-deserved REST

In case you had any doubts about ProgrammableWeb’s data showing REST dominating other web API protocols: Microsoft, one of the original authors of SOAP, is fully embracing REST as the strategic web protocol for administering and managing Windows Azure services.

From Gigaom:

The new REST API that controls the entire system is completely rewritten, sources said. “Prior to this release, the Azure APIs were inconsistent. There was no standard way for developers to integrate their stuff in. That all changes now,” said one source who has been working with the API for some time and is impressed.

If you had 2 hours to spend learning stuff about web API protocols, spend 3 minutes understanding SOAP, and the balance on REST.


API Growth, SOAP v REST, etc

From HighScalability.com: John Musser’s GlueCon slides, with interesting data pulled from ProgrammableWeb.com. As I understand it, ProgrammableWeb is mostly a repository of APIs. It’s free to register, and I don’t believe there is any sort of vetting; in other words, anyone can post any web API. Each listing includes an API endpoint, a service.

The main point Musser makes is that APIs are continuing to grow exponentially.  He didn’t go so far as to coin a new law (“Musser’s Law”) for the growth rate, but the trend is pretty certain.

Some other interesting takeaways from the slides:

  • REST v SOAP is one of the themes.  PW’s data shows REST waaay outpacing SOAP and other alternatives.


(For the REST purists who say REST is an architecture and SOAP is a set of envelope standards, my response is, yeah, but… we all know that’s not really true. ) 

  • Musser shows how gosh-darned simple it is to express “Get the current IBM share price” in REST, versus how complex it is in SOAP. This is true, but misleading if you are not careful. Lots of API calls require a small set of input parameters; some do not. For a counter-example, look at the complexity of posting a Twitter message using OAuth, which I wrote about previously.
  • There are companies doing more than a billion API transactions per day. Twitter, Google, Facebook – you figured those. Also AccuWeather, Sabre, Klout, Netflix. Those are just the ones we know about.

I remember registering for Programmable Web about 10 years ago.  There were 4 or 5 online book-sellers in those days, each had different prices. I put together a book price quote service, which would accept an ISBN, then screen-scrape HTML from each of them, and respond with three or four prices – not all vendors listed all books.  It was just a novel example.  I used it in demonstrations to show what an API could do.  It was never used for real commerce.

I published it.  The screen scraping was a brittle joke.  Every time Amazon updated their HTML, the service would break, and people would email me complaining about it.  Though that service died a long while ago, it remains listed on ProgrammableWeb today. Because of that I think it’s important to consider the real implications of the PW numbers – not all the listed services are alive today, and many of them are not “real” even if they are alive.

Even with all that, the steepening upward trend does show the rapid ramp up of API development and usage.  Even if those APIs are not real, there are real developers building and testing them.  And real companies making a business of it (Apigee, to name one).  Also,  it’s pretty clear that JSON is winning.

One can gain only so much insight from viewing slides. It would have been nice to hear Mr Musser’s  commentary as well.

Side note – The term “API” used to denote a programmer’s interface exposed by a library; developers would use an API by linking with the library. The term has silently expanded its meaning to include application message protocols – in other words, how to format and send messages to a service across a network. These things are very different in my mind, yet the same term, API, is used to describe them both these days.
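A sketch of that distinction, with both the “library” and the endpoint invented for illustration:

```python
import json

# Old sense of "API": a function you link against and call in-process.
def lookup(word):
    """Stand-in for a linked dictionary library."""
    return {"word": word, "definitions": ["(stub definition)"]}

entry = lookup("hoagie")  # an ordinary in-process function call

# New sense of "API": a message protocol. You format a request and send
# it to a service over the network. Here we only build the message.
http_request = (
    "GET /v1/word.json/hoagie/definitions HTTP/1.1\r\n"
    "Host: api.example.com\r\n"
    "Accept: application/json\r\n"
    "\r\n"
)

# The response is itself just a formatted message, typically JSON.
wire_response = json.dumps(entry)
```

Same word, “API”, but one is a linker-level contract and the other is a wire-level one.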