Job Postings are so 1990s

So says ongig.com regarding online job postings and online job applications.
I agree. Especially with this part:

Candidates often feel that there is a “black hole” in the application process.

I’ve electronically submitted applications to several larger companies; the experience has been pretty uniformly non-transparent.  Unsatisfying to say the least.  I understand why: it’s expensive wading through the stacks of online resume submissions. But if it’s so expensive, why solicit online applications at all?

As for e-hiring, jobs are not a commodity, and it’s inappropriate to advertise them in a catalog as if they were a book or a replacement part for my lawnmower.

I’m no longer comfortable joining “a company”. I need to meet and evaluate the hiring manager; I want to know the team’s strategy and how it synchronizes with the corporate strategy. I want a 360° view of the team, and I need to be confident that the team leaders understand the strategy and how they contribute to success. If key players on the team are not comfortable with strategic thinking, or are reluctant to discuss strategy options, that’s a red flag for me. All of that is hard to do online. You can start online, but you need to progress quickly to richer conversations.

The internet is fascinating and wonderful but, in case you needed one, this is yet another illustration that it cannot replace all human interaction.

Azure gets a well-deserved REST

In case you had any doubts about ProgrammableWeb’s data showing REST dominating other web API protocols: Microsoft, one of the original authors of SOAP, is fully embracing REST as the strategic web protocol for administering and managing Windows Azure services.

From Gigaom:

The new REST API that controls the entire system is completely rewritten, sources said. “Prior to this release, the Azure APIs were inconsistent. There was no standard way for developers to integrate their stuff in. That all changes now,” said one source who has been working with the API for some time and is impressed.

If you had two hours to spend learning about web API protocols, spend three minutes understanding SOAP, and the balance on REST.
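
To make that concrete: a RESTful management call is just an HTTP request against a resource URL. Here’s a sketch in Python using the requests library. The endpoint, version header, and certificate files follow the general shape of the Azure Service Management API, but treat the specifics as illustrative, not authoritative.

    # Sketch: listing hosted services via a RESTful management API.
    # The endpoint and header follow the general shape of Azure's
    # Service Management API; treat the specifics as illustrative.
    import requests

    subscription_id = "00000000-0000-0000-0000-000000000000"  # placeholder
    url = ("https://management.core.windows.net/"
           + subscription_id + "/services/hostedservices")

    response = requests.get(
        url,
        headers={"x-ms-version": "2012-03-01"},   # API version header
        cert=("mgmt-cert.pem", "mgmt-key.pem"),   # client-cert authentication
    )
    response.raise_for_status()
    print(response.text)  # an XML listing of hosted services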


Microsoft’s Meet Windows Azure event

Thursday last week, Microsoft launched some new pieces of its Azure cloud platform.

The highlights, in order of importance (my opinion):

  1. Virtual Machine hosting. Since 2010, Microsoft had tried to differentiate its cloud offering from Amazon’s EC2 by providing “platform services” instead of infrastructure services (OS hosting). But, I suppose in response to customer demand, it will now offer the ability to host arbitrary Virtual Machines: Windows Server of course, but also Linux VMs of various flavors (read the Fact Sheet for details). This means you will now be able to use Microsoft as a hosting provider, in lieu of Rackspace or Amazon, for arbitrary workloads. MS will still offer the higher-level platform services, but you won’t need to adopt those services in order to get value out of Azure.
  2. VPN – you can connect those hosted machines to your corporate network via a VPN. It will be as if the machines are right down the hall.
  3. Websites – Microsoft will deliver better support for the most commonly deployed workload. Previously, websites were supported through a convoluted path, in order to comply with the Azure application model (described in some detail in this 2009 paper from David Chappell). With the announced changes it will be much simpler. Of course there’s support for ASP.NET, but also Python, PHP, Java and node.js (see the sketch below).
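
To give a sense of what “much simpler” means for, say, the Python case: you hand the host an ordinary web application, with no Azure-specific packaging. A minimal WSGI sketch – nothing in it is Azure-specific, which is the point:

    # hello.py: a minimal WSGI application, the sort of plain Python
    # workload the new Websites feature is meant to host directly.
    from wsgiref.simple_server import make_server

    def application(environ, start_response):
        # Every request gets the same plain-text greeting.
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"Hello from a plain WSGI app\n"]

    if __name__ == "__main__":
        # Run locally for testing; a hosting platform supplies its own
        # server and calls `application` directly.
        with make_server("", 8000, application) as httpd:
            print("Serving on http://localhost:8000 ...")
            httpd.serve_forever()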

As with its entry into any new venture, Microsoft has been somewhat constrained by its existing partners. Steve Ballmer couldn’t jump with both feet into cloud-based platforms because many existing MS partners were a.) dependent upon the traditional, on-premises model of software delivery, or b.) in the cloud/hosting business themselves.

In either case they’d perceive a shift by MS toward the cloud as a threat, or at the very least disruptive. Microsoft itself was also highly oriented toward on-premises software licensing. So it makes sense that MS was initially conservative with its cloud push. With these moves you can see MS steadily increasing pressure on its own businesses and its partners to move with it into the cloud model. And this is inevitable for Microsoft, as Amazon continues to gain enterprise credibility with EC2 and its related AWS offerings.


The upshot for consumers of IT is that the downward price pressure on cloud platforms will continue. Also look for broader support of cloud systems by tools vendors, which means cloud-based platforms will become mainstream more quickly, even for conservative IT shops.


Does Metcalfe’s Law apply to Cloud Platforms and Big Data?

Cloud Platforms and Big Data – Have we reached a tipping point?  To think about this, I want to take a look back in history.

Metcalfe’s Law was named for Robert Metcalfe, one of the true internet pioneers, by George Gilder, in an article that appeared in a 1993 issue of Forbes Magazine. It states that the value of a network increases with the square of the number of nodes. It was named in the spirit of “Moore’s Law” – the popular aphorism, attributed to Gordon Moore, that the density of transistors on a chip roughly doubles every 18 months. Moore’s Law succinctly captured why computers grew more powerful by the day.

With the success of “Moore’s Law”, people looked for other “Laws” to guide their thinking about a technology industry that seemed to grow exponentially and evolve chaotically, and “Metcalfe’s Law” was one of them. That these “laws” were not really laws at all, but just arguments, predictions, and opinions, was easily forgotten. People grabbed hold of them.

Generalizing a Specific Argument

Gilder’s full name for the “law” was “Metcalfe’s Law of the Telecosm”, and in naming it, he was thinking specifically of the competition between two telecommunications network standards: ATM (Asynchronous Transfer Mode) and Ethernet. Many people were convinced that ATM would eventually “win” because of its superior switching performance for applications like voice, video, and data. Gilder did not agree. He thought Ethernet would win, because of the massive momentum behind it.

Gilder was right about that, and for the right reasons. And so Metcalfe’s Law was right! Since then, though, people have argued that Metcalfe’s Law applies equally well to any network: a network of business partners, a network of retail stores, a network of television broadcast affiliates, a “network” of tools and partners surrounding a platform. But generalizing Gilder’s specific argument this way is sloppy.

A 2006 article in IEEE Spectrum on Metcalfe’s Law says flatly that the law is “Wrong”, and explains why: not all “connections” in a network contribute equally to the value of the network. Think of Twitter – most subscribers publish very little information, and to very limited circles of friends and family. Twitter is valuable, and it grows in value as more people sign up, but Metcalfe’s Law does not provide the metaphor for valuing it. Or think of a telephone network: most people spend most of their time on the phone with around 10 people. Adding more people to that network does not increase the value of the network for those people. Adding more people does not cause revenue to rise according to the O(n²) metric implicit in Metcalfe’s Law.

Clearly the direction of the “law” is correct – as a network grows, its value grows faster than its size. We all feel that to be implicitly true, and so we latch on to Gilder’s aphorism as a quick way to describe it. But just as clearly, the law is wrong in general.

Alternative “Laws” also Fail

The IEEE article tries to offer alternative valuation formulae, suggesting that the true value is not O(n²) but O(n log n), and specifically proposes this as a basis for the valuation of markets, companies, and startups.
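
To see why the choice of formula matters so much for valuation, compare the two curves numerically. A quick sketch – the absolute numbers are meaningless, the divergence is the point:

    # Compare the Metcalfe (n^2) and the IEEE article's (n log n)
    # valuation curves. The absolute values mean nothing; what matters
    # is how quickly the two estimates diverge as the network grows.
    import math

    for n in (1_000, 1_000_000, 100_000_000):
        metcalfe = float(n) ** 2
        nlogn = n * math.log(n)
        print(f"n={n:>11,}  n^2={metcalfe:.1e}  "
              f"n*log(n)={nlogn:.1e}  ratio={metcalfe / nlogn:,.0f}x")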

That suggestion is arbitrary. I find the mathematical argument presented in the IEEE article hand-wavy and unpersuasive. The bottom line is that networks are different, and there is no one law – not Metcalfe’s, nor Reed’s, nor Zipf’s, as suggested by the authors of that IEEE article – that applies generally to all of them. Metcalfe’s Law applied specifically but loosely to the economics of Ethernet, just as Moore’s Law applied specifically to transistor density. Moore’s Law was not general to any manufacturing process, nor is Metcalfe’s Law general to any network.

Sorry, there is no “law”. One needs to understand the economic costs and potential benefits of a network, and the actual conditions in the market, in order to assign a value to that network.

Prescription: Practical Analysis

Ethernet enjoyed economic advantages in production cost and in R&D that was shared broadly across the industry. Ethernet reached an economic tipping point, and beyond that point, other factors – like the superior switching performance of alternatives – were simply not enough to overcome the existing investment in tools, technology, and understanding of Ethernet.

We all need to apply that sort of practical thinking to computing platform technology. For some time now, people have been saying that Cloud Platform technology is the Next Big Thing. There have been some skeptics, notably Larry Ellison, but even he has come around and is investing.

Cloud platforms will “win” over existing on-premises platform options when it makes economic sense to choose them. In practice, this means when tools for building, deploying, and managing systems in the cloud become widely available, and just as good as the tools in wide use for on-premises platforms.

Likewise, “Big Data” will win when it is simply better than traditional data analysis for mainstream analysis workloads. Sure, Facebook and Yahoo use MapReduce for analysis, but, news flash: unless you have 100 million users, your company is not like Facebook or Yahoo. You do not have the same analysis needs. You might want to analyze lots of data, even terabytes of it. But the big boys are doing petabytes. Chances are, you’re not like them.

This is why Microsoft’s Azure is so critical to the evolution of cloud offerings. Microsoft brought computing to the masses, and the company understands the network effects of partners, tools providers, and developers. It’s true that Amazon has a lead in cloud-hosted platforms, and it’s true that even today, startups prefer cloud to on-premises. But EC2 and S3 are still not commonly considered as general options by conservative businesses. Most banks, when revising their loan-processing systems, are not putting EC2 on their short list. Microsoft’s work in bringing cloud platforms to the masses will make a huge difference in the marketplace.

I don’t mean to predict that Microsoft will “win” over Amazon in cloud platforms; I mean only to say that Microsoft’s expanded presence in the space will legitimize Cloud and make it much more accessible. Mainstream. It remains to be seen how soon or how strongly Microsoft will push on Big Data, and whether we should expect to see the same effect there.

The Bottom Line

Robert Metcalfe, the internet pioneer, himself apparently went so far as to predict that by 2013, ATM would “prevail” in the battle of the network standards. Gilder did not subscribe to such views. He felt that Ethernet would win, and Metcalfe’s Law was why. He was right.

But applying Gilder’s reasoning blindly makes no sense. Cloud and Big Data will ultimately “win” when they mature as platforms and deliver better economic value than the existing alternatives.


Windows Azure goes SSD

In a previous post I described DynamoDB, the SSD-backed storage service from Amazon, as a sort of half-step toward better scalability.

With the launch of new Azure services from Microsoft, it appears that Microsoft will offer SSD, too. Based on the language used in that report – “the new ‘storage hardware’ is thought to include solid state drives (SSDs)” – this isn’t confirmed, but it sure looks likely.

I haven’t looked at the developer model for Azure to find out if the storage provisioning is done automatically and transparently, as I suggested it should be in my prior post.  I’ll be interested to compare Microsoft’s offering with DynamoDB in that regard.

In any case, notice is now given to magnetic disk drives: do not ask for whom the bell tolls.

API Growth, SOAP v REST, etc

From HighScalability.com: John Musser’s GlueCon slides, with interesting data pulled from ProgrammableWeb.com. As I understand it, ProgrammableWeb is mostly a repository of APIs. It’s free to register, and I don’t believe listings are vetted – in other words, anyone can post any web API. Each listing includes an API endpoint for a service.

The main point Musser makes is that APIs are continuing to grow exponentially.  He didn’t go so far as to coin a new law (“Musser’s Law”) for the growth rate, but the trend is pretty certain.

Some other interesting takeaways from the slides:

  • REST v SOAP is one of the themes.  PW’s data shows REST waaay outpacing SOAP and other alternatives.


(For the REST purists who say REST is an architecture and SOAP is a set of envelope standards, my response is: yeah, but… we all know that’s not really true.)

  • Musser shows how gosh-darned simple it is to express “Get the current IBM share price” in REST, vs how complex it is in SOAP (see the sketch after this list). This is true, but misleading if you are not careful. Lots of API calls require only a small set of input parameters. Some do not. For a counter-example, look at the complexity of posting a Twitter message using OAuth, which I wrote about previously.
  • There are companies doing more than a billion API transactions per day. Twitter, Google, Facebook – you figured those. Also AccuWeather, Sabre, Klout, Netflix. Those are just the ones we know about.
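
For reference, here is the shape of that REST-vs-SOAP comparison in code. Everything in this sketch – the quote service, its endpoints, the SOAP action – is invented for illustration:

    # REST: "get the current IBM share price" is a single GET against
    # a resource URL. The endpoint is hypothetical.
    import requests

    rest_price = requests.get("https://api.example.com/quotes/IBM").text

    # SOAP: the same question wrapped in an envelope, POSTed to one
    # endpoint with an action header. Again, entirely hypothetical.
    soap_body = """<?xml version="1.0"?>
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Body>
        <GetQuote xmlns="http://example.com/stocks">
          <Symbol>IBM</Symbol>
        </GetQuote>
      </soap:Body>
    </soap:Envelope>"""

    soap_price = requests.post(
        "https://api.example.com/soap",
        data=soap_body,
        headers={"Content-Type": "text/xml; charset=utf-8",
                 "SOAPAction": "http://example.com/stocks/GetQuote"},
    ).text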

I remember registering for Programmable Web about 10 years ago.  There were 4 or 5 online book-sellers in those days, each had different prices. I put together a book price quote service, which would accept an ISBN, then screen-scrape HTML from each of them, and respond with three or four prices – not all vendors listed all books.  It was just a novel example.  I used it in demonstrations to show what an API could do.  It was never used for real commerce.
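
The skeleton of that service looked something like the sketch below. The seller URLs and scraping patterns here are invented stand-ins, which is fitting, because the fragility of such patterns was exactly the problem:

    # Sketch of the old book-price quote service: accept an ISBN,
    # scrape each seller's page, return whatever prices turn up.
    # Seller URLs and regex patterns are invented; the real ones
    # changed constantly, which is why the service kept breaking.
    import re
    import requests

    SELLERS = {
        "seller-a": ("https://books-a.example.com/isbn/{isbn}",
                     re.compile(r'class="price">\$([\d.]+)')),
        "seller-b": ("https://books-b.example.com/item?isbn={isbn}",
                     re.compile(r"Our price:\s*\$([\d.]+)")),
    }

    def quote(isbn):
        prices = {}
        for name, (url_template, pattern) in SELLERS.items():
            try:
                html = requests.get(url_template.format(isbn=isbn),
                                    timeout=5).text
            except requests.RequestException:
                continue  # seller unreachable; skip it
            match = pattern.search(html)
            if match:  # not every seller lists every book
                prices[name] = float(match.group(1))
        return prices

    print(quote("9780132350884"))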

I published it.  The screen scraping was a brittle joke.  Every time Amazon updated their HTML, the service would break, and people would email me complaining about it.  Though that service died a long while ago, it remains listed on ProgrammableWeb today. Because of that I think it’s important to consider the real implications of the PW numbers – not all the listed services are alive today, and many of them are not “real” even if they are alive.

Even with all that, the steepening upward trend does show the rapid ramp-up of API development and usage. Even if those APIs are not real, there are real developers building and testing them. And real companies making a business of it (Apigee, to name one). Also, it’s pretty clear that JSON is winning.

One can gain only so much insight from viewing slides. It would have been nice to hear Mr. Musser’s commentary as well.

Side note – the term “API” used to denote a programmer’s interface exposed by a library; developers would use an API by linking with that library. The term has silently expanded its meaning to include application message protocols – in other words, how to format and send messages to a service across a network. These things are very different in my mind, yet these days the same term, API, is used to describe them both.
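
A sketch of the two senses of the word, side by side (the network endpoint is hypothetical):

    # Sense 1, the older one: a function exposed by a library that you
    # link (import) into your own process and call directly.
    import zlib
    checksum = zlib.crc32(b"hello")  # an in-process library call

    # Sense 2, the newer one: a message protocol. Format a request,
    # send it over the network, parse the reply. Endpoint hypothetical.
    import requests
    reply = requests.get("https://api.example.com/checksum",
                         params={"data": "hello"})
    print(checksum, reply.status_code)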