Redmonk’s Analysis of Microsoft Surface is Naive

Stephen O’Grady of Redmonk, an industry analyst firm, looked at Microsoft Surface and concluded that the business model around software is in long-term decline.

…another indication that software on a stand alone basis is a problematic revenue foundation.

Mr O’Grady’s analysis is naive. It casts software as a business in its own right, rather than as a tool that large companies use in support of their business strategy.

Arthur C. Clarke, the sci-fi writer, is reputed to have observed that “Any sufficiently advanced technology is indistinguishable from magic.”

In that spirit, I observe that any sufficiently advanced technology company uses a unique combination of software, hardware, and services in pursuit of its business strategy.

Mr O’Grady is hung up on how a company monetizes its intellectual property. He distinguishes Google from Microsoft on the basis of their monetization strategy: Google makes most of its revenue and profit selling ads, while for Microsoft, the revenue comes primarily from software product licenses.

It’s a naive, shallow distinction.

For a while, the “technology space” was dominated by companies that produced technology and then tried to make money directly, by selling technology – whether that was hardware or software. But things have not been so simple, for a long while.

Mr O’Grady accurately points out that early on Microsoft chose to hedge across hardware technology companies, selling software and making money regardless of who won the hardware war.

Less famously, IBM tried competing in the high-volume hardware and software arenas (PCs, OS/2, Lotus, VisualAge, etc.) before adopting a similar zag-while-they-zig strategy. IBM chose to focus on business services back in the ’90s, steadily exiting the PC business and other hardware businesses, so that regardless of which hardware company won, and regardless of which software company won, IBM could always make money selling services.

Microsoft and IBM adopted very similar strategies, though the anchors were different. They each chose one thing that would anchor the company, and they each explicitly chose to simply float above an interesting nearby competitive battle.

This is a fairly common strategy. All large companies need a strategic anchor, and each one seeks sufficient de-coupling to allow some degree of competitive independence.

  • Cisco bet the company on networking hardware that was software and hardware agnostic.
  • Oracle bet on software, and as such has acted as a key competitor to Microsoft since the inception of hostilities in 1975. Even so, Oracle has anchored in an enterprise market space, while Microsoft elected to focus on consumers (remember the vision? “A PC in every home”), and later, lower-end businesses – Windows as the LOB platform for the proverbial dentist’s office.
  • Google came onto the scene later, and seeing all the occupied territory, decided to shake things up by applying technology to a completely different space: consumer advertising. Sure, Google’s a technology company but they make no money selling technology. They use technology to sell what business wants: measurable access to consumers. Ads.
  • Apple initially tried basing its business on a strategic platform that combined hardware and software, and then using that to compete in both general spaces. It failed. Apple re-launched as a consumer products company, zigging while everyone else was zagging, and found open territory there.

Mr O’Grady seems to completely misunderstand this technology landscape. He argues that among “major technology vendors” including IBM, Apple, Google, Cisco, Oracle, Microsoft, and others, software is in declining importance. Specifically, he says:

Of these [major technology vendors], one third – Microsoft, Oracle and SAP – could plausibly be argued to be driven primarily by revenues of software sales.

This is a major whiff.  None of the “major technology” companies are pure anything. IBM is not a pure services company (nor is it a pure hardware company, in case that needed to be stated).  Oracle is not a pure software company – it makes good money on hardware and services. As I explained earlier, these companies each choose distinct ways to monetize, and not all of them have chosen software licensing as the primary anchor in the marketplace. It would make no sense for all those large companies to do so.

Mr O’Grady’s insight is that a new frontier is coming:

making money with software rather than from software.

Seriously?

Google has never made money directly from software; it became a juggernaut selling ads. Apple’s resurgence over the past 10 years is not based on making money from software; it sells music and hardware and apps. Since Lou Gerstner began the transformation of IBM in 1993, IBM has used “Services as the spearhead” to establish long-term relationships with clients.

All of these companies rely heavily on software technology; each of them varies in how it monetizes that software technology. Add Facebook to the analysis – at heart it is a company that is enabled and powered by software, yet it sells no software licenses.

Rip Van O’Grady is now waking up to predict a future that is already here. The future he foretells – where companies make money with software – has been happening right in front of him, for the past 20 years.

Not to mention – the market for software licenses is larger now than ever, and it continues to grow. The difference is that the winner-take-all dynamics of the early days are gone. There are lots and lots of successful businesses built around Apple’s App Store. The “long tail” of software, as it were.

Interestingly, IBM has come full circle. In the mid-’90s, IBM bought Lotus for its desktop suite, including Word Pro, 1-2-3, Notes, and Freelance. Not long after, though, IBM basically exited the market for high-volume software, mothballing those products. Even so, they realized that a services play opens opportunities to gain revenue, particularly in software. Clearly illustrating the importance of software in general, the proportion of revenue and profit IBM gains from software has risen from 15% and 20% respectively, about 10 years ago, to around 24% and 40% today. Yes: the share of IBM’s prodigious profit from software licensing is now about 40%, after having risen for 10 years straight. They don’t lead with software, but software is their engine of profit.

It’s not that “software on a standalone basis is a problematic revenue foundation,” as Mr O’Grady has claimed. It’s simply that every large company needs a strategic position. The days of the wild west are gone; the market has matured. Software can be a terrific revenue engine for a small company, and it works as a high-margin business for large companies, as IBM and Microsoft prove. But with margins as high as they are, companies need to invest in a defensible strategic position. Once a company exceeds a certain size, it can’t be software alone.

 

Impressive factoids on Facebook and Hadoop

It’s common knowledge that Facebook runs Hadoop. The largest Hadoop cluster on the planet.

Here are some stats, courtesy of HighScalability, which scraped them from Twitter during the Velocity conference:

  • 6 billion mobile messages every 30 minutes
  • 3.8 trillion cache operations in 30 minutes
  • 160m newsfeeds, 5bln realtime msgs, 10bln profile pics, 108 bln queries on mysql, all in 30 minutes

Now, some questions of interest:

  1. How close is the typical enterprise to that level of scale?
  2. How likely is it that a typical enterprise would be able to take advantage of such scale to improve their core business, assuming reasonable time and money budgets?

Let’s say you are CIO of a $500M financial services company. Let’s suppose that you make an average of $10 in revenue per business transaction, and further suppose that each business transaction requires 24 database operations, including queries and updates.

At that rate, you’d run 50M × 24 = about 1.2B database operations … per year.

Scroll back up. What does Facebook do? 3.8B in 30 minutes. Whereas 1.2B per year works out to about 68,000 in 30 minutes. Facebook does 55,000 times as many database operations as the hypothetical financial services company.
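To make the arithmetic explicit, here is a small sketch in Python, using the hypothetical numbers above and the 3.8B figure as used in this comparison:

```python
# Back-of-the-envelope comparison, using the hypothetical numbers from the post.

annual_revenue = 500_000_000   # $500M financial services company
revenue_per_txn = 10           # $10 of revenue per business transaction
db_ops_per_txn = 24            # database operations per business transaction

business_txns_per_year = annual_revenue / revenue_per_txn      # 50 million
db_ops_per_year = business_txns_per_year * db_ops_per_txn      # ~1.2 billion

half_hours_per_year = 365 * 24 * 2                             # 17,520
db_ops_per_half_hour = db_ops_per_year / half_hours_per_year   # ~68,000

facebook_ops_per_half_hour = 3.8e9   # the figure used in the comparison above
print(f"Enterprise: ~{db_ops_per_half_hour:,.0f} database ops per 30 minutes")
print(f"Facebook does ~{facebook_ops_per_half_hour / db_ops_per_half_hour:,.0f}x as many")
```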

Now, let me repeat those questions:

  • If you run that hypothetical company, do you need Hadoop?
  • If you had Hadoop, would you be able to drive enough data through it to justify the effort of adoption?

 

Why do I believe Hadoop is not yet ready for Prime Time?

I am bullish on Hadoop and other NoSQL technologies. Long-term, I believe they will be instrumental in providing quantum leaps in efficiency for existing businesses. But even more, I believe that mainstream BigData will open up brand new opportunities that were simply unavailable before. Right now we focus on applying BigData to user activity and clickstream analysis. Why? Because that’s where the data is. But that condition will not persist. There will be oceans of structured and semi-structured data to analyze. The chicken-and-egg situation with the tools and the data will evolve, and brand new application scenarios will open up.

So I’m Bullish.

On the other hand I don’t think Hadoop is ready for prime time today. Why? Let me count the reasons:

  1. The Foundations are not finished. The Hadoop community is still expending significant energy laying basic foundations. Here’s a blog post from three months ago detailing the internal organization and operation of Hadoop 2.0. Look at the raft of terminology this article foists on unsuspecting Hadoop novices: ApplicationMaster, ApplicationsManager (different!), NodeManager, ContainerLaunchContext, and on and on. And these are all problems that have been previously solved; we saw similar resource-management designs with EJB containers, and before that with antecedents like Transarc’s Encina Monitor from 1992, with its node manager, container manager, nanny processes and so on. Developers (the users of Hadoop) don’t want or need to know about these details.
  2. The Rate of Change is still very high. Versioning and naming are still in high flux. 0.20? 2.0? 0.23? YARN? MRv2? You might think that version numbers are a minor detail, but until the use of terminology and version numbers converges, enterprises will have difficulty adopting. In addition, the actual application model is still in flux. For enterprise apps, change is expensive and confusing. People cannot afford to attend to all the changes in the various moving targets.
  3. The Ecosystem is nascent.  There aren’t enough companies that are oriented around making money on these technologies.  Banks – a key adopter audience – are standing back waiting for the dust to settle. Consulting shops are watching and waiting.  As a broader ecosystem of  companies pops up and invests, enterprises will find it easier to get from where they are into the land of Hadoop.

 

Curt Monash on the Enterprise-Readiness of Hadoop

Curt Monash, writing on The DBMS2 blog, addressed the enterprise readiness of Hadoop recently.

tl;dr:

 Hadoop is proven indeed, whether in technology, vendor support, or user success. But some particularly conservative enterprises may for a while disagree.

But is Mr Monash really of one mind on the topic, especially considering that he began the piece with this:

Cloudera, Hortonworks, and MapR all claim, in effect, “Our version of Hadoop is enterprise-ready, unlike those other guys’.” I’m dubious.

So the answer to “is it enterprise ready?” seems to be, clearly, “Well, yes and no.”

Given my understanding of the state of the tools and technology, and of the disposition of enterprises, I believe, unlike Mr Monash, that most enterprises don’t currently have the capacity or tolerance to adopt Hadoop. It seems to me that immaturity still represents an obstacle to new Hadoop deployments.

The Hadoop vendor companies and the Hadoop community at large are addressing that. They’re building out features, and Hadoop 2.0 will bring a jump in reliability, but there is still significant work ahead before the technology becomes acceptable to the mainstream.

 

Enderle on Microsoft’s New Tack

Rob Enderle demonstrates his fondness for dramatic headlines with his piece, The Death and Rebirth of Microsoft.  A more conservative editor might headline the same piece, “Microsoft Steadily Shifts its Strategy.”

Last week, Microsoft (Nasdaq: MSFT) effectively ended the model that created it. This shouldn’t have been a surprise, as the model hasn’t been working well for years and, as a result, Microsoft has been getting its butt kicked all over the market by Apple (Nasdaq: AAPL).

Well Microsoft apparently has had enough, and it decided to make a fundamental change and go into hardware.

Aside from the hyperbole, Mr Enderle’s core insight is correct: Microsoft is breaking free of the constraints of its original, tried-and-true model, the basis of the company for years. Under that plan, Microsoft provided the software, and someone else provided the hardware. Surface is different: it’s Microsoft hardware, and it signifies a major step toward the company’s ability to deliver a more integrated Microsoft experience on thin and mobile devices. This aspect of the Surface announcement was widely analyzed.

This is what you may not have noticed: Azure is the analogous step on servers. With Azure, Microsoft can deliver IT infrastructure to mid-market and enterprise companies without depending on OEM partners, or on the ecosystem that surrounds OEM hardware installation – the networking and cabling companies, the storage vendors, the management software vendors and so on.

Just as Surface means Microsoft is no longer relying upon HP or Acer to manufacture and market cool personal hardware, and the rumored Microsoft handset would mean that Microsoft won’t be beholden to Nokia and HTC, Azure means Microsoft will not need to rely on Dell or HP or IBM to produce and install server hardware.

That is a big change for a company that was built on a strategy of partnering with hardware vendors. But times are different now. Microsoft is no longer purely a software company. In fact it is outgrowing its name, just as “International Business Machines” has lost its meaning as a name for a company that brings in 57% of its revenue through services. But while this is a big step, it’s not a black-and-white thing. Microsoft maintains relationships with OEMs, for PCs, laptops, mobile devices and servers, and that will continue. Surface and Azure are just one step away from the purity of that model.

Microsoft’s Azure, and Amazon’s AWS too, present the opportunity for companies to avoid huge chunks of the capital cost associated with IT projects; companies can pay a reasonable monthly fee for service, rather than undertaking a big up-front investment and contracting with four or five different vendors for installation. That’s a big change.

Very enticing for a startup, or a small SaaS company.
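To illustrate the shape of that trade-off, here is a rough sketch with purely hypothetical numbers – real figures vary enormously by project and provider:

```python
# Purely hypothetical numbers, for illustration only.

# Traditional on-premises build-out: big up-front investment plus ongoing ops.
onprem_capex = 250_000        # hardware, storage, networking, installation
onprem_monthly_ops = 4_000    # power, space, maintenance contracts

# Cloud-hosted equivalent: no capital outlay, just a monthly fee.
cloud_monthly_fee = 9_000

def cumulative_cost(months, capex, monthly):
    """Total spend after the given number of months."""
    return capex + monthly * months

for months in (6, 12, 24, 36, 48):
    onprem = cumulative_cost(months, onprem_capex, onprem_monthly_ops)
    cloud = cumulative_cost(months, 0, cloud_monthly_fee)
    print(f"{months:>2} months: on-prem ${onprem:>9,}  cloud ${cloud:>9,}")
```

The point is not where the lines cross; it is that the cloud option converts a lumpy capital outlay plus a multi-vendor installation project into a predictable operating expense that starts at zero.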

Mark Russinovich #TechEd on Windows Azure VM hosting and Virtual Networking

A video of his one-hour presentation with slides + demo.

Originally, Windows Azure was a Platform-as-a-Service offering – a cloud-hosted platform. This was a new platform, something like Windows Server, but not Windows Server. There was a new application model, and a new set of constraints. With the recent announcement, Microsoft has committed to running arbitrary VMs. This is a big shift towards what people in the industry call Infrastructure-as-a-Service.

Russinovich said this with a straight face:

One of the things that we quickly realized as people started to use the [Azure] platform is that they had lots of existing apps and components that they wanted to bring onto the platform…

It sure seems to me that Russinovich has put some spin into that statement. It’s not the case that Microsoft “realized” customers would want VM hosting. Microsoft knew very well that customers, enterprises in particular, would feel confident in a Microsoft OS hosting service, and would want to evaluate such an offering as a re-deployment target for existing systems.

This would obviously be disruptive, both to Microsoft partners (specifically hosting companies) and to Microsoft’s existing software licensing business. It’s not that Microsoft “realized” that people would want to host arbitrary VMs. Microsoft knew it all along, but delayed offering it to allow time for its partners and its own businesses to catch up.

Aside from that rotational verbiage, Russinovich gives a good overview of some of the new VM features, how they work and how to exploit them.

Microsoft’s Meet Windows Azure event

Thursday last week, Microsoft added some new pieces to its Azure cloud-based platform.

The highlights in order (my opinion):

  1. Virtual Machine hosting. Since 2010, Microsoft has tried to differentiate its cloud offering from Amazon’s EC2 by providing “platform services” instead of infrastructure services (OS hosting). But, I suppose in response to customer demand, it will now offer the ability to host arbitrary Virtual Machines, including Windows Server of course, but also Linux VMs of various flavors (read the Fact Sheet for details). This means you will now be able to use Microsoft as a hoster, in lieu of Rackspace or Amazon, for arbitrary workloads. MS will still offer the higher-level platform services, but you won’t need to adopt those services in order to get value out of Azure.
  2. VPN – you can connect those hosted machines to your corp network via a VPN. It will be as if the machines are right down the hall.
  3. Websites – Microsoft will deliver better support for the most commonly deployed workload. Previously, websites were supported through a convoluted path, in order to comply with the Azure application model (described in some detail in this 2009 paper from David Chappell). With the announced changes it will be much simpler. Of course there’s support for ASP.NET, but also Python, PHP, Java and node.js.

As with its entry into any new venture, Microsoft has been somewhat constrained by its existing partners. Steve Ballmer couldn’t jump with both feet into cloud-based platforms because many existing MS partners were (a) dependent upon the traditional, on-premises model of software delivery, or (b) in the cloud/hosting business themselves.

In either case they’d perceive a shift by MS towards cloud as a threat, or at the very least, disruptive. Also, Microsoft itself was highly oriented toward on-premises software licensing. So it makes sense that MS was initially conservative with its cloud push. With these moves you can see MS steadily increasing pressure on its own businesses and its partners to move with it into the cloud model. And this is inevitable for Microsoft, as Amazon continues to gain enterprise credibility with EC2 and its related AWS offerings.


The upshot for consumers of IT is that price pressure for cloud platforms will continue downward. Also look for broader support of cloud systems by tools vendors, which means cloud-based platforms will become mainstream more quickly, even for conservative IT shops.

 

Does Metcalfe’s Law apply to Cloud Platforms and Big Data?

Cloud Platforms and Big Data – Have we reached a tipping point?  To think about this, I want to take a look back in history.

Metcalfe’s Law was named for Robert Metcalfe, one of the true internet pioneers, by George Gilder, in an article that appeared in a 1993 issue of Forbes magazine. It states that the value of a network increases with the square of the number of nodes. It was named in the spirit of “Moore’s Law” – the popular aphorism, attributed to Gordon Moore, stating that the density of transistors on a chip roughly doubles every 18 months. Moore’s Law succinctly captured why computers grew more powerful by the day.

With the success of “Moore’s Law”, people looked for other “Laws” to guide their thinking about a technology industry that seemed to grow exponentially and evolve chaotically, and “Metcalfe’s Law” was one of them.  That these “laws” were not really laws at all, but really just arguments, predictions, and opinions, was easily forgotten. People grabbed hold of them.

Generalizing a Specific Argument

Gilder’s full name for the “law” was “Metcalfe’s Law of the Telecosm,” and in naming it he was thinking specifically of the competition between telecommunications network standards: ATM (Asynchronous Transfer Mode) and Ethernet. Many people were convinced that ATM would eventually “win” for applications like voice, video, and data, because of its superior switching performance. Gilder did not agree. He thought Ethernet would win, because of the massive momentum behind it.

Gilder was right about that, and for the right reasons. And so Metcalfe’s Law was right! Since then, though, people have argued that Metcalfe’s Law applies equally well to any network: a network of business partners, a network of retail stores, a network of television broadcast affiliates, a “network” of tools and partners surrounding a platform. But generalizing Gilder’s specific argument this way is sloppy.

A 2006 article in IEEE Spectrum on Metcalfe’s Law says flatly that the law is “Wrong”, and explains why: not all “connections” in a network contribute equally to the value of the network. Think of Twitter – most subscribers publish very little information, and to very limited circles of friends and family. Twitter is valuable, and it grows in value as more people sign up, but Metcalfe’s Law does not provide the metaphor for valuing it. Or think of a telephone network: most people spend most of their time on the phone with around 10 people. Adding more people to that network does not increase the value of the network, for those people. Adding more people does not cause revenue to rise according to the O(n²) metric implicit in Metcalfe’s Law.

Clearly the direction of the “law” is correct – as a network grows, its value grows faster than the node count. We all feel that to be intuitively true, and so we latch on to Gilder’s aphorism as a quick way to describe it. But just as clearly, the law does not hold in general.

Alternative “Laws” also Fail

The IEEE article tries to offer other valuation formulae, suggesting that the true value is not O(n²), but instead O(n·log n), and specifically suggests this as a basis for valuation of markets, companies, and startups.

That suggestion is arbitrary.  I find the mathematical argument presented in the IEEE article to be hand-wavey and unpersuasive. The bottom line is that networks are different, and there is not one law – not Metcalfe’s, nor Reed’s nor Zipf’s as suggested by the authors of that IEEE article – that applies generally to all of them. Metcalfe’s Law applied specifically but loosely to the economics of ethernet, just as Moore’s Law applied specifically to transistor density. Moore’s Law was not general to any manufacturing process, nor is Metcalfe’s Law general to any network.
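To see how much the choice of formula matters, here is a quick sketch in Python comparing the two growth curves discussed above. Nothing here is a real valuation – it just shows the shapes of the curves:

```python
import math

def metcalfe_value(n):
    """Metcalfe's Law: network value grows as n squared."""
    return n ** 2

def nlogn_value(n):
    """The alternative suggested by the IEEE Spectrum article: n * log(n)."""
    return n * math.log(n)

# Compare the two "laws" at a few network sizes.
for n in (1_000, 100_000, 10_000_000):
    ratio = metcalfe_value(n) / nlogn_value(n)
    print(f"n = {n:>10,}:  n^2 is about {ratio:,.0f}x larger than n*log(n)")
```

At internet scale the two “laws” disagree by several orders of magnitude, which is exactly why no single formula can stand in for an analysis of a specific network’s economics.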

Sorry, there is no “law”; one needs to understand the economic costs and potential benefits of a network, and the actual conditions in the market, in order to apply a value to that network.

Prescription: Practical Analysis

Ethernet enjoyed economic advantages in terms of production cost and shared R&D. Ethernet reached an economic tipping point, and beyond that, other factors – like the improved switching performance of alternatives – were simply not enough to overcome the existing investment in tools, technology, and understanding of Ethernet.

We all need to apply that sort of practical thinking to computing platform technology. For some time now, people have been saying that Cloud Platform technology is the Next Big Thing. There have been some skeptics, notably Larry Ellison, but even he has come around and is investing.

Cloud Platforms will “win” over existing on-premises platform options when it makes economic sense to do so. In practice, this means when the tools for building, deploying, and managing systems in the cloud become widely available, and are just as good as those already in wide use for on-premises platforms.

Likewise, “Big Data” will win when it is simply better than traditional data analysis for mainstream analysis workloads. Sure, Facebook and Yahoo use MapReduce for analysis, but, news flash: unless you have 100m users, your company is not like Facebook or Yahoo. You do not have the same analysis needs. You might want to analyze lots of data, even terabytes of it. But the big boys are doing petabytes. Chances are, you’re not like them.

This is why Microsoft’s Azure is so critical to the evolution of Cloud offerings. Microsoft brought computing to the masses, and the company understands the network effects of partners, tools providers, and developers. It’s true that Amazon has a lead in cloud-hosted platforms, and it’s true that even today, startups prefer cloud to on-premises. But EC2 and S3 are still not commonly considered as general options by conservative businesses. Most banks, when revising their loan processing systems, are not putting EC2 on their short list. Microsoft’s work in bringing cloud platforms to the masses will make a huge difference in the marketplace.

I don’t mean to predict that Microsoft will “win” over Amazon in cloud platforms; I mean only to say that Microsoft’s expanded presence in the space will legitimize Cloud and make it much more accessible. Mainstream. It remains to be seen how soon or how strongly Microsoft will push on Big Data, and whether we should expect to see the same effect there.

The Bottom Line

Robert Metcalfe, the internet pioneer, himself apparently went so far as to predict that by 2013, ATM would “prevail” in the battle of the network standards. Gilder did not subscribe to such views. He felt that Ethernet would win, and Metcalfe’s Law was why. He was right.

But applying Gilder’s reasoning blindly makes no sense. Cloud and Big Data will ultimately “win” when they mature as platforms, and deliver better economic value over the existing alternatives.

 

API Growth, SOAP v REST, etc

From HighScalability.com, John Musser’s GlueCon slides. Interesting data pulled from ProgrammableWeb.com. As I understand it, ProgrammableWeb is mostly a repository of APIs. It’s free to register, and I don’t believe there is any sort of authentication – in other words, anyone can post any web API. Each listing includes an API endpoint – a service.

The main point Musser makes is that APIs are continuing to grow exponentially.  He didn’t go so far as to coin a new law (“Musser’s Law”) for the growth rate, but the trend is pretty certain.

Some other interesting takeaways from the slides:

  • REST v SOAP is one of the themes.  PW’s data shows REST waaay outpacing SOAP and other alternatives.


(For the REST purists who say REST is an architecture and SOAP is a set of envelope standards, my response is, yeah, but… we all know that’s not really true.)

  • Musser shows how gosh-darned simple it is to express “Get the current IBM share price” in REST, vs how complex it is in SOAP. This is true, but misleading if you are not careful. Lots of API calls require only a small set of input parameters. Some do not. For a counterexample, look at the complexity of posting a Twitter message using OAuth, which I wrote about previously. (A rough sketch of the REST-vs-SOAP contrast follows this list.)
  • There are companies doing more than a billion API transactions per day. Twitter, Google, Facebook – you’d expect those. Also AccuWeather, Sabre, Klout, Netflix. Those are just the ones we know about.
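As a rough illustration of the contrast Musser draws – the endpoints below are made up, and real quote services differ – the same “get the IBM share price” request in the two styles might look something like this:

```python
import requests  # third-party HTTP library: pip install requests

# REST-ish: a single GET against a resource URL. (Hypothetical endpoint.)
rest_response = requests.get("https://api.example.com/quotes/IBM")
print(rest_response.json())

# SOAP: wrap the same request in an XML envelope and POST it,
# naming the operation in a SOAPAction header. (Hypothetical service.)
soap_envelope = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetQuote xmlns="http://example.com/stockquote">
      <Symbol>IBM</Symbol>
    </GetQuote>
  </soap:Body>
</soap:Envelope>"""

soap_response = requests.post(
    "https://api.example.com/StockQuoteService",
    data=soap_envelope,
    headers={
        "Content-Type": "text/xml; charset=utf-8",
        "SOAPAction": "http://example.com/stockquote/GetQuote",
    },
)
print(soap_response.text)
```

For a one-parameter lookup the difference in ceremony is obvious. But as noted above, once authentication and request signing enter the picture, the REST call grows plenty of ceremony of its own.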

I remember registering for ProgrammableWeb about 10 years ago. There were 4 or 5 online book-sellers in those days, each with different prices. I put together a book price quote service, which would accept an ISBN, screen-scrape HTML from each of them, and respond with three or four prices – not all vendors listed all books. It was just a novel example. I used it in demonstrations to show what an API could do. It was never used for real commerce.

I published it.  The screen scraping was a brittle joke.  Every time Amazon updated their HTML, the service would break, and people would email me complaining about it.  Though that service died a long while ago, it remains listed on ProgrammableWeb today. Because of that I think it’s important to consider the real implications of the PW numbers – not all the listed services are alive today, and many of them are not “real” even if they are alive.

Even with all that, the steepening upward trend does show the rapid ramp up of API development and usage.  Even if those APIs are not real, there are real developers building and testing them.  And real companies making a business of it (Apigee, to name one).  Also,  it’s pretty clear that JSON is winning.

One can gain only so much insight from viewing slides. It would have been nice to hear Mr Musser’s  commentary as well.

Side note – the term “API” used to denote a programmer’s interface exposed by a library. Developers would use an API by linking with that library. The term has silently expanded its meaning to include application message protocols – in other words, how to format and send messages to a service across a network. These things are very different in my mind, yet the same term, API, is used to describe them both these days.
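A tiny sketch of the distinction, in Python – the HTTP endpoint below is made up, purely for illustration:

```python
# "API" in the older sense: a programmer's interface exposed by a library.
# You import (link against) the library and call functions in-process.
import math
print(math.sqrt(144))    # a library API call; no network involved

# "API" in the newer sense: a message protocol spoken to a remote service.
# You format a request, send it across the network, and parse the response.
import requests          # third-party HTTP library: pip install requests
resp = requests.get("https://api.example.com/v1/sqrt", params={"x": 144})
print(resp.json())       # hypothetical endpoint, for illustration only
```

Same word, two very different activities: one is a function call, the other is a network conversation.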

 

Is Amazon DynamoDB the look of the future?

Amazon is now offering a key/value data store that relies on solid-state disk (SSD) for storage. DynamoDB is the name, and it is intended to complement S3 as a lower-latency store. It’s higher cost, but offers better performance for those customers that need it.

Two things on this.

  1. The HighScalability blog calls Amazon’s use of SSD a “radical step.” That view may become antiquated rapidly. The one outlier in the datacenter of today is the use of spinning mechanical platters as a basis to store data. Think about that. There’s one kind of moving part in a datacenter – the disk. It consumes much of the power and causes much of the heat. We will see SSD replace magnetic disk as a mainstream storage technology, sooner than most of us think. Companies like Pure Storage will lead the way, but EMC and the other big guys are not waiting to get beaten. Depending on manufacturing ramp-up, this could happen in 3-5 years. It won’t be radical. The presence of spinning platters in a datacenter will be quaint in 8 years.
  2. The exposure of the underlying storage mechanism to the developer is a distraction. I don’t want to have to choose my data programming model based on my latency requirements. I don’t want to know, or care, that the storage relies on an SSD. That Amazon exposes it today is a temporary condition, I think. The use of the lower-latency store ought to be dynamically determined by the cloud platform itself, based on provisioning configuration provided by the application owner. Amazon will just fold this into its generic store, and other cloud platform providers will follow. The flexibility and intelligence allowed in provisioning the store is the kind of thing that will provide the key differentiation among cloud platforms.
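For what it’s worth, here is a minimal sketch of that key/value programming model as the developer sees it, using the boto3 AWS SDK for Python. The table name, key, and attributes are hypothetical, and it assumes the table already exists; notice that nothing in the code says anything about SSDs.

```python
import boto3  # AWS SDK for Python: pip install boto3

# Hypothetical table with a string partition key "user_id", created ahead of time.
dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
table = dynamodb.Table("UserProfiles")

# Write a key/value item.
table.put_item(Item={"user_id": "u-1001", "name": "Alice", "plan": "gold"})

# Read it back by key.
response = table.get_item(Key={"user_id": "u-1001"})
print(response.get("Item"))
```

The storage medium is a property of the service behind the endpoint; this code would look the same whether the bytes land on SSD or on spinning disk, which is exactly the argument for letting the platform, rather than the developer, make that choice.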