Jackson and XmlMapper – reading arbitrary data into a java.util.Map

I like the Jackson library from FasterXML. Really handy for reading JSON, writing JSON. Or I should say “serialization” and “deserialization”, ’cause that’s what the cool kids say. And the license is right. (If you need a basic overview of Jackson, I suggest this one from Eugen at Stackify.)

But not everything is JSON. Sometimes ya just wanna read some XML, amiright?

I work on projects where Jackson is included as a dependency. And I am aware that there is a jackson-dataformat-xml module that teaches Jackson how to read and write XML, using the same simple model that it uses for JSON.

Most of the examples I’ve seen show how to read XML into a POJO – in other words “databinding”. If my XML doc has an element named “Fidget” then upon de-serialization, the value there is used to populate the field or property on the Java object called “Fidget” (subject to name remapping of course).

That’s nice and handy, but like I said, sometimes ya just wanna read some XML. And you don’t know what the schema is. And you don’t have a pre-compiled Java class to hold the data. What I really want is to read XML into a java.util.Map<String,Object>. Very similar to what I would do in JavaScript with JSON.parse(). How can I do that?

It’s pretty easy, actually.
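
A minimal sketch (the variable names here are mine) looks something like this:

// assuming xmlString holds the XML document as a String
XmlMapper xmlMapper = new XmlMapper();
Map map = (Map) xmlMapper.readValue(xmlString, Object.class);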

This works but there are some problems.

  1. The root element is lost. This is an inadvertent side-effect of using a JSON-oriented library to read XML.
  2. For any element that appears multiple times, only the last value is retained.

What I mean is this:
Suppose the source XML is:

<Root>
  <Parameters>
    <Parameter name='A'>valueA</Parameter>
    <Parameter name='B'>valueB</Parameter>
  </Parameters>
</Root>

Suppose you deserialize that into a map, and then re-serialize it as JSON. The output will be:

{
  "Parameters" : {
    "Parameter" : {
      "name" : "B",
      "" : "valueB"
    }
  }
}

What we really want is to retain the root element and also infer an array when there are repeated child elements in the source XML.

I wrote a custom deserializer, and a decorator for XMLStreamReader, to solve these problems. Using them looks like this:

String xmlInput = "<Root><Messages><Message>Hello</Message><Message>World</Message></Messages></Root>";
InputStream is = new ByteArrayInputStream(xmlInput.getBytes(StandardCharsets.UTF_8));
// the decorator notes the local name of the root element as it streams past
RootSniffingXMLStreamReader sr = new RootSniffingXMLStreamReader(XMLInputFactory.newFactory().createXMLStreamReader(is));
XmlMapper xmlMapper = new XmlMapper();
// the custom deserializer collects repeated sibling elements into a List
xmlMapper.registerModule(new SimpleModule().addDeserializer(Object.class, new ArrayInferringUntypedObjectDeserializer()));
Map map = (Map) xmlMapper.readValue(sr, Object.class);
Assert.assertEquals( sr.getLocalNameForRootElement(), "Root");
Object messages = map.get("Messages");
Assert.assertTrue( messages instanceof Map, "map");
Object list = ((Map)messages).get("Message");
Assert.assertTrue( list instanceof List, "list");
Assert.assertEquals( ((List)list).get(0), "Hello");
Assert.assertEquals( ((List)list).get(1), "World");
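
To get JSON like the output shown below, re-serialize the map with a plain ObjectMapper; this two-liner is my own sketch, not necessarily what the repo does:

ObjectMapper om = new ObjectMapper();
System.out.println(om.writerWithDefaultPrettyPrinter().writeValueAsString(map));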

And re-serializing the earlier Parameters example this way, the output looks like this:

{
  "Parameters" : {
    "Parameter" : [
      {
        "name" : "A",
        "" : "valueA"
      },{
        "name" : "B",
        "" : "valueB"
      }
    ]
  }
}

…which is what we wanted.

Find the source code here: https://github.com/DinoChiesa/deserialize-xml-arrays-jackson

Hat tip to Jegan for the custom deserializer.

RESTful is hardly harmful.

A provocative essay came up on Hacker News today, entitled “RESTful considered harmful.”

The summary of the essay:

  • JSON is bloated in comparison to protobufs and similar binary protocols
  • There are no interface contracts or data schema
  • HATEOAS doesn’t work
  • No direct support for batching, paging, sorting, etc. – e.g., no SQL semantics
  • CRUD is too limited
  • No, really, CRUD is too limited
  • HTTP Status codes don’t naturally map to business semantics
  • There’s no queueing or asynchrony
  • There are no standards
  • Backward compatibility is hard

Let’s have a look at the validity of these concerns.

1. JSON is bloated in comparison to protobufs

The essay cites “one tremendous advantage of JSON”: human readability, and then completely discounts this advantage by saying that it’s bloated. It really is a tremendous advantage, which is why XML won over MQ’s binary protocol and the XDR from Sun RPC, and the NDR from DCE RPC, and every other frigging binary protocol. And readability is why JSON displaced XML.

Ask yourself this: what is the value of readability versus the performance advantages of the alternatives, like Thrift or protobufs? Is readability worth 1x as much as the improved efficiency you might get with protobufs? 2x? I believe that for many people, it’s worth 100x. It trumps all other concerns. For uber-experts, it’s deceptively attractive to wave away the advantage of human-readability. For the rest of the world, for 97% of developers, it’s a huge, Huge, HUGE advantage. For high-speed financial trades, JSON is wrong. For Google’s internal interfaces, wrong. For most of the world, RIGHT.

AND as the essay notes, REST doesn’t prescribe JSON. Or XML. Or anything. There’s a content-type header, and clients and servers can negotiate it. If the client says Accept: application/x-protobuf, and the server can send it, bliss for you. So this point – “JSON is bloated” – is not only invalid on its own terms, it’s also not an argument against REST.

2. There are no interface contracts or data schema

This is a feature. OMG, have we not tried this enough times? Did this guy skip his “History of IDL compilers” course in the Computer History department at school? Sun RPC IDL. DCE RPC IDL. Corba IDL. WSDL, ferpeetsake! XML Schema!!

It’s pretty straightforward to deliver plain-old-XML over HTTP, which is quite RESTful. More popular is JSON-over-HTTP. Either of those has a schema language. Few people embrace them, though. Why? Because IDLs and schema languages are too much structure, and they handcuff people more than help them. We have fortunately learned from the past. There are more tools coming in this area, for those who wish to embrace them. See apistudio.io.

3. HATEOAS doesn’t work

Mmmmm, yep. No argument here. In my experience, nobody really uses this, in practice. Pragmatic REST is what people do, and it generally does not use HATEOAS.

4. no SQL semantics

Uhhuh, true. This has been addressed with things like OData. If you want SQL Semantics, seek solutions, don’t just complain.

5. CRUD is too limited

Really? This is a problem? That you might need a switch statement in your code to handle different types of events? Really?

6. CRUD is really too limited

….

Mmmmm, sorry. I have to stop now. I’m completely bored of responding to this essay by now. Except for one more:

10. Backward compatibility is hard

This has NOTHING to do with REST. This is just true. Back compat in any interface is tricky.


In summary, I don’t find any of the arguments compelling.

Let me draw an analogy. The position in this essay is like saying “Oil is no good as a transportation fuel.” Now, Oil has its drawbacks! Oil is dirty. We can imagine alternatives that are better in theory. Even today, in specific local situations (daily use, short trips, urban travel) electric cars are better, MUCH better, than fossil-fuel based cars. (And bicycles are even better than electric cars.) But gasoline-powered cars deliver massive utility to billions of people. Gasoline refueling stations are everywhere. The delivery system for gasoline is mature and redundant. The World RUNS, very effectively, on gasoline-powered transport, by and large. Objectively, Oil is VERY GOOD as a transportation fuel.

Sure, we’ll evolve better approaches in the future. That’s great. And sure, we can imagine a world with electric-powered vehicles. But today, in the world of reality, Oil wins.

And likewise Pragmatic REST, HTTP, JSON, and schema-less interfaces are winning. We’ll evolve better approaches. But today, this platform wins.

HTTP, HTML, JavaScript, and JSON are ubiquitous, are the foundation of the web, and are not going anywhere. Any architect is free to choose other options, and they might have good reasons for doing so. On the other hand, the vast majority of installations won’t benefit from using protobufs or Thrift, or some non-HTTP protocol. Pragmatic REST, JSON and HTTP are very, very safe choices in the vast majority of scenarios.

Cheers

MongoDB booster would prefer Cassandra, if only he could store JSON in it. Have I got a data store for you!

Interesting article at GigaOM interviewing MongoLab Founder and CEO Will Shulman. GigaOM reports:

MongoLab operates under a thesis that MongoDB is pulling away as the world’s most-popular NoSQL database not because it scales the best — it does scale, Shulman said, but he’d actually choose Cassandra if he just needed a multi-petabyte data store without much concern over queries or data structure — but because web developers are moving away from the relational format to an object-oriented format.

Interesting comment. My spin alarm went off with the fuzz-heavy phrasing “…operates under a thesis…” I’ll buy that developers are moving away from relational and towards simpler data storage formats that are easier to use from dynamic scripting languages. But there is no evidence presented in support of the conclusion that “MongoDB is pulling away.” GigaOM just says that this is MongoLab’s “thesis”.

In any case, the opinion of Shulman that Cassandra scales much better than MongoDB leads to this question: If the key to developer adoption is providing the right data structures, then why not just build the easy-to-adopt object store on the existing proven-to-scale backend? Why build another backend if that problem has been solved by Cassandra?

Choosing to avoid this question, the creators of MongoDB have only caused people to ask it more insistently.

The combination of developer-friendly data structure and highly-scalable backend store has been done. You can get the scale of Cassandra and the ease of use of a JSON-native object store. The technology is called App Services, and it’s available from Apigee.

In fact, App Services even offers a network interface that is wire-compatible with existing MongoDB clients (somebody tell Shulman); you can keep your existing client code and just point it to App Services.

With that you can get the nice data structure and the vast scalability.

Thank you, Ed Anuff.

How not to do APIs; and …My New Job

Having access to a quick way to get dictionary lookups of English words while using the computer has always been useful to me. I don’t like to pay the “garish dancing images” ad tax that comes with using commercial dictionary sites like merriam-webster.com; or worse, the page-load-takes-10-seconds tax. I respect Merriam-Webster and appreciate all they’ve done for the English language over the years, but that doesn’t mean I am willing to put up with visual torture today. No, in fact I do not want to download a free e-book, and I do not want to sign up for online courses at Capella University. I just want a definition.

Where’s the API?

I’m a closet hacker, so I figured there had to be an API out there that allowed me to build something that would  do dictionary lookups from … whatever application I was currently using.

And looking back, this is an ongoing obsession of mine, apparently. In the job with Microsoft about 10 years ago, I spent a good deal of my time writing documents using Microsoft Word.  I had the same itch then, and for whatever reason I was not satisfied with the built-in dictionary.  So I built an Office Research Plugin, using the Office Research Services SDK (bet you never heard of that), that would screen-scrape the Merriam-Webster website and allow MS-Word to display definitions in a “smart pane”.  The guts of how it worked was ugly; it was brittle in that every time M-W used a different layout, the service would break.  But it worked, and it displayed nicely in Word and other MS-Office programs.

The nice thing was that Microsoft published an interface that any research service could comply to, that would allow the service to “plug in” to the Office UI.  The idea behind Office Research Services was similar to the idea behind Java Portlets, or  Sharepoint WebParts, or even Vista Gadgets, though it was realized differently than any of those. In fact the Office Research Services model was very weird, in that it was a SOAP service, yet the payload was an encoded XML string.  It wasn’t even xsd:any. The Office team failed on the data model there.

It was also somewhat novel in that the client, in this case the MS-Office program, specified the interface over which the client and service would interact.  Typically the service is the anchor point of development, and the developer of the service therefore has the prerogative to specify the service interface. For SOAP this specification was provided formally in  WSDL, but these days the way to connect to a REST service can be specified in WADL or Plain-old-Documentation.  The Office Research Service was different in that the client specified the interface that research services needed to comply to.  To me, this made perfect sense: rather than one service and many types of clients connecting to it, the Office Research Service model called for one client (Office) connecting to myriad different research services. It was logically sensible that the client got to specify the interface.  But I had some discussions with people who just could not accept that.  I also posted a technical description showing how to build a service that complied with the contract specified by Office.

The basic model worked well enough, once the surprise of the-client-wears-the-pants-around-here wore off. But the encoded XML string was a sort of an ugly part of the design. Also, like the Portlet model, it mixed the idea of a service with UI; the service would actually return UI formatting instructions. This made sense for the specific case of an Office service, but it was wrong in the general sense. Also, there was no directory of services, no way to discover services, to play with them or try them out. Registration was done by the service itself. There are some lessons here in “How not to do APIs”.

Fast Forward 9 Years

We’ve come a long way.  Or have we?  Getting back to the dictionary service, WordNik has a nice dictionary API, and it’s free! for reasonable loads, or until the good people at WordNik change their minds, I guess.

It seemed that it would be easy enough to use. The API is clear; they provide a developer portal with examples of how to use it, as well as a working try-it-out console where you can specify the URLs to tickle and see the return messages. All very good for development.

But… there are some glaring problems. The WordNik service requires registration, at which point the developer receives an API key. That in itself is standard practice. But, confoundingly, I could not find a single document describing how to use this API key in a request, nor a single programming example showing how to use it. I couldn’t even find a statement explicitly stating that authentication was required. While the use of an API key is pretty standard, the way to pass the API key is not. Google does it one way, other services do it another. The simplest way to document the authentication model is to provide a short paragraph and show a few examples. Wordnik didn’t do this. Doesn’t do this. I learned how to use the WordNik service by googling and finding hints from 2009 on someone else’s site. That should not be acceptable for any organization offering a programmable service.

In the end I built the thing – it’s an emacs plugin if you must know, and it uses url.el and json.el to connect to Wordnik and display the definition of the word at point. But it took much longer, and involved much more frustration than was necessary. This is not something that API providers want to repeat.


The benefits of  interconnecting disparate piece-parts have long been tantalizingly obvious.  You could point to v0.9 of the SOAP spec in 1999 as a key point in that particular history, but IBM made a good business of selling integration middleware (MQ they called it) for at least a decade before that.  Even so, only in the past few years have we seen the state of the art develop to enable this on an internet scale.  We have cheap, easy-to-use server frameworks; elastic hosting infrastructure; XML and JSON data formats; agreement on pricing models; an explosion in mobile clients.

Technologies and business needs are aligning, and this opens up new opportunities for integration. To take advantage of these opportunities, companies need the right tools, architectural guidance, and solid add-on building blocks.

I’ve recently joined Apigee, the API Company, to help companies exploit those opportunities.  I’ll say more about my job in the future. Right now, I am very excited to be in this place.

 

PHP and JSON and Unicode, Oh my!

Today I had a bit of a puzzle.  The gplus widget that I wrote for WordPress was rendering Google+ activity with a ragged left edge.

(Screenshot: the Google+ widget. The ragged edge is shown for *some* activities.)

As geeks know, HTML by default collapses whitespace, so in order for the ragged edge to appear there, a “hard whitespace” must have crept in somewhere along the line. For example, HTML has entities like &nbsp; that will do this. A quick look in the Chrome developer tool confirmed this.

I figured this was a simple fix:

$content = str_replace("&nbsp;"," ", $content);

Simple, right?

But that didn’t work.

Hmmm…. Next I added in some obvious variations on that theme:

$content = str_replace("&nbsp;"," ", $content);
$content = str_replace("\n"," ", $content);
$content = str_replace("\r"," ", $content);

That also did not work.

This data was simple json, coming from Google+, via their REST API. (But not directly. I built caching into the gplus widget so that it doesn’t hit the Google server every time it renders itself. It reads data from the cache file if it’s fresh, and only connects to Google+ if necessary). A call to json_decode() turns that json into a PHP object, and badda-boom. Right?

Turns out, no. The json had the byte sequence 0xC2 0xA0, which is the UTF-8 encoding of a non-breaking space (U+00A0). Not sure how that got into the datastream; I certainly did not put it there myself, explicitly. And when WordPress rendered the widget, that sequence somehow, somewhere got translated into &nbsp;, but only after the gplus widget code was finished.

I needed to replace the unicode sequence with a regular whitespace. This did the trick.

$content = str_replace("\xc2\xa0"," ", $content);

PHP, JSON, HTTP, HTML – these are all minor miracle technologies. Sometimes when gluing them all together you don’t get the results you expect. In those cases I’m glad to have a set of tools at my disposal, so I can match impedances.