restclient.el – sending API Requests directly from within Emacs

Hey, something new (to me)! The restclient.el library, for emacs. I tried it. I like it. I recommend it.

What does it do? It allows you to send REST requests (really just HTTP requests) right from emacs, interactively, and then pretty-prints the results if possible (if XML or JSON or image). It includes a simple text mode that allows you to define a set of requests and some variables that can be used in those requests. The whole thing is simple, easy, handy. You send the request at point with C-c C-c.
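A restclient buffer is just text; as a hedged sketch (the host and paths here are made-up placeholders), it looks something like this:

```
# -*- restclient -*-
:host = http://localhost:8080

# list the widgets
GET :host/widgets
Accept: application/json

# create a widget
POST :host/widgets
Content-Type: application/json

{ "name": "test" }
```

Put point inside one of the requests and hit C-c C-c to send it; the response shows up in another buffer.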

Separately, I have a library that reads .netrc files from within elisp. It’s a natural complement to restclient.el, for API endpoints that require HTTP Basic authentication. That covers lots of API endpoints, including OAuth token dispensaries that require the client_id and client_secret to be passed in as an HTTP Basic authentication header.
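This isn’t the elisp library itself, but the same lookup idea can be sketched in shell (the machine name, credentials, and file path here are all hypothetical):

```shell
# Pull the login and password for a given machine out of a .netrc file.
netrc_lookup() {
  # $1 = machine name, $2 = path to the netrc file
  awk -v m="$1" '
    { for (i = 1; i <= NF; i++) {
        if ($i == "machine")                cur = $(i+1)
        if (cur == m && $i == "login")      l = $(i+1)
        if (cur == m && $i == "password")   p = $(i+1)
      } }
    END { if (l != "") print l ":" p }
  ' "$2"
}

# Demo with a throwaway netrc file:
printf 'machine api.example.com login dino password s3cret\n' > /tmp/demo-netrc
netrc_lookup api.example.com /tmp/demo-netrc   # -> dino:s3cret
```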

Really nice. How did I not know about this elisp library?

One problem I had when using it: restclient.el helpfully uses a function json-pretty-print-buffer to pretty-print the buffer containing the response, if the content-type of the response is application/json.

I didn’t know that function, and it wasn’t defined in my emacs. This led to a runtime error, and a JSON buffer that was hard for me to visually parse.

But my emacs does have the similarly named json-prettify-buffer. So I shimmed one function to the other, which got restclient to succeed in its pretty-printing efforts.

The restclient.el module is not a huge thing, but it’s nice for us emacs people. I know about Postman, and use it. I know about Paw (but don’t use it). I know and use Fiddler. I am a big fan of curl, and sometimes curlish. This is a nice additional tool for the toolbox. Really handy.

Thanks, Jake McCrary, for writing up your experience with Emacs and restclient.el; your blog post is how I discovered it.  And thanks of course to Pavel Kurnosov, the original author of the restclient.el library. Thanks for sharing.

EDIT – I made a change in restclient.el to fix an issue that causes an extra unintended newline to be appended to the last form parameter. This issue cost me about 90 minutes of debugging my JWT verification code, bummer! My change just trims trailing newlines from the entity being sent. This will be a problem for you if you want to send an entity that ends in several newlines. Find my fixed restclient.el here.

Pre-request script for Postman, to calculate HttpSignature

If you do REST, you probably have a favorite REST client testing tool.
Mine is Postman, a Google Chrome app.

Postman has a nifty feature called Pre-request scripts, which allows you to write some Javascript code that performs a calculation and then reads and writes the “environment” object for the request. This means you can calculate … hashes or digests or signatures or anything you like.

My script calculates an HMAC-SHA256 HttpSignature, using the keyId and secret-key embedded in the environment. It also computes a digest on the message payload. Postman helpfully includes CryptoJS in the JS sandbox, so it’s easy to do HMAC and SHA.

In this particular case, the HttpSignature verification on the server requires 2 headers (date and digest) plus the ‘(request-target)’ value. The digest is a SHA-256 of the payload, which is then base64 encoded.
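Outside of Postman, the same computation can be sketched with openssl. The secret, keyId, and request path here are hypothetical placeholders; the point is the shape of the digest and the signing string:

```shell
# Digest + HttpSignature computation, sketched with the openssl CLI.
secret='my-secret-key'                               # placeholder
date_hdr=$(date -u '+%a, %d %b %Y %H:%M:%S GMT')
payload='{"hello":"world"}'

# Digest header value: base64 of the SHA-256 of the payload
digest="SHA-256=$(printf '%s' "$payload" | openssl dgst -sha256 -binary | openssl base64 -A)"

# The signing string covers (request-target), date, and digest, one per line.
signing_string="(request-target): post /api/message
date: ${date_hdr}
digest: ${digest}"

signature=$(printf '%s' "$signing_string" | openssl dgst -sha256 -hmac "$secret" -binary | openssl base64 -A)

printf 'Signature: keyId="my-key-id",algorithm="hmac-sha256",headers="(request-target) date digest",signature="%s"\n' "$signature"
```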

Anyone can start with this and modify it to do other variations.
Good luck!

Addendum

I should have mentioned this: Postman, even the latest Chrome app version, uses XMLHttpRequest to send out requests. XHR is purposefully limited, in the specification, to restrict some headers from being set explicitly on outbound requests. The list of restricted headers includes Origin, Date, Cookie, Via, and others. The reason for this restriction: it is desired that the user-agent be fully in control of such request headers.

My example above uses an HttpSignature that signs the Date header. This means the code must also SET the Date header to a known value; in my case I am using a value generated by the pre-request script.


The value corresponds to “now”, the current moment. But the point is the value has to be absolutely known.

This is not possible in standard Postman, because of the XHR restriction. The effect you will see is that the intended Date header silently does not get sent with the request.

This may confound you if you don’t have an ability to trace the request on the receiving (server) side. In the context of a request that uses HttpSignature, the server will throw an error saying “Missing Date header”.

But! In Postman v0.9.6 and above, it is possible to configure Postman with something the Postman people call “the Interceptor plugin”, which then allows the lifting of this restriction. The Date header gets sent, and everything works.

If you don’t want to rely on the Interceptor plugin, and you want the HttpSignature to include the date value, then you’ll have to use a differently named header to hold the date. Use X-Date or anything other than “Date”. You need to change the client as well as the server of course, to make everything hold together.

Online calculator for SHA and HMAC-SHA

Here’s a thing I built. It’s just a webpage that calculates SHA-(1,224,256,384,512) and HMAC with the same algorithms.

I was using this to help with building a system that relies on HttpSignature. Developers need some help in constructing and validating their HMACs and SHAs.
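The same calculations work offline with the openssl CLI, if you prefer (swap -sha256 for -sha1, -sha224, -sha384, or -sha512; the key here is a placeholder):

```shell
# SHA-256 of a string; this is the well-known digest of "hello":
printf '%s' 'hello' | openssl dgst -sha256
# -> 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824

# HMAC-SHA256 of the same string, with a (hypothetical) secret key:
printf '%s' 'hello' | openssl dgst -sha256 -hmac 'my-secret'
```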

The spec formerly known as Swagger is now OpenAPI


Swagger has been renamed! Three weeks ago. I didn’t realize this, and (forgive me) I’ve been continuing to use the term “swagger” when I really should have been using “OpenAPI”, in the time since.


Helpfully, Marsh, an esteemed colleague of mine, has produced a slackbot to remind me to use the word “OpenAPI” every time I type the… uh… old word… in slack chats. Now, I just need that slackbot to follow me around and remind me every time I *say* the old word.

There’s a new group, the OpenAPI Initiative, whose members include IBM, Google, Apigee, Intuit, Microsoft, Paypal… these members will govern the evolution of the spec.

REST Assured (hahahaha! ya get it?) that Apigee will be building some nice innovations on top of the OpenAPI spec. Exciting things coming soon. You can already see the beginnings at apistudio.io.


It’s not difficult to imagine some interesting possible paths forward, from that tooling.

And, omigosh, I just realized that I haven’t posted an article here in about 6 months! Wow I must have been busy…

RESTful is hardly harmful.

A provocative essay came up on Hacker News today, entitled RESTful considered harmful.

The summary of the essay:

  • JSON is bloated in comparison to protobufs and similar binary protocols
  • There are no interface contracts or data schema
  • HATEOAS doesn’t work
  • No direct support for batching, paging, sorting, etc – eg no SQL semantics
  • CRUD is too limited
  • No, really, CRUD is too limited
  • HTTP Status codes don’t naturally map to business semantics
  • there’s no queueing, or asynchrony
  • There are no standards
  • Backward compatibility is hard

Let’s have a look at the validity of these concerns.

1. JSON is bloated in comparison to protobufs

The essay cites “one tremendous advantage of JSON”: human readability, and then completely discounts this advantage by saying that it’s bloated. It really is a tremendous advantage, which is why XML won over MQ’s binary protocol and the XDR from Sun RPC, and the NDR from DCE RPC, and every other frigging binary protocol. And readability is why JSON displaced XML.

Ask yourself this: what is the value of readability versus the performance advantages of the alternatives, like Thrift or protobufs? Is readability worth 1x the improved efficiency you might get with protobufs? 2x? I believe that for many people, it’s worth 100x. It trumps all else. For uber-experts, it’s deceptively attractive to wave away the advantage of human-readability. For the rest of the world, for 97% of developers, it’s a huge, Huge, HUGE advantage. For high-speed financial trades, JSON is wrong. For Google’s internal interfaces, wrong. For most of the world, RIGHT.

AND as the essay notes, REST doesn’t prescribe JSON. Or XML. Or anything. There’s a content-type header, and clients and servers can negotiate it. If the client says Accept: application/x-protobuf, and the server can send it, bliss for you. So this point – “JSON is bloated” – is not only not valid (false) in the first place, it’s also not an argument against REST.

2. There are no interface contracts or data schema

This is a feature. OMG, have we not tried this enough times? Did this guy skip his “History of IDL compilers” course in the Computer History department at school? Sun RPC IDL. DCE RPC IDL. Corba IDL. WSDL, ferpeetsake! XML Schema!!

It’s pretty straightforward to deliver plain-old-XML over HTTP, which is quite RESTful. More popular is JSON-over-HTTP. Both of those have schema languages available. Few people embrace them, though. Why? Because IDLs and schema languages are too much structure, and they handcuff people more than they help them. We have fortunately learned from the past. There are more tools coming in this area, for those who wish to embrace them. See apistudio.io.

3. HATEOAS doesn’t work

Mmmmm, yep. No argument here. In my experience, nobody really uses this, in practice. Pragmatic REST is what people do, and it generally does not use HATEOAS.

4. no SQL semantics

Uhhuh, true. This has been addressed with things like OData. If you want SQL Semantics, seek solutions, don’t just complain.

5. CRUD is too limited

Really? This is a problem? That you might need a switch statement in your code to handle different types of events? Really?

6. CRUD is really too limited

….

Mmmmm, sorry. I have to stop now. I’m completely bored of responding to this essay by now. Except for one more:

10. Backward compatibility is hard

This has NOTHING to do with REST. This is just true. Back compat in any interface is tricky.


In summary, I don’t find any of the arguments compelling.

Let me draw an analogy. The position in this essay is like saying “Oil is no good as a transportation fuel.” Now, oil has its drawbacks! Oil is dirty. We can imagine alternatives that are better in theory. Even today, in specific local situations (daily use, short trips, urban travel) electric cars are better, MUCH better, than fossil-fuel based cars. (And bicycles are even better than electric cars.) But gasoline-powered cars deliver massive utility to billions of people. Gasoline refueling stations are everywhere. The delivery system for gasoline is mature and redundant. The world RUNS, very effectively, on gasoline-powered transport, by and large. Objectively, oil is VERY GOOD as a transportation fuel.

Sure, we’ll evolve better approaches in the future. That’s great. And sure, we can imagine a world with electric-powered vehicles. But today, in the world of reality, Oil wins.

And likewise Pragmatic REST, HTTP, JSON, and schema-less interfaces are winning. We’ll evolve better approaches. But today, this platform wins.

HTTP, HTML, Javascript, and JSON are ubiquitous, are the foundation of the web, and are not going anywhere. Any architect is free to choose other options, and they might have good reasons for doing so. On the other hand the vast majority of installations won’t benefit from using protobufs or thrift, or some non-HTTP protocol. Pragmatic REST, JSON and HTTP are very very safe choices in the vast majority of scenarios.

Cheers

I don’t see the point in Revoking or Blacklisting JWT

I heard someone asking today for support for Revocation of JWT, and I thought
about it a little, and decided I don’t see the point.

Specifically, I don’t see the point of the process described in this post regarding “Blacklisting JWT in express-jwt”. I believe that it’s possible to blacklist JWT. I just don’t see the point.

Let’s take a step back and look at OAuth

For those unaware, JWT refers to JSON Web Token, which is a type of token that can be used in APIs. The format of JWT is self-describing.

Here’s the key problem tokens address: how does a server decide whether to honor or reject a request? It’s a matter of authorization. OAuthV2 has been proposed and is now being used by the industry as the model or framework for enabling authorization in API-oriented apps. Basically it says, “give apps tokens, then grant access based on the token.”

Often the way things work under the OAuth framework is:

  1. an app running on a mobile phone connects to a token dispensary (a server) to request a token
  2. the server requires the client (==app) to provide some credentials before generating and dispensing a token. Sometimes the server also requires user authentication before delivering a token. (This is done in the Authorization Code grant or the password grant.)
  3. the client app then sends this token to a different server to ask for services.
  4. the API server evaluates the token before granting service. Often this requires contacting the original token dispensary to see if the token is good, and to see if the token should be honored for the particular service being requested.

You can see there are three parties in the game: the app, the token dispensary, and the API server.

One handy optimization is to put the API endpoint behind an OAuth-aware proxy server, like Apigee Edge. (Disclaimer: I work for Apigee). The app then contacts Edge for a token (via POST /token). If the credentials are good, Edge generates and stores an opaque token, which looks like n06ztxcf2bRpN42cDwVUNvroGOO6tMdt, and delivers it back to the app. The app then requests service (via GET /service, or whatever), passing the previously obtained token. Edge sees this request, extracts the token within it, evaluates whether the token is good, and either passes the request through to the API endpoint or rejects it based on the token status.

The key thing: these tokens are opaque. The app doesn’t know what that token is, beyond a string of characters. The app cannot tell what the token is good for, unless it asks the token dispensary, which is the final arbiter. Sometimes when dispensing the token, the token dispensary also delivers metadata about the token, like: expiry, scopes, and other attributes. But that is not required, and not always done. So, bearer tokens are often opaque, and they are opaque by default in Apigee Edge.

And by “Bearer”, we mean… an app that possesses a token is presumed to “own” the token, and should be granted service based on that token alone. In other words, the token is a secret. It’s like cash money – if you lose it, someone else can spend it. But not exactly like cash. An opaque token is more like a promissory note or an IOU; to determine if it’s worth anything you need to go back to the issuing party, to ask “are you willing to pay up on this note?”

How is JWT different?

JWT is a different kind of OAuth token. OAuth is just a framework, and does not stipulate exactly the kind of token that needs to be generated and delivered. One type of token is the opaque bearer kind. JWT is an alternative format. Rather than being an opaque string, JWT is a self-describing format for bearer tokens. Generally, a JWT includes an encoded payload that can be decoded and read by anyone, and that payload contains a bunch of claims. The standard set of claims includes: when the token was generated (“issued at”), who generated it (the “issuer”), the intended audience, the expiry, and other things. JWT can include custom claims, such as “the user is a good person”. But more often the custom claim is: “this user is authorized to invoke /serviceA at endpoint http://example.com”, although this kind of claim is shortened quite a bit and is encoded in JSON, rather than in English.

Optionally accompanying that payload with its claims is a signature, which can be verified by any party possessing the corresponding public key (or, when a shared secret key is used for signing, that secret key). This is what is meant by “self-describing”. The self-describing nature of JWT is the opposite of opaque. [JWT can be unsigned, can be signed, or can be encrypted. The encryption part is an optional part of the spec.]
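To make the self-describing part concrete, here’s a sketch with openssl (the secret and claims are placeholders; a real app would use a JWT library). The token is just base64url(header).base64url(claims).base64url(HMAC), and any holder of the secret can check the signature locally:

```shell
# Mint and locally verify an HS256-signed JWT.
b64url() { openssl base64 -A | tr '+/' '-_' | tr -d '='; }

secret='my-shared-secret'   # placeholder
header=$(printf '%s' '{"alg":"HS256","typ":"JWT"}' | b64url)
claims=$(printf '%s' '{"iss":"https://issuer.example.com","sub":"alice","exp":1700000000}' | b64url)
sig=$(printf '%s' "$header.$claims" | openssl dgst -sha256 -hmac "$secret" -binary | b64url)
jwt="$header.$claims.$sig"
echo "$jwt"

# Any party holding the secret can verify, without contacting the issuer:
check=$(printf '%s' "$header.$claims" | openssl dgst -sha256 -hmac "$secret" -binary | b64url)
[ "$sig" = "$check" ] && echo "signature verifies"
```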

(Commercial message: I said above that Apigee Edge generates opaque bearer tokens by default. You can also configure Apigee Edge to generate signed JWT.)

Why Self-describing Tokens?

The main benefit of a model that uses self-describing tokens is that the API endpoint need not contact the token dispensary in order to determine if the token is good, not-expired, and if a request bearing such a token ought to be honored. In other words, JWT supports federation. One party issues the token, another party can verify it, without contacting the issuer. Remember, JWT is a bearer model, which means the possessor of the token is presumed to be authorized to get service based on what’s in the token. This is truly like cash money this time, because … when honoring a JWT, the API endpoint need not contact the issuer, just as when accepting a $20 bill, you don’t have to contact the US Treasury to see if the bill is worth $20.

So how ’bout Revocation of JWT?

This is a long story and I’m finally getting to the point: If you want JWT with powers to revoke the token, then you abandon the federation benefit.

Making the JWT self-describing means no honoring party needs to contact the issuer. Just verify the signature (verify the $20 bill is real), and then grant service. If you add in revocation as a requirement, then the honoring party needs to contact the issuer: “I have a $20 bill with serial number T128-DCQ-2872JKDJ; should I honor it?”

It means a synchronous call across the two parties. Which means federation is effectively broken. You abandon the federation benefit.

The corollary to the above is that you also still incur all the overhead of the JWT handling – the signing and verification. So you get all the costs of JWT and none of the benefits.

If revocation of bearer tokens is important to you, you could do the same thing with an opaque bearer token and eliminate all the fussy signature and validation stuff.

When you’re using an API Proxy server like Apigee Edge for both issuing and verifying+validating tokens, then there is no expensive additional remote call to check the revocation status. But you still lack the federation benefit, and you still incur this signing and verification nonsense.

I think when people ask for the ability to handle JWT with revocation, they don’t really understand what they’re asking.

Using the Drupal Services module for REST access to entities, part 3

What’s Going on Here?

In part 1 and part 2 of this series, I talked about Drupal REST services, and authenticating, and querying data. Be sure to review those before continuing with this post.

This article talks about how to create or update data on Drupal using REST APIs. It will use the same authentication foundation as described in Part 2.

Update All the Things!

What kinds of things can you create or update or delete with the Drupal REST API?

  • users
  • forum topics
  • articles
  • taxonomy categories
  • taxonomy terms
  • comments
  • and so on…

Pretty cool. Also, when creating entities like users, all the normal Drupal hooks will run. So if you programmatically create a new user, and you have a new-user hook that sends out an email, then that hook will run, and the newly-created user will get an email from Drupal. The API provides a nice way to provision a set of users into Drupal all at one go, rather than asking each individual user to visit the site and self-register.

There are also special REST endpoints for doing things like resetting passwords or resending the welcome email.

So let’s look at some request payloads!

Modify an Existing Article

Request:

curl -i -X PUT \
  -H Cookie:SESS02caabc123=ShBy6ue5TTabcdefg \
  -H X-CSRF-Token:w98sdb9udjiskdjs \
  -H Accept:application/json \
  -H content-type:application/json \
  http://example.com/rest/node/4 \
  -d '{
  "title": "about multiple themes....",
  "body": {
    "und": [{
      "value": "how to demonstrate multiple themes?. ...",
      "summary": "multiple themes?",
      "format": "filtered_html",
      "safe_value": "themes",
      "safe_summary": "themes..."
    }]
  }
}'

Create a Forum Topic

To create a new Forum post (Drupal calls it a Forum topic):

Request:

curl -i -X POST \
  -H Cookie:SESS02caabc123=ShBy6ue5TTabcdefg \
  -H X-CSRF-Token:w98sdb9udjiskdjs \
  -H Accept:application/json \
  -H content-type:application/json \
  http://example.com/rest/node \
  -d '{
    "type": "forum", 
    "title": "test post?", 
    "language": "und",
    "taxonomy_forums": { "und": "1" },
    "body": {
      "und": [{
        "value" : "This is the full text of the forum post",
        "summary": "this is a test1",
        "format": "full_html"
      }]
    }
  }'

This part…

        "taxonomy_forums": { "und": "1" },

…tells which forum to post to. Actually the “parent forum” is a taxonomy term, not a forum container. Nodes carry a taxonomy term on them, to identify which forum they belong to.

If you specify an invalid forum id, you will get this json error response:

406 Not Acceptable
...
{
  "form_errors": {
    "taxonomy_forums][und": "An illegal choice has been detected. Please contact the site administrator.",
    "taxonomy_forums": "Select a forum."
  }
}

Here’s another “create forum topic” request, to a different forum:

curl -i -X POST \
  -H Cookie:SESS02caabc123=ShBy6ue5TTabcdefg \
  -H X-CSRF-Token:w98sdb9udjiskdjs \
  -H Accept:application/json \
  -H content-type:application/json \
  http://example.com/rest/node \
  -d '{
    "type": "forum", 
    "title": "test post #2", 
    "language": "und",
    "taxonomy_forums": { "und": "5" },
    "body": {
      "und": [{
        "value" : "This is a test post. please ignore.",
        "summary": "this is a test1",
        "format": "full_html"
      }]
    }
  }'

Notice the alternate forum id in that request, as compared to the prior one:

 "taxonomy_forums": { "und": "5" } 

Determine the available forums and ID numbers

Step 1: query the vocabulary that corresponds to “forums”:

curl -i -X GET  \
 -H accept:application/json \
 -H Cookie:SESS02caabc123=ShBy6ue5TTabcdefg \
 'http://example.com/rest/taxonomy_vocabulary?parameters\[machine_name\]=forums' 

Example Response:

[{
  "vid": "1",
  "name": "Forums",
  "machine_name": "forums",
  "description": "Forum navigation vocabulary",
  "hierarchy": "0",
  "module": "forum",
  "weight": "-10",
  "uri": "http://myserver/rest/taxonomy_vocabulary/1"
}]

The important part is the “vid” – which is the vocabulary ID.

Step 2: Query the terms for that vocabulary. This gives all forum names and IDs.

curl -i -X GET \
 -H Cookie:SESS02caabc123=ShBy6ue5TTabcdefg \
 -H Accept:application/json \
 -H content-type:application/json \
 'http://example.com/rest/taxonomy_term?parameters\[vid\]=1' 

Example response:

Response:

[{
  "tid": "8",
  "vid": "1",
  "name": "Getting Started",
  "description": "",
  "format": null,
  "weight": "0",
  "uuid": "7ff7ce10-0082-46f6-9edd-882410b7c304",
  "depth": 0,
  "parents": ["0"]
}, {
  "tid": "1",
  "vid": "1",
  "name": "General discussion",
  "description": "",
  "format": null,
  "weight": "1",
  "uuid": "dbf914e7-42c2-45f6-b77a-e66a0da72310",
  "depth": 0,
  "parents": ["0"]
}, {
  "tid": "4",
  "vid": "1",
  "name": "Security and Privacy Issues",
  "description": "",
  "format": null,
  "weight": "2",
  "uuid": "7496bfd7-2cb8-4f87-a1e4-f45b1956a01e",
  "depth": 0,
  "parents": ["0"]
}]

The tid in each array element is what you must use in "taxonomy_forums": { "und": "4" } when POSTing a new forum node.

Delete a node

Deleting a node means removing an article, a forum topic (post), a comment, etc.

The request:

curl -i -X DELETE \
 -H Cookie:SESS02caabc123=ShBy6ue5TTabcdefg \
 -H X-CSRF-Token:w98sdb9udjiskdjs \
 -H Accept:application/json \
 http://example.com/rest/node/8

Example response:

  [true]

Weird response, but ok.

By the way, if the cookie and token have timed out, then for any of these create, update, or delete calls you may see this response:

["Access denied for user anonymous"]. 

There is no explicit notice that the cookie has timed out. The remedy is
to re-authenticate and submit the request again.

Delete a taxonomy term

Deleting a taxonomy term in the taxonomy vocabulary for forums would imply deleting a forum.

curl -i -X DELETE \
  -H Cookie:SESS02caabc123=ShBy6ue5TTabcdefg \
  -H X-CSRF-Token:w98sdb9udjiskdjs \
  -H Accept:application/json \
  http://dev-wagov1.devportal.apigee.com/rest/taxonomy_term/7

Create a taxonomy term

Creating a taxonomy term in the taxonomy vocabulary for forums would imply creating a forum.

curl -i -X POST \
  -H Cookie:SESS02caabc123=ShBy6ue5TTabcdefg \
  -H X-CSRF-Token:w98sdb9udjiskdjs \
  -H Accept:application/json \
  -H content-type:application/json \
  http://dev-wagov1.devportal.apigee.com/rest/taxonomy_term \
  -d '{
    "vid": "1",
    "name": "Another Forum on the site",
    "description": "",
    "format": null,
    "weight": "10"
  }'

The UUID and TID for the forum will be generated for you. Unfortunately, the tid will not be returned for you to reference. You need to query to find it. Use the name of the forum you just created:

Request:

curl -i -X GET \
  -H Cookie:SESS02caabc123=ShBy6ue5TTabcdefg \
  -H Accept:application/json \
  'http://example.com/rest/taxonomy_term?parameters\[name\]=Another+Forum+on+the+site'

Example Response:

[{
  "tid": "36",
  "vid": "1",
  "name": "Another Forum on the site",
  "description": "",
  "format": null,
  "weight": "10",
  "uuid": "dcbe0118-c160-4556-b0b6-1813241bb851",
  "uri": "http://example.com/rest/taxonomy_term/36"
}]

Make sure you use unique names for these taxonomy terms.

Create a new user

curl -i -X POST \
    -H Cookie:SESS02caabc123=ShBy6ue5TTabcdefg \
    -H X-CSRF-Token:w98sdb9udjiskdjs \
    -H accept:application/json \
    -H content-type:application/json \
    http://example.com/rest/user -d '{
      "name" : "TestUser1",
      "mail" : "Dchiesa+Testuser1@apigee.com",
      "pass": "secret123",
      "timezone": "America/Los_Angeles", 
      "field_first_name": {
          "und": [{ "value": "Dino"}]
      },
      "field_last_name": {
          "und": [{ "value": "Chiesa"}]
      }
   }'

Response:

{"uid":"7","uri":"http://example.com/rest/user/7"}

Resend the welcome email

curl -i -X POST \
    -H Cookie:SESS02caabc123=ShBy6ue5TTabcdefg \
    -H X-CSRF-Token:w98sdb9udjiskdjs \
    -H accept:application/json \
    -H content-type:application/json \
    http://example.com/rest/user/7/resend_welcome_email -d '{}'

Reset a user password

curl -i -X POST \
    -H Cookie:SESS02caabc123=ShBy6ue5TTabcdefg \
    -H X-CSRF-Token:w98sdb9udjiskdjs \
    -H accept:application/json \
    -H content-type:application/json \
    http://example.com/rest/user/7/password_reset -d '{}'

Update a user

This shows how to set the user status to 0, in order to de-activate the user.

curl -i -X PUT \
    -H Cookie:SESS02caabc123=ShBy6ue5TTabcdefg \
    -H X-CSRF-Token:w98sdb9udjiskdjs \
    -H accept:application/json \
    -H content-type:application/json \
    http://example.com/rest/user/6 -d '{
      "status" : "0"
   }'

You could of course update any of the other user attributes as well.


That ought to get you started with creating and updating things in Drupal via the REST Server.

Remember, the basic rules are:

  • pass the cookie for each REST query call
  • Pass the cookie and X-CSRF-Token when doing create, update or
    delete
  • have fun out there!

Good luck. Contact me here if these examples are unclear.

Using the Drupal Services module for REST access to entities, part 2

Be sure to start with Part 1 of this series.

What’s Going on Here?

To recap: I’ve enabled the Services module in Drupal v7, in order to enable REST calls into Drupal, to do things like:

  • list nodes
  • create entities, like nodes, users, taxonomy vocabularies, or taxonomy terms
  • delete or modify same

Clear? The prior post talks about the preparation. This post talks about some of the actual REST calls. Let’s start with Authentication.

Authentication

These are the steps required to make authenticated calls to Drupal via the Services module:

  1. Obtain a CSRF token
  2. Invoke the login API, passing the CSRF token.
  3. Get a Cookie and new token in response – the cookie is of the form {{Session-Name}}={{Session-id}}. Both the session name and id are returned in the json payload as well, along with a new CSRF token.
  4. Pass the cookie and the new token to all future requests
  5. Logout when finished, via POST /user/logout

The Actual Messages

OK, Let’s look at some example messages.

Get a CSRF Token

Request:

curl -i -X POST -H content-type:application/json \ 
  -H Accept:application/json \ 
  http://example.com/rest/user/token  

The content-type header is required, even though there is no payload sent with the POST.

Response:

HTTP/1.1 200 OK
Cache-Control: no-cache, must-revalidate, post-check=0, pre-check=0
Content-Type: application/json
Etag: "1428629440"
Expires: Sun, 19 Nov 1978 05:00:00 GMT
Last-Modified: Fri, 10 Apr 2015 01:30:40 GMT
Vary: Accept
Content-Length: 55
Accept-Ranges: bytes
Date: Fri, 10 Apr 2015 01:30:51 GMT
Connection: keep-alive

{"token":"woalC7A1sRzpnzDhp8_rtWB1YlXBRalWMSODDX1yfUI"}

That’s a token, surely. I haven’t figured out what I need that token for. It’s worth pointing out that you get a new CSRF token when you login; see below. So I don’t do anything with this token, and I never use the call to /rest/user/token.

Login

To do anything interesting, your app needs to login; aka authenticate. After login, your app can invoke regular transactions, using the information returned in that response. Let’s look at the messages.

Request:

curl -i -X POST -H content-type:application/json \
    -H Accept:application/json \
    http://example.com/rest/user/login \
    -d '{ 
     "username" : "YOURUSERNAME",
     "password" : "YOURPASSWORD"
    }'

Response:

HTTP/1.1 200 OK
Content-Type: application/json
Expires: Sun, 19 Nov 1978 05:00:00 GMT
Last-Modified: Fri, 10 Apr 2015 01:33:35 GMT
Set-Cookie: SESS02caabc123=ShBy6ue5TTabcdefg; expires=Sun, 03-May-2015 05:06:55 GMT; path=/; domain=.example.com; HttpOnly
...
{
  "sessid": "ShBy6ue5TTabcdefg",
  "session_name": "SESS02caabc123",
  "token": "w98sdb9udjiskdjs",
  "user": {
    "uid": "4",
    "name": "YOURUSERNAME",
    "mail": "YOUREMAIL@example.com",
    "theme": "",
    "signature": "",
    "signature_format": null,
    "created": "1402005877",
    "access": "1426280563",
    "login": 1426280601,
    "status": "1",
    "timezone": null,
    "language": "",
    "picture": "0",
    "data": false,
    "uuid": "3e1e948e-940e-4a05-bd7a-267c6671c11b",
    "roles": {
      "2": "authenticated user",
      "3": "administrator"
    },
    "field_first_name": {
      "und": [{
        "value": "Dino",
        "format": null,
        "safe_value": "Dino"
      }]
    },
    "field_last_name": {
      "und": [{
        "value": "Chiesa",
        "format": null,
        "safe_value": "Chiesa"
      }]
    },
    "metatags": [],
    "rdf_mapping": {
      "rdftype": ["sioc:UserAccount"],
      "name": {
        "predicates": ["foaf:name"]
      },
      "homepage": {
        "predicates": ["foaf:page"],
        "type": "rel"
      }
    }
  }
}

There are a few data items that are of particular interest.

Briefly, in subsequent calls, your app needs to pass back the cookie specified in the Set-Cookie header. BUT, if you’re coding in Javascript or PHP or C# or Java or whatever, you don’t need to deal with managing cookies, because the cookie value is also contained in the JSON payload. The cookie has the form {SESSIONNAME}={SESSIONID}, and those values are provided right in the JSON. With the response shown above, subsequent GET calls need to specify a header like this:

Cookie: SESS02caabc123=ShBy6ue5TTabcdefg

Subsequent PUT, POST, and DELETE calls need to specify the Cookie as well as the CSRF header, like this:

Cookie: SESS02caabc123=ShBy6ue5TTabcdefg
X-CSRF-Token: w98sdb9udjiskdjs

In case it was not obvious: the value of the X-CSRF-Token header is the value of the “token” property in the JSON response. Also: your values for the session name, session id, and token will be different from the ones shown here. Just sayin.
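To make that concrete, here’s a minimal Python sketch of pulling the session cookie and CSRF token out of the login response. The helper name is my own invention, not part of any Drupal client library:

```python
import json

def auth_headers(login_response_text, for_write=False):
    """Build the headers for subsequent calls from the login response body."""
    login = json.loads(login_response_text)
    # The cookie is {session_name}={sessid}; both appear in the JSON payload.
    headers = {"Cookie": "{0}={1}".format(login["session_name"],
                                          login["sessid"])}
    if for_write:
        # PUT, POST, and DELETE also need the CSRF token.
        headers["X-CSRF-Token"] = login["token"]
    return headers

# Using the (abbreviated) login response shown above:
example = ('{"sessid": "ShBy6ue5TTabcdefg", '
           '"session_name": "SESS02caabc123", '
           '"token": "w98sdb9udjiskdjs"}')
print(auth_headers(example, for_write=True))
```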

Get All Nodes

OK, the first thing to do once authenticated: get all the nodes. Here’s the request to do that:

Request:

curl -i -X GET \
  -H Cookie:SESS02caabc123=ShBy6ue5TTabcdefg \
  -H Accept:application/json \
  http://example.com/rest/node

The response gives up to “pagesize” elements; the default is 20 on my system. You can also append a query parameter, for example ?pagesize=30, to increase this. To repeat: you do not need to pass the X-CSRF-Token header for this query. The CSRF token is required for update operations (POST, PUT, DELETE), not for GET.

Here’s the response:

[{
  "nid": "32",
  "vid": "33",
  "type": "wquota3",
  "language": "und",
  "title": "get weather for given WOEID (token)",
  "uid": "4",
  "status": "1",
  "created": "1425419882",
  "changed": "1425419904",
  "comment": "1",
  "promote": "0",
  "sticky": "0",
  "tnid": "0",
  "translate": "0",
  "uuid": "9b0b503d-cdd2-410f-9ba6-421804d25d4e",
  "uri": "http://example.com/rest/node/32"
}, {
  "nid": "33",
  "vid": "34",
  "type": "wquota3",
  "language": "und",
  "title": "get weather for given WOEID (key)",
  "uid": "4",
  "status": "1",
  "created": "1425419882",
  "changed": "1425419904",
  "comment": "1",
  "promote": "0",
  "sticky": "0",
  "tnid": "0",
  "translate": "0",
  "uuid": "56d233fe-91d4-49e5-aace-59f1c19fbb73",
  "uri": "http://example.com/rest/node/33"
}, {
  "nid": "31",
  "vid": "32",
  "type": "cbc",
  "language": "und",
  "title": "Shorten URL",
  "uid": "4",
  "status": "0",
  "created": "1425419757",
  "changed": "1425419757",
  "comment": "1",
  "promote": "0",
  "sticky": "0",
  "tnid": "0",
  "translate": "0",
  "uuid": "8f21a9bc-30e6-4232-adf9-fe705bad6049",
  "uri": "http://example.com/rest/node/31"
}
...
]

This is an array, which some people say a REST resource should never return. (Because: what if you wanted to add a property to the response? Where would you put it?) But anyway, it works. You don’t get ALL the nodes; you get only a page’s worth. Also, you don’t get all the details for each node. But you do get the URL for each node, which is your way to get the full details of a node.

What if you want the next page? According to my reading of the scattered Drupal documentation, these are the query parameters accepted for queries on all entity types:

  • (string) fields – A comma separated list of fields to get.
  • (int) page – The zero-based index of the page to get, defaults to 0.
  • (int) pagesize – Number of records to get per page.
  • (string) sort – Field to sort by.
  • (string) direction – Direction of the sort. ASC or DESC.
  • (array) parameters – Filter parameters array, such as parameters[title]="test"

So, to get the next page, just send the same request, but with a query parameter, page=1. (The page index is zero-based, so the first page is page=0.)
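As a sketch, building the paged URL in Python with the parameter names listed above looks like this (the endpoint and helper name are hypothetical):

```python
from urllib.parse import urlencode

def node_index_url(base="http://example.com/rest/node", page=0, pagesize=20):
    # 'page' is zero-based, so the second page of results is page=1.
    return "{0}?{1}".format(base, urlencode({"page": page,
                                             "pagesize": pagesize}))

print(node_index_url(page=1, pagesize=30))
# http://example.com/rest/node?page=1&pagesize=30
```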

Get One Node

This is easy.

Request:

curl -i -X GET \
  -H Cookie:SESS02caabc123=ShBy6ue5TTabcdefg \
  -H Accept:application/json \
  http://example.com/rest/node/75

Response:

HTTP/1.1 200 OK
Content-Type: application/json

...
{
  "vid": "76",
  "uid": "4",
  "title": "Embedding keys securely into the app",
  "log": "",
  "status": "1",
  "comment": "2",
  "promote": "0",
  "sticky": "0",
  "vuuid": "57f3aade-d923-4bb5-8861-1d2c160a9fd5",
  "nid": "75",
  "type": "forum",
  "language": "und",
  "created": "1427332570",
  "changed": "1427332570",
  "tnid": "0",
  "translate": "0",
  "uuid": "026c029d-5a45-4e10-8aec-ac5e9824a5c5",
  "revision_timestamp": "1427332570",
  "revision_uid": "4",
  "taxonomy_forums": {
    "und": [{
      "tid": "89"
    }]
  },
  "body": {
    "und": [{
      "value": "Suppose I have received my key from Healthsparq.  Now I would like to embed that key into the app that I'm producing for the mobile device. How can I do this securely, so that undesirables will not be able to find the keys or sniff the key as I use it?",
      "summary": "",
      "format": "full_html",
      "safe_value": "Suppose I have received my key from Healthsparq. Now I would like to embed that key into the app that I'm producing for the mobile device. How can I do this securely, so that undesirables will not be able to find the keys or sniff the key as I use it?\n",
      "safe_summary": ""
    }]
  },
  "metatags": [],
  "rdf_mapping": {
    "rdftype": ["sioc:Post", "sioct:BoardPost"],
    "taxonomy_forums": {
      "predicates": ["sioc:has_container"],
      "type": "rel"
    },
    "title": {
      "predicates": ["dc:title"]
    },
    "created": {
      "predicates": ["dc:date", "dc:created"],
      "datatype": "xsd:dateTime",
      "callback": "date_iso8601"
    },
    "changed": {
      "predicates": ["dc:modified"],
      "datatype": "xsd:dateTime",
      "callback": "date_iso8601"
    },
    "body": {
      "predicates": ["content:encoded"]
    },
    "uid": {
      "predicates": ["sioc:has_creator"],
      "type": "rel"
    },
    "name": {
      "predicates": ["foaf:name"]
    },
    "comment_count": {
      "predicates": ["sioc:num_replies"],
      "datatype": "xsd:integer"
    },
    "last_activity": {
      "predicates": ["sioc:last_activity_date"],
      "datatype": "xsd:dateTime",
      "callback": "date_iso8601"
    }
  },
  "cid": "0",
  "last_comment_timestamp": "1427332570",
  "last_comment_name": null,
  "last_comment_uid": "4",
  "comment_count": "0",
  "name": "DChiesa",
  "picture": "0",
  "data": null,
  "forum_tid": "89",
  "path": "http://example.com/content/embedding-keys-securely-app"
}

As you know, in Drupal a node can represent many things. In this case, this node is a forum post. You can see that from the “type”: “forum” in the response.

Querying for a Specific Type of Node

Request:

curl -i -X GET \
  -H Cookie:SESS02caabc123=ShBy6ue5TTabcdefg \
  -H Accept:application/json \
  'http://example.com/rest/node?parameters\[type\]=forum'

Request:

curl -i -X GET \
  -H Cookie:SESS02caabc123=ShBy6ue5TTabcdefg \
  -H Accept:application/json \
  'http://example.com/rest/node?parameters\[type\]=faq'

Request:

curl -i -X GET \
  -H Cookie:SESS02caabc123=ShBy6ue5TTabcdefg \
  -H Accept:application/json \
  'http://example.com/rest/node?parameters\[type\]=article'

The response you get from each of these is in the same form as the response to the non-parameterized query (for all nodes), filtered to the given type. The backslash-escaping of the square brackets is necessary only when using curl within bash. If you’re sending this request from an app, you don’t need to backslash-escape the square brackets.

Logout

Request:

curl -i -X POST \
    -H content-type:application/json \
    -H Accept:application/json \
    -H Cookie:SESS02caabc123=ShBy6ue5TTabcdefg \
    -H X-csrf-token:xxxx \
    http://example.com/rest/user/logout -d '{}'

Notes: the values of the Cookie header and the X-CSRF-Token header are obtained from the response to the login call! Also, obviously, don’t call logout until you’re finished making API calls. After the logout call, the Cookie and X-CSRF-Token become invalid; discard them.

Response:

HTTP/1.1 200 OK
...
[true]

Pretty interesting as a response.

More examples, covering creating things and deleting things, in the next post in this series.

Using the Drupal Services module for REST access to entities, part 1


This is Part 1. See also, Part 2 and Part 3.

I’m not an expert on Drupal, but I do have some extensive experience designing and using APIs. (I work for Apigee.)

Recently I’ve been working with Drupal v7, and in particular, getting Drupal to expose a REST interface that would allow me to program against it. I want to write apps that read forum posts, write forum posts, read or post pages, create users, and so on.

Drupal is a server that manages entities, right? This function is primarily exposed via a web UI, but that UI is just a detail. Drupal should be able to expose an API that is similarly capable. Should be!

The bazaar is alive and well with Drupal. It seems that regardless of what you want to do with Drupal, there are 13 different ways to do it. Exposing Drupal entities as resources in a RESTful interface is no different. There are numerous modules designed to help with this task, some of which are complementary to each other, some of which are overlapping, and most of which are poorly documented. Every module has multiple versions, and every module works with multiple versions of Drupal. So figuring out the best way to proceed, for a Drupal novice like me, is not easy.

Disclaimer: What follows is what I’ve found. If a Drupal expert reads this and thinks, “Dude, you’re way off!” I am very willing to accept constructive suggestions. I’d like to know the best way to proceed. This is what I tried.

The Services Module

I used the Services module. There are other options – restws is one of them. I didn’t have a firm set of criteria for choosing one over the other, except that I fell into the pit of success more easily with the Services module. It seems to be more popular, and I found more examples for it via Google search.

Services 3.0 is now available. … Note that currently there is no upgrade path for Services 3, and it is not backwards compatible with older implementations of the API. Therefore some existing modules like JSON Server and AMFPHP will not work with it. …

Not that there aren’t problems with it. The lack of backwards compatibility on a programmable interface is a really bad sign (See the blockquote). That reflects poor planning on the part of the designers of that module. And then there is the lack of clear documentation for how to do most things.

Setup

The first thing: you need to obtain and activate the Services module. There’s a straightforward guide for doing this. I installed the module, then went to the Admin panel to ensure the REST Server was enabled. A screenshot is below.

screenshot-20150317-092348

More Setup

Next, you need to create a REST endpoint. To do so, still logged in as Admin, select Structure > Services. Click Add. Then specify rest, REST, and rest. Another screenshot.

screenshot-20150410-135352

That’s it. Your Drupal server is now exposing REST interfaces. You then need to click on “resources” to enable access to specific resources: users, nodes, taxonomy, taxonomy terms, and so on. And you’re all set.

Retrieving Nodes is Easy

Once you have the REST server enabled, getting an index of the nodes in a Drupal system is probably the most basic thing any programmer will want to do. Beyond that: creating a new node (posting a page or article), creating a user, and so on. For the Services module, there is a nice page that gives examples of this sort of basic thing. I’m not really a fan of the layout of that page of documentation; it seems to be all over the place, covering basic REST principles, describing REST testing tools, and finally giving samples of messages. Those things seem like they belong on separate, hyperlinked pages. But again, it’s the bazaar, and someone contributed that doc all by himself. If I think it could be better, I am welcome to edit that page, I guess.

Here’s one example request from that page:

POST http://services.example.com/rest/user/register
    Content-Type: application/json
    {
        "name":"services_user_1",
        "pass":"password",
        "mail":"services_user_1@example.com"
    }

This is something I can understand. Many of the other doc pages give jQuery example code. Ummmm… I don’t write in jQuery. Why not just show the messages that need to be sent to the Drupal server, and let the jQuery people figure out how to write their ajax calls?

The basic examples given there are good but you’ll notice there is nothing there about authentication. Nothing that shows how a developer needs to authenticate to Drupal via the Services module. That ought to be another hyperlinked page, no?

Authentication

There are multiple steps involved to authenticate:

  1. Obtain a CSRF token
  2. Invoke the login API, passing the CSRF token.
  3. Get a Cookie and new token in response – the cookie is of the form {{Session-Name}}={{Session-id}}. Both the session name and id are returned in the json payload as well, along with a new CSRF token.
  4. Pass the cookie and the new token to all future requests
  5. Logout when finished, via POST /user/logout

More detail on all of this in the next post.

Pretty psyched about Swagger Editor for APIs

I’m pretty excited about the Swagger editor. But to understand why, you first need to know what Swagger is all about.

Let’s take a step back. As of August 2014, total activity on smartphones and tablets accounted for ~60% of digital media time spent in the U.S. This unabated growth in mobile is driving the growth in enabling technologies: tools for developing apps, managing app communications, measuring app and data usage, analyzing usage and predicting behavior based on that usage. APIs are a key connective technology: innovative mobile apps use APIs to access data and services from companies like Uber or Twitter, or from government bodies like the State of Washington. APIs provide the linkage.

APIs are not solely about mobile apps. They can be used to connect “any app” to “any service”; indeed this website uses code running on the server to invoke the Twitter API to display tweets on the right hand side of this blog. But mobile is the driver. The Web is not driving the growth, nor is the Internet of Things; not in APIs, nor in any of the other enabling technologies. In the 2000s it was the Web. Tomorrow it will be IoT. Today, it is mobile.

OK, so what is Swagger? Swagger is a way to define and describe APIs: a language for stating exactly what an API offers. The description language is analogous to Interface Definition Languages going back to Sun’s RPC IDL, CORBA IDL, DCE IDL, or SOAP’s WSDL. Many of you reading this won’t recognize any of those names; it doesn’t matter. We don’t use most of those technologies any longer, and more importantly, we don’t use the metaphors those technologies imply: function shipping, remote procedure call, or distributed objects. While we have moved away from the tight coupling of binary protocols and toward technologies like REST and JSON and XML that enable more loosely-coupled interactions, we still recognize that it’s helpful to be able to formally describe programmable interfaces.

So Swagger is, at its heart, a way to describe a RESTful API. Many of you are Java developers and may be familiar with Swagger Annotations, which let you mark up JAX-RS server application code and then generate a Swagger definition from the implementation. Something like doxygen. This is cool, but it’s sort of a backwards approach. Getting the description of the API from the implementation is analogous to getting the blueprint for a building by taking pictures of the finished framing. Ideally you’d like to go in the other direction: first build the design (or blueprint, if you will) of the API, and then generate the implementation. My friend and colleague Marsh Gardiner discussed the design-first approach last year.

This is what Swagger can do. How does one produce a Swagger document? Well, if you’re an old codger like me, you might resort to using a text editor like emacs and its yaml-mode to hand-code the YAML. But a much easier approach is to use the Swagger Editor.

The API Description is basically “a model” of the API. And with that model, one can do all sorts of interesting things: generate a client-side library in one of various languages. Generate a server-side implementation stub. Generate a test harness. Generate documentation. In fact the Swagger project has had a doc-gen capability, named swagger-ui, since the early days of the project.

So what’s the upshot? Better tooling around APIs, including Swagger Editor and Swagger UI, as well as an API management layer as provided by Apigee Edge (disclaimer: I work for Apigee!), makes it easier for companies to expose capabilities as easy-to-consume APIs, and easier for developers to code against those APIs to build compelling experiences that run on mobile devices. So I’m pretty excited about the new tooling, and I am even more excited about the integration we will soon see between these new modeling tools and the existing API management tools already available.

Good stuff!