Using the Drupal Services module for REST access to entities, part 3

What’s Going on Here?

In part 1 and part 2 of this series, I talked about Drupal REST services: setting them up, authenticating, and querying data. Be sure to review those posts before continuing with this one.

This article shows how to create, update, and delete data in Drupal using REST APIs. It uses the same authentication foundation described in Part 2.

Update All the Things!

What kinds of things can you create or update or delete with the Drupal REST API?

  • users
  • forum topics
  • articles
  • taxonomy vocabularies
  • taxonomy terms
  • comments
  • and so on…

Pretty cool. Also, when creating entities like users, all the normal Drupal hooks will run. So if you programmatically create a new user, and you have a new-user hook that sends out an email, that hook will run, and the newly-created user will get an email from Drupal. The API provides a nice way to provision a set of users into Drupal all at one go, rather than asking each individual user to visit the site and self-register.

There are also special REST endpoints for doing things like resetting passwords or resending the welcome email.

So let’s look at some request payloads!

Modify an Existing Article

Request:

curl -i -X PUT \
  -H Cookie:SESS02caabc123=ShBy6ue5TTabcdefg \
  -H X-CSRF-Token:w98sdb9udjiskdjs \
  -H Accept:application/json \
  -H content-type:application/json \
  http://example.com/rest/node/4 \
  -d '{
  "title": "about multiple themes....",
  "body": {
    "und": [{
      "value": "how to demonstrate multiple themes?. ...",
      "summary": "multiple themes?",
      "format": "filtered_html",
      "safe_value": "themes",
      "safe_summary": "themes..."
    }]
  }
}'

Create a Forum Topic

To create a new Forum post (Drupal calls it a Forum topic):

Request:

curl -i -X POST \
  -H Cookie:SESS02caabc123=ShBy6ue5TTabcdefg \
  -H X-CSRF-Token:w98sdb9udjiskdjs \
  -H Accept:application/json \
  -H content-type:application/json \
  http://example.com/rest/node \
  -d '{
    "type": "forum", 
    "title": "test post?", 
    "language": "und",
    "taxonomy_forums": { "und": "1" },
    "body": {
      "und": [{
        "value" : "This is the full text of the forum post",
        "summary": "this is a test1",
        "format": "full_html"
      }]
    }
  }'

This part…

        "taxonomy_forums": { "und": "1" },

…tells Drupal which forum to post to. The “parent forum” is actually a taxonomy term, not a forum container; nodes carry a taxonomy term on them, to identify which forum they belong to.

If you specify an invalid forum id, you will get this json error response:

406 Not Acceptable
...
{
  "form_errors": {
    "taxonomy_forums][und": "An illegal choice has been detected. Please contact the site administrator.",
    "taxonomy_forums": "Select a forum."
  }
}

Here’s another “create forum topic” request, to a different forum:

curl -i -X POST \
  -H Cookie:SESS02caabc123=ShBy6ue5TTabcdefg \
  -H X-CSRF-Token:w98sdb9udjiskdjs \
  -H Accept:application/json \
  -H content-type:application/json \
  http://example.com/rest/node \
  -d '{
    "type": "forum", 
    "title": "test post #2", 
    "language": "und",
    "taxonomy_forums": { "und": "5" },
    "body": {
      "und": [{
        "value" : "This is a test post. please ignore.",
        "summary": "this is a test1",
        "format": "full_html"
      }]
    }
  }'

Notice the alternate forum id in that request, as compared to the prior one:

 "taxonomy_forums": { "und": "5" } 

Determine the available forums and ID numbers

Step 1: query the vocabulary that corresponds to “forums”:

curl -i -X GET  \
 -H accept:application/json \
 -H Cookie:SESS02caabc123=ShBy6ue5TTabcdefg \
 'http://example.com/rest/taxonomy_vocabulary?parameters\[machine_name\]=forums' 

Example Response:

[{
  "vid": "1",
  "name": "Forums",
  "machine_name": "forums",
  "description": "Forum navigation vocabulary",
  "hierarchy": "0",
  "module": "forum",
  "weight": "-10",
  "uri": "http://myserver/rest/taxonomy_vocabulary/1"
}]

The important part is the “vid” – which is the vocabulary ID.

Step 2: Query the terms for that vocabulary. This gives all forum names and IDs.

curl -i -X GET \
 -H Cookie:SESS02caabc123=ShBy6ue5TTabcdefg \
 -H Accept:application/json \
 -H content-type:application/json \
 'http://example.com/rest/taxonomy_term?parameters\[vid\]=1' 

Example response:

[{
  "tid": "8",
  "vid": "1",
  "name": "Getting Started",
  "description": "",
  "format": null,
  "weight": "0",
  "uuid": "7ff7ce10-0082-46f6-9edd-882410b7c304",
  "depth": 0,
  "parents": ["0"]
}, {
  "tid": "1",
  "vid": "1",
  "name": "General discussion",
  "description": "",
  "format": null,
  "weight": "1",
  "uuid": "dbf914e7-42c2-45f6-b77a-e66a0da72310",
  "depth": 0,
  "parents": ["0"]
}, {
  "tid": "4",
  "vid": "1",
  "name": "Security and Privacy Issues",
  "description": "",
  "format": null,
  "weight": "2",
  "uuid": "7496bfd7-2cb8-4f87-a1e4-f45b1956a01e",
  "depth": 0,
  "parents": ["0"]
}]

The tid in each array element is the value to use in "taxonomy_forums": { "und": "4" } when POSTing a new forum node; tid 4 above corresponds to the “Security and Privacy Issues” forum.
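You can chain the lookup and the create together. Here’s a sketch in bash; it assumes the jq utility is installed, and that the shell variables COOKIE and TOKEN hold the session cookie and CSRF token from the login call described in part 2:

TID=$(curl -s -H "Cookie:$COOKIE" -H Accept:application/json \
  'http://example.com/rest/taxonomy_term?parameters\[vid\]=1' \
  | jq -r '.[] | select(.name=="General discussion") | .tid')

curl -i -X POST \
  -H "Cookie:$COOKIE" \
  -H "X-CSRF-Token:$TOKEN" \
  -H Accept:application/json \
  -H content-type:application/json \
  http://example.com/rest/node \
  -d '{
    "type": "forum",
    "title": "posted via the API",
    "language": "und",
    "taxonomy_forums": { "und": "'"$TID"'" },
    "body": {
      "und": [{
        "value": "created after looking up the forum tid by name",
        "format": "full_html"
      }]
    }
  }'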

Delete a node

Deleting a node means removing an article, a forum topic (post), a comment, etc.

The request:

curl -i -X DELETE \
 -H Cookie:SESS02caabc123=ShBy6ue5TTabcdefg \
 -H X-CSRF-Token:w98sdb9udjiskdjs \
 -H Accept:application/json \
 http://example.com/rest/node/8

Example response:

  [true]

Weird response, but ok.

By the way, if the cookie and token have timed out, any of these create, update, or delete calls may get this response:

["Access denied for user anonymous"]. 

There is no explicit notice that the cookie has timed out. The remedy is
to re-authenticate and submit the request again.
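If you’re scripting these calls, it’s worth checking for that response and retrying. A minimal sketch in bash, assuming jq is installed, and reusing the login request from part 2; COOKIE and TOKEN are shell variables holding the session cookie and CSRF token:

response=$(curl -s -X DELETE \
  -H "Cookie:$COOKIE" \
  -H "X-CSRF-Token:$TOKEN" \
  -H Accept:application/json \
  http://example.com/rest/node/8)

if [[ "$response" == *"Access denied"* ]]; then
  # session expired; re-authenticate, then retry once
  login=$(curl -s -X POST \
    -H content-type:application/json \
    -H Accept:application/json \
    http://example.com/rest/user/login \
    -d '{ "username":"YOURUSERNAME", "password":"YOURPASSWORD" }')
  COOKIE=$(echo "$login" | jq -r '.session_name + "=" + .sessid')
  TOKEN=$(echo "$login" | jq -r '.token')
  curl -s -X DELETE \
    -H "Cookie:$COOKIE" \
    -H "X-CSRF-Token:$TOKEN" \
    -H Accept:application/json \
    http://example.com/rest/node/8
fi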

Delete a taxonomy term

Deleting a taxonomy term in the taxonomy vocabulary for forums would imply deleting a forum.

curl -i -X DELETE \
  -H Cookie:SESS02caabc123=ShBy6ue5TTabcdefg \
  -H X-CSRF-Token:w98sdb9udjiskdjs \
  -H Accept:application/json \
  http://example.com/rest/taxonomy_term/7

Create a taxonomy term

Creating a taxonomy term in the taxonomy vocabulary for forums would imply creating a forum.

curl -i -X POST \
  -H Cookie:SESS02caabc123=ShBy6ue5TTabcdefg \
  -H X-CSRF-Token:w98sdb9udjiskdjs \
  -H Accept:application/json \
  -H content-type:application/json \
  http://example.com/rest/taxonomy_term \
  -d '{
    "vid": "1",
    "name": "Another Forum on the site",
    "description": "",
    "format": null,
    "weight": "10"
  }'

The uuid and tid for the new term will be generated for you. Unfortunately, the tid is not returned in the response for you to reference; you need to query to find it. Use the name of the forum you just created:

Request:

curl -i -X GET \
  -H Cookie:SESS02caabc123=ShBy6ue5TTabcdefg \
  -H Accept:application/json \
  'http://example.com/rest/taxonomy_term?parameters\[name\]=Another+Forum+on+the+site'

Example Response:

[{
  "tid": "36",
  "vid": "1",
  "name": "Another Forum on the site",
  "description": "",
  "format": null,
  "weight": "10",
  "uuid": "dcbe0118-c160-4556-b0b6-1813241bb851",
  "uri": "http://example.com/rest/taxonomy_term/36"
}]

Make sure you use unique names for these taxonomy terms.

Create a new user

curl -i -X POST \
    -H Cookie:SESS02caabc123=ShBy6ue5TTabcdefg \
    -H X-CSRF-Token:w98sdb9udjiskdjs \
    -H accept:application/json \
    -H content-type:application/json \
    http://example.com/rest/user -d '{
      "name" : "TestUser1",
      "mail" : "Dchiesa+Testuser1@apigee.com",
      "pass": "secret123",
      "timezone": "America/Los_Angeles", 
      "field_first_name": {
          "und": [{ "value": "Dino"}]
      },
      "field_last_name": {
          "und": [{ "value": "Chiesa"}]
      }
   }'

Response:

{"uid":"7","uri":"http://example.com/rest/user/7"}

Resend the welcome email

curl -i -X POST \
    -H Cookie:SESS02caabc123=ShBy6ue5TTabcdefg \
    -H X-CSRF-Token:w98sdb9udjiskdjs \
    -H accept:application/json \
    -H content-type:application/json \
    http://example.com/rest/user/7/resend_welcome_email -d '{}'

Reset a user password

curl -i -X POST \
    -H Cookie:SESS02caabc123=ShBy6ue5TTabcdefg \
    -H X-CSRF-Token:w98sdb9udjiskdjs \
    -H accept:application/json \
    -H content-type:application/json \
    http://example.com/rest/user/7/password_reset -d '{}'

Update a user

This shows how to set the user status to 0, in order to deactivate the user.

curl -i -X PUT \
    -H Cookie:SESS02caabc123=ShBy6ue5TTabcdefg \
    -H X-CSRF-Token:w98sdb9udjiskdjs \
    -H accept:application/json \
    -H content-type:application/json \
    http://example.com/rest/user/6 -d '{
      "status" : "0"
   }'

You could of course update any of the other user attributes as well.


That ought to get you started with creating and updating things in Drupal via the REST Server.

Remember, the basic rules are:

  • pass the cookie with each REST query call
  • pass the cookie and the X-CSRF-Token when doing a create, update, or delete
  • have fun out there!

Good luck. Contact me here if these examples are unclear.

Using the Drupal Services module for REST access to entities, part 2

Be sure to start with Part 1 of this series.

What’s Going on Here?

To recap: I’ve enabled the Services module in Drupal v7, in order to enable REST calls into Drupal, to do things like:

  • list nodes
  • create entities, like nodes, users, taxonomy vocabularies, or taxonomy terms
  • delete or modify same

Clear? The prior post talks about the preparation. This post talks about some of the actual REST calls. Let’s start with Authentication.

Authentication

These are the steps required to make authenticated calls to Drupal via the Services module:

  1. Obtain a CSRF token.
  2. Invoke the login API, passing the CSRF token.
  3. Get a cookie and a new token in the response. The cookie is of the form {{Session-Name}}={{Session-id}}; both the session name and id are returned in the json payload as well, along with a new CSRF token.
  4. Pass the cookie and the new token on all subsequent requests.
  5. Logout when finished, via POST /user/logout.

The Actual Messages

OK, Let’s look at some example messages.

Get a CSRF Token

Request:

curl -i -X POST -H content-type:application/json \
  -H Accept:application/json \
  http://example.com/rest/user/token

The content-type header is required, even though there is no payload sent with the POST.

Response:

HTTP/1.1 200 OK
Cache-Control: no-cache, must-revalidate, post-check=0, pre-check=0
Content-Type: application/json
Etag: "1428629440"
Expires: Sun, 19 Nov 1978 05:00:00 GMT
Last-Modified: Fri, 10 Apr 2015 01:30:40 GMT
Vary: Accept
Content-Length: 55
Accept-Ranges: bytes
Date: Fri, 10 Apr 2015 01:30:51 GMT
Connection: keep-alive

{"token":"woalC7A1sRzpnzDhp8_rtWB1YlXBRalWMSODDX1yfUI"}

That’s a token, surely. I haven’t figured out what I need that token for. It’s worth pointing out that you get a new CSRF token when you log in; see below. So I don’t do anything with this token, and I never use the call to /rest/user/token.

Login

To do anything interesting, your app needs to log in, aka authenticate. After login, your app can invoke regular transactions, using the information returned in the response. Let’s look at the messages.

Request:

curl -i -X POST -H content-type:application/json \
    -H Accept:application/json \
    http://example.com/rest/user/login \
    -d '{ 
     "username" : "YOURUSERNAME",
     "password" : "YOURPASSWORD"
    }'

Response:

HTTP/1.1 200 OK
Content-Type: application/json
Expires: Sun, 19 Nov 1978 05:00:00 GMT
Last-Modified: Fri, 10 Apr 2015 01:33:35 GMT
Set-Cookie: SESS02caabc123=ShBy6ue5TTabcdefg; expires=Sun, 03-May-2015 05:06:55 GMT; path=/; domain=.example.com; HttpOnly
...
{
  "sessid": "ShBy6ue5TTabcdefg",
  "session_name": "SESS02caabc123",
  "token": "w98sdb9udjiskdjs",
  "user": {
    "uid": "4",
    "name": "YOURUSERNAME",
    "mail": "YOUREMAIL@example.com",
    "theme": "",
    "signature": "",
    "signature_format": null,
    "created": "1402005877",
    "access": "1426280563",
    "login": 1426280601,
    "status": "1",
    "timezone": null,
    "language": "",
    "picture": "0",
    "data": false,
    "uuid": "3e1e948e-940e-4a05-bd7a-267c6671c11b",
    "roles": {
      "2": "authenticated user",
      "3": "administrator"
    },
    "field_first_name": {
      "und": [{
        "value": "Dino",
        "format": null,
        "safe_value": "Dino"
      }]
    },
    "field_last_name": {
      "und": [{
        "value": "Chiesa",
        "format": null,
        "safe_value": "Chiesa"
      }]
    },
    "metatags": [],
    "rdf_mapping": {
      "rdftype": ["sioc:UserAccount"],
      "name": {
        "predicates": ["foaf:name"]
      },
      "homepage": {
        "predicates": ["foaf:page"],
        "type": "rel"
      }
    }
  }
}

There are a few data items that are of particular interest.

Briefly, in subsequent calls, your app needs to pass back the cookie specified in the Set-Cookie header. BUT, if you’re coding in JavaScript or PHP or C# or Java or whatever, you don’t need to deal with managing cookies, because the cookie value is also contained in the JSON payload. The cookie has the form {SESSIONNAME}={SESSIONID}, and those values are provided right in the JSON. With the response shown above, subsequent GET calls need to specify a header like this:

Cookie: SESS02caabc123=ShBy6ue5TTabcdefg

Subsequent PUT, POST, and DELETE calls need to specify the Cookie as well as the CSRF header, like this:

Cookie: SESS02caabc123=ShBy6ue5TTabcdefg
X-CSRF-Token: w98sdb9udjiskdjs

In case it was not obvious: the value to send in the X-CSRF-Token header is the value of the “token” property in the json response. Also: your values for the session name, session id, and token will be different from the ones shown here. Just sayin.
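For example, from a bash script you might capture the login response and build those headers like this (a sketch; it assumes the jq utility is installed):

login=$(curl -s -X POST \
  -H content-type:application/json \
  -H Accept:application/json \
  http://example.com/rest/user/login \
  -d '{ "username":"YOURUSERNAME", "password":"YOURPASSWORD" }')

COOKIE=$(echo "$login" | jq -r '.session_name + "=" + .sessid')
TOKEN=$(echo "$login" | jq -r '.token')

# subsequent calls then pass: -H "Cookie:$COOKIE" -H "X-CSRF-Token:$TOKEN"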

Get All Nodes

OK, the first thing to do once authenticated: get all the nodes. Here’s the request to do that:

Request:

curl -i -X GET \
  -H Cookie:SESS02caabc123=ShBy6ue5TTabcdefg \
  -H Accept:application/json \
  http://example.com/rest/node

The response gives up to “pagesize” elements; the page size defaults to 20 on my system. You can append a query parameter, for example ?pagesize=30, to increase this. To repeat: you do not need to pass the X-CSRF-Token header for this query. The CSRF token is required for update operations (POST, PUT, DELETE), not for GET.

Here’s the response:

[{
  "nid": "32",
  "vid": "33",
  "type": "wquota3",
  "language": "und",
  "title": "get weather for given WOEID (token)",
  "uid": "4",
  "status": "1",
  "created": "1425419882",
  "changed": "1425419904",
  "comment": "1",
  "promote": "0",
  "sticky": "0",
  "tnid": "0",
  "translate": "0",
  "uuid": "9b0b503d-cdd2-410f-9ba6-421804d25d4e",
  "uri": "http://example.com/rest/node/32"
}, {
  "nid": "33",
  "vid": "34",
  "type": "wquota3",
  "language": "und",
  "title": "get weather for given WOEID (key)",
  "uid": "4",
  "status": "1",
  "created": "1425419882",
  "changed": "1425419904",
  "comment": "1",
  "promote": "0",
  "sticky": "0",
  "tnid": "0",
  "translate": "0",
  "uuid": "56d233fe-91d4-49e5-aace-59f1c19fbb73",
  "uri": "http://example.com/rest/node/33"
}, {
  "nid": "31",
  "vid": "32",
  "type": "cbc",
  "language": "und",
  "title": "Shorten URL",
  "uid": "4",
  "status": "0",
  "created": "1425419757",
  "changed": "1425419757",
  "comment": "1",
  "promote": "0",
  "sticky": "0",
  "tnid": "0",
  "translate": "0",
  "uuid": "8f21a9bc-30e6-4232-adf9-fe705bad6049",
  "uri": "http://example.com/rest/node/31"
}
...
]

This is an array, which some people say should never be returned by a REST resource. (Because: what if you wanted to add a property to the response? Where would you put it?) But anyway, it works. You don’t get ALL the nodes; you get only a page’s worth. And you don’t get all the details for each node. But you do get the URI for each node, which is your way to get its full details.

What if you want the next page? According to my reading of the scattered Drupal documentation, these are the query parameters accepted for queries on all entity types:

  • (string) fields – A comma separated list of fields to get.
  • (int) page – The zero-based index of the page to get, defaults to 0.
  • (int) pagesize – Number of records to get per page.
  • (string) sort – Field to sort by.
  • (string) direction – Direction of the sort. ASC or DESC.
  • (array) parameters – Filter parameters array such as parameters[title]="test"

So, to get the next page, send the same request again, adding the query parameter page=1 (remember, the page index is zero-based).
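For example, to fetch the second page, 30 nodes at a time:

curl -i -X GET \
  -H Cookie:SESS02caabc123=ShBy6ue5TTabcdefg \
  -H Accept:application/json \
  'http://example.com/rest/node?pagesize=30&page=1'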

Get One Node

This is easy.

Request:

curl -i -X GET \
  -H Cookie:SESS02caabc123=ShBy6ue5TTabcdefg \
  -H Accept:application/json \
  http://example.com/rest/node/75

Response:

HTTP/1.1 200 OK
Content-Type: application/json

...
{
  "vid": "76",
  "uid": "4",
  "title": "Embedding keys securely into the app",
  "log": "",
  "status": "1",
  "comment": "2",
  "promote": "0",
  "sticky": "0",
  "vuuid": "57f3aade-d923-4bb5-8861-1d2c160a9fd5",
  "nid": "75",
  "type": "forum",
  "language": "und",
  "created": "1427332570",
  "changed": "1427332570",
  "tnid": "0",
  "translate": "0",
  "uuid": "026c029d-5a45-4e10-8aec-ac5e9824a5c5",
  "revision_timestamp": "1427332570",
  "revision_uid": "4",
  "taxonomy_forums": {
    "und": [{
      "tid": "89"
    }]
  },
  "body": {
    "und": [{
      "value": "Suppose I have received my key from Healthsparq.  Now I would like to embed that key into the app that I'm producing for the mobile device. How can I do this securely, so that undesirables will not be able to find the keys or sniff the key as I use it?",
      "summary": "",
      "format": "full_html",
      "safe_value": "

Suppose I have received my key from Healthsparq. Now I would like to embed that key into the app that I'm producing for the mobile device. How can I do this securely, so that undesirables will not be able to find the keys or sniff the key as I use it?

\n", "safe_summary": "" }] }, "metatags": [], "rdf_mapping": { "rdftype": ["sioc:Post", "sioct:BoardPost"], "taxonomy_forums": { "predicates": ["sioc:has_container"], "type": "rel" }, "title": { "predicates": ["dc:title"] }, "created": { "predicates": ["dc:date", "dc:created"], "datatype": "xsd:dateTime", "callback": "date_iso8601" }, "changed": { "predicates": ["dc:modified"], "datatype": "xsd:dateTime", "callback": "date_iso8601" }, "body": { "predicates": ["content:encoded"] }, "uid": { "predicates": ["sioc:has_creator"], "type": "rel" }, "name": { "predicates": ["foaf:name"] }, "comment_count": { "predicates": ["sioc:num_replies"], "datatype": "xsd:integer" }, "last_activity": { "predicates": ["sioc:last_activity_date"], "datatype": "xsd:dateTime", "callback": "date_iso8601" } }, "cid": "0", "last_comment_timestamp": "1427332570", "last_comment_name": null, "last_comment_uid": "4", "comment_count": "0", "name": "DChiesa", "picture": "0", "data": null, "forum_tid": "89", "path": "http://example.com/content/embedding-keys-securely-app" }

As you know, in Drupal a node can represent many things. In this case, this node is a forum post. You can see that from "type": "forum" in the response.

Querying for a specific type of node

Request:

curl -i -X GET \
  -H Cookie:SESS02caabc123=ShBy6ue5TTabcdefg \
  -H Accept:application/json \
  'http://example.com/rest/node?parameters\[type\]=forum'

Request:

curl -i -X GET \
  -H Cookie:SESS02caabc123=ShBy6ue5TTabcdefg \
  -H Accept:application/json \
  'http://example.com/rest/node?parameters\[type\]=faq'

Request:

curl -i -X GET \
  -H Cookie:SESS02caabc123=ShBy6ue5TTabcdefg \
  -H Accept:application/json \
  'http://example.com/rest/node?parameters\[type\]=article'

The response you get from each of these has the same shape as the response to the non-parameterized query for all nodes, filtered by type. The backslash-escaping of the square brackets is needed because curl treats square brackets as URL-globbing characters; you could instead pass curl the -g (--globoff) option. If you’re sending these requests from an app, you don’t need to escape the brackets at all.
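As far as I can tell, the filter parameters can also be combined, in which case they are ANDed together. For example, to ask for forum nodes authored by uid 4:

curl -i -X GET \
  -H Cookie:SESS02caabc123=ShBy6ue5TTabcdefg \
  -H Accept:application/json \
  'http://example.com/rest/node?parameters\[type\]=forum&parameters\[uid\]=4'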

Logout

Request:

curl -i -X POST \
    -H content-type:application/json \
    -H Accept:application/json \
    -H Cookie:SESS02caabc123=ShBy6ue5TTabcdefg \
    -H X-CSRF-Token:xxxx \
    http://example.com/rest/user/logout -d '{}'

Notes: the values of the Cookie header and the X-CSRF-Token header are obtained from the response to the login call! Also, obviously, don’t call logout until you’re finished making API calls. After the logout call, the cookie and CSRF token become invalid; discard them.

Response:

HTTP/1.1 200 OK
...
[true]

Pretty interesting as a response.

More examples, covering creating things and deleting things, in the next post in this series.

Using the Drupal Services module for REST access to entities, part 1


This is Part 1. See also, Part 2 and Part 3.

I’m not an expert on Drupal, but I do have some extensive experience designing and using APIs. (I work for Apigee.)

Recently I’ve been working with Drupal v7, and in particular, getting Drupal to expose a REST interface that would allow me to program against it. I want to write apps that read forum posts, write forum posts, read or post pages, create users, and so on.

Drupal is a server that manages entities, right? This function is primarily exposed via a web UI, but that UI is just a detail. Drupal should be able to expose an API that is similarly capable. Should be!

The bazaar is alive and well with Drupal. It seems that regardless of what you want to do with Drupal, there are 13 different ways to do it. Exposing Drupal entities as resources in a RESTful interface is no different. There are numerous modules designed to help with this task, some of which are complementary to each other, some of which overlap, and most of which are poorly documented. Every module has multiple versions, and every module works with multiple versions of Drupal. So figuring out the best way to proceed, for a Drupal novice like me, is not easy.

Disclaimer: What follows is what I’ve found. If a Drupal expert reads this and thinks, “Dude, you’re way off!” I am very willing to accept constructive suggestions. I’d like to know the best way to proceed. This is what I tried.

The Services Module

I used the Services module. There are other options – restws is one of them. I didn’t have a firm set of criteria for choosing one over the other, except that I fell into the pit of success more easily with the Services module. It seems to be more popular, and more examples for it turned up in my searching.

Services 3.0 is now available. … Note that currently there is no upgrade path for Services 3, and it is not backwards compatible with older implementations of the API. Therefore some existing modules like JSON Server and AMFPHP will not work with it. …

Not that there aren’t problems with it. The lack of backwards compatibility in a programmable interface is a really bad sign (see the blockquote above); it reflects poor planning on the part of the designers of that module. And then there is the lack of clear documentation for how to do most things.

Setup

The first thing: you need to obtain and activate the Services module. There’s a straightforward guide for doing this. I installed the module, then went to the Admin panel to ensure the REST server was enabled. A screenshot is below.

[Screenshot: the Services admin panel, showing the REST server enabled]

More Setup

Next, you need to create a REST endpoint. To do so, still logged in as Admin, select Structure > Services. Click Add. Then specify rest, REST, and rest. Another screenshot:

[Screenshot: adding the REST endpoint under Structure > Services]

That’s it. Your Drupal server is now exposing REST interfaces. You then need to click “resources” to enable access to specific things like users, nodes, taxonomy vocabularies, taxonomy terms, and so on. And you’re all set.
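A quick smoke test at this point: issue an anonymous request against the node index. Depending on the permissions you granted, you’ll get back either a page of nodes or an access-denied message, but either way the response proves the endpoint is alive:

curl -i -H Accept:application/json http://example.com/rest/node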

Retrieving Nodes is Easy

Once you have the REST server enabled, getting an index of the nodes in a Drupal system is probably the most basic thing any programmer will want to do. Beyond that: creating a new node (posting a page or article), creating a user, and so on. For the Services module, there is a nice page that gives examples for this sort of basic thing. I’m not really a fan of the layout of that documentation page; it seems to be all over the place, providing basic REST principles, describing REST testing tools, and then finally giving samples of messages. Those things seem like they all belong on separate, hyperlinked pages. But again, it’s the bazaar, and someone contributed that doc all by himself. If I think it could be better, I am welcome to edit that page, I guess.

Here’s one example request from that page:

POST http://services.example.com/rest/user/register
    Content-Type: application/json
    {
        "name":"services_user_1",
        "pass":"password",
        "mail":"services_user_1@example.com"
    }

This is something I can understand. Many of the other doc pages give jQuery example code. Ummmm… I don’t write in jQuery. Why not just show the messages that need to be sent to the Drupal server, and let the jQuery people figure out how to type in their ajax methods?
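In that spirit, here is that same register request, expressed as a curl command in the style I’ll use later in this series (the host name comes from the example above):

curl -i -X POST \
  -H content-type:application/json \
  http://services.example.com/rest/user/register \
  -d '{
    "name": "services_user_1",
    "pass": "password",
    "mail": "services_user_1@example.com"
  }'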

The basic examples given there are good, but you’ll notice there is nothing there about authentication – nothing that shows how a developer needs to authenticate to Drupal via the Services module. That ought to be another hyperlinked page, no?

Authentication

There are multiple steps involved to authenticate:

  1. Obtain a CSRF token.
  2. Invoke the login API, passing the CSRF token.
  3. Get a cookie and a new token in the response. The cookie is of the form {{Session-Name}}={{Session-id}}; both the session name and id are returned in the json payload as well, along with a new CSRF token.
  4. Pass the cookie and the new token on all subsequent requests.
  5. Logout when finished, via POST /user/logout.

More detail on all of this in the next post.

Pretty psyched about Swagger Editor for APIs

I’m pretty excited about the Swagger editor. But to understand why, you first need to know what Swagger is all about.

Let’s take a step back. As of August 2014, total activity on smartphones and tablets accounted for ~60% of digital media time spent in the U.S. This unabated growth in mobile is driving the growth in enabling technologies: tools for developing apps, managing app communications, measuring app and data usage, and analyzing usage and predicting behavior based on that usage. APIs are a key connective technology: innovative mobile apps use APIs to access data and services from companies like Uber or Twitter, or from government bodies like the State of Washington. APIs provide the linkage.

APIs are not solely about mobile apps. They can be used to connect “any app” to “any service”; indeed, this website uses code running on the server to invoke the Twitter API and display tweets on the right-hand side of this blog. But mobile is the driver. The web is not driving the growth in APIs or in the other enabling technologies, and neither is the Internet-of-Things. In the 2000’s it was the web. Tomorrow it will be IoT. Today, it is mobile.

OK, so what is Swagger? Swagger is a way to define and describe APIs: a language for stating exactly what an API offers. The description language is analogous to the Interface Definition Languages going back to Sun’s RPC IDL, CORBA IDL, DCE IDL, or SOAP’s WSDL. Many of you reading this won’t recognize any of those names; it doesn’t matter. We don’t use most of those technologies any longer; more importantly, we don’t use the metaphors those technologies imply: function shipping, remote procedure call, or distributed objects. While we have moved away from the tight coupling of binary protocols and toward technologies like REST, JSON, and XML that enable more loosely-coupled interactions, we still recognize that it’s helpful to be able to formally describe programmable interfaces.

OK, so Swagger is, at its heart, a way to describe a RESTful API. Many of you are Java developers and may be familiar with Swagger Annotations, which allow you to mark up JAX-RS server application code and then generate a Swagger definition from the implementation. Something like doxygen. This is cool, but it is sort of a backwards approach. Getting the description of the API from the implementation is analogous to getting the blueprint for a building by taking pictures of the finished framing. Ideally you’d like to go in the other direction: first build the design (or blueprint, if you will) of the API, and then generate the implementation. My friend and colleague Marsh Gardiner discussed the design-first approach last year.

This is what Swagger can do. How does one produce a Swagger document? Well, if you’re an old codger like me, you might resort to using a text editor like emacs and its yaml-mode to hand-code the YAML. But a much easier approach is to use the Swagger Editor.
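To give a flavor of it, here is a minimal hand-written Swagger 2.0 document in YAML; the API, path, and field values are invented for illustration:

swagger: "2.0"
info:
  title: Weather API
  version: "1.0.0"
host: api.example.com
basePath: /v1
paths:
  /weather/{woeid}:
    get:
      summary: get weather for the given WOEID
      parameters:
        - name: woeid
          in: path
          required: true
          type: string
      responses:
        "200":
          description: the current weather for that location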

The API description is basically “a model” of the API. And with that model, one can do all sorts of interesting things: generate a client-side library in any of various languages, generate a server-side implementation stub, generate a test harness, generate documentation. In fact the Swagger project has had a doc-gen capability, named swagger-ui, since the early days of the project.

So what’s the upshot? Better tooling around APIs – tooling including Swagger Editor and Swagger UI, as well as an API management layer as provided by Apigee Edge (disclaimer: I work for Apigee!) – means it is easier for companies to expose capabilities as easy-to-consume APIs, and easier for developers to code against those APIs to build compelling experiences that run on mobile devices. So I’m pretty excited about the new tooling, and I am even more excited about the integration we will soon see between these new modeling tools and the existing API management tools already available.

Good stuff!

Loving the simple API Design Guidelines from GoCardless

See here.

I like this for several reasons:

  • I like the simplicity and clarity of the guidelines.
  • I agree with all of their guidelines; nothing feels controversial there. Such as: Use JSON, and pretty print it. Be explicit with error messages. Use plural nouns for containers. Etc.
  • I like the fact that it is open sourced for the world to see, share and fork.

ps: My employer, Apigee, is still looking to hire SEs, and other API geeks.

SAML – the standard that wasn’t


SAML – the Security Assertion Markup Language – is quite successful. SAML was born in 2002 out of OASIS, the somnolent standards body that enjoyed its heyday in the 2000’s, forming many of the XML-oriented standards like WS-BPEL, UDDI, UBL, and ODF. Today SAML enjoys success satisfying a key need in enterprises: browser-based single sign-on across origins. Sign into www.mycompany.com, then later visit www.serviceprovider.com, and get automatically authorized. The benefit: people type in their passwords just once.

SAML wasn’t designed for just that problem, or anyway, not for that specific problem. SAML was designed to address the general problem of exchanging claims securely. The summary on the first page of the spec says that SAML “defines the syntax and semantics for XML-encoded assertions about authentication, attributes, and authorization, and for the protocols that convey this information.” Hence the name, “Security Assertion Markup Language”. But in actual use, SAML is heavily oriented towards browser-based SSO.

In SAML, the claims or assertions are statements about people. When people (via apps or browser pages) make requests of systems – like “let me see this file”, or “let me transfer funds” – the system that receives this request can use trusted claims about the requester to make authorization decisions. The key is that the system needs to trust the claims, and the claims need to be relevant.

An example: if I go to the grocery store, I can present my debit card to the checkout person to pay for my groceries. The card is basically a set of claims “asserted” by a bank about me:

  1. that the person named on the card is a customer of the bank,
  2. that the person named is authorized to use a particular account.

This set of assertions is also decorated with some other information, like validity dates and the author of the claims. The author of the claims is a bank, and that bank is affiliated with a card payment network, in my case, Visa. Also: my debit card expires in a given month and year. The implicit rule is that all parties agree the claims presented in the plastic card do not hold after that date. The card is good if the merchant trusts the bank and Visa, and if the dates are valid. Some merchants want to ensure that the person named on the card is the same as the person presenting it, so they’ll ask for a government-issued picture ID bearing the same name.

The SAML Analogue


SAML works in a similar way, except the set of claims is formatted digitally, in an XML document, rather than on a plastic card. The set of claims enclosed in a SAML token is general – it can be any set of claims about a person, or “subject”. Claims such as “Dino is male”, or “Dino has no tattoos”, or “Dino is of sound mind and body” are all acceptable. But more often the claims are statements that are relevant to information-processing organizations, such as “Dino is an employee of XYZ Corp”, plus some detailed information such as “Dino’s email address is Dino@xyzcorp.com”, “Dino is a member of the Aviation group in XYZ”, and so on. In the general case these are claims about a person’s identity; SAML calls them “Attributes” of the subject. Statements that are not about the particular person don’t belong in SAML. “It is sunny today” may or may not be true, but it is completely unrelated to me, the person in question, aka the “subject”, and therefore is not suitable as an attribute in a SAML assertion about me.

Such claims about me could be used by an organization or company to decide whether to grant me service when I request it. If my company, XYZ Corp, has a partnership agreement with another company, LMN Corp, then when I present my claims to LMN along with my request, LMN can take a decision on whether to grant it.

How Trustworthy are your claims?

The claims in a SAML assertion are just statements, coded in an XML document. Though SAML is a particularly florid and ornate dialect, it’s still XML, and anyone could create such a document. For a system to be able to rely on that information, to trust that information, there must be some assurance that the presented claims are bona fide and originate from a trusted source, and also that they are valid at a given moment in time. At one point, “Dino is in the eighth grade” was a true statement about me, but that statement is no longer true. SAML uses digital signatures based on public-key cryptography to give assurance about the author of the claims, and explicit time windows on claims (e.g., NotBefore and NotOnOrAfter) to circumscribe their validity.
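To make that concrete, here is a skeletal SAML 2.0 assertion, heavily trimmed for illustration; a real assertion carries a ds:Signature element and quite a bit more ceremony:

<saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
    ID="_abc123" Version="2.0" IssueInstant="2015-04-10T01:30:00Z">
  <saml:Issuer>https://idp.xyzcorp.com</saml:Issuer>
  <!-- the XML digital signature (ds:Signature) would appear here -->
  <saml:Subject>
    <saml:NameID>Dino</saml:NameID>
  </saml:Subject>
  <saml:Conditions NotBefore="2015-04-10T01:30:00Z"
                   NotOnOrAfter="2015-04-10T01:35:00Z"/>
  <saml:AttributeStatement>
    <saml:Attribute Name="mail">
      <saml:AttributeValue>Dino@xyzcorp.com</saml:AttributeValue>
    </saml:Attribute>
  </saml:AttributeStatement>
</saml:Assertion>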


The “Relying Party” (RP) examining a SAML assertion must verify the signature on the XML document, to ensure that the claims can be trusted. The relying party must also evaluate the time windows on the claims. And then, finally, the RP must evaluate the claims themselves. It may be that “Dino is a member of the recreation committee” does not grant me permission to see the early draft of the company’s 10-K filing. On the other hand, if I am a senior director at the auditing firm, maybe “Dino is an employee at XYZ Auditors” and “Dino is a senior director” together are good enough claims to allow me to see or edit the document.

Simple in Concept, Complex in Execution

SAML is simple enough in principle. I’ve explained the broad strokes here, in just a few paragraphs. Of course, it builds on a large stack of technologies, starting with XML, XML Schema, XML namespaces, URIs, XML digital signatures, and X.509. That alone is a daunting set of technologies, though there is some relief in the maturity of the relevant specifications.

But the details of SAML itself have led to additional complexity. First, the SAML 2.0 core spec is 86 pages; even so, it is not self-contained. One example: SAML has an element called AuthnContextClassRef. I’m guessing this means “Authentication Context Class Reference”. For those of you scoring at home, that’s four nouns in a row. What exactly is this thing?

Helpfully, the OASIS spec defines this thing as

A URI reference identifying an authentication context class that describes the authentication context declaration that follows.

All clear? We now interrupt this essay to present a completely unrelated Dilbert comic.

[Dilbert comic]

In addition, the SAML spec document suggests, “See the Authentication Context specification [SAMLAuthnCxt] for a full description of authentication context information.” That document is itself an additional 70 pages. Ready to dive in?

This kind of complexity and standards-speak led, even early on in the life of SAML, to complaints of impracticality from the people who had hoped to be able to use it. As early as 2003, just a few months after SAML 1.0 was launched, IBM, one of the original authors of the spec, was employing its partners to bravely assert that the idea that SAML was complex was a myth.

[Image: the wizard]

I can hear the wizard inveighing: Pay no attention to that AuthnContextClassRef!!

But the complaints about complexity were not academic. They were based on real-world attempts to get disparate implementations of “the standard” to interoperate. Even today, connecting an Identity Provider and a Relying Party via SAML is a challenge worthy of a platoon of IBM consultants. Have we got a mismatch in the AuthnContextClassRef? Well, we’re gonna have to figure out how to persuade the Relying Party to allow it, or to persuade the IdP to provide a different one. Have you got the wrong NameID Format? Transient, Persistent, or Unspecified? Which side needs to give ground in this negotiation?

That’s what I mean when I call SAML “the standard that wasn’t.” It’s a standard, all right, but there are so many different options that despite the rigor of the specification, getting compliant systems to interoperate is still a huge challenge. Despite the challenges, the standard IS valuable – it works mostly, and it solves specific problems that many companies have. But it isn’t automatic.

Lessons for History

SAML is designed to address much more than browser-based single sign-on. But the lion’s share of adopters use SAML for just that, and only that.

There’s a lesson here regarding over-reach of standards: SAML could have been simpler, quicker to get adopted, and easier to use, had its designers restricted their design goals to addressing what 90% of people use it for today, anyway.

Why Bother?

But why am I even talking about SAML? My passion and intention is to work on APIs and enable new interconnections. That’s why I’m at Apigee today. APIs mean “SOAP on steroids”, or if you like, “all the benefits of SOAP without that unsightly residue”. They mean getting better connections, faster, and allowing new customer experiences, better mobile apps, better connections between customers and companies. So if I am all about connecting systems with APIs, why do I care about SAML? Have I been sucked into a time-portal and warped back into 2007?

Ah, but no! See, the thing with large companies is that they move deliberately. Many still use SAML, and still need any systems they install to integrate with their SAML-based enterprise identity system. So if I want to work with enterprises, helping them adopt APIs to supercharge their businesses, I need to get SAML working with the various web apps that enable API management and adoption. Get SAML integration done, and then the enterprise can innovate with APIs. See?

Comparing Graphite and Flot

I produced a dashboard with charts generated by Graphite, using its render API, and I kinda liked the results:

[Chart: sample TP95 chart, rendered by Graphite]

But then I thought, I wonder if I could produce the same dashboard from the raw data, using jQuery Flot? And the answer is, of course:

[Chart: the same TP95 chart, rendered by jQuery Flot]

Which do you prefer? I kinda like the Flot version better. As a developer, I have finer control over the look of the chart. This means, for one thing, the fonts are nicer. Also, I can move the title around and style things differently. And I can make the chart interactive, too. Using Graphite involves significantly less labor, though. Less custom code.
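For reference, pulling a chart out of Graphite’s render API is a one-liner. This sketch assumes a Graphite host and a metric name:

curl -o tp95.png \
  'http://graphite.example.com/render?target=api.tp95&from=-24hours&width=800&height=300&title=TP95&format=png'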

I don’t really hate NodeJS

I don’t really hate NodeJS. Yes, a while ago I said I hate NodeJS, but I didn’t really mean it. I was just suffering from unrealistic expectations. I learned to let go of my idea that JavaScript on Mac OS X ought to be as easy as JS on Windows.

Since then I have adopted NodeJS pretty strongly. I use it for all sorts of tasks, from tools that automate Apigee Edge (nod to my employer) to API load generation utilities.

I write more code using NodeJS these days, than in anything else.

Recently I had occasion to write a little utility that computes TP99 for a set of API transactions being managed by Apigee Edge.

What it does is retrieve the transaction records logged to the Edge Analytics database, sort them, and emit the computed TP99 (and TP95, TP90, and TP50) to a Carbon server, which is backing Graphite, which then serves up charts of that data. A sample is below.

[Chart: sample TP95 chart, rendered by Graphite]

For this tool, I chose NOT to use NodeJS, but instead relied on good old Perl. I didn’t want asynchrony, and I did want easy file I/O, pattern matching, and sorting. Also, I wanted it to be maintainable by old-school sysadmins who no doubt have not been following the finer points of using Q’s promises in NodeJS. Perl was the obvious choice. The sort required to compute TP99 runs on 60,000 records or more, and needs to occur every minute, for all transactions logged during that minute. A cron job running a Perl script was perfect for this.
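The hand-off to Graphite is the simple part. Carbon accepts a plaintext protocol: one metric name, value, and Unix timestamp per line, by default on TCP port 2003. A sketch from the shell, with the host and metric name assumed:

echo "apigee.myproxy.tp99 412 $(date +%s)" | nc carbon.example.com 2003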

But recently I wrote another tool… this one automates the provisioning of EC2 instances in AWS, then installs Java and Apigee Edge on them, and configures them into a cluster. I wouldn’t want to do that in Perl or bash. NodeJS was the right tool for that much more complicated job. And of course there are NPM libraries for AWS, and for ssh and scp. Really helpful.

of nodejs and new clothes

A provocative post by Eric Jiang, entitled “The emperor’s new clothes were built with Node.js”, takes aim at the undeserved praise being heaped upon NodeJS. While I think he gets his analysis all right, he is still missing the forest for the trees.

NodeJS has grown the way it has grown for the same reason that Visual Basic and PHP grew the way they did: these things work well, and they help people get things done quickly and relatively easily. And maybe people even have fun doing it. The community support has been critical in all three cases.

Sure, it’d be nice to have multi-threading like Go, a single set of known and blessed libraries like C#, and performance like C. But we don’t have all those things in one package, not yet anyway.

There are weaknesses in NodeJS, just as there are in PHP and in VB. Despite that, JavaScript is effective for many many people, and will continue to be so. NodeJS makes JavaScript much, much more effective with the NPM and the vast set of downloadable modules.

No religion, yo: NodeJS is not always the right choice. Of course it isn’t. I like NodeJS and use it… daily. But I’m learning Go and am enjoying that as well.