It’s that time of year… when people think about exchanging JWT for opaque tokens

Yes, it’s that time of year when people think about RFC7523, which describes how to exchange JWT for opaque OAuth tokens.

Right?

If you’re like me, the waves of acronyms, jargon, and IETF RFCs (see what I did there?) seem to never end. OAuth, JWT, RFC 7523, JTI, claims, RS256, PBKDF2…? I feel your pain.

But there is some good news… here’s something that will help clarify the ideas and use cases around RFC7523. I wrote a quick article, and also created an Apigee Edge API Proxy that implements this for you. It illustrates exactly how to exchange a JWT for an opaque OAuth token, and I even include some commentary in the readme explaining why you’d want to do it. (Spoiler alert: it’s faster to verify opaque OAuth tokens.) All available on the Apigee community site.

The way I think about RFC7523: it is an alternative to the client_credentials “grant type” described in IETF RFC6749, which is the document that defines the OAuth v2.0 Framework.

OK, I hear you saying it: “back up, Dino… What is this client_credentials thing?” Yes, there is an underscore there. The client_credentials grant type is designed to allow a client app to identify itself to a token dispensary. The client says “here’s my ID, and here’s a secret that only I (the client app) should know.” The token dispensary can then look at those two pieces of information, and if they are valid (the client_id is not expired or revoked, and the secret matches), the token dispensary can issue a token. It’s like username + password authentication for a person, but client_credentials is used for identifying a client app. This grant type is mostly useful in server-to-server communications, when one service is being used by another service. BUT, some people use client_credentials grants in their mobile apps, so that the API service can trust that the mobile app is who it claims to be. (There are some problems with this; basically the client_secret needs to be embedded in the client code, therefore it is accessible to hackers, and therefore it is not truly “secret”. We can talk about mitigations for this in a future blog post.)
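
In concrete terms, a client_credentials request is nothing more than a form POST to the token endpoint, something like this (the endpoint and the credentials here are placeholders):

    curl -X POST https://token-dispensary.example.com/oauth/token \
      -u my-client-id:my-client-secret \
      -d grant_type=client_credentials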

So that’s the client_credentials grant type. As I said, RFC7523 is an alternative to the client_credentials grant. Basically, instead of sending in a client_id and client_secret, under the RFC7523 flow (which has the helpful and easy-to-remember moniker of “JSON Web Token (JWT) Profile for OAuth 2.0 Client Authentication and Authorization Grants”, seriously) the client app self-signs a JWT which includes the client_id as the issuer. The app sends that to the token dispensary. The token dispensary verifies the signature, verifies that the client_id is valid, and then issues an opaque OAuth v2.0 token.
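
On the wire, the request-for-token in this flow is also just a form POST; the grant_type is the long URN defined in the RFC, and the signed JWT rides in the assertion parameter. Roughly like this (the endpoint is a placeholder, and the JWT is truncated):

    curl -X POST https://token-dispensary.example.com/oauth/token \
      --data-urlencode "grant_type=urn:ietf:params:oauth:grant-type:jwt-bearer" \
      --data-urlencode "assertion=eyJhbGciOiJSUzI1NiJ9.eyJpc3MiOi..."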

Now, there are some interesting implications to this model. Maybe these are obvious to some of you, but I will state them anyway:

  1. the token dispensary and the client app have to conform to the same JWT signing convention. JWT can be signed with shared-secret (HS256) or with public/private key (RS256). Either way is fine, but the two sides must agree.
  2. regardless of the signing convention, it must be possible for the token dispensary to verify the signature. If HS256 is the agreed convention, this means the token dispensary and the client app must share a secret. (This can be the client_secret, if it has sufficient entropy, or it can be a key derived from it via PBKDF2.) If RS256 is the signing convention, it means the two parties must have a shared trust relationship, where the token dispensary has access to the public key of the client app. Bottom line, there is a little more overhead for you in setting up a JWT-for-opaque-token exchange mechanism if you use RS256: specifically, you need to provision a new RSA public/private keypair for the client, and the client needs to make the public key available to the token dispensary.
  3. the client app needs some extra intelligence, specifically a library that allows it to create a signed JWT. There are myriad options available regardless of the app platform + language you use, so in practice this won’t be an obstacle, but it does mean there will be new code you must include in your client (see the sketch just after this list).
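
About point 3: creating the signed JWT is just a few lines with any decent JOSE library. Here is a minimal sketch in Java using the Nimbus JOSE+JWT library (one option among many); the claim names come from the RFC, while the 180-second lifetime and the choice of HS256 are merely illustrative:

    import java.util.Date;
    import java.util.UUID;
    import com.nimbusds.jose.JWSAlgorithm;
    import com.nimbusds.jose.JWSHeader;
    import com.nimbusds.jose.JWSSigner;
    import com.nimbusds.jose.crypto.MACSigner;
    import com.nimbusds.jwt.JWTClaimsSet;
    import com.nimbusds.jwt.SignedJWT;

    public class TokenRequestJwt {
      // Build and sign the JWT that the client sends as its request-for-token.
      public static String create(String clientId, String tokenEndpoint, String secret)
          throws Exception {
        JWTClaimsSet claims = new JWTClaimsSet.Builder()
            .issuer(clientId)                     // iss = the client_id
            .audience(tokenEndpoint)              // aud = the token dispensary
            .jwtID(UUID.randomUUID().toString())  // jti, handy for one-use enforcement
            .expirationTime(new Date(System.currentTimeMillis() + 180 * 1000L)) // short-lived
            .build();
        JWSSigner signer = new MACSigner(secret); // HS256; the secret needs at least 256 bits
        SignedJWT jwt = new SignedJWT(new JWSHeader(JWSAlgorithm.HS256), claims);
        jwt.sign(signer);
        return jwt.serialize();
      }
    }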

Once you get past those implications and the extra set-up overhead, the model in RFC 7523 is really nice because it’s extensible. That’s because the request-for-token is encapsulated in a JWT, and the JWT itself is extensible. You, as an API designer, can stipulate any arbitrary (custom) claims that clients must include in the JWT, in order to compose a valid request-for-token. And you can include restrictions on the standard claims or custom claims. Some examples:

  1. a proof-of-work string, something like a HashCash string or similar. Including proof-of-work would be a discouragement for bots.
  2. As another example, you can stipulate that the JWT be short lived. Verification of the JWT might include a proviso that rejects tokens that have a lifetime beyond 180 seconds, for example.
  3. you could institute a one-use policy on such JWTs.
  4. you could require a “scopes” claim and validate the strings contained in that claim against the issuer (== client_id).

BTW, the example API Proxy I shared on Github shows how to implement the lifetime and one-use-only controls. (As with everything I publish on github, pull requests are welcomed!) If the inbound JWT that comprises the request-for-opaque-token does not pass these checks, a 401 Unauthorized is sent back.

BTW #2, did you know that Google services like Stackdriver and cloud storage use JWT-for-opaque-token exchange in order to enable service-to-service integration? Google also institutes the lifetime and one-use-only controls. The lifetime of the JWT must be less than 300 seconds.

Say, that reminds me! Speaking of Google, did I mention that Google has acquired Apigee? Yes, I work for Google now! Part of the Apigee team within Google. w00t! I’m pumped, psyched, charged up, amped, and very pleased about this development.

So far, minimal changes for me, except that I got a Chromebook! And yes, I authored this post from that very same device.

As always, I’m interested to hear your feedback on this. Let me know in the comments section.

Finally, I would like to wish all of you a Merry RFC7523 Season; and I wish you many Happy short-lived OAuth Tokens in the new year.

Stackoverflow and the early mover advantage

Browsing HN this morning, I found an interesting piece by Bozho discussing Stackoverflow. Apparently hackernoon wrote a piece entitled “The decline of Stackoverflow” and it got some attention on reddit.

Bozho’s perspective aligns pretty closely with mine. I’m in the top 0.1% overall on SO, but I have not contributed actively in years. Not because I perceived “a decline in the site” whatever that means, but because I got busy with other things, specifically Apigee.

I agree with Bozho that many of the easy questions have been answered. Sure, there are always new questions, and new technologies like Golang or some new iPhone feature, or a new version of Angular, will prompt a new class of questions. But, the basics around .NET GAC, or Java garbage collection, or how to do read-through caches in Java, or what is JSONP…. you know those are already answered.

And the “Early movers” on Stackoverflow – I guess I was among them – have garnered all the top scores, and continue to accrue points as new readers on stackoverflow upvote answers. So even though I haven’t posted a new, popular answer in a while, I still earn points every week. New arrivals to stackoverflow will probably never be able to attain the level of points I have. My point total grows more than theirs, even though I haven’t done any work, and they may be asking and answering new questions diligently. It’s very unequal.

Don’t get me wrong – I think Stackoverflow is super valuable. I agree with Bozho, that it serves its purpose well, which is to provide easily searchable answers to programming questions. But aside from that massive contribution of value to the programming community, if you just look at internet points, the points go to the early movers.

I’m looking forward to watching the first formal debate for the US presidential election, this evening. Coincidentally, last night I watched a Frontline episode from June, entitled Policing the Police – excellent work by Jelani Cobb btw. So in this moment the analogy that comes to my mind, for this phenomenon on stackoverflow, is economic.

Imagine the discovery and settlement of a new continent. The early movers come in and (after eradicating the natives, if any) lay claim to land. They work the land, and maybe expand. Later more people arrive, and they need to rent land from the earliest movers. It’s those early movers that continue to accrue $$ and interest over time. Still more people come in later, and there is no opportunity to gain land. Their option is to work the land for someone else (upvote other questions and answers). I was born in the 1960s, and all the land in the USA was already claimed. Large holders of assets, including land, at that time have continued to benefit. (I am not complaining – I was born white, male, and healthy in one of the richest countries on the earth – I won the lottery at birth.) Or consider a real-estate market in a geographically constrained area like Seattle or San Francisco. Early movers claim land and erect buildings, and later movers (like me) come in and rent space.

On Stackoverflow, it’s really not a big deal if a newcomer is blocked from attaining “internet points”. A newcomer can still benefit from the answers and discussions on the site. They’re digital assets! Sharing more does not decrease the value of the asset. Everyone can benefit. And internet points are worth approximately $0.00. In real life, it’s a huge deal if the top 1% of the population holds 40% of the wealth, and the trend is toward greater inequality. This wealth is tied to assets like land, housing, companies, etc., which act to extract more money from the people who are not in the 1%. Not sustainable, and more importantly, not moral.

I don’t think a massive, sudden transfer of wealth is the answer. That is the phenomenon we are watching in Iraq and Syria, where different parties are grabbing oil fields, or attempting to attain control of entire cities in order to tax the inhabitants. It is what happened when the Vikings invaded what is now France, and grabbed land (establishing Normandy) and started taxing citizens. Many people die in these sudden transfers of wealth. There is a great benefit in stability – it means people generally die of natural causes rather than war wounds.

On the other hand, substantive change is urgently necessary. Gradual, thoughtful, methodical change, ideally. Of the two candidates most likely to win the US presidential election, neither appear to be interested in changing things very much. Too bad.

Drupal 7, #states, and mutually exclusive checkboxes

This post will be a bit techy. I confronted and solved a minor problem yesterday, and in the spirit of the internet, thought I’d share the solution, in case anyone else tries something similar.

This is about Drupal forms, and specifically within forms, the #states capability, which is a way that form designers can tell Drupal to do jQuery magic things on the form elements, enabling or disabling some of them based on the state or value of others.

The typical example is a checkbox that, when checked, will either enable (clearing the disabled attribute) or make visible (css ‘display: block’) a dependent textbox. Simple enough, right? And for that kind of simple case, it works well.

Drupal’s Forms API is described here, and the related drupal_process_states here.

This is what it looks like to configure a Form in Drupal:
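
(A sketch; the field names are invented for illustration.)

    // in the form builder function: a checkbox, and a textfield that should
    // be visible only while the checkbox is checked
    $form['send_notification'] = array(
      '#type' => 'checkbox',
      '#title' => t('Send a notification'),
    );
    $form['notification_email'] = array(
      '#type' => 'textfield',
      '#title' => t('Notification email'),
      '#states' => array(
        'visible' => array(
          ':input[name="send_notification"]' => array('checked' => TRUE),
        ),
      ),
    );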

That says, show the textfield only when the referenced checkbox is checked. The reference to the checkbox is with a jQuery selector. This one works, really straightforward. And, the state is managed by Drupal in both directions. When the referenced checkbox is checked, then the textfield is visible. When the referenced checkbox is unchecked, then the textfield becomes not visible.

But what if you want a set of mutually exclusive checkboxes?

Mutually Exclusive Checkboxes

One approach is to just use the above model, and have each checkbox depend on the other. In other words, something like this:
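
(Again, field names invented; each checkbox uses #states to become unchecked whenever the other one is checked.)

    $form['option_a'] = array(
      '#type' => 'checkbox',
      '#title' => t('Option A'),
      '#states' => array(
        'unchecked' => array(
          ':input[name="option_b"]' => array('checked' => TRUE),
        ),
      ),
    );
    $form['option_b'] = array(
      '#type' => 'checkbox',
      '#title' => t('Option B'),
      '#states' => array(
        'unchecked' => array(
          ':input[name="option_a"]' => array('checked' => TRUE),
        ),
      ),
    );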

This will not work. The reason this does not work, is that the state is managed by drupal in both directions. When checkbox #1 is checked, then checkbox #2 becomes unchecked. Which means checkbox #1 gets checked. Which means checkbox #2 becomes unchecked. And if you turn on the Firebug debugger, you can see the logical loop going round and round, endlessly.

There was an approach described here that suggested using two conditions in the array. But that didn’t work for me; I still had the endless loop. After fiddling with this for an hour, searching around for hints, I decided to just do it myself with my own jQuery. The logic was simple to write. And, I didn’t want to fight the Drupal Forms API any longer.

So here’s the solution. Include this JavaScript in your module:
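
(Roughly like the following; the ‘mutually-exclusive’ class is just an arbitrary marker for the checkboxes that should exclude each other.)

    // when one marked checkbox gets checked, clear the other marked ones
    (function ($) {
      Drupal.behaviors.mutuallyExclusiveCheckboxes = {
        attach: function (context) {
          $('input.mutually-exclusive', context).once('mutex', function () {
            $(this).change(function () {
              if (this.checked) {
                // affirmatively checked: uncheck the others in the same form
                $('input.mutually-exclusive', $(this).closest('form'))
                  .not(this)
                  .removeAttr('checked');
              }
              // unchecked: do nothing
            });
          });
        }
      };
    })(jQuery);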

As you can see, it registers a ‘change’ hook for a specially-marked checkbox. And when the checkbox is affirmatively checked, it unchecks the other checkbox. When the checkbox is unchecked, it does nothing.

How does that JS get loaded? In the Drupal module code, do this:
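
(Something like this; ‘mymodule’ and the file name are placeholders.)

    // in the form builder, or in hook_form_alter()
    $form['#attached']['js'][] =
      drupal_get_path('module', 'mymodule') . '/js/mutually-exclusive-checkboxes.js';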

And finally, how do we set up the checkboxes in the Forms API? Like this:
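
(Field names invented again; the important part is the marker class, and note there is no #states at all.)

    $form['option_a'] = array(
      '#type' => 'checkbox',
      '#title' => t('Option A'),
      '#attributes' => array('class' => array('mutually-exclusive')),
    );
    $form['option_b'] = array(
      '#type' => 'checkbox',
      '#title' => t('Option B'),
      '#attributes' => array('class' => array('mutually-exclusive')),
    );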

And that gets the desired behavior: It is possible for zero or one of those checkboxes to be checked, but not both.

It took more time to write this post than it took to build the solution shown here! And of course I never did manage to figure out how to do the same just using the Forms API. This is an example of an API, the Forms API in Drupal, that does some things well, and this one thing….? Not so well. Much easier to just jump out and solve it this way.

Maybe this will help someone else!

By the way, this is included in a Drupal module that allows administrators to verify / validate user registration.

Google Guava – sweet and succulent

I have a bit of java code that handles JWT. It generates a MACVerifier and then uses that to verify a signature. Someone commented that it was taking more time than they expected. I didn’t see a ton of opportunity for optimization, but I thought I might wrap the generation of the MACVerifier in a cache.

At first I tried EHCache. EHCache is the gold standard for Java caching. There are sooo many options, and there is sooo much flexibility. Write-through caches, read-through caches, caches with persistence that is configurable in ways you had not imagined you needed. Java annotations to add caching to servlets or JAX-RS. EHCache has it all.

Do One Thing Well

So I figured it would be a safe choice. But after a little bit of fiddling with it, I decided EHCache was too much. To me, EHCache violates the “do one thing well” principle of design, or if you like, the Single Responsibility Principle (as applied to the module, if not a particular class). Or maybe the problem is simply unsatisfying documentation, which is a common failing even among “successful” open source projects.

Why is there a CacheManager? What if I create a Cache and don’t register it with a CacheManager – what happens? What do I lose? Why do I want a CacheManager? Why are there names for both managers and caches? What would happen if I registered a Cache with multiple managers? What if I don’t want persistence? What if the Cache itself goes out of scope – will it be garbage collected?

I couldn’t find ready answers to these questions, and the whole experience left me lacking confidence that the cache would do the right thing for me. In the end I concluded that EHCache was more, much more than I needed, and would require more time than I wanted to invest to get a cache. I just wanted a simple in-memory Cache in Java with TTL support (where TTL also implies time-since-last-access, or time-to-idle). And what do you know! Google Guava provides that!

Guava

At first it was unclear how to best exploit it. But a little reading showed me that Guava has a clever design that allows the cache itself to load items into it. I don’t need to write MY code to check for existence, and then create the thing, and then put it into the cache. Guava has a LoadingCache that does all this for me. I just call cache.get() and if the item is present, it is dispensed. If it is not in the cache, then the cache loads it and gives it to me. Read-Through cache loveliness. So simple and easy.

This is my code to create the cache:
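
(A sketch of it; the wrapper class name, the maximum size, and the idle timeout are stand-ins, not my exact values.)

    import java.util.concurrent.TimeUnit;
    import com.google.common.cache.CacheBuilder;
    import com.google.common.cache.CacheLoader;
    import com.google.common.cache.LoadingCache;
    import com.nimbusds.jose.JOSEException;
    import com.nimbusds.jose.crypto.MACVerifier;

    public class VerifierCache {
      // MACVerifier instances keyed by the shared secret; an entry expires a
      // fixed time after it was last accessed (time-to-idle).
      private final LoadingCache<String, MACVerifier> cache =
          CacheBuilder.newBuilder()
              .maximumSize(1000)
              .expireAfterAccess(10, TimeUnit.MINUTES)
              .build(new CacheLoader<String, MACVerifier>() {
                @Override
                public MACVerifier load(String secret) throws JOSEException {
                  // called only on a cache miss
                  return new MACVerifier(secret);
                }
              });

      public MACVerifier get(String secret) throws Exception {
        return cache.get(secret); // loads and caches on first access
      }
    }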

And to use the cache, I just call cache.get(). Really slick. Thanks, Google!

restclient.el – sending API Requests directly from within Emacs

Hey, something new! (to me!) the restclient.el library, for emacs. I tried it. I like it. I recommend it.

What does it do? Allows you to send REST requests (really just http requests) right from emacs, interactively. And then pretty-prints the results if possible (if XML or JSON or image). It includes a simple text mode that allows you to define a set of requests and some variables that can be used in those requests. The whole thing is simple, easy, handy.
You send the request under the cursor with C-c C-c.

Separately, I have a library that reads .netrc files from within elisp. It’s a natural complement to restclient.el, for API endpoints that require HTTP Basic authentication. That covers lots of API endpoints, including OAuth token dispensaries that require the client_id and client_secret to be passed in as an HTTP Basic authentication header. Here’s a simple example use:
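
(A sketch of a restclient buffer; the endpoint, client_id, and client_secret are placeholders; in real use I pull the id and secret out of .netrc rather than hard-coding them.)

    # -*- restclient -*-
    :token-endpoint = https://token-dispensary.example.com/oauth/token
    :basic-auth := (base64-encode-string "my-client-id:my-client-secret" t)

    # ask for an opaque token using the client_credentials grant
    POST :token-endpoint
    Authorization: Basic :basic-auth
    Content-Type: application/x-www-form-urlencoded

    grant_type=client_credentials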

Really nice. How did I not know about this elisp library?

One problem I had when using it: restclient.el helpfully uses a function, json-pretty-print-buffer, to pretty-print the buffer containing the response, if the content-type of the response is application/json.

I don’t know that function, and it wasn’t defined on my emacs. This led to a runtime error, and a json buffer that was hard for me to visually parse.

But my emacs does have the similarly named json-prettify-buffer. So I used the following gist to get the restclient to succeed in its pretty-printing efforts.
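
A minimal version of that kind of shim is just an alias, something like:

    ;; if restclient.el expects json-pretty-print-buffer and this emacs does
    ;; not define it, point that name at the similar function that does exist
    (unless (fboundp 'json-pretty-print-buffer)
      (defalias 'json-pretty-print-buffer 'json-prettify-buffer))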

The restclient.el module is not a huge thing, but it’s nice for us emacs people. I know about Postman, and use it. I know about Paw (but don’t use it). I know and use Fiddler. I am a big fan of curl, and sometimes curlish. This is a nice additional tool for the toolbox. Really handy.

Thanks, Jake McCrary, for writing up your experience with Emacs and restclient.el; your blog post is how I discovered it.  And thanks of course to Pavel Kurnosov, the original author of the restclient.el library. Thanks for sharing.

EDIT – I made a change in restclient.el to fix an issue that causes an extra unintended newline to be appended to the last form parameter. This issue cost me about 90 minutes of debugging my JWT verification code, bummer! My change just trims trailing newlines from the entity being sent. This will be a problem for you if you want to send an entity that ends in several newlines. Find my fixed restclient.el here.

letsencrypt and NearlyFreeSpeech

I’ve been running this site on nearlyfreespeech for some time now.

Last week I created a cert using the tools and service made available by letsencrypt.org, and then configured my NFS server to use it. It was pretty easy, but not documented. I’ll share here what I did to make it work.

I am able to SSH into the nearlyfreespeech server. I can also perform a git clone from that server to get the letsencrypt tools. But when I ran the letsencrypt-auto tool from the server, it didn’t do what I wanted it to do. This was my first time with the tool, and I’m unfamiliar with the options, so maybe it was just pilot error.

In any case, I solved it by running the tool on my Mac OSX machine and transferring the generated PEM files to the server.

  1. I ran git clone on my local workstation (Mac OSX)
  2. from there, I ran the letsencrypt tool with these options:
    ./letsencrypt-auto certonly  --manual  \
       -d www.dinochiesa.net -d dinochiesa.net \
       --email dpchiesa@hotmail.com
    
  3. follow the instructions. I needed to create endpoints on my NFS server that responded with specific values. (See the sketch just after this list.)
  4. when that completed, I had the cert and keys in PEM format. I then copied them to /home/protected/ssl on the NFS server
  5. opened a service ticket on NFS as per This FAQ
  6. a couple hours later, the NFS people had completed the SSL config for me
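
Regarding step 3: the “endpoints” are just static files served over plain HTTP. The manual plugin prints a token and a value, and you create a matching file under the web root (which on NFS is /home/public). Roughly, with placeholder values:

    mkdir -p /home/public/.well-known/acme-challenge
    echo "value-printed-by-letsencrypt" > /home/public/.well-known/acme-challenge/token-printed-by-letsencrypt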

Maybe this will help someone else.

It’s possible that I could have used the --manual option on the NFS Server, and avoided the need to transfer files. Not sure. If anyone else has done this, I’d like to know. I will need to renew my certs every couple months.

I’m really pleased about the letsencrypt service. I hope it gets used widely.

Update, 2017 December 7: I’ve updated my certs 3 or 4 times since I made this post. Now, this is what I do:

   sudo certbot certonly  \
     --authenticator manual  \
     --domain www.dinochiesa.net \
     --domain dinochiesa.net \
     --email dpchiesa@hotmail.com \
     --rsa-key-size 4096

I’ve automated the other parts – creating the right endpoints on the NFS server, and then copying the generated certs when they’re sent. Also NFS no longer requires a service ticket; it will automatically install certs when I update them. The change takes a minute or less. Super easy.

Use PHP code to make WordPress redirect to secure site

Lots of people use the .htaccess redirect rules to force their wordpress sites to load with the secure option.

It looks like this:
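
(The usual recipe, more or less; details can vary by hoster.)

    # force https for every request
    RewriteEngine On
    RewriteCond %{HTTPS} off
    RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]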

But if you have a hoster that does not provide you the ability to modify the .htaccess file, that won’t work. These hosters typically set up your server behind their load balancer, which means the wordpress code sometimes cannot directly infer whether HTTPS is in use. In other words, $_SERVER['HTTPS'] is not set correctly.

It is possible to introduce code into your theme that will do what you need. This is the PHP code:
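
(A sketch. The X-Forwarded-Proto check is one common way a hoster’s load balancer signals HTTPS; your hoster may use a different header.)

    // redirect to the https:// version of the current URL when the request
    // did not arrive over SSL; behind a load balancer, $_SERVER['HTTPS'] may
    // be empty, so also consult the X-Forwarded-Proto header
    function maybe_redirect_to_ssl_site() {
      $is_https =
        (!empty($_SERVER['HTTPS']) && $_SERVER['HTTPS'] !== 'off') ||
        (!empty($_SERVER['HTTP_X_FORWARDED_PROTO']) &&
         strtolower($_SERVER['HTTP_X_FORWARDED_PROTO']) === 'https');

      if (!$is_https) {
        wp_redirect('https://' . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI'], 301);
        exit;
      }
    }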

Insert that in your theme header.php file. Or maybe the functions.php file. Invoke the maybe_redirect_to_ssl_site() function in the theme header before emitting any HTML.

Mac OSX users: update openssl. Also: openssl built-in to Mac OSX is different than brew version

If you use openssl on Mac OSX to maintain certs, etc, you should keep it up-to-date.

Worth knowing, courtesy of a comment by Gordon Davisson on this Stackoverflow question:

…the major problem isn’t the openssl command, it’s the openssl libraries (which are used by other programs) — those aren’t API compatible between versions 0.9.x and 1.0.x, so you do not want to update the system-supplied openssl libraries!

Here’s how you get the latest openssl from brew. First, make sure you have brew installed and updated (per brew.sh):
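
(This is the install command from brew.sh; check the site for the current incantation.)

    /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"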

If you already have brew installed, the output of that command will tell you.

OK, at this point you have brew installed. Then, update brew and update openssl:
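
(Something like this; the second command assumes openssl was already installed via brew at some point, otherwise use brew install openssl.)

    brew update
    brew upgrade openssl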

If your brew is somehow broken, this command will give you lame messages. To fix, you can try “brew doctor”. (Once I resorted to simply re-installing brew. I ran the “brew install” command, which said “brew is already installed”, and then told me how to uninstall. I uninstalled, then ran the brew install again.)

Be aware that /usr/bin will probably be first on your path, so if you want to use the latest openssl you will have to explicitly request it with the fully qualified path name.
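
Brew keeps its openssl out of the default path (it is “keg-only”), so, assuming the default /usr/local prefix, that looks something like:

    /usr/local/opt/openssl/bin/openssl version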

And also be aware that brew does not yet have openssl v1.0.2f, which includes the fix for CVE-2016-0701.