APIs, microservices, and the service mesh

Got some time, and want to learn about APIs, microservices, the service mesh, and how these pieces fit together in an enterprise?

Here’s a session Greg Kuelgen and I delivered at Google Next 2019.


Summary: If you’ve got more than a handful of services cooperating with each other, you’re gonna want a service mesh infrastructure. And you’ll want to use API management to share APIs outside of the team that developed them.

Do you use curl? Stop using -u. Please use .netrc

An unsolicited tech tip.

Those of you who are API people should exhibit good API hygiene.

One aspect of that is: “stop using curl -u” !!

Sometimes you have the urge to run a command like this:
curl -X POST -v -u 'yourusername:password' https://foobar/slksls

Avoid this.

OK, ok, I know sometimes it’s necessary. But if you have an API endpoint that you often tickle with curl, and it accepts credentials via HTTP basic auth, you should be using .netrc to store the credentials.

The problem with using -u is that the password appears in clear text on your terminal, and gets saved in your shell history!

OK, I know, you’re thinking: but I’m the only one looking at my screen. I can hear you thinking that right now. And that may be true, most of the time. But sometimes it’s not.

Sometimes you cut/paste terminal sessions into an email, or a blog post, or a bug report. And that’s when your password gets written down and shared with the world.

Treat Basic Authorization headers the same as passwords, because any observer can trivially decode your password from one.

You might think that it’s ok to insert credentials in an email if it’s just being shared among your close work colleagues. But that’s a bad idea also. Audit trails depend on the privacy of credentials. If you share them, the audit is gone. Suppose you have a disgruntled (ungruntled? never gruntled?) colleague who decides to take your creds and use them to recursively curl -X DELETE a whole bunch of resources. The audit trail will show YOUR name on that act.

In short, it’s bad form. The email could be forwarded or copy/pasted, or the practice could harden into habit. It sets a terrible example for the children.

Here’s what I suggest:

Option 1: if you use curl

If you have a *nixy machine, create a ~/.netrc file and insert your creds there. The curl documentation describes the file format.

chmod the file to 400. When you use the -n option, curl knows how to extract your creds from the file silently. You never have to type credentials on the command line again. I think you can do this on Windows too, but I don’t know curl on Windows.
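For example, a ~/.netrc entry looks like this (hypothetical host and creds):

  machine foobar.example.com
    login yourusername
    password Sekrit1234

With that in place, the command from above becomes:

  curl -n -X POST https://foobar.example.com/slksls

curl matches the host in the URL against the machine entries in the file, and applies the corresponding creds.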

If you build scripts that use curl, you should allow the user that same option. That way the user never keys in their creds to your script.

When you pass the -n option to curl, instead of -u USER:PASS, it tells curl, “if you ever connect with site.example.com, then use THESE creds” . This works with any HTTP endpoint curl can address via Basic Auth. I have creds for Jira, Heroku, and other systems all in my .netrc.

Hint: also don’t use curl -v, because that will show the basic auth header. You probably want -i anyway, which is less verbose than -v.

Option 2: don’t use curl

Use some other tool that hides the credentials completely.
I think Postman doesn’t quite hide the creds completely. So be careful!

Let’s all try to exemplify good security behavior.

It’s that time of year… when people think about exchanging JWT for opaque tokens

Yes, it’s that time of year when people think about RFC7523, which describes how to exchange JWT for opaque OAuth tokens.

Right?

If you’re like me, the waves of acronyms, jargon, and IETF RFCs (see what I did there?) seem to never end. OAuth, JWT, RFC 7523, JTI, claims, RS256, PBKDF2…? I feel your pain.

But there is some good news… here’s something that will help clarify the ideas and use cases around RFC7523. I wrote a quick article, and also created an Apigee Edge API Proxy that implements this for you. It illustrates exactly how to exchange a JWT for an opaque OAuth token, and I even included some commentary in the readme explaining why you’d want to do it. (Spoiler alert: it’s faster to verify opaque OAuth tokens.) All available on the Apigee community site.

The way I think about RFC7523: it’s an alternative to the client_credentials “grant type” described in IETF RFC6749, the document that defines the OAuth v2.0 framework.

OK, I hear you saying it: “back up, Dino… What is this client_credentials thing?” Yes, there is an underscore there. The client_credentials grant type is designed to allow a client app to identify itself to a token dispensary. The client says “here’s my ID, and here’s a secret that only I (the client app) should know.” The token dispensary can then look at those two pieces of information, and if they are valid (the client_id is not expired or revoked), issue a token. It’s like username + password authentication for a person, but client_credentials is used for identifying a client app. This grant type is mostly useful in server-to-server communications, when one service is being used by another service.

BUT, some people use client_credentials grants in their mobile apps, so that the API service can trust that the mobile app is who it claims to be. (There are some problems with this: the client_secret needs to be embedded in the client code, therefore it is accessible to hackers, and therefore it is not truly “secret”. We can talk about mitigations for this in a future blog post.)
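For concreteness, here’s what a client_credentials request looks like on the wire, sketched with a hypothetical token endpoint, and with the creds sent via Basic auth (pulled from .netrc, per the tip above):

  curl -n -X POST https://token-dispensary.example.com/token \
       -d grant_type=client_credentials

If the client_id and client_secret check out, the response carries an opaque access token.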

So that’s the client_credentials grant type. As I said, RFC7523 is an alternative to the client_credentials grant. Basically, instead of sending in a client_id and client_secret, under the RFC7523 flow (which has the helpful and easy-to-remember moniker of “JSON Web Token (JWT) Profile for OAuth 2.0 Client Authentication and Authorization Grants”, seriously) the client app self-signs a JWT which includes the client_id as the issuer. The app sends that to the token dispensary. The token dispensary verifies the signature, verifies that the client_id is valid, and then issues an opaque OAuth v2.0 token.
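To make that concrete, here’s a minimal sketch of the client side in nodejs. It assumes the jsonwebtoken package, node 18+ for the global fetch, an RS256 keypair, and a hypothetical token endpoint; your token dispensary may require a different claim set:

  const fs = require('fs');
  const crypto = require('crypto');
  const jwt = require('jsonwebtoken');

  // Self-sign a short-lived JWT; the issuer is the client_id.
  const assertion = jwt.sign(
    {
      iss: 'my-client-id',
      sub: 'my-client-id',
      aud: 'https://token-dispensary.example.com/token',
      jti: crypto.randomUUID() // unique id; enables one-use-only checks
    },
    fs.readFileSync('client-private-key.pem'),
    { algorithm: 'RS256', expiresIn: '180s' }
  );

  // Send it to the token dispensary; get back an opaque OAuth token.
  fetch('https://token-dispensary.example.com/token', {
    method: 'POST',
    headers: { 'content-type': 'application/x-www-form-urlencoded' },
    body: new URLSearchParams({
      grant_type: 'urn:ietf:params:oauth:grant-type:jwt-bearer',
      assertion: assertion
    })
  }).then(r => r.json()).then(console.log);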

Now, there are some interesting implications to this model. Maybe these are obvious to some of you, but I will state them anyway:

  1. The token dispensary and the client app have to conform to the same JWT signing convention. JWT can be signed with a shared secret (HS256) or with a public/private keypair (RS256). Either way is fine, but the two sides must agree.
  2. Regardless of the signing convention, it must be possible for the token dispensary to verify the signature. If HS256 is the agreed convention, this means the token dispensary and the client app must share a secret. (This can be the client_secret itself, if it has sufficient entropy, or a key derived from it via PBKDF2.) If RS256 is the signing convention, it means the two parties must have a shared trust relationship, where the token dispensary has access to the public key of the client app. Bottom line: there is a little more overhead for you in setting up a JWT-for-opaque-token exchange mechanism if you use RS256. Specifically, you need to provision a new RSA public/private keypair for the client, and the client needs to make the public key available to the token dispensary.
  3. The client app needs some extra intelligence: specifically, a library that allows it to create a signed JWT. There are myriad options available regardless of the app platform + language you use, so in practice this won’t be an obstacle, but it does mean there will be new code you must include in your client.

Once you get past those implications and the extra set-up overhead, the model in RFC 7523 is really nice because it’s extensible. That’s because the request-for-token is encapsulated in a JWT, and the JWT itself is extensible. You, as an API designer, can stipulate any arbitrary (custom) claims that clients must include in the JWT, in order to compose a valid request-for-token. And you can include restrictions on the standard claims or custom claims. Some examples:

  1. A proof-of-work string, something like a HashCash string or similar. Including proof-of-work would discourage bots.
  2. You can stipulate that the JWT be short-lived. Verification of the JWT might include a proviso that rejects any JWT with a lifetime beyond 180 seconds, for example.
  3. You could institute a one-use policy on such JWTs.
  4. You could require a “scopes” claim, and validate the strings contained in that claim against what the issuer (== the client_id) is allowed to request.

BTW, the example API Proxy I shared on Github shows how to implement the lifetime and one-use-only controls. (As with everything I publish on github, pull requests are welcomed!) If the inbound JWT that carries the request-for-opaque-token does not pass these checks, a 401 Unauthorized is sent back.
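For illustration, those two checks amount to something like this. This is a sketch, not the actual proxy implementation; it assumes the jsonwebtoken package, and uses an in-memory Set as a stand-in for a real replay cache:

  const jwt = require('jsonwebtoken');
  const seenJtis = new Set(); // a real deployment wants a shared cache with expiry

  function verifyAssertion(assertion, clientPublicKey) {
    // throws if the signature is invalid or the JWT is expired
    const claims = jwt.verify(assertion, clientPublicKey, { algorithms: ['RS256'] });
    if (claims.exp - claims.iat > 180)
      throw new Error('JWT lifetime exceeds 180 seconds'); // lifetime control
    if (seenJtis.has(claims.jti))
      throw new Error('JWT has already been used');        // one-use-only control
    seenJtis.add(claims.jti);
    return claims; // ok; the caller can now issue an opaque token
  }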

BTW #2, did you know that Google services like Stackdriver and Cloud Storage use JWT-for-opaque-token exchange to enable service-to-service integration? Google also institutes the lifetime and one-use-only controls. The lifetime of the JWT must be less than 300 seconds.

Say, that reminds me! Speaking of Google, did I mention that Google has acquired Apigee? Yes, I work for Google now, as part of the Apigee team within Google. w00t! I’m pumped, psyched, charged up, amped, and very pleased about this development.

So far, minimal changes for me, except that I got a Chromebook! And yes, I authored this post from that very same device.

As always, I’m interested to hear your feedback on this. Let me know in the comments section.

Finally, I would like to wish all of you a Merry RFC7523 Season; and I wish you many Happy short-lived OAuth Tokens in the new year.

restclient.el – sending API Requests directly from within Emacs

Hey, something new (to me!): the restclient.el library, for emacs. I tried it. I like it. I recommend it.

What does it do? Allows you to send REST requests (really just http requests) right from emacs, interactively. And then pretty-prints the results if possible (if XML or JSON or image). It includes a simple text mode that allows you to define a set of requests and some variables that can be used in those requests. The whole thing is simple, easy, handy.
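A restclient buffer might look something like this (hypothetical endpoint; lines starting with # separate requests, and :host is one of those variables):

  :host = https://api.example.com

  # get a widget; the JSON response gets pretty-printed
  GET :host/widgets/42
  Accept: application/json

  # create a widget; the entity body goes after an empty line
  POST :host/widgets
  Content-Type: application/json

  {"name": "flange", "size": 42}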
Put the cursor on a request and activate it with C-c C-c.

Separately, I have a library that reads .netrc files from within elisp. It’s a natural complement to restclient.el, for API endpoints that require HTTP Basic authentication. That covers lots of API endpoints, including OAuth token dispensaries that require the client_id and client_secret to be passed in as an HTTP Basic authentication header. Here’s a simple example use:
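Roughly, it goes like this, sketched with the netrc.el bundled with Emacs (the library I use may differ in its details):

  ;; read the creds for a host from ~/.netrc and build a
  ;; Basic authentication header value from them
  (require 'netrc)
  (let* ((entry (netrc-machine (netrc-parse "~/.netrc") "api.example.com"))
         (login (netrc-get entry "login"))
         (password (netrc-get entry "password")))
    (concat "Basic " (base64-encode-string (concat login ":" password))))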

Really nice. How did I not know about this elisp library?

One problem I had when using it: restclient.el helpfully calls a function named json-pretty-print-buffer to pretty-print the buffer containing the response, if the content-type of the response is application/json.

That function wasn’t defined in my emacs. This led to a runtime error, and a JSON buffer that was hard for me to visually parse.

But my emacs does have the similarly named json-prettify-buffer, so I used a small shim to get restclient.el to succeed in its pretty-printing efforts.
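In essence, a shim like this does the trick, assuming your emacs defines json-prettify-buffer:

  ;; give restclient.el the function name it expects, aliased
  ;; to the similarly-named function this emacs actually has
  (unless (fboundp 'json-pretty-print-buffer)
    (defalias 'json-pretty-print-buffer 'json-prettify-buffer))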

The restclient.el module is not a huge thing, but it’s nice for us emacs people. I know about Postman, and use it. I know about Paw (but don’t use it). I know and use Fiddler. I am a big fan of curl, and sometimes curlish. This is a nice additional tool for the toolbox. Really handy.

Thanks, Jake McCrary, for writing up your experience with Emacs and restclient.el; your blog post is how I discovered it.  And thanks of course to Pavel Kurnosov, the original author of the restclient.el library. Thanks for sharing.

EDIT – I made a change in restclient.el to fix an issue that caused an extra, unintended newline to be appended to the last form parameter. This issue cost me about 90 minutes of debugging my JWT verification code, bummer! My change just trims trailing newlines from the entity being sent. This will be a problem for you if you want to send an entity that ends in several newlines. Find my fixed restclient.el here.

Pre-request script for Postman, to calculate HttpSignature

If you do REST, you probably have a favorite REST client testing tool.
Mine is Postman, a Google Chrome app.

Postman has a nifty feature called Pre-request scripts, which allows you to write some Javascript code that performs a calculation and then reads and writes the “environment” object for the request. This means you can calculate … hashes or digests or signatures or anything you like.

Here’s an example of a script that calculates an HMAC-SHA256 HttpSignature, using the keyId and secret-key embedded in the environment. It also computes a digest on the message payload. Postman helpfully includes CryptoJS in the JS sandbox, so it’s easy to do HMAC and SHA.

In this particular case, the HttpSignature verification on the server requires 2 headers (date and digest) plus the ‘(request-target)’ value. The digest is a SHA-256 of the payload, which is then base64 encoded.
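The script goes something like this. It’s a sketch: the environment variable names (keyId, secretKey, requestPath) are hypothetical, and it assumes a POST with a raw (non-form) request body:

  // compute date, digest, and signature, and store them in the
  // environment; the request then references {{date}}, {{digest}},
  // and {{signature}} in its headers
  var date = new Date().toUTCString();
  var digest = CryptoJS.SHA256(request.data).toString(CryptoJS.enc.Base64);
  var signingString = '(request-target): post ' + environment.requestPath + '\n' +
      'date: ' + date + '\n' +
      'digest: SHA-256=' + digest;
  var signature = CryptoJS.HmacSHA256(signingString, environment.secretKey)
        .toString(CryptoJS.enc.Base64);
  postman.setEnvironmentVariable('date', date);
  postman.setEnvironmentVariable('digest', 'SHA-256=' + digest);
  postman.setEnvironmentVariable('signature', 'keyId="' + environment.keyId +
      '",algorithm="hmac-sha256",headers="(request-target) date digest"' +
      ',signature="' + signature + '"');

The Authorization header on the request is then set to Signature {{signature}}.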

Anyone can start with this and modify it to do other variations.
Good luck!

Addendum

I should have mentioned this: Postman, even the latest Chrome app version, uses XMLHttpRequest to send its requests. The XHR specification purposely restricts some headers from being set explicitly on outbound requests; the list of restricted headers includes Origin, Date, Cookie, Via, and others. The reason for this restriction: the user-agent should be fully in control of such request headers.

My example above uses an HttpSignature that signs the Date header. This means the code must also SET the Date header to a known value; in my case I am using a value generated by the pre-request script.


The value corresponds to “now”, the current moment. But the point is the value has to be absolutely known.

This is not possible in standard Postman, because of the XHR restriction. The effect you will see is that the intended Date header silently does not get sent with the request.

This may confound you if you don’t have an ability to trace the request on the receiving (server) side. In the context of a request that uses HttpSignature, the server will throw an error saying “Missing Date header”.

But! In Postman v0.9.6 and above, it is possible to configure Postman with something the Postman people call the Interceptor plugin, which allows the lifting of this restriction. The Date header gets sent, and everything works.

If you don’t want to rely on the Interceptor plugin, and you want the HttpSignature to include the date value, then you’ll have to use a differently named header to hold the date. Use X-Date or anything other than “Date”. You need to change the client as well as the server of course, to make everything hold together.

Online calculator for SHA and HMAC-SHA

Here’s a thing I built. It’s just a webpage that calculates SHA-1, SHA-224, SHA-256, SHA-384, and SHA-512 hashes, as well as HMACs with those same algorithms.

I was using this to help with building a system that relies on HttpSignature. Developers need some help in constructing and validating their HMACs and SHAs.
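If you’d rather cross-check from a terminal, the equivalent computations in node are short (hypothetical message and key):

  const crypto = require('crypto');
  // plain SHA-256 of a message
  console.log(crypto.createHash('sha256').update('hello, world').digest('hex'));
  // HMAC-SHA256 of the same message with a shared secret
  console.log(crypto.createHmac('sha256', 'my-secret-key').update('hello, world').digest('hex'));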

The spec formerly known as Swagger is now OpenAPI


Swagger has been renamed! Three weeks ago. I didn’t realize this, and (forgive me) I’ve been continuing to use the term “Swagger” in the time since, when I really should have been using “OpenAPI”.


Helpfully, Marsh, an esteemed colleague of mine, has produced a slackbot to remind me to use the word “OpenAPI” every time I type the… uh… old word… in slack chats. Now I just need that slackbot to follow me around and remind me every time I *say* the old word.

There’s a new group, the OpenAPI Initiative, whose members include IBM, Google, Apigee, Intuit, Microsoft, Paypal… these members will govern the evolution of the spec.

REST Assured (hahahaha! ya get it?) that Apigee will be building some nice innovations on top of the OpenAPI spec. Exciting things coming soon. You can already see the beginnings at apistudio.io.


It’s not difficult to imagine some interesting possible paths forward, from that tooling.

And, omigosh, I just realized that I haven’t posted an article here in about 6 months! Wow I must have been busy…

RESTful is hardly harmful.

A provocative essay came up on Hacker News today, entitled RESTful considered harmful.

The summary of the essay:

  • JSON is bloated in comparison to protobufs and similar binary protocols
  • There are no interface contracts or data schemas
  • HATEOAS doesn’t work
  • There is no direct support for batching, paging, sorting, etc. – e.g., no SQL semantics
  • CRUD is too limited
  • No, really, CRUD is too limited
  • HTTP status codes don’t naturally map to business semantics
  • There’s no queueing or asynchrony
  • There are no standards
  • Backward compatibility is hard

Let’s have a look at the validity of these concerns.

1. JSON is bloated in comparison to protobufs

The essay cites “one tremendous advantage of JSON”: human readability, and then completely discounts this advantage by saying that it’s bloated. It really is a tremendous advantage, which is why XML won over MQ’s binary protocol and the XDR from Sun RPC, and the NDR from DCE RPC, and every other frigging binary protocol. And readability is why JSON displaced XML.

Ask yourself this: what is the value of readability versus the performance advantages of the alternatives, like Thrift or protobufs? Is readability worth 1x as much as the improved efficiency you might get with protobufs? 2x? I believe that for many people, it’s worth 100x. It trumps all other considerations. For uber-experts, it’s deceptively attractive to wave away the advantage of human readability. For the rest of the world, for 97% of developers, it’s a huge, Huge, HUGE advantage. For high-speed financial trades, JSON is wrong. For Google’s internal interfaces, wrong. For most of the world, RIGHT.

AND, as the essay notes, REST doesn’t prescribe JSON. Or XML. Or anything. There’s a content-type header, and clients and servers can negotiate it. If the client says Accept: application/x-protobuf, and the server can send it, bliss for you. So this point, “JSON is bloated”, is not only false in the first place; it’s also not an argument against REST.

2. There are no interface contracts or data schema

This is a feature. OMG, have we not tried this enough times? Did this guy skip his “History of IDL compilers” course in the Computer History department at school? Sun RPC IDL. DCE RPC IDL. Corba IDL. WSDL, ferpeetsake! XML Schema!!

It’s pretty straightforward to deliver plain-old-XML over HTTP, which is quite RESTful. More popular is JSON-over-HTTP. Both of those have schema languages. Few people embrace them, though. Why? Because IDLs and schema languages are too much structure, and they handcuff people more than they help. We have fortunately learned from the past. There are more tools coming in this area, for those who wish to embrace them. See apistudio.io.

3. HATEOAS doesn’t work

Mmmmm, yep. No argument here. In my experience, nobody really uses this, in practice. Pragmatic REST is what people do, and it generally does not use HATEOAS.

4. no SQL semantics

Uh-huh, true. This has been addressed with things like OData. If you want SQL semantics, seek solutions; don’t just complain.

5. CRUD is too limited

Really? This is a problem? That you might need a switch statement in your code to handle different types of events? Really?

6. CRUD is really too limited

….

Mmmmm, sorry. I have to stop now. I’m completely bored of responding to this essay by now. Except for one more:

10. Backward compatibility is hard

This has NOTHING to do with REST. This is just true. Back compat in any interface is tricky.


In summary, I don’t find any of the arguments compelling.

Let me draw an analogy. The position in this essay is like saying “Oil is no good as a transportation fuel.” Now, oil has its drawbacks! Oil is dirty. We can imagine alternatives that are better in theory. Even today, in specific local situations (daily use, short trips, urban travel) electric cars are better, MUCH better, than fossil-fuel-based cars. (And bicycles are even better than electric cars.) But gasoline-powered cars deliver massive utility to billions of people. Gasoline refueling stations are everywhere. The delivery system for gasoline is mature and redundant. The world RUNS, very effectively, on gasoline-powered transport, by and large. Objectively, oil is VERY GOOD as a transportation fuel.

Sure, we’ll evolve better approaches in the future. That’s great. And sure, we can imagine a world with electric-powered vehicles. But today, in the world of reality, Oil wins.

And likewise, Pragmatic REST, HTTP, JSON, and schema-less interfaces are winning. We’ll evolve better approaches. But today, this platform wins.

HTTP, HTML, Javascript, and JSON are ubiquitous, are the foundation of the web, and are not going anywhere. Any architect is free to choose other options, and they might have good reasons for doing so. On the other hand, the vast majority of installations won’t benefit from protobufs or Thrift, or some non-HTTP protocol. Pragmatic REST, JSON, and HTTP are very, very safe choices in the vast majority of scenarios.

Cheers