letsencrypt and NearlyFreeSpeech

I’ve been running this site on NearlyFreeSpeech for some time now.

Last week I created a cert using the tools and service made available by letsencrypt.org, and then configured my NFS server to use it. It was pretty easy, but the process isn’t documented anywhere that I could find. I’ll share here what I did to make it work.

I am able to SSH into the NearlyFreeSpeech server, and I can perform a git clone from there to get the letsencrypt tools. But when I ran the letsencrypt-auto tool from the server, it didn’t do what I wanted it to do. This was my first time with the tool, and I’m unfamiliar with the options, so maybe it was just pilot error.

In any case, I solved it by running the tool on my Mac OSX machine and transferring the generated PEM files to the server.

  1. I ran git clone on my local workstation (Mac OSX).
  2. From there, I ran the letsencrypt tool with these options:
    ./letsencrypt-auto certonly --manual \
       -d www.dinochiesa.net -d dinochiesa.net \
       --email dpchiesa@hotmail.com

  3. I followed the instructions. I needed to create endpoints on my NFS server that responded with specific values; see the sketch after this list.
  4. When that completed, I had the cert and keys in PEM format. I then copied them to /home/protected/ssl on the NFS server.
  5. I opened a service ticket on NFS, as per this FAQ.
  6. A couple hours later, the NFS people had completed the SSL config for me.
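
For step 3, the challenge was HTTP-based: the tool pauses and tells you to serve a specific token value at a specific URI under /.well-known/acme-challenge/ on the domain being verified. On NFS that means creating a file under the web root, something like this (the token and filename here are placeholders; use the exact values the tool gives you):

    # on the NFS server; /home/public is the web root
    mkdir -p /home/public/.well-known/acme-challenge
    echo "TOKEN-VALUE-FROM-THE-TOOL" \
      > /home/public/.well-known/acme-challenge/FILENAME-FROM-THE-TOOL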

Maybe this will help someone else.

It’s possible that I could have used the --manual option on the NFS server itself, and avoided the need to transfer files. Not sure. If anyone else has done this, I’d like to know. I will need to renew my certs every couple months.

I’m really pleased about the letsencrypt service. I hope it gets used widely.

Update, 2017 December 7: I’ve updated my certs 3 or 4 times since I made this post. Now, this is what I do:

   sudo certbot certonly  \
     --authenticator manual  \
     --domain www.dinochiesa.net \
     --domain dinochiesa.net \
     --email dpchiesa@hotmail.com \
     --rsa-key-size 4096

I’ve automated the other parts: creating the right endpoints on the NFS server, and then copying over the generated certs when they’re issued. Also, NFS no longer requires a service ticket; it automatically installs certs when I update them. The change takes a minute or less. Super easy.
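
The copy step is just an scp of the PEM files that certbot leaves under /etc/letsencrypt/live. A sketch (the SSH account and hostname are placeholders; use your own NFS login):

    #!/bin/bash
    # copy freshly-renewed certs to the NFS server
    DOMAIN=dinochiesa.net
    LIVE="/etc/letsencrypt/live/${DOMAIN}"
    sudo scp "${LIVE}/fullchain.pem" "${LIVE}/privkey.pem" \
      USERNAME@ssh.phx.nearlyfreespeech.net:/home/protected/ssl/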

Use PHP code to make WordPress redirect to secure site

Lots of people use .htaccess redirect rules to force their WordPress sites to load with the secure option.

It looks something like this (the canonical mod_rewrite rules; adjust for your own setup):
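    # .htaccess: redirect plain-HTTP requests to the HTTPS site
    RewriteEngine On
    RewriteCond %{HTTPS} off
    RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]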

But if you have a hosting provider that does not give you the ability to modify the .htaccess file, that won’t work. These providers typically put your server behind their load balancer, which means the WordPress code sometimes cannot directly infer whether HTTPS is in use. In other words, $_SERVER['HTTPS'] is not correct.

It is possible to introduce code into your theme that will do what you need. Here is a sketch of the PHP; it falls back to the X-Forwarded-Proto header, which most load balancers set, so verify what yours actually sends:
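    <?php
    // A sketch, not gospel: behind a load balancer, $_SERVER['HTTPS'] may be
    // unset or wrong, so also check X-Forwarded-Proto.
    function maybe_redirect_to_ssl_site() {
      $is_https =
        (!empty($_SERVER['HTTPS']) && $_SERVER['HTTPS'] !== 'off') ||
        (isset($_SERVER['HTTP_X_FORWARDED_PROTO']) &&
         strtolower($_SERVER['HTTP_X_FORWARDED_PROTO']) === 'https');
      if (!$is_https) {
        header('Location: https://' . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI'],
               true, 301);
        exit;
      }
    }
    ?>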

Insert that in your theme’s header.php file, or maybe the functions.php file. Invoke the maybe_redirect_to_ssl_site() function in the theme header, before emitting any HTML.

Mac OSX users: update openssl. Also: the openssl built in to Mac OSX is different from the brew version

If you use openssl on Mac OSX to maintain certs, etc., you should keep it up to date.

Worth knowing, courtesy of a comment by Gordon Davisson on this Stack Overflow question:

…the major problem isn’t the openssl command, it’s the openssl libraries (which are used by other programs) — those aren’t API compatible between versions 0.9.x and 1.0.x, so you do not want to update the system-supplied openssl libraries!

Here’s how you get the latest openssl from brew. First, make sure you have brew installed and updated. Per brew.sh, the install one-liner (as of this writing; check the site for the current incantation) is:
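    ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"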

If you already have brew installed, the output of that command will tell you.

OK, at this point you have brew installed. Then, update brew and install the latest openssl:
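    brew update
    brew install openssl    # or "brew upgrade openssl" if a brewed copy is already present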

If your brew is somehow broken, this command will give you lame messages. To fix, you can try “brew doctor”. (Once I resorted to simply re-installing brew. I ran the “brew install” command, which said “brew is already installed”, and then told me how to uninstall. I uninstalled, then ran the brew install again.)

Be aware that /usr/bin will probably be first on your path, so if you want to use the latest openssl you will have to explicitly request it with the fully qualified path name.
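
For example (this is where brew typically puts it; confirm with "brew info openssl"):

    /usr/local/opt/openssl/bin/openssl version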

And also be aware that brew does not yet have openssl v1.0.2f, which includes the fix for CVE-2016-0701.

Pre-request script for Postman, to calculate HttpSignature

If you do REST, you probably have a favorite REST client testing tool.
Mine is Postman, a Google Chrome app.

Postman has a nifty feature called Pre-request scripts, which allows you to write some Javascript code that performs a calculation and then reads and writes the “environment” object for the request. This means you can calculate … hashes or digests or signatures or anything you like.

Here’s an example of a script that calculates an HMAC-SHA256 HttpSignature, using the keyId and secret-key embedded in the environment. It also computes a digest on the message payload. Postman helpfully includes CryptoJS in the JS sandbox, so it’s easy to do HMAC and SHA.

In this particular case, the HttpSignature verification on the server requires 2 headers (date and digest) plus the ‘(request-target)’ value. The digest is a SHA-256 of the payload, which is then base64 encoded.
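
Here’s a sketch. It assumes the classic Postman sandbox (CryptoJS, the request object, and postman.setEnvironmentVariable are available) and environment entries named keyId and secret-key; adjust the names to match your setup:

    // Postman pre-request script: compute a payload digest and an
    // HMAC-SHA256 HttpSignature over (request-target), date, and digest.
    var curDate = new Date().toUTCString();
    // assumes a raw (string) request body
    var digest = CryptoJS.SHA256(request.data || '').toString(CryptoJS.enc.Base64);
    // (request-target) wants the method and the path+query, not the full URL
    var path = request.url.replace(/^https?:\/\/[^\/]+/, '');
    var signingString = '(request-target): ' + request.method.toLowerCase() + ' ' + path + '\n' +
        'date: ' + curDate + '\n' +
        'digest: SHA-256=' + digest;
    var signature = CryptoJS.HmacSHA256(signingString, environment['secret-key'])
        .toString(CryptoJS.enc.Base64);
    postman.setEnvironmentVariable('request-date', curDate);
    postman.setEnvironmentVariable('digest', 'SHA-256=' + digest);
    postman.setEnvironmentVariable('http-signature',
        'keyId="' + environment.keyId + '",algorithm="hmac-sha256",' +
        'headers="(request-target) date digest",signature="' + signature + '"');

Then, on the request itself, set the Date header to {{request-date}}, the Digest header to {{digest}}, and deliver {{http-signature}} in whatever header your server expects (Signature, or Authorization: Signature …).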

Anyone can start with this and modify it to do other variations.
Good luck!

Addendum

I should have mentioned this: Postman, even the latest Chrome app version, uses XMLHttpRequest to send out requests. XHR is purposely limited by its specification: some headers cannot be set explicitly on outbound requests. The list of restricted headers includes Origin, Date, Cookie, Via, and others. The reason for this restriction is that the user-agent should be fully in control of such request headers.

My example above uses an HttpSignature that signs the Date header. This means the code must also SET the Date header to a known value; in my case I am using a value generated by the pre-request script.

[Screenshot: setting the Date and other headers in Postman]

The value corresponds to “now”, the current moment. But the point is the value has to be absolutely known.

This is not possible in standard Postman, because of the XHR restriction. The effect you will see is that the intended Date header silently does not get sent with the request.

This may confound you if you don’t have the ability to trace the request on the receiving (server) side. In the context of a request that uses HttpSignature, the server will throw an error saying “Missing Date header”.

But! In Postman v0.9.6 and above, it is possible to configure Postman with something the Postman people call the Interceptor plugin, which allows the lifting of this restriction. The Date header gets sent, and everything works.

If you don’t want to rely on the Interceptor plugin, and you want the HttpSignature to include the date value, then you’ll have to use a differently named header to hold the date. Use X-Date or anything other than “Date”. You need to change the client as well as the server of course, to make everything hold together.

Online calculator for SHA and HMAC-SHA

Here’s a thing I built. It’s just a webpage that calculates SHA-(1,224,256,384,512) and HMAC with the same algorithms.

I was using this to help with building a system that relies on HttpSignature. Developers need some help in constructing and validating their HMACs and SHAs.

The spec formerly known as Swagger is now OpenAPI


Swagger has been renamed! Three weeks ago. I didn’t realize this, and (forgive me) I’ve been continuing to use the term “swagger” when I really should have been using “OpenAPI”, in the time since.


Helpfully, Marsh, an esteemed colleague of mine, has produced a slackbot to remind me to use the word “OpenAPI” every time I type the… uh… old word… in Slack chats. Now, I just need that slackbot to follow me around and remind me every time I *say* the old word.

There’s a new group, the OpenAPI Initiative, whose members include IBM, Google, Apigee, Intuit, Microsoft, PayPal… these members will govern the evolution of the spec.

REST Assured (hahahaha! ya get it?) that Apigee will be building some nice innovations on top of the OpenAPI spec. Exciting things coming soon. You can already see the beginnings at apistudio.io.


It’s not difficult to imagine some interesting possible paths forward, from that tooling.

And, omigosh, I just realized that I haven’t posted an article here in about 6 months! Wow I must have been busy…

RESTful is hardly harmful.

A provocative essay came up on Hacker News today, entitled RESTful considered harmful.

The summary of the essay:

  • JSON is bloated in comparison to protobufs and similar binary protocols
  • There are no interface contracts or data schema
  • HATEOAS doesn’t work
  • No direct support for batching, paging, sorting, etc. – e.g., no SQL semantics
  • CRUD is too limited
  • No, really, CRUD is too limited
  • HTTP Status codes don’t naturally map to business semantics
  • there’s no queueing, or asynchrony
  • There are no standards
  • Backward compatibility is hard

Let’s have a look at the validity of these concerns.

1. JSON is bloated in comparison to protobufs

The essay cites “one tremendous advantage of JSON”: human readability, and then completely discounts this advantage by saying that it’s bloated. It really is a tremendous advantage, which is why XML won over MQ’s binary protocol and the XDR from Sun RPC, and the NDR from DCE RPC, and every other frigging binary protocol. And readability is why JSON displaced XML.

Ask yourself this: what is the value of readability versus the performance advantages of the alternatives, like Thrift or protobufs? Is readability worth 1x as much as the improved efficiency you might get with protobufs? 2x? I believe that for many people, it’s worth 100x. It trumps all other concerns. For uber-experts, it’s deceptively attractive to wave away the advantage of human-readability. For the rest of the world, for 97% of developers, it’s a huge, Huge, HUGE advantage. For high-speed financial trades, JSON is wrong. For Google’s internal interfaces, wrong. For most of the world, RIGHT.

AND, as the essay notes, REST doesn’t prescribe JSON. Or XML. Or anything. There’s a content-type header, and clients and servers can negotiate it. If the client says Accept: application/x-protobuf, and the server can send it, bliss for you. So this point – “JSON is bloated” – is not only invalid in the first place, it’s also not an argument against REST.
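
For example, a client that would rather have protobuf than JSON just asks for it (hypothetical endpoint):

    GET /orders/123 HTTP/1.1
    Host: api.example.com
    Accept: application/x-protobuf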

2. There are no interface contracts or data schema

This is a feature. OMG, have we not tried this enough times? Did this guy skip his “History of IDL compilers” course in the Computer History department at school? Sun RPC IDL. DCE RPC IDL. Corba IDL. WSDL, ferpeetsake! XML Schema!!

It’s pretty straightforward to deliver plain-old-XML over HTTP, which is quite RESTful. More popular is JSON-over-HTTP. Both of those have schema languages. Few people embrace them, though. Why? Because IDLs and schema languages are too much structure, and they handcuff people more than they help them. We have, fortunately, learned from the past. There are more tools coming in this area, for those who wish to embrace them. See apistudio.io.

3. HATEOAS doesn’t work

Mmmmm, yep. No argument here. In my experience, nobody really uses this in practice. Pragmatic REST is what people do, and it generally does not use HATEOAS.

4. no SQL semantics

Uhhuh, true. This has been addressed with things like OData. If you want SQL Semantics, seek solutions, don’t just complain.

5. CRUD is too limited

Really? This is a problem? That you might need a switch statement in your code to handle different types of events? Really?

6. CRUD is really too limited

….

Mmmmm, sorry. I have to stop now. I’m completely bored of responding to this essay by now. Except for one more:

10. Backward compatibility is hard

This has NOTHING to do with REST. This is just true. Back compat in any interface is tricky.


In summary, I don’t find any of the arguments compelling.

Let me draw an analogy. The position in this essay is like saying “Oil is no good as a transportation fuel.” Now, oil has its drawbacks! Oil is dirty. We can imagine alternatives that are better in theory. Even today, in specific local situations (daily use, short trips, urban travel), electric cars are better, MUCH better, than fossil-fuel-based cars. (And bicycles are even better than electric cars.) But gasoline-powered cars deliver massive utility to billions of people. Gasoline refueling stations are everywhere. The delivery system for gasoline is mature and redundant. The world RUNS, very effectively, on gasoline-powered transport, by and large. Objectively, oil is VERY GOOD as a transportation fuel.

Sure, we’ll evolve better approaches in the future. That’s great. And sure, we can imagine a world with electric-powered vehicles. But today, in the world of reality, Oil wins.

And likewise Pragmatic REST, HTTP, JSON, and schema-less interfaces are winning. We’ll evolve better approaches. But today, This platform wins.

HTTP, HTML, Javascript, and JSON are ubiquitous, are the foundation of the web, and are not going anywhere. Any architect is free to choose other options, and they might have good reasons for doing so. On the other hand, the vast majority of installations won’t benefit from using protobufs or Thrift, or some non-HTTP protocol. Pragmatic REST, JSON, and HTTP are very, very safe choices in the vast majority of scenarios.

Cheers

Chrysler is Internet-enabling your car as a way to accelerate death

From the holy-shit-how-did-they-not-test-this department, Fox News tells us that it is possible for hackers to seize control of a moving Chrysler automobile: fiddle with the radio, turn on the windshield wipers, or, more ominously, control the transmission and the brakes. Considering the source (Fox Newsertainment), I am unsure whether to believe this. But there is also a piece on Wired. If true, seriously, Holy Shit.

Yes, APIs are everywhere.

Here’s an idea for the API team at Chrysler that has made the driveline remotely programmable – you guys should talk to the security team at Chrysler.

Update: Chrysler is recalling 1.4 million cars over this.

Sane names for screenshots on Mac OSX

Back when I was a Windows user and a .NET developer, I used a tool called Cropper to grab screen shots. That tool is great alone, and there are plugins for various photo destinations. I wrote and maintained a few plugins.

The basic capability to grab a portion of the screen into an image file is built-in to OSX, which is nice. I learned about it here.

I use the Command + Shift + 4 sequence daily to grab interesting bits of the screen, for bug reports, demonstrations, illustrations, sharing information with friends, posting to Twitter, and so on.

But I miss the flexibility of Cropper. In particular, WTF is with the filenames, OSX?

OS X saves each screenshot with the name Screen shot [date] at [time]. As an example, a screenshot taken on July 9th, at 7:21 AM will be saved as Screen shot 2015-07-09 at 7:21 AM.png

Ugly! Lots of people ask how to change the filenames, and the standard answer isn’t quite satisfactory for most people. I use the terminal often, in addition to dired mode in emacs; because I fiddle around with files and directories outside of Finder, I’d like filenames that:

  • allow lexicographic sort order to also deliver a time-based sort
  • do not include spaces in the name

Not so much to ask, eh?

But the basic options to configure the names of the files are really poor. Basically I can move around the date and time portions of the file name, but I cannot change their formats to something more like ISO-8601, which is sortable. I don’t need strict ISO-8601, I just want something sortable.

The way I did it: I created a script that launchd runs for me; it checks the Desktop folder for screenshot files with the ugly names, and then renames them appropriately.

You need two files to make this happen. The first is the plist for the LaunchAgent. Create a file in /Users/YOURSELF/Library/LaunchAgents; I called mine local.screenshot.fixup.plist. The contents should look something like this (a sketch; the Program path is a placeholder for wherever you put the script in the next step):
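    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
      "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <plist version="1.0">
      <dict>
        <key>Label</key>
        <string>local.screenshot.fixup</string>
        <!-- point Program at wherever you put the rename script below -->
        <key>Program</key>
        <string>/Users/YOURSELF/bin/rename-screenshots.sh</string>
        <!-- WatchPaths fires the job whenever the Desktop folder changes -->
        <key>WatchPaths</key>
        <array>
          <string>/Users/YOURSELF/Desktop</string>
        </array>
      </dict>
    </plist>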

The second is the bash script that renames the files. Put this file anywhere (but you must reference that location as the Program string in the plist file above), and chmod it to be executable. The contents are something like this (a sketch; it keys off each file’s modification time rather than parsing the ugly name, which sidesteps locale and format differences):
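    #!/bin/bash
    # Rename "Screen shot ... .png" files on the Desktop to sortable,
    # space-free names like screenshot-20150709-072132.png.
    cd "$HOME/Desktop" || exit 1
    shopt -s nullglob
    for f in "Screen Shot"*.png "Screen shot"*.png; do
      stamp=$(stat -f "%Sm" -t "%Y%m%d-%H%M%S" "$f")   # BSD stat, as on OSX
      target="screenshot-${stamp}.png"
      [ -e "$target" ] || mv "$f" "$target"            # don't clobber
    done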

After creating these files, you can either:

  • logout and log back in…
  • OR, run this command from the terminal:
    launchctl load ~/Library/LaunchAgents/local.screenshot.fixup.plist

… in order to get the new launchd to start renaming files.

When the screenshot is initially saved, it will have the original ugly name. But in one or two seconds, the watcher will run, find your screenshot file, and rename it.

Nice.

More info: