Webdev, like many other teams at Mozilla, creates awesome web services. I think we need to be creating even more services, so that we can isolate certain responsibilities and reuse them across our community and internal tools.
One missing piece is a consistent way to secure our web services.
This API allows you to lock down your website based on whether someone is a vouched Mozillian. You can also check whether they are in a group such as “staff”. Awesome stuff!
How does one get to use this API? How is it secured? The API uses a very simple authentication scheme.
You register an API key for your application
You send your App Name and App Key in your API call
You make your web service calls over HTTPS
Essentially, we’re sending the password over SSL. We can do better!
I try not to promote Golden Hammers, but Eran Hammer’s new protocol seems quite appropriate.
Eran has created an authentication system called Hawk that is built exactly for this use case. Eran led the OAuth specification and is now developing a couple of new authentication systems: Hawk and (the still-under-development) Oz.
Ben Adida suggested using Hawk in a recent bug related to PeterB‘s MozLDAP web service.
So, what is Hawk?
According to the README, Hawk is an HTTP authentication scheme using a message authentication code (MAC) algorithm to provide partial HTTP request cryptographic verification.
So Hawk is a standard way for clients to sign their requests and verify that the response is from the intended server. It gives servers a standard way to verify that a request is authentic and to digitally sign its response.
Much like the Mozillians API, there is an App ID and an App Secret, but the secret is never sent over the wire. This makes Hawk much better than HTTP Basic auth or an ad hoc method where the secret is exposed.
What’s this “partial” in “partial HTTP request cryptographic verification” business? Only certain aspects of the HTTP request and response are used in the signing process, which keeps it compatible with HTTP proxies and the real-world internet.
The cryptographic verification includes the HTTP method, request URI, host, and optionally the request payload.
Signing involves calculating a request MAC value. There are libraries that hide the complexities of signing and verifying requests.
In addition to never sending the secret, Hawk gives us more defense-in-depth features we can use, such as timestamps and nonces to protect against replay attacks.
Inspired by Indie Web Camp as well as the Unhosted project, I now host my own Persona identity. I believe the Persona protocol will be an important building block for the federated social web.
What is Persona?
Persona is a decentralized authentication system that competes with Facebook Connect, Twitter sign-in, etc. It’s being built by Mozilla, but works out of the box across devices and browsers.
Persona hopes to get email providers, universities, companies, and other entities to implement the BrowserID protocol. I’ve implemented this for myself, so any email address on the ozten.com domain will use my service for authentication. All ozten.com users would basically include me and my cat.
What does it look like?
In this video, I’m logging into The Times Crossword page via Persona using my own service. It shows my avatar and I can type in my password. Avatars are provided by Libravatar and it’s the same avatar I use across the web. Sensing a theme here?
How does Persona and my self-hosted service look across the web? In the next video I log in to my self-hosted blog’s admin page, again to the Times Crossword, and several other sites… Notice I don’t have to re-enter my password. I can just pick an active identity and go.
On the third site, I click "this is not me" to show my hostedpersona login screen again. Then I log in to several other sites with various identities. For example, on Bugzilla I use a Gmail account to manage bugs, so email@example.com wouldn’t be a good identity there.
Wow… so easy to reuse my self hosted identity across all these sites. Once I’m logged in, the Persona UX manages my session, so I’m not bugged for a password.
I challenge the organizers of next year’s IndieWebCamp:
Offer Persona Log In as a way to authenticate to http://indiewebcamp.com
(I’m happy to help)
Wanna own your own Persona?
Definitely weigh the options… self-hosting an important identity (I’ve used firstname.lastname@example.org for 15 years) is kind of dangerous (especially for me as Mozilla web properties use it) and will become more risky as Persona gains adoption across the web.
Wanna hack without too much risk? Register a new domain name instead of using your actual personal domain. Okay, enough parenting…
So you want to run your own service… what is required?
Dynamic server side code for provisioning and authentication
To get an SSL cert I needed a static IP. Getting a static IP was one click and $3.95 per month.
Getting an SSL cert was one click, 12 hours, and $15 a year from my website hosting provider. You can do it cheaper and better, but I wanted to see what this would be like for novice netizens. (Okay, and I’m lazy)
My personal website is mostly PHP and I didn’t want to re-invent the Persona crypto wheel. I’ve been writing more Node code lately, so I decided to:
Write a new Node based service
Host it on ec2
Reuse Mozilla’s browserid-certifier server to do all the crypto
The BrowserID protocol enables a domain to delegate authority to another domain. I had ozten.com delegate authority to hostedpersona.me.
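Delegation is declared in a support document served from `/.well-known/browserid` on the delegating domain. For ozten.com, it looks roughly like this:

```json
{
  "authority": "hostedpersona.me"
}
```

With that one file in place, browsers following the BrowserID protocol go ask hostedpersona.me for the actual provisioning and authentication pages.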
Starting projects at 4:30am can lead to lame names… which is how I registered hostedpersona.me. I picked this name because I might host friends’ identities too, if they are dumb enough to trust them to me, since this is only a side project.
Level of Difficulty
Implementing an Identity Provider service is much harder than adding Persona Log In to your website. If you haven’t played with that… do that first!
You are definitely better off trusting Mozilla or your email provider to manage your account’s security. Just say’n.
If you do want to do this… I think it’s about 10x as hard as adding Persona Log In to a website. Be sure to check out these tips.
So there you go: Persona has allowed me to have an easy-to-use, portable, self-hosted identity on the web. I think Persona is going to be a key ingredient of the federated social web.
If Mario’s session with LinkedIn.com has timed out, he is prompted to log in in a new window. This is the same login screen he sees every day
LinkedIn asks Mario if he wants to expose his contacts
Mario clicks yes
Gmail receives rich contact data for Al and Alice, including a picture, degrees of separation from Mario, etc
Gmail uses this rich data to improve their contact picker
Mario clicks Alice and finishes writing his email
IUSEA – Intentional User Service Exposure Architecture
If you’re building a web application or service, you should continue to provide REST APIs for public data, but user data should be exposed via IUSEA where possible.
LinkedIn’s contacts_include.js defines an autocompleteContact function. How is it implemented?
It opens a hidden iframe to https://linkedin.com and uses a postMessage-based protocol to ask for contacts whose first name, last name, or email starts with ‘Al’.
LinkedIn.com hosts the page that is the iframe’s target. It authenticates the user, makes sure the user wants to expose this information, and delivers the data back to Gmail via postMessage.
This pattern is ready today and works across Browsers, OS, Desktop, and Mobile. Mozilla’s jschannels library is an easy way to rock this, even on IE.
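To make the flow concrete, here is a sketch of the LinkedIn side of such a protocol. The message format, function names, and contact data are my illustrative assumptions, not LinkedIn’s actual code; in a browser, `handleMessage` would be wired up via `window.addEventListener('message', …)` inside the page loaded in the hidden iframe.

```javascript
// Sketch of the provider (LinkedIn) side of a postMessage contact API.
// All names and data here are hypothetical, for illustration only.

const TRUSTED_ORIGINS = ['https://mail.google.com'];

const contacts = [
  { first: 'Al',    last: 'Capone', email: 'al@example.com' },
  { first: 'Alice', last: 'Smith',  email: 'alice@example.com' },
  { first: 'Bob',   last: 'Jones',  email: 'bob@example.com' },
];

// Core of autocompleteContact: prefix-match on first, last, or email.
function matchContacts(prefix) {
  const p = prefix.toLowerCase();
  return contacts.filter(c =>
    c.first.toLowerCase().startsWith(p) ||
    c.last.toLowerCase().startsWith(p) ||
    c.email.toLowerCase().startsWith(p));
}

// What the iframe's message handler would do with an incoming event.
function handleMessage(origin, data) {
  // Never answer a window we don't trust.
  if (!TRUSTED_ORIGINS.includes(origin)) return null;
  if (data.method !== 'autocompleteContact') return null;
  // (A real implementation also checks that the user is authenticated
  // and has consented to expose contacts.)
  return { method: 'autocompleteContact', results: matchContacts(data.prefix) };
}
```

The origin check is the crucial step: only windows the provider trusts, with the user logged in and consenting, ever see contact data.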
Isn’t this just OAuth?
As a developer, you’re thinking: “Congrats, you’ve re-invented the OAuth wheel.”
OAuth is about server to server data sharing:
Value in this diagram means value for the user and the service provider, such as a user’s contacts.
IUSEA puts the user back into the center. Value resides in each server but also is controlled by the user and flows through the user at their discretion.
Did you notice that all of that happened in the browser? Technically, Gmail doesn’t need to upload this new LinkedIn data to their servers, since all the user benefit was provided directly in the Gmail web UI.
Going one step further, LinkedIn could provide this API with a TOS that forbids Gmail from importing this user data… wow.
Google can fulfill its mission to organize the world’s information and make it universally accessible and useful. LinkedIn can fulfill its mission of connecting the world’s professionals to make them more productive and successful. The user wins as they gain more control of their data, while being able to reap the benefits across the web.
IUSEA (a term I made up to help explain this concept) is the secret sauce we’re applying liberally, and you can start using this pattern in your products.
IUSEA and U should too’a.
Disclaimers, Image Credits
Nintendo, Google, and LinkedIn don’t have anything to do with each other and the example is hypothetical.
What changed between before and after BrowserID? The authentication mechanism. What usually doesn’t change? The authorization and business logic of an application. Both diagrams have a "More stuff happens" section, which means the goodness your app does and the permissions your users have usually just continue to work.
We do not have to re-write our internal apps to take advantage of BrowserID.
sasl-browserid is getting a thorough review from the platform security team. In particular, David Chan has been doing a kick-ass job of finding flaws and ways to improve it.
Once it is ready, we’ll deploy it to development stage servers (next week or two).
But which app should we start with?
Our MoCo LDAP powers many, many Mozilla tools. This is beyond just web applications and bleeds into Desktop, command-line, and other uses; Reed’s biological functions are literally regulated by MoCo LDAP. It’s got both critical and sensitive information, so we have to be careful, but luckily we’ve also recently launched…
Mozillians.org also has an LDAP server backend. It has a lower risk profile, so we’ll deploy to it first. It’s lower risk for a bunch of reasons, including:
Smaller, newer, fresher
If it falls over, it won’t block Firefox development or other critical Mozilla processes
It has a dedicated team to monitor, diagnose, and fix any issues sasl-browserid introduces
The current plan is to BrowserID-enable mozillians.org in its 1.2 release (3-5 weeks out). This will give us experience with sasl-browserid. Once it has proven itself production ready…
We’ll add the plugin to MoCo LDAP. We’ll update the internal phonebook’s PHP code. We’ll test, deploy, wait and see.
If that goes well, then all the other LDAP backed webapps can follow.
Ladies and gentlemen… I use the acronym LDAP with all due respect for the Fear and Loathing I’m sure it provokes in each and every one of you…
We’ve made an organizational commitment (training, hiring, etc) to improving our ability to maintain our aging LDAP infrastructure. Jabba and Corey have done a top-notch job of revamping the MoCo LDAP, so it’s a sustainable sub-system.
We can adopt BrowserID without throwing away MoCo’s LDAP backend by utilizing sasl-browserid.
This will not eliminate passwords from LDAP. Someone figure that part out, please.
This plugin doesn’t automagically convert your webapp to BrowserID. Some additional coding is required
Technically, this is not Single Sign On, but has some of the same useful properties
Yes, I’m happy to help you design/setup/whatever your BrowserID solution. Ping me in #identity.
Yes, this plugin might be reusable by large organizations such as schools, businesses, and pillow fight flash mobs. Please spread the word.
Technically sasl-browserid should work with any Cyrus SASL enabled client or server. OpenLDAP and Postfix are two known servers, are there more? Clients?
Let’s figure out what works well and where we can improve the BrowserID project. If we can do LDAP, we can do anything
* Some programming languages lack whoami bindings; a search can be used to emulate this check.
Since Google released the Dart spec today, I wanted to share my speculation on the reason Dart exists: Dart is a new technology to solve a cross-organization people problem among engineers, internal to Google.
This is not surprising. Google is a massive, incredible engineering organization. They are creating cutting edge solutions to some of the world’s hardest problems. I would expect them to have huge people challenges; sudo herd cats still doesn’t work.
Classic engineering problems
Cost of shipping a project, etc.
Standards (Code Quality, Security, etc)
A classic mistake is to solve these things with technology first, before solving the actual people-oriented problems. Technology can help enforce standards and make code reuse easier, but tech alone will fail.
As I daydreamed, I tried to imagine problems that could result in Dart as a (flawed) solution…
Perhaps Google spun up 6 teams to investigate and solve 6 problems (Why isn’t project X using GWT? Why did project Y fail? Why did project Z take 6 extra months and 30% more $$$ to ship?).
Being an awesome company, these postmortems may have bubbled up into a more core question: How can we solve this once and for all?
One of many possible analyses:
An example of flawed logic that could lead to project Dart:
We couldn’t get everyone on board with GWT, so
We employ some key pioneers of enterprise languages, so
Google needs a ‘programming in the large’ friendly platform like Java was for the last decade.
The end goal is enticing:
Lower power usage and data center costs, be greener
Ramp up new engineers quickly
All projects reusing standard language and APIs
Lower cost of engineers switching teams
Avoid legally tainted platforms (Java)
Provide puppies and rainbows for all
Some of these are real issues. Some of these are nice-to-haves. Technology alone isn’t the right solution to all of these problems.
Some, definitely not all, but some people at Google have this perspective:
Google hires all the smartest people of the world. We will figure out how to solve the world’s problems; you will live in our resulting products. You’re welcome.
Sometimes being the smartest person can blind oneself to the actual root problem. Having an engineering bent causes one to look at fixing tools instead of process.
In 5 years, various hypothetical teams across my imaginary Google will have drifted back into a Dart-based Babylon, if hypothetical Google doesn’t fix the core communication and standardization issues which may have prompted Dart’s creation.
These are not fun decisions to make or communicate, but that is the point of project and engineering management.
Creating Dart is expensive (language, tools, runtime, etc), but may give them the social framework to communicate people oriented engineering changes (employees will all use Dart) that address the real core issues. So ultimately Dart might not be a loss for Google internally.
Dart alone will do nothing to solve the root problems that prompted Dart.
Again, this post is based on a hypothetical situation internal to Google. I’ve pieced together this daydream from my own “Enterprise” experiences, various Google posts over the years, etc., to explain why Google would create the Dart platform.
I hope I haven’t offended any of my friends at Google and my speculation is probably wildly inaccurate. If nothing else this post is a parable to talk about solving people problems directly before building tools.
But the real world announcement has some real world impact…
Collateral Damage from the Dart Launch
An unintended consequence of the Dart launch is that it may weaken Google’s efficiency in promoting web standards. It is a very confusing story to explain how promoting the Dart platform and working towards next generation, open web standards are not conflicting Google projects.
Example Dart versus CoffeeScript:
Early adopters in the webdev community don’t see Dart as more attractive than CoffeeScript, which they see as having a superior syntax. Since both must compile down to JS, they are equally viable for experimenting with language semantics. So at first blush, Dart is a meh to CoffeeScript commentators on Hacker News and Reddit.
I’m probably wrong, as I don’t have much insight into Google’s engineering organizations, but it is quite possible that Dart is a facile throw at the wrong target.