How Do We Make Encryption More Accessible?

This post is about a program I wrote to publish public keys in gravatar images. You can try it (if you’ve got a gravatar account) here or look at the source here.

250 Hello, I am glad to meet you

When I was at university, discovering that I could easily have a conversation with the computer that managed mail was great fun. There were rules for our conversation of course, but they all seemed very civilized; I had to start by saying HELO, and it would give me some equally polite response. After that though, it quickly became obvious that it was a little too trusting. If I said that I was giving it mail from some lecturer, or even from some cute girl that my friend liked, and that the mail was for my friend, it would just say Ok, Bye and then go ahead and deliver it.
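A session with one of those mail servers went something along these lines (the hostnames and addresses here are invented, and the responses paraphrased from memory):

```
$ telnet mailhost 25
220 mailhost ESMTP ready
HELO my-machine
250 mailhost Hello my-machine, pleased to meet you
MAIL FROM:<lecturer@university.example>
250 Ok
RCPT TO:<friend@university.example>
250 Ok
DATA
354 End data with <CR><LF>.<CR><LF>
Subject: About your last assignment...

(anything at all)
.
250 Ok: queued
QUIT
221 Bye
```

Note that nothing checks whether you are actually entitled to send mail as the address in MAIL FROM.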

It doesn’t take a genius to see that there’s quite a lot of potential for mischief there. Things have moved on a little since then, but probably less than you think. Email for most people is still not much more advanced than passing crumpled notes around a classroom. It has many of the same features – anyone on the path from the originator to the recipient can read it; the originator can pretend they never sent it if it turns out that their affection is not reciprocated; or, worse, it’s easy for someone to forge one and create all kinds of opportunities for comedy. I suppose the main difference is that instead of a classroom, we’re talking about the world, and instead of it being about whether Emma likes Richard or not (she did), it’s about every aspect of our lives, including money, job and family.

PRISM logo

This has been in people’s minds recently with news that various governments are logging every word, not just of email, but of everything we do online. This of course opens up even more potential for mischief. Ordinary individuals have already used the no-fly list as a way of punishing people they didn’t like, but given the resources of the NSA and others, a few well-placed forged emails might be easier than the phone calls, and less risky.

Anyway, I’ve got some great news. A bunch of guys have been working on this, and they’ve come up with a really clever system that makes it basically impossible for anyone but your intended recipients to read your messages. I was telling my Dad about it recently, and he found it hard to believe, but it’s just a bunch of maths, not even as complicated as you might expect, and it more or less completely solves this problem.

There is of course a catch. The catch is that this scheme was invented independently at least twice in the 1970s and predates the use of the word ‘internet’, yet for some reason normal people are still using systems that are much less deserving of trust than they appear.

One of the problems is ‘key management’. You usually use a pair of computer ‘keys’ to enable this kind of security, and you need a way to make one as available as possible and the other as secret as possible. This is not really within the realm of possibility for normal humans, who regularly lose their physical keys or ‘hide’ them in obvious places, despite the fact that those keys secure a majority of their worldly possessions. And a physical key is a simple metal token that can easily be carried everywhere with you, is always around when you need it, and can’t be copied quickly without you noticing (generally speaking). Computer keys are harder to change if stolen, and easier to lose. Basically, I shouldn’t be trusted with anything as precious as my own private key.

The next problem is that so few people are currently using any better system that it becomes difficult to find people to communicate with. This is probably partially because of the third problem:

The third problem is probably just software. There is no software that doesn’t make sending and receiving encrypted messages a massive pain. On top of that it’s hard (but not impossible) to enable important features like search on encrypted data.

I came up with a few ideas of my own to try to make this more reasonable. I wanted to use images with public keys embedded in them, so that when you see the image of someone in your address book, you also have the means to send them private messages, and possibly little seal images that hold your private key, so you can drag them onto data to sign it. I think a much more visual, drag-and-drop system stands a much better chance of getting people to use it. I also wanted to build on top of services that people already know about. Bitcoin is an encryption-based currency system that lots of people are interested in, and Gravatar provides a sort of directory from email addresses to images.

The perfect system I imagine is easy enough for people who aren’t very technical, builds on top of things like Bitcoin and Gravatar, and probably provides a mail service that can’t read your stored history of emails but still lets you search them quickly; it automatically encrypts messages to people, or alternatively sends them a link that enables them to retrieve the message over https. I didn’t create that, but I have created a proof of concept to show that some of these ideas are pretty doable. It’s still very rough and ready, so it’s not something for the non-technical yet, but I think it shows one approach we could try to bring private communications to everyone. You can try it (if you’ve got a gravatar account) here or look at the source here.

Dynamic Source Maps

I’ve been looking at source maps recently. The reason is that I want to be able to pass some code through a pipeline of transformations, but still maintain source maps so that you get the nice debugging behaviour.

The specification for source maps talks about VLQ as if it’s a well-known thing, and perhaps it is – apparently it’s part of the MIDI spec, and the Wikipedia page is entirely focussed on that implementation. I finally found a bit more detail from Peter van der Zee on exactly how it works and was able to create my own decoder.
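For the curious, the scheme is small: each base64 character carries five bits of payload plus a continuation bit, and bit 0 of the fully assembled number is the sign. A minimal decoder (a sketch of my understanding, not the one linked above) looks like this:

```javascript
const BASE64 =
  'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/';

// Decode a string of base64 VLQs into an array of signed integers.
function decodeVLQ(str) {
  const result = [];
  let value = 0;
  let shift = 0;
  for (const char of str) {
    const digit = BASE64.indexOf(char);
    value += (digit & 31) << shift; // low 5 bits carry payload
    if (digit & 32) {
      shift += 5;                   // bit 5 set: more digits follow
    } else {
      // bit 0 of the assembled number is the sign bit
      result.push(value & 1 ? -(value >>> 1) : value >>> 1);
      value = 0;
      shift = 0;
    }
  }
  return result;
}
```

So ‘AACA’ decodes to the four deltas 0, 0, 1, 0 – one ‘mappings’ segment of a source map.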

As part of that, I got into a discussion about whether it was possible to use dynamically generated source maps. The result of my investigation is on github here.

Our Quest to Recruit People Who Can Actually Code

It seems that everyone who has ever been involved in hiring programmers has been amazed at how many people there are out there who are unable to solve even basic problems with code. The most rigorous certifications and the most famous names in an employment history are no sure sign. Even an interview with a manager can often let through people who should not be allowed anywhere near an IDE. Given this, it’s amazing that some companies still base their programmer hiring on a CV plus an interview with a manager.

We’ve been working on our hiring process for a few years now, and while I’m sure it’s far from perfect, we’ve had some good outcomes.


The initial problem is that typically the first thing you see of a candidate is their CV. Now, I don’t want to give the wrong idea; CVs do sometimes have useful information on them, but I have found them to be very poor predictors of whether or not we want to hire someone. If you imagine having a default belief about whether you want to hire someone, an amazing CV doesn’t shift that belief much.

For this stage, I generally have a spreadsheet with the following headings:

  1. Academic qualification / from where (if any)
  2. Evidence that they are smart
  3. Evidence that they are enthusiastic
  4. Relevant experience in the technologies (beyond what is expected, or a note here if their experience is less than expected)
  5. How they did on the technical test we sent them
  6. General comments

I summarize the relevant information into the columns. Of course, this means that while a CV may strike me with its beauty, unless I think it’s relevant, it won’t make it into the spreadsheet and it’s unlikely to affect my decision.

Disappointingly, many CVs don’t give me much to go on for headings 2 and 3. I am absolutely interested in your blog (if you ever write about technical topics), your GitHub account, your Stack Overflow account, any side projects, and any changes you’ve submitted to significant projects that have been accepted (no matter how small), but CVs tend to be straightforward listings of previous work.

One part of the CV that I put absolutely no weight on at all is the ‘Skills’ section. A good candidate can be productive in an unfamiliar technology very quickly (especially given good guidance), while a poor candidate will not contribute much even with years of experience. But usually I don’t even get as far as worrying about that, because although the ‘skills’ section is not usually entirely dishonest, it is normally a grab bag of every acronym the candidate has so much as smelled. It speaks more about their desire to avoid being screened out by someone who doesn’t understand the job and makes decisions on keyword matches than about how much experience they actually have in an area.

Technical Test

Given that the CV is usually fairly useless in deciding whether to proceed, we need something else that is relatively low-cost to administer but gives us a bit more information. When we first receive a CV, we set up a technical test, which involves us emailing the test to the candidate at a particular time; we expect a response within just over an hour. We have a few different tests depending on the seniority of the position being applied for. It’s difficult to get it all done in an hour, but that tests a candidate’s ability to prioritize (some questions are worth a lot more than others, a fact that is indicated on the test). We usually ask about agile, testing, data structures, design patterns, that kind of thing. The only question that you can immediately fail on is marked as such and is a simple programming problem. The rubric makes a point of allowing the candidate to challenge the questions. We’ve done our best to make the questions unambiguous and fair, but I’m immediately put off a place to work by stupid questions (e.g. ‘continue the sequence’), so making it clear that we don’t mind being challenged is an investment in keeping bright people interested.

When I’m marking the programming problem, I’m looking to weed out two kinds of candidates – the ones who can’t actually code at all and the ones who do not have the right instincts for a developer. Generally speaking, there is no way to tell whether someone can code without actually seeing them code. There are people with top-flight academic qualifications and people with 10 years of experience in industry (and stellar CVs) who can’t actually solve problems with code. If we can spot these people quickly and get them out of the process, that’s a massive win.

There are also people who have a moderate ability to solve problems with code, but do not have the right instincts. For example, if someone starts writing out a 100-line switch statement with no awareness that mechanically repeating steps is exactly what computers are for, that indicates someone who doesn’t get it, and while we don’t mind teaching technologies (especially to juniors or graduates), we don’t usually want to have to teach developers what programming is for.

Given that the test is not done in a controlled environment, we also need to make it, as far as possible, so that copying an answer from Google without understanding it is not a valuable way to approach it. There are various things you can do with the questions, but assuming the test was interesting enough, we also check that people have actually understood what they have written by asking them about it.

Phone Screen

From a technical point of view, we use the Phone Screen to validate the test and the CV. We talk to them about some of the decisions they made on the test, perhaps challenge them on weak areas to see their response and see if they recognize a better approach if it’s suggested. If they’ve got something publicly visible on their CV, I normally pull up a browser and ask them to take me through it.

This is also an opportunity to talk to them a bit about the role. People often apply for a job because they need a job, but ideally we want to hire someone who is also interested in the technical area, and is aware up front of the different responsibilities they will have to take on. We also want to find out a little about their career plans. Are we likely to be able to accommodate them within the company? Are they likely to clear off after a year or less? We’ve hired extremely valuable people who then didn’t stay very long because of travel, so if they do not plan to relocate and have an unusually long commute, that might be relevant too.

Pairing Interview

The next stage is to actually get them to come in and spend some time with us. We usually kick off with a 1 hour ‘pairing test’. Obviously it’s not true pairing, because the pair is evaluating you as well as helping you. Lots of people haven’t coded in front of other people before and it can be very difficult to think under pressure, so we try to be very non-confrontational and put people at their ease as much as possible. We choose pairing tests that provide opportunity to talk about decisions to see why people are doing what they are doing, and we encourage the pair to talk about their thoughts as they work. I usually don’t mind if the candidate has strong opinions that I disagree with – the fact that they care enough to have formed opinions counts in their favor, especially if the opinions are fact based and they show that they have considered a number of possibilities. People often find it difficult to get started, and freak out that they might not be approaching things in the optimal manner, but that isn’t so important; I’m always happy to start with something suboptimal and iterate, or talk through possibilities.

How a candidate uses the tools can also be quite informative. How good are they at searching for answers to their questions online? Do they use the IDE well (I try to give people the environment they’re used to)? If you suggest a better way of doing a task, do they use it later?


Presentation

A recent innovation for us is to get the candidate to give a 10-minute presentation on a subject of their choice. We instituted this partly because many of our developers will, at some point, have to work at a customer site and present to customers, but it’s a valuable skill even if they never leave the office. I’ve found this to be very useful so far and there have been a number of very interesting presentations. We generally get most of the people involved in the interview process to come along for it, or pull in anyone else we think might be interested.

I’m looking for people who can communicate and talk coherently to a group of people, express their enthusiasm for a topic, and answer any questions that come up.

Technical Interview

The technical interview is with two developers, at least one of them an Architect or Tech Lead. We give them free rein, but usually the candidate will be asked to talk about some of their recent projects, discuss the more theoretical side of things and solve a simple coding problem on a whiteboard. If they’ve indicated a particular interest or expertise in, say, MVC, they might be asked to discuss it and related patterns in depth.

For more senior roles we’re also looking for the extent to which they’ve been able to organize and lead teams, mentor others, take responsibility for getting things done and make process improvements.

Development Manager Interview

At this point, we usually have a good idea about what’s going to happen. If a candidate has failed miserably at the pairing test, we’ll send them home immediately, but assuming they got through to the Technical Interview, there’s a checkpoint with three possible results: send them home, we think we want them, or they’re completely amazing and we must have them.

If we’ve decided to send them home, I try to give as much feedback as possible so that they know what to work on. If we’ve decided that we absolutely must have them, the formal part of this interview is often cut short and we end up with something close to a sales pitch for the company and position.

Otherwise we go ahead, possibly with another code problem on the whiteboard and more general discussion of their background. The Development Manager will usually round this interview off by trying to get a feel for how likely they are to say yes, and when they could start if we do want to hire them, since that helps our planning. This usually involves working out what other interviews and offers they have, and whether they would accept an offer from us immediately or wait for results from other companies.

For an exceptional candidate we may be prepared to offer more money than planned, but often a budget will have been pre-agreed, and if we blow that budget, we need to get approval.

Finally, if everything is still looking good, I’ll walk the candidate around the office, pointing out some of the perks and try to give them a sense of how the teams work.

After I show them to the lift, I then walk around everyone who has been involved in the interview process and get more detailed feedback. A single strong negative from anyone is usually enough to reject a candidate. People are often nervous of giving strong opinions if they know that it might be acted on, so some form of synthesis of opinions is normally left to me and the Development Manager. We try to get back to the candidate or their agent the same or the next day.


I’ve come to the conclusion over the last few years that our industry has changed significantly. While I used to think of it as an engineering discipline where people would get qualifications that would prove they could do the job, it’s become increasingly obvious that there are no qualifications that can do that for software development. Indeed, having a certification on your CV isn’t always regarded as a good thing by interviewers, and while university degrees have value, they’re by no means a guarantee that the person in question can actually solve problems with code.

I think we’ve moved to a different place, more like art and design, where the portfolio is king. I often moan about how useless CVs are for making decisions – an interesting portfolio (a well-organised GitHub account is often enough) is a much stronger advert for a candidate than the most beautiful CV.

Modern Times: A New Clock

If you’re a regular reader, you might know that I occasionally play with different ways of representing time. I’ve come up with my latest version which I call UIT, and you can see it on github.

Clock Difficulties

Why do we need a new clock? Well, you could ask William Burke. On the 31st of October 1828, William Burke and his co-conspirators murdered Mary Docherty, the last victim in a line of 16 murders they’d committed in order to sell the bodies to the medical profession. When the law finally caught up with them, Burke and his partner, Helen McDougal, had already disposed of the body and agreed their story: Burke did indeed meet Mary Docherty in a pub and invited her back to stay with them, but she’d left before 7 and they had no idea where she’d gone. When the police questioned them separately, Burke insisted that she’d left at 7 in the morning, while Helen swore she’d left at 7 in the evening.

It can be difficult to get your story straight when you’re dealing with time, but Burke and McDougal got off lightly. In September 1999, three terrorists died at the same time in two different cities. Both groups were killed by the bombs they were transporting which were set to go off at 17:30. The confusion was simply that the bombs were set to go off according to Palestinian Daylight Savings Time while the guys driving the cars had made their plans on Israel Standard Time.

Even if you’ve never found yourself hanged or blown up over a clock confusion, if you’re anything like me you’ve still made plenty of mistakes with clock arithmetic and daylight savings time.


So why is the clock the way it is? Dividing a ‘day’ into 12 parts was something that the ancient Egyptians started. They liked to set their sundials up with a dawn and a dusk hour, and ten full daylight hours. Since it was based on the hours of sunlight, the length of an hour would change with the seasons. In 1500 BC, the Egyptians had already invented a portable shadow clock, the spiritual precursor to the modern watch.

The Egyptians were not alone in preferring to divide things into twelves. The Babylonians had a similar obsession inherited from the Sumerians. The Sumerians were one of a group of peoples who independently invented writing around 3500 BC. The Sumerians pretty much invented civilisation as we know it, but they had inherited the Wheel and a number of other important advances from a culture that was lost to climate change (the 5.9 kiloyear event), the Ubaid culture. Anyway, for the Sumerians, counting to 60 seems to have been as natural as counting to 10 is for us. One suggestion for how they may have done it is by using five fingers from one hand to point to one of the 12 finger joints on the other hand, which perhaps explains why they divided the whole day into 12 parts (each around 2 of our modern hours).

When you’re wondering when to get up, when to eat lunch and when to feed the animals, measuring time according to the amount of light left in the day is a good idea, but when you’re trying to formalise systems and work with people in different parts of the world it becomes awkward. The ancient Greeks decided that the Egyptian system of variable length daytime hours was no good, and started splitting the day into 24 equal length hours. This was quite a big change and was by no means universally accepted until the advent of mechanical clocks made it by far the easiest way to measure time.

But why have we kept such an ancient system? After all, the Babylonians tended to measure things based on the lengths of various body parts or the weight of a grain of barley. We don’t do that any more, but we seem to have carried their predilection for 60s into an age where it’s pretty inconvenient.

A New System

When you’re trying to come up with a new system, nature throws a couple of roadblocks in the way. There are two extremely useful natural time periods – the solar day, which is a nicely predictable cycle of light and dark, and the solar year, which is a larger cycle of weather and temperature. Unfortunately, in absolute terms both vary annoyingly, making them unsuitable for use in science and engineering, and even worse, the solar year is not an integer multiple of the day.

I think that while science and engineering may need a fixed-length period of time, perhaps based on a fraction of the speed of light, this is not particularly useful for day-to-day use. So I wanted to choose a humane system based on either the length of a year or the length of a day. Ultimately I chose the day, because I find it more useful to have lunch at the same time every day than to buy a winter coat at the same time every year.

Astronomical Clock Face

The obvious choice therefore is to measure everything in fractions of a solar day. I’d already hit on this by accident – if you create time units based on tens and hundreds, what you are actually doing is just naming the decimal places of the fraction-of-a-day number. The units that relate most closely to what we currently have are hour-equivalents which are tenths of a day, minute-equivalents which are hundredths of an hour-equivalent, and second-equivalents which are hundredths of a minute-equivalent. Interestingly, the second-equivalents come out fairly close to old-style seconds, and calculations become incredibly easy – 8:30 plus 5:80 becomes 1 day 4:10 (or 0.83 + 0.58 = 1.41).
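That arithmetic can be sketched in a few lines of JavaScript (the function name and output format are my own illustration):

```javascript
// Format a fraction of a day as "hour-equivalents:minute-equivalents",
// which is just reading off the first and next two decimal places.
function toDecimalClock(fractionOfDay) {
  const days = Math.floor(fractionOfDay);
  const rest = fractionOfDay - days;
  const h = Math.floor(rest * 10);                   // tenths of a day
  const m = Math.round((rest * 10 - h) * 100) % 100; // hundredths of an 'hour'
  return { days, clock: `${h}:${String(m).padStart(2, '0')}` };
}
```

With that, 0.83 + 0.58 comes out as 1 day and 4:10, exactly as above, with no sexagesimal carrying anywhere.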

But solving my difficulties with Dr. Kawashima’s Brain Training was not the extent of my ambition. I’ve also had to work extensively with people in distant timezones, and it can be very confusing to know when you’re actually going to have a meeting, so I wanted a time that was numerically the same everywhere. We could do this already – everyone could use GMT, for example – but that’s awkward, because it’s not obvious what time midnight or noon is where you are. Actually, it’s currently not obvious when noon is, because you probably don’t live at the exact center of your time zone. For example, Oxford time should be 5 minutes and 2 seconds behind Greenwich time, but with the old system this is too difficult to represent. These days we can calculate this kind of thing exactly, so my new clock requires access to your GPS coordinates. While the numbers stay the same for everyone, the clock face is rotated so that your local solar noon is always at the top and solar midnight at the bottom, with high accuracy. Another benefit of having the whole day be one complete rotation of the clock is that I can draw on the dark and light times based on your location, with sunrise and sunset marked every day.
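The Oxford offset is pure longitude arithmetic – mean solar time shifts by one full day per 360° of longitude. A sketch (Oxford’s longitude of roughly -1.26° is my assumption):

```javascript
// Offset of local mean solar time from Greenwich, given longitude in
// degrees (east positive, west negative): one full day per 360 degrees.
function solarOffsetSeconds(longitudeDegrees) {
  return longitudeDegrees * 86400 / 360; // 86400 seconds in a day
}
```

Plugging in roughly -1.26° for Oxford gives about -302 seconds: the 5 minutes and 2 seconds mentioned above.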

Of course, choosing the day for your IM meeting can be difficult too, since days start at different times for different people, so I’ve invented a new set of days, also base 10, going Nullday, Unday, Duoday, Triday, Quadday, Pentday, Hexday, Heptday, Octday, Nonday. These form a new 10-day week. Now, I don’t want to replace the 7-day work week (well, actually I do – I’d like 4 days on, 3 days off), so I’ve made sure my clock shows the old weekdays too, but when organising meetings, you should use the new weekdays. Instead of months, the weeks are simply numbered through the year, so the first day of the year is the zeroth Nullday, and the last day of most years is the 36th Quadday, although leap years will end on a 36th Pentday.
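The new weekday for any date is simple base-10 arithmetic on the day of the year; a sketch:

```javascript
const NEW_WEEKDAYS = [
  'Nullday', 'Unday', 'Duoday', 'Triday', 'Quadday',
  'Pentday', 'Hexday', 'Heptday', 'Octday', 'Nonday',
];

// Map a zero-based day of the year to its week number and new weekday.
function newWeekday(dayOfYear) {
  return {
    week: Math.floor(dayOfYear / 10),
    name: NEW_WEEKDAYS[dayOfYear % 10],
  };
}
```

So day 0 is the zeroth Nullday, day 15 is the first Pentday, and so on – no lookup tables or leap-week rules needed.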

Sadly, I can’t really fix the years, but once everyone has adopted this system of time, perhaps we can work on modifying the Earth’s orbit to make the years more sensible too.

In order to make the transition easy, I’ve made a canvas-based app that should run in your browser (even on mobile devices), which is available here. It can hook in to Google Calendar and draw your meetings on the clock face; it displays sunrise and sunset and shows noon at the top and midnight at the bottom based on your location. It even shows you legacy time too, so that you can still talk to any luddites you might know.

But I can’t be responsible for bad outcomes if you make arrangements to plant bombs or evade justice with anyone still using the old time system.


I’ve ported some C code designed to calculate the sun’s position into JavaScript. It’s all part of my plan to make a clock that shows the local sunrise and sunset times on its face.

Given a bewildering number of parameters (most of which you can just guess), it will tell you the time of sunrise, sunset and solar noon.

It also has a few methods for calculating time based on fractions of days, and the Julian Day (a commonly used system in astronomy).
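As an aside, the Julian Day number for a calendar date is easy to compute with the classic Fliegel–Van Flandern integer formula (a standard formula, not necessarily the one used in the ported code):

```javascript
// Julian Day Number for a Gregorian date (Fliegel & Van Flandern, 1968).
// All divisions truncate toward zero, as in the original Fortran.
function julianDayNumber(year, month, day) {
  const div = (a, b) => Math.trunc(a / b);
  const a = div(month - 14, 12);
  return div(1461 * (year + 4800 + a), 4)
       + div(367 * (month - 2 - 12 * a), 12)
       - div(3 * div(year + 4900 + a, 100), 4)
       + day - 32075;
}
```

For example, julianDayNumber(2000, 1, 1) gives 2451545, the Julian Day of the J2000 epoch.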

You can try it out here.

Money: Where it comes from and where it might go

Everyone eventually wonders why people use money when it doesn’t have any ‘intrinsic’ value. It can feel like a house of cards built over an abyss.

Of course the answer is that people want money because they think that other people want money. That means that the value of money is mainly speculative. You value it because you speculate that you’ll be able to offload it to someone else for something you actually want later.

It’s only mostly speculative though. There are two mechanisms by which it acquires a small amount of nonspeculative value which is then magnified by its usefulness.

Firstly, the government demands that individuals give back, in taxation, some of the money it has issued. That means that there will always be at least some demand for money. You probably know already that the government is going to demand some of that money from you this year, so you know you need some. That tiny kernel of certainty (death and taxes) is enough to bootstrap it as a medium of exchange, which means that there’s more and more you can do with these tokens, which means that people value them more and more.

If that isn’t enough, then the government has another trick up its sleeve – it will not lend the power of the justice system to enforce collection of a debt if the debtor has offered to pay in the government-issued tokens (that’s the meaning of legal tender). eBay used to do something similar with PayPal; they would insure transactions made with PayPal against fraud to a higher value than those made in any other way. By using government-issued money, you get to rely on extra government backing, and making sure you have a stock or income of government tokens is a way of protecting yourself against unreasonable demands by creditors.

It can go wrong though. The ability to trade, enforce debts and pay taxes in a currency is valuable in its own right. However, if a government is too weak, corrupt or economically pressed to guarantee the payment of debts; or is perceived to be increasing the supply of tokens so quickly that the value of the money you hold falls faster than it provides value; or if it simply becomes too difficult for people to get hold of the government tokens (perhaps they’ve all gone to Germany), then people will start to create their own tokens.

Nevertheless, much of what people think of as money with intrinsic value isn’t in a very different position. Why would a normal person want gold? What can they do with it? The only reason I would want gold is if I believed that other people in the future would want gold, which would allow me to swap it for something I really wanted. This is just as speculative as government-issued money – assuming the government provides those selfsame two guarantees with gold; without them, it could be even more speculative. The argument that it’s useful in electronics is valid, but only to the same extent that, if the worst comes to the worst, I can burn my government-issued banknotes for heat. Ultimately, there are few currencies that can’t be converted into the base currency of the universe: joules.

Anything near the bottom of Maslow’s hierarchy is much less speculative. I can eat a hamburger to get rid of my hunger. That’s real, nonspeculative value right there. There’ll be times in anyone’s life where they’ll take a mess of pottage now over a hypothetical fortune in gold later. Indeed, the ancient currency unit ‘shekel’ was originally a measure of barley (180 standard grains). The first metal currencies were actually tokens which represented stored actual grain (and could be converted back into it). Hygiene is an essential human need, so perhaps it’s not surprising that Tide detergent is being used as currency among drug dealers in the US.

But who determines the distribution of the government tokens in society? Well, there are a few ways. Obviously we got here from a system of barter, where you would swap one tangible good (a cow) for another tangible good (some gold or silver). When we eventually switched the gold and silver for intangible ‘money’, the government tokens were distributed to people according to how the gold and silver had previously been distributed. The original distribution was not completely fair, but it had a history of at least sometimes rewarding hard work and wealth creation. The government can also modify the distribution by giving people tokens in exchange for work that hopefully benefits the community (employing them), by paying them interest on a bond, or by disbursing some as welfare or grants, not to mention adjusting the amount it takes off different kinds of people in tax.

Some systems try to equate currency with labour. There are time banks where, if you labour for an hour, you are provided with an hour’s worth of tokens which you can then redeem against someone else prepared to labour for an hour. Since almost everything we want requires labour (either to acquire the raw materials or to work them), labour is in some ways the human equivalent of the joule: an energy-based currency.

Early Chinese tool money

In China, I saw tool based money for the first time. Small trowel heads and knives were used as currency during the Zhou dynasty (although they ultimately became very stylised). Tools are labour multipliers. There’s a lot to recommend tool based money – you can actually use a tool to create wealth directly. Use the trowel to plant, and the knife to hunt or butcher. They can be stored easily and are durable. It seems that there are some people who could survive almost anywhere in the world given a decent machete. It’d be interesting to see what an economy based on survival tools would look like.

Naturally anyone thinking along these lines for the modern age will be considering the ultimate human tool; the computer. There are a number of computer based currencies, some backed by boring old gold (the USA has a history of jailing and shutting down e-gold operations, although pecunix seems to have survived better than many), some just a score in a computer that you can pay to have incremented (and increment others at the cost of decrementing your own). The popularity of these is driven by a distrust in the governments that control their currencies but also by the friction and pain that moving small amounts of money between individuals and sometimes across borders entails at the moment.

Amazingly, corporations like Visa can charge an insane percentage on nearly every transaction in our economy. Because merchants are required to offer the same price to those paying with and without credit and debit cards, anyone paying cash is subsidizing the merchant’s cost for everyone else who is paying by card. All this for something (debit) that in the age of the computer should be free.

In a more rational system, banks would be required to provide every current account with an incoming and an outgoing account number (so that there is no danger in sharing your incoming account number), and then an api that would allow me to push money from my account to some arbitrary account number. I imagine that every payment, whether giving a friend 50 cents or paying hundreds of dollars of electricity bill would become something like scanning the QR code of the receiving bank account with a mobile phone. And there’s no reason it should cost anything beyond a normal current account.
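A minimal sketch of what such an API might look like; every name here (makeBank, push, the in/out prefixes) is hypothetical, invented purely to illustrate the idea, not any real banking API:

```javascript
// Hypothetical sketch: each account has an incoming number (safe to print on
// a QR code, can only receive) and an outgoing number (kept secret, can only
// send). Nothing here is a real API.
function makeBank() {
  var accounts = {}; // prefixed account numbers -> shared balance record
  return {
    openAccount: function(incomingNo, outgoingNo, balance) {
      var record = { balance: balance };
      accounts["in:" + incomingNo] = record;
      accounts["out:" + outgoingNo] = record;
    },
    // push money from your own account to an arbitrary incoming number
    push: function(outgoingNo, incomingNo, amount) {
      var from = accounts["out:" + outgoingNo];
      var to = accounts["in:" + incomingNo];
      if (!from || !to || from.balance < amount) return false;
      from.balance -= amount;
      to.balance += amount;
      return true;
    },
    balanceOf: function(incomingNo) {
      return accounts["in:" + incomingNo].balance;
    }
  };
}

var bank = makeBank();
bank.openAccount("alice-in", "alice-out", 100);
bank.openAccount("bob-in", "bob-out", 0);
bank.push("alice-out", "bob-in", 30); // → true
bank.push("alice-in", "bob-in", 30);  // → false: incoming numbers can't send
```

Because knowing an incoming number only lets you give its owner money, printing it on an invoice or QR code carries no risk.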

Interestingly, removing the 4% rent that the card companies charge would be a massive stimulus to the economy at a time when it could really do with one. (The VAT reduction in the UK made a big difference, and it was both smaller than this would be and more painful to government finances.) Beyond the stimulus itself, there are a million interesting business models that the internet is crying out to implement, but can’t because of the needless cost of transactions.

Some startup companies are already trying to address this. Flattr is a nice idea, but takes a 10% cut, which I can’t countenance. Gittip seems more equitable.

One great hope is bitcoin. It’s decentralised, so there is no need to worry about governments undermining its value (although of course there is no base value or justice system guaranteed by a government either). It’s semi-anonymous: everyone knows the wallet ids that own the bitcoins, but without further investigation nobody knows which human owns those ids, so you don’t have to worry so much about intrusive marketeers (or governments) snooping into your transactions. And because it’s modern and thoroughly electronic, the transaction cost is vanishingly small.

There are two things I don’t like about bitcoin though. Firstly, the initial distribution of coins. While real currencies bootstrapped their distribution on top of centuries of barter, bitcoin has a concept of ‘mining’ which is guaranteed to get much harder over time until it ultimately becomes impossible. This means that the people who joined the network early were able to amass millions of dollars worth of bitcoins for very little work, while ‘mining’ a single bitcoin now requires an amount of computing power outside the reach of casual users. This kind of initial distribution is strikingly unfair and leaves a bad taste.

Secondly, bitcoin is not backed by anything. You can’t redeem your bitcoin for anything except its speculative value – even the computing power that went into ‘mining’ it wasn’t solving a problem valued for anything except its effect on bitcoins. You can’t burn it for its joules or get those computer cycles back for something useful.

I would much prefer a system where all grid computing systems offer certificates to certify that you’ve done some useful work (electronic labour, perhaps on protein folding, or on some problem that other people would be prepared to pay the grid providers for). The issuing grid network could guarantee to redeem them for some percentage of the work that you’ve done, and then allow them to be freely exchanged. The decentralisation would come from the fact that anyone could run these grid computing networks, so there would be many authorities. That way the currency is backed by something, has a real value and furthermore is doing something more useful than just bringing closer the heat death of the universe.

A system like this could fix spam fairly neatly by adding a step to email exchange where a server receiving an email for delivery requires a small (e.g. 5 seconds worth) payment in certified grid computing work before passing the email on (potentially smaller or waived fees for mail signed by people you know). That way, to send a spam email to a large number of people would require a correspondingly large investment in solving useful problems, and anyone who can solve protein folding deserves to be able to send a few spam emails in my book.
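As a rough sketch of that scheme (all the function names, the flat five-second fee, and the trusting certificate check are illustrative assumptions, not a real protocol):

```javascript
// Illustrative only: a receiving server checks an attached grid-work
// certificate before accepting mail from strangers.
function workSeconds(certificate) {
  // A real system would verify the grid network's signature here; this
  // sketch just trusts the claimed number of seconds of useful work.
  return certificate ? certificate.seconds : 0;
}

function requiredSeconds(sender, knownSenders) {
  // Waive the fee for senders you already know.
  return knownSenders.indexOf(sender) !== -1 ? 0 : 5;
}

function acceptForDelivery(mail, knownSenders) {
  return workSeconds(mail.certificate) >= requiredSeconds(mail.from, knownSenders);
}

acceptForDelivery({from: "", certificate: null}, []); // → false
acceptForDelivery({from: "", certificate: {seconds: 5}}, []); // → true
acceptForDelivery({from: "", certificate: null}, [""]); // → true
```

A spammer would have to pay five seconds of certified useful computation per stranger, which scales with the size of the send; your friends pay nothing.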

Regardless of what kind of approach wins, we are being held back by lack of trust and by transaction costs more appropriate for yesterday’s world. Unless governments can move with the times and give us more value from our currencies, more and more alternatives will be tried and eventually one will stick.


Particularly when I start a new web project, I often find myself wanting a javascript console that I can embed in a page. The consoles built into modern browsers are usually very good, but it’s also nice to have something useful for logging and visible on the screen as soon as the page is loaded.

As it happens, the excellent CodeMirror implements a lot of the functionality I needed and is easy to extend, so I added a few small pieces of javascript and ended up with SoleMirror, a CodeMirror javascript console.

You can use it at the SoleMirror console page or get the source at the github page. There’s even a bookmarklet that will install it into other pages.

This was just something that I put together over a couple of hours, so there are certainly inelegancies and I’ve not tested it in browsers that aren’t ‘fun’. Maybe you are just the person to improve it…

The Madness of the HTML5 FileSystem API

I’m quite excited about the possibilities that the FileSystem API opens up to web applications. At the moment it’s only implemented in Chrome, but the W3C has a spec and I hope to see it in other browsers soon. I’ve been working on a library to make it easier to use.

The FileSystem API is asynchronous, so you’ll notice quickly from the html5rocks guide that even simple tasks involve many, many levels of nested callbacks to get them sequenced correctly. Take a look at this piece of code to create a file and then list all the files in the root folder:

window.webkitStorageInfo.requestQuota(PERSISTENT, 1024*1024, function(grantedBytes) {
  window.webkitRequestFileSystem(PERSISTENT, grantedBytes, onInitFs, errorHandler);
}, errorHandler);

function onInitFs(fs) {
  fs.root.getFile('log.txt', {create: true}, function(fileEntry) {
    fileEntry.createWriter(function(fileWriter) {

      fileWriter.onwriteend = function(e) {

        function listResults(entries) {
          // finished!  Output the directory entries
          entries.forEach(function(entry) {

        var dirReader = fs.root.createReader();
        var entries = [];

        // Call reader.readEntries() until no more results are returned.
        var readEntries = function() {
          dirReader.readEntries(function(results) {
            if (results.length < 1) {
              listResults(entries); // no more results: we're done
            } else {
              entries.push.apply(entries, results);
              readEntries();
            }
          }, errorHandler);
        };

        readEntries(); // Start reading dirs.
      };

      fileWriter.onerror = errorHandler;

      // Create a new Blob and write it to log.txt, which triggers onwriteend.
      var bb = new WebKitBlobBuilder();
      bb.append('Hello FileSystem API');
      fileWriter.write(bb.getBlob('text/plain'));

    }, errorHandler);
  }, errorHandler);
}

function errorHandler(err) {
  console.log("ERROR", err);
}
While this code can be tidied up a little, compare it with the bash commands to do the same thing:

echo Hello FileSystem API > log.txt

There’s quite a difference in length and complexity. You may not have noticed because some of it is split out, but by the time we get to the listResults function and the end of the task, we are 5 levels deep, mainly in closures.

Here’s the equivalent code using this library:

var fs = FS.request(window.PERSISTENT, 1024*1024);
fs.chain(
    FS.getFile("log.txt", true),
    FS.write("Hello FileSystem API")
).then(function(entries) {
    // we're done!
}, function(err) {
    console.log("ERROR", err);
});
It’s still not as simple as the bash code, but the nesting has been turned into sequential calls, and this gives us the power to compose and reuse pieces of functionality much more easily. It’s actually an implementation of the Monad design pattern; Promises give us the ability to unwrap all of those nested calls.

Design Pattern: Wrapper with Composable Actions

Name And Classification

Wrapper with Composable Actions. An extension to the Wrapper pattern that allows operations to be composed on wrapped objects.

This pattern is well known in another context, but its usual name and presentation is not very descriptive so I’m avoiding it deliberately until the end.


An object has been placed in a wrapper but it must still be possible to use it as input to a chain of operations.


As well as the common uses for wrappers (changing one interface to another), there are other situations where it’s useful to wrap objects, for example:

  • if you need to keep track of a result plus extra information about the actions you’ve carried out as you perform them.
  • if the value you are wrapping is not simple in some way – perhaps it is going to be computed asynchronously and the wrapper represents a promise to provide the result.
  • if you want to add decision points or extra actions based on data not included in the wrapped object, but available in the wrapper that should be done on every operation. (This is like the decorator pattern).
  • if you want to indicate that something fundamental has changed, you can use a wrapper object that doesn’t provide the ability to get at the original object.

Actions (operations that take a value) are usually created to operate on the nonwrapped versions of objects, which means that in the general case, wrapped objects can sometimes be difficult to compose actions on. This pattern specifies two functions that can be implemented to assist with composing actions on wrapped objects.

In general, this pattern is useful when considering a sequence of operations where each step depends to some extent on the previous value and produces a new value. Because of this it is commonly employed in functional languages. Its implementation in languages without first-class functions is generally awkward. This pattern can provide extra value in languages with a strong type system (stronger than Java’s).


To ensure composability, two functions need to be provided, wrap and chain (although their exact names can vary based on situation). I have shown them here on the interface Wrapper:

/* This is analogous to an operation that takes something of type T and returns something of type A, except
 * that A is provided in wrapped form.
 * In a language with first class functions, Action describes a function rather than an object interface
 *    i.e. T -> Wrapper<A>
 */
interface Action<T, A> {
	Wrapper<A> call(T value);
}

interface Wrapper<T> {
	// This is likely to be provided as a static method or constructor.
	Wrapper<T> wrap(T value);

	/* This is normally more complex than simply returning the result of the action, as there
	 * is likely to be some behaviour or state embedded in *this* Wrapper that needs to be appropriately combined
	 * with the Wrapper that is got by executing the action.
	 */
	<A> Wrapper<A> chain(Action<T, A> action);
}

Example 1: Promises

Promises are wrappers for value objects that may not yet be available for calculation. Promises are typically instances of this pattern. Here is a (much simplified) example in javascript:

function Promise() {
	this.onResolve = [];
}

Promise.prototype.chain = function(thenFunc) {
	// return a new promise that will resolve at the same time
	// and with the same value as the result of the passed function
	var result = new Promise();
	this.onResolve.push(function(val) {
		var resultingPromise = thenFunc(val);
		resultingPromise.chain(function(val) {
			result.resolve(val);
			return Promise.wrap(val);
		});
	});
	return result;
};

Promise.prototype.resolve = function(value) {
	// tell all functions waiting for a value that we have one
	this.onResolve.forEach(function(then) {
		then(value);
	});
	// clear down (to avoid memory leaks)
	this.onResolve = null;
	// from now on, chain gets called immediately with the value.
	this.chain = function(then) {
		return then(value);
	};
};

Promise.wrap = function(value) {
	var result = new Promise();
	result.resolve(value);
	return result;
};

This can then be used like so:

function parseJSON(json) {
	return Promise.wrap(JSON.parse(json));
}

function getURL(url) {
	var result = new Promise();
	var xhr = new XMLHttpRequest();
	xhr.onreadystatechange = function() {
		if (xhr.readyState == 4 && xhr.status == 200) {
			result.resolve(xhr.responseText);
		}
	};"GET", url, true);
	xhr.send();
	return result;
}

function postStatistics(obj) {
	var xhr = new XMLHttpRequest();"POST", "http://statserver", true);
	xhr.send(JSON.stringify(obj));
	return Promise.wrap(obj);
}

// compose the asynchronous fetch with the synchronous parse and post
getURL("data.json").chain(parseJSON).chain(postStatistics);


The big win here is that actions that would otherwise have had to be nested can be composed into a data processing chain, and synchronous and asynchronous actions can be composed together transparently.

Example 2: History

Supposing you want to store the steps taken in a computation as you go along. You can use this pattern to achieve this:

function History(val) {
	this.val = val;
	this.commands = [];
}

History.prototype.chain = function(action) {
	var result = action(this.val);
	result.commands = this.commands.concat(result.commands);
	return result;
};

History.wrap = function(val) {
	return new History(val);
};

History.prototype.toString = function() {
	return this.commands.join("\n")+"\n"+this.val;
};

With some actions defined:

function addTen(val) {
	var result = History.wrap(val + 10);
	result.commands.push("added ten");
	return result;
}

function divideTwo(val) {
	var result = History.wrap(val / 2);
	result.commands.push("divided by two");
	return result;
}

You can now do a calculation and then print out the result and the steps taken to achieve it like so:

console.log(History.wrap(5).chain(addTen).chain(divideTwo).toString());

This time, extra information was threaded through the whole process, but the individual steps in the chain remained composable.


Often there is already a body of functions that operate on unwrapped values, returning unwrapped values. An ‘apply’ function can be defined in terms of the two functions already added to allow them to be easily used:

Wrapper.prototype.apply = function(func) {
	return this.chain(function(val) {
		return Wrapper.wrap(func(val));
	});
};

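To see apply in action with a minimal wrapper (Identity here is a made-up example of my own, the simplest possible instance of the pattern, not something from the examples above):

```javascript
// Identity: the simplest possible wrapper. wrap stores a value, chain hands
// it to an action, and apply is derived from them exactly as above.
function Identity(val) { this.val = val; }
Identity.wrap = function(val) { return new Identity(val); };
Identity.prototype.chain = function(action) { return action(this.val); };
Identity.prototype.apply = function(func) {
	return this.chain(function(val) {
		return Identity.wrap(func(val));
	});
};

// double knows nothing about wrappers, but apply lets it join a chain
function double(x) { return x * 2; }
Identity.wrap(21).apply(double).val; // → 42
```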
Big Reveal

As you may have realised already, this pattern is also known as ‘Monad’ in functional programming, where the wrap function is called ‘unit’ (or ‘return’) and the ‘chain’ function is called ‘bind’ (or >>=).

Despite the fact that Monads are commonly associated with Haskell and functional programming, they’re used and useful in many places and I think describing them as a Design Pattern is much more accessible than describing them by their similarity to category theory constructs.

I came to this conclusion after realising that I needed promises to make a reasonable version of the javascript FileSystem API (which is available on github) and then suddenly realising that Promises are actually Monads, something that nobody had mentioned to me before.


Dancing-links: Understanding how the algorithm works

I’ve been writing a little recently about a program I wrote to solve sudoku and other problems using the dancing links algorithm.

This was how I explained it when someone asked for information about how it worked:

Firstly you have to understand Exact Cover. An exact cover problem is a problem where you’re given a bunch of choices and a set of constraints, and your challenge is to select a subset of the choices that satisfies every constraint exactly once.

For example, consider the case of someone creating their ice dance routine. They have a number of tricks that they need to show the judges, and don’t want to perform any trick more than once. They have a number of sequences which are groups of tricks that can be put together and they want to choose the ideal selection of sequences to cover all the tricks once. In this example, the constraints are that they must perform every trick. The choices are the possible sequences they could incorporate into their routine.

A nice way to represent problems of this sort is to draw out a table where the constraints are columns and the choices are rows, and you have a big X in cells where a particular choice fulfills that constraint.

As it turns out, given the right constraints and choices, sudoku can be described as an Exact Cover problem. It does involve 729 rows and 324 columns, which is perhaps why people tend not to use this technique when solving sudoku by hand.
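For the curious, here is one standard way of numbering those 324 columns (an assumption on my part; encodings vary), where the choice “place digit d in cell (r, c)” gets an X in exactly four of them:

```javascript
// Map the choice "digit d at row r, column c" (all in 0..8) to the four
// exact cover constraint columns it satisfies.
function constraintColumns(r, c, d) {
  return [
    9 * r + c,        // cell (r, c) is filled         (81 columns)
    81 + 9 * r + d,   // row r contains digit d        (81 columns)
    162 + 9 * c + d,  // column c contains digit d     (81 columns)
    243 + 9 * (3 * Math.floor(r / 3) + Math.floor(c / 3)) + d // box has d
  ];
}

// 9*9*9 = 729 choices (rows) and 4*81 = 324 constraints (columns).
constraintColumns(0, 0, 0); // → [0, 81, 162, 243]
constraintColumns(8, 8, 8); // → [80, 161, 242, 323]
```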

Ok, assuming you’ve got that, now you need to understand Algorithm X. Knuth said of it “Algorithm X is simply a statement of the obvious trial-and-error approach. (Indeed, I can’t think of any other reasonable way to do the job, in general.)”. You can actually work through this by hand if you have pencil and eraser and the table drawn out in pen as I described in the previous section. Here’s my description of algorithm X:

  1. If your table has no columns, stop – you’ve solved it. If you’ve got a partial solution stored, then it’s actually a real solution; return it.
  2. Select a column (representing a constraint).
  3. Find a row with a cross in that column (representing a choice that fulfills that constraint). Add it to some kind of structure where you’re storing potential solutions. If you can’t find a row, give up – there are no solutions.
  4. Assume that the row you found in step 3 is in the solution, so remove every column in which it has an X. While removing those columns, also remove every row that has an X in any of them (those constraints are already satisfied, so you’re barred from choosing something that would satisfy them again).
  5. Now recursively try to solve the reduced table. If you can’t, remove the row you tried from the potential solution structure, restore all the rows and columns you removed in steps 3 and 4, and try a different row. If you run out of rows, give up – there are no solutions.

Now that you understand that, you can understand Dancing Links, which is a way of implementing that algorithm efficiently. The key point is that in a doubly linked list, when you remove a node (which can be done efficiently by modifying the pointers of its neighbours), the removed node still holds all the information you need to add it back to the list (in the case that it turns out you were wrong when you guessed it was part of the solution). That, plus the fact that making all your linked lists circular removes a lot of special cases, is pretty much all dancing-links is.
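That trick is small enough to show in full. A sketch of the horizontal links only (real implementations also maintain up/down links and per-column counts; the function names are mine):

```javascript
// Splice a node out of its circular doubly linked list. Its neighbours no
// longer point at it, but its own left/right pointers are left intact...
function cover(node) {
  node.left.right = node.right;
  node.right.left = node.left;
}

// ...so putting it back is just the mirror image of taking it out.
function uncover(node) {
  node.left.right = node;
  node.right.left = node;
}

// build a circular list a <-> b <-> c <-> a
var a = {name: "a"}, b = {name: "b"}, c = {name: "c"};
a.right = b; b.right = c; c.right = a;
a.left = c; b.left = a; c.left = b;

cover(b);   // traversal from a now reads a, c
uncover(b); // and b is back: a, b, c
```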