Just A Summary

Piers Cawley Practices Punditry

We Deserve Better Than This

A long time ago

One hundred years ago, we got caught up in a really stupid war. War’s never what you’d call a good idea, but the first world war is the benchmark of stupidity (unless you’re Michael Gove, but he’s fast becoming the new benchmark of stupidity).

Something strange happened at the end of the war. In 1914, only around 30% of the adult population had the vote. By February 1918, a general election was years overdue. The Russians had killed the Tsar and were embracing communism; the women’s suffrage movement was threatening to start up again; and millions of returning soldiers — men used to violence by now — would have no say in how they would be governed.

Parliament read the tea leaves and passed the Representation of the People Act, extending the franchise to all men over 21 and many women over 30. This tripled the size of the electorate, 43% of which was now female (if they’d allowed younger women the Vote, women would have had a clear majority, because the war had killed so many men; voting ages were equalised in 1928).

In the election, not much changed. The Tories won the most seats with a new class of MP, mostly drawn from trade and commerce. Labour’s share of the vote increased dramatically, but the nature of the electoral system meant they only won 57 seats (fewer than Sinn Féin, who basically won Ireland). The Liberals came third in the popular vote (second in seats; first past the post really sucks), but Lloyd George remained prime minister, promising a land “fit for heroes”.

He didn’t deliver. The Irish had to fight for their independence and won it in 1921 (ooh look, another stupid war), and in the 1922 election Labour took over from the Liberals as the second party of British politics.

Without the first world war, I wonder how long it would have been before parliament was shamed into extending the franchise to all adults. The expanded electorate may not have got the government it deserved, but the Vote was won.

Time passes…

Seventy years ago, the next big war ended. This time the returning soldiery weren’t going to be fobbed off with fine words and broken promises. Young men came home from defeating fascism in Europe and saw a sitting government still dominated by the party that had blundered into the war in the first place, still promising more of the same. They heard Labour’s promises of full employment, a National Health Service, a cradle to grave welfare state and a compelling vision of the future. And they voted Labour. Oh, how they voted Labour.

Labour won the kind of majority that politicians dream about and went straight to work. Attlee’s government nationalised roughly 20% of the economy; built social housing and encouraged the growth of new towns; introduced national insurance, unemployment benefit and the family allowance; expanded on the universal free education introduced with the Education Act of 1944; and created our National Health Service and what came to be known as the “Postwar Consensus”.

In five years.

In the face of austerity that made our current conditions seem like the lap of luxury.

They didn’t just deliver homes, health and education. They found money for the Arts Council too. Because once you’ve dealt with the worst that physical poverty can bring, shouldn’t you look to do something about poverty of aspiration too?

Few revolutions are so successful. No others have achieved so much without violence. A generation came back from war, said to itself, “We deserve better than this” and did something about it. If you’ve got a grandparent living who voted in that election, go and thank them. Stopping Hitler was a towering achievement, but our grandparents managed to surpass even that.

Never knowingly not evil

The Tories hated it. Every time they’ve had power since, they’ve chipped away at the Postwar Consensus. They’ve had to be sneaky about it though. Once you’ve won the right to fall ill without fearing bankruptcy; once your children are guaranteed a decent education; once you have a roof over your head that isn’t two pay cheques away from repossession… Well, you get attached to such things.

The 1944 Education Act was a Tory act, and rather than replace the old system, it added state schools to the mix. The rich were able to opt out and keep their children in the public school system. The public schools and their associated ‘old boy’ networks survived. Etonians don’t just learn Latin and Greek and the art of fagging; they learn that glib smoothness, the art of masking base and selfish motives behind a veneer of affability. They learn to help their friends and the Devil take the hindmost.

The thing about villains is, they think they’re heroes. They think there’s nothing nobler than helping a chum. They think the world is just. If you’re blessed with the kind of money that Cameron and Osborne inherited you’re going to convince yourself that you somehow deserve your wealth. And if you deserve your wealth, then it’s a small step to thinking that the poor deserve their poverty.

If the world is as it is because everyone deserves their station, then the welfare state is going to seem like the next best thing to evil. The state wants to take some of your money and use it to pay some loser’s rent? It wants to give a drunk a liver transplant? Disgusting! If those people really cared about keeping their home, they’d get a decent job — it’s not hard, just have a word with a friend. And the drunk has only himself to blame. He’s made his bed and he should lie in it.

The real trick though, is convincing those who really are a pay cheque or two from disaster (which is pretty much anyone with a mortgage or in private rented accommodation when you stop and think about it) that the enemy is the poor bastard on benefits. Not the landlord who banks their housing benefit. Not the employer who doesn’t pay a living wage; who lets the taxpayer top up their employees’ pay packets. And certainly not the government which won’t let local authorities build new social housing to help reduce housing costs (which would pay for itself in short order).

This government has that down pat. They’ve used a financial crisis — one whose seeds were sown when Thatcher and Reagan deregulated the markets and fertilised by every bloody government since (there are no innocents in this fiasco) — as the excuse and are dismantling what was so hard won by our grandparents. A government that promised “No top down reorganisations of the NHS” is gutting it. The poor are being forced out of rich areas by the benefits cap and the bedroom tax. The young are… oh god, the young… the coalition seems to read “A Modest Proposal” as sound policy. When I went to university, my fees were fully paid (Thatcher had frozen maintenance grants, not that I’d’ve got one after means testing). My step-grandson is looking at a minimum debt of £27,000 — assuming he can live for nothing. If you’ve got the cash to get your kid the best education money can buy, you don’t want some bright lass from the local comprehensive competing with them for the plum jobs. Pull up the ladder, Jack!

It doesn’t have to be like this. Ask yourself how it is that, in 1945, when the country was on the bones of its arse, with precious few lines of credit and an industrial base battered by years of bombing, we built a welfare state and a national health service that have lasted for seventy years? Ask how we could, at the same time, find the money to subsidise the Royal Opera House and Sadler’s Wells and many other arts organisations? Ask how we could afford, as a country, to support our university students so they could spend their time concentrating on their degrees and the life of the university and not miring themselves in debt?

Ask how we can afford not to do those things now.

There is no excuse for what our government is doing to the poorest among us. Or for what it’s doing to the middle classes, come to that. An underclass is a handy thing. It keeps those on lower middle incomes so bloody scared of falling into poverty that they’ll put up with gross abuse just so they can hang on to what they have. Some guard their little portion with such jealousy that they will not just tolerate the abuse of the poor, they will bay for blood.

It pains me to say this, but not everything the coalition has done is evil. And I don’t just mean Equal Marriage. Even Michael “Stopped Clock” Gove’s been right about something — the emphasis on learning to code rather than merely drive Powerpoint and Microsoft Word is a good thing. The gov.uk initiative is good news — anything which reduces the influence of KPMG, Capita, G4S and their cronies (and which employs so many of my more technical friends) can’t be bad. But a ‘good in parts’ government is still intolerable.

There’s an election due in 2015. 2015: the 70th anniversary of the Attlee revolution. It’s time to do it again. Vote. Vote progressive. Vote independent or green. Hold your nose and vote Liberal or Labour. Join a fucking party and work to change their outlook. Vote pragmatic. But, whatever you do, vote. Especially if you’re young. Politicians only care about keeping the people who vote happy — if you don’t vote, they’ll ignore you. If it makes some other part of their constituency happy, they’ll shit on you from a great height (though I think that may backfire yet — the thing about grandparents is, they tend to like their grandchildren and don’t like seeing them get the shitty end of the stick).

You could listen to Russell Brand and not vote ’cos it’s “irrelevant” — there’s a revolution coming! You could. But you’d be an idiot and you’d be waiting a long time. There’s been one progressive revolution that actually stuck in this country, and that was achieved by voting.

Demand the nationalisation of public goods: the Post Office, Rail, Water, Gas, Electricity. Encourage small businesses and making stuff. Build new public housing. Demand real transparency in markets and government. Fuck landlords. Fuck rentiers.

Change the world. Our grandparents did it seventy years ago. We deserve better. Let’s take a leaf out of their book and do it again.

Published on Sat, 11 Jan 2014 11:01:29 GMT by Piers Cawley.

Reading 'The Traits Paper'

There appear to be two camps around the way Moose::Roles work, busily arguing about whether the following code should emit a warning:

use v5.14;    # package BLOCK syntax needs 5.14 or later

package Provider {
    use Moose::Role;

    sub foo { 'foo' }
}

package Consumer {
    use Moose;

    with 'Provider';

    sub foo { 'no, bar' }
}

One camp holds that the code should at least emit a warning and ideally blow up at compile time. The other camp (which is Moose as implemented) holds that it shouldn’t. The debate gets somewhat heated, and people end up appealing to the Traits paper as if it were some kind of holy writ. What’s annoying is that the folk who appeal to that paper appear to have read a different paper from the one I remember reading. So I went and read it again, and here’s what it has to say about overriding methods got from traits:

Trait composition enjoys the flattening property. This property says that the semantics of a class defined using traits is exactly the same as that of a class constructed directly from all of the non-overridden methods of the traits. So, if class A is defined using trait T, and T defines methods a and b, then the semantics of A is the same as it would be if a and b were defined directly in the class A. Naturally, if the glue code of A defines a method b directly, then this b would override the method b obtained from T. Specifically, the flattening property implies that the keyword super has no special semantics for traits; it simply causes the method lookup to be started in the superclass of the class that uses the trait.

Another property of trait composition is that the composition order is irrelevant, and hence conflicting trait methods must be explicitly disambiguated (cf. section 3.5). Conflicts between methods defined in classes and methods defined by incorporated traits are resolved using the following two precedence rules.

  • Class methods take precedence over trait methods.
  • Trait methods take precedence over superclass methods. This follows from the flattening property, which states that trait methods behave as if they were defined in the class itself.

Which is pretty much as I remember, and strongly implies that Moose is right not to issue a warning.
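To make that concrete, here’s roughly what those precedence rules look like in Moose (a minimal sketch of my own; the class, role and method names are made up):

    use v5.14;

    package Greeter {
        use Moose::Role;

        sub greet { 'hello from the role' }
    }

    package Politeness {
        use Moose;

        with 'Greeter';

        # Rule one in action: the method defined in the class silently
        # takes precedence over the one composed in from the role.
        sub greet { 'hello from the class' }
    }

    say Politeness->new->greet;    # hello from the class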

The paper has more to say on overriding trait implementations in its section on ‘Evaluation against the identified problems’:

Method conflicts may be resolved within traits by explicitly selecting one of the conflicting methods, but more commonly conflicts are resolved in classes by overriding conflicts.

And (relevant to another argument around role composition that’s more or less current):

… sometimes a trait needs to access a conflicting feature, e.g., in order to resolve the conflict. These features are accessed by aliases, rather than by explicitly naming the trait that provides the desired feature. This leads to more robust trait hierarchies, since aliases remain outside the implementations of methods. Contrast this approach with multiple inheritance languages in which one must explicitly name the class that provides a method in order to resolve an ambiguity. The aliasing approach both avoids tangled class references in the source code, and eliminates code that is hard to understand and fragile with respect to change.

There are folk arguing for removing the aliasing support from Moose role composition, but I have to say that I find this argument compelling.
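For reference, aliasing in Moose looks something like this (a sketch of my own, modelled on the Moose cookbook’s role recipes):

    use v5.14;

    package Spin {
        use Moose::Role;
        sub twist { 'spinning' }
    }

    package Shout {
        use Moose::Role;
        sub twist { 'shouting' }
    }

    package Performer {
        use Moose;

        # Both roles provide twist(), which would conflict. Alias each
        # one in under a new name, exclude the originals, and then
        # resolve the conflict locally in terms of the aliases.
        with 'Spin'  => { -alias => { twist => 'spin_twist'  }, -excludes => 'twist' },
             'Shout' => { -alias => { twist => 'shout_twist' }, -excludes => 'twist' };

        sub twist { join ' and ', $_[0]->spin_twist, $_[0]->shout_twist }
    }

    say Performer->new->twist;    # spinning and shouting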

Who’s right

When it comes down to it, referencing the traits paper is just argument from authority, which is one of the classic logical fallacies. However, if you are going to appeal to an authority, try to make sure that you’re not misrepresenting what that authority says. The original traits paper does not suggest that overriding a method got from a trait should come with a warning. On the contrary, it recommends overriding as the right way to resolve conflicts between multiple composed roles.

You may well think that this is problematic. You may be able to show examples where silent overriding has bitten you on the arse. You may even have a good argument for introducing warnings. But “Because that’s how the traits paper says you should do it!” is a lousy argument, made doubly lousy by the fact that it is precisely not what the traits paper says you should do.

Published on Sun, 14 Apr 2013 10:18:00 GMT by Piers Cawley.

Reading Turing

Some things never disappoint. And reading Alan Turing is one of those things. In an earlier post I told an incorrect anecdote about Turing, and Russ Cox pointed me at proof, in Turing’s own words, that I was wrong. I don’t know why it’s taken me so long, but I finally got around to reading his Lecture to the London Mathematical Society on 20 February 1947.

Wow.

Seriously. Wow. He’s talking about programming the ACE, the ‘pilot’ version of which didn’t run its first program until 1950. And the Manchester ‘Baby’, the first stored program electronic computer, was more than a year away from running its first program. It sounds like it should be dreadfully speculative and either handwavy or as out there and daft as the usual crop of ‘futurologist’ type predictions.

As you can probably guess from the fact that I’m bothering to write this up, it was nothing of the sort. I suggest you nip off and read it for yourself. It won’t take you long and it’s well worth your time. Then come back here and find out if the same things struck you that struck me.

Back in the day

Here’s the sentence that brought me up short like a slap:

Computers always spend just as long writing numbers down and deciding what to do next as they do in actual multiplications, and it is just the same with the ACE.

I got to the end of the sentence before it clicked that back then a computer was a human being performing a computation. What we think of today as ‘a computer’ was what Turing called ‘the ACE’ and back then it certainly deserved that definite article.

Then I read it again and recognised the deep truth of it. Back in Turing’s day, the ACE was planned to have a memory store made up of 5 foot tubes full of mercury acting as an acoustic delay line. Each tube could hold 1K bits and an acoustic pulse took 1 millisecond to get from one end of a tube to the other, so the average access time for a single bit of memory was around 500 microseconds. When it was finally built, it was the fastest computer in the world, running at the mighty speed of 1MHz. Nowadays we think that a cache miss that costs 200 processor cycles is bad news and our compilers and processors are designed to do everything in their power to avoid such disasters. In Turing’s day there were no caches, every time something was fetched from memory it cost 500 cycles. (Well, in 1947 that would be 500 cycles + a year and a half before there was a computer to fetch the memory from in the first place).

Curiously, the gold standard of high performance memory in Turing’s day was the same circuit as you’ll find in high speed SRAM today - the bistable flip flop - but done with valves and hope rather than by etching an arcane pattern on a bit of silicon.

Subroutines and code reuse

Turing seems to have invented the idea of the subroutine. Admittedly it’s implicit in his implementation of a Universal Turing machine in On Computable Numbers…, but it’s explicitly described here. And, rather wonderfully, the pipedream of extensive code reuse is there in the computer science literature right from the start:

The instructions for the job would therefore consist of a considerable number taken off the shelf together with a few made up specially for the job in question.

There are several moments when reading the paper where I found myself thinking “Hang on, he means that literally rather than figuratively doesn’t he?” and this is one of them. When your code is embodied in punched Hollerith cards, a library is just that. Row upon row of shelves carefully indexed with reusable code stacked on them like so many books.

Elsewhere he says:

It will be seen that the possibilities as to what one may do are immense. One of our difficulties will be the maintenance of an appropriate discipline, so that we do not lose track of what we are doing. We shall need a number of efficient librarian types to keep us in order.

That’s my emphasis, and ain’t that the truth? I’m not sure that Turing would have foreseen that the nearest thing we have to ‘a number of efficient librarian types’ would turn out to be Google’s computers though. One wonders whether he’d be horrified or delighted.

Discrimination

Here he is, having painstakingly explained how the use of loops can reduce the size of a program:

It looks however as if we were in danger of getting stuck in this cycle and unable to get out. The solution of this difficulty involves another tactical idea, that of ‘discrimination’, i.e. of deciding what to do next partly according to the results of the machine itself instead of according to data available to the programmer.

And there we have the nub of what makes computing so powerful and unpredictable. The behaviour of any program worth writing isn’t necessarily what you expect, because it’s making decisions based on things you didn’t already know (if you already knew them, you wouldn’t have to compute them in the first place). This is why I’m optimistic about AI in the long run. I think that, given that the behaviour of a single neuron is understandable and simulatable, eventually we’ll manage to connect up enough virtual neurons and sensors that the emergent behaviour of those simulated neurons is as near to a ‘real’ consciousness as makes no odds. I’m far less convinced that we’re ever going to be able to upload our brains to silicon (or whatever the preferred computing substrate is by then). Whether we’ll be able to communicate with such a consciousness is another question entirely, mind.

Job Security Code

The masters are liable to get replaced because as soon as any technique becomes at all stereotyped it becomes possible to devise a system of instruction tables which will enable the electronic computer to do it for itself. It may happen however that the masters will refuse to do this. They may be unwilling to let their jobs be stolen from them in this way. In that case they would surround the whole of their work with mystery and make excuses, couched in well chosen gibberish, whenever any dangerous suggestions were made.

Oh, did Turing nail it here. 1947, and he’s already foreseen ‘job security’ code. I’ve seen this kind of behaviour time and again and it drives me up the wall. What the pedlars of well chosen gibberish always fail to see is that, if you get it right, the computer ends up doing the boring parts of your work for you. And your time is then free to be spent on more interesting areas of the problem domain. Software is never finished, it’s always in a process of becoming. There’s a never ending supply of new problems and a small talent pool of people able to solve them; if you’re worth what you’re paid today then you’ll be worth it again tomorrow, no matter how much you’ve delegated today’s work to the computer. And tomorrow’s work will be more interesting too.

Automating the shitwork is what computers are for. It’s why I hate the thought of being stuck writing code with an editor that I can’t program. Why I love Perl projects like Moose and Moo. Why I’ll spend half a day trawling metacpan.org looking to see if the work has already been done (or mostly done - an 80/20 solution gets me to ‘interesting’ so much quicker).

Job security code makes me so bloody angry. There are precious few of us developers and so much work to be done. And we piss our time away on drudgery when we simply don’t have to. We have at our fingertips the most powerful and flexible tool that humanity has ever built, and we use it like a slide rule. Programming is hard. It demands creativity and discipline. It demands the ability to dig down until we really understand the problem domain and what our users and customers are trying to do and to communicate the tradeoffs that are there to be made - users don’t necessarily understand what’s hard, but they’re even less likely to understand what’s easy. But its very difficulty is what makes it so rewarding. It’s hard to beat the satisfaction of seeing a way to simplify a pile of repetitive code, or a neat way to carve a clean bit of testable behaviour off a ball of mud. Sure, the insight might entail a bunch of niggly code clean up to get things working the new way, but that’s the kind of drudgery I can live with. What I can’t stand is the equivalent of washing the bloody floor. Again. And again. And again. I’d rather be arguing with misogynists - at least there I might have a chance of changing something.

I’m not scared that I’m going to program myself out of a job. I’m more worried that I’m never going to be able to retire because as a society and a profession we’re doing such a monumentally piss poor job of educating the next generation of programmers and some of us seem to be doing a bang up job of being unthinkingly hostile to the 50% of the talent pool who are blessed with two X chromosomes. But that’s probably a rant for another day.

Published on Sun, 17 Mar 2013 12:49:00 GMT by Piers Cawley.

Big Data and Singing Crowds

I watched the rugby yesterday. England vs Wales at Cardiff Arms Park. It was a great game of rugby - England were comprehensively outthought by a Welsh side with more experience where it counts, but by gum, they went down fighting to the very end. It’s going to be an interesting few years in the run up to the next World Cup.

While the game was going on, I found myself wondering why the crowd’s singing sounded so very good. It’s not a particularly Welsh thing (though Cwm Rhondda, Bread of Heaven and the rest of the Welsh crowd’s repertoire have fabulous tunes). The Twickenham crowd getting behind Swing Low, Sweet Chariot sounds pretty special too, even if I wish they still sang Jerusalem occasionally. How come a crowd of thousands, singing entirely ad lib with no carefully learned arrangements or conductor, can sound so tight?

After all, if you took, say, 30 people and asked ‘em to sing a song they all know, it would sound ropey as hell (unless they were a choir in disguise and had already practised). Three or four together might sound good because, with that few of you, it’s much easier to listen to your fellow singers and adapt, but 30’s too many for that and, without some kind of conductor or leader, things aren’t likely to sound all that great.

I think it’s a statistical thing. Once you get above a certain number of singers, then despite the fact that everyone’s going to sing a bum note now and again, or indeed be completely out of tune and time with everyone else, the song is going to start to make itself heard. Because, though everyone is wrong in a different way, everyone is right in the same way. So the wrongs will start to cancel themselves out and be drowned by the ‘organised’ signal that is the song. And all those voices, reinforcing each other, make a mighty noise.

That’s how big data works too. Once you have sufficient data (and for some signals, sufficient is going to be massive), the still small voices of whichever fraction of that data is saying the same thing will start to be amplified while the noise is dissipated.
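You can watch this happen in a few lines of code. Here’s a toy sketch of my own (the signal and the noise model are made up) that averages ever more noisy copies of the same signal:

    use v5.14;
    use List::Util qw(sum);

    my @signal = map { sin($_ / 10) } 0 .. 99;    # the 'song'

    sub noisy_copy {
        # the true signal plus uniform noise as loud as the signal itself
        return [ map { $_ + (rand(2) - 1) } @signal ];
    }

    for my $n (1, 10, 100, 1000) {
        my @copies = map { noisy_copy() } 1 .. $n;
        my @stack  = map {
            my $i = $_;
            sum(map { $_->[$i] } @copies) / $n;
        } 0 .. $#signal;

        # RMS error of the stack against the true signal: expect it
        # to fall roughly as 1 / sqrt($n).
        my $rms = sqrt(sum(map { ($stack[$_] - $signal[$_]) ** 2 } 0 .. $#signal) / @signal);
        printf "%4d copies: rms error %.3f\n", $n, $rms;
    }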

Just ask an astrophotographer. I have a colleague who takes rather fine photographs of deep space objects that are, to the naked eye, nothing more than slightly fuzzy patches of space, only visible on the darkest of nights, but which, through the magic of stacked imaging, yield images of stunning depth and clarity.

The Flame Nebula (NGC 2024), star Alnitak, and the Horsehead Nebula, Orion

If you’ve ever taken photographs with a digital camera at the kind of high ISO settings that Mike used to take this, you’ll be used to seeing horrible noisy images. But it turns out that, by leveraging the nature of the noise involved and the wonder of statistics, great photographs like this can be pulled out of noisy data. It works like this:

Any given pixel in a digital photograph is made up of three different components:

  • Real light from the scene in front of the camera
  • Systematic error which is the same in every image
  • Thermal (usually) noise

The job of an astrophotographer is to work out some way of extracting the signal at the expense of the noise. And to do that, they have one massive advantage compared to the landscape or portrait photographer. The stars and nebulae may be a very, very long way away. They may be very dim. But they don’t move. Once you’ve corrected for the motion of the earth, if you point your scope at the Horsehead Nebula today it’s going to look the same as it did yesterday and the day before that. Obviously, things do change, but, from the distance we’re looking, the change only happens on multi-hundred year timescales. This constancy makes the astrophotographer’s task, if not easy, at least possible.

So… the stars (like the tune of Cwm Rhondda) are unchanging, but the noise is different with every exposure (that’s why it’s called noise after all). Even if, on any given exposure, the noise is as strong as the signal, by taking lots and lots of exposures and then averaging them, the noise will get smeared away to black (or very dark grey) and the stars will emerge from the gloom. Sorry. The stars and the systematic error will emerge from the gloom. So, all that remains to do is to take a photograph of the systematic error and take that away from the image.

Huh? How does one take a photograph of systematic error? You do it by photographing a grey sheet. Or, because it’s probably easier, by throwing your telescope completely out of focus so that what you see is, to all intents and purposes, a grey sheet, taking a photograph (or lots of photographs - you’ve still got noise to contend with…) and subtracting the resulting error map from your stack of photographs. Bingo, you’re left with an image that’s mostly signal. All that remains is to mess with the levels and curves, and possibly to stack in a few false colour images grabbed from the infra red or the hydrogen alpha line where there’s lots of detail, and you’re on your way to a cracking photograph.
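The whole pipeline is just two averages and a subtraction. Here’s a toy, one-dimensional sketch of my own (real stacking software also has to align the frames, which I’ve skipped):

    use v5.14;
    use List::Util qw(sum);

    my @scene = (0, 0, 5, 9, 5, 0, 0);    # the unchanging 'stars'
    my @error = (1, 1, 1, 2, 1, 1, 1);    # the sensor's systematic error

    # one exposure: base image + systematic error + fresh random noise
    sub frame_of {
        my ($base) = @_;
        return [ map { $base->[$_] + $error[$_] + (rand(2) - 1) } 0 .. $#$base ];
    }

    # average a pile of frames, pixel by pixel
    sub stack {
        my @frames = @_;
        return map {
            my $i = $_;
            sum(map { $_->[$i] } @frames) / @frames;
        } 0 .. $#{ $frames[0] };
    }

    my @lights = stack(map { frame_of(\@scene) } 1 .. 500);           # the sky
    my @darks  = stack(map { frame_of([ (0) x @scene ]) } 1 .. 500);  # the 'grey sheet'

    # subtract the error map and what's left is mostly scene
    say join ' ', map { sprintf '%.1f', $lights[$_] - $darks[$_] } 0 .. $#lights;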

Obviously, it’s not as easy as that - telescope mounts aren’t perfect, they drift, camera error changes over time. It’s bloody cold outside on a clear night. Sodium street lights play merry hell with the sky. And so on. But if you persevere, you end up with final images like the one above. That sort of thing’s not for me, but I’m very glad there are folk like Mike taking advantage of every clear night to help illuminate the awesome weirdness of our universe.

Noisy data is a pain, but, we’re starting to realise that, if you have enough data and computing power, you can pull some amazing signals out of it. Whether that’s the sound of thousands of Welsh rugby fans combining to sound like the voice of God; an improbably clear photograph of something that happened thousands of years ago a very long way away; your email client getting the spam/ham classification guess right 99 times out of 100; or Google tracking flu epidemics by analysing searches, if you have enough data and the smarts to use it, you can do some amazing things.

Some of them are even worth doing.

Published on Sun, 17 Mar 2013 10:37:00 GMT by Piers Cawley.

Getting Softer

Welcome back. I realise that I left off without telling you how I’d chosen to wire the matrix up. I’m basing my layout on Jesse’s “Blue Shift” layout:

A reduced travel keyboard layout

However, the Maltron has a slightly different layout and I’m less gung ho about getting rid of the extra little finger keys, especially the left hand control and the shifts. The layout I’m starting from looks a little like this:

Maltron Blue v.1

If you count that up, it’s 60 keys. The original Maltron layout has 112 keys, wired in a matrix of 8 rows by 16 columns (which means the matrix could accommodate up to 16 * 8, or 128, keys). Because I was using only 60 keys, I could fit everything in an 8x8 matrix, which I wired up like this:

Maltron Blue Matrix

Once all the keys were wired up, I tacked ribbon cable in place to pick up signals, crimped terminations on the other end, plugged in the Teensy++ and went searching for firmware.

Jesse had settled on the Humble Hacker Keyboard Firmware, but I found I couldn’t get on with it, and I ended up with the tmk firmware, if only because it’s the first one I managed to get working and I found the documentation a wee bit more comprehensible. However, it was driving me up the wall for a while because I simply couldn’t get it to recognise key presses as single keypresses. Keys would bounce, or wouldn’t register, and I couldn’t work out what was going on until I read this tip on the Teensy website. It turns out that electronics is more subtle than I realised.

Pull up resistors (at last)

I’m a software guy, so I thought that the obvious way of detecting a signal was to look for a positive voltage on your controller input pin: zero volts means the input bit is false (zero in boolean logic). It’s a little bit more complicated than that though. It turns out that you get a clearer signal if you treat a pin being pulled to ground as true. To do this, we need some way of arranging for our input pin to sit at 5V when the switch is open and no current is flowing (which, if you don’t know the trick, is the tricky bit) and at 0V when the switch is closed. Enter the pullup resistor. Consider the following schematic:

A pull up resistor yesterday

All we need to know to understand what’s going on now is Ohm’s Law. Ohm’s Law is almost laughably simple but once you’ve grasped it, understanding electronics gets much easier. The law states that the voltage (V) dropped across a load is equal to the product of the current flowing (I) in Amps and the resistance in Ohms (R).

So, when the switch is open (as in the diagram), we can see that the voltage between P0 and ground is equal to 5V - IR, but no current is flowing which makes IR equal to zero and so P0 is at 5V. So… what happens when the switch is closed?

We know that the voltage between the power rail and ground is 5V, and we choose R to be huge compared to the resistance of the closed switch, so that almost all of that 5V is dropped across R. Which means that the voltage at P0 is 0V, or as near as makes no odds, so we have our two logic levels. When the switch is open, the input pin is at 5V, which we call false, and when it’s closed the pin is pulled down to ground (0V), which we call true.
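Putting illustrative numbers on that (the 10kΩ pull-up and 0.1Ω closed-switch resistance are my assumptions, not values from the Maltron), P0 is just the midpoint of a voltage divider:

    V_{P0} = V_{cc} \cdot \frac{R_{\mathrm{switch}}}{R + R_{\mathrm{switch}}}
           \approx 5\,\mathrm{V} \times \frac{0.1\,\Omega}{10\,\mathrm{k}\Omega + 0.1\,\Omega}
           \approx 50\,\mu\mathrm{V}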

So we recast our matrix driver: rather than applying a voltage to each column in turn and checking the row pins to see if they’re high, we set up pull up resistors on the column pins and set all our rows to 5V. Then, to scan the matrix, we set a row to ground, check which columns get pulled to ground too, and move on to the next row. The beauty of the Teensy is that we can do that without any extra hardware - the AVR has pull up resistors built in - so we just set a couple of registers to appropriate values and we’re golden. Once I’d done this and rebuilt my debugging firmware, suddenly the debugging output was making more sense. No missed keys. No strange repeats. No keys I hadn’t touched suddenly deciding they’d been pressed. Lovely.
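In outline, the scan loop looks something like this. I’ve sketched it in Perl for legibility, with the hardware faked up; the real tmk firmware does the same dance in C by poking the AVR’s port registers:

    use v5.14;

    # Fake hardware: %keys_down records which switches are closed, and
    # read_column_pins() models 8 column pins with pull-ups, which read
    # high until a closed switch on the grounded row pulls them low.
    my %keys_down = ('2,5' => 1, '7,0' => 1);      # "row,col" of pressed keys

    sub read_column_pins {
        my ($grounded_row) = @_;
        my $pins = 0xFF;                           # pull-ups: everything high
        for my $col (0 .. 7) {
            $pins &= ~(1 << $col) if $keys_down{"$grounded_row,$col"};
        }
        return $pins;
    }

    for my $row (0 .. 7) {                         # ground each row in turn
        my $down = ~read_column_pins($row) & 0xFF; # invert: 1 means pressed
        for my $col (0 .. 7) {
            say "key at row $row, column $col is down" if $down & (1 << $col);
        }
    }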

There’s another possible problem with keyswitches called ‘bouncing’ that the firmware takes care of for me out of the box. In theory keyswitches are dead simple. You press the button and the circuit goes from not conducting to conducting with no shilly shallying around. In practice… watching the voltage across even the best switch with a suitable oscilloscope is a lesson in the damnable imperfection of mechanical bits and pieces. The voltage is high. Then low. Then high. Then low. Then low. Then high. Then low and staying there. If you don’t take this into account in your driver you’re going to be registering far too many keypresses. Which is why any firmware worthy of the name has software debouncing (there are hardware debouncing solutions, but it’s much, much cheaper and more convenient to do the compensation in software) and the tmk firmware is no different. I’m sufficiently lazy that I’ve not really looked at how it works in any detail. Basically, if it detects a switch change it reads the same pin multiple times and, assuming the switch state is still changed at the end of that process, then it’s a real keyup or keydown event.
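In spirit, it’s something like this (a sketch of my own, not tmk’s actual algorithm; the ‘pin’ here is a canned trace of a bouncy keypress):

    use v5.14;

    # A bouncing press: high (1, switch open) rattling down to low
    # (0, closed and staying closed).
    my @trace = (1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0);
    sub read_pin { @trace ? shift @trace : 0 }

    my $state = read_pin();
    while (@trace) {
        my $sample = read_pin();
        next if $sample == $state;         # no change, carry on scanning
        # Saw an edge: only believe it if the next few reads agree.
        my $stable = 1;
        for (1 .. 4) {
            $stable = 0 if read_pin() != $sample;
        }
        if ($stable) {
            $state = $sample;
            say $state ? 'key released' : 'key pressed';
        }
    }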

Faking it ‘til you make it

The tmk firmware is substantially more capable than my explorations so far have needed. I’m experimenting with what I want to do with the blue shift layer, and with distinguishing between taps, chording and other possibilities, by setting up my ‘blue shift’ keys to send the ‘F12’ and ‘F13’ keycodes and using KeyRemap4Macbook to do most of my messing with stuff. But once I’ve worked out what I want, I expect to push as much as possible into the firmware so I don’t have to duplicate a bunch of work (and indeed find appropriate driver software) when I want to use the keyboard on a Linux or, in extremis, Windows box.

Traditions are there to be overthrown too

The keyboard on your computer (unless you’re a weirdo like me and you’ve got a Kinesis, Maltron or some other alternative input device) is a living fossil. It takes the form it does because, back when typewriters were invented, the mechanical constraints of needing to have typebars strike paper forced the designers to stagger the rows of keys. The keyboard layout was (allegedly) designed not to slow typists down, but to try and avoid getting keys tangled up with each other during typing by keeping common key combinations apart. I’m not entirely convinced that this is true, given that ‘e’ is next to ‘r’ and ‘t’ and ‘h’ are such near neighbours, but it’s pretty obvious that the qwerty layout isn’t really designed to minimise finger travel while touch typing (one wonders if they’d even thought of touch typing when they designed the thing).

There’s no real reason to remain tied to this design. The Maltron case is designed so that there’s not much lateral movement of your fingers or wrist flexion while typing. Once you’ve learned the layout, it’s a delight to type with. But even with the radical case design and rejigged layout, the Maltron is a surprisingly conservative design. The microcontroller I’m using to drive the keyboard is a pretty capable 8-bit computer running at 16MHz, with 8K of RAM, 4K of EEPROM and 128K of flash memory to hold the program. Scanning an 8x8 matrix doesn’t come close to pushing it.

So, if we’re not tied to ‘one key one action’, what can we do?

Here’s what I’ve been experimenting with so far:

Distinguishing between tapping a key and pressing, holding and then releasing it. And between typing a key by itself and using it as a modifier. So at the moment I have:

If I tap (press and release quickly without pressing another key) the left blue shift, then pretend I actually tapped the tab key. If I press the key and, while holding it down, hit another key, send the ‘blue shift’ symbol associated with that key, or just send L_ALT + the original keycode if there’s no blue shift symbol. The right blue shift works similarly, but instead of sending tab on a tap, we send RET. If I press either key, hold it for a while and then release it, we don’t send anything.
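As a decision procedure it looks something like this (a sketch of my own: the keycodes, the 200ms tap window and the after-the-fact event handling are all made up; real firmware does this with timers as the events arrive):

    use v5.14;

    my $TAP_WINDOW = 0.2;    # seconds: quicker than this counts as a tap

    # Decide what one blue shift key should emit, given when it went
    # down, when it came up, and any key struck while it was held.
    sub blue_shift_sends {
        my ($pressed_at, $released_at, $other_key) = @_;
        return "BLUE+$other_key" if defined $other_key;    # used as a modifier
        return 'TAB' if $released_at - $pressed_at < $TAP_WINDOW;
        return;                  # held, then released alone: send nothing
    }

    say blue_shift_sends(0.00, 0.08)      // 'nothing';    # TAB
    say blue_shift_sends(0.00, 0.90)      // 'nothing';    # nothing
    say blue_shift_sends(0.00, 0.50, 'J') // 'nothing';    # BLUE+J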

The two keys on the far left (shift and control) send ESC when tapped.

I’ve also arranged things so that both ALT keys send RALT. I realise that might seem weird, but I’ve also configured my Emacs to treat RALT as a SUPER key which lets me bind actions to blue shifted keys. So when I’m in Emacs, all those keys without a blue symbol on them have more or less complicated actions associated with them. Others have used teensy based firmwares to have certain key combinations move the mouse pointer or trigger complex sequences of actions.

I’ve also got enough pins spare on the teensy (and enough holes in the case) that I’m seriously considering using hot glue to mount a few RGB LEDs behind some of the holes in the middle of the case so that, if I end up cooking up more keyboard layers, I can indicate the keyboard (and Emacs perhaps?) state with blinkenlights. Because how can a project be complete if there aren’t blinkenlights?

Next steps

Where next? I’m not sure. I’m still experimenting with the possibilities that open up once you realise that just because we’ve always simulated a mechanical typewriter there’s no reason to keep doing it. Hardware doesn’t have to be dumb.

And then there’s the fact that a sixty key layout in a case designed to hold over a hundred keys looks scruffy. Until I started hacking my keyboard I’d tended to think that a desktop 3d printer was, for me at least, a solution looking for a problem. But now I’m trying to work out how to build a better keyboard case… Well, I think I’ve found my problem.

Published on Sat, 16 Mar 2013 22:01:00 GMT by Piers Cawley.

Fun with solder

Where were we? Ah yes, I’d just unwired my Maltron, pulled out all the switches, ordered some Cherry MX brown stem keyswitches from a Deskthority group buy and a Teensy++ from Pieter Floris. Now all I had to do was work out how I was going to wire the thing up. Jesse’s article had some great pointers, but as I disassembled the Maltron wiring loom, I gained a great deal of respect for their decision to use fine enamelled wire, which a bit of googling revealed to be solderable copper magnet winding wire (I bought some 30SWG stuff from wires.co.uk). Because it’s thin and solid core, it’s easy to bend into shape and, because the enamel coating melts into solder flux, it’s easy to solder without worrying about stripping insulation.

One thing that worried me about both Jesse’s and Maltron’s wiring was the fiddly nature of the way they wired the diodes in. The Maltron wiring only had diodes on a few keys, but I was looking to experiment with some serious remapping and possibly chording layouts - hardwiring a limited set of modifier keys wasn’t in my plan.

Diodes? Why diodes?

Before I talk about how I solved that problem, I’d best explain what the problem is. Consider the average computer keyboard with 105 or so keys. How do you work out which keys have been pressed without needing 105 I/O pins on your microcontroller? You arrange things in a matrix. We’ll worry about the physical layout of the board later, but here’s a schematic of a 5x5 key matrix which, with a bit of cunning, allows us to read which of 25 keys are pressed with only 10 microcontroller pins:

A 5x5 simple keyboard matrix

Suppose we apply a signal to the first column. Then, by looking at the pins attached to the rows, we can tell which switches in the selected column have been pressed by checking for the signal.

S1-S13-Down-C1

By cycling the signal from column to column rapidly, we can scan the whole matrix.

S1,-S13-Down-C3

When two keys are pressed at the same time, we can spot them with this arrangement, but what happens when, say, three keys are pressed? Let’s press switches S1, S11 and S13 and find out. When we scan column one, all is well, we get the signal out on rows 1 and 3 as we expected, but when we scan column 3, which only has one key held down, we get a signal out on rows 1 and 3 as well. What’s going on?

S1-S11-S13-Down-C3

Let’s trace the signal and see if we can work out how it gets to row one. The signal comes in on column 3, through S13 and onto row 3. But S11 is also on row 3, which means that the signal can flow through S11 onto column 1 and once it’s on column 1, it flows through S1, onto row 1 and confusion reigns.

Welcome to the wonderful world of ghosting.
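You can see the ghost without getting the soldering iron out. Here’s a toy simulation of my own of the circuit above, with switches modelled as connections between column and row nodes:

    use v5.14;

    # S1, S11 and S13 from the diagrams, as [column, row] pairs. Without
    # diodes a signal crosses a closed switch in either direction, so it
    # can wander C3 -> R3 -> C1 -> R1 and fake a press of S3.
    my @closed = ([1, 1], [1, 3], [3, 3]);

    sub rows_seen {
        my ($drive_col, $diodes) = @_;
        my %live = ("C$drive_col" => 1);
        my $changed = 1;
        while ($changed) {                    # propagate until stable
            $changed = 0;
            for my $sw (@closed) {
                my ($c, $r) = ("C$sw->[0]", "R$sw->[1]");
                # column -> row conduction is always possible...
                ($live{$r} = $changed = 1) if $live{$c} && !$live{$r};
                # ...row -> column only if there's no diode in the way
                ($live{$c} = $changed = 1) if !$diodes && $live{$r} && !$live{$c};
            }
        }
        return join ' ', sort grep { /^R/ } keys %live;
    }

    say 'no diodes, drive C3: ', rows_seen(3, 0);    # R1 R3 - a ghost!
    say 'diodes,    drive C3: ', rows_seen(3, 1);    # R3 only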

If you’re happy to live with only detecting when two keys are pressed simultaneously, you can correct for this in software by ignoring ambiguous signals (or by ignoring all signals which show more than two keys if you’re feeling lazy), and you can correct things for WASD gamers with some careful matrix layout to make sure that most of the common 3+ key combos aren’t ambiguous. Or you can spend a quid on a hundred 1N4148 diodes and invest some extra soldering time to wire things up like so:

<< Signal: C3. Closed: C1R1, C1R3, C3R1. All Diodes >>

With the diodes in place, the signal can’t flow back through S11 onto column 1 and confuse things, and your controller software can be much simpler.

The original Maltron wiring doesn’t put diodes on every key, just on all the modifier keys. Diodes may be cheap, but getting them wired into the matrix in the way Maltron did is a complete pain; doing it for over 100 keys was never going to be fun. But dammit, diodes on every key is just the Right Thing. There had to be a better way.

Diodes without (much) pain

If you’ve ever spent time soldering stuff, you’ll be aware that humans have a hard time soldering due to an acute shortage of hands. Generally you need to hold one component in one hand, another component in the other, the soldering iron in your other other hand and the solder wire in the… hmm… we appear to have run out of hands. Which is where things like vises come in handy. That way you can hold one component in the vise, lock the wire or component you’re trying to attach to it in place by twisting, wrapping or bending things, leaving you with hands free for the iron and solder. Wiring diodes up to the keyswitches, which have to be installed in the keyboard case while you’re doing it, is the very definition of fiddly.

However, if you go and read things like the MX keyswitch’s datasheet, you’ll find it makes reference to switches with diodes fitted, and when you look at the bottom of a switch you’ll see a diode symbol and four small holes.

Cherry MX keyswitch

Time to crack a switch open

Inside the switch

There’s quite a bit going on in there, but right where the four holes are is the interesting bit. We can bend the legs of a 1N4148 diode and feed ‘em through the holes, pop the lid back on and it fits, clean as a whistle. We’re onto something here.

I’d decided to go with a reduced layout based on the ‘blue shift’ layout that Jesse cooked up so, although it was fiddly, it didn’t take that long to pop open another 59 switches and put diodes in there. After that, it was easy enough to wrap the anode lead around one of the switch’s pins, solder it in place and clip off the excess wire:

Steps along the way

Now I had 60 keyswitches with diodes installed, which I could pop into the keyboard case and wire up with magnet wire. I love solderable magnet wire. It’s magic stuff. Just wrap it tight round a pin, or the cathode lead of the diode, and it stays put while you solder the joint. I’m not going to pretend it was the work of moments, but it was pretty straightforward and I haven’t needed to resolder a single joint. There’s something very satisfying about watching capillary action suck molten solder into the joint. Physics is awesome.

The first half-row installed and partially wired

Look! I made a rats' nest of my own

Ribbon cable FTW!

A working Maltronstein

Tune in next time to really learn about the importance of the pull-up resistor, how to roll your own (or someone else’s) keyboard firmware and some thoughts on where next.

Published on Wed, 13 Mar 2013 22:49:00 GMT by Piers Cawley.

In which Piers prepares to void the warranty...

Some years ago (I have the awful feeling it was 1999) I was stricken with a bout of tingly numbness in my right hand. When you’re a computer programmer, the thought of being unable to type, and thus unable to program, isn’t something you ever want to deal with. Terry Pratchett’s words about gnawing the arse out of a dead badger if it would make it better spring to mind. So, I replaced my mouse with a trackball, got a better chair and invested three hundred and some pounds of my own money in a Maltron keyboard.

The Maltron design makes the Microsoft Natural split keyboard look like an achingly conservative piece of peripheral design (though the original MS Natural keyboard had one very nice feature that the Maltron lacks: its reverse tilt). It looks like something from a science fiction film (and Maltron keyboards have been used as props in SF movies) and once I’d got used to it, it was marvellous. Comfortable. Fast to type on. Weirdly laid out.

But… slightly flimsy in feel. It’s understandable really; the keyboards sell in such low volumes that each one has the feel of a ‘production prototype’. Which is great when you’re a programmer like me and want the layout tweaked slightly from the basic spec - Maltron were more than happy to wire things up so that, where their ‘standard’ model had a caps lock key, there was a control key, or to make a normally ‘dead’ key into an extra quote/double quote key. After all, when you’re forming a sheet of plastic over a plaster of paris mold, cutting holes in it with a punch and then wiring every keyswitch into the matrix by hand, there’s not much more work in rerouting a couple of wires here and there, so at the cost of a slightly longer wait I got a customized keyboard.

Time went by, the tingles went away, I started coding on laptops, and Macbooks at that. Laptops that had neither AT nor PS/2 keyboard interfaces, only USB. And all those keyboards that let you plug your PS/2 plug into a simple USB adaptor do it by having USB hardware onboard: they detect when they’ve been plugged into a USB socket and switch protocols. Not so much with keyboards made in the 20th century, so the Maltron got put away in its box.

When I was working for the BBC, the tingles came back, so I pulled the Maltron out of storage, hooked it up and discovered that the time in storage hadn’t been kind to it. But the BBC is the kind of employer that really doesn’t like the thought of an expensive programmer not being able to do his job and arranged for me to have a Kinesis Advantage.

The Kinesis is a problematic keyboard for us Maltron users. It’s what the Maltron could have been, given the kind of investment and volume sales that small UK companies don’t get. You only have to look at one to realise that it’s heavily inspired by the Maltron. Putting partisan issues to one side, the Kinesis is a more solid keyboard than the Maltron. When you open one up it’s obvious why the Kinesis manages to feel solid and cost less. There’s a flexible PCB in there, fewer ‘real’ keys and a heavier, more rigid, injection molded case. The initial setup and tooling costs must’ve been phenomenal by comparison to the Maltron way of building, but once those are accounted for, the cost per unit is pulled way down. And Kinesis had a second mover advantage: they weren’t building a keyboard from scratch, they were developing an already existing design. And developing it very well.

If you look at the ‘bowl’ of a Maltron keyboard, each column has its own radius and the angle of each key is tuned in the case, so the case shape is complex and presumably hard to make with injection molding. The Kinesis case shape is rather simpler; the ‘middle’ finger column has a greater radius than the others, but apart from that the bowl is basically cylindrical. However, when you use the Advantage each key feels as right as it does on a Maltron. Kinesis do this by altering the profile of the keycaps. If you pull all the keycaps in a bowl off their stems and put ‘em on a flat surface, you’ll see that no two have the same shape. Developing that profile can’t have been a simple process, but again, it’s much easier to cast a hundred or so keycaps in different shapes than a whole case in a more complex shape. After all, if a keycap goes wrong when it’s cast, you can just dump that keycap and grab another one off the pile. If a case goes wrong, you’ve got rather more plastic to melt down.

So… once I’d remapped the Kinesis to use the Maltron layout, I was very happy with it, and with the help it gave to my posture, I was soon typing comfortably again. (It may have been years since I’d used the Maltron, but fingers don’t forget - when I’m typing on a flat keyboard, my fingers sit over ASDFJKL; but when I’m typing on a contoured keyboard, they’re mapped to ANISTHOR and I find typing in QWERTY mode nearly impossible; I’m reduced to hunt and peck.)

And then I left the Beeb and came to Cornwall. And, bless them, the Beeb held on to ‘my’ keyboard (I hope whoever it ended up with worked out how to remap it back to QWERTY). But, I wasn’t hurting, so a flat keyboard was fine.

Until it wasn’t, and the tingles came back. And good as Headforwards are as employers, they don’t really have the kind of budget that the BBC has, so I’ve ended up buying my own Kinesis for work - if nothing else, I’m not going to have the problem of leaving my keyboard behind if I have to change jobs.

But I’m a geek. I don’t just code at work, I code and write at home. It’s all very well having a lovely ergonomic keyboard at work and a painful flat one at home. So I dusted off the Maltron again, picked up an active USB protocol converter and discovered that time had definitely not been kind. Keys were stuck. Or not registering. The Shift Lock key wasn’t like every other key: when you pressed it, the controller held down a virtual shift key until you pressed it again. And, because this was hard wired in control software blown into an EPROM, there was no way I could remap it.

I was about to give up on getting a contoured keyboard up and working at home and resign myself to carrying the Kinesis back and forth from the office, but then my friend Nat Torkington’s Four Short Links post pointed me at my friend Jesse’s fantastic pair of blog posts about building a flat, Maltron/Kinesis inspired keyboard from scratch, and I was off to the on-line shops in search of Teensy++ microcontrollers, diodes, soldering irons and new Cherry MX brown keyswitches; reading obsessive webfora like GeekHack and Deskthority; and unscrewing the baseplate of my painfully expensive hand built custom keyboard and enthusiastically voiding the warranty with the aid of a pair of diagonal cutters.

An almost disassembled keyboard matrix


Tune in next time to learn how a keyboard works, why magnet wire is lovely and the importance of the pull-up resistor.

Published on Sun, 10 Mar 2013 20:56:00 GMT by Piers Cawley.

Belated OSCON writeup

I had such fun. Though I’m never, ever, livecoding half an unwritten talk in an Emacs window again.

You want proof?

Also. I’m not dead, I’m just writing a book on Higher Order Coffeescript for O’Reilly and I alternate between bouts of horrid mental block and massive splurges of disorganized content where everything seems to be more important than everything else.

Published on Sat, 18 Aug 2012 07:39:00 GMT by Piers Cawley.

Turing

Today is Alan Turing’s 100th birthday. I’ve been thinking about him lately, in particular about a story that demonstrates the perils of working with genius.

The story goes that, when Turing was working with the Manchester Baby (the first stored program computer ever built. Just), a colleague wrote the first ever assembler, which would take (relatively) human readable assembly language and turn it into the ones and zeroes of machine code that the machine could actually execute. He showed it to Turing, who blew up at him, ranting that using the computer to do ‘trivial’ jobs like that was a massive waste of expensive computer time.

The problem with working with a genius, from the point of view of more ordinary mortals, is that the genius has only a very rough idea of what is actually easy, and what is only easy to them. From today’s vantage point, when computers are as freely available as they are now, the idea of not letting the computer do the shitwork for you seems utterly ludicrous – programmer time is more valuable than computer time.

What’s less obvious is that the same was true in Turing’s time (when there was precisely one computer) too. It only takes one programmer to make a mistake in translating from assembler to machine code and run a job that, for instance, gets stuck in an infinite loop and you’ve probably wasted more computer time (and programmer time) than if you’d just run it through the assembler in the first place. Turing didn’t see that, because the process of translating from symbols to binary wasn’t something that he found particularly complicated. To him, what was needed wasn’t more and better tools, it was more and better Turings.

I’m not entirely sure that I believe the story (and I can’t remember where I heard it, so it may be a phantom of my memory). It certainly doesn’t chime with the Turing who was instrumental in mechanising the shitwork of finding the day’s rotor settings so that Enigma traffic could be cracked. The history of Bletchley Park is a story of building better and better machines to do the dully repetitive jobs that humans find so hard to stick to and which machines excel at.

The “Just write code without bugs in the first place” school of programming is alive and well today. It’s not my school though. I’m very much an exploratory coder. I like to have tests to show me the way and to keep me honest so that I don’t go breaking things when I change this bit here. I’ve customised my editing environment to help me as much as possible. I have a nasty habit of writing bare words that I should have quoted, so I have a keystroke that shoves the last word into quotes for me. Another keystroke will align all the broad arrows at the current level of nesting. I’ve got snippets set up that fill in standard boilerplate when I start a new class, a huge list of common spelling mistakes that get autocorrected while I’m not looking and an expanding list of other little helpers that I need to write when I get around to it. Automation lets me go faster, make more and better mistakes and recover from them faster. I am a bear of little brain and I want all the help I can give myself.

I’m not sure that Turing would be in the same camp as me.

Here’s the thing though – Turing was, definitely, a genius. But his paper “On Computable Numbers, with an Application to the Entscheidungsproblem” (the one that gave the world the Turing Machine) has a bug in it. In fact, it has two. The first is obvious enough that I spotted it when I read the paper for the first time. The second bug is rather more subtle (but still fixable. It’s okay, the field of computing is not built on sand).

I love that The Paper – the one that’s the theoretical basis for the modern world (first published 75 years ago, round(ish) number fans) – has a bug. It gives me a strange kind of hope. Fallibility gets us all – even people who have saved the world. I think we should celebrate the humanity as well as the genius. Turing was, by all accounts, a very odd fish, but the world is undeniably richer for his contributions.

So, let’s raise a glass to the memory of Alan Turing tonight, marathon runner, gay icon, saviour of the world and the pleasantly fallible inventor of the modern world. Not a bad CV, when you think about it.

Updated

Apparently I’m wrong about it being Turing who didn’t like the assembler: it was Von Neumann (another genius). See below for details. And phew! How nice to find that a hero doesn’t have feet of clay.

Published on Sat, 23 Jun 2012 15:01:00 GMT by Piers Cawley.

An instructive joke for all occasions

Two bulls were grazing at the bottom of the big pasture, when the farmer let a load of heifers in at the top gate.

“Hey,” said the young bull to the old, “What do you say we run up there and fuck us a couple of heifers?”

“Well,” said the old one, “You’re welcome to do that if you want to, but I plan on walking up there and fucking all of them”.

I’ll leave any interpretation up to you.

Published on Sat, 18 Feb 2012 03:15:00 GMT by Piers Cawley.
