The human checksum
When designing algorithms, we may have been too ambitious and forgotten to design for an essential component: the human checksum.
Eric L. Loomis was arrested in February 2013. He was driving a stolen vehicle that had been used in a drive-by shooting. The police tried to stop the fleeing car before it ran into a snow bank. The driver and passenger ran off on foot but were later arrested; Loomis was one of the two. He pleaded guilty to eluding an officer and operating a vehicle without the owner’s consent.
Loomis is no angel. He is a registered sex offender from a previous conviction, and he had a sawed-off 12-gauge shotgun in the car, along with two empty shotgun casings and some live rounds.
During the court proceedings, the judge in the case decided to turn to an algorithm, an application called COMPAS, to make a more informed decision. Before Loomis was sentenced, a report was generated that assessed his risk of reoffending, and the score flagged him as a high-risk individual. The judge made it quite clear that the algorithm’s output helped decide the six-year sentence: “you’re identified, through the COMPAS assessment, as an individual who is a high risk to the community.”
The case of Eric Loomis is not an exception. Quite the opposite: algorithmic assessment of reoffending risk is becoming the norm. And some are concerned.
When computer technology entered our workplaces in the last century, it acted as a validation or enhancement of human activity. When Steve Jobs referred to computers as “bicycles for our minds,” he was suggesting that they make us more efficient: better at using our energy, whether creative, computational, or otherwise. A bicycle, however, does not decide where to go.
At some stage, however, it becomes less clear who is the rider and who is the bicycle. Are algorithms like COMPAS allowing us to be more efficient and hopefully less biased, or — in a perverse way — are we the bicycles, allowing algorithms like COMPAS to have more impact?
My chat with Leanne Kemp on stage at DLD 2019. The human checksum was one of the topics we discussed.
This is what Leanne Kemp, the CEO of Everledger and Queensland’s Chief Entrepreneur, and I have been discussing recently. While we were talking about the impact of technology on society, she dropped a technical-jargon bomb: checksums. She casually mentioned how computers used to be “checksums” for humans, and how, increasingly, humans are becoming checksums for computers. She also told me that this is one of the core principles of her business: the human checksum.
In its technical context, a checksum is a short digital “summary” or “signature” of a piece of data. It has traditionally been used in data transmission, which is prone to errors introduced by the medium (phone lines, radio waves, or even human transcription). A well-designed checksum is identical whenever the data is identical on both ends of the medium, and it changes with even the smallest variation in the data.
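To make this concrete, here is a minimal sketch in Python. It uses SHA-256 as the summary function purely for illustration (transmission protocols often use lighter checksums such as CRC32), but the property is the same: identical data yields an identical checksum, and the smallest corruption changes it.

```python
import hashlib

def checksum(data: bytes) -> str:
    """Return a short hex 'signature' of the data (SHA-256 here)."""
    return hashlib.sha256(data).hexdigest()

original     = b"Meet me at the station at 9:00"
received_ok  = b"Meet me at the station at 9:00"
received_bad = b"Meet me at the station at 6:00"  # one character corrupted in transit

print(checksum(original) == checksum(received_ok))   # True: identical data, identical checksum
print(checksum(original) == checksum(received_bad))  # False: the smallest change is detected
```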
But we can also apply a slightly broader understanding:
A checksum provides assurance that what we receive has been produced without mistakes.
Can checksums apply only to digital data? Can they be computed only by algorithms? No. A checksum can confirm that a piece of human work contains no mistakes, and a human can confirm that what a computer algorithm produced contains no mistakes.

In other words, there can be an algorithmic checksum and a human checksum, and each can vouch for either human or computer output.
The old (but good): algorithmic checksum
Remember the first “killer app” for personal computers? It was the spreadsheet. It allowed humans not only to perform calculations more quickly but, more importantly, to be confident about the results. As long as a spreadsheet was correctly designed and the correct data was entered, its calculations would always be error-free. The spreadsheet became a computer checksum for humans.
Today, the equivalents of spreadsheets are everywhere. Forms ensure that the data we enter is error-free. Even my email client reminds me to attach the file I mentioned in the message. My car’s navigation reminds me to slow down and change lanes to make sure I arrive at my destination according to my preferences (safely, quickly, without traffic fines).
The new: human checksum
Everywhere we look, we see the emergence of algorithms that operate independently, often without a human in the loop. Government algorithms make automatic decisions in simple cases, such as renewing driver’s licenses or approving age-triggered services. Banks proactively block credit cards when they notice suspicious behavior. These automatic decisions have various levels of independence. To continue the bicycle metaphor: some algorithms are like basic bicycles, making humans more efficient; some are like trikes, preventing humans from hurting themselves; some are like bikes equipped with navigation, recommending where the human should go. And some are fully self-driving, seemingly not requiring humans at all.
And somehow the last group, the “self-driving” algorithms, are all the rage. They are exciting, almost science fiction. But, just like science-fiction characters, they often go rogue if not overseen by a human. “I’m sorry, Dave. I’m afraid I can’t do that.” Read the Twitter thread below; it is almost scary in showing how much a human checksum needs to be designed in from the beginning, because later it may be too late to fix.
Rather than trying to make the algorithms perfect, we need to work on designing in human checksums by default.
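What might “designing in a human checksum by default” look like? Here is one minimal sketch in Python; every name, score, and threshold in it is hypothetical, invented purely to illustrate the pattern of an automated decision that cannot take effect until a person countersigns it.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    subject: str          # what the decision is about (hypothetical label)
    score: float          # the algorithm's risk score, 0.0 to 1.0 (fabricated)
    recommendation: str   # what the algorithm suggests

def algorithmic_assessment(subject: str) -> Decision:
    # Stand-in for a risk model; a real system would compute this score.
    score = 0.87
    return Decision(subject, score, "high risk" if score > 0.5 else "low risk")

def human_checksum(decision: Decision) -> bool:
    # The human is a mandatory verifier, not an optional afterthought:
    # the decision takes effect only if a person explicitly confirms it.
    answer = input(
        f"{decision.subject}: {decision.recommendation} "
        f"(score {decision.score:.2f}). Confirm? [y/N] "
    )
    return answer.strip().lower() == "y"

if __name__ == "__main__":
    decision = algorithmic_assessment("application #1234")
    if human_checksum(decision):
        print("Confirmed by a human; decision applied.")
    else:
        print("Rejected; routed to full human review.")
```

The design choice worth noticing is that the human confirmation sits in the main path of the program, not in an optional logging branch: remove it, and nothing gets decided at all.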
It might sound counterintuitive that, after so many years of trying to hand human activities over to machines, we are now trying to take some of them back. But this has to be the case.
One day, my daughter told me that she would like to become a wildlife carer or a manager of algorithms*. Initially, I wasn’t sure what she meant by the manager part. But now I truly believe that she could end up doing one of the most important jobs of the future: making sure that whatever technology does, it does in the way that was intended. She will be providing the human checksum.
Here’s homework for you: is COMPAS an algorithmic checksum for judges or are judges a human checksum for COMPAS?
*I might have made up a part of her statement.