Captain Cyborg and the Problem of Evil (2005)

While looking at a problem related to archiving thousands of project proposals and reports for a large IT consulting firm, I was struck by how easily these could be classified into two strongly differentiated groups: those based on clerical or other worker replacement, and those focused on enabling an existing workforce to do more, better, or different things.

I think this is a general phenomenon reflecting fundamental value differences between data processing and scientific computing. Virtually all data processing projects, from the introduction of IBM's Hollerith Type III Tabulator in 1921 through to today, seem to focus on benefits achieved through the replacement of clerical and other staff. In contrast, look at research, development, or deployment proposals, from Newman's 1943 use of a Colossus in raid planning to company-wide order management and production optimisation applications today, and you'll see a focus not on replacing humans but on extending their abilities.

More specifically, almost every System 360/370 ever sold was justified by clerical layoffs - but Unix escaped an AT&T management kill order because clerical users of the first Unix application, in the Bell Labs patent office, not only protested but convinced management to support the acquisition of new hardware for further development.

When the modern computer was being invented, in the nineteen thirties and forties, by men like Shannon, Atanasoff, Zuse, Newman, and von Neumann, the problems they hoped to solve were primarily those of what later became known as operations research - finding the maximum of some objective function in the presence of constraints. Thus early work on the use of computers was wholly unrelated to what IBM was doing with accounting machines, and was driven by the belief that computers could extend the human ability to routinely address otherwise difficult problems in optimization and communications.

Then, as now, the scientific focus was on computers as extensions of man, while the financial and accounting focus has equally consistently been to treat accounting machines, whether card based or digital, as cheap, reliable replacements for clerks.

Carry the extensions of man idea forward and one of the things you could get would be cyborg man - human augmentation through biologically integrated hardware. Carry the replacement ethos forward and one outcome you could predict would be brain downloading - the replacement of humans through disembodied human personality continuation.

Neither future is exactly imminent today, but both are being worked on, and both are considered possible by the people involved.

Kevin Warwick, the man the news site theregister.co.uk has derisively labelled "Captain Cyborg," claims he wants to become a cyborg - part man, part machine. To that end he's already done a number of experiments aimed at joining his own nervous system to various microchips in an attempt to learn how such interfaces can be built and how they can be used.

Look past the hype, and what he's really talking about is direct human access to machine memory and data manipulation capabilities. In other words, he sees the computer as a means for extending individual scope and wants to give us better interfaces to machine capabilities, faster access to more data, and clearer communications with each other.

Here's a critical sentence, abstracted from the promotional materials on his website:


The possibility exists to enhance human capabilities. To harness the ever increasing abilities of machine intelligence, to enable extra sensory input and to communicate in a much richer way, using thought alone.

That's clearly in the academic tradition of seeing the computer as a means of extending human abilities - the long-run future he's talking about includes full prosthetic support for Alzheimer's patients and the ability for the rest of us to pick up real skills the way we now buy software. Imagine: $495 for a license on the ability to understand the conductor's score for the Shostakovich 13th - and without giving anything else up.

Given the option I'd take it, right now, no questions asked.

Other people, however, are taking the replacement route. To stick with Brits for a moment, someone named Ian Pearson, apparently a futurist with British Telecom, got his 15 minutes a few months ago by predicting that personality continuation through brain downloading would be common by 2050. For example, an Observer article written by David Smith in May of this year included this bit from an interview with him:


'If you draw the timelines, realistically by 2050 we would expect to be able to download your mind into a machine, so when you die it's not a major career problem,' Pearson told The Observer. 'If you're rich enough then by 2050 it's feasible. If you're poor you'll probably have to wait until 2075 or 2080 when it's routine. We are very serious about it. That's how fast this technology is moving: 45 years is a hell of a long time in IT.'

Pearson and others like him may be promising life eternal on the digital frontier, but that's not what's in the cards even if the idea succeeds, because they're really talking about replacing, not extending, humans - in their future there is no room for people.

The two big conceptual problems they're carefully not talking about are synchronization and accounting for the consequences of the personality's interactions with others.

As an illustration, consider this bit of appropriately purple prose:


I am tormented by one thought: this isn't real. With a thought I walk on Mars, start a weekend with a beautiful woman, create a garden filled with all the fruits and flowers of the earth, but I am tormented by one thought: this isn't real. I think, therefore it is; I am a little God.

I think Sara and a long day cruising the black diamonds at Alta, but it isn't real. I imagine gangrene in both legs and cancer in my lungs and it is so, but Sara loves me and does not hurt. There is no evil, therefore this isn't real. I am a very little God.

I know what I am, I am an APU: an autonomous personality unit, a digital construct.

I am an experiment, a consciousness embodied in a machine. Just now I spent thirty fulfilling years with Melissa, a face from childhood; on the real time clock our lives together lasted a full microsecond. This is not real, nothing happened, no history changed, no children laughed, no one remembers; those who never were, are gone. I am a bitter little God trapped by one thought: there is no evil, this isn't real.

The system I'm a test of is called Wheeler. Wheeler makes me a god, I think, therefore it is. I can make worlds, cities, people, appear - and disappear. What I do is as real as any other APU I bring into being. Sara skied with me, but spent the day on a beach, reading in a lemon grove, at an expensive resort with a man I do not know.

I know what I am: I am not a God, I am an APU, and I want to die. Now.

Next Generation's Mr. Data could rehearse all 12 parts of Vivaldi's Four Seasons while waiting for Picard's next word to arrive, but there's no known way this could work - try it: go have a conversation spacing the phonemes in your words 45 minutes apart. Mr. Data is an artificial intelligence, but human experience requires continuity; a disembodied personality cannot do no-ops while waiting for a buffer to fill or switch through one hundred million lives while polling for real world events in each one.
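To put rough numbers on that mismatch, here's a back-of-envelope sketch in Python - the speedup factors are pure assumptions chosen for illustration, not figures taken from anyone's research:

```python
# Back-of-envelope arithmetic only: the speedup factors below are assumptions
# made for illustration, not claims from the article.
PHONEME_GAP_S = 0.1  # roughly a tenth of a second between phonemes in ordinary speech

for speedup in (1_000, 27_000, 1_000_000):  # assumed subjective speedups
    subjective_minutes = PHONEME_GAP_S * speedup / 60
    print(f"{speedup:>9,}x faster than real time -> "
          f"each phoneme gap feels like {subjective_minutes:,.0f} minutes")
```

At something like a 27,000-fold speedup, the tenth of a second between phonemes already stretches to the 45 minutes used above; push the speedup to a million and every syllable arrives more than a subjective day apart.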

Accept that, generate the stored personality's perceptions of externals from its own memories, and you can hope to escape infinite recursion only by invoking an external power to store new information as additional memories - thereby warping the personality construct. Mr. Data may be a staple of science fiction, but older literature has dealt at length with this - and its answer lies in the other meaning of "temps perdu," the introduction of evil and the damnation of souls.

In our example, the test subject lived thirty virtual years with "Melissa", but what did she do during that time? He's an autonomous personality construct, but what is she? A memory? Something created, and then destroyed, for his microsecond of pleasure? One of many activations of the personality file for the real Melissa? What rights did she have during those thirty years? What happened to her when his attention twitched away?

It doesn't take a lot of deep thought to understand that making digital personality continuation work requires a single clock for all inputs and actions, a single shared reality for everyone and everything the personality interacts with, the complete severance of external knowledge and contact, and the local reality of both good and evil - making the virtual world so indistinguishable from the real thing that you might as well believe in life as a matryoshka doll.
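For what it's worth, the single-clock requirement is easy to sketch. The toy code below is purely illustrative - every name in it (APU, step, run_shared_world) is invented for the example - but it shows the constraint: every construct sharing a reality has to advance in lockstep with one global tick, which is exactly what the Wheeler scenario above violates.

```python
# Toy sketch, purely illustrative: every name here is invented for the example.
# The point is the constraint itself - each personality construct ("APU") in a
# shared reality must advance exactly one step per tick of a single global clock.

class APU:
    def __init__(self, name):
        self.name = name
        self.local_time = 0

    def step(self, world_state):
        # One subjective moment, driven only by the shared world state.
        self.local_time += 1
        return f"{self.name}@{self.local_time}"

def run_shared_world(apus, ticks):
    world_state = {}
    for global_tick in range(ticks):
        # Every APU sees the same tick; none is allowed to race ahead and live
        # thirty private years inside someone else's microsecond.
        events = [apu.step(world_state) for apu in apus]
        world_state[global_tick] = events
    # Invariant: all local clocks agree with the global clock.
    assert all(apu.local_time == ticks for apu in apus)
    return world_state

run_shared_world([APU("he"), APU("Melissa"), APU("Sara")], ticks=5)
```

Let any one construct live thirty private years inside someone else's microsecond and the invariant at the end fails: there is no longer a single shared history for anyone to reconcile.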

Star Trek's Borg have given the cyborg idea a bad reputation, but what makes the Borg evil isn't their nature as cyborgs, or even the apparent absence of free will in their society; it's the jihadist nature of their commitment to the violent assimilation of others. Oddly enough, however, that's not a necessary consequence of Warwick's ideas - but it is a natural consequence of the brain downloading idea, since the continued personalities would face infinite threat from, but be unable to communicate meaningfully with, any remaining embodied humans. Theirs is a future, in other words, in which the last one in turns off the lights.

Express a billion people as personality constructs in a single shared reality, add shielding and a million-year power supply, and fire a dozen or two units off on deep-space trajectories. It's a common enough sci-fi vision, a fine start on that life eternal thing, but one that leaves an abandoned world waiting for the next monkey to reach what David Brin called the "uplift" point - the moment something external jars presapient intelligence into a self-awareness that eventually takes over from evolution.

Maybe that's a choice for some people, but to me the replacement of people by voltage fluctuations signifies an utterly unacceptable hatred for what it is to be human: a fallible, living, and fully embodied intelligence.

In contrast, there are no comparable problems with Captain Cyborg's human enhancements. He's focused on improving the man-machine interface to make the machine more useful as an extension of man's abilities - exactly what computing science, as currently exemplified in the Unix and open source worlds, has been trying to do from the beginning.
