Machina sapiens and Human Morality

Dateline: September 28, 1997

It may be a philosophical question, but I’m not about to attempt a philosophical answer. Instead, I give some examples (mainly from Andrew Leonard’s book, Bots) to illustrate how AI has already begun to affect us morally.

Joseph Weizenbaum, who with psychologist Kenneth Colby created Eliza, the first and in some ways still the best "chatterbot," said it was "a monstrous obscenity" even to suggest that computers might become judges or psychiatrists of humans. He believes a machine, though intelligent, will not be able to appreciate our sense of "interpersonal respect, understanding, and love." His "worst fear"—that the machine would be used to counsel humans—"came true" when Colby marketed a commercial version of Eliza designed to help people cope with depression. Weizenbaum does not say it outright, but he seems to be criticizing Colby’s lack of morality in succumbing to the charms of AI.
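Part of what made Eliza so seductive was, ironically, how little machinery lay behind it: keyword spotting, pronoun reflection, and canned response templates. A minimal sketch of that general technique (the rules and names here are illustrative inventions, not Weizenbaum’s original script) might look like this:

```python
import re

# Pronoun swaps applied to the user's own words (a tiny illustrative subset).
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# Keyword rules: a pattern plus a response template; (.*) captures the
# fragment of the user's utterance to be reflected back at the speaker.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]
DEFAULT = "Please go on."

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person in the captured fragment."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(utterance: str) -> str:
    """Return the first matching rule's response, or a stock prompt."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return DEFAULT
```

Feed it "I feel sad about my job" and it answers "Why do you feel sad about your job?"—no understanding anywhere, just pattern and echo, which is precisely why Weizenbaum was so disturbed that people confided in it.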

I wonder what Weizenbaum would make of David Lebling, who helped create a "thiefbot" (a bot that stole players’ weapons and ammunition—what a ghastly crime) for the game Zork, and who said that "A lot of loving care went into making him as sadistic as possible."

Or of one "inveterate MUDder" (an addict of the Multi-User Dungeon—MUD—games on the Internet) who whined to Leonard that despite devoting "4 hours a day playing on a MUD trying to reach the highest level and be the biggest baddest person in the MUD," he was frustrated and deeply hurt by the dastardly deed of other MUDders in creating bots to help themselves be bigger and badder. Aside from wondering why he doesn’t spend just a few minutes a day striving for at least a higher level of punctuation and vocabulary, one wonders why "highest" equates to "baddest," and what exactly his gripe is. Can he really not see the irony? Evidently not, and neither, to my dismay, can Leonard, who judges the behavior of our hero’s opponents as seeming "obviously unfair, if not patently unethical." Am I missing something here?

Or of the fact that by the mid-1990s, IRC boasted bots that could "steer you to the latest warez—[aficionado slang for] pirated software—or even deliver that software to you automatically, no questions asked."

There are few rules that even attempt to govern the behavior of either bots or their creators, and it will come as no surprise that some people break what rules there are. "What we do," boasted one IRC hacker (a breed of rogue programmer, usually juvenile in fact and invariably so in mentality), "is just pretty much piss people off and get revenge on channels that either we hate or that have channel operators in them that we hate. We net-split [force an IRC server temporarily off the network, which gives them the opportunity to take it over as it struggles back online], hack ops [steal operating privileges that confer the power to silence talkers, kick them out of a channel, etc.], we flood people off IRC, nick-collide people [cause server paralysis by fooling the server into accepting more than one user per nickname—every nickname should be unique], and do what we have to do to take over the channel." Wow. What fun. Where can I join?

Another said: "It’s a fierce life. There’s a lot of espionage between groups [of IRC hackers]—there are spies, backstabbers, extortion, scapegoating, lying, stealing, and a lot of colliding." Another: "My bots are evil. I like to get people mad." Boys: Beavis and Butthead, if not your country, would be proud of you.

But it’s not all bad news. "By late 1995, much of the IRC community regarded all bots as menaces to society. A mighty bot backlash began," says Leonard. The good news was not that bots were eliminated, though many IRC server owners managed to keep them at bay. In fact, in order to keep out the evil bots, the doors were shut on the good bots, as well—bots that gave a friendly welcome to visitors and performed other useful services. Says Leonard: "The ethics of the many proved no match for the unbridled egos of the few."

I respectfully disagree. It was good news that people cared enough to take action, even though it meant the minor and temporary inconvenience of losing the services of "good" bots. Morality—the greater good—did win in the end. And good bots are rising again. The evil still lurks, waiting for morality to let up, which it will, and bad bots will inevitably rise to ascendance on occasion. The baddest do rise to the highest, sometimes. Think of Adolf Hitler, Manuel Noriega, Genghis Khan.

With the evolution of bots from juvenile toy into commercially harvestable intelligent agents, "Bots and humans are players shooting for big bucks and palpable power," says Leonard. "Where that game will end—with the banishing of all bots, as has happened in IRC, or in the chaotic conflagration of a bot-induced info-Armageddon, or in some stable Elysium of bot and human harmony—is an unanswerable question, for now." Maybe I’m like the man who, having fallen from the top of the Empire State Building, halfway down shouts "So far, so good!" but I’ll stick with the optimists. Human evil on an apocalyptic, holocaust scale can no longer occur.

We’ve grown up a lot, in the last decades of the second millennium. We have some powerful new memes working in favor of world stability and peace—just consider the collapse of "the Evil Empire" (the Soviet Bloc), the fall of the Berlin Wall, and the current (September 1997) effort to reach international agreement to ban landmines (to which the United States, home of the brave, the free, and Beavis and Butthead, is one of the few dissenters). Recall (from a previous article) that an agreement is an exercise of our capacity to communicate a promise—one of the necessary conditions postulated by Dennett for the evolution of morality.

Beavis and Butthead are not (as I confess I once thought) Armageddon. They are just a temporary if tragic freakshow by the wayside of the road to human progress. However, sending Beavis and Butthead off to bed with no supper and a flea in their ear is child’s play compared to ticking off a couple of wayward adults, particularly if they happen to be lawyers. In 1994 the law firm of Canter and Siegel launched Usenet’s first "megaspam." Says Leonard: "In less than ninety minutes, they hit six thousand newsgroups with an advertisement offering assistance in the US Green Card Lottery—a chance for immigrants to the United States to qualify for a coveted work permit." Smearing slime on stoicism, Canter and Siegel maintained that despite the ensuing furor, censure, hate mail and death threats, they came out financially ahead, and set the stage for the email spamming we all love to hate.

But there is no question in my mind that the massive and swift public retribution and condemnation of Canter and Siegel had a chilling effect on would-be spammers. Reputable companies steer conspicuously clear of spam, and only the kinds of outfits I would not want to do business with anyway send me junk email.

An example of the morality of the spammers is someone calling himself Robert Returned, author of the HipCrime spambot, which collected email addresses from Web sites and sent out spam indiscriminately, without regard to the demographics of the mailing list it collected. Mr. Returned not only was not bothered by the cries of wrath his spambot and junk mail predictably elicited: he positively luxuriated in them, "thanking his ‘detractors’ for ‘making so much noise that traffic [to the Web site his spam promoted] will remain high for a long, long time,’" adding: "The raving, angry notes can be a source of great enjoyment."

So far in its embryonic stage of development, Machina sapiens’ impact on our morals has been somewhat less than edifying. But it won’t always be so. Next week’s article, on the morality of intelligent machines themselves, will say why.

Until next week,





NEXT WEEK: The Morality of Intelligent Machines
