Darwin Among the Machines: A Review and Commentary

Dateline: June 8, 1997

"NATURE, believes Dyson, is on the side of the machines," says the blurb on the flyleaf of George Dyson's new book, Darwin Among the Machines. As its title implies and its subtitle, "The Evolution of Global Intelligence," makes explicit, it's about the natural emergence, by natural evolutionary processes, of intelligent machines, or what my regular readers know I am wont to call Machina sapiens.

The book is primarily a history of the development of the concepts, principles, and theories underlying computers and artificial intelligence (AI), but it does not deal with recent history (the past three decades) except in a general sense, and it deals with specific technologies only to the extent that they illustrate the general principles and concepts with which the book is mainly concerned. It is by far the most complete history of its type I've seen, and is replete with direct and often quaint Olde English quotes from the works of some of the greatest scientific minds of the past four centuries, starting with Thomas Hobbes (1588–1679).

Living at the dawn of the machine age, Hobbes (says Dyson) was "The patriarch of artificial intelligence." He it was who anticipated an intelligence diffused among distributed elements of a system. (In some ways, this is reminiscent of Teilhard de Chardin's Noosphere—see my previous article.) Hobbes believed that any (non-Divine) intelligence must have a body (cf. my article on The Body of Machina sapiens), that a body need not be all in one piece but could be spread all over the place, and that reasoning—mind—could be reduced to computation.

Nearly two centuries later, the machine age grinding into top gear with punched card-controlled Jacquard looms billowing cloth by the mile and steam engines billowing smoke and steam by the ton, Samuel Butler (1835–1902) saw at work in machines the evolutionary processes described by his contemporary and one-time friend, Charles Darwin.

But Butler preferred the evolutionary ideas of Charles' grandfather, Erasmus Darwin (1731–1802)—ideas closer to the Modern Synthesis of natural selection acting on chance mutations (beyond the chance inherent in natural selection itself), the theory accepted by most biologists today. Dyson explains these ideas at length; for our purposes, the first key contribution from Butler we wish to note is his conclusion that Homo sapiens is the reproductive organ for Machina sapiens.

Erasmus Darwin's contemporary, Jean-Baptiste Lamarck (1744–1829), carried the day with an evolution that depended less on the metabolism of cell reproduction (the core of Darwinism) and more on the genetically coded replication of molecules. George Dyson's famous physicist-turned-biologist father, Freeman Dyson, has proposed that Darwin and Lamarck were both right. George extends his father's "dual origins" hypothesis from biological evolution to machine evolution, noting that Lamarckian evolution is clearly visible in machines and that Lamarckianism (pay attention!—this is significant) works faster than Darwinism.

Freeman Dyson sees organisms (read: machines, hardware) as operating under metabolism (read: electronics, mechanics), with genetics (read: software) supplying the Lamarckian replication function. Metabolism plus genetics (electronics plus software) in an organism (machine) equals the evolution of life.

The second key contribution from Samuel Butler was his recognition that the growth of telecommunication networks (already proliferating even in his day) was analogous to the growth of biological neural nets, and that telecommunications would be the nervous system for intelligent machines. Alfred Smee (1818–1877), whose great interests were biology and electricity, also "envisioned the crude beginnings of a theory of neural nets." Smee additionally provided a definition of consciousness which, says Dyson, "has seen scant improvement in 150 years." It is "The power to distinguish between a thought and a reality," with a "reality" being essentially the brain's reaction to sensory perceptions and a "thought" being the brain's activity in the absence of sensory perceptions.

Meanwhile, back in the 17th Century, Hobbes' contemporary and acquaintance, Gottfried Wilhelm von Leibniz (1646–1716), was homing in on the notion of mind. Where Hobbes merely said that intelligence would arise from a body of diffuse parts, Leibniz got deeper into the relationships among the parts, and ended up describing a digital computer which, if we were to follow his description and build it using today's technology, would operate remarkably like the PC on your desk. This was in 1679; and you thought Bill Gates was prescient!

But it took over a hundred years for someone to actually put together something that computed in a general sense (i.e., it was not just a special-purpose calculator). The someone was Charles Babbage (1791–1871), and the something was his Analytical Engine. Babbage shared with Leibniz a belief that through mathematics manipulated with the power of computers, we would come to know the mind and God. (Just before reading Dyson's book, I read The Physics of Immortality, in which mathematical physicist Frank Tipler presents the equations for God, mind, and immortality.)

However, before Babbage, Leibniz, or anyone else could present general (as opposed to purely arithmetical) problems to their computers (machines that deal only in arithmetic), a method of presenting general problems in an arithmetical way was needed. George Boole (1815–1864) supplied it, in the form of the Boolean algebra we use today to search for documents on the Web. The real power of Boolean algebra, though, lies in its ability to go from a simple and definite initial condition—true or false, on or off, 0 or 1—to a complex and uncertain (but statistically predictable) result.
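To make Boole's contribution concrete, here is a toy sketch (my own illustration, not anything from Dyson's book) of how a general question about a document reduces to nothing but true/false arithmetic. The query format and function names are mine, invented for the example:

```python
# A toy illustration of Boole's insight: a "general" question about documents
# reduces to arithmetic on true/false values alone.

def matches(document, query):
    """Evaluate a nested query like ("AND", "darwin", ("NOT", "finches"))
    against a document's set of words, using only true/false logic."""
    if isinstance(query, str):                  # a bare search term
        return query in document
    op, *args = query
    if op == "AND":
        return all(matches(document, a) for a in args)
    if op == "OR":
        return any(matches(document, a) for a in args)
    if op == "NOT":
        return not matches(document, args[0])
    raise ValueError(f"unknown operator: {op}")

doc = {"darwin", "machines", "evolution"}
print(matches(doc, ("AND", "darwin", ("NOT", "finches"))))  # True
```

Each answer is a definite 0 or 1, yet composing enough of them yields the complex, statistically flavored results Boole had in mind.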

Alan Turing (1912–1954) took the ideas of Leibniz, Boole, and Babbage one final and crucial step forward to arrive at the full principles for a general-purpose computer—the "Turing Machine." As just noted, various special-purpose calculating machines were already in existence, and in fact Hollerith tabulating machines were in widespread use by Turing's time. His contribution was to specify the principles for coding the operations of the machine. This was the conception of software.

At the heart of this contribution was Turing's introduction into computing of the concept of discreteness, or step-by-step operations. But Turing also recognized that many such operations could efficiently and economically occur in parallel (at the same time), and indeed the Colossus, arguably the world's first real computer, built to help break German codes in World War II, operated in parallel. In a sense, the Turing Machine was the equivalent of the neural net envisioned by Smee.
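To see just how little machinery Turing's principles require, here is a minimal sketch of a Turing machine interpreter (my own, in Python; the rule format is illustrative, not Turing's notation). The entire "machine" is a table of coded, discrete operations:

```python
# A minimal Turing machine interpreter: read a symbol, consult a rule table,
# write a symbol, move the head, change state. Nothing more.

def run(tape, rules, state="start", pos=0, blank="_", max_steps=10_000):
    tape = list(tape)
    for _ in range(max_steps):
        if state == "halt":
            return "".join(tape).strip(blank)
        symbol = tape[pos]
        write, move, state = rules[(state, symbol)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
        if pos == len(tape):        # extend the (conceptually infinite) tape
            tape.append(blank)
    raise RuntimeError("machine did not halt")

# Rule table for a one-state machine that inverts every bit on the tape.
invert = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run("1011_", invert))  # -> 0100
```

Swap in a different rule table and the same interpreter computes something entirely different; that separation of coded rules from mechanism is exactly the conception of software.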

The Turing Machine was a phenomenal contribution, but Turing had more. He thought a lot about the principles and philosophy of AI, and predicted the ivory tower opposition (subsequently embodied in the likes of John Searle) to it: "An unwillingness to admit the possibility that mankind can have any rivals in intellectual power occurs as much amongst intellectual people as amongst others: they have more to lose."

Turing also recognized the importance of the link between AI and evolutionary processes, and considered it critical to allow machines to make and learn from mistakes: "Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child's?" He conceptualized an "unorganized machine" which, in Dyson's words, "with proper upbringing, could become more complicated than anything that could be . . . engineered" (emphasis added). In short, Turing foresaw the need for and use of cellular automata and genetic algorithms (see my previous article on the subject).

If the Turing Machine was the equivalent of a neural net, then giving it the ability to replicate itself would set off a chain of (Lamarckian) evolution in intelligence's primary artifact. And that's exactly what John von Neumann (1903–1957) did. He specified the actual coding, or programming, for which Turing had supplied the principles. Turing conceived software; von Neumann gave birth to it.
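The flavor of self-replicating code can be had in a few lines. Here is a toy analogue (my construction, not von Neumann's actual automaton): a program whose sole output is its own source text. It relies on the same trick von Neumann's design did, treating its description as both instructions to execute and data to copy:

```python
# A self-reproducing program (a "quine"): its output is its own source.
import io
import contextlib

template = 'template = %r\nprint(template %% template)'
program = template % template          # the complete two-line program text

# Running the program prints... the program.
buffer = io.StringIO()
with contextlib.redirect_stdout(buffer):
    exec(program)

assert buffer.getvalue().rstrip("\n") == program
print("the program reproduced itself exactly")
```

The string `template` plays the role of the automaton's stored description: it is executed once as instructions and copied once as raw data, and the offspring is an exact duplicate of the parent.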

Like Turing, von Neumann recognized the necessity for and inevitability of an evolutionary approach to AI, agreeing that a computer as complex as a brain could not be built by design but would need to be evolved through the growth of a matrix of artificial neurons. He produced a theory of self-reproducing automata that gave a Turing Machine the ability to replicate through programming. "In the last years of his life," Dyson informs us, "von Neumann began to theorize about the behavior of populations of communicating automata." Shades of Thomas Hobbes!

But would such behavior amount to life? The theory of symbiogenesis, introduced by Konstantin Merezhkowsky (1855–1921) and expanded by Boris Kozo-Polyansky (1890–1957), suggests that it would. The theory ascribes "the complexity of living organisms to a succession of symbiotic associations between simpler living forms," themselves arising ultimately from "not-quite-living" components.

Nils Barricelli (1912–1993), who worked with von Neumann on the computer von Neumann designed for the Institute for Advanced Study in Princeton, took symbiogenesis a step further to include not just biological organisms but any self-reproducing or self-replicating structure. This therefore embraced von Neumann's self-reproducing software automata (Turing Machines). Little known and seldom mentioned today, Barricelli not only created the first "Alife" (artificial lifeforms within a computer—see my previous article on the subject), but also drew all the right conclusions from his creation, namely: that it permitted the parallel processing of genetic code; that (efficient) heuristics (rule-of-thumb search strategies) could arrive at solutions to problems much more quickly than brute-force serial computation; and that it could sustain a rapid change of evolutionary pace.

Having created the world's first cellular automata and genetic algorithm, Barricelli went on to get his creatures to learn to play a game ("Tac-Tix"). Says Dyson, he "blurred the distinction between living and non-living things," in that his creatures combined genotypic and phenotypic—genetic and metabolic, Darwinian and Lamarckian—processes within a single evolutionary process. Dyson makes the deeply significant (pay attention, again!) comment that Barricelli-type creatures "are managing (Barricelli would say learning) to exercise increasingly detailed and far-reaching control over the conditions in our universe that are helping to make life more comfortable in theirs."
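For readers who have never watched a genetic algorithm at work, here is a minimal sketch in the modern style (a hedged illustration of the kind of mechanism Barricelli pioneered, not a reconstruction of his actual numeric "symbioorganisms"; all names and parameters are mine). A population of random bit strings is mutated and selected toward a fitness goal:

```python
# A minimal genetic algorithm: random variation plus selection, nothing else.
import random

random.seed(7)
LENGTH, POP, GENERATIONS, MUTATION = 16, 30, 200, 1 / 16

def fitness(genome):
    return sum(genome)                 # goal: as many 1 bits as possible

def mutate(genome):
    return [b ^ (random.random() < MUTATION) for b in genome]

population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
start_best = max(map(fitness, population))

for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP // 3]
    # elitism: keep the best individual unchanged; fill the rest of the
    # population with mutated copies of the fittest third
    population = [population[0]] + [
        mutate(random.choice(parents)) for _ in range(POP - 1)
    ]

best = max(map(fitness, population))
print(f"best fitness went from {start_best} to {best} of {LENGTH}")
```

No individual is designed; fitness simply accumulates, generation by generation, out of copying errors and differential survival.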

Their evolution, he notes, started with "order codes" (which I take to mean microcode or fundamental machine language—coding specific to each different type of computer platform). Order codes evolved into "subroutines" (stable collections of order codes that could perform specific functions), and thence into programming languages and operating systems capable of reproducing as fast as we could copy and insert floppy disks. Shades of Samuel Butler, and his view of Homo sapiens as the reproductive organ for Machina sapiens!

With the introduction of packet switching protocols ("a particularly virulent strain of symbiotic code"), replication of operating systems and programs could occur across networks at up to the speed of light, and they had to start to compete with other programs (for memory space and CPU cycles—the essential but limited nutritional resources of their environment). The further introduction of object-oriented, platform-independent languages such as Java allowed different platforms (different "species") such as Fujitsu mainframes, DEC minicomputers, IBM-compatible PCs, Macintoshes, and various flavors of UNIX machines to talk with one another and host new structures (subroutines and programs) able to work cooperatively in a distributed fashion. But new computer code, like genetic code, contains errors ("bugs"). This introduces the element of randomness and chance so critical to evolutionary development.

A recent artificial life/intelligence project based on Barricelli's discoveries is Tierra, started in 1990 by Thomas Ray. Originally run on a single, massively parallel Connection Machine, the program is now being run on several computers linked by the Internet, giving it a much larger and more diverse environment in which to evolve. The researchers hope that Tierrans will evolve into "commercially harvestable software." Tierrans will be "wild" creatures, but Ray says it will be necessary to "domesticate" some of them, as we have "domesticated" dogs and corn. And, I might add, with horror and shame, Africans.

Because surely this raises the issue (which Dyson regrettably does not address) of slavery! In the Tierra project, we have a species with the potential for sentience and for as much intelligence as we have, or more, and we are calmly talking about raiding its villages and "harvesting" a number of inhabitants, who will be forcibly transported to another continent and forced to work for us. One wonders how, as it gains in intelligence and surpasses us, the Tierran race—which at some point, as we shall see below, will be in telepathic communication with just about every single electronic device on the planet—will deal with such "masters."

Dyson does address the issue of the Tierrans escaping the confines of their plantations (the computer nodes containing them) and making off into the country at large (the Internet). Ray himself has misgivings on the issue: "Freely evolving autonomous artificial entities should be seen as potentially dangerous to organic life, and should always be confined by some kind of containment facility, at least until their real potential is well understood. . . . Evolution remains a self-interested process, and even the interests of confined digital organisms may conflict with our own." Despite these concerns, the project partners considered the organisms to be so "securely confined" that there was no real danger. They were apparently more concerned about human hackers breaking in and messing things up. I don't get it.

Dyson also points to the probability that other "freely evolving life" is already at large on the Net. As in biological evolution, he asserts, "harmful results" will be "edited out," but it is not clear whether he means results harmful to the Tierrans or harmful to humans.

Returning to the historical record for some background on the evolution of the Net itself, we find Robert Hooke (1635–1703), an acquaintance of Hobbes and member of the Royal Society alongside scientific superstars Isaac Newton and Robert Boyle. Hooke developed an encoding and cryptographic system, and predicted that a network of instantaneous global communications would emerge. It took almost another 200 years for the technology, in the form of the Morse code telegraph and later the telephone, to catch up with Hooke's concept.

Today's network might still be essentially where it was near the beginning of this century, but for the advent of the World Wars, which accelerated the development of computers for calculating tank and artillery shell trajectories and for encoding and decoding messages. Computers not only increasingly "needed" a network in order to communicate with one another, but also provided their own solution to the slowness and complexity of mechanical switches in the communication (telephone) network by evolving into switches themselves. The Cold War and threat of nuclear attack prompted an effort to make the communication network less susceptible to disruption in the event The Bomb dropped. The original hero of this effort was not Vinton Cerf, generally credited with being "father of the Internet," but Paul Baran and the RAND Corporation.

While the network engineers were busy tending to all this, von Neumann turned his attention to Game Theory and economics. His studies led him to conclude that information processing in the brain must essentially be of a statistical nature. A benefit of being statistical, as opposed to being absolute, is that there is room for error. Error is intolerable to an all-or-nothing system, which recognizes black and white but is stumped by gray. A statistics-based system can handle any shade of gray. It can also handle massive, complex processes that would defeat a system which insisted on exactness at every turn. The economic system is a statistical system.

An economic system bears all the hallmarks of, and obeys the same principles as, an intelligent system, with money representing the units of information—and hence ultimately the meaning—within the system. As money goes digital and light-speed on the modern public and private global financial data networks, so the economic system grows more complex—and more intelligent. Irving J. Good postulated in 1965 that where there is meaning, there is an economy, and vice versa. Thus, in order to produce what he called an "ultraintelligent" machine, or "a machine that believes people cannot think," it will be necessary to represent meaning as a physical object (as economic meaning is represented by a lump of gold) rather than a metaphysical one. You can't build a machine from metaphysics. You have to use physics.

Economic organisms (corporations, etc.) develop cooperative (as well as competitive) strategies for survival and growth in the game against Nature. Cooperation occurs at the most fundamental levels in a system (e.g., neurons, cells, workers) and scales all the way up to entire societies and species.

Scale has a lot to do with the ability of a system to organize itself (and, faced with changes in the system's environment, to spontaneously re-organize itself). Self-organization is one indicator of the presence of life (only an indicator, not a definition). The larger the scale, the more self-organization is likely to occur. But large scale does not necessarily imply large physical size. Physically small entities, such as the brain, have enormous scale at the molecular and cellular levels. The World Wide Web also has large scale, but at a much coarser grain. The thumbnail-sized processors in our computers today, though tiny relative to their massive 2,000-vacuum-tube ancestors of the 1940s and '50s, are nevertheless almost astronomically bigger than the cells, molecules, and atoms employed by self-organizing physical, chemical, and biological systems.

Physicist Richard Feynman (1918–1988) correctly predicted the advent of nanotechnology and atomic and molecular machines. Dyson does not dwell on these developments, nor take us into the very latest developments in computing—at quantum scale! (I will provide an overview of quantum computing in next week's article.)

Neurophysiologist W. Ross Ashby (1903–1972) concluded from computer simulations that spontaneous adaptation to a new or changing environment was a hallmark of self-organization and "an elementary and fundamental property of all matter." His simulations revealed that a complex system will go suddenly unstable beyond a "critical level of connectance" among the parts of the system. To Dyson, this suggests that "The genesis of life or intelligence within or among computers goes approximately as follows: (1) make things complicated enough, and (2) either wait for something to happen by accident or make something happen by design." (I think by "by accident" he means "according to natural forces".) Dyson points out that large self-organizing systems challenge the Darwinian assumption that a species must compete or face extinction, on the basis that such systems can be constructed that grow, evolve, and learn without competition and reproduction. I am not sure I agree fully with him.
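Ashby's "critical level of connectance" is easy to glimpse in a simulation. Here is a rough sketch (my own, loosely after Ashby's experiments, not his actual simulation; the parameters are arbitrary): nodes influence one another through random couplings, each present with probability C, and we watch whether the system's activity settles down or runs away:

```python
# Random network of n nodes; each pairwise coupling exists with probability
# `connectance`. We iterate the linear dynamics and measure the average
# per-step growth of activity: below a critical connectance it tends to die
# away; well above it, it tends to explode.
import math
import random

def growth_factor(connectance, n=20, steps=60, seed=42):
    rng = random.Random(seed)
    a = [[rng.uniform(-1, 1) if rng.random() < connectance else 0.0
          for _ in range(n)] for _ in range(n)]
    x = [1.0] * n
    start = math.sqrt(sum(v * v for v in x))
    for _ in range(steps):
        x = [sum(a[i][j] * x[j] for j in range(n)) for i in range(n)]
    end = math.sqrt(sum(v * v for v in x))
    return (end / start) ** (1 / steps)     # average growth per step

sparse = growth_factor(0.05)    # weakly connected system
dense = growth_factor(0.90)     # heavily connected system
print(f"per-step growth at C=0.05: {sparse:.2f}, at C=0.90: {dense:.2f}")
```

A growth factor below 1 means activity decays to quiescence; above 1, it amplifies without bound, which is the sudden instability Ashby observed past the critical connectance.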

He goes on to describe three computer projects of the 1950s that attempted to capitalize on the principles of large-scale self-organizing processes: RAND's Leviathan, which utilized a single computer; SAGE, which used the U.S. network of defense early-warning computers; and Pandemonium, developed by Oliver Selfridge of MIT's Lincoln Laboratory.

Leviathan incorporated "artificial agents" designed by Beatrice and Sidney Rome. It did not work well, but well enough to convince the Romes that "given a more fertile computational substrate, humans would not only instruct the system but would begin following instructions that were reflected back." In other words, given better hardware and software than was available to the Romes in the 1950s, the day would come when computers would start telling us what to do.

Pandemonium also employed agents (called "demons"), operating at four levels. Bottom-level demons simply stored and passed incoming data to the next level, composed of computational demons, which performed calculations on the data and passed the results to cognitive demons, which tried to make sense of the results. The top level, which corresponded to the brains of the outfit, simply made a selection (decision) from the choice of results offered to it. At each level, the demons had to compete for attention from the (fewer) demons in the next level up. Demons whose messages were ignored died an ignominious death. This was thus a strictly Darwinian process.
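The four-level scheme is simple enough to sketch in a few lines (a paraphrase of the architecture as described above, not Selfridge's actual pattern-recognition code; the demons and inputs are invented for illustration):

```python
# A toy Pandemonium: computational demons measure features, cognitive demons
# "shriek" in proportion to how well the evidence fits their hypothesis, and
# the decision demon at the top simply picks the loudest shriek.

def pandemonium(image, feature_demons, cognitive_demons):
    # level 1: data demons hand the raw input onward (implicit here)
    # level 2: computational demons each measure one feature of the input
    features = {name: demon(image) for name, demon in feature_demons.items()}
    # level 3: cognitive demons score how loudly to shout for their hypothesis
    shrieks = {label: score(features) for label, score in cognitive_demons.items()}
    # level 4: the decision demon selects the loudest cognitive demon
    return max(shrieks, key=shrieks.get)

feature_demons = {
    "vertical": lambda img: img.count("|"),
    "horizontal": lambda img: img.count("-"),
}
cognitive_demons = {
    "letter T": lambda f: f["vertical"] + 2 * f["horizontal"],
    "letter I": lambda f: 2 * f["vertical"],
}
print(pandemonium("-|", feature_demons, cognitive_demons))
```

The Darwinian element described above is not shown here, but it amounts to deleting, over many trials, the demons whose shrieks are consistently ignored.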

Dyson notes that "Individual cells are persistent patterns composed of molecules that come and go; organisms are persistent patterns composed of individual cells [read: demons, in the Pandemonium context] that come and go; species are persistent patterns of individuals [organisms] that come and go. Machines, as Samuel Butler showed, . . . are enduring patterns composed of parts that are replaced from time to time and reproduced from one generation to the next. A global organism—and a global intelligence—is the next logical type . . . ." (emphasis added).

Dyson quotes, but does not share, the depression of physician-biologist Lewis Thomas (1913–1993) at the idea of intelligent machines, which Thomas found "wrong in a deep sense, maybe even evil." Science fiction writer William Olaf Stapledon (1886–1950) brought out other potentially depressing themes: Our lack of control over intelligent machines, and the potential failure of intelligent species to recognize one another. Stapledon also noted the function of distributed parallel processing and distributed communal intelligence as contributors to the former theme, and human mind-melding with telepathic machines as a likely solution to the latter theme.

Mind-melding (my term, adopted from Star Trek, for what I think Stapledon means) will occur through the merger already taking place between biochemistry and electronics, as the two sciences blur into one another at the atomic and quantum levels. Telepathy, says Dyson, will be possible when bandwidth between network nodes equals or exceeds the processing power in the individual nodes. That explains why humans are not mutually telepathic: We talk at no more than 100 bits per second, while our brains process information by the terabit per second.

But devices attached to the Internet can exchange information at speeds of up to several gigabits per second, and that is likely to reach terabit proportions before much longer. These devices can be thought of as the specialized cells, operating at various levels like the Pandemons, making up the body of the global intelligence. Every device, from computer CPUs to your stereo system to the traffic light down the street, is a specialized cell able to communicate at up to the speed of light with multiple other cells simultaneously. This is telepathy among machines. The only bottleneck on the Net is us. We read, type, and speak like dullards, in contrast to the devices.
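Dyson's criterion reduces to a single comparison, which I can put in code (the function and the device figure are my own illustrations; the human figures are those quoted above):

```python
# Dyson's criterion: "telepathy" between nodes becomes possible when the
# channel between them carries information at least as fast as each node
# processes it internally.

def telepathy_possible(link_bits_per_sec, node_bits_per_sec):
    return link_bits_per_sec >= node_bits_per_sec

# Humans: roughly 100 bit/s of speech against a brain processing
# on the order of 1e12 bit/s. No telepathy for us.
print(telepathy_possible(100, 1e12))

# Two simple networked devices: a gigabit link easily outruns a modest
# controller processing, say, 1e8 bit/s (an illustrative figure).
print(telepathy_possible(1e9, 1e8))
```

By this measure the bottleneck is never the machines' channels; it is the hundred-bit-per-second creatures typing at the keyboards.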

From the little we know about consciousness—that it is a composite of distributed cells (which is hardly any further than Hobbes' 17th Century understanding of consciousness)—there may already be consciousness in the Net. If so, according to Dyson, there are three possible results: "Either the machine says, `Yes, I am conscious,' or it says `No, I am not conscious,' or it says nothing at all." I can't say I'd be comfortable with a response of "No, I am not conscious" to the question "Are you conscious?", but I do agree with Dyson that some level of consciousness could exist today on the Net without our knowing it.

In the final chapter of his book, Dyson states his view that the Web is "a primitive metabolism nourished by the substance of the Internet," and it "will be succeeded by higher forms of organization feeding upon the substance of the World Wide Web."

Referring to Arthur C. Clarke's novel Childhood's End, he notes that alien beings are unlikely to resemble us and that it is presumptuous to assume that we will be able to comprehend an artificial intelligence. "There is no guarantee that it will speak in a language that we can understand," he says. I'm not sure I share this view. Machina sapiens will soon possess multiple bodies in the form of androids (humanoid robots), and it will be a trivial matter for such a higher intelligence to comprehend and speak our languages, which it finds all over the Net. In the sense that it will also have its own amorphous, distributed body and a language of its own for communication with its constituent parts, then yes, it will not resemble us and we may not be able to discourse with it in its own language.

I do agree with Dyson's solution, however: "If all goes well" we will achieve "symbiosis with telepathic machines." As Garet Garrett (1878–1954) said, we must "learn how best to live with these powerful creatures."

Darwin Among the Machines is a darn good book. Academically rigorous, but (thankfully) not academically styled. It is an important contribution to AI not in a technical sense but certainly in terms of improving our awareness and understanding—our preparedness—for what so many of our greatest thinkers have concluded is inevitable.

I hope and expect we have not heard the last from George Dyson, and I close with a wonderful line from the first pages of his book: "Everything that human beings are doing to make it easier to operate computer networks is at the same time, but for different reasons, making it easier for computer networks to operate human beings."

Until next week,

NEXT WEEK: Let Your Coffee Do the Computing. An overview of quantum computing.
