According to Ray Kurzweil, we are approaching a new golden age of human existence, in which human potential will be unlocked and the possibilities for health, happiness and prosperity will become unlimited. But Kurzweil is not a priest, a religious zealot or an evangelist, at least not in the conventional sense. He is a technological forecaster. He argues that advances in computing will inevitably, and very soon, produce computers so fast and so powerful that they will be more intelligent than the entire human population put together, and that these machines will begin to design their own descendants. From there the advances will become exponential, and humans will be rapidly left behind. This predicted event is called the ‘technological singularity’, a name which links it to astrophysicists’ conception of a singularity: a point inside a black hole within which the normal laws of physics as we understand them do not apply, and nothing can be predicted. Likewise, we cannot predict what will happen at the point of the technological singularity, as our predictions would involve the impossible task of guessing how a superhuman intelligence would operate.
The idea of the technological singularity goes back to the 1990s, and it is being taken seriously: the Singularity Institute for Artificial Intelligence was established in 2000 to promote research into the singularity, and receives the support of NASA and Google, amongst others. Kurzweil did not coin the term ‘singularity’, although he has more recently popularised it. Using ‘Moore’s Law’, which states that the processing power available to computing devices will double roughly every eighteen months, Kurzweil predicts that the singularity will occur as soon as 2045. He has written about it, and starred in two documentary films about the idea, arguing that when the singularity comes the world will change beyond our current ability to comprehend it, and a new age will begin. Kurzweil hopes that the new artificial intelligences will help us to achieve the incredible, such as immortality and intergalactic travel. His vision appears to have been compelling to many; reading and listening to its advocates promoting their ideas, and especially browsing the website set up to proselytise for the cause, one is given the inescapable impression of an almost religious obsession with Kurzweil’s predictions and prophecies of the impending age of computer intelligence, the coming of the machine messiah.
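The arithmetic behind this kind of forecast is worth making explicit. The sketch below is a minimal illustration in Python, not anything Kurzweil himself publishes: the eighteen-month doubling period and the notional 2000 baseline year are assumptions, and the point is how sensitive the projection is to them.

```python
# Illustrative only: the compounding arithmetic behind Moore's-Law-style
# forecasts. The doubling period and baseline year are the assumptions
# the whole projection rests on; change either and the numbers shift
# dramatically.

def doublings(start_year: float, end_year: float, period_years: float = 1.5) -> float:
    """Number of doubling periods between two years."""
    return (end_year - start_year) / period_years

def projected_multiplier(start_year: float, end_year: float,
                         period_years: float = 1.5) -> float:
    """Projected growth factor in processing power, if the trend holds."""
    return 2 ** doublings(start_year, end_year, period_years)

# From 2000 to Kurzweil's 2045: 30 doublings, roughly a billion-fold increase.
print(projected_multiplier(2000, 2045))                    # ~1.07e9
# Stretch the doubling period to two years and the gain collapses.
print(projected_multiplier(2000, 2045, period_years=2.0))  # ~5.9e6
```

Nothing in the arithmetic itself guarantees that the trend continues, of course; the extrapolation is only as good as its assumptions, which is precisely the point taken up below.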
This hypothesis, and the movement which has grown up around it, is not without its prominent critics, including technologist Jeff Hawkins, cognitive psychologist Steven Pinker, and Gordon Moore, the originator of Moore’s Law. You can read their views here, amongst those of others, including singularity advocates, who are often called ‘singularitarians’. I think the predictions of the singularitarians rest on two specific false premises, which I want to examine. Firstly, they assume things about the nature of technological change in relation to society which are undermined by the history of technology. My own research into the social history of the telephone in the late nineteenth century provides some interesting evidence here, although I won’t go into it in too much detail. Secondly, I think that assertions about the rise of superhuman artificial intelligence in this manner are indicative of a misconception of the nature of intelligence itself.
Advances in technology have astounded people for a long, long time, probably since some bright spark first had the smart idea of banging two rocks together to start an artificial fire, thus ending human dependency on accidental forest fires or lightning-induced conflagrations. I like to imagine contemporary observers marvelling about what could possibly come next. I wonder if any resisted the spread of the new technology. The Victorian era provides a great example of this kind of amazed wonderment, as technological advances in communications, transport, medicine and other fields revolutionised many aspects of business, commerce, healthcare and government, and slowly trickled down into everyday life. Many Victorians, witnessing the rapid, and seemingly increasing, pace of change, wondered about the future of their world as it changed around them. Over the course of the twentieth and now into the twenty-first century, this pace has not abated; if anything it has accelerated, with computing technologies in particular changing our everyday lives almost beyond recognition.
However, I feel some observers and commentators, like Kurzweil, perhaps overwhelmed and a little carried away by all this wonderful progress, have in their enthusiasm forgotten that for a technology to spread and change our world, it has to be used. There is no future for a technological artefact or system which no-one uses. The factors which lead individuals, the users of technologies, to choose the technologies they do are complicated. The pressure to keep up with competitors can certainly be a powerful force in driving technological proliferation, even when the new tool does a job people were already doing better or faster by other means. In my own research, I can see how many people in business felt the need to adopt the telephone when their competitors began to use it, even when it was not the best medium for the job: in the beginning it was often unreliable, and conversations were far from private, often being overheard on nearby wires or by operators at the exchanges. These reluctant users, then and now, provide fuel for those who argue that technologies have a sort of autonomous existence, changing society as soon as they are invented and moulding people’s lives accordingly. This model of the interaction between technology and society is called ‘technological determinism’: the theory that technologies shape society more than societies shape technology.
Social histories of technology provide a much more nuanced picture, and philosophers of technology have largely abandoned this over-simplified view. For example, the inherent power of the telephone did not trump people’s freedom to choose whether to use it; many chose not to, despite pressure, while others recognised the potential of the new medium and campaigned to improve it. So my first response to the singularity hypothesis is that assertions of the inevitability of the event are misplaced, for a couple of reasons. Firstly, Moore’s Law is not based on any necessary physical law, but rather on contingent human factors: on economics and on business. People using computers want faster computers in order to compete with their rivals. Researchers cater to this need, and in many ways the ‘law’ proves a self-fulfilling prophecy, as it provides a specific goal and expectation at which to aim; but if some physical barrier stood in the way of progress, progress would have to stop. Researchers might find a new way around the barrier, a step such as the silicon chip or quantum computing, but there is no necessary predictive power in Moore’s Law.
In addition, Kurzweil’s assertions of inevitability seem to deny the human actors involved any agency whatsoever. It is as if to say that now a certain course of action has been started by one group of individuals, it cannot possibly be altered by another. On the contrary, if sufficient numbers of people believe that it is in their best interests to do so, they will engage in lobbying or legislation. The spread and growth of technologies is negotiated by complex interactions between different groups with different vested interests at heart. It is true that some technologies, especially network-based technologies such as telecommunications systems, develop an inertia as they grow which can subsequently change society. However, the initial impetus has come from somewhere, and it is always of human origin. When some complain that the pace of technological change appears unstoppable, and use it as proof of technological determinism and the influence of technologies over society, all they are noticing is the influence of other groups of people, those producing the technologies, over their own lives. To imply that any example of technological development could never be stopped is a degree of technological determinism which cannot be supported.
I want now to argue that Kurzweil’s predictions point to a misunderstanding of human intelligence and consciousness. I should add that others have pointed to this as well, but I have not seen it raised precisely like this elsewhere. The basic singularitarian assumption is that a quantitative increase in processing power will lead to a qualitative change, and the machine in question will suddenly wake up and become conscious. Kurzweil seems to believe that simply by producing a computer with the processing power of all the human brains in the world, such a computer would necessarily be as creative and as innovative as the whole of humanity put together, and at this point we would no longer be able to rival such an intelligence. However, I believe he confuses and conflates two separate things: the speed with which processes and operations can be carried out, and the original act of creativity; in short, the ability to answer a question, and the ability to ask the question in the first place. The former is a value that can be measured quantitatively, but the latter is something which can be assessed only qualitatively. What this means is that the super-fast, super-intelligent machine which Kurzweil posits will be capable of nothing more than very, very fast calculations.
The underlying problem is that intelligence has here been drastically simplified. I see no reason why any increase in an intelligence’s rational problem solving ability, as would happen when we increased a computer’s processing power, would necessarily lead to its developing an emotional intelligence. The two are different types of intelligence, and we see them manifesting themselves independently of one another in humans; a strength in one area of intelligence – rational, emotional, spatial – does not necessarily mean a strength in another. Processing power in a computer is simply an endowment of one particular type of intelligence. A computer with more processing power will simply be able to do more calculations faster. Why should it suddenly become curious about the world? Kurzweil seems to assume that a quantitative increase in processing power will lead a computer intelligence to manifest a desire to seek answers and meaning. However, no matter how fast the computers of the future become, without this qualitative shift, this change in kind rather than degree, they will not cross the line from advanced calculating devices to conscious entities.
Kurzweil is now desperately trying to keep himself alive and healthy until 2045, when he expects the technology will become available for his mind to be uploaded to a computer, granting him immortality. His followers seem equally excited. Although his is not the only interpretation of the idea of the singularity – some are more pessimistic, and involve the new superhuman artificial intelligences wiping out humanity for one reason or another – he is doing more than anyone else to actively proselytise for it. His optimistic certainty shares something with religious visions of the imminent end of the current state of the world and its glorious transition into something infinitely better. In the same way that the pursuit of science can masquerade as a form of pseudo-religion through the adherence of scientists to a dogmatic scientism, might technology also be capable of producing the same effect? If previous examples of failed predictions are anything to go by, Kurzweil’s cult will not end when his prophecies fail to come true. Rather, his disciples will simply recalculate, restate and reaffirm their beliefs. But if they are waiting for computers to reach the stage where they will miraculously be able to solve all of humanity’s problems, I feel they will be sorely disappointed by the continued absence of Kurzweil’s machine messiah.
The following links to articles, some referenced in the text above, may provide some interesting further reading: