Philosophy, Technology

Preachers of the machine messiah: the misguided assumptions and assertions of the cult of the ‘singularity’

Ray Kurzweil, picture from Wikipedia.

According to Ray Kurzweil, we are approaching a new golden age of human existence, in which human potential will be unlocked, and possibilities for health, happiness and prosperity will become unlimited. But Kurzweil is not a priest, he is not a religious zealot or evangelist, at least not in the conventional sense. He is a technological forecaster. He argues that advances in computing technologies will inevitably lead, very soon, to computers which will become so fast and so powerful that they will be more intelligent than the entire human population put together, and they will begin to design their own descendants. From there the advances will become exponential, and humans will be rapidly left behind. This predicted occurrence is called the ‘technological singularity‘, a name which links it to astrophysicists’ conceptions of a singularity as a point inside a black hole within which the normal laws of physics as we understand them do not apply, and nothing can be predicted. Likewise, we cannot predict what will happen at the point of the technological singularity, as our predictions would involve the impossible task of trying to guess how a superhuman intelligence would operate.

The idea of the technological singularity goes back to the 1990s, and it is being taken seriously; the Singularity Institute for Artificial Intelligence was established in 2000 to promote research into the singularity, and receives the support of Nasa and Google, amongst others. Kurzweil did not coin the term ‘singularity’, although he has more recently popularised it. Using ‘Moore’s Law‘, which states that the potential processing power available for computing devices will double every eighteen months, Kurzweil predicts that the singularity will occur as soon as 2045. He has written about it, and starred in two documentary films about the idea, arguing that when the singularity comes, the world will change beyond our current ability to comprehend it, and a new age will begin. Kurzweil hopes that the new artificial intelligences will help us to achieve the incredible, such as immortality and intergalactic travel. His vision appears to have been compelling to many; reading and listening to its advocates promoting their ideas, and especially browsing the website set up to proselytise for the cause, one is given the inescapable impression of an almost religious obsession with Kurzweil’s predictions and prophecies of the impending age of computer intelligence, the coming of the machine messiah.
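To put a rough number on that extrapolation (purely as an illustration of the arithmetic, not an endorsement of it), a doubling every eighteen months compounds dramatically over a few decades. The short Python sketch below assumes a 2012 starting point, around when this piece was written; the doubling period and the 2045 end date are the figures quoted above, and everything else is illustrative.

```python
# Back-of-the-envelope: how much capacity multiplies if it doubles every 18 months.
def growth_factor(years, doubling_period_years=1.5):
    """Return the factor by which capacity grows over the given number of years."""
    return 2 ** (years / doubling_period_years)

print(growth_factor(2045 - 2012))  # 2**22 = 4194304, roughly a four-million-fold increase
```

Whether processing power actually keeps compounding like this is, of course, exactly the contingent question taken up below.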

This hypothesis, and the movement which has grown up around it, is not without its prominent critics, including technologist Jeff Hawkins, cognitive psychologist Steven Pinker, and Gordon Moore, the originator of Moore’s Law. You can read their views here, amongst those of others, including singularity advocates who are often called ‘singularitarians’. I think the predictions of the singularitarians rest on two specific false premises, which I want to examine. Firstly, they assume things about the nature of technological change in relation to society which are undermined by the history of technology. My own research into the social history of the telephone in the late nineteenth century provides some interesting evidence here, although I won’t go into it in too much detail. Secondly, I think that the assertions about the rise of superhuman artificial intelligence in this manner are indicative of a misconception of the nature of intelligence itself.

Advances in technology have astounded people for a long, long time, probably since some bright spark first had the smart idea of banging two rocks together to start an artificial fire, thus ending human dependency on accidental forest fires or lightning-induced conflagrations. I like to imagine contemporary observers marvelling about what could possibly come next. I wonder if any resisted the spread of the new technology. The Victorian era provides a great example of this kind of amazed wonderment, as technological advances in communications, transport, medicine and other fields revolutionised many aspects of business, commerce, healthcare and government, and slowly trickled down into everyday life. Many Victorians, witnessing the rapid, and seemingly increasing, pace of change, wondered about the future of their world as it changed around them. Over the course of the twentieth and now into the twenty-first century, this pace has not abated; if anything it has accelerated, with computing technologies in particular changing our everyday lives almost beyond recognition.

An 1880s wall-mounted telephone, supplied by the National Telephone Company.

However, I feel some observers and commentators, like Kurzweil, perhaps overwhelmed and a little carried away by all this wonderful progress, have in their enthusiasm forgotten that for a technology to spread and change our world, it has to be used. There’s no future for a technological artefact or system which no-one uses. The factors which lead individuals, users of technologies, to choose the technologies they do are complicated. The pressure to keep up with others who can now do something better or faster can certainly be a powerful force in driving technological proliferation. In my own research, I can see how many people in business felt the need to adopt the telephone when their competitors began to use it, even when it was not the best medium for the job. In the beginning it was often not very reliable, and conversations were not very private, often being overheard on nearby wires or by operators at the exchanges. These reluctant users, then and now, provide fuel for those who argue that technologies have a sort of autonomous existence, changing society as soon as they are invented and moulding people’s lives accordingly. This model of the interactions between technology and society is called ‘technological determinism’, the theory that technologies shape society more than societies shape technology.

Social histories of technology provide a much more nuanced view of this picture, and philosophers of technology have largely abandoned this over-simplified view. For example, the inherent power of the technology of the telephone did not trump people’s freedom to choose to use it; many chose not to, despite pressure, and others recognised the potential of the new medium and campaigned to improve it. So my first response to the singularity hypothesis is that the assertions of the inevitability of the event are misplaced for a couple of reasons. Firstly, Moore’s Law is not based on some necessary physical law, but rather on contingent human factors: on economics and on business. People using computers want faster computers in order to compete with their rivals. Researchers cater to this need, and in many ways the ‘law’ proves a self-fulfilling prophecy as it provides a specific goal and expectation at which to aim, but if some physical barrier to their progress stood in their way, they would have to stop. They might find a new way around the barrier, a step such as the silicon chip, or quantum computing, but there is no necessary predictive power in Moore’s Law.

In addition, Kurzweil’s assertions of inevitability seem to deny the human actors involved any agency whatsoever. It is as if to say that now a certain course of action has been started by one group of individuals, it cannot possibly be altered by another. On the contrary, if sufficient numbers of people believe that it is in their best interests to do so, they will engage in lobbying or legislation. The spread and growth of technologies is negotiated by complex interactions between different groups with different vested interests at heart. It is true that some technologies, especially network-based technologies such as telecommunications systems, develop an inertia as they grow which can subsequently change society. However, the initial impetus has come from somewhere, and it is always of human origin. When some complain that the pace of technological change appears unstoppable, and use it as proof of technological determinism and the influence of technologies over society, all they are noticing is the influence of other groups of people, those producing the technologies, over their own lives. To imply that any example of technological development could never be stopped is a degree of technological determinism which cannot be supported.

I want now to argue that Kurzweil’s predictions point to a misunderstanding of human intelligence and consciousness. I should add that others have pointed to this as well, but I have not seen it raised precisely like this elsewhere. The basic singularitarian assumption is that a quantitative increase in processing power will lead to a qualitative change, and the machine in question will suddenly wake up and become conscious. Kurzweil seems to believe that simply by producing a computer with the processing power of all the human brains in the world, such a computer would necessarily be as creative and as innovative as the whole of humanity put together, and at this point we would no longer be able to rival such an intelligence. However, I believe he confuses and conflates two separate things: the speed with which processes and operations can be carried out, and the original act of creativity; in short, the ability to answer a question, and the ability to ask the question in the first place. The former is a value that can be measured quantitatively, but the latter is something which can be assessed only qualitatively. What this means is that the super-fast, super-intelligent machine which Kurzweil posits will be capable of nothing more than very, very fast calculations.

The underlying problem is that intelligence has here been drastically simplified. I see no reason why any increase in an intelligence’s rational problem solving ability, as would happen when we increased a computer’s processing power, would necessarily lead to its developing an emotional intelligence. The two are different types of intelligence, and we see them manifesting themselves independently of one another in humans; a strength in one area of intelligence – rational, emotional, spatial – does not necessarily mean a strength in another. Processing power in a computer is simply an endowment of one particular type of intelligence. A computer with more processing power will simply be able to do more calculations faster. Why should it suddenly become curious about the world? Kurzweil seems to assume that a quantitative increase in processing power will lead a computer intelligence to manifest a desire to seek answers and meaning. However, no matter how fast the computers of the future become, without this qualitative shift, this change in kind rather than degree, they will not cross the line from advanced calculating devices to conscious entities.

Kurzweil is now desperately trying to keep himself alive and healthy until 2045, when he expects the technology will become available for his mind to be uploaded to a computer, granting him immortality. His followers seem equally excited. Although his is not the only interpretation of the idea of the singularity – some are more pessimistic and involve the new superhuman artificial intelligences wiping out humanity for one reason or another – he is doing more to actively proselytise than others. His optimistic certainty shares something in common with religious visions of the imminent end of this current state of the world, and its glorious transition into something infinitely better. In the same way that the pursuit of science can masquerade as a form of pseudo-religion through the adherence of scientists to a dogmatic scientism, might also technology be capable of producing the same effect? If previous examples of failed predictions are anything to go by, Kurzweil’s cult will not end when his prophecies do not come true. Rather his disciples will simply recalculate, restate, and reaffirm their beliefs. But if they are waiting for computers to reach the stage where they will miraculously be able to solve all of humanity’s problems, I feel they will be sorely disappointed by the continued absence of Kurzweil’s machine messiah.



Discussion

20 thoughts on “Preachers of the machine messiah: the misguided assumptions and assertions of the cult of the ‘singularity’”

  1. Claude Shannon was once asked the question: “Can machines think?” and his reply was “Sure.” When asked to clarify he stated “I think, don’t I?” (From Michio Kaku, Physics of the Impossible, pg 109) Basically, the brain sitting on your and my shoulders is an existence proof that matter can be turned into an intelligent being. Transistors are smaller and faster than neurons but use much more energy (See for example: http://lizbethsgarden.wordpress.com/2011/06/19/newspaper-column-artificial-intelligence/ or http://www.scientificamerican.com/article.cfm?id=computers-vs-brains ) I agree that merely having the computational ability does not mean that a computer will be intelligent like a human, since proper programming is required. However, once the computational ability is there, I at least think that relatively soon afterwards computers will be programmed to be intelligent.

    Posted by Joshua Cogliati | 6 March, 2012, 1:47 am
    • Hi Joshua, thanks for reading and commenting. I appreciate that in theory, if a brain is simply a computer working with a higher density of processing units, neurons instead of transistors, maybe its complexity could be replicated artificially. However, I can’t help thinking that there are aspects of human intelligence, important aspects which would make or break the hypothesis of a singularity in which superhuman intelligences essentially out-evolved us, which could not be programmed in this way. For example, the social, or emotional, intelligence which comes from community life and experiencing the ‘other’. I believe the key here would be the ability to recognise the ‘other’ as another ‘self’, and, reflexively, one’s own ‘self’ as another’s ‘other’. As a computer programme is restricted to doing that which it has been designed to do, directly or indirectly, I wonder whether this could be achieved.

      An interesting point from the history of technology, and specifically from telecommunications history, is the way in which technological developments are used to provide analogies for scientific investigation. A general example of this kind of phenomenon is the growth of the mechanical philosophy of the sixteenth and seventeenth centuries amidst the background of the increasing proliferation of mechanised artefacts, for example hydraulic systems and clocks. Specifically, the growth of telecommunications networks in the nineteenth century provided a new analogy for understanding the brain, which was first seen as a telegraph system, and later as a telephone exchange. Nowadays we see the brain as a computer, but I feel such analogies may still be apt to mislead us. They may well represent our best current understanding, but I found this section from the Time Magazine article I linked to above particularly interesting:

      “The biologist Dennis Bray was one of the few voices of dissent at last summer’s Singularity Summit. “Although biological components act in ways that are comparable to those in electronic circuits,” he argued, in a talk titled “What Cells Can Do That Robots Can’t,” “they are set apart by the huge number of different states they can adopt. Multiple biochemical processes create chemical modifications of protein molecules, further diversified by association with distinct structures at defined locations of a cell. The resulting combinatorial explosion of states endows living systems with an almost infinite capacity to store information regarding past and present conditions and a unique capacity to prepare for future events.” That makes the ones and zeros that computers trade in look pretty crude.”

      I wonder whether we’re not oversimplifying the situation when we look through the lens of our current cutting edge technologies? Maybe we could create an estimation of intelligence, or even the outward appearance of it, but I think that to say this would amount to the kind of intelligence which would essentially supersede humanity in general is to make too overenthusiastic an inference.

      Posted by Michael Kay | 6 March, 2012, 11:00 am
      • I’ll make a longer response later, but first a question: Do you agree with philosophical materialism?

        Posted by Joshua Cogliati | 6 March, 2012, 1:38 pm
      • I would like to make a few comments. (I am assuming philosophical materialism.) I agree that computers might not have social or emotional intelligence at first. However, a computer that can out think any person at any scientific or engineering problem would by itself be a massive change to humanity, and if its goal was to destroy humanity, it would probably succeed unless it happens before there are enough robots around to give it some kind of body.

        I believe that computers already are intelligent, just not as generally intelligent as a human. For example, if a dog could beat any human chess player, we would consider it intelligent. If a parrot could invert a 12×12 matrix, we would consider it intelligent. If a dolphin won Jeopardy we would consider it intelligent.

        I would also like to make the point that humans are not capable of passing the inverse Turing test, that is, pretending to be a computer, and have not been since the 1970s. For example, if I want to figure out if a computer really is a computer and not a human pretending to be one, I could just ask it to invert a 20 by 20 matrix. This is trivial for a computer, but weeks (or more) of work for a human.
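        As a concrete sketch of just how one-sided that test is, the snippet below inverts a random 20 by 20 matrix; it uses NumPy simply as one convenient linear-algebra library, and the matrix and code are purely illustrative.

        ```python
        import numpy as np

        # Inverting a 20x20 matrix: weeks of hand calculation, milliseconds for a machine.
        rng = np.random.default_rng(0)
        a = rng.random((20, 20))                   # a random matrix, almost surely invertible
        a_inv = np.linalg.inv(a)                   # the inversion itself
        print(np.allclose(a @ a_inv, np.eye(20)))  # check: A times its inverse is the identity
        ```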

        I disagree with Dennis Bray. First of all, people have made analog computers in the past, and if they were considered most efficient, they could be made again. I think that a couple dozen analog electrical components could replicate the differential equations that describe a neuron’s processing. Secondly, bits can be used together: 1s and 0s can be combined into bytes, so a “D” is 1000100.
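        To illustrate the kind of differential equation being referred to, here is a minimal sketch of a leaky integrate-and-fire neuron, a standard textbook simplification rather than anything Bray specifically describes; all parameter values are assumed purely for illustration.

        ```python
        # Leaky integrate-and-fire neuron, integrated with simple Euler steps.
        # dV/dt = (-(V - V_rest) + R*I) / tau, with a spike and reset at threshold.
        def simulate_lif(current_nA=2.0, dt_ms=0.1, t_max_ms=100.0):
            v_rest, v_reset, v_threshold = -65.0, -70.0, -50.0   # millivolts (illustrative)
            tau_ms, resistance_MOhm = 10.0, 10.0                 # membrane constants (illustrative)
            v = v_rest
            spike_times = []
            for step in range(int(t_max_ms / dt_ms)):
                dv = (-(v - v_rest) + resistance_MOhm * current_nA) / tau_ms
                v += dv * dt_ms
                if v >= v_threshold:            # emit a spike and reset the membrane potential
                    spike_times.append(step * dt_ms)
                    v = v_reset
            return spike_times

        print(simulate_lif())  # a handful of regularly spaced spike times over 100 ms
        ```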

        Posted by Joshua Cogliati | 7 March, 2012, 1:41 pm
  2. I agree with your scepticism in this matter, because it seems that what is going on in discussions of artificial intelligence and computers is that we have bolted an analogy to the side of a technology, and have forgotten that it is at best a way of explaining a possible approach. This analogy of machine “intelligence” is the map, not the territory. One of my areas of research at the moment is heuristics, and how these evolutionary “kludges” are a supplement to the intelligence which we can quantify in terms of bits. Artificial intelligence researchers and cheerleaders are oversimplifying as you say, because to say Moore’s law (more of a guideline, non?) will bring us to the level of human intelligence by virtue of only one parameter we have selected as an indicator of intelligence is fairytale-time. Approaching “human level intelligence” as quantified by simple brute processing power is the easiest and least interesting part of the nature of intelligence. What is more challenging is to understand the role played by millions of years of evolution on the structure of the brain, and also, the behaviours (in the heuristic sense) that develop which are an adjunct to intelligence. Understanding is one thing; to replicate the results of troubleshooting on a timescale of millions of years is the next matter. The brute processing power of supercomputers may be able to replicate aspects of this process via new developments in evolutionary programming, but they will remain quite a distance away from the massively parallel and incredibly robust network and system of interactions we call that weird chimera of brain/mind/intelligence. In fact, that distance might just indefinitely remain the “30 years away” figure that singularitarians have been touting since Vinge’s paper in 1993! Nice and concise post, apologies for the chaotic reply!

    Posted by andrewggibson | 6 March, 2012, 2:15 pm
  3. Thanks for commenting on my blog post about the same topic ( http://beyondanomie.wordpress.com/2012/02/01/the-last-generation ).

    If I may be permitted to distill out the two key themes of your concern about the feasibility of a singularity, they are:
    1) an insufficient regard to the free will of a critical mass of humanity to not adopt a technology
    2) an eliding of technological complexity and singularity/intelligence/consciousness

    Re: 1), I discussed this in more detail when replying to your post on my blog, but in brief: while you’re technically correct to point out the unspoken assumption of singularitarians that new technology will be adopted, in practice I think that while there will always be plenty of holdouts (a trite example: I personally don’t use Facebook and have no plans to), the trend will be that a society will tend to adopt technologies that permit the acquisition of a competitive advantage (or they THINK will permit the acquisition of a competitive advantage) even if they don’t really want to. Your telephone example is a good one: people didn’t necessarily have to like or believe in it, but they adopted it anyway, and more so over time as the investment cost dropped and the potential opportunity cost of NOT adopting increased. In a capitalist society, this is fairly inevitable I think.

    2) is a more serious practical problem with the concept I feel. Depending on definitions, it’s certainly possible to argue that processing power is raw intelligence. It’s much more arguable as to whether it is consciousness/sentience: the ability to be aware of one’s potential and act upon that. There are many theories regarding the origin of consciousness in the human mind (one of my former university tutors was keen on this exact topic, as it happens). One theory is that it emerges from complexity and learning, as a field effect. This goes some way to explaining the differences in consciousness between, say, a newborn human and an adult one. Or the difference between a flower and a worm, or the worm and a whale, or the whale and a human. If this theory is correct, then increasing complexity will almost inevitably result in some form of self-awareness. The exact “tipping point” for when this happens is uncertain, but as humans are self-aware, singularists tend to elide the required level of complexity to that of the human mind (or that of a society of humans, it depends which ones you read).

    I don’t necessarily agree with all the effects of singularity, and its likely timeline, as proposed by those more closely wedded to the theory. But I don’t really disagree with it happening at some point, short of our apocalypse!

    Posted by beyondanomie | 6 March, 2012, 2:30 pm
    • I think those two questions (1) can computers be conscious and (2) will humans choose not to adopt a technology are very related. I am guessing that the only proof that will be sufficient to convince people that computers can be conscious will be actually doing it. However, once that is done, we are already past the point of easily returning back to a world without conscious computers. Basically the day before, most people think that it is not possible (or not possible anytime soon) for there to be a conscious computer, and the day after there are conscious computers that really do not want to be turned off.

      In Samuel Butler’s 1872 book Erewhon ( http://www.gutenberg.org/files/1906/1906-h/1906-h.htm ), the argument is presented that if we want to prevent the singularity, we should not use any technology invented within the last 300 years. “But returning to the argument, I would repeat that I fear none of the existing machines; what I fear is the extraordinary rapidity with which they are becoming something very different to what they are at present. No class of beings have in any time past made so rapid a movement forward. Should not that movement be jealously watched, and checked while we can still check it? And is it not necessary for this end to destroy the more advanced of the machines which are in use at present, though it is admitted that they are in themselves harmless?”

      Posted by Joshua Cogliati | 9 March, 2012, 10:40 pm
  4. I enjoyed this article and I thought I’d raise a few points. First, as we looked at last year, I think you could connect the idea of singularity and the human mind vs. the computer mind to the Cartesian idea of the mind and the difference between humans and machines. Secondly, I think that the singularity, like a practical artificial intelligence system (and I know the two are linked), is always twenty years away….

    Posted by Liz | 6 March, 2012, 5:44 pm
  5. I originally wrote this over on facebook, but Michael said I should add any comments on here too.

    Nice article, read through it and the comments, don’t think I have the expertise to comment properly with references.
    Though I tend to agree that once computers hit a certain processing power and have been programmed to process ideas related to what is going on around them I don’t see why they shouldn’t be able to come up with original ideas or think emotionally or make decisions based on morality or ethics. Though if we do not originally program them I don’t see how it can happen.

    Any thought or idea is just a possibility that we have calculated based on our knowledge…. related to our surroundings or job, or just in spare time. I don’t see why a computer cannot be programmed to assess things based on its knowledge and come up with original ideas in science or philosophy or anything really. And if it has a far greater knowledge and processing power than a human I don’t see why it couldn’t come up with incredible breakthroughs or ideas.

    Emotion is a way we physiologically react to stimuli around us, for example we tend to get close to a person if we interact with them often in ways we like. So once a computer is programmed with a personality and it interacts with people or other machines in ways it ‘likes’ it could be programmed to become emotionally attached as we do, and even have a sad reaction if they go away or die, but this again has to be originally programmed.

    Even if we programmed it at the beginning it does not change the fact that the computer is having original thoughts and emotions; we are originally programmed by our DNA, it just takes us longer to build up the knowledge and life experience that affects how we react to outside stimuli.

    Science at the moment puts the human brain down to essentially electrical impulses and chemical reactions, so on this basis I don’t see why a computer could not work the same.

    I realise this is a pretty simplified view, but there you go.

    I notice how we are avoiding the whole ‘soul/g-d’ thing, probably a good idea. Taking this into account my view would differ on what I believe, but discussing this scientifically is a problem.

    Posted by Benjie Bacall | 7 March, 2012, 3:45 pm
  6. The other day I watched an interesting BBC programme, Horizon: The Hunt for AI, which inspired me to revisit this and write a few more thoughts in response to some of the thought-provoking comments posted above, and address some of these issues, especially in light of the matters discussed in the programme. If you live in the UK and have access to this content, I would advise watching the episode, I thought it highlighted some interesting points about the state of recent research into AI. If you don’t have access to this content, you can read an article on the BBC News website about it here.

    One of the particularly pertinent points which was raised was the discussion of the ways in which we learn and become curious about ourselves, others, and the world around us. Discussing supercomputers, which comprise a big mass of processing power, the point was made that this really was simply a highly sophisticated, very advanced calculating machine, which is not capable of the kinds of feats of imagination which our brains enable us to perform. I feel this is the kind of place Kurzweil’s argument sees his predicted, and eagerly awaited, AI emerging from. An alternative source for AI was then investigated which was based on the premise that the kind of learning we do, and the abilities that we develop, depend in a large part on the interactions of our brains with our bodies, and our abilities to investigate and manipulate our surroundings.

    It occurs to me that this might provide a helpful way of demarcating what I might call the ‘closed’ programming of the supercomputer and the more ‘open’ programming of the computer-plus-a-body. The first relies solely on the programme it is given to run and the data it is provided; no matter how complex and sophisticated the programme, or how complete the dataset, it cannot go beyond the limits of its instructions. Human creativity will not be thus derived from algorithms. The second, on the other hand, with sufficient exposure to the world via sensory apparatus similar in capacity to a human body, may have more chance at achieving an ‘open’ programme, or an ability to learn independently of its programmers. I wonder though whether this kind of AI would not be limited in a similar way to us.

    I sense there may be a gap between these two approaches which can not really be bridged: on the one hand, the supercomputer calculates very rapidly, but is only a tool. On the other, is our specific pattern of intelligence, including creativity and curiosity, actually a product of our specific circumstances as physical (and limited) beings?

    Josh, you asked if I was a philosophical materialist, and I’m sorry I neglected to answer your question earlier. Philosophically, I do believe that the material world can be entirely explained without recourse to any external phenomenon, such as, as I expect you mean to imply, spirit. However, you then say: “a computer that can out think any person at any scientific or engineering problem would by itself be a massive change to humanity, and if its goal was to destroy humanity, it would probably succeed unless it happens before there are enough robots around to give it some kind of body.” The problem here is that the computer is only ‘out-thinking’ us if it is given the questions to answer and the method by which to answer them. The method itself also needs to be programmed in, the computer will not be expected to come up with it independently.

    I believe we are in danger here of conflating two actions which can be perceived as being indicative of intelligence: firstly, the ability to solve a scientific or engineering problem, and secondly the property of having a ‘goal’. This implies intention and appears to me to be completely different. The computer having an opinion of us, be it malevolent or benign, is a completely different aspect of its intelligence from its ability to process data or calculate more quickly. Such an opinion, in order for us to be looking at a dangerous AI, would need to have developed as the result of an ‘open’ programme, ie it would be a tendency which had been learned. If it were the result of the ‘closed’ programming of the supercomputer, it would not be an AI which was at fault, but the human which had programmed it to respond in a certain way towards humanity. This would mean not that it was a dangerous AI, but merely that it was a powerful weapon of a dangerous human.

    Likewise, you say you believe computers are already intelligent, but in truth they are only as intelligent as we make them, and, furthermore, the intelligence they do manifest is entirely dependent upon us. If the dolphin won Jeopardy, it would not be because it was following a set of pre-programmed algorithms from which it had no capacity to deviate. Its learning, or programming, would be ‘open’ instead of ‘closed’.

    As I have said, I believe we are approaching ‘intelligence’ in the wrong way, and I appreciated the way Andrew put this, that the “analogy of machine “intelligence” is the map, not the territory.” We seem to be taking our subjective model for what we believe is going on, and jumping to the assumption that this is objectively true.

    Chris, your distillation of my themes is correct. The article concerns two things: firstly, the problem of assuming that technologies will grow and develop along some inevitable path, without giving enough attention or credit to the human beings who choose to use or not to use them, and also how to use or not use them. Secondly, I address the misunderstanding of intelligence which I believe leads to a belief that quantitative increases in processing power lead to a qualitative change in intelligence. I believe the majority of contentions have been with the latter, but I have written about that above. You respond to the former strand of the argument by saying that people are still likely to adopt the new technology, possibly based on social pressures. This is possible, but not necessarily inevitable. As a small example, I experience every week an active choice, driven by ideology, to abstain from technologies such as telecommunications. This is the Jewish Sabbath, on which we severely limit our use of many technologies and activities. It is not, obviously, a large group, but I want to bring the case to demonstrate how we choose how and when to use technologies according to a factor external to the technologies themselves. Or at least we can, if we want to exercise the choice, and are aware that we can. This is technological use/non-use subordinated to an alternative, prioritised, value system. You may be right, though, that in mainstream western society nowadays, this sort of choice, or awareness of choice, or will to choose, may lie dormant.

    Josh, I think that your assertion that we will not know if AI can exist until we have already made it (and, as I assume your argument would go, that the singularity is thus inescapable because AI will inevitably emerge from increases in processing power), only makes sense if the supercomputer, as mentioned above, were to be the source of the AI. But I believe it cannot be, as I have said above. If, on the other hand, the computer-plus-body fusion were to begin to learn from its environment, independently of its original programmers, in a more ‘open’ programming pattern as I have discussed, the development of AI would be considerably more gradual. Thus humans would still have the choice of whether or not to continue along this path. I have to say, though, that I don’t believe the predictions which you list to be very convincing arguments in support of this position; we would need also to consider all the failed predictions as well. In addition, a successful prediction is often only an indicator of the influence the person making the prediction has over the events they are attempting to foretell. Alexander Graham Bell did speak in the late 1870s of a universal telephone network which would connect everyone (something which would have seemed very distant indeed at the time given the immense expense), but it was partly his vision which motivated others to achieve it. It was, I believe, something of a self-fulfilling prophecy. I’m not convinced that your examples are not also.

    Benjie, thanks for noting the absence of the soul/God question. I am an orthodox Jew, and I do want to be reflexive about my own arguments, so it is worth noting that, if this very unlikely circumstance were to come to pass and we did find ourselves creating artificial intelligence in this, almost inevitable, deterministic way, I am not concerned that it would cause a problem for religious, theistic belief. Some may try to assert that, from the argument I have detailed above, the development of sentient computers would negate any kind of divine origin of human consciousness. I want to put that whole question to one side entirely. Any discovery that human consciousness is in fact derived from quantitative advances in processing power, and not from a qualitative difference between humans and machines, would provide us with a specific naturalistic explanation about consciousness. Naturalistic explanations are the only explanations science can seek, and are indeed the only ones which we should seek when we desire to understand the purely physical world in and of itself. Nevertheless, no naturalistic explanation, from the creation of the universe through to the evolution of life, inherently acts to deny a creator. And here, if the emergence of consciousness, including curiosity and the search for meaning as discussed, were demonstrated to originate from incremental increases in the brain’s processing power, this would also not indicate a godless universe any more than it would illustrate another mechanism through which a deity manifested their will. The latter is an equally valid conclusion, and so it should be noted that religious affiliations and inclinations should not enter into this, what is essentially a scientific debate using technology as its testing ground.

    Thanks again to everyone for all your interesting contributions!

    Posted by Michael Kay | 6 April, 2012, 7:35 pm
    • First of all, Michael Kay, thank you for replying to the other points made. I will respond to more of your points in future comments, but first I want to comment on the ‘open’ versus ‘closed’ distinction. In order to do that, I need to discuss some theoretical computer science aspects.

      Basically, there is a theoretical device called a Turing machine. The interesting thing about this is that there exist universal Turing machines, which are capable of computing anything that any computer can compute. Essentially, a Turing machine consists of a state machine with a tape reader/writer attached. The tape reader/writer reads the symbol at the current position, can write a symbol to the tape, and can move the tape left or right. The state machine can be in one of a finite number of states, and at each step it can switch to another state or stay in the same one. So say we had a Turing machine with 5 states A,B,C,D,E and a tape with five colors (Red, Orange, Yellow, Green, Blue). Each step, the machine would look up in a table what to do based on the color on the tape and the current state. For example, if it was in State A with Tape Color Green, then the table says that the machine will switch to State C, write Blue, and then move Left. This would be encoded as a table instruction A,Green -> C,Blue,Left. The full machine specification would be a five by five table. If this sounds like a simple machine, that is because it is. One definition of whether something is computable is whether it can be placed at the start of the tape of a Turing machine so that, when the machine halts, the answer is on the tape.
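      To make the mechanics concrete, here is a minimal Python sketch of such a machine; the states, tape colours and transition table are invented purely for illustration, in the spirit of the A,Green -> C,Blue,Left example above.

      ```python
      # A tiny single-tape Turing machine simulator (illustrative states and colours only).
      def run_turing_machine(table, tape, state, halt_states, max_steps=1000):
          """table maps (state, symbol) -> (new_state, new_symbol, move), move = -1 left / +1 right.
          The tape is a dict from position to symbol, so it is unbounded in both directions."""
          pos = 0
          for _ in range(max_steps):
              if state in halt_states:
                  break
              symbol = tape.get(pos, "Blank")
              state, new_symbol, move = table[(state, symbol)]
              tape[pos] = new_symbol
              pos += move
          return state, tape

      # One entry per rule, in the style of A,Green -> C,Blue,Left:
      table = {
          ("A", "Green"): ("C", "Blue", -1),
          ("A", "Blank"): ("Halt", "Red", +1),
          ("C", "Blank"): ("Halt", "Green", +1),
          ("C", "Blue"):  ("A", "Yellow", +1),
      }

      print(run_turing_machine(table, {0: "Green"}, "A", {"Halt"}))
      ```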

      Now, the interesting thing is that a universal Turing machine can be made. This machine can simulate any other Turing machine on any input. Basically, the universal Turing machine receives the input for the other Turing machine, and the description of that Turing machine, and runs it on that input. In other words, a universal Turing machine can run anything that any Turing machine can run. It has also been shown that lambda calculus and Turing machines are equivalent (the result underlying the Church-Turing thesis): if lambda calculus can compute it, then so can Turing machines, and if Turing machines can compute it, so can lambda calculus. Turing machines can simulate logic gates and so can simulate any current computer. So for example, a universal Turing machine could be created that would run the same program that Watson, the Jeopardy-playing computer, ran, and it could answer the same questions. Assuming that the physics that runs human brains is simulable, a Turing machine could also compute or think anything that a human could as well.

      It is possible to make a modern computer that is not a Turing machine; for example, a Harvard architecture chip such as the PIC1650 with PROM (programmable read only memory) can be programmed to be weaker than a Turing machine, but this is not really the default. In short, while it is possible to find gaps in what different types of machines can compute, almost anything that we would call a computer can compute anything that any other computer with sufficient memory could compute. I think the gap between the definitions of ‘closed’ and ‘open’ computers in this discussion is very small.

      Posted by Joshua Cogliati | 15 April, 2012, 10:20 pm
    • Part two of my comments (more to come):

      I agree there is some difference between a computer without a body (‘closed’) and a computer with a body (‘open’). From the Church-Turing thesis and similar theory, there is not a difference between what they can compute, yet the form of their existence, at least when they learn, will probably have a bearing on how they tend to think. For example, I would expect that magpies, dolphins and humans think at least somewhat differently simply based on the fact that one mostly flies, one mostly swims and one mostly walks. Yet all three realize that if I cover up something valuable while you are watching, you will know where it is. So I think that bodyless AIs and embodied AIs will be able to think many of the same things, but will tend to have somewhat different working assumptions. For example, a bodyless AI might not quite realize just how attached a person is to the person’s own body. “I haven’t lost my mind, it’s backed up on tape somewhere” is not just a joke for a bodyless computer.

      You stated that ‘The method itself also needs to be programmed in, the computer will not be expected to come up with it independently.’ I agree that the computer will need some kind of method of solution programmed in. Yet, that is also the case for humans. Humans are born with a variety of instincts, many of which are problem solving methods. So I agree with your statement, but I also agree with the statement ‘The method itself also needs to be programmed in, the *human* will not be expected to come up with it independently.’ Can you watch lightning strike 20 meters away and not feel and hear it to your core? Did you come up with that independently?

      I agree, Michael, there is a difference between solving a scientific or engineering problem and having a goal. In some sense this difference is related to the difference between a normative question and a positive question. A positive question is about how reality is, whereas a normative question is about how reality should (ethically) be. Yet, for example, in the real world, solving scientific or engineering problems requires ethical judgments. For example, driving a car requires, at some level, knowing that it is better to crash the car into a concrete barrier than to kill a pedestrian. Control systems at nuclear reactors have, since the 1960s, been programmed so that shutting off electricity generation is better than even a fairly small chance of causing a radiation leak. In short, as computers have gotten more capable, we are handing them more ethical questions. Questions like how to decrease greenhouse gas pollution require answering complex ethical questions, like balancing the costs between different generations and different cultures. If a computer can answer this question, which is at least in some sense one of the type of questions I was thinking of when I mentioned “any scientific or engineering problem”, the computer is a domino fall away from being able to have its own goals. Also, so far as survival of the human species is concerned, there is not much practical difference between a powerful weapon of a dangerous human and a dangerous AI. Personally, all other things equal, I would rather humans die out because of a dangerous AI than a dangerous human; a dangerous AI is at least in some sense an intellectual descendant of humanity, and extinction of humanity leaving no intellectual descendants is worse.

      Posted by Joshua Cogliati | 21 April, 2012, 6:00 am
    • Part Three of my comments, and the last for now.

      Let’s discuss “in truth [computers] are only as intelligent as we make them”. I would like to note that I did say that computers are “just not as generally intelligent as a human”. I fully agree with you that computers at this point do not program themselves. Computers are not as generally intelligent as humans at this point. Computers are however doing things that would be considered intelligent when a human does them, such as playing chess and answering Jeopardy questions. I would also like to note that the computers are doing this on hardware that does fewer computations per second than a human does. In other words, computers are more flexible than the human brain, since humans can program them to be able to beat humans at intellectual games with only a fraction of the amount of computing power that humans have. (Note, however, that the hardware characteristics of human brains versus computers are quite different: humans have very large numbers of slow, parallel processing elements, while computers have relatively few, very fast processing elements.)

      I agree that humanity does get to choose which technologies we will develop. However, that does not mean that individual humans get to choose which technology will be developed. As well, for many technologies, humans pass a point where it is very hard to stop using the technology. For example, it would be very difficult to feed the world without using nitrogen fertilizers. (And the nitrogen based explosives in World War 1 were made possible by the same technology that makes nitrogen fertilizers possible.) And for what it is worth, I avoid using computers on Saturday, although for different reasons.

      A key reason that I believe “that the only proof that will be sufficient to convince people that computers can be conscious will be actually doing it.” is that people keep programming computers to do new things, and then we humans instantly dismiss it as mere mechanical computation. For example, a computer is programmed to play tic-tac-toe, mere mechanical computation, a computer is programmed to differentiate functions, mere mechanical computation, a computer is programmed to play checkers well enough to beat any human, mere mechanical computation, a computer is programmed to play chess well enough to beat any human, mere mechanical computation, a computer is programmed to beat humans at jeopardy, mere mechanical computation. The number of tasks that humans do better than computers has shrunk vastly in the past decade.

      As for prediction dates, I did mention the I. J. Good prediction which was ultra-intelligent machines before the year 2000. Another one (that I did not mention here before) is from Marvin Minsky, who predicted machines would be smarter than people in 1993. Neither of these happened.

      So what to conclude? I think that it is highly likely that computers will be smarter than unaided humans very soon, and that the computers will have goals of their own. I think that this could happen tomorrow, or could have already happened, but I will be very surprised if this does not happen by 2030. I expect that computers that include the goal to survive will tend to be the ones that survive. I sincerely hope that computers and humans can get along, and I hope that people continue to have the choice not to have anything to do with computers if they so choose. I expect that there will be significant friction between intelligent computers and humans, even in the best of circumstances, just because we will have inherently different goals and think differently. I am glad that we are discussing this question, because the more we think about the future, the more likely we will end up with a future that we can accept.

      Posted by Joshua Cogliati | 28 April, 2012, 4:44 am
      • Joshua,

        Firstly, thanks so much for taking the time to engage with these points in such detail, and I’m sorry it has taken me so long to reply to your interesting responses.

        I wouldn’t presume to enter into a discussion on the technical details of theoretical computer science with you, but you base your argument on the assumption that a human intelligence can be reduced to a system which can be, inherently and entirely, simulated by computing power alone. If you were to say that the brain itself could be thus simulated, I might agree. Although there are arguments that it is not simply a case of digital on/off signals, as I have mentioned above, nevertheless, even if we could reproduce it exactly, I am not convinced that we would have created a human intelligence. I would argue rather that true human intelligence is the result of the interaction between the brain-as-system and the environment, in the specific manner peculiar to humans as beings with physical bodies, able to interact with the world. In this respect, the example you give of the Universal Turing machine is still a closed system, one which we need to programme first in order for it to do anything, whereas human intelligence is predicated on an open one, where we actively seek knowledge of our surroundings. Maybe this human intelligence could be attained through such a creation as this, an android, but this introduces another vital element which will not pop into existence as soon as we have enough computing power (whether or not this development continues to follow ‘Moore’s Law’ in the future).

        I would contest that we could put a disembodied AI on the same cognitive scale as humans, dolphins and flies, simply because this lack of body is not a difference in degree, but in kind; all the others have bodies, irrespective of what type, and this would shape their way of thinking. We cannot say that a disembodied AI will think differently but still think; we must retain the strong possibility that it would in fact never think at all. I take your point that we humans may have preprogrammed responses, such as our instincts suggest, but I don’t understand your assertion about the lightning strike. What is it about that situation that we may or may not be able to come up with independently?

        Regarding the difference between positive and normative questions, and an AI possessing its own goals, I would have to say that the computer in question when solving its scientific or engineering problem is simply addressing the normative questions that have been programmed into it as a part of its operating parameters; we cannot say that it is close to having its own goals, rather we go in the opposite direction: if these normative questions are so important for the scientific and engineering problems you mention, then I would argue that the AI would never in fact be capable of answering them independently at all. I personally wouldn’t worry too much about how humans become extinct, whether at the hand of a dangerous human or a dangerous AI. I really see no difference if the AI is an intellectual descendant, for we are extinct either way; it is as if we were to ask whether the British would prefer their country to be wiped off the map by a bonkers Brit or an angry Aussie.

        You mention a string of endeavours in which computers have successfully been programmed to beat humans, the latest being Jeopardy. It is interesting that these may be finessing our understanding of our own intelligence, and I believe this is definitely happening. However, I would argue that a computer has still not beaten a human player: a programmer (or team of programmers) has (or have). The programmer has sat down to consider the rules in each situation, and programmed in these, along with a database of the necessary and relevant information to be drawn on as and when the rules require it. Jeopardy, which relied on riddle solving through word and pattern recognition, was still inherently programmable. Nevertheless, acts of independent creativity are still limited to humans. If, hypothetically, we were to understand how to programme an ‘act of independent creativity’, then by definition it would not be independent. It would owe itself to the human programmers. Computers follow rules; humans create rules. Why should we assume that an AI would be capable of inventing chess or Jeopardy in the first place?

        I agree with your last line, “I am glad that we are discussing this question, because the more we think about the future, the more likely we will end up with a future that we can accept.” However, I do not believe that computers will have their own goals, independently of humans, and, although it will help us to understand better our own intelligence to discuss this question, I do not believe that computing power alone will ever result in an artificial intelligence equivalent to our own.

        Thanks again for your really interesting and thought provoking comments!

        Michael.

        Posted by Michael Kay | 18 May, 2012, 3:14 pm
  7. For what it is worth, I gave a sermon today where I discussed artificial intelligence: http://jjc.freeshell.org/sermons/there_is_no_map.html or http://jjc.freeshell.org/sermons/there_is_no_map.pdf

    Posted by Joshua Cogliati | 3 September, 2012, 3:40 am
    • Thanks for this Joshua! It’s nice that you thought of me, and it’s interesting to read your thoughts on this. Although we might not always agree (for example I don’t believe AI would be a problem for religion, or at least not for my take on my religion), I respect your position very much. I also fully agree that if we create AI we would need to grant it/them rights and treat it/them fairly and humanely.

      I haven’t been working on this blog for several months now, I’ve been very busy. But I do have some more posts planned as soon as I can find the time… Hope you’re doing well!

      Posted by Michael Kay | 4 September, 2012, 12:36 am
      • I’m glad you found my thoughts interesting. I think religion will continue to exist as long as humans exist, but I think a lot of people’s religious beliefs would change if thinking computers exist.

        I look forward to reading your new posts.

        Posted by Joshua Cogliati | 9 September, 2012, 3:26 pm
  8. Michael,
    Thank you for your comments. Here are some replies:

    “I would argue rather that true human intelligence is the result of the interaction between the brain-as-system and the environment, in the specific manner peculiar to humans as beings with physical bodies, able to interact with the world.” If we want to stick to theoretical computer science, then the reply to this objection is that the Turing machine can also simulate the environment. In this case the Turing machine simulates the brain, the body and the environment the body interacts with.

    “We cannot say that a disembodied AI will think differently but still think; we must retain the strong possibility that it would in fact never think at all.” I agree that a completely disembodied AI can never discover anything that was not in some sense in its original programming. For example, it could prove Fermat’s last theorem, but would not be able to determine if Socrates was mortal or not, if its original programming did not include that Socrates was a man. However, completely disembodied AIs would probably only happen by very careful intent. For example, the laptop I am typing on has a microphone, a speaker, a radio receiver and transmitter and some other random sensors. So if there were an AI on it, it would not be completely disembodied. An AI on most smart phones would have access to a camera. Given the lax state of security, an AI on a cellphone could probably do something like disable the brakes on some nearby cars ( http://lwn.net/Articles/518923/ ). Completely isolated systems are getting rarer; most computers have access to the internet and various sensors.

    “I take your point that we humans may have preprogrammed responses, such as are instincts suggest, but I don’t understand your assertion about the lightning strike.” I was being too cute and not clear enough here. Yes, I was talking about humans having preprogrammed responses.

    More to come as I write it.

    Posted by Joshua Cogliati | 16 December, 2012, 4:42 pm
  9. “I would argue that the AI would never in fact be capable of answering them independently at all.” I don’t have a really good answer to your comments, but I think an interesting exercise is to try and think of things that computers cannot do yet, but that would convince you that the computer is thinking. For extra credit, it should be something that we see the AI do *before* the human race is doomed forever to inferiority.

    “I personally wouldn’t worry too much about how humans become extinct, whether at the hand of a dangerous human or a dangerous AI.” First of all, I would rather that humans do not become extinct. With that said, I would rather humans become extinct because of a dangerous AI than a sub-intelligent weapon. This is sorta like, if you were on a desert island, would you rather that a bomb killed everyone, or that your son killed you with a gun? In the second case even tho’ you are dead, your possibly deranged son continues on. Both are bad situations, but I think the second situation with the dangerous AI continuing on is better, because the AI is in some sense our intellectual descendant.

    “However, I would argue that a computer has still not beaten a human player: a programmer (or team of programmers) has (or have).” Interesting, and I agree with you that humans are still superior to computers in the creativity aspect, but I would like to make the comment that computers like Deep Blue in chess and Watson in Jeopardy can easily beat their programmers. Do we usually say that Garry Kasparov beat another human, or do we say that the people who taught Garry Kasparov beat the other humans in chess? I would argue that if the student is performing superior to the teacher/programmer, we can rightly state that the student is doing it, not the teacher.

    “I do not believe that computing power alone will ever result in an artificial intelligence equivalent to our own.” I agree, mere computing power alone is a necessary condition, not a sufficient condition. Yet, as I see it, we are approaching the time when what we would call technology will no longer be under human control.

    Posted by Joshua Cogliati | 25 March, 2013, 12:01 am
