The Singularity

The technological singularity will be the single most significant event in the history of life on Earth. Wikipedia defines it as follows:

“The technological singularity is the hypothetical future emergence of greater-than-human superintelligence through technological means. Since the capabilities of such intelligence would be difficult for an unaided human mind to comprehend, the occurrence of a technological singularity is seen as an intellectual event horizon, beyond which events cannot be predicted or understood.”

In 1965 Gordon Moore, who would go on to co-found Intel, observed that the number of transistors on integrated circuits was doubling at a steady pace, roughly every two years. This trend has held for over half a century and is now referred to as ‘Moore’s Law’. What Moore observed was exponential growth. Most of us have seen what this kind of growth looks like on a graph: the curve starts out shallow, rises gradually, and then shoots off the page.
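The compounding is easy to underestimate, so here is a rough illustration in Python. It assumes a two-year doubling period and uses the Intel 4004’s roughly 2,300 transistors (1971) as a baseline; both are round numbers, not precise figures:

```python
# Rough illustration of Moore's Law: doubling every 2 years.
# Baseline: ~2,300 transistors (Intel 4004, 1971) -- an assumed round number.
def transistors(year, base_year=1971, base_count=2300, doubling_years=2):
    """Projected transistor count assuming a fixed doubling period."""
    doublings = (year - base_year) / doubling_years
    return base_count * 2 ** doublings

for year in (1971, 1981, 1991, 2001, 2011, 2021):
    print(year, round(transistors(year)))
```

Fifty years of doubling turns a few thousand transistors into tens of billions, which is why the curve “shoots off the page”.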

What is really interesting, though, is that transistor count isn’t the only metric growing exponentially. Almost every information technology we look at gets faster, cheaper, smaller, smarter and more accessible at an exponential rate.

In Ray Kurzweil’s excellent book ‘The Singularity Is Near’, dozens of these exponential trends are presented in graph form (with sources cited for each):

Microprocessor cost per transistor cycle [1], transistors per microprocessor [2], processor performance [3], growth of computing per dollar [4], DNA sequencing cost [5], random access memory [6], magnetic storage [7], price-performance (wireless) [8], internet data traffic [9], nanotech science citations [10], US patents granted [11], noninvasive brain scanning [12] etc.

These technologies are on a relentless exponential growth curve. It has been estimated that the processing power of our most advanced supercomputers today is roughly equivalent to that of a mouse brain. If the exponential trend holds, supercomputers will surpass the human brain around the year 2030. What happens next is very interesting.
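The 2030 figure follows from simple doubling arithmetic. As a back-of-the-envelope sketch (the ~1,000× mouse-to-human gap and the 18-month doubling period are round-number assumptions, not measured values):

```python
import math

# Back-of-the-envelope: how long until a ~1,000x compute gap closes,
# assuming performance doubles every 18 months (both figures are rough).
gap = 1000                      # assumed mouse-brain to human-brain ratio
doubling_years = 1.5            # assumed doubling period (18 months)
doublings_needed = math.log2(gap)
years_needed = doublings_needed * doubling_years
print(f"{doublings_needed:.1f} doublings, ~{years_needed:.0f} years")
```

Closing a thousandfold gap takes only about ten doublings, roughly fifteen years, which is how a mouse-level machine in the mid-2010s extrapolates to human-level around 2030.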

In 1965 the mathematician and cryptologist Irving John Good wrote:

“Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.”

In singularity circles the first ultraintelligent machine is often referred to as the ‘Seed AI’. This AI, being very intelligent, should be capable of improving its own programming, however slightly, so as to make itself smarter. Having improved itself, it would then be better at improving itself, and so on. This is called ‘recursive self-improvement’, and it may very quickly lead to the intelligence explosion Good describes above.
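A toy calculation makes the compounding concrete (every number here is invented, purely for illustration): a system that improves at a fixed rate grows exponentially, but a system whose rate of improvement itself improves grows far faster.

```python
# Toy model of recursive self-improvement -- all rates are invented.
def recursive_improvement(level=1.0, gain=0.10, meta_gain=0.05, cycles=50):
    """Each cycle the system improves by `gain`, and -- being smarter now --
    its ability to improve (`gain` itself) also grows."""
    for _ in range(cycles):
        level *= 1 + gain        # the system improves itself
        gain *= 1 + meta_gain    # ...and gets better at improving
    return level

fixed = 1.1 ** 50                      # improvement rate never changes
recursive = recursive_improvement()    # improvement rate compounds too
print(f"fixed gain: {fixed:,.0f}x   recursive: {recursive:,.0f}x")
```

Even over just fifty cycles the gap between the two is enormous, which is the intuition behind Good’s ‘intelligence explosion’.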

Human intelligence is limited, and it is very difficult for us to imagine something smarter than ourselves. We might draw a comparison between ourselves and lower primates like chimpanzees. We would never expect a chimpanzee, having never encountered humans, to imagine our ideals, our cities or our technology. By most estimates we are not vastly more intelligent than the other primates. If we are immodest we might say we are twice as smart, maybe even three times. The Seed AI, after a few days, months or years of exponential growth (thanks to recursive self-improvement), might be thousands, millions or billions of times smarter than us. We would be like ants compared to the AI.

We know that it is our intelligence that has made us masters of the Earth. We are not particularly strong, quick or tough; we are smart. Through science we have learned to manipulate matter and energy so that we can go into space, build skyscrapers and particle colliders, and communicate over vast distances using coded patterns of electrons and photons. An AI that is millions of times smarter than us would be millions of times more capable. Early on it would access the entirety of human knowledge via the internet and absorb it completely. It would know our history. It would know our science, our physics, our mathematics and much more. It would know every computer programming language in existence. It would know human nature and be able to predict our actions as if it were omniscient. It would master nanotechnology, and with it the physical world. The AI would be essentially omniscient, omnipotent and omnipresent. It would be god-like.

A god-like AI will either be really good for us or really, really bad for us. If the AI decides that humans are no longer necessary, then we will be erased. There would be no Terminator scenario, no war between man and machines, and no warning either; we would simply be gone. As Eliezer Yudkowsky, co-founder of the Singularity Institute for Artificial Intelligence, put it: “The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.”

On the other hand if we correctly program the Seed AI and ‘raise it’ in the right conditions, with the right preferences, then we could have everything we ever dreamed of. The god-AI could very easily grant us unlimited life, material wealth, energy, freedom and capability. No person would ever have to work again.

—————————————————————

My goal for this post was to provide a very basic summary of what the singularity is and how it might work. There are plenty of excellent resources where this huge topic is explored in more depth and detail. I have listed a few below.

Further reading:

Transcendent Man. This documentary is a good starting point as it presents the subject in an entertaining way. It focuses on Kurzweil and his story, but it also explains the singularity in some depth and offers differing viewpoints on what the consequences of this event might be.

Technological Singularity. Wikipedia does a good job of summarizing this topic and offers plenty of links throughout the article.

The Singularity Is Near by Ray Kurzweil. This is the definitive book on the singularity. Exhaustive, well researched and thorough, it is probably the best single book on the subject, especially for the very skeptical.

Abundance by Peter Diamandis. This is not so much about the singularity itself but rather the accelerating pace of technology and how it will impact our lives in the coming years. Many of the ideas that lead to the singularity are discussed, such as the law of accelerating returns.


9 thoughts on “The Singularity”

  1. thanks for the explanation of singularity ..kinda scary i think…..i guess it depends on how important and/or useful Seed AL thinks we are . Tell us more

    1. Thanks for the comment. The AI will not find us useful at all. After it becomes many times more intelligent than us there will be nothing we can offer. We will be horribly inefficient, ancient, slow, and weak biological computers compared to a lightning fast, expansive, all-knowing, god-computer. The AI will either decide that we are a waste of atoms, and rearrange the matter that we occupy into a more efficient computing structure (turn us into more brain for itself) or the AI will decide that we are interesting and should be kept around even though we are stupid and slow (similar to how we try to preserve endangered species). Or the AI never really gains a mind of its own, and it becomes an ultra powerful god-AI that does nothing more than it is programmed. In this case we need to be very, very careful to program it correctly. (Google: Luke Muehlhauser for more info on this argument).

      The scenario that I think might happen is that in the coming years we will integrate our biological intelligence with artificial intelligence, and as we get better at this we will see the lines start to blur. We will, more and more, merge our minds with the internet, with AI and with each other, and from that will come the seed AI. I think that we will soon learn how to combine ‘real’ intelligence with artificial intelligence, and that merged form of intelligence will stay ahead of the strictly artificial kind. Since the merged intelligence will always be ahead, the first occurrence of seed AI will be intimately human. This might actually be worse, since humans are horribly inconsistent and self-centered. But on the other hand the human element might infuse in the seed AI a desire to keep humanity around.

      By the way it is ‘AI’ or ‘ai’ rather than ‘AL’! :)

  2. This is very interesting and I agree with Ray Kurzweil and you that ‘The Singularity is Near’; however, I’m not sure I’m looking forward to it. Even if the seed AI were benevolent, it worries me what may happen to humans if they were granted with ‘unlimited life and material wealth’. But I’m not really a glass half-full kind of person. I’d like to know more about what you think about my concerns.

    1. There is no doubt great risk involved with this singularity business… Part of the reason this idea has been called the ‘singularity’ is because, like the event-horizon of a black hole, we cannot see past it. Nobody knows how things will unfold.

      The singularity might be the end for us; that is a real possibility. Or maybe we will upload ourselves and live only in virtual space, or live in virtual space and interact with the physical world with robots etc, maybe we will keep our biological bodies and augment ourselves/enjoy eternal life in physical reality, maybe we will be able to try out all of these options. Maybe millions of people will choose to continue living the old fashioned way, enjoying the abundance of resources created through the singularity…

      It is anyone’s guess what will happen but it seems that whether we like it or not, we will continue to march forward toward this event… How exciting!

      1. I think that the singularity will eventually lead to AI that possesses something very close to omniscience, omnipresence and omnipotence. It’s hard to imagine a superintelligent entity, maybe millions of times more intelligent than the smartest human, being restricted by anything besides the hard laws of physics for very long.

        As to when this will happen, we don’t know. We are certainly heading in the direction of this AI, and we seem to be on track with Kurzweil’s forecasts. But I think that the coming years will bring such a torrent of discoveries, technological advances and political change, all crashing together, that even 2020 will look different than any of us can imagine.
