What is Superintelligence? – Part 1

How many movies, cartoons and sci-fi series have you seen featuring some kind of superintelligent robotic race? Probably quite a few. In some films, such as Terminator, they come to conquer the world; in others, they help us out; and in some, like Wall-E, they’re simply adorable. Of course, these robots are fictional, but will they always be? Will the future bring superintelligent AI? If it does, what will they look like and when will they appear?

In Superintelligence: Paths, Dangers, Strategies by Nick Bostrom, we learn about the journey toward AI so far – where we might be going; the moral issues and safety concerns we need to address; and the best ways to reach the goal of creating a machine that’ll outsmart all others.

What fundamentally sets us apart from the beasts of the field? Well, the main difference between human beings and animals is our capacity for abstract thinking paired with the ability to communicate and accumulate information. In essence, our superior intelligence propelled us to the top.

So what would the emergence of a new species, intellectually superior to humans, mean for the world?

First we’ll need to review a bit of history. Did you know that the pace of major technological revolutions has been increasing over time? At the snail’s pace of progress a few hundred thousand years ago, human technology would have needed one million years to become economically productive enough to sustain the lives of an additional million people. During the Agricultural Revolution, around 5,000 BC, this figure dropped to two centuries. And in our post-Industrial Revolution era it has shrunk to a mere 90 minutes.

A technological advancement like the advent of superintelligent machines would mean radical change for the world as we know it. But where does technology stand at present?

We have already created machines with the capacity to learn and reason using information that’s been plugged in by humans. Consider, for example, the automated spam filters that keep our inboxes free of annoying mass emails while letting important messages through.
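The post doesn’t describe how such filters work, but a classic approach is a naive Bayes word-frequency test: compare how often each word of a message appears in known spam versus known legitimate mail. A minimal sketch, with invented training data and simplified smoothing:

```python
import math
from collections import Counter

# Toy training data: (message, is_spam). Real filters learn from
# millions of labelled emails; these examples are invented.
messages = [
    ("win money now", True),
    ("cheap money offer", True),
    ("meeting at noon", False),
    ("lunch at noon tomorrow", False),
]

spam_words, ham_words = Counter(), Counter()
for text, is_spam in messages:
    (spam_words if is_spam else ham_words).update(text.split())

def spam_score(text):
    """Sum log-ratios of word frequencies in spam vs. legitimate mail.
    A positive score means the message looks more like spam.
    The +1 is add-one smoothing so unseen words don't zero out."""
    score = 0.0
    for word in text.split():
        p_spam = (spam_words[word] + 1) / (sum(spam_words.values()) + 2)
        p_ham = (ham_words[word] + 1) / (sum(ham_words.values()) + 2)
        score += math.log(p_spam / p_ham)
    return score

print(spam_score("win cheap money") > 0)  # → True (flagged as spam)
print(spam_score("lunch meeting") > 0)    # → False
```

The key point for the argument above: the filter’s “reasoning” is nothing more than statistics over information humans plugged in.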

However, this is far from the kind of “general intelligence” humans possess, which has been the goal of AI research for decades. A superintelligent machine that can learn and act without the guiding hand of a human may still be decades away. But advances in the field are happening quickly, so it could be upon us faster than we think. Such a machine would have a lot of power over our lives. Its intelligence could even be dangerous, since it would be too smart for us to disable in the event of an emergency.

Since the invention of the computer in the 1940s, scientists have been working to build a machine that can think. What progress has been made? The major achievement so far is Artificial Intelligence (or AI): man-made machines that mimic our own intelligence.

The story begins with the 1956 Dartmouth Summer Project, which set out to build intelligent machines that could do what humans do. Some machines could solve calculus problems, while others could write music and even drive cars. But there was a roadblock: inventors realized that the more complex the task, the more information the AI needed to process, and hardware capable of handling such demanding functions didn’t yet exist.

By the mid-1970s, interest in AI had faded. But in the early ‘80s, Japan developed expert systems – rule-based programs that helped decision-makers by generating inferences based on data. However, this technology also encountered a problem: the huge banks of information required proved difficult to maintain, and interest dropped once again.

The ‘90s witnessed a new trend: machines that mimicked human biology by using technology to copy our neural and genetic structures. This process brings us up to the present day. Today, AI is present in everything from robots that conduct surgeries to smartphones to a simple Google search. The technology has improved to the point where it can beat the best human players at chess, Scrabble and Jeopardy!

But even our modern technology has limits: each of these AIs can only be programmed for one game, and no single AI is capable of mastering them all.

However, our children may see something much more advanced – the advent of superintelligence (or SI). In fact, according to a survey of international experts at the Second Conference on Artificial General Intelligence, held at the University of Memphis in 2009, most experts think that machines as intelligent as humans will exist by 2075, and that superintelligence will follow within another 30 years.

It’s clear that imitating human intelligence is an effective way to build technology, but imitation comes in many forms. So, while some scientists are in favor of synthetically designing a machine that simulates humans (through AI, for instance), others stand by an exact imitation of human biology, a strategy that could be accomplished with techniques like Whole Brain Emulation (or WBE).

So what are the differences between the two?

AI mimics the way humans learn and think by calculating probability. Basically, AI uses logic to find simpler ways of imitating the complex abilities of humans. For instance, an AI programmed to play chess chooses the optimal move by first determining all possible moves and then picking the one with the highest probability of winning the game. But this strategy relies on a data bank that holds every possible chess move.
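The move-selection idea described above can be sketched in a few lines. The probabilities below are invented placeholders; a real chess engine estimates them by searching the game tree rather than looking them up in a table:

```python
# Hypothetical estimates of winning probability for each legal move
# in some position (the moves and numbers are made up).
win_probability = {
    "e2e4": 0.54,
    "d2d4": 0.53,
    "g1f3": 0.51,
    "a2a3": 0.48,
}

def choose_move(estimates):
    """Pick the move with the highest estimated probability of winning."""
    return max(estimates, key=estimates.get)

print(choose_move(win_probability))  # → e2e4
```

The hard part, as the next paragraph notes, isn’t the selection step but supplying those probability estimates for every situation the machine might face.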

Therefore, an AI that does more than just play chess would need to access and process huge amounts of real world information. The problem is that present computers just can’t process the necessary amount of data fast enough.

But are there ways around this?

One potential solution is to build what the computer scientist Alan Turing called “the child machine,” a computer that comes with basic information and is designed to learn from experience.
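Turing’s “child machine” can be illustrated with a toy learner that starts knowing nothing and improves purely from feedback. Everything here is invented for illustration – two possible actions with hidden payoff rates, and a simple trial-and-error loop (an epsilon-greedy bandit, not anything Turing specified):

```python
import random

random.seed(0)
true_reward = {"A": 0.3, "B": 0.7}   # hidden from the learner
estimates = {"A": 0.0, "B": 0.0}     # the "child" starts with no knowledge
counts = {"A": 0, "B": 0}

for trial in range(1000):
    # Mostly exploit the current best guess; occasionally explore.
    if random.random() < 0.1:
        action = random.choice(["A", "B"])
    else:
        action = max(estimates, key=estimates.get)
    reward = 1 if random.random() < true_reward[action] else 0
    counts[action] += 1
    # Incremental average: nudge the estimate toward each new outcome.
    estimates[action] += (reward - estimates[action]) / counts[action]

# After enough experience, the learner's estimates reflect reality.
print(estimates)
```

The point is the design principle, not the specific algorithm: nothing about the payoffs was programmed in; the machine acquired its knowledge from experience.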

Another option is WBE, which works by replicating the entire neural structure of the human brain to imitate its function. One advantage this method has over AI is that it doesn’t require a complete understanding of the processes behind the human brain – only the ability to duplicate its parts and the connections between them.

Most of the great discoveries of humanity were achieved either by a single scientist who reached a goal before others got there or through huge international collaborations. So, what would each route mean for the development of SI?

Well, if a single group of scientists were to rapidly find solutions to the issues preventing AI and WBE, it’s most likely their results would produce a single superintelligent machine. That’s because the field’s competitive nature might force such a group to work in secrecy.

Consider the Manhattan Project, the group that developed the atom bomb. The group’s activities were kept secret because the U.S. government feared that the USSR would use their research to build nuclear weapons of their own.

If SI developed like this, the first superintelligent machine would have a strategic advantage over all others. The danger is that a single SI might fall into nefarious hands and be used as a weapon of mass destruction. Or if a machine malfunctioned and tried to do something terrible – kill all humans, say – we’d have neither the intelligence nor the tools necessary to defend ourselves.

However, if multiple groups of scientists collaborated, sharing advances in technology, humankind would gradually build SI. A team effort like this might involve many scientists checking every step of the process, ensuring that the best choices have been made.

A good precedent for such collaboration is the Human Genome Project, an effort that brought together scientists from multiple countries to map human DNA. Another good technique would be public oversight – instating government safety regulations and funding stipulations that deter scientists from working independently.

So, while the rapid development of a single SI could still occur during such a slow collaborative process, an open team effort would be more likely to have safety protocols in place.

Check out my related post: Can AI redistribute wealth for us?


Interesting reads:

https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dangers,_Strategies

https://www.goodreads.com/book/show/20527133-superintelligence

https://www.telegraph.co.uk/culture/books/bookreviews/11021594/Superintelligence-by-Nick-Bostrom-review-a-hard-read.html
