The Future We Can Build with Superintelligence

1. Superintelligence is near.

According to surveys of top Artificial Intelligence experts conducted by Nick Bostrom in 2013:

– More than 90% of experts believe human-level AI will be achieved in the 21st century.
– More than 10% of experts believe human-level AI will be achieved by 2022.
– Most experts believe human-level AI will be followed within 30 years by Superintelligence, defined as “machine intelligence that greatly surpasses the performance of every human in most professions”.

Expert surveys conducted by other entities seem to predict analogous timelines, and non-expert opinion is surprisingly similar as well.

2. Superintelligence = God?

The first question put to Superintelligence by its engineers: “Is there God?” Superintelligence: “There is now.”

Many people are fascinated by the opportunities Superintelligence can open up, but also fear that it could end humanity or cause massive damage. This list includes such thought leaders as Elon Musk, Stephen Hawking and Bill Gates, some of whom were inspired by Nick Bostrom’s book Superintelligence: Paths, Dangers, Strategies.

Other thought leaders, like Ray Kurzweil, who inspired Singularitarianism, agree about the wealth of opportunities and some probability of danger, but seem much more optimistic about humanity’s ability to handle it.

3. What do I think about it?

– I agree with most of the people mentioned above that human-level AI will be the most important invention for Homo sapiens: even if subsequent inventions have a much larger impact, they will be made by, or with significant assistance from, Superintelligence.

– I think it is very good that Nick Bostrom popularized this debate about existential risk and AI safety, and that Elon Musk donated 10 million USD to the Future of Life Institute. I just hope safety will be handled by the scientific and engineering community rather than through regulators’ intervention.

– I am sure that by the time we come close to human-level AI, its developers will be smart enough to program into it, as one of its critical rules, that it must always let its creator stop its actions. Many AI movie creators, for example those behind “Ex Machina”, “The Terminator” and “The Matrix”, seem to think differently; otherwise, I believe, they would try to explain why their AI becomes “runaway”.

– Like a minority of experts, I buy into Ray Kurzweil’s prediction that human-level AI, as measured by the Turing Test, may come as soon as 2029. I am convinced by the exponential growth of the most important technologies feeding into it, by Kurzweil’s Law of Accelerating Returns, and by how difficult it is for humans to internalize that law. At the same time, I think Elon Musk, who once mentioned that it may happen as early as 2019, is too optimistic.

– In terms of worries, what makes me different from most experts and observers of this topic is that I think a more realistic worry is that the invention of Superintelligence may cause an unprecedented concentration of power.
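The gap between exponential progress and linear intuition that the Law of Accelerating Returns points at can be made concrete with a toy calculation. This is only an illustrative sketch: the one-year doubling period is a made-up assumption, not a measured figure.

```python
def capability(years, doubling_period_years=1.0, start=1.0):
    """Relative capability after `years` of exponential growth."""
    return start * 2 ** (years / doubling_period_years)

def linear_guess(years, start=1.0):
    """What a linear intuition would project over the same span."""
    return start + years

# 2015 + 14 years = Kurzweil's 2029 target
for years in (1, 5, 10, 14):
    print(f"{years:>2} yrs: exponential {capability(years):>8.0f}x, "
          f"linear {linear_guess(years):.0f}x")
```

Even with a modest doubling period, the two projections diverge by orders of magnitude within a decade, which is roughly why exponential forecasts feel implausible to most people.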

4. “Supermonopoly” using Superintelligence

This is probably the most extreme scenario of concentration of power. I think its probability is lower than that of milder scenarios, but much higher than the probability of existential damage.

Human-level AI has a realistic potential to take off very quickly using the Internet and become Superintelligence, giving its inventor probably the most important first-mover advantage in the history of humankind. Of course, moving from human-level AI to Superintelligence may take decades or more, but even in that case some similar “quick takeoff threshold” is likely to exist.

My estimate is that the first developer is most likely to be a corporation that has managed to collect a galaxy’s worth of data and attract top engineering talent; somewhat likely to be a small group of outside-the-box thinkers in “a garage”; and extremely unlikely to be a terrorist group or a violent authoritarian government.

The first Superintelligence may be able to help protect its developer’s first-mover advantage immediately after coming into existence by:
– amassing wealth via successful trading on stock exchanges,
– filing patents,
– teaching how to attract top talent to join the first-mover group,
– inventing a significantly more efficient energy source or other groundbreaking technology,
– and a multitude of other ways that may be hard for Homo sapiens to predict, understand or imagine.

I know little about corporate law, but I imagine that if the first Superintelligence is developed by a corporation, protecting the first-mover advantage in order to maximize shareholder return may actually be its fiduciary duty. One way or the other, the owner of the first Superintelligence may be tempted to apply a Singapore-style “benevolent dictatorship” globally, for the “greater good of all”.

Some of the best things the AI community can do to prevent this, I believe, are:
– Publishing more open source data for AI training
– Publishing more open source AI software
– Publishing more open source AI knowledge
– Considering societal implications while choosing an employer

All of this, together with patent reform, would I think be helpful regardless of the Superintelligence topic.

5. So how do I think we should use Superintelligence?

As Demis Hassabis, the founder of DeepMind, states in his Zeitgeist 2015 talk, Superintelligence can help us deal with important complex systems: climate, ecology, energy, health and disease, macroeconomics, engineering and physics.

I am optimistic about all of these. I believe Superintelligence will help us start living sustainably with our planet and other species, increase life expectancy to hundreds of years, and unlock many mysteries of the universe. What I think still requires serious intervention is providing basic opportunities for every human, because capitalism, which dominates at the moment, seems to be doing relatively little to address that.

As of today, 9 June 2015, the largest corporation by market capitalization is Apple, at 736.26B USD. I love Apple and its products, and the value it has produced for the world is, I think, immense, but I still think that statistic means something is extremely broken. It is a much better business to remove seconds, or sometimes milliseconds, of friction for a first-world customer like me than to provide vital water and food to a third-world orphan. And this reward gap may widen drastically as the efficiency and wealth of first-world customers increase with stronger AI.

There is orders of magnitude more than enough extra money in the first world to feed every child on Earth, teach them at least basic literacy, and connect them to the Internet. But people would rather spend it on other things, because they believe those children will waste the money or get spoiled, even though most people who say this received all of those things from their parents for free. Or they believe most of the money will be lost in the system before it reaches the person in need, and instead of investigating the options or improving the system, most people seem to opt for distracting themselves with other things.

So what I believe we should do is provide a set of basic opportunities for every child on Earth, and summon Superintelligence to help us achieve it.

6. New Human Package

I call this the New Human Package: a welcome package for every new human. I think it should include:

I. Health
The opportunity to lead a physically and mentally healthy life: food, security, medicine and doctors, and a loving home.
– We should start tracking and increasing the world’s minimum life expectancy rather than the average.
– “Minimum life expectancy” should sound as normal as “minimum wage” does now.
– Child mortality should reach zero.
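The proposed shift from tracking the average to tracking the minimum can be sketched in a few lines. The country figures below are hypothetical placeholders, not real statistics.

```python
# Hypothetical per-country life expectancies (made-up numbers, not real data).
life_expectancy = {"Country A": 83.0, "Country B": 71.5, "Country C": 49.8}

average = sum(life_expectancy.values()) / len(life_expectancy)
floor_country = min(life_expectancy, key=life_expectancy.get)
minimum = life_expectancy[floor_country]

print(f"average life expectancy: {average:.1f} years")  # today's headline metric
print(f"minimum life expectancy: {minimum:.1f} years ({floor_country})")
```

An average can keep rising while the worst-off country stagnates; tracking the minimum forces attention onto exactly that country.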

II. Knowledge
Access to most of the world’s most important knowledge and intelligence, and the ability to communicate with anyone who wishes to communicate.
– Internet throughput should be as fast as all human senses combined.
– Superintelligence should be available for everyone at any time.

III. Education
Everyone should get education that:
– teaches how to leverage Superintelligence
– teaches to convert information into knowledge
– helps find things they like doing, things they would like to achieve and unique skills
– encourages scientific skepticism, method & exploration
– introduces educational topic options rather than enforces them
– is completely voluntary and inviting rather than enforced
– promotes risk-taking & creativity
– is highly personalized
– is improving fast
– is efficient
– is fun

IV. Basic human rights and freedoms
See the UN’s Universal Declaration of Human Rights.

I believe these are the four most important pillars of opportunity for everyone.
Addressing them will help people address other areas, such as transportation and energy, by themselves, empowered by their own intelligence, Superintelligence and other technology.

This way, I believe, we can reach a Real-Life Paradise very soon:
a society with no crime, no violence and no unwanted labor.

People will be able to spend their time on whatever they want to do, be it sports, art, science, engineering, exploring the universe or learning how to fly.

I believe this is the future that we can and should build.

Tilek Mamutov
