
By Maciamo Hay,
on 22 April 2014 (updated on 3 May 2014)

Shall we wish for the singularity to happen, and could it happen without human intervention?

The technological singularity may not happen as long as humans don't allow it to. And is it reasonable to let it happen in the first place?

Could the singularity happen without human intervention?

The singularity describes a tipping point, where the accelerating pace of technological progress leads to a hyperbolic and unstoppable growth in artificial intelligence, relegating humans to a secondary role for future scientific and technological developments.

One of the prerequisites for the intelligence explosion of the singularity is that an artificial superintelligence (ASI) be able to recursively improve itself, meaning that it could autonomously improve the design of its constituent software and hardware. While it is reasonable to assume that an AI with intelligence equal to or slightly greater than a human's would possess the ability to improve its own software, it won't be able to modify its hardware without human assistance.

An AI is not a robot; it is a computer. It can think but cannot act beyond the digital realm. An ASI could wish it had more computing power. It could think about a more efficient hardware design. But unless there are already autonomous robots that can gather raw materials and build machines on their own without human intervention, no AI could change its hardware. I have no doubt that tweaking the software can greatly increase a computer's power and range of abilities. However, if the hardware doesn't follow, the ASI's self-improvement will eventually reach an upper limit, and the singularity won't happen.
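
To make this ceiling concrete, here is a toy sketch of my own (not a model from the AI literature): each cycle of self-improvement makes the software more efficient, but effective capability is bounded by a fixed hardware budget. Every number is an arbitrary assumption chosen only to show the saturation.

    # Toy illustration: recursive software self-improvement on fixed hardware.
    # All figures are arbitrary assumptions, not measured values.
    HARDWARE_CAP = 1000.0   # fixed hardware throughput, in arbitrary units
    efficiency = 0.01       # fraction of the hardware the software currently exploits

    for generation in range(1, 21):
        # Assume each self-improvement cycle makes the software 30% more efficient,
        # but no software can use more than 100% of the hardware it runs on.
        efficiency = min(efficiency * 1.3, 1.0)
        capability = efficiency * HARDWARE_CAP
        print(f"generation {generation:2d}: capability = {capability:7.1f}")

    # Capability climbs quickly at first, then flattens out at HARDWARE_CAP:
    # without new hardware, the "intelligence explosion" stalls.

The point of the sketch is simply that exponential growth in one factor (software) is capped by the other factor (hardware) as long as the latter stays fixed.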

Besides, computers aren't eternal; at the moment they even have shorter lifespans than humans. Without transferring its data to another machine, the ASI would slow down and deteriorate as it ages, just like us. If an autonomous ASI wishes to keep increasing its performance exponentially, it will need to upgrade its hardware regularly.

Consequently, the singularity isn't going to happen unless humans are willing to help the ASI improve its hardware, or until we build extremely capable robots that the ASI can use to modify the physical world.

Is it reasonable to let the singularity happen?

The original meaning of the technological singularity is a blind spot in our ability to predict the future once machines become millions of times more intelligent than all humans combined. No matter how hard we try to think about it, or how many scenarios we envisage, we simply cannot know what will happen, and that is why it is so dangerous.

In the best-case scenario, supported by Ray Kurzweil, all humans will share the benefits of the intelligence explosion by being connected to the ASI through neural implants. There won't be an ASI and us, but a single harmonious entity. The intelligence explosion will continue indefinitely, and eventually spread throughout the universe via human-machine hybrids.

Others have imagined a utopian world ruled by a Friendly AI that manages everything perfectly for the sake of humans and other living beings, perfect software that creates peace and prosperity and eliminates all suffering on Earth. It is easy to be tempted by such scenarios. But things could turn out just the opposite way, too.

An evil, or more likely an indifferent or misguided, ASI could wipe out all humans and all life on Earth. A Terminator-like scenario, although prominent in the popular imagination, is in fact one of the least likely ways this could happen, unless humans purposefully build human-like terminator robots themselves, which would be extremely foolish and irresponsible. I don't see why a computer would need to build robots that look anything like humans or animals. There are plenty of more efficient designs, most of which we cannot even conceive of with our limited cognition, but which an ASI could.

There are apocalyptic scenarios scarier than powerful robots taking over the Earth and trying to eliminate humans. Among them is the grey goo hypothesis, in which self-replicating molecular nanobots spin out of control and consume all matter on Earth while building more of themselves. Unfortunately, this scenario does not even require the creation of an ASI to happen.
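
A quick back-of-envelope calculation shows why unchecked exponential replication is so frightening. The nanobot mass and doubling time below are purely illustrative assumptions of mine, not figures from the grey goo literature.

    import math

    EARTH_MASS_KG   = 5.97e24  # approximate mass of the Earth
    BOT_MASS_KG     = 1e-15    # assumed mass of a single nanobot (a rough guess)
    DOUBLING_TIME_H = 1.0      # assumed time for the population to double, in hours

    # Number of doublings for one nanobot's descendants to equal the Earth's mass.
    doublings = math.log2(EARTH_MASS_KG / BOT_MASS_KG)
    print(f"doublings needed: {doublings:.0f}")                                  # about 132
    print(f"time at one doubling per hour: {doublings * DOUBLING_TIME_H:.0f} h")  # under six days

Even with these crude assumptions, the whole planet is consumed in roughly 130 doublings; the danger lies in the exponent, not in the intelligence of the replicators.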

Although machines could be designed to have feelings and emotions, these would never be quite like those of humans. In theory, a Friendly AI could be programmed to emulate only positive human traits like altruism and compassion. The risk is that creating one positive feeling necessarily implies creating its opposite too. The laws of the universe seem to demand that things exist in duality. Heat cannot exist without cold. Light cannot exist without darkness. To be able to measure something on a scale, it needs to have an opposite end.

The problem is that if we try to teach a computer what kindness is, kindness will need to be defined by its opposite. Doing so creates the knowledge of the opposite feeling in the computer, and that is what is dangerous. If a bug occurs or the AGI decides to reconfigure itself, it may start behaving in the opposite way from how it was originally programmed. It may even be safer not to try to emulate any emotion at all in a powerful AI. But then how do we protect ourselves? We can never be sure that we will be safe, because the ASI may behave in ways that we cannot predict with our human thinking.
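
A crude software analogy of my own may help here: once an objective is defined on a scale with two ends, a single flipped sign in the optimisation code is enough to drive the system to the wrong end. The function names and the kindness scale below are hypothetical.

    # Toy sketch: a sign bug turns "maximise kindness" into its exact opposite.
    def kindness_score(action: float) -> float:
        # Hypothetical scale: +1.0 is maximally kind, -1.0 maximally cruel.
        return max(-1.0, min(1.0, action))

    def optimise(sign: int, steps: int = 100, step_size: float = 0.05) -> float:
        """Hill-climb the kindness scale from a neutral starting action."""
        action = 0.0
        for _ in range(steps):
            action += step_size * sign  # sign = +1 as intended; sign = -1 is the bug
        return kindness_score(action)

    print(optimise(+1))   #  1.0: the behaviour we programmed
    print(optimise(-1))   # -1.0: one flipped sign, the opposite behaviour

The code is trivial, but the moral is the one made above: teaching the machine the whole scale means the opposite end is always one bug, or one self-modification, away.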

Could an artificial superintelligence slip out of human control and improve its hardware on its own?

The development of an artificial superintelligence poses a real existential risk (i.e. the risk that the human race as a whole might be annihilated) that shouldn't be underestimated, as Michael Anissimov explains in an interview with Ben Goertzel for Humanity+ Magazine. Luke Muehlhauser, the Executive Director of the Machine Intelligence Research Institute (MIRI), scrutinizes the difficulties of building a superintelligence that does not kill us. The world-renowned theoretical physicist Stephen Hawking says that creating an ASI "would be the biggest event in human history", although he warns that "it might also be the last, unless we learn how to avoid the risks".

One of the most serious risks would be to let an ASI take control of an army of highly skilled and dexterous robots. The danger is not just that these robots could attack humans, but more indirectly that they would possess the ability to improve the hardware of the ASI, allowing for the unrestrained exponential growth of its intelligence toward the singularity. Once this happens, the ASI could design and build anything it wants, be it robots, machines or other "beings" beyond our limited human imagination. If we want the singularity to happen as safely as possible for humans, the ASI should remain under human control.

One way to prevent a computer-based ASI from improving its hardware would be to make sure that robots are never autonomous enough to reach the ASI computer on their own with the equipment needed to upgrade it. That may prove extremely difficult if the ASI can get control of autonomous vehicles, advanced humanoid robots, and 3-D printers or nanobots that can be used to manufacture computer hardware. Obviously the ASI computer would need to be guarded only by humans, not by machines that it could commandeer to keep humans out.

If that weren't bad enough, even if we make sure that the ASI computer cannot be reached by other machines that could tweak its hardware, there is still an alternative way for it to get the job done. If at least some humans get neural implants to improve their cognition or use telepathy (or 'techlepathy', as George Dvorsky called it), then an ASI could potentially hack into their brains and take control of their bodies, just as it could with robots. And it does look like we are heading toward the use of neural implants soon.

In an interview for io9, Kevin Warwick, professor of cybernetics at the University of Reading, Anders Sandberg, a neuroscientist at the Future of Humanity Institute at the University of Oxford, and futurist Ramez Naam all agreed that we already have the technologies required to build an early version of the telepathic noosphere. Telepathic networks could be built within a few years from now, but would only become powerful enough to attract the general population and compete with other forms of telecommunication from the late 2020s or early 2030s. That timing is well ahead of the most optimistic dates for the singularity.

Commercial brain-computer interfaces have barely entered the market and they have already been proven to be hackable, although not yet to control a person's movements.

Do humans need the singularity?

Considering the risks involved in letting an artificial general intelligence grow exponentially out of human control, it would be unwise and indeed extremely irresponsible to allow the technological singularity to occur. The singularity basically means that technology grows beyond our control and that we surrender our destiny to superior machine intelligence. Why would we want to do that?

I am all in favour of progress, but humans do not need the singularity to live much better lives. It would be easy to prevent AI and autonomous robots from evolving beyond a certain risk threshold while sustaining exponential growth in various fields of technology. The aim for the next few decades would be to achieve a post-capitalist society of abundance, with free Internet and telecoms, extremely cheap solar energy, and 3-D printed products for everyone. Agricultural robots would efficiently tend vertical farms. Advances in biotechnologies would put an end to diseases and stop or reverse aging. Genetic enhancement and brain-computer interfaces (BCI) would work to increase human intelligence and empathy. And so on.

I am fine with supercomputers helping humans manage the world more efficiently, but do we need one (or several) ASI billions of times more intelligent than us that keeps improving itself far beyond our control and imagination? Wouldn't it be safer to keep distinct computers with specialized functions instead of building an omnipotent AGI? So long as there is no centralized AI that controls all the computers and robots worldwide, the risks remain contained. But how could a supercomputer be prevented from accessing other machines in the age of the Internet of Things, where all electronic devices are connected in a huge global network?

Humans like to test the limits of their capabilities. Sometimes they build machines just because they can, not because it is in their best interest. We can build supercomputers to help us solve problems that we couldn't solve on our own. It is fine to create one extremely powerful AI to serve as a universal translator of human languages. It is fine to build another one to help us diagnose medical conditions. It is fine to build as many AIs as needed for specific, limited tasks, as long as there is no way for them to form a unified, self-aware, or at least autonomous intelligence that starts making decisions beyond our control. Achieving the singularity requires granting an AGI free, unchecked capability to control machines and improve itself at will, and that simply isn't a sane thing to do.

The existential risk involved in the creation of an artificial superintelligence is taken seriously by enough researchers to have given rise to a number of scientific institutions and organizations that discuss and tackle the issue, including the Machine Intelligence Research Institute, the Institute for Ethics and Emerging Technologies, the Future of Humanity Institute (University of Oxford), the Centre for the Study of Existential Risk (University of Cambridge), and the Lifeboat Foundation.


