Here’s Why Artificial Intelligence Might Kill Us All

3.) Self-Protection

The crux of the issue is that no one really knows how to safely operate superintelligent machines. Most people simply assume they won't hurt us, or will even be grateful to us. However, research by A.I. scientist Steve Omohundro suggests that such machines will develop basic drives of their own. Whether their job is to pick stocks, mine asteroids, or run our critical energy and water infrastructure, they'll become self-protective and seek out resources to better complete their tasks. They'll fight us to survive, and they won't want to be turned off. Omohundro's research concludes that the drives of superintelligent machines will conflict with our own unless we design them with extreme care.

We're right to ask the same question Stephen Hawking asked: "So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right?" The answer, unfortunately, is no. With only a handful of exceptions, researchers are building products, not studying safety and control. Over the next decade or so, A.I.-enhanced products are expected to generate trillions of dollars in economic value. Surely some small fraction of that should be invested in understanding autonomous machines, solving the A.I. control problem, and making sure humans can thrive in such a world.

2.) Robots Will Learn to Program Themselves

How long will it be before an intelligent machine masters the process of A.I. research and development itself? Put another way, a machine like HAL might learn to reprogram itself to become smarter, and then smarter still, in a rapidly accelerating cycle. That's the nutshell version of a theory called the "intelligence explosion," proposed back in the 1960s by the British mathematician I.J. Good. At the time, Good was researching early artificial neural networks, the basis for the "deep learning" techniques causing a stir today, about 50 years later. He believed that self-improving machines would become as smart as humans, and then far smarter. They would rescue mankind by solving intractable problems like disease, famine, and war. Near the end of his life, however, Good changed his mind. He feared that global competition would push nations to develop superintelligence without regard for safety. And like Stephen Hawking, Stuart Russell, Elon Musk, Bill Gates, and Steve Wozniak, Good feared it could wipe out humanity.

1.) No One Knows How to Control a Superintelligent Machine

Stephen Hawking elegantly captured the problem when he said that, in the short term, A.I.'s impact depends on who controls it, but in the long term, it depends on whether it can be controlled at all. Regarding the short-term element, Hawking is essentially noting that A.I. is a "dual-use" technology, a term used to describe technologies capable of great good and great harm. Nuclear fission, the science behind both bombs and power plant reactors, is a dual-use technology. Since dual-use technologies are only as dangerous as their users intend them to be, one can only wonder what the harmful uses of A.I. might be…

To see what a human-like robot looks like in 2017, watch the video below!
