Page 11 - Artificial Intelligence: Building Smarter Machines

Future of Life Institute, an organization dedicated to researching and raising public awareness of the safety issues surrounding superintelligence. Together with Nobel Prize–winning physicist Frank Wilczek, AI researcher Stuart Russell, and Stephen Hawking, Tegmark composed a letter that was published in the online newspaper the Huffington Post. “Success in creating AI would be the biggest event in human history,” they wrote. “Unfortunately, it might also be the last, unless we learn how to avoid the risks. . . . Whereas the short-term impact of AI depends on who controls it, the long-term depends on whether it can be controlled at all.” The risks, according to the writers, include “autonomous weapon systems that can choose and eliminate their own targets.” By December 2015, the Future of Life Institute had awarded $7 million in grants for proposals on ways to minimize the dangers of AI.
Science fiction sometimes captures the danger of trusting AI too much. H.A.L. 9000, the eerily sentient computer from the iconic 1968 film 2001: A Space Odyssey, had a will of its own. Claiming to be “foolproof and incapable of error,” H.A.L. controlled most of the operations aboard the spaceship Discovery One. The problem came when H.A.L. wanted to take charge of the astronauts and the entire space mission as well. The supercomputer would stop at nothing—even murder—to get its way. In a heart-thumping cinematic sequence, the last astronaut left alive unplugs H.A.L.’s circuits one by one as the frightened computer begs him to stop.
Almost fifty years after the movie’s release, scientists acknowledge that the development of a strong artificial intelligence would create potential hazards as well as benefits. Even Ray Kurzweil concedes that AGI “will remain a double-edged sword.” Besides helping humankind, he observes, “it will also empower destructive ideologies [ideas].”
Speaking at MIT in 2014, Elon Musk, the founder of Tesla and the space exploration company SpaceX, was even more fearful. “With artificial intelligence,” he cautioned, “we are summoning the demon.” Researchers aim to program computers with strong human values to prevent negative consequences and maximize the benefits to humanity.






                                                                         The Singularity  81