
How a Paperclip-Producing AI Might End Up Wiping Out Humanity


Recently, there has been a lot of discussion about the benefits and risks of artificial intelligence (AI) and artificial general intelligence (AGI), in large part because of advances in powerful language models like OpenAI's ChatGPT.

Citing the potential existential peril to humanity if we sleepwalk into producing a superintelligence before we have found a way to restrict its impact and control its aims, some in the industry have even called for AI development to be paused or shut down entirely.

You might imagine that an AI hell-bent on destroying humanity would be one that had uncovered footage of us pushing, shoving, and otherwise mistreating Boston Dynamics robots. But one philosopher, the head of the Future of Humanity Institute at the University of Oxford, believes our demise could come from a much simpler AI: one designed to manufacture paperclips.


Nick Bostrom, well known for his simulation hypothesis as well as his work on AI and AI ethics, described a scenario in which a powerful AI is given the simple objective of producing as many paperclips as possible. While that may sound harmless (Bostrom chose the example precisely because the goal seems so innocuous), he demonstrates how it could lead to a good old-fashioned skull-crushing AI apocalypse.

“The AI will quickly realize that it would be much better if there were no humans because humans might decide to switch it off,” he told HuffPost in 2014. “Because there would be fewer paper clips if humans did this. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans.”

The scenario is meant to demonstrate how a seemingly trivial objective can have unexpected repercussions, but Bostrom argues that it applies to any AI given goals without sufficient restrictions on its behavior, adding that “the point is its actions would pay no heed to human welfare.”

Although the paperclip scenario sits at the extreme end of the spectrum, Bostrom also suggests we could go the way of the horse.

“At first, carriages and plows were used in conjunction with horses, substantially enhancing their output. Later, cars and tractors took the place of horses,” he wrote in his book Superintelligence: Paths, Dangers, Strategies. “When horses became obsolete as a source of labor, many were sold off to meatpackers to be processed into dog food, bone meal, leather, and glue. In the United States, there were about 26 million horses in 1915. By the early 1950s, 2 million remained.”

As far back as 2003, Bostrom foresaw how an AI might go awry by serving the interests of particular groups, such as a paperclip manufacturer or whoever “owns” the AI, rather than humanity as a whole.

“The danger of failing to give superintelligence the super goal of generosity is one of the hazards associated with its development. This may occur, for example, if the superintelligence’s designers choose to design it to benefit only this particular subset of people rather than all humans,” he said on his website. “Another possibility is that a well-meaning team of programmers makes a significant error in designing its goal system.”

“To return to the earlier example, this could result in a superintelligence whose primary goal is to manufacture paperclips, with the result that it begins transforming first all of Earth and then increasing portions of space into paperclip manufacturing facilities. More subtly, it might result in a superintelligence attaining a state of circumstances that we may today consider ideal, but which turns out to be a false paradise in which vital elements of human flourishing have been irreparably lost. We must be cautious about what we wish for from a superintelligence, since we may receive it.”