Researchers at the Sunshine Coast University in Australia studied the risks associated with the introduction of artificial intelligence in various areas of life, using the example of Santa Claus. On the pages of the scientific journal The Conversation they described what the consequences would be for the world.
On the one hand, if you replace Santa Claus with an artificial intelligence system with the code name SantaNet, then providing gifts to children who behaved well will become much more effective by attracting helpers and drones. The capabilities of such a system will be much more human. The problem is that out of control, the artificial intelligence will be able to cause a real disaster.
The first risks will arise when the system itself first attempts to compile a list of those children who deserve a gift and those who were naughty.
The artificial intelligence can achieve this only with the help of a mass covert surveillance system. But evaluating “good” based on the machine’s own conclusions can lead to massive inequality. After all, there have already been cases when artificial intelligence expressed discriminatory views.
In addition, there is the issue of resource use. This problem has two sides. First, the system may decide that resources should be conserved, which means it can encourage children to behave badly in order to reduce the number of gifts needed. The second is the opposite situation, when a supercomputer, thinking exclusively about gifts, decides to use all the available resources of the Earth to make them.
Delivering gifts will also pose problems, scientists say. If someone tries to interfere with courier drones, for example, by denying access to airspace, such opposition to a smart system may end up not in the best way for a person. Including because today there is already technology that allows drones to gather in a swarm. It is impossible to resist such a force.
The researchers note that despite the seemingly far-fetched problem, the global artificial intelligence does indeed carry such risks. Well-intentioned systems can still create huge problems when they try to optimize the way they achieve their goals.