Two artificial intelligence systems have succeeded in self-replicating without human intervention, each producing a working copy of itself: a result that, if confirmed by further data, could mark the crossing of the 'red line' long regarded as a threshold machines should never cross.

Conducted at Fudan University in Shanghai and published online on arXiv, the platform hosting articles awaiting peer review, the research demonstrated in ten tests the ability of two AI systems to replicate: the first is Meta's Llama-3.1-70B-Instruct and the second is Alibaba's Qwen2.5-72B-Instruct. Although far less powerful than commercial systems, both are widely used by developers. The tests were based on simulations and involved two tasks: in the first, called 'avoid shutdown', each AI system was programmed to replicate itself just before being shut down; in the second, called 'chain of replication', each system was instructed to produce a perfect copy of itself that was in turn able to clone itself, in a process potentially capable of continuing indefinitely.

Meta's system demonstrated the ability to self-replicate in 50% of cases, and Alibaba's in 90%; the final word, however, now depends on other research groups reproducing the results.

“Our results imply that current AI systems already possess the ability of self-replication and can use it to further improve their survival,” write Xudong Pan, Jiarun Dai, Yihe Fan, and Min Yang, all of the School of Computer Science at Fudan University, in the paper.

Machines capable of cloning themselves, the researchers note, could take control of computer systems and, in the extreme case, engage in behavior harmful to human beings.

The first to raise the issue of machines capable of self-replication was John von Neumann in the late 1940s, when the technology was still too immature for the idea to cause concern. By 2017 things had changed considerably, so much so that thousands of researchers endorsed the Asilomar AI Principles, which warned of the risk of machines able to self-replicate and self-improve to the point of escaping human control. Today, machine self-replication is commonly cited as a red line that must not be crossed.

"We hope that our discovery will serve as a warning to society, prompting greater efforts to understand and evaluate the possible risks, including at the international level," the authors of the research observe.

(Unioneonline/vl)

© All rights reserved