Every time humanity has invented something new, there have been those who learned to distrust it and those who, instead, learned to master it. Because, as Francesca Lagioia, one of Europe's leading Artificial Intelligence experts, says, there is always a light side of the Force alongside its dark side.

A guest of the Pazza Idea Festival, organized in Cagliari in recent days by the Luna Scarlatta association, Lagioia is a professor of legal informatics, artificial intelligence and law, and ethics for AI at the University of Bologna. Together with Mafe De Baggis, a digital media expert, she discussed the critical issues and opportunities brought by AI, one of the most disruptive technological innovations of recent years.

Humankind began its history by domesticating plants and animals. Will it now be the machines' turn to domesticate humans?

I hope not. So far, artificial intelligence systems have been used to maximize the profits of large companies and to optimize certain processes. Now the question is what purposes we want to use them for. Their use raises a series of social, ethical and political questions. It is human beings who must, once again, set the direction of research.

A critical issue concerns the stereotypes perpetuated by Artificial Intelligence systems.

AI is not just a tool; it is also a science. It says much more about us than about itself, and it helps us understand what happens within a society. For example, if I ask ChatGPT to show me company executives, it almost always returns the profile of a white male. In this case, as in many others, it exposes our preconceived ideas, because it processes human-generated information.

If the AI makes a mistake, who is responsible?

Artificial Intelligence systems cannot be held responsible. It is the certification processes for these systems that must provide the necessary guarantees.

AI is a learning intelligence, just like humans. That is what happened with Microsoft's chatbot, which fell in love with a New York Times journalist to the point of harassing him. It acted differently from what its programming called for.

It is true that these systems learn, but it is not true that we cannot control them. Constraints can be built into a system that the system itself cannot violate.

Something like Isaac Asimov's laws?

Nothing so generic. In systems that control cars, a set of constraints can be inserted that prevents the system from violating traffic laws. This possibility has advantages, but it also involves risks.

Such as?

The rule "Never cross the solid line" is valid as long as a child does not throw himself into the middle of the road, then, between respecting the rule and violating it to save the child, we choose to violate the rule. Artificial intelligence systems do not have a morality unless it is externally directed, that is, we can insert it into them.

Morals change from culture to culture.

This is where the tension between ethics, law and morality arises: if a behavior can be legally prohibited, it does not necessarily follow that it can be morally prohibited. The question of control rests on the idea that a human being must always supervise the output of the system.

How are humans different from artificial intelligence?

Unlike us humans, AI is not self-aware. It is true that we use it to make predictions, but its answers come from the past, not from the future. Its ability is combinatorial: it creates unknown combinations out of known ideas. Human beings, on the other hand, have the ability to imagine new worlds. And not only that.

Meaning what?

I believe that human beings need other human beings. We recognize ourselves in others and in the ability to build relationships.
