
Geoffrey Hinton Turns to the Pope in the Fight to Regulate AI


«Most of the leading artificial intelligence researchers believe that we will most likely create beings much more intelligent than us within the next 20 years. My greatest concern is that these super-intelligent digital beings will simply replace us. They won’t need us. And since they are digital, they will also be immortal: I mean that it will be possible to revive a certain artificial intelligence with all its beliefs and memories. At the moment, we are still able to control what happens, but if there is ever an evolutionary competition between artificial intelligences, I think the human species will be just a memory of the past.»

Geoffrey Hinton is in his home in Toronto. Behind him is a large wall bookcase and he – as usual – is standing. Because of a chronic back problem, they call him «the man who never sits down» (though in reality, for short periods and with some precautions, he can). He is wearing a blue shirt that, together with his gray hair and sweet smile, makes him look like one of those characters in books or movies who come to save us from a terrible threat. They say he is one of the three fathers of artificial intelligence, but since the other two (Yoshua Bengio and Yann LeCun) were, in different ways, his students, if there were an Olympus of artificial intelligence, Geoff Hinton would be Jupiter. And in fact, last year he won the Nobel Prize in Physics (with John Hopfield) «for fundamental discoveries and inventions that enable machine learning with artificial neural networks.»

To put it simply: without him, we wouldn’t have ChatGPT. When I invited him to join the group of artificial intelligence experts who will meet Pope Leo XIV on September 12, he replied: «I am an atheist, are you sure they want me?» But then he accepted, evidently hoping that the pontiff is willing to take up the cause of science: «It would have a great influence.» Much more than the appeals and petitions that the scientific community has issued over the past two years, all aimed at slowing technological development until the risks are better understood and there is reasonable certainty that it will not get out of hand.

The first to sound the alarm was Hinton himself, at the end of April 2023, when he suddenly left Google, the company he had worked at for the previous ten years, in order to speak freely about the risks of artificial intelligence. The most recent was again his: a couple of months ago he signed an appeal, together with fellow Nobel laureate Giorgio Parisi, asking OpenAI, the company that developed ChatGPT, for greater transparency («They are deciding our future behind closed doors»). Let’s start from here.

The appeal had some resonance and collected three thousand authoritative signatures, but I don’t think that was its real purpose.

«The issue is complicated by the fact that there is a lawsuit between Musk and Altman. The reason Musk is doing this while simultaneously developing Grok is competition. Musk wants to have less competition from OpenAI. This is what I believe, I’m not sure, but in my opinion his motivation in wanting to keep OpenAI as a “public benefit corporation” is not the same as ours. Our motivation is that OpenAI was created to develop AI in an ethical way and instead is trying to shirk that commitment. Many researchers, like Ilya Sutskever, have left; I think they did it because not enough importance was given to safety. The goal of the appeal was to put pressure on the California attorney general, who can decide whether or not to allow OpenAI to become a for-profit company».

So you didn’t really expect a response from Sam Altman.

«No, of course not.»

I would say that the real response to the appeal came a few days ago, at the White House, when the President of the United States summoned all the CEOs of the Big Tech companies, including OpenAI, Meta and Google…

«The President tried to block AI regulation in the United States for ten years, but he failed; individual states can continue to legislate.»

What effect did it have on you to see the leaders of the big tech companies so obsequious to political power? Perhaps it is the first time in history that we are witnessing an absolute concentration of power.

«My reaction was one of sadness: seeing the leaders of these companies unwilling to take any moral stance on anything.»

When the digital revolution arrived, the dreams were different: a fairer, more equitable, better world for everyone.

«I’m not sure it was everyone’s dream. I think for most of the people involved, the dream was to get rich. But they could convince themselves that yes, they would get rich, but at the same time they would help everyone. And to a large extent this has been the case for the web: if we put aside the social consequences, looking only at daily life, the web has really helped people.»

Take Tim Berners-Lee: he didn’t get rich and gave the web protocols away for free to everyone. Then something went wrong. I would say: capitalism.
«I am not totally against capitalism: it has been very effective in producing new technologies, often with the free help of governments for basic research. I believe that, if well regulated, capitalism is fine. It is acceptable for people to try to get rich so long as, by getting rich, they also help others. The problem is when wealth and power are used to eliminate the rules, so as to make profits that harm people.»

Is that what is happening now?
«Yes. The thing that seems to matter most to them is not paying taxes. And they support Trump because he lowers taxes: this is their primary drive. But they also want to eliminate the rules that protect people to make it easier to do business. That’s why they support Trump.»

How do you evaluate what the European Union is doing? It is said that it thinks too much about regulation and too little about innovation, and that as a result Europe is lagging behind in the race for artificial intelligence. As you know, the European Union has passed an important law, the AI Act. In your opinion, is it a good starting point or just an obstacle to innovation?

«I think it’s an acceptable starting point, but there are parts I don’t like: for example, there’s a clause that says none of these rules apply to military uses of AI. This is because several European countries are major arms producers and want to develop lethal and autonomous systems. Moreover, as I understand it, European legislation started with a particular emphasis on discrimination and bias, worrying more about these aspects than other threats. So, it places a lot of emphasis on privacy. Privacy is important, but there are more serious threats.»

You touched on a very sensitive point. Silicon Valley companies were born with the idea of not collaborating with military power. When DeepMind was sold to Google, one of the conditions was that its AI would not be used for military purposes. But last January, that condition was removed. Now all the artificial intelligence companies provide technology to the US Department of Defense. And a few days ago, Google signed an important agreement with Israel to provide artificial intelligence systems to the armed forces. What do you think?

«There is a simple rule that political scientists use: don’t look at what people say, but at what they do. I was very sorry when Google eliminated the commitment not to use AI for military purposes. It was a disappointment. And the same when they canceled the inclusion policies just because Trump had arrived.»

And we come to Pope Leo XIV: as soon as he took office, he said he wanted to deal with artificial intelligence. What role do you think he can play in this debate?
«Well, as you know, he has about a billion followers. If he said that it is important to regulate AI, it would be a counterweight to the narrative of Big Tech leaders thanking the President for not imposing rules. I believe that the Pope has real political influence, even outside of Catholicism. Many religious leaders, such as the Dalai Lama, see him as a moral voice. Not all popes have been like this, but this one has, and so has his predecessor. For many non-Catholics, his moral opinions are sensible, not infallible, but listened to. If he says that regulating AI is essential, it will have an impact.»

What would you like to tell the Pope about the risks of artificial intelligence?

«I believe that for the Pope it is useless to focus on the so-called existential risks, such as the long-term disappearance of the human species; it is better to focus on the current risks. You don’t have to believe that AI is a “being” to understand that it can lead to mass unemployment, corruption of democracies, easier and more effective cyber-attacks, and the creation of lethal viruses accessible to anyone. It is already happening. So on these short-term risks, what I call “risks due to bad actors,” we can find common ground. These are risks that any religious or political leader can understand, without having to enter into the metaphysical question of whether or not AI is a “being”.»

Sometimes it seems that governments do not fully understand what is at stake, that they trust these companies too much.
«That’s exactly how it is. Governments often do not have enough in-house experts and rely on consultants who come from… guess where? From the Big Tech companies. So they end up only hearing the voice of the companies. It’s as if the Wall Street regulators were all former bankers who still think like bankers.»

You are considered an apocalyptic voice. How do you feel? Optimistic or pessimistic?

«Realistic. I see both sides. But history tells us that, without rules, power is abused. So I think that if we don’t regulate, it will end badly.»

Do you think there is a particular lesson that we should remember now?
«Yes. Let’s think about nuclear energy. When scientists realized that their discoveries could lead to the atomic bomb, many of them tried to warn governments. But in the end, it was the military who decided how to use it. It was not the scientific community that set the limits. I see the same risk today: if we let only governments or companies decide, AI will be used in very dangerous ways.»

Is it really a unique moment in the history of humanity or just another progress?
«It is a turning point. For the first time, we are creating entities that can be smarter than us. This has never happened before. We have created stronger, faster, but never smarter machines. This changes everything».

So what can we do?

«We need a global movement, not just a few isolated voices. We need to build broad consensus in the scientific community, otherwise politicians will ignore us.»

After 90 minutes, the interview is over (this is a highly condensed version, but it has been approved by the professor). There’s still time for a personal question. Geoffrey Hinton comes from an exceptional family. Mount Everest is named after his great-uncle, who was a cartographer (and Everest is also Hinton’s middle name). Another relative, George Boole, is the father of Boolean logic, the mathematical foundation that underpins computers and digital systems. As if that weren’t enough, his father always set the bar very high for him. He used to say, «If you work twice as hard as I do, by the time you’re twice my age, you might be half as good.» I ask him:

Did you manage to achieve the goal your father set for you?

(He smiles) «Definitely at least half.»

This is a very condensed version, approved by Professor Hinton, of a much longer conversation held on September 7th. A full version will be released soon.
