Vinod Aggarwal: “AI is a unique technology, it challenges the core of humanity”

Vinod Aggarwal during his speech at the SKEMA Center for Artificial Intelligence seminar, 2024.

Artificial intelligence (AI) is such a unique and recent technology that the question of its regulation is stirring the entire international community. During his visit to SKEMA Business School as part of the SKEMA Center for Artificial Intelligence (SCAI) seminar, Vinod Aggarwal, director of the Berkeley APEC Study Center, answered Margherita Pagani’s questions.

Do you think the biggest risk with AI comes more from the international economic tensions it may generate rather than the technological excesses it could lead to?

It’s actually both. At this point, we really don’t know how generative AI will develop. If we are on the verge of artificial general intelligence (AGI), then that could pose a technological threat and lead to many things like unemployment, which could be very disruptive. The other aspect is that there is growing geopolitical tension between the United States of America (USA) and China, and also with the European Union (EU). The geopolitical threat overlays the technological one. Part of it has to do with the extent to which AI is seen to have dual-use aspects, namely military applications. The more AI goes in the direction of having very significant military applications, the more it will become a geopolitical issue. That makes it difficult for countries to cooperate in the management of AI, because each one would like to gain a security advantage.

Except in the EU, are there currently any regulations governing AI that are binding on governments or is it more like an invisible hand that governs AI?

All countries are trying to regulate AI. China has passed many rules and regulations on AI. The United States has as well. I think the difference is that they are not as elaborate as the EU’s approach to AI. There is an effort to have international discussions about AI, but there is not yet complete agreement on how this technology should be regulated, or on what issues need to be considered. Given that the technology is relatively new in terms of its recent advances, it is difficult to reach agreement on what I call the meta-regime: the principles and norms underlying any regulatory effort.

In that sense, academics can play an interesting role in trying to think about areas of consensus, or what we call cognitive agreement, or cognitive consensus, in terms of the meta-regime, so that there can actually be a basis for creating rules and procedures that are consistent with the meta-regime.

It sounds quite complex: there are so many stakeholders, including industry, academia, and many states. With that in mind, do you think it is realistic to regulate AI on a global scale? The issues at stake in regulating AI are primarily ethical, but ethics can vary greatly from one part of the world to another…

I think it depends on the direction AI goes. We have had the ability to regulate technologies such as nuclear technology. We had nuclear weapons treaties between the Soviet Union and the United States. So, it is possible, even if the relationship between China and the United States becomes more adversarial. My former student, Andrew Reddie, who is a professor now, has written about this: it is possible to come to various arms control agreements, even in sectors like biotechnology or bio-warfare. The real issue is that if it is only about ethics, countries will differ in their views of what counts as ethical, and in how they view the global issues of ethics, which makes it much more complex. So AI is a unique technology in that sense; it challenges the core of humanity in some sense.

In order to regulate AI effectively, do you think we should be prepared to accept some restrictions on innovation?

Some things definitely are out of bounds. The EU directive on AI does point to some uses that are extremely high risk. We do not want AI being used to discriminate against people, to pick on minority groups, to take advantage of certain people, or to sell in a way that is antithetical to our values. But authoritarian governments have a very different view than democratic governments. There is a deep tension with authoritarian states that might want to control the technology so they can keep track of their people and watch them. In the United States, we try to discourage too much information dissemination. We do not want people being tracked, whereas in some countries it seems to be acceptable to track people for “safety”, even though it can also lead to discrimination against minorities.


Read also: Yihyun Lim: “AI may help us visualise and combat Climate change”

Margherita Pagani, Director, SKEMA Center for Artificial Intelligence
