Artificial Intelligence: can it be controlled without preventing innovation?

Philippe Pradal addresses AI stakeholders gathered at AI Act Day, 4 December 2023, in Paris.

On 4 December, while the draft “AI Act” was still being debated in Brussels, “AI Act Day” was setting the tone in Paris. In partnership with Datacraft, the Impact AI Think and Do Tank brought together a group of influential players in artificial intelligence (AI) with one overriding aim: laying the foundations of an innovative and inspiring AI that is nonetheless ethical, sustainable and responsible.

AI is developing in leaps and bounds. That’s why we love it. But it is also why it feels threatening, and why we want to regulate it. None of the many artificial intelligence players brought together on 4 December at the Bpifrance Hub by Datacraft and the Impact AI Think And Do Tank was in any doubt: a responsible AI means a controlled AI. The question is how. How can AI – particularly generative AI – be regulated without inhibiting innovation? 

“There’s a time crunch”

That day, the subject was still the main topic of discussion between the European Parliament and the Member States of the European Union. Since then, they have produced an initial regulation, a basic “AI Act” that could enable companies to develop useful, virtuous, safe solutions. But AI is evolving faster than the law. While ChatGPT has revolutionised the AI landscape in less than a year, legislation, in the view of the most optimistic, will take at least two years to become effective. “There’s a time crunch,” remarks the prefect Renaud Vedel, coordinator of the national strategy for artificial intelligence.




How can we legislate for something that is constantly changing? How can a regulation introduced today apply to a technology whose state of development in two years’ time is an unknown quantity? “Twenty years ago, new versions of software came out every five years. With AI, it’s every six months,” said Guillaume Leboucher, a member of the Docaposte executive committee and creator of the “AI for Schools” foundation. “There will always be time lags – it’s inherent to our work; that’s innovation. Is it that serious?” Maybe it isn’t that serious, but it raises questions, starting with the status of the law itself. It brings the contrast between civil law and common law to the fore again: if the law as written lags behind the actual situation, could case law make up the difference?

“We can’t blame artificial intelligence for every ill”

The corporate world is eager to experiment. “Today, if we don’t try things out, we don’t know what we’re doing,” said Guillaume Leboucher, who wants to “bring the law closer to the experts” so that the two sides can innovate together. Meanwhile, his colleague on the podium, Maxime Havez, Data Director at Crédit Mutuel Arkéa, “believed strongly in the certification approach” as a way of keeping pace with change: “This has enabled us to develop processes, tools and so on.” In this ever more VUCA (volatile, uncertain, complex, ambiguous) world, Guillaume Leboucher suggested learning from another disruptive event that took everyone by surprise: “Why not create a scientific body, as was done for Covid?”

But if we wonder about the how, we must also wonder about the why – because generative AI raises societal issues. “In a year, endless questions have come up,” said Franck Morel, a partner in the law firm Flichy Grangé. The former adviser to Prime Minister Édouard Philippe laid it on the line: “For the first time, a revolutionary technology is going to affect white-collar workers.” In his view, “a third of professional activity in France is exposed to generative AI.”

The value of labour came under scrutiny. Yann Ferguson, a doctor of sociology and professor at Toulouse Jean Jaurès University, suggested another perspective on the situation: will this new technology “alter or enhance the essence of labour? A recent study by the OECD showed that the majority of people who work with AI are happier than before. But closer examination of this study reveals that those who work with AI have a good experience of it, not those who are managed by it.” However, “we can’t blame AI for every ill,” said the sociologist. “AI has no ideas of its own. It has no aim to restrict real work. If that happened, there would be an organisation behind it.”

Human intelligence is ready for artificial intelligence

Yann Ferguson raised the question of the power to act: “At work, the most fundamental value is independence, i.e. what enables you to do what it takes for something to be well done. Even in the period of Taylorism, we knew that workers deviated from their instructions in order to do their work properly. […] If there is freedom of judgment, there is a sense of responsibility.” 

The sociologist already saw a responsible artificial intelligence emerging. In this climate of anxiety, the man ranked in 2018 among the 200 most influential French people in AI identified a “technology socialisation process.” “We are not condemned to a Darwinian approach to technology,” he said. In this attempt to regulate AI, “we are seeking to bend technology to our values. I see maturity here as regards technological development. In the past, we innovated and then discovered that there was a price to pay for progress, only to try to remedy it afterwards. Today, we are breaking out of this linear pattern: we’re looking to innovate while thinking about the possible cost involved.” 

Yann Ferguson’s message was not one that many are listening to: he called on us to believe in technology. “When the first hydrogen-powered aeroplane is unveiled in ten years’ time, I don’t imagine the CEO of Airbus saying he’s the last person in the world to have faith in his technology. And yet that’s exactly what former OpenAI CEO Sam Altman said about ChatGPT! If we build a technology, it’s because we believe in it and its ethical dimension.”

“Despite Sam Altman’s warnings, the average user is delighted by the services artificial intelligence provides…”

But this fine vision of a responsible AI did not convince everyone. One man in the audience stood up and surprised the gathering. Philippe Pradal, the former mayor of Nice, thought that Sam Altman was perfectly “clear-minded”. “I think that generative AIs are Trojan horses; they are not real AI,” said the member of Parliament for the Alpes-Maritimes, also a member of the French Law Commission. What concerned him was precisely that AI does not disturb people – or not enough. “It is the acceptability of AI that worries me. Despite Sam Altman’s warnings, the average user is delighted by the services AI provides…” Pressing his point, Philippe Pradal cited Rabelais and Asimov’s laws of robotics, then said, “There should be a general distrust of AI – throughout the value chain, not only among decision-makers.”




This is perhaps the chief contribution of legislation: encouraging people to come together to reflect and ask questions. This was also the message of Christophe Liénard, president of Impact AI, who reiterated firmly that “generative AI will not be deployed unless it is responsible.”

If responsible AI needs to be established, it is because it is a massive innovation: it concerns not only all sectors of the economy but also our lives as a society. AI is spreading throughout the world; its value chain is global, and it is challenging our relationship with the world. As employees, professionals and citizens, we are now involved in a learning process that requires innovation and regulation to work together.

Fabien Seraidarian, Vice Dean for Research & Knowledge Transfer, Scientific Director of MBA programmes, SKEMA Business School.


Kevin Erkeletyan, Editorial Manager, SKEMA Business School.
