She is a world-renowned authority on the philosophical and ethical aspects of science and technology. Professor Elisabeth Hildt (Illinois Institute of Technology) was SKEMA Business School’s special guest, on December 8th, for a very insightful and inspiring seminar on “Assessing Trustworthy AI”. On the sidelines of this event, she agreed to answer our questions.
We are entertained by many famous movies and series related to artificial intelligence (AI), like I, Robot, Her, and Love, Death & Robots. In your opinion, why is AI almost always portrayed as untrustworthy, or as a major threat, in these productions?
It’s true, artificial intelligence in movies or science fiction is often depicted as threatening humans and humankind. This is probably what is thought to make these movies interesting; it’s sensational. But this type of characterisation distracts us from normal life contexts. And there’s a risk. In movies, it’s engaging, but there is a clear risk that people get a wrong perspective on AI, that they don’t see the real-world problems and issues, but instead have fanciful, fictitious problems in mind that are not realistic. I don’t think that there’s a real risk that robots will take over the world or dominate humankind. AI is technology that humans control. It’s not a kind of enemy that miraculously appears. It’s not as if there is a technology that comes out of the blue and we can’t influence what’s going to happen.
But do you think it’s a warning to humans that AI is becoming stronger and stronger?
Yes, AI technology has been improving considerably. In certain very narrow contexts, AI can now perform better than humans. Right now, this is the case only in the very specific contexts for which a machine learning system has been trained, and so far this is very limited. But yes, the technology is improving. And there are attempts to develop artificial general intelligence, which would be more like human intelligence: more flexible and able to function in all sorts of contexts. But even then, it would still be technology, and it is humans who design, develop and shape it.
In your view, what responsibility do researchers have to build confidence in AI?
I think there’s a huge responsibility for researchers, computer scientists, and engineers, but also for policymakers, to aim at designing, developing, and deploying AI technology in such a way that it is beneficial to individuals and society. Overall, it is people who are accountable for what the technology does. And there are several criteria that are relevant here, including human autonomy, transparency, fairness, and non-discrimination against certain groups.
Since technologies are developing so rapidly nowadays, do you think we should ask ourselves whether we need to adapt and upgrade our ethics, or our way of thinking, to the new norms of technology?
That’s an interesting question. On the one hand, our values and moral views are not necessarily fixed and stable. They may undergo some changes with technological development. On the other hand, I do not think that we should say: “Oh yes, here’s this new type of technology and this is how it is, and we all have to adapt to it.” I mean, it is humans who build technology, and the task is to shape it so that it is in line with the values held in society. But it is also a fact that we adjust our behaviour to technology. For example, most of us regularly use our cell phones now, which we couldn’t have imagined 30 years ago. Cell phones have a broad influence on our behaviour. But the values we hold, and the general rules for interacting with each other, should still be in place when we communicate using cell phones or other types of technology, even though certain new rules have evolved. Overall, this is a shaping process, with a clear need to consider the ethical and societal implications of new technology.
Nowadays, AI is everywhere. Do you agree, or not, that AI should play a role in every part of our lives?
AI seems to be everywhere, but I don’t think that AI does play or should play a role in all parts of our lives. While I can see that AI is instrumental in a lot of contexts, it certainly is not fundamental to our human existence. We need to think more about what it is that makes us human and what it is that is really central to our lives. There are a lot of areas and life contexts that have nothing to do with technology or AI, such as interpersonal communication, friendships, or relationships.
So we can conclude that AI does play a role almost everywhere. But what kind of role? It depends, right?
Yes, we have to decide what roles we want it to play. It is important to shape the roles of AI so that it is beneficial for us, both at the individual and societal level.