Robots lack authority over humans due to… “interactional justice”. Professor Valentina Pitardi, Senior Lecturer in Marketing at Surrey Business School, presented her research on this topic during the SKEMA Centre for Artificial Intelligence (SCAI) seminar series. We explore with her the reasons behind this resistance.
You found that people are less likely to follow instructions given by robots than by humans. Why is that?
Generally, people tend not to comply when instructions come from robots (Figure 1). This is largely due to what we call the 'perception of justice': when we receive instructions from robots, we perceive the treatment as less fair than if it came from a human.
This can be explained by one of the dimensions of justice, known as ‘interactional justice’, which is about feeling respected during the interaction. Our studies showed that people perceive low interactional justice with robots, meaning they feel the robots didn’t treat them with the respect and dignity they’d expect from a human.

Did you find any cultural or generational differences?
In terms of generation, we included age as a control variable (with participants from 18 to 75 years old), but we didn't find significant differences. That said, this could change in the future, perhaps in 10 or 20 years, when people are more accustomed to robots, especially the younger generation, who seem to differentiate less between, for example, what is virtual and what is not.
As for culture, our research focused primarily on Western countries. Our experiments were conducted in the UK with British citizens, and our online samples were drawn from the US, Canada, and Britain. We found consistent results across these countries. However, it would be interesting to explore this in other regions, where cultural norms around authority and compliance might differ.
Does the type of robot and its characteristics (e.g. human likeness) influence compliance?
Not really. For instance, we tested whether the robot's gender had any impact. Our hypothesis was that people might perceive 'female robots' as warmer and more empathetic, and thus feel less mistreated, leading to greater compliance. But we found no such effect. We also looked at the robot's level of human-likeness and perceived intelligence, but neither affected compliance.
So far, nothing we have tried with the robots has really changed the results. People seem to feel that robots cannot ensure respectful and fair treatment, and this appears to affect compliance.
Interestingly, in one of our experiments, we found that people tend to follow instructions given by a sign more than those given by a robot (Figure 2). This suggests that the absence of a human is not the problem; the perceived lack of justice is.

What about the robot’s voice? Could unfamiliarity be the problem?
The robots we used have Pepper's voice, which is human-like but still recognisably artificial. While we didn't manipulate the voice to test its effect, it's possible that it contributes to the perception of distance and difference between humans and robots. However, if we were to give robots human voices, we'd essentially be reinforcing the idea that you need a human.
It's true that we're not used to receiving instructions from robots, and habits do play a role in compliance: we grow up learning to follow orders from humans and signs. We tested this by measuring participants' perception of novelty, and even after accounting for how new or unfamiliar they found the robot, the effect remained.
Does this resistance tie into perceptions of authority? Do only humans have authority over other humans?
People do recognise that humans have more authority than robots, but in our research, authority didn't mediate compliance. We also tested whether social consequences would influence compliance, and while they played a role, they weren't the main driver. In both cases, it was the sense of interactional justice that stood out.
When people follow rules for the collective good, like staying silent in a classroom so everyone can hear the teacher, compliance is often driven by a shared understanding of why the rule exists. It’s a small personal sacrifice, which feels justified when asked by another human.
However, when a robot gives the same instruction, people resist because they feel that the robot cannot relate to what it’s asking. It doesn’t understand what it means to sacrifice something for others, making the interaction feel impersonal and unfair.
This effect might not be as strong in contexts where authority and consequences are more significant, such as law enforcement, because people mostly obey to avoid fines or jail. In those contexts, authority could be a bigger driver.
Did you explore situations where robots are more effective than humans?
Yes, we found that when people feel mistreated by a human, for example when certain groups are prioritised over others in a way that feels discriminatory, robots can actually perform better.
For example, in contexts where families or older people are given priority, some people might feel unfairly treated. A robot giving those same instructions might be seen as impartial, since it is programmed and doesn't actually know what it's doing. In that case, people comply less with the human and more with the machine.
But this is not something actionable for businesses. We don’t want to justify mistreating people to make robots appear more effective.

So, what recommendation would you give companies looking to implement robots in roles requiring compliance?
Companies should be careful about assigning robots to contexts where people could feel mistreated or frustrated. The main recommendation would be to pair robots with humans, rather than leaving them to operate alone. Human oversight can help balance the interaction and mitigate the perceived lack of fairness.
That said, we generally advise against using robots in contexts where compliance is important but easy to avoid, such as paying taxes or handling refunds. These situations could lead to higher instances of cheating and can be costly.
What societal impact does this research reveal?
What makes this research different from other work on AI is the focus on instructions. We have built a habit of complying with instructions almost automatically (with some exceptions); it's part of how society functions.
Our findings suggest that robots disrupt this dynamic. Habits and societal norms are being challenged by technological innovation, and this opens a dialogue about what that means, as we integrate robots into more aspects of our daily lives.