PEOPLE are commonly blind to how much influence generative AI (GenAI) has over their work when they enlist technologies such as ChatGPT to complete professional or educational tasks, new research finds.
The study, carried out by associate professors Dr Mirjam Tuk and Dr Anne Kathrin Klesse alongside PhD candidate Begum Celiktutan at Rotterdam School of Management Erasmus University, reveals a significant discrepancy between what people consider to be an acceptable level of AI use in professional tasks, and how much impact the technology has on their work.
This, the researchers say, makes the ethics and limitations of using such technologies difficult to define, as whether GenAI usage is acceptable is not a clear-cut question.
“Interestingly, it seems acceptable to use GenAI for ourselves but less so for others. This is because people tend to overestimate their own contribution to the creation of things like application letters or student assignments when they co-create them with GenAI, because they believe that they used the technology only for inspiration rather than for outsourcing the work,” says Dr Tuk.
The researchers draw these conclusions from experimental studies conducted with more than 5,000 participants.
Half of the studies’ participants were asked to complete (or to recall completing) tasks ranging from job applications and student assignments to brainstorming and creative assignments with the support of ChatGPT if they wished.
To understand how participants might also view others’ use of AI, the other half of the studies’ participants were asked to consider their response to someone else completing such tasks with the help of ChatGPT.
Afterwards, all participants were asked to estimate the extent to which they believed ChatGPT had contributed to the outcome. In some studies, participants were also asked to indicate how acceptable they felt the use of ChatGPT was for the task.
The results showed that, when evaluating their own output, participants estimated on average that 54% of the work was their own, with ChatGPT contributing 46%.
But when evaluating other people’s work, participants were more inclined to believe that GenAI had done the majority of the heavy lifting, estimating human input at only 38%, compared with 62% from ChatGPT.
In keeping with the theme of their research, Dr Tuk and her team used a ChatGPT detector to test how accurate participants’ estimates were of how much of their own work, and the work of others, had been completed by the technology rather than by human effort.
The difference in estimated contribution by the creator and by ChatGPT, the researchers say, highlights a worrying level of bias and blindness toward how much of an impact GenAI really has on our work output.
“Whilst people perceive themselves as using GenAI to get inspiration, they tend to believe that others use it as a means to outsource a task,” says Dr Tuk. “This prompts people to think that it is totally appropriate for themselves to use GenAI, but not for others to do the same.”
To overcome this, the researchers say, instilling awareness of this bias, in one’s own use of GenAI as well as in others’, is vital when embedding the technology and setting guidelines for its use.
The full study “Acceptability Lies in the Eye of the Beholder: Self-Other Biases in GenAI Collaborations” is available to read in the International Journal of Research in Marketing.