
Generative AI might not be the efficiency booster you hope it will be

Published in Marketing Week, November 2024.


While it may seem that working in partnership with AI would lead people to be even more efficient, behavioural science tells us that may not be the case.


Do you ever feel your team doesn’t quite deliver on their full potential? Maybe response times are marginally slower than you might have hoped for? Or group idea generation feels a little pedestrian?


If so, don’t blame them: it’s a natural human phenomenon called social loafing. And it’s important to keep in mind when it comes to AI.


There’s a long-standing observation in psychology that groups of people often perform less well than the sum of their parts. A classic demonstration, and one of the oldest studies in social psychology, was published by Max Ringelmann in 1913. An agricultural engineer, he wanted to test what happens when people work together versus alone.


He set up a rope and asked people to pull it as hard as they could, tug-of-war style, measuring the force as they worked either solo or in groups. He quickly discovered that people in groups do not pull as hard as they do alone. A single person could pull 85kg, so you’d expect a group of seven to pull 595kg in total. In fact, they pulled 455kg, just 65kg each.


He also noticed that the larger the group, the less effort each person put into the task. Fourteen people, for example, pulled just 854kg (61kg each), well short of their 1,190kg potential.
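The arithmetic behind those figures is simple to check. A minimal sketch, using only the forces quoted above, computes the expected total, the actual per-person effort and the share of potential effort lost for each group size:

```python
# Ringelmann's tug-of-war figures: a solo pull of 85kg vs. measured group pulls.
SOLO_PULL_KG = 85

group_pulls = {7: 455, 14: 854}  # group size -> measured total force (kg)

for size, measured in group_pulls.items():
    expected = SOLO_PULL_KG * size        # naive sum of individual efforts
    per_person = measured / size          # actual effort per group member
    lost = 1 - measured / expected        # share of potential effort lost
    print(f"{size} people: expected {expected}kg, pulled {measured}kg "
          f"({per_person:.0f}kg each, {lost:.0%} of potential lost)")
```

Run as written, this reproduces the numbers in the text: seven people fall about 24% short of their potential, and fourteen about 28%, so the per-person shortfall grows with group size.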


It’s a phenomenon known as social loafing: the tendency for individual productivity to decrease as group size increases.


There have been several suggestions as to why it happens. Maybe each person feels less pressure to perform, so they don’t step up. Or perhaps in larger groups, individuals feel less connected to the reward associated with task completion, or less recognised, so motivation is lower. Maybe they just think the team will manage fine without them, so why bother? Whatever the reason, social loafing is real and has a significant impact on group output.


Can AI pull its weight?


Obviously every business wants to maximise productivity. So it’s important to be aware of the possibility of social loafing — especially as we begin to turn towards AI.


AI offers the tempting opportunity for all of us to offload a little heavy lifting. With AI promising to take the hard work out of everything from customer service to creative ideation, the prospect of onboarding it as your newest team member is extremely enticing.


But you should take note of social loafing. We’re already inclined to ease off when someone else might pick up the slack, and if that someone is a computer system, the same risk applies no matter how accurate the AI partner. A growing number of cautionary tales suggest that AI is not the silver bullet we’d like it to be.

One example comes from a 2022 study by Fabrizio Dell’Acqua at Harvard Business School, which looked at the use of AI in recruitment.


He asked 181 recruiters to assess a total of 7,964 CVs for a software engineering role, with shortlisting purely based on candidates’ maths skills — an objective metric that allowed for a clear comparison of recruiters’ accuracy.


Each recruiter had an AI assistant to aid in the task. However, some were given a low-quality AI tool, with 75% accuracy in predicting whether a candidate should be shortlisted, while others used a high-quality assistant with an 85% accuracy rate. Each recruiter knew which system they were using.


In addition to their hourly rate, the recruiters were incentivised with a bonus of up to $20 depending on their accuracy. Dell’Acqua measured the speed, effort, accuracy and confidence of each recruiter.


You’d naturally expect users of the high-accuracy tool to be on to a winner here. But the results were surprising: recruiters using the high-quality AI were less accurate, putting in less time and effort and simply relying on the algorithm’s suggestions. Those using the low-quality AI tool, by contrast, took a more critical approach, ultimately leading to more accurate decisions.


What’s happening here is related to the social loafing effect. We’re quite happy to step back and let AI take over the work. And these findings suggest that workers are more likely to free-ride when the quality of algorithmic advice is apparently high.


But this can backfire. Dell’Acqua captures the danger nicely in the paper’s title, ‘Falling Asleep at the Wheel’: there’s a risk we rely too heavily on AI that is good, but not as good as human judgement.


Rather than judging AI on its objective technological merits, we should evaluate it on the combined output of user and AI.


So, if you’re planning on harnessing the immense power of AI, keep in mind that your team might be tempted to rely too heavily on the technology. It’s vital to put in place processes to monitor and adjust when necessary.


Because, while machine learning can certainly offer cutting-edge benefits, the results of a farm study from over 100 years ago still apply.
