
Did my computer say it best?


Research finds trust in algorithmic advice from computers can blind us to mistakes

With autocorrect and auto-generated email responses, algorithms offer people plenty of assistance in expressing themselves.


But new research from the University of Georgia shows that people who relied on computer algorithms for help with language-related, creative tasks didn’t improve their performance and were more likely to trust low-quality advice.

Aaron Schecter, an assistant professor in management information systems at the Terry College of Business, had his study “Human preferences toward algorithmic advice in a word association task” published this month in Scientific Reports. His co-authors are Nina Lauharatanahirun, an assistant professor of biobehavioral health at Pennsylvania State University, and recent Terry College Ph.D. graduate Eric Bogert, now an assistant professor at Northeastern University.

The paper is the second in the team’s investigation into individual trust in advice generated by algorithms. In an April 2021 paper, the team found people were more reliant on algorithmic advice in counting tasks than on advice purportedly given by other participants.

This study tested whether people also defer to a computer’s advice when tackling more creative, language-dependent tasks. The team found participants were 92.3% more likely to use advice attributed to an algorithm than advice attributed to other people.

“This task did not require the same type of thinking (as the counting task in the prior study) but in fact we saw the same biases,” Schecter said. “They were still going to use the algorithm’s answer and feel good about it, even though it’s not helping them do any better.”

Using an algorithm during word association

To see if people would rely more on computer-generated advice for language-related tasks, Schecter and his co-authors gave 154 online participants portions of the Remote Associates Test, a word association test used for six decades to rate a participant’s creativity.

“It’s not pure creativity, but word association is a fundamentally different kind of task than making a stock projection or counting objects in a photo because it involves linguistics and the ability to associate different ideas,” he said. “We think of this as more subjective, even though there is a right answer to the questions.”

During the test, participants were asked to come up with a word tying three sample words together. If, for example, the words were base, room and bowling, the answer would be ball.

Participants chose a word to answer each question, were then offered a hint attributed either to an algorithm or to a person, and were allowed to change their answers. The preference for algorithm-derived advice held regardless of the question’s difficulty, the way the advice was worded, or the advice’s quality.

Participants who took the algorithm’s advice were also twice as confident in their answers as those who used the person’s advice. Despite that confidence, they were 13% less likely than those who used human advice to choose the correct answer.

“I’m not going to say the advice was making people worse, but the fact that they didn’t do any better yet still felt better about their answers illustrates the problem,” he said. “Their confidence went up, so they’re likely to use algorithmic advice and feel good about it, but they won’t necessarily be right.

Should you accept autocorrect when writing an email?

“If I have an autocomplete or autocorrect function on my email that I believe in, I might not be thinking about whether it’s making me better. I’m just going to use it because I feel confident about doing it.”

Schecter and his colleagues refer to this tendency to accept computer-generated advice without an eye to its quality as automation bias. Understanding how and why human decision-makers defer to machine learning software to solve problems is an important part of understanding what could go wrong in modern workplaces and how to remedy it.

“Often when we’re talking about whether we can allow algorithms to make decisions, having a person in the loop is given as the solution to preventing mistakes or bad outcomes,” Schecter said. “But that can’t be the solution if people are more likely than not to defer to what the algorithm advises.”