People more likely to trust machines than humans with their private info

Posted on May 8, 2019
Some people naturally trust machines with their personal data — and that has positive and negative implications.

GLASGOW, Scotland — Not everyone fears our machine overlords. In fact, according to Penn State researchers, when it comes to private information and access to financial data, people tend to trust machines more than they trust other humans, which could lead to both positive and negative online behaviors.

In a study, people who trusted machines were significantly more likely to hand over their credit card numbers to a computerized travel agent than to a human travel agent. A bias holding that machines are more trustworthy and secure than people, known as the machine heuristic, may be behind the effect, said S. Shyam Sundar, James P. Jimirro Professor of Media Effects, co-director of the Media Effects Research Laboratory and affiliate of Penn State’s Institute for Computational and Data Sciences (ICDS).

“This tendency to trust the machine agent more than the human agent was much stronger for people who were high on the belief in the machine heuristic,” said Sundar. “For people who did not believe in the machine heuristic, it didn’t make a difference whether the travel agent was a machine or a human.”

This suggests that the presence of a machine agent on the interface served as a cue that triggered the machine heuristic, the ingrained belief that machines are superior, according to Sundar, who worked with Jinyoung Kim, a former doctoral student in mass communication and currently a researcher at Amazon.

That faith in machines may be triggered because people believe that machines do not gossip or have unlawful designs on their private information. However, Sundar said that while machines themselves may have no ulterior motives, the people developing and running those computers could prey on this gullibility to extract personal information from unsuspecting users, for example through phishing scams, in which criminals posing as trustworthy sources attempt to obtain user names, passwords, credit card numbers and other pieces of private information.

“This study should serve as a warning for people to be aware of how they interact online,” said Sundar. “People should be aware that they may have a blind belief in machine superiority. They should watch themselves when they engage online with robotic interfaces.”  

On the other hand, because some people trust machines more, developers could use the findings to create more user-friendly websites and applications that make people feel more comfortable completing transactions online, according to the researchers, who report their findings today (May 7) at the ACM CHI Conference on Human Factors in Computing Systems, held in Glasgow, UK.

“One way we can leverage this heuristic, especially for design, is, if you want to engender greater trust and you’re building an automated system, or an algorithm, making sure you identify it as a machine-based system — and there is no human in the loop — could actually increase trust,” Sundar said. “This is especially true in areas where involvement of humans can lead to unpredictable and undesirable outcomes.”

Sundar added that people with a high degree of trust in machines only need subtle design indications that they are interacting with a machine.

“In all of this, one thing I would like to stress is that the designers have to be ethical,” said Sundar. “They should not unethically try to extract information from unsuspecting consumers.” 

For the study, the researchers recruited 160 participants through Amazon Mechanical Turk, an online crowdsourcing platform frequently used in research. The participants were asked to use either a human or a machine chat agent to find and purchase a plane ticket online. After the agent returned the flight information, it prompted the participants for their credit card information, and the participants then reported their intention to provide it. The researchers measured the participants’ trust in machines by asking them to respond to a series of five statements about interacting with machines.

Future research will examine the role of the machine heuristic in promoting people’s trust in artificial intelligence, or AI, systems, particularly chatbots, smart speakers and robots.

The National Science Foundation supported this research.
