Adding human touch to unchatty chatbots may lead to bigger letdown
Posted on April 18, 2019
By Matt Swayne
Article originally published on Penn State News
UNIVERSITY PARK, Pa. — Sorry, Siri, but just giving a chatbot a human name or adding humanlike features to its avatar might not be enough to win over a user if the device fails to maintain a conversational back-and-forth with that person, according to researchers. In fact, those humanlike features might create a backlash against less responsive humanlike chatbots.
In a study, the researchers found that chatbots with human features — such as a human avatar — but low interactivity disappointed the people who used them. However, people responded better to a less-interactive chatbot that did not have humanlike cues, said S. Shyam Sundar, James P. Jimirro Professor of Media Effects, co-director of the Media Effects Research Laboratory and affiliate of Penn State’s Institute for CyberScience (ICS).
High interactivity is marked by swift responses that match a user’s queries and feature a threaded exchange that can be followed easily, according to Sundar.
“People are pleasantly surprised when a chatbot with low anthropomorphism — fewer human cues — has higher interactivity,” said Sundar. “But when there are high anthropomorphic visual cues, it may set up your expectations for high interactivity — and when the chatbot doesn’t deliver that — it may leave you disappointed.”
On the other hand, improving interactivity may be more than enough to compensate for a less-humanlike chatbot. Even small changes in the dialogue, like acknowledging what the user said before providing a response, can make the chatbot seem more interactive, said Sundar.
“In the case of the low-humanlike chatbot, if you give the user high interactivity, it’s much more appreciated because it provides a sense of dialogue and social presence,” said Eun Go, lead author of the study, a former doctoral student at Penn State who is now an assistant professor of broadcasting and journalism at Western Illinois University.
Because there is an expectation that people may be leery of interacting with a machine, developers typically add human names to their chatbots — for example, Apple’s Siri — or program a human-like avatar to appear when the chatbot responds to a user.
The researchers, who published their findings in Computers in Human Behavior, currently online, also found that just mentioning whether a human or a machine is involved — or, providing an identity cue — guides how people perceive the interaction.
“Identity cues build expectations,” said Eun Go. “When we say that it’s going to be a human or chatbot, people immediately start expecting certain things.”
Sundar said the findings could help developers improve acceptance of chat technology among users. He added that virtual assistants and chat agents are increasingly used in the home and by businesses because they are convenient for people.
“There’s a big push in the industry for chatbots,” said Sundar. “They’re low-cost and easy-to-use, which makes the technology attractive to companies for use in customer service, online tutoring and even cognitive therapy — but we also know that chatbots have limitations. For example, their conversation styles are often stilted and impersonal.”
Sundar added the study also reinforces the importance of high interactivity, broadly speaking.
“We see this again and again that, in general, high interactivity can compensate for the impersonal nature of low anthropomorphic visual cues,” said Sundar. “The bottom line is that people who design these things have to be very strategic about managing user expectations.”
The researchers recruited 141 participants through Amazon Mechanical Turk, a crowdsourcing site where people are paid to take part in studies. The participants signed up for a specific time slot and reviewed a scenario in which they were shopping for a digital camera as a birthday present for a friend. Then the participants navigated to an online camera store and were asked to interact with its live chat feature.
The researchers designed eight different conditions by manipulating three factors to test users’ reactions to the chatbot. The first factor was the chatbot’s identity: when a participant engaged in the live chat, a message indicated that the user was interacting either with a chatbot or with a person. The second factor was the chatbot’s visual representation: in one condition the chatbot had a humanlike avatar, and in another it had only a speech bubble. Third, the chatbot featured either high or low interactivity when responding to participants, the only difference being that a portion of the user’s message was repeated back in the high-interactivity condition. In all cases, a human was actually interacting with the participant.
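The study’s design crosses three two-level factors, which is why it yields exactly eight conditions. A minimal sketch of that 2×2×2 factorial enumeration (the factor labels here are paraphrased from the description above, not the researchers’ exact wording):

```python
from itertools import product

# Illustrative labels for the three manipulated factors described above.
identities = ["chatbot", "human"]                 # identity cue shown to the participant
visuals = ["humanlike avatar", "speech bubble"]   # visual representation of the chatbot
interactivity = ["high", "low"]                   # whether part of the user's message is echoed back

# Crossing three two-level factors produces the eight experimental conditions.
conditions = list(product(identities, visuals, interactivity))
for number, condition in enumerate(conditions, start=1):
    print(number, condition)
```

Each participant would be assigned to one of these eight tuples, which is the standard way a fully crossed factorial design is enumerated.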
While this study was carried out online, the researchers said that a possible next step would be to observe how people interact with chatbots in a laboratory setting.