What is ChatGPT and how can it be used ethically?

What is ChatGPT and what can it be used for? An interview with S. Shyam Sundar and Shomir Wilson

Posted on April 24, 2023

UNIVERSITY PARK, Pa. — ChatGPT, a natural language processing tool powered by artificial intelligence (AI), has been making headlines since it was released by OpenAI in November 2022. Since then, the technology has raised many questions, including what unintended consequences might arise and how ChatGPT can be used ethically.

S. Shyam Sundar, the James P. Jimirro Professor of Media Effects and director of the Center for Socially Responsible Artificial Intelligence at Penn State, and Shomir Wilson, assistant professor of information sciences and technology and director of the University’s Human Language Technologies Lab, explain what ChatGPT is and what it can be used for.

What is ChatGPT?

Wilson: ChatGPT is the result of research in a field called natural language processing (NLP), which is a branch of artificial intelligence that’s all about getting computers to understand human language. The technology has been around for decades in various forms but has grown rapidly in recent years. For example, search engines and virtual assistants such as Apple’s Siri and Amazon’s Alexa all use NLP to understand your queries and retrieve the information you want.

Sundar: That same technology is behind the autocomplete function in iPhones and email clients. ChatGPT, however, belongs to a new class of generative AI, which can not only search and imitate existing content but actually create new content. That is what all the excitement is about. ChatGPT is a member of the generative pre-trained transformer (GPT) family of language models, which generate new content based on millions of pieces of text written by humans throughout history. ChatGPT added a chat interface to the GPT technology, making it more conversational and easier for the wider population to use.
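To make the idea of a chat interface layered on a GPT model concrete, here is a minimal sketch of a single conversational turn, assuming the official openai Python package (v1 or later) and an API key in the OPENAI_API_KEY environment variable. The model name, role setup and prompts are illustrative placeholders, not details from the interview:

```python
# Minimal sketch: one conversational turn against a GPT chat model.
# Assumes: `pip install openai` (v1+) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder; any chat-capable GPT model works
    messages=[
        # The chat interface wraps the underlying language model in roles,
        # which is what makes the interaction feel conversational.
        {"role": "system", "content": "You are a helpful writing assistant."},
        {"role": "user", "content": "Draft a short thank-you letter to a mentor."},
    ],
)

print(response.choices[0].message.content)  # the model's conversational reply
```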

What are some ethical uses of ChatGPT?

Sundar: It’s a good first-draft assistant. ChatGPT is well-versed in the general traditions of letter writing that millions of humans have followed over time. It follows the letter-writing formula, and its output can then be customized or added to by the person using it. People in advertising or public relations can use it to generate draft content such as ad copy and press releases, which can then be edited and crafted into final products. If it is used in a professional capacity, it is good practice to disclose that GPT technology was part of the process. I think we are headed in the direction of human-AI collaboration. It’s not that machines are going to replace us, but rather that they will take away some of the tedium and allow us to focus more on creativity and high-level embellishments.

How are people unethically using ChatGPT?

Sundar: No single technology is by design socially responsible or irresponsible. It depends on how the user puts it to use. As an educator, one of my early concerns was the potential for students to cheat on term papers or even on their college application essays, where they are given a prompt and asked to write a coherent essay that college admissions committees and professors use to judge them. Those kinds of tasks can be easily outsourced to chatbots, and there’s no easy way to tell whether a student wrote an essay themselves or had ChatGPT write it for them. It is very difficult to detect. You can’t just run plagiarism software and expect it to flag the text, because GPT generates new content every time. Catching it requires a very vigilant professor investing time they normally don’t have in grading a paper.

Wilson: The threat that large language models pose to cybersecurity is growing. Beyond things like phishing attempts, there has been some research on using ChatGPT to write malware code that finds and exploits vulnerabilities. The technology also makes it easy for bad actors to generate many articles pushing a particular viewpoint they want to advance in online discourse. You used to have to hire a person to write these articles or opinion pieces. Now it’s possible to generate a lot of them in a short amount of time and then, presumably, post them to several different platforms. So if someone searches for public opinion, they will see all these articles that seem to be written by people but in fact aren’t.

What do people need to understand about ChatGPT?

Sundar: Machines don’t have souls, and they don’t have creativity or originality in the same way that a human does. Therefore, whatever ChatGPT generates is really a cobbling together of words that may sound right but may not be factually correct. This generation of NLP makes text sound authoritative because it’s written in the correct style, and our instinct is to trust fully formed sentences over the robotic responses we became used to in the previous era of AI.

People should understand that machines can be a source of communication, but they should not fall for “machine heuristics,” the tendency to believe that machines are always accurate, objective or unbiased. Machines can be easily manipulated. They rely on a corpus of human-written texts and form sentences based on how frequently words have followed one another in the past, without really understanding the meaning of what they produce.
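Sundar’s point about frequency without understanding can be illustrated with a toy next-word model. The sketch below, in Python with an invented two-sentence corpus (nothing here comes from the interview itself), chains words purely by how often one word followed another in its training text:

```python
import random
from collections import Counter, defaultdict

# Toy corpus invented for illustration; a real model trains on vast text.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start, length=8):
    """Chain words by sampling each next word in proportion to how often
    it followed the previous word in the corpus: frequency, not meaning."""
    words = [start]
    for _ in range(length):
        counts = following.get(words[-1])
        if not counts:
            break  # no recorded continuation for this word
        choices, weights = zip(*counts.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g., "the cat sat on the mat . the dog"
```

A GPT-scale model is vastly more sophisticated than this, but the underlying principle the interviewees describe is the same: the next word is chosen from patterns of past usage, not from any grasp of meaning.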

What are some challenges with ChatGPT that need to be addressed moving forward?

Wilson: We don’t want to destroy these technologies or outlaw them, but we do need to figure out how to create them ethically, and that’s an area we need to explore further. We also need to start talking more about the concerns surrounding bias and the potential for large language model-based technologies to exclude marginalized groups that are underrepresented in the training data.

If we want everyone to be able to take advantage of these language technologies and not be left out, we need to closely examine how they respond to text produced by different groups, because different groups do write differently. The models also need to be responsive to differences in the text they generate. For example, GPT needs to be able to recognize that engineers and doctors are also often women and nonbinary folks, in addition to men. We have made progress on that front with language technologies, but every time there’s a new generation, we have to reexamine those questions and make sure that we’re still doing the right thing.

Sundar and Wilson are affiliates of the Institute for Computational and Data Sciences.

Penn State News
