
A research team is proposing a new form of governance to monitor the use of ethical AI algorithms. Image: Markus Spiske/Unsplash

Researchers propose ‘troll for good’ approach to guide ethical use of AI

Posted on February 16, 2023

UNIVERSITY PARK – Artificial intelligence (AI) and machine learning (ML) technologies are emerging and expanding so quickly that the regulations and policies meant to guide their use are lagging far behind, according to a team of researchers.

In an article in the current issue of Science, the team suggests merging two radical approaches – copyleft and patent trolling – to help “good trolls” enforce the ethical use of AI training datasets and models.

“Efforts to promote ethical and trustworthy AI must go beyond what is legally mandated as the baseline for acceptable conduct,” said Jennifer Wagner, assistant professor of law, policy and engineering at Penn State and an affiliate of the Institute for Computational and Data Sciences. “We can and should strive to do better than what is minimally acceptable.”

The research team calls its approach CAITE, which stands for Copyleft AI with Trusted Enforcement. CAITE pairs the unlikely tools of copyleft licensing and patent-troll enforcement methods, said Wagner, who worked with Cason Schmit, assistant research professor of health policy and management and director of the program in Health Law and Policy at Texas A&M University, and Megan Doerr, director of Applied ELSI Research at Sage Bionetworks.

Copyleft licensing lets people share copyrightable material under limited conditions or terms. In this model, creators who share their work under a copyleft license allow others to use it to create their own works, but only under the same or a compatible license as the original. Patent trolls, while often criticized for using legal threats to enforce intellectual property rights, are companies or organizations that acquire and enforce patents but do not manufacture or sell any products or services based on those patents. They generate revenue by licensing their patents or enforcing them against alleged infringers.

In the case of ethical AI, CAITE uses a copyleft approach to create an enforcement entity – or “troll” – that leverages an ethical use license defined by six core legal terms. These terms include restrictions and conditions that permit only ethical use of an AI model or training dataset, requirements for derivative products to be subject to the same ethical-use limitations, and assignment of enforcement rights to a designated entity.
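To make those mechanics concrete, the sketch below models how such a license might propagate. This is a minimal, hypothetical illustration in Python, not part of the published proposal; the names `EthicalUseLicense`, `LicensedObject` and `derive`, and the enforcement entity shown, are all assumptions, and only three of the six terms – the ethical-use restriction, inheritance by derivatives and assignment of enforcement rights – are represented.

```python
from dataclasses import dataclass

# Illustrative sketch only: names and structure are assumptions,
# not the published CAITE license terms.

@dataclass(frozen=True)
class EthicalUseLicense:
    """A copyleft-style license limited to ethical uses."""
    permitted_uses: frozenset[str]  # restriction: ethical uses only
    enforcement_entity: str         # rights assigned to one "troll for good"

@dataclass
class LicensedObject:
    """An AI model or training dataset governed by the license."""
    name: str
    license: EthicalUseLicense

    def derive(self, new_name: str) -> "LicensedObject":
        # Copyleft term: a derivative product carries the same
        # ethical-use limitations and enforcement assignment.
        return LicensedObject(new_name, self.license)

    def check_use(self, intended_use: str) -> None:
        # Violations are referred to the designated enforcement entity,
        # not litigated piecemeal by individual rights-holders.
        if intended_use not in self.license.permitted_uses:
            raise PermissionError(
                f"{intended_use!r} is not permitted; "
                f"report to {self.license.enforcement_entity}"
            )

# Usage: a dataset shared under the license, then reused in a derived model.
caite = EthicalUseLicense(
    permitted_uses=frozenset({"biomedical-research", "public-health"}),
    enforcement_entity="Trusted CAITE Enforcement Org",
)
dataset = LicensedObject("clinical-imaging-set", caite)
model = dataset.derive("diagnostic-model-v1")  # inherits the same terms
model.check_use("biomedical-research")         # permitted
# model.check_use("surveillance")              # would raise PermissionError
```

The design point the sketch captures is that a derivative never sheds the license: reuse copies the same terms forward, and every violation routes to a single trusted enforcer rather than to scattered rights-holders.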

“There’s no need to recreate the wheel when searching for AI governance strategies,” said Wagner. “Similarly, we don’t need to place extensive burdens on AI developers and scientists that would distract them from the important work they are trying to do to advance health and science. My coauthors and I propose that CAITE would create ‘trolls for good’ to promote and enforce adaptive ethical terms and conditions for AI objects through copyleft licensing.”

The ethical use license creates standards for ethical AI that persist as licensed objects are reused, and it pools enforcement rights in a single trusted entity. The researchers also envision a quasi-governmental regulatory agency that can leverage individual rights to further social objectives as a single, coherent regulator.

“One of the best features is that this approach could incorporate diverse terms and conditions appropriate for particular contexts and could adapt as those ethical norms change over time,” said Wagner.

Growing Concern

According to Wagner and her colleagues, the growing regulatory deficiency raises concerns about the potential for discrimination, particularly in AI’s use by commercial and Big Tech interests and in employment, policing and medicine. Current applications of AI suffer from problems such as opacity, inaccuracy, inappropriate use and a lack of community engagement or meaningful consent. Guidance for AI use in the health field is an area of particular concern, said Wagner.

“For biomedical AI in particular, it is critical that we establish a culture of vigilance to make sure that biomedical AI tools perform properly and are free from a wide range of ethical problems,” said Wagner. “For example, we need to make sure that biomedical AI data sets and models are not biased or perhaps even dangerous when used.”

Despite the ominous implications of falling behind in the AI regulatory race, Wagner is encouraged by recent efforts to address ethical AI.

“It’s an exciting time to see ethical AI mature with not only the recently issued Blueprint for an AI Bill of Rights from The White House OSTP and the AI RMF 1.0 issued by NIST but also with the CAITE governance model we’ve proposed,” said Wagner.

Wagner said the team was helped through participation in the intensive five-day “Innovation Lab: A Data Ecosystems Approach to Ethical AI for Biomedical and Behavioral Research” hosted by the NIH Office of Data Science Strategy last year. Only about 30 individuals were selected for the Innovation Lab, and Wagner said she is both “honored and grateful” that she and her colleagues were among them.
