
Daryl Lim proposes “equity by design” framework in Duke Law Technology Review

Posted on February 4, 2025

Editor’s Note: A version of this article was published on Penn State News.

CARLISLE, Pa. — Daryl Lim, Penn State Dickinson Law associate dean for research and innovation, H. Laddie Montague Jr. Chair in Law and Institute for Computational and Data Sciences (ICDS) co-hire, has proposed an “equity by design” framework to better govern artificial intelligence (AI) technology and protect marginalized communities from potential harm.

Lim’s approach was published on Jan. 27 in the Duke Law Technology Review.

According to Lim, who is also a consultative member of the United Nations Secretary-General’s High-Level Advisory Body on Artificial Intelligence, responsibly governing AI is crucial to maximizing the benefits of these systems and minimizing their potential harms, which disproportionately impact underrepresented individuals. Governance frameworks help align AI development with societal values and ethical standards within specific regions while also assisting with regulatory compliance and promoting standardization across the industry.

Lim said that being socially responsible with AI means developing, deploying and using AI technologies in ethical, transparent and beneficial ways.

“This ensures that AI systems respect human rights, uphold fairness and do not perpetuate biases or discrimination,” Lim said.

Lim said that social responsibility extends to accountability, privacy protection, inclusivity and environmental considerations. Prioritizing these areas, he said, can mitigate risks such as discrimination, bias and privacy invasion, as well as build trust.

“Equity by design means we should embed equity principles throughout the AI lifecycle in the context of justice and how AI affects marginalized communities,” Lim said. “AI has the potential to improve access to justice, particularly for marginalized groups.”

For example, someone who does not speak English but has access to a smartphone with chatbot capabilities can input questions in their native language and get the information they need to get started. Lim also notes risks such as perpetuating biases and the algorithmic divide, which refers to disparities in access to AI technologies and education about these tools. Biases can also be introduced, even unintentionally, through the data these systems are trained on or by the people who train them.

The ultimate goal of Lim’s work is to shift the focus toward proactive governance by proposing an equity-centered approach that enhances transparency and tailors regulation. His research explores how AI can both improve access to justice and entrench biases, and it seeks to provide a roadmap for policymakers and legal scholars navigating the complexities of this rapidly advancing technology.

Lim also proposes equity audits as a solution, ensuring there are checks and balances on those who create AI systems before algorithms are released. He also notes the impact on the rule of law, which in this case involves assessing whether current legal frameworks address these challenges or whether reforms are necessary to uphold the rule of law in the age of AI.

“Emerging technologies like AI can influence fundamental principles and values that underpin our legal system,” Lim said. “This includes fairness, justice, transparency and accountability. AI technologies can challenge existing legal norms by introducing new complexities in decision-making processes, potentially affecting how laws are interpreted and applied.”

In September 2024, the “Framework Convention on Artificial Intelligence” was signed by the United States and the European Union (EU). This treaty establishes a global framework to ensure that AI systems respect human rights, democracy and the rule of law. The treaty specifies a risk-based approach that requires more oversight of high-risk AI applications in areas such as health care and criminal justice. It also acknowledges that different regions take different approaches to AI governance, emphasizing the importance of global collaboration in addressing these challenges.

Lim’s work “embeds the principles of justice, equity and inclusion throughout AI’s lifecycle,” aligning with the overarching goals of the treaty. Lim also emphasizes that AI should advance human rights for marginalized communities and that audits should be more transparent and protective.
