
Daryl Lim proposes “equity by design” framework in Duke Law Technology Review
Posted on February 4, 2025
Editor’s Note: A version of this article was published on Penn State News.
CARLISLE, Pa. — Daryl Lim, Penn State Dickinson Law associate dean for research and innovation, H. Laddie Montague Jr. Chair in Law and Institute for Computational and Data Sciences (ICDS) co-hire, has proposed an “equity by design” framework to better govern artificial intelligence technology and protect marginalized communities from potential harm.
Lim’s approach was published on Jan. 27 in the Duke Law Technology Review.
According to Lim, who is also a consultative member of the United Nations Secretary General’s High-Level Advisory Body on Artificial Intelligence, responsibly governing AI is crucial to maximizing the benefits and minimizing the potential harms — which disproportionately impact underrepresented individuals — of these systems. Governance frameworks help align AI development with societal values and ethical standards within specific regions while also assisting with regulatory compliance and promoting standardization across the industry.
Lim said that being socially responsible with AI means developing, deploying and using AI technologies in ethical, transparent and beneficial ways.
“This ensures that AI systems respect human rights, uphold fairness and do not perpetuate biases or discrimination,” Lim said.
Lim said that social responsibility extends to accountability, privacy protection, inclusivity and environmental considerations, and that prioritizing these areas can mitigate risks such as discrimination, bias and privacy invasion while also building trust.
“Equity by design means we should embed equity principles throughout the AI lifecycle in the context of justice and how AI affects marginalized communities,” Lim said. “AI has the potential to improve access to justice, particularly for marginalized groups.”
For example, someone who does not speak English but has access to a smartphone with chatbot capabilities can input questions in their native language and get the information they need to get started. Lim also points to risks, such as perpetuating biases and the algorithmic divide, which refers to disparities in access to AI technologies and education about these tools. Biases can also be introduced, even unintentionally, by the data these systems are trained on or by the people training them.
The ultimate goal of Lim’s work is to shift the focus to proactive governance by proposing an equity-centered approach that enhances transparency and supports tailored regulation. His research explores how AI can both improve access to justice and entrench biases, and it seeks to provide a roadmap for policymakers and legal scholars navigating the complexities and advancements of this technology.
Lim also suggests equity audits as a solution, ensuring there are checks and balances on those who create AI systems before algorithms are released. He also notes the impact on the rule of law, which involves assessing whether current legal frameworks address these challenges or whether reforms are necessary to uphold the rule of law in the age of AI.
“Emerging technologies like AI can influence fundamental principles and values that underpin our legal system,” Lim said. “This includes fairness, justice, transparency and accountability. AI technologies can challenge existing legal norms by introducing new complexities in decision-making processes, potentially affecting how laws are interpreted and applied.”
In September 2024, the “Framework Convention on Artificial Intelligence” was signed by the United States and the European Union (EU). The treaty establishes a global framework to ensure that AI systems respect human rights, democracy and the rule of law. It specifies a risk-based approach that requires more oversight of high-risk AI applications in areas such as health care and criminal justice. It also recognizes that different regions take different approaches to AI governance, emphasizing the importance of global collaboration to address these challenges.
Lim’s work “embeds the principles of justice, equity and inclusion throughout AI’s lifecycle,” which aligns with the overarching goals of the treaty. Lim also emphasizes that AI should advance human rights for marginalized communities and that audits should be more transparent and protective.