
Bias a challenge for AI ethics, says Silicon Valley analyst

AI advocate Jessica Groopman speaks at the AGBI AI event in Dubai
  • Humans ‘embed’ biases
  • ‘Predictive policing’ warning
  • Exclusive AGBI event in Dubai

The ethical considerations surrounding artificial intelligence (AI) should focus on the inherent biases introduced during the design phase by those crafting the technology, Silicon Valley analyst and AI advocate Jessica Groopman told an event organised by AGBI last week.

“When we create these tools, we inevitably embed biases in them, reflecting our human biases,” she told delegates at an invitation-only event at Dubai’s Capital Club.

To address this challenge, Groopman stressed the importance of intentionally including diverse perspectives and considering potential harms and worst-case scenarios.

Predictive policing, which uses algorithms to forecast where and when crimes are likely to happen, is an example where ethical questions loom large.

The global predictive policing market is expected to reach $14.8 billion by 2030, according to GII Research.

Groopman warned that AI systems that rely on biased historical data for predicting criminal behaviour risk perpetuating disparities. 

In sectors such as healthcare, she said, transparency and explainability were vital for establishing trust in AI-driven decisions.

She also raised questions about ensuring the correctness of AI. “How can a doctor trust what AI is saying or what drug to prescribe or what surgery or therapeutic to administer?” Groopman asked.


Groopman, who lives in San Francisco, emphasised the central role of ethical AI development, and highlighted the need for careful consideration of data sources, acquisition methods and individual awareness throughout the technology lifecycle.

While she acknowledged that ethical design was “developers’ responsibility”, she emphasised that societal involvement was crucial in shaping the ethical landscape of AI.

Robust regulations, influenced by voters and policymakers, can guide ethical AI use, Groopman said. 

Recent advances in the UAE, such as M42’s Med42, a generative AI model, demonstrated the potential benefits of AI in healthcare, but the need for transparency and the assurance of correctness remained paramount, she said.

“As AI becomes increasingly pervasive, safeguarding individual and collective interests requires continuous vigilance and active participation from all stakeholders,” Groopman stressed.

These concerns are likely to be among the issues addressed by the International Centre for Artificial Intelligence Research and Ethics, which Saudi Arabia announced this month it will set up in the kingdom.