Ethical AI and Public Safety with Daniel Eborall

In this episode, The AI Chicks dive deep into the transformative power of AI in safeguarding communities, including the use of advanced facial recognition to find missing children. In a riveting conversation with Daniel Eborall, Global Director at IREX, a leader in ethical AI solutions for public safety, The AI Chicks discuss the tough challenges of bias, privacy, and transparency in surveillance technology. Daniel breaks down the delicate balance between enhancing security and protecting civil liberties, spotlighting IREX’s commitment to a community-first, transparent, and auditable approach. He also shares real-world stories and thoughtful reflections on shifting societal perceptions of surveillance and the importance of intentional, ethical AI design. This is an unmissable episode for anyone curious about the human impact of AI and the future of public safety.

Show Notes

EPISODE COVERS:

00:00 From Soccer to Security Expert

03:25 Managing Major Events and Disasters

06:28 "Ethical AI and Surveillance Innovation"

10:57 "Transparency and Ethical AI Tools"

15:14 "Uncoordinated Facial Recognition Bias"

20:13 "Transparent AI: Trust and Verify"

22:43 "Balancing Surveillance and Civil Liberties"

25:36 "AI Oversight and Transparency"

29:54 "Surveillance: Benefits and Drawbacks"

31:14 "AI-Powered Solution for Missing Children"

34:11 "Gratitude and Farewell Chat"

TIMELINE:

1.     Transition from sports to security management

2.     Microcosms: Sports events as public safety labs

3.     Hands-on experience managing major events

4.     AI’s growing impact on security solutions

5.     Core mission: Safety without civil liberties loss

6.     Community-driven, ethical AI adoption approach

7.     Surveillance cameras connected for smart alerts

8.     Facial recognition locating missing children

9.     Success stories: Reuniting families with AI

10.  Safeguarding datasets and accountable access

11.  Addressing bias in facial recognition algorithms

12.  Multi-agency alerts: Fire, police, emergencies

13.  Transparent, audit-ready AI system design

14.  Public concerns about surveillance and privacy

15.  Future: Generational shifts in surveillance acceptance

ethical AI, public safety, facial recognition, surveillance cameras, missing children, community-first approach, security frameworks, civil liberties, transparency, dataset bias, AI frameworks, law enforcement, human trafficking, privacy, search valence, auditability, guardrails, accountability, license plate recognition, societal trust, fire and smoke detection, cloud platform, computer vision, community empowerment, technology regulation, socioeconomic disparities, data retention, third-party auditing, AI-driven alerts, urban security
