Community-Led AI Audits

Empowering Communities, Ensuring Accountability

In a world increasingly shaped by artificial intelligence, the need for responsible and inclusive AI practices has never been greater.

Irresponsible AI systems can marginalize entire communities, deny opportunities, and undermine public trust in innovation. The Eticas Foundation is at the forefront of addressing these challenges through Community-Led AI Audits—a transformative approach to ensuring transparency and accountability in AI technologies.

Community-Led AI Audits already published:

Automating (In)Justice?: An adversarial audit of RisCanvi

A tool designed to assess inmates' risk of recidivism should be robust and reliable. This adversarial audit, the first of an AI system used in a European criminal justice system, makes some shocking discoveries.

The case of Viogén: Can AI solve gender violence?

VioGén is an algorithm used in Spain that determines the level of risk faced by a victim of gender-based violence and establishes her protection measures.

Auditing TikTok. Social media's treatment of migrants

What is the impact of social media on the representation and voice of migrants and refugees in Europe?

Auditing YouTube. Social media's treatment of migrants

What is the impact of social media on the representation and voice of migrants and refugees in Europe?

What is a Community-Led AI Audit?

A Community-Led AI Audit is a systematic evaluation of AI systems, integrating the voices and experiences of those most affected. Unlike traditional audits, which may be conducted in isolation, our approach prioritizes community engagement. We assess entire AI systems—including multiple algorithms—within their unique contexts, ensuring our audits reflect real-world implications.

The Importance of Community-Led AI Audits

With global AI regulations evolving to demand heightened transparency and accountability, our community-led audits serve as a vital step. By involving the very people affected by AI technologies, we foster deeper conversations about their impacts and promote a risk-mitigating approach to AI development.

GitHub

At Eticas Foundation, we believe that transparency is key to fostering accountability and trust in the development and use of AI systems. That’s why all the data from our Community-Led AI Audits and Public-Interest Audits are published and made openly accessible through our GitHub.

Publishing our data allows others to build on our work, encourages collaboration, and helps drive responsible innovation that centers on human rights and public interest. We see open access to this data as a vital step in creating oversight mechanisms for AI systems that impact everyday lives.

Our Methodology: Rigorous and Adaptive

Our audits employ a socio-technical framework, recognizing the intricate interplay between algorithmic processes and social dynamics. Key features of our approach include:

01

Qualitative Contextual Analysis

We engage with stakeholders to understand the operational environment of algorithms, ensuring our audits are grounded in lived experiences.

02

Bias and Inefficiency Evaluation

Through comprehensive data analysis, we identify biases and inefficiencies, spotlighting areas for improvement within AI systems.

03

Community Involvement

By reverse engineering system processes with input from affected parties, we create a holistic assessment of algorithmic impacts, ensuring transparency where it matters most.

Amplifying Voices for Lasting Change

Engaging with communities affected by AI technologies is crucial to our mission. By actively listening to their experiences, we gain invaluable insights that inform our audit efforts and address real-world challenges. This commitment to empathy and integrity empowers us to foster fairness, equity, and accountability in algorithmic decision-making.

At the Eticas Foundation, our dedication to conducting rigorous, community-led audits is unwavering. However, we cannot achieve this alone. We invite organizations committed to social justice, human rights, and responsible technology to join us in this critical work.

Collaborate with Us

Whether you represent a civil society organization, research institution, advocacy group, or a community impacted by AI, your insights and expertise are vital. If you have knowledge or data related to an AI system and are passionate about making a difference, we want to hear from you.

Together, we can scrutinize AI systems, pinpoint areas for improvement, and advocate for meaningful change. Let’s work collaboratively to ensure AI technologies serve the greater good and uphold the values of equity, justice, and transparency.