
How to Build Ethical AI Products: Insights on Responsible AI Development

AI Governance | Oct 15, 2025
 

 

Part 2 of SheAI's AI Governance Series in Collaboration with The Bloom 

Read Part 1: AI Governance: Ethical Frameworks for Human-Centered Artificial Intelligence in 2025

 

As AI systems increasingly shape our lives, businesses, and societies, a critical question emerges: who gets to decide how these powerful technologies are built and deployed?

The answer matters more than most people realize. When AI development happens in homogeneous spaces dominated by specific demographics and worldviews, the resulting systems reflect narrow perspectives that can perpetuate bias, exclude vulnerable populations, and create unintended harms at scale. This is where ethical AI product design comes into play: it ensures that AI is developed and deployed responsibly.

In this conversation, part 2 of SheAI's AI Governance Series in collaboration with The Bloom, we sat down with two leading experts who are pioneering ethical AI development from different angles: a professor of AI design working in Europe and an NGO practitioner working in the United States. Together they bring theory and practice into one conversation, each offering a distinct perspective that made for a fascinating exchange.

 

Meet Our Experts

Marihum Pernia is a professor and a hybrid designer with a background in strategic design. Born in Venezuela and now working as a professor at Politecnico di Milano in Italy, she brings a unique international perspective. She is also a founder of labcoexist: a tech education hub for AI development with a critical, transdisciplinary approach.

Ashley Khor is a researcher-practitioner living in Washington DC. She leads the EAAMO academic working group, evolving it into Living Labs, where nonprofits and researchers co-experiment to test what is "safe enough to try" with generative AI. She also runs AI in Common, a public knowledge system exploring ethical and survivor-centric AI design. Ashley has dedicated her work to human-centric products designed to help people who have gone through trauma and difficult experiences.

Together, they offer perspectives from both sides of the Atlantic, navigating the regulatory differences between the US and EU while grappling with the same fundamental challenge: how do we build AI that serves humanity rather than harming it? And, can we agree on what global AI ethics looks like?

 

 

Why Is Diversity in AI Critical to Ethical AI Design?

 

The AI industry has a representation problem that directly affects product safety and ethics. Women comprise only 22% of AI talent globally as of 2024, and their share falls below 15% in senior executive positions. This isn't just a statistic; it's a warning about who gets to shape the technology transforming our world. According to McKinsey's 2024 "Women in the Workplace" report, companies with gender-diverse AI teams outperform male-dominated ones by 26% in identifying algorithmic bias in AI models during product testing. UNESCO's 2023 study showed that 80% of top AI systems exhibit regressive gender stereotypes, putting women and minorities at increased risk of algorithmic exclusion.

The diversity prediction theorem formalizes what many have long suspected: diverse groups consistently outperform homogeneous teams in problem-solving and innovation. European countries such as Latvia and Finland maintain over 40% female representation in AI, outpacing major economies like Germany and the U.S., and this balance correlates directly with improved AI fairness outcomes.
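For readers who want the formal statement behind this claim, Scott Page's diversity prediction theorem relates a group's collective error to its members' individual errors and the diversity of their predictions (here s_i are individual predictions, s̄ their average, and θ the true value):

```latex
% Diversity prediction theorem (Scott Page):
% collective error = average individual error - prediction diversity
(\bar{s} - \theta)^2
  = \frac{1}{n}\sum_{i=1}^{n} (s_i - \theta)^2
  - \frac{1}{n}\sum_{i=1}^{n} (s_i - \bar{s})^2
```

Because the diversity term on the right is never negative, the group's collective prediction can never be worse than its average member, and it improves as the predictions become more diverse.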

"Diverse groups will always outperform an individual expert or a homogeneous group, because the diverse voices altogether create an inclusive view and reduce the individual biases," explains Maja Zavrsnik, Co-founder and CMO of SheAI.

 

EU vs. US: Two Paths to AI Regulation

 

Europe's Preventive Approach

In this fast-paced world, how can we ensure responsible AI? The regulatory landscape reveals a fundamental tension between innovation speed and safety. The EU AI Act, with key obligations taking effect in August 2025, sets a gold standard for risk-based regulation. Over 2,000 organizations across Europe began active compliance as of Q3 2025, responding to phased obligations targeting everything from high-risk models to General Purpose AI (GPAI).

"In the EU AI Act, there's a heavy ex-ante emphasis, and this focuses on preventative measures. We see a lot of detailed risk-based obligations that need to be met before the AI can be deployed," said Ashley.

 

America's Reactive Model

The U.S. has taken the opposite approach: minimal federal regulation, sector-specific laws, and voluntary guidance. "The US has traditionally been ex-post, focusing less on preventative instructions and more on monitoring and penalties," Ashley notes. The US did launch the National AI Research Resource (NAIRR) in 2024, offering ethical research infrastructure for AI and aiming to close governance gaps. However, only 18 states had introduced AI-specific laws addressing harm as of late 2024, per the Stanford AI Index.

This lighter-touch regulation fosters innovation, but attention skews toward catastrophic risks, with far less focus on mitigating everyday harms to people. The divergence has real consequences: "This year, there's a very clear direction from the administration where a lot of funding is focused on national security and defense... Meanwhile, AI companions are flooding the market," Ashley observes.

As Marihum highlighted: "What are the trade-offs to guarantee ethics in a society that right now is being bombarded by a lot of information, a lot of bias, and is creating a lot of harm?"

 

 

The New Harms of Generative AI

 

Generative AI introduces complexities that extend far beyond traditional machine learning concerns. Here are the most pressing issues facing ethical genAI today.

 

Hallucinations and Eroding Trust

AI hallucinations, when systems generate plausible but entirely false content, threaten information integrity at scale. Recent analysis finds that large language models have average hallucination rates of 3–5% on general queries, with certain tasks (legal or scientific) seeing 6–17% rates even for top models. Harvard Kennedy School (2025) reported that nearly 61% of generative models introduced hallucinated citations when given academic-style prompts, risking harm in sensitive sectors like health or education.

Recent examples include the Chicago Sun-Times publishing a summer reading list with invented books, and fake citations appearing in high-profile policy documents.

"ChatGPT is well-known to be a yes-man," Maja warns. "It will instantly reconfirm your beliefs. You'll always be in the right, and you'll always be left feeling satisfied. Most AI solutions prioritize engagement over customer safety, which leads to harm and real-life consequences. "

 

Psychological Harms

Research from Stanford and MIT (2025) documented cases of amplified self-harm and delusions linked to unmonitored AI tools, specifically AI chatbots, and called urgently for more robust safety frameworks. Headlines like "ChatGPT caused me to have delusions" and "ChatGPT encouraged suicide" reveal disturbing new risks. As Ashley explains: "The amplified self-harm of having an AI that can converse with you and almost come across as a person is a real social problem that we don't really see solutions for yet."

 

Cultural Flattening

Modern large language models, often trained on Reddit, Wikipedia, or mainstream platforms, inherit and perpetuate dominant cultural narratives. MIT and Georgia Tech studies found that 75% or more of LLM outputs align closely with English-speaking and Western European cultural values, causing what researchers call "cultural flattening" in user experiences. "We can see cultural biases and stereotypes coming out in quite elegantly conveyed speech," Ashley notes.

 

The Catastrophic Risk Paradox

Catastrophic risks dominate attention and funding, and that disproportionate focus creates a paradox. While efforts to prevent worst-case scenarios matter (and rightly receive investment), everyday human harms, such as misinformation spread, biased decision-making in hiring or lending, psychological impacts from conversational AI, and accessibility barriers, remain critically under-resourced. These day-to-day issues affect millions directly but receive far less attention and funding for prevention or for education on safe AI use.

 

 

Why Organizations Fail at AI Ethics

 

Technical Dominance Without Diverse Input

Now let's examine the ethical considerations that arise when deploying AI technologies, and why we're seeing so many errors. As mentioned before, one persistent problem is "technical dominance," where engineers alone shape AI design. How can we expect the ethical development of AI when only 12% of AI-focused companies had formal ethics oversight officers in 2025 (according to PwC), despite 82% listing ethics and strong ethical standards as a core priority in their annual reports?

Amazon's 2014 recruiting tool scandal illustrates this failure: the system systematically favored applications from male candidates due to biased training data. Although the company tried to fix the algorithm, the same biases kept resurfacing, and the project was eventually scrapped.

 

The AI Literacy Gap

Many leaders recognize the need for action but lack process awareness. AI is not just a tool but a paradigm shift that needs technical, ethical, and output oversight. This means responsible AI practices are not the default; they require sustained education and human oversight.

 

Data Accountability

Artificial Intelligence is fueled by data; its quality, provenance, and structure directly impact bias creation. "After all, data is the fuel of any AI system," Marihum emphasizes.

 

Model Maintenance Failures of AI Tools

AI models evolve unpredictably after deployment. Without vigilant maintenance, including ongoing bias assessment, new forms of harm inevitably emerge.

 

 

Developing Ethical AI Products: A Design Shift

 

IDEO’s 2023 publication "Ethical Systems by Design" advises designers to explicitly map foreseeable harms and run “prevention case” blueprints in customer journeys, aligning with Marihum’s preventive approach.

"We need to shift from the best experience design to 'what could go wrong design', Marihum insists. "First, you need to understand AI, then you need to understand what happens when AI goes wrong, and finally, you need to understand how society is influencing and being influenced in that situation."

Acknowledge that AI bias is inevitable. The World Economic Forum's 2025 "Responsible AI Playbook" insists that managing bias, not eliminating it, must be the realistic goal of ethical companies.

 

A Case Study in Ethical Artificial Intelligence: SurvivorAI

 

Ashley shared a powerful example from her work with CHEN, a global nonprofit providing online healing services to survivors of gender-based violence. The organization wanted to develop its first generative AI tool, but faced a critical question: how do you introduce AI into work with trauma survivors without causing additional harm?

 

The Challenge: Supporting Survivors Without Re-Traumatizing

Survivors of image-based sexual abuse face a devastating reality: intimate images shared without their consent can spread rapidly across social media platforms. Taking them down requires submitting formal requests to each platform, a process that forces survivors to repeatedly engage with their trauma.

Traditional support often meant survivors had to recount traumatic details to police, family members, and care workers, experiences that Ashley notes were "really bad experiences" that compounded their trauma. The question became: could AI help without making things worse?

 

Ethical AI Practices: Minimizing the Risk First

Rather than trying to solve everything at once, the team chose a carefully bounded use case: generating takedown request letters for social media platforms.

"To submit a take-down request, you only need to provide limited trauma history disclosure about the image itself and how it was used without your consent," Ashley explains. This made it lower-risk than other potential applications that would require survivors to share extensive trauma histories.

The team didn't just rely on policies; they built privacy into the system's architecture, creating a two-layered approach:

  • Layer 1 - Data Collection: A form collects information from survivors about the image and situation
  • Layer 2 - Data Filtering: Before anything reaches the AI, a filter strips out non-essential details, passing only what's legally required for the takedown request

This architectural approach means privacy isn't dependent on hoping the AI behaves correctly; it's structurally impossible for the AI to access unnecessary personal information.
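As a rough illustration of what such a filtering layer might look like (the field names and takedown schema below are hypothetical, not CHEN's actual implementation), a whitelist-based filter can guarantee that only the legally required fields ever reach the language model:

```python
# Hypothetical sketch of a two-layer privacy architecture:
# Layer 1 collects a full intake form; Layer 2 whitelists only the
# fields a takedown request legally requires before any model call.

# Fields the takedown letter actually needs (illustrative only)
REQUIRED_FIELDS = {"platform", "content_url", "consent_given", "request_type"}

def filter_for_takedown(intake_form: dict) -> dict:
    """Return only whitelisted fields; everything else never leaves Layer 1."""
    return {k: v for k, v in intake_form.items() if k in REQUIRED_FIELDS}

def build_takedown_prompt(filtered: dict) -> str:
    """Compose the prompt from filtered data only, so the model cannot
    see trauma history or other personal details by construction."""
    return (
        "Draft a formal takedown request letter.\n"
        f"Platform: {filtered['platform']}\n"
        f"Content URL: {filtered['content_url']}\n"
        f"Consent given: {filtered['consent_given']}\n"
        f"Request type: {filtered['request_type']}\n"
    )

# Usage: the raw form may contain sensitive narrative fields,
# but they are stripped before any model call.
raw_form = {
    "platform": "ExampleSocial",
    "content_url": "https://example.com/post/123",
    "consent_given": False,
    "request_type": "image-based abuse takedown",
    "survivor_story": "detailed trauma history",  # never passed on
}
prompt = build_takedown_prompt(filter_for_takedown(raw_form))
```

The key design point, mirroring the case study, is that the whitelist lives outside the model: privacy holds even if the model misbehaves.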

 

The Guardrail Challenge: When AI Gets Creative with Harm

The team implemented guardrails against re-traumatizing language, blocking terms like "sextortion" and "revenge porn." But they quickly discovered a problem unique to generative AI. "You can tell the AI, 'don't use these words, it's traumatizing language,'" Ashley explains. "But the problem with using a blacklist is that the AI is generative. It's constantly trying to find new ways to communicate, pulling from its wide knowledge, which includes Reddit."

The AI would find new, equally traumatizing ways to describe situations, ways the team hadn't anticipated. So even though the team did their best to minimize harm, it became very clear that ethical AI requires continuous monitoring, not one-time configuration.
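A minimal sketch of why a static blocklist falls short (the terms and function names here are illustrative, not the team's actual guardrails): exact-match filtering catches known phrases, but a generative model can paraphrase around it, which is why guardrails need to be paired with ongoing review.

```python
# Illustrative sketch: a static blocklist catches exact matches, but a
# generative model can rephrase the same harmful idea in words the list
# never anticipated, so outputs still need continuous human monitoring.

BLOCKED_TERMS = {"sextortion", "revenge porn"}  # known re-traumatizing language

def violates_blocklist(text: str) -> bool:
    """Exact-match check only -- misses novel paraphrases."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKED_TERMS)

def review_output(generated_letter: str) -> str:
    """The blocklist is a first pass; unfamiliar phrasings can only be
    caught downstream, which is why monitoring must be continuous."""
    if violates_blocklist(generated_letter):
        return "regenerate"          # known harmful term detected
    return "queue_for_human_review"  # novel paraphrases need human eyes

print(review_output("This letter concerns an incident of revenge porn."))              # -> regenerate
print(review_output("This letter concerns intimate images shared without consent."))   # -> queue_for_human_review
```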

The continuous improvement system includes:

  • Redress avenues: Clear paths for reporting when something goes wrong
  • Feedback loops: Survivor input integrated back into design
  • Accountability circles: Following up with people who were harmed

 

The Solution? Collaborative Human-AI Reasoning

Rather than AI replacing judgment, Ashley advocates for partnership. In her research, the team experimented with role prompts, giving the model three different roles: emotionally intelligent, trauma-informed, and survivor-centric. Small changes in role definition significantly affected the model's classification and reasoning.
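As a rough sketch of what such a role-prompt experiment can look like (the prompt wording and the llm_complete stub below are assumptions for illustration, not Ashley's actual setup), the only thing varied is the system role, and the outputs are compared across runs:

```python
# Illustrative role-prompt experiment: classify the same case under three
# different system roles and compare how role framing shifts the reasoning.

ROLES = {
    "emotionally_intelligent": "You are an emotionally intelligent assistant.",
    "trauma_informed": "You are a trauma-informed support specialist.",
    "survivor_centric": (
        "You respond from a survivor-centric perspective, "
        "prioritizing the survivor's stated needs and safety."
    ),
}

def llm_complete(system_prompt: str, user_prompt: str) -> str:
    """Placeholder for a real LLM call (e.g. a chat-completions API)."""
    return f"[model reply under role: {system_prompt[:40]}...]"

def classify_case(case_text: str) -> dict:
    """Run the identical case under each role and collect the outputs."""
    question = f"Classify the support needs in this case and explain your reasoning:\n{case_text}"
    return {name: llm_complete(prompt, question) for name, prompt in ROLES.items()}

results = classify_case("A survivor reports an intimate image shared without consent.")
for role, output in results.items():
    print(role, "->", output)  # small role changes can shift classification and reasoning
```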

So, what's the solution to developing ethical AI? We should be working toward collaborative human-AI reasoning, where AI surfaces patterns quickly and humans work hand-in-hand with it, case by case, bringing the context, ethics, and lived experience needed to interpret those patterns responsibly. "Instead of AI automating decisions or replacing judgment, we're using AI to extend what's possible when paired with care."

 

 

Key Lessons for Entrepreneurs

 

The SurvivorAI case study offers a blueprint for ethical AI development:

  1. Start narrow: Choose low-risk use cases first, prove safety, then expand
  2. Design for privacy architecturally: Don't rely on AI behavior; build structural safeguards
  3. Anticipate creative harm: Generative AI will find new ways to cause problems you didn't foresee
  4. Define values-based metrics: Technical accuracy isn't enough; measure what actually matters to your users
  5. Test participatively: Involve your community, but only when it's safe enough to try
  6. Commit to continuous improvement: Ethics isn't a launch checklist; it's an ongoing practice

For ethical AI companies interested in responsible innovation, especially those serving vulnerable populations, SurvivorAI demonstrates that ethical AI is both possible and practical. It does, however, require intentional design, continuous vigilance, and a genuine commitment to user well-being over speed to market.

 

Conclusion: Moving Forward in Responsible AI

 

The future of AI needs more women shaping development, more diverse voices in decision-making, and more commitment to ethics at the center of innovation. That future starts with education, engagement, and action.

AI will not wait for perfect policy or perfect teams; building it responsibly is concrete, urgent work. The way to get started is to grow your practical literacy and start testing responsibly. Sign up for the SheAI community (completely free) to stay in the loop on the latest AI developments, find collaborators and colleagues, and be the first to know about the expert events and panels that will keep you at the top of your game.

 

 

Resources for Continued Learning

 

Communities:

  • SheAI: AI education platform for women
  • The Bloom: Community for social impact professionals
  • EAAMO: Participatory research spaces for ethical AI experimentation
  • labcoexist: Tech education hub for AI development with a critical, transdisciplinary approach


 

Frequently Asked Questions About Ethical AI

 

What is ethical AI?

Ethical AI refers to developing artificial intelligence systems that prioritize fairness, transparency, accountability, and human values. It's not a one-time audit but an ongoing process throughout an AI system's lifecycle. As our experts emphasize, ethical AI actively reduces risks and adverse outcomes for individuals and society while optimizing beneficial impacts.

 

Why does diversity matter in AI development?

Diverse teams consistently outperform homogeneous groups in building safer AI systems. Women comprise only 22% of AI talent globally and under 15% of senior executive positions, contributing to systemic bias. Research shows diverse teams identify potential harms earlier, consider broader use cases, and create more inclusive products.

 

What are the main ethical concerns with generative AI?

Generative AI introduces new challenges: hallucinations spreading misinformation at scale, psychological harms from conversational AI relationships, cultural biases inherited from training data like Reddit, erosion of critical thinking, and privacy violations. 

 

How do the US and EU approaches to AI regulation differ?

The EU uses an ex-ante (preventative) approach, with the comprehensive AI Act requiring risk assessments before deployment. The US takes an ex-post (reactive) approach with decentralized, sector-specific regulations and voluntary guidance. The US's lighter-touch regulation fosters innovation but focuses much less on mitigating harms to people, while Europe prioritizes safety over speed.

 

What is the biggest mistake organizations make with AI ethics?

The biggest mistake organizations make with AI ethics is allowing engineers to lead AI development with often biased data and without meaningful input from ethicists, social scientists, designers, and domain experts. This leads to models that reflect narrow worldviews, amplify biases, and disregard broader ethical and social risks, as seen in the 2014 Amazon recruiting tool that discriminated against women due to a lack of HR oversight.

 

How can small organizations implement ethical AI without big budgets?

Start narrow with one low-risk use case and expand carefully. Leverage existing frameworks rather than creating from scratch. Build small interdisciplinary teams with diverse perspectives. Focus on participatory design involving your community. Prioritize values-based metrics that matter to your users. 

 

What is "harm literacy" and why does it matter?

Harm literacy means understanding how AI fails and what social factors amplify failures, beyond just technical knowledge. It requires three layers: understanding how AI works, what happens when it goes wrong, and how society influences those situations. This enables designers to shift from "best experience design" to proactively identifying potential failures before launch.

 

Can AI ever be completely unbiased?

No. It's unrealistic to expect an AI product to reach the market entirely free of bias. The goal isn't perfection but transparent communication about limitations, clear roadmaps for expanding inclusivity, continuous monitoring as models evolve, and mechanisms for addressing harm. Ethical AI requires acknowledging that bias exists and actively working to reduce it over time.

 

What is collaborative human-AI reasoning?

Collaborative human-AI reasoning is a partnership where AI systems rapidly surface patterns and process vast amounts of data, while humans bring essential context, ethical judgment, and lived experience. Instead of replacing human decision-making, the most effective approach involves AI and humans working hand-in-hand, combining AI's speed and cognitive processing power with human creativity, insight, and ethical responsibility. This collaboration enhances productivity, supports better decisions, and enables greater breakthroughs.

 

What's the difference between AI governance and AI ethics?

AI ethics provides the moral principles: fairness, transparency, accountability, privacy, and safety. AI governance operationalizes those principles through organizational structures, policies, processes, and oversight mechanisms. Ethics provides the "why" and "what"; governance provides the "how." Experts warn that many frameworks excel at describing ethics but are "lighter on the how," which is where practical governance becomes essential.

 
