AI Governance: Ethical Frameworks for Human-Centered Artificial Intelligence in 2025
Jun 18, 2025
Artificial intelligence is transforming nearly every industry - from medical diagnostics and loan approvals to predictive policing and workforce automation. As these technologies grow in capability and reach, questions of power, control, and ethical accountability become central. Who defines the ethical boundaries of AI? Whose values shape the code that governs decision-making?
AI governance is the practice of developing and enforcing frameworks to guide the ethical development and use of artificial intelligence. "We can't automate our way out of inequality," warns AI ethics pioneer Timnit Gebru. "The problems AI is amplifying are human problems." Her words capture the core reason governance matters: technology reflects, and often magnifies, societal inequalities and the biases of its creators.
Core Principles of AI Governance
Now let's get to the core of the issue. AI systems should be designed to assist, not replace, human judgment, especially in critical decisions. Stakeholders must understand how decisions are made by AI, particularly in sensitive areas like criminal justice or healthcare, where AI transparency is crucial. Accountability follows naturally - clear structures must exist to assign responsibility and offer redress when harm occurs. This is what we mean when referring to human-centered AI governance: prioritizing the dignity, agency, and well-being of individuals.
Stakeholders in AI Governance
AI governance involves a dynamic interplay of actors. Technology companies hold immense influence through control over foundational models and data. As Giovanna Jaramillo-Gutierrez notes, "The power is really in the hands of a few companies developing the systems and the resources that go with it."
Governments create enforceable regulatory frameworks. Academia offers independent research and insight. Civil society organizations advocate for transparency and equity. And users themselves shape AI's evolution through behavior and choices.
Meet the Leaders in AI Governance
Petruta Pirvan: The Legal Framework Architect
With 17 years of experience in IT law and data protection, Petruta has been at the forefront of implementing the EU AI Act. As founder of EU Digital Partners, she provides educational and fractional Data Protection Officer (DPO) services, helping organizations navigate the complex landscape of European tech regulation. Petruta emphasizes, "The regulation aims to stimulate the uptake of responsible AI, trustworthy AI."
Dr. Giovanna Jaramillo-Gutierrez: The Ethical Implementation Expert
A molecular biologist turned data scientist, Giovanna brings 15 years of experience from outbreak response in humanitarian settings to AI governance. Through her consultancy, Milan Associates, and her work with the Center for AI and Digital Policy, she focuses on algorithmic audits and ensuring AI systems serve the public interest while aligning with the UN Sustainable Development Goals.
As she puts it, "AI governance is not only about the AI - it's about data protection, it's about cybersecurity, it's about the data science team."
Nicky Verd: The Human-Centered Futurist
Based in Johannesburg, South Africa, Nicky is a digital futurist and author of "Disrupt Yourself or Be Disrupted." She champions the human aspect of AI development, emphasizing that "technology without humanity is incomplete" and urging people to ask, "What is AI learning from me as an individual?"
Nicky is also a TEDx speaker who highlights the changes AI is bringing to the human experience; her latest talk explores how embracing AI can empower your career.
Comparing Global AI Regulatory Models
AI regulation varies significantly by region, reflecting different legal traditions, cultural values, and geopolitical goals. These differences influence not only how AI is governed but also how it's built, deployed, and monetized.
European Union: Risk-Based and Rights-Driven
The EU AI Act is the most comprehensive AI regulation to date, adopting a tiered, risk-based approach that classifies AI applications from minimal to unacceptable risk. High-risk systems, such as those used in law enforcement, education, and healthcare, are subject to strict oversight, including AI transparency requirements and post-market monitoring.
Petruta, a legal expert deeply involved in the Act's implementation, describes its goal as stimulating the uptake of responsible, trustworthy AI. The EU's approach is grounded in fundamental rights and is designed to proactively prevent harm while fostering innovation within safe boundaries.
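To make this tiered structure concrete, here is a deliberately simplified Python sketch of how an internal compliance tool might encode risk tiers as a lookup. The four tier names follow the Act, but the use-case mapping, abridged obligation summaries, and conservative default below are illustrative assumptions, not legal guidance.

```python
# Simplified sketch of the EU AI Act's tiered, risk-based model as a
# lookup table an internal compliance tool might use. Tier names follow
# the Act; the mappings and obligation summaries are illustrative only.

from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict oversight, transparency, post-market monitoring"
    LIMITED = "transparency duties (e.g., disclose AI interaction)"
    MINIMAL = "no specific obligations"

# Hypothetical mapping from use-case labels to tiers; a real tool would
# encode the Act's annexes precisely, with legal review.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "exam_grading": RiskTier.HIGH,
    "medical_triage": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Treat unknown systems as HIGH risk pending review (a conservative default)."""
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)

if __name__ == "__main__":
    for case in ("medical_triage", "spam_filter", "unreviewed_new_system"):
        tier = classify(case)
        print(f"{case}: {tier.name} -> {tier.value}")
```

The conservative default matters: under a risk-based regime, the safest engineering posture is to assume obligations apply until a legal review says otherwise.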
United States: Decentralized and Market-Led
The U.S. lacks a unified federal AI law, instead relying on a patchwork of sector-specific regulations and guidance from agencies like the FTC, FDA, and NIST. Executive orders have called for the development of AI principles, but compliance is largely voluntary.
This decentralized model encourages rapid innovation and market competition but often results in inconsistent protections. It also places more responsibility on private companies to self-regulate, which can lead to gaps in accountability.
China: Centralized, State-Driven Innovation
China’s AI governance strategy blends rapid innovation with strong central control. The country has introduced regulations targeting specific domains like deep synthesis and algorithmic recommendation, framed around goals such as social stability and ideological control.
China’s government not only regulates AI but actively shapes its direction through industrial policy, funding, and public-private partnerships. This top-down model enables swift implementation but raises concerns about surveillance and human rights.
Global South: Adaptive but Resource-Constrained
In many countries across Africa, Latin America, and Southeast Asia, AI governance is still emerging. These regions often face the paradox of low regulatory capacity but high exposure to imported AI systems designed without local context.
Despite these challenges, there are promising signs of adaptive governance. Initiatives in countries like Kenya, Brazil, and India are experimenting with ethical AI standards, open data policies, and regional coalitions. However, implementation remains uneven, and international support will be crucial to ensure these frameworks are not just aspirational.
Comparing these models reveals a fragmented global landscape. Companies building AI systems must navigate a maze of legal obligations and ethical expectations, tailoring their products for different jurisdictions. As such, governance doesn’t just shape policy - it shapes the architecture of AI itself.
Emerging Challenges in AI Governance
Generative AI presents new risks due to its scale and dual-use potential. Few organizations have the resources to develop and monitor such systems, further centralizing power. These models can support creativity just as easily as they can generate disinformation.
The alignment and control problems - how to ensure AI systems remain safe and human-aligned - require global cooperation. Ethical oversight must also address emerging human rights concerns, including surveillance, discrimination, and erosion of individual dignity.
Sector-Specific Challenges in AI Governance
In healthcare, AI offers improved diagnostics and treatment, but can perpetuate inequities without diverse training data and privacy safeguards. Clinical validation and human oversight remain essential.
In criminal justice, risk assessment algorithms may reinforce historical bias. Transparent methodologies and adherence to due process are key to preserving constitutional rights.
In finance, AI-driven decisions must be explainable and lawful. Responsible systems should prevent systemic risks and comply with anti-discrimination standards.
In employment, hiring algorithms must ensure equity and explainability. Nicky reminds us, "AI is the only technology that has to learn from humans. All previous technologies, humans had to learn them."
Best Practices for Organizational AI Governance
Companies must begin with values-based policies. Effective governance requires interdisciplinary collaboration among legal experts, technologists, and ethicists. As Giovanna stresses, governance extends beyond the model itself to data protection, cybersecurity, and the data science team behind it. Regular audits help uncover bias or performance drift.
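As a concrete illustration of what a recurring bias audit can compute, here is a minimal sketch in plain Python. The metric (a selection-rate comparison against the "four-fifths" threshold borrowed from US employment law) and all of the data are illustrative assumptions; real audits combine several fairness metrics with human review.

```python
# Minimal bias-audit sketch: compare positive-outcome rates across groups.
# All data below is illustrative; in practice these records would come
# from logged model decisions joined with carefully governed demographics.

from collections import defaultdict

def selection_rates(decisions):
    """Compute the positive-outcome rate per group.

    `decisions` is an iterable of (group, outcome) pairs, where outcome
    is 1 for a favorable decision (e.g., loan approved) and 0 otherwise.
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest selection rate.

    A common (imperfect) audit heuristic flags ratios below 0.8 --
    the 'four-fifths rule' borrowed from US employment law.
    """
    return min(rates.values()) / max(rates.values())

if __name__ == "__main__":
    # Hypothetical audit log of (group, decision) pairs.
    log = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
           ("B", 1), ("B", 0), ("B", 0), ("B", 0)]
    rates = selection_rates(log)          # {'A': 0.75, 'B': 0.25}
    ratio = disparate_impact_ratio(rates)  # 0.33
    print(rates)
    print(f"disparate impact: {ratio:.2f}")
    if ratio < 0.8:
        print("Audit flag: outcomes differ sharply across groups.")
```

A single ratio is only a screening signal; audit teams typically pair it with error-rate comparisons across groups and qualitative review before drawing conclusions.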
Third-party vendors should be held to the same ethical standards. Feedback loops must be built into systems, allowing users to raise concerns and shape AI development.
Participation and Technology-Enabled Oversight
Innovative policy tools like algorithmic impact assessments and regulatory sandboxes provide space for safe experimentation. Compliance tools and bias-detection systems enhance transparency and accountability.
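As one example of what such compliance tooling can look like in practice, here is a small sketch of a widely used monitoring check, the Population Stability Index (PSI), which flags when a model's live score distribution drifts away from the distribution it was validated on. The thresholds and sample data are rule-of-thumb assumptions, not values mandated by any regulator.

```python
# Minimal drift-check sketch: Population Stability Index (PSI) between a
# model's validation-time score distribution and its live distribution.
# Thresholds and data are illustrative assumptions, not regulatory values.

import math

def psi(expected, actual, bins=10):
    """PSI between two samples of model scores in [0, 1].

    Scores are bucketed into equal-width bins; a small floor avoids
    division by zero in sparsely populated bins.
    """
    def proportions(sample):
        counts = [0] * bins
        for x in sample:
            counts[min(int(x * bins), bins - 1)] += 1
        return [max(c / len(sample), 1e-6) for c in counts]

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

if __name__ == "__main__":
    # Hypothetical score samples: validation-time vs. this month's traffic.
    baseline = [i / 200 for i in range(200)]        # roughly uniform
    live = [0.3 + i / 400 for i in range(200)]      # shifted upward
    value = psi(baseline, live)
    print(f"PSI = {value:.3f}")
    # Common rule of thumb: < 0.1 stable, 0.1-0.25 monitor, > 0.25 investigate.
    if value > 0.25:
        print("Drift flag: score distribution has shifted; trigger review.")
```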
However, when speaking about AI governance, we’re not only referring to governments, institutions, and organizations, but also to the use of AI by individuals. From civic education to consultation processes, we as individuals play a meaningful role. Nicky encourages us to ask ourselves, “What is AI learning from me?”
Rethinking AI Governance: Public Consent, Critical Design, and the Politics of Technology
AI is not neutral - it encodes the priorities, assumptions, and ideologies of those who build and deploy it. As such, AI governance must be seen not only as a technical or legal issue but as a political one. Governance frameworks should explicitly consider who benefits from AI deployment, who is at risk, and whether communities have consented to these technologies.
Public participation must go beyond consultation. It should involve active, structured engagement in determining where, when, and how AI systems are used. Particularly in sectors like education, healthcare, and justice, affected communities should have the ability to question the deployment of AI and propose alternatives.
Organizations and regulators must ask not just "Can we automate this?" but "Should we?" - acknowledging that sometimes the most ethical choice may be not to use AI at all. Interdisciplinary and diverse governance bodies are essential to ensure these decisions are made thoughtfully, inclusively, and with moral clarity.
Human Rights at the Center: Lessons from the UN on AI Procurement and Deployment
In May 2025, the United Nations released a report emphasizing that AI is no longer just a technological issue; it's a human rights imperative. The report warns that artificial intelligence is already affecting nearly every human right, from privacy and equality to freedom of expression and the right to a healthy environment. Yet states and businesses alike are deploying AI systems without adequate safeguards, transparency, or stakeholder consultation.
The UN calls on states to close legal gaps, ensure transparency in procurement, and prevent bias, especially in public services and justice. It also urges businesses to conduct thorough human rights due diligence across the AI lifecycle, from design to deployment. Practices such as real-time facial recognition and social scoring are flagged as fundamentally incompatible with international human rights and should be prohibited.
This call for enforceable, inclusive, and rights-based AI governance aligns closely with the themes explored throughout this article. It underscores the need for independent oversight, transparent processes, and accountability at every stage, not just technical compliance, but a commitment to human dignity.
A Future-Oriented Vision for AI Governance
AI governance must confront global and societal inequalities. Nicky issues a stark reminder: "Not everyone will be saved... Not everyone is going to be carried along in this AI journey." Inclusive strategies must prioritize vulnerable populations, broaden digital access, and prepare for labor market transitions.
This includes reskilling, education, and maintaining systems that allow human choice and dignity. As technology advances, ethical AI must be shaped by participatory, adaptive, and values-driven governance.
Conclusion: The Ethical Imperative of AI Governance
AI governance is a dynamic and evolving challenge. It calls on governments, organizations, civil society, and individuals to ensure that artificial intelligence aligns with human values. The decisions we make now will determine whether AI enhances freedom and justice or undermines them.
This article was inspired by the online event on AI Governance hosted by SheAI and The Bloom in June 2025.
FAQs
What are the main principles of AI governance?
Human-centricity, transparency, accountability, fairness, privacy, safety, democratic participation, and adaptive oversight.
How is AI governed in the EU vs. the U.S.?
The EU applies a risk-based regulatory approach (AI Act), while the U.S. follows a more fragmented, sectoral strategy.
What is algorithmic bias, and how can it be prevented?
Algorithmic bias occurs when AI systems produce unfair outcomes. It can be mitigated through diverse training data, regular audits, and human oversight.
What role do individuals play in AI governance?
They shape AI systems through data, market behavior, and participation in policy-making.
What are the risks of generative AI?
Generative AI poses dual-use risks and misinformation threats, and it concentrates power among a few tech giants.