As artificial intelligence becomes more powerful, the question is no longer whether we can build it, but how we build it responsibly – through ethical AI. AI is no longer a distant promise; it is already shaping decisions that affect our daily lives. From the credit we receive to the jobs we apply for, AI systems now decide who gets opportunities and who does not.
As their influence grows, so does a question that defines our relationship with technology:
Is ethical AI a contradiction – or is it a competitive necessity?
This question stood at the heart of my keynote at Moin.ai’s KI-Sprechstunde. In this article, I revisit its core message: trust in AI is not a moral luxury – it is a measurable business asset.
When Algorithms Decide Who Lives and Who Dies
When we talk about AI ethics, we often imagine futuristic scenarios – self-driving cars or sentient chatbots. Yet some of the most revealing stories are the ones that attract little attention.
In 2007, Spain launched VioGén, a digital platform to assess the risk of gender-based violence. It used statistical models to help police decide which victims should receive protection. Over the years, those algorithms influenced thousands of real decisions.
In 2019, a tragic failure exposed its limits: the system rated one case as low risk. Days later, two children were killed by their father. The algorithm had not considered a known pattern of “proxy revenge.”
VioGén was not AI in today’s sense, and that is precisely the point. It shows how easily we delegate life-and-death decisions to algorithms – long before they become “intelligent.”
The same pattern exists elsewhere. In the U.S., the COMPAS algorithm still supports judges in predicting recidivism, despite evidence of racial bias. IBM’s Watson for Oncology once made unsafe treatment recommendations, eroding trust in AI-driven healthcare.
These examples are not about bad technology. They reveal something deeper: how quickly we hand over responsibility to systems we barely understand – and how urgently we must discuss what ethical AI means in practice.
The Perception of Risk: Absolute or Relative?
Every technology changes how we perceive risk. With AI, the shift is especially visible.
When an autonomous vehicle kills a pedestrian, the outcry is immediate: “AI must not fail.” Yet human drivers cause more than a million road deaths worldwide each year.
So do we expect AI to be flawless – or simply better than us?
That is the first key message: AI will fail, just as humans do. The difference lies in how we judge its failures.
The second message is that risks are not directly comparable.
A human driver might act under the influence – something an AI would never do. Yet a human, even impaired, would recognize a truck that some autonomous systems once failed to detect.
The same holds true for systems like VioGén. It failed tragically in one case – but overall, it improved protection for thousands of women who might otherwise have received none.
So the question is not whether AI can eliminate risk, but whether it helps society reduce the overall harm and manage risk more transparently. Ethical AI does not promise perfection. It enables progress we can monitor, question, and improve.
The Negotiation of Trust
Trust in AI is not a feature – it is a negotiation.

It cannot be programmed into code or guaranteed by compliance certificates. Trust emerges where people, organizations, and societies agree on how technology should act – and what level of uncertainty they are willing to accept.
This negotiation is driven by six interdependent forces: regulation, understanding, trial and error, normalization, risk and reward, and accountability. As I told the audience during the keynote:
None of these forces works alone. They are never in balance – and they must be constantly renegotiated, in business, in politics, and in society.
That is the essence of trust in technology. It is not a static property of a system, but a living process, continuously shaped by context, culture, and expectation. These forces are in constant motion, influencing how we negotiate our relationship with technology. Trust, therefore, is never achieved once and for all – it must be earned, adjusted, and renewed continuously. That is what makes ethical AI a process, not a product.
The Business Imperative: Why Ethical AI Defines Success
At Verged, we see trust not as a philosophical concept but as the foundation of AI adoption.
A Deloitte study found that companies investing in AI transparency and fairness outperform competitors and reduce operational risk. Conversely, those that implement AI without addressing trust face low adoption rates, employee resistance, and regulatory scrutiny.
Meanwhile, a 2025 MIT study found that 95 percent of GenAI pilot projects fail to deliver measurable impact – not due to technical issues but because users, customers, or leaders simply did not trust them.
Trust determines whether AI delivers measurable value. It is the bridge between innovation and business results.
That is why at Verged, we help organizations turn ethical principles into operational success – because trust means different things to different stakeholders.
For customers, trust means believing that AI is used fairly and to their benefit. They must feel that every recommendation, every interaction, serves them rather than exploits their data.
For employees, trust means knowing that AI supports their work rather than replacing it – that automation enhances human capabilities instead of eroding them.
And for partners, trust means confidence that collaboration happens responsibly, securely, and ethically – that the organization’s use of AI reflects shared values, not just efficiency goals.
Building and maintaining this multifaceted trust is not a side project. It is the new foundation of competitive strategy and the true differentiator of ethical AI in business. See how we can help your organization.
Building Trust in Ethical AI: From Principles to Action
Trust in AI can be designed – if approached through clear principles and ongoing practice. At Verged, we focus on four areas where organizations can act decisively: Explainability, Fairness, Robustness, and Responsibility.
Explainability and Transparency
Transparency is not a technical option – it is the foundation of trust.
AI systems must be understandable, explainable, and open in their communication – not perfect, but comprehensible. The term Explainable AI (XAI) was popularized by a DARPA research program launched in 2016, describing the ability of AI systems to explain their decisions in a way humans can follow. Where early models were often black boxes, today the goal is to build systems that are both powerful and interpretable – especially in sensitive areas like finance or healthcare.
Organizations should make explainability a standard. Every AI-assisted decision should, in principle, be traceable. This means documenting which factors influence outcomes and designing processes that make them reviewable later. Transparency also means showing sources – when a chatbot provides an answer, it should display where the data came from. And perhaps most importantly, systems should be honest about their limits. People are more likely to trust technology that acknowledges uncertainty than one that pretends to know everything.
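To make this tangible, here is a minimal sketch of what traceability can look like in practice. It is an illustration, not the keynote’s own tooling: the feature names and data are invented, and scikit-learn stands in for whatever stack an organization actually uses. The point is to document which factors drive outcomes and to report the model’s confidence rather than only its verdict.

```python
# Illustrative sketch: feature names and data are invented for the example.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
feature_names = ["income", "tenure", "age", "past_defaults", "region_code"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Which factors drive outcomes overall? Document this alongside the model.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: pair[1], reverse=True):
    print(f"{name}: {score:.3f}")

# Be honest about uncertainty: report the confidence, not just the label.
proba = model.predict_proba(X_test[:1])[0]
print(f"approved with probability {proba[1]:.2f}")
```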
Finally, explainability must become a communication skill. Teams need to learn to translate technical processes into everyday language – for customers, regulators, and colleagues alike. Clarity builds confidence.
Fairness and Bias Mitigation
Fairness is not automatic – it is the result of deliberate design, testing, and communication.
When we talk about trust in AI, we are not only asking whether decisions are explainable but whether they are just. Fairness is the second foundation of trust – and often the hardest to achieve.
Organizations should define what fairness means in their context – equal treatment, equal opportunity, or balanced outcomes. Once these criteria are clear, they can be measured. Tools like IBM’s AI Fairness 360 or Microsoft’s Fairlearn can help detect and reduce bias, but fairness is not a one-time audit. It must be monitored continuously, just like quality assurance.
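To illustrate how such criteria become measurable, the sketch below uses Fairlearn to break model performance down by group. The data and the sensitive attribute are placeholders; in practice these numbers would come from your own evaluation sets and be tracked over time, like any other quality metric.

```python
# Minimal sketch: measuring group fairness with Fairlearn. Data is illustrative.
from fairlearn.metrics import MetricFrame, selection_rate, demographic_parity_difference
from sklearn.metrics import accuracy_score

# y_true, y_pred and the sensitive attribute (here: gender) would come from
# your own evaluation data; these values are placeholders.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
gender = ["f", "f", "m", "f", "m", "m", "f", "m"]

# Break accuracy and selection rate down by group.
frame = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=gender,
)
print(frame.by_group)       # per-group metrics
print(frame.difference())   # largest gap between groups per metric

# A single headline number: the gap in selection rates between groups.
print(demographic_parity_difference(y_true, y_pred, sensitive_features=gender))
```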
Equally vital is maintaining human control. AI can prepare decisions but should not make them alone. Keeping a “human in the loop” ensures that context, empathy, and ethical reasoning remain part of every process. And fairness must also extend to communication: people affected by AI – whether candidates, customers, or employees – deserve to know when and how it is used. Transparency is not just compliance; it is the foundation of an honest relationship between humans and systems.
Robustness and Reliability
Trust also depends on reliability.
AI must function not only under ideal conditions but when data, environments, or objectives change – which they always do. Robustness means stability through uncertainty. Systems should be designed to adapt while maintaining integrity, to handle new inputs without breaking, and to be retrained responsibly when necessary.
In short: robust AI earns trust by performing reliably when reality changes.
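As one small, practical building block, a team might routinely compare what the model sees in production with what it was trained on. The sketch below is only an assumption of how such a check could start, using a simple two-sample test on one numeric feature; real monitoring would cover many features and feed alerts into an operational process.

```python
# Minimal sketch: flag distribution drift between training data and live data
# with a two-sample Kolmogorov-Smirnov test. The data here is simulated.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_values = rng.normal(loc=0.0, scale=1.0, size=5000)  # what the model saw
live_values = rng.normal(loc=0.4, scale=1.2, size=1000)      # what it sees today

statistic, p_value = ks_2samp(training_values, live_values)
if p_value < 0.01:
    # In practice: raise an alert, log the event, and trigger review or retraining.
    print(f"Input drift detected (KS={statistic:.3f}, p={p_value:.4f}), review the model")
else:
    print("No significant drift detected")
```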
Responsibility and Ownership
Ultimately, trust in AI is always about responsibility.
Technology alone cannot generate trust – people do. Every AI system needs a clear owner, someone visibly accountable for its use and outcomes. Responsibility cannot be anonymous; it needs a name, a role, and a face.
Ethics must also be embedded in governance structures – through committees, external advisors, and review processes that turn principles into verifiable practice. Real accountability means traceability: decisions, validations, and outcomes must be documented.
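What such traceability can look like at its simplest is sketched below: a plain decision record with a named owner, appended to an audit log. The fields and names are illustrative, not a prescribed standard.

```python
# Illustrative sketch of a decision record; field names are invented examples.
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone
import json

@dataclass
class DecisionRecord:
    system: str          # which AI system produced the output
    owner: str           # the named person accountable for it
    input_summary: str   # what the decision was based on
    output: str          # what the system recommended
    human_reviewer: str  # who confirmed or overrode it
    final_decision: str  # what was actually done
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def log_decision(record: DecisionRecord, path: str = "decisions.jsonl") -> None:
    """Append the record to a simple audit log, one JSON object per line."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_decision(DecisionRecord(
    system="loan-scoring-v2",
    owner="Head of Credit Risk",
    input_summary="applicant features x1..x5, model score 0.71",
    output="recommend approval",
    human_reviewer="credit officer J. Doe",
    final_decision="approved",
))
```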
And beyond structures, responsibility must become part of the organization’s culture. When people at every level take ownership for how AI is used, trust no longer depends on rules – it grows from conviction.
Through these principles and our consulting services, we enable organizations not only to use AI responsibly but also to turn trust into tangible business value.
Conclusion: Trust and Ethical AI are the True Measure of Innovation
AI changes everything – and that makes leadership more important than ever.
As algorithms take over more decisions, leaders must take back ownership: to define how AI acts, what values it serves, and how it strengthens human relationships rather than diminishing them.
When we look back at the dimensions of trust we discussed, one truth becomes clear: behind every aspect of trust lies an act of leadership.
Transparency is not a technical feature – it requires leaders who create clarity.
Fairness does not emerge from algorithms – it comes from leaders who set standards.
Robustness is not built in code alone – it takes leadership to design safety.
And responsibility cannot be delegated to machines – it needs leaders who show integrity.
Trust does not arise by itself. It grows when leaders actively shape it – as creators, as communicators, as role models.
Because in the end, leadership is the constant that turns what is technologically possible into something profoundly human: trust.
Without trust, there is no adoption.
Without adoption, there is no value.
And without value, there is no future for AI.
But with trust, AI becomes more than technology – it becomes a catalyst for growth, resilience, and human progress.
Because in the end, AI does not define the world we live in – we do.
Check out the first edition of our AI Briefing here!

