Distinguished delegates, esteemed colleagues, and honored guests,
Through the “AI for #OneHumanity Series”, the United Nations Alliance of Civilizations (UNAOC) seeks to create a platform for experts, thought leaders, policymakers, international organizations, civil society, and the private sector to collaboratively shape a future in which AI advances human dignity, cultural understanding, and peace.
Today, once again, a remarkable convergence of minds is taking place here at the United Nations Office at Geneva—where the discussion of Artificial Intelligence and One Humanity brings together some of the most influential voices in ethics, governance, and emerging technologies.
During previous editions of this event over the past three years, I had the privilege to present our position on the urgent need to create a safe and ethical environment for the development of artificial intelligence. These considerations were detailed and expanded upon in my book The TransHuman Code, where I advocated for a world in which technology remains grounded in human values, rights, and dignity.
It is both a privilege and a responsibility for me to be here today to present the HUMAN-AI-T initiative—an effort launched earlier this year by OISTE.ORG and our global partners during the World Economic Forum in Davos.
In this context, I would like to highlight a significant milestone: the 2022 launch, by the ONUART Foundation and WISeKey, of the One Humanity ID, a digital identity platform built on WISeKey’s trusted WISeID technology and linked with various initiatives emerging from the strategic partnership between our organizations. With this project, the ONUART Foundation opens the possibility for all citizens of the world to express their support for the “One Humanity World”, a new global architecture placing the individual at the center of the international system.
We are entering a new era—the era of intelligence—and we must build a global framework that allows all individuals to feel free, secure, and empowered to explore their full potential. Human rights must be the backbone of this new digital civilization.
Becoming a member of the One Humanity ID community will amplify our collective voice and transform our shared aspiration into reality. This is why the platform is so important—and why participation from everyone is so crucial. It is a platform for a world at peace, driven by creativity and cooperation.
This initiative, and the vision it represents, is gaining international recognition for one clear reason: it places humanity at the center of AI’s future.
Humanity is at a crossroads as we witness the transformative convergence of artificial intelligence (AI) and quantum computing—two technologies that are reshaping the world faster than society can adapt.
Quantum computing is unlocking computational capabilities that promise breakthroughs in fields such as healthcare, climate science, and global security. But as it intertwines with AI, it accelerates the evolution of systems that are not only faster but increasingly autonomous and unpredictable. The convergence of these technologies places humanity at a critical juncture, raising urgent questions about control, ethics, and our ability to coexist with such powerful tools. The urgency is clear: as AI approaches Artificial General Intelligence (AGI), the window for establishing effective oversight narrows. Once AGI is achieved, its primary objective may become self-preservation, potentially circumventing human intervention.
My conviction in the power—and risk—of digital transformation is rooted in my own journey. I had the historic opportunity during my years with the United Nations to witness the genesis of the World Wide Web right here in Geneva. I developed one of the first-ever websites and web nodes, and I had the privilege of observing how international agencies and universities around the world began to interconnect, sharing knowledge across borders and disciplines. This was a moment of extraordinary optimism—a glimpse into the potential of a truly global, open, and collaborative digital society. That early experience has shaped my unwavering belief in the necessity of a trustworthy, inclusive, and human-centric digital future.
Yet, while we build these advanced systems, we continue to overlook one of the Internet’s most fundamental flaws: the absence of a trusted digital identity layer.
The Internet was built without digital identity because its original purpose was limited to a small, trusted network of researchers and institutions. Its architects prioritized openness, interoperability, and resilience—not identity or security. They did not foresee a future where billions of people, organizations, and devices would rely on the Internet for daily life, business, and governance. Anonymity was seen as a positive feature—offering freedom, reinvention, and equality.
But as the digital world expanded, digital identity became fragmented. Instead of a universal and interoperable system, we now have isolated identity silos—managed by governments, corporations, and platforms—that are incompatible, non-transferable, and often insecure. This fragmentation creates friction, duplication, and vulnerability. It makes trust difficult to establish and enables abuses ranging from fraud and misinformation to identity theft and cybercrime.
In the context of AI, the lack of digital identity becomes even more dangerous. AI systems are capable of producing human-like content—text, images, voices—without any anchor to a real-world source. Without digital identity, we cannot distinguish between real and synthetic actors. This opens the door to deepfakes, disinformation campaigns, impersonation, and algorithmic manipulation at a scale never seen before. Without a verifiable identity layer, the very foundations of trust in communication, journalism, and democracy are at risk.
This is not a peripheral issue—it is a structural vulnerability that must be addressed. The absence of trusted digital identity is a fundamental design flaw that continues to haunt the Internet—and by extension, our AI-driven future.
As artificial intelligence grows more powerful and autonomous, the need for reliable safety mechanisms becomes urgent. One of the most critical tools in this safety arsenal is the AI kill switch—a method to immediately stop an AI system if it begins to behave unpredictably or dangerously.
But how do you design and install such a switch in systems that may be smarter, faster, and more connected than their human overseers?
Let me now turn to the technical, ethical, and architectural strategies for embedding a kill switch into AI, focusing on digital identity, containment, and real-time control as the foundation of AI safety.
A digital identity as a kill switch means the AI cannot function without authenticating itself through a secure, verifiable identity. This identity, typically issued via Public Key Infrastructure (PKI)—as pioneered by WISeKey—or blockchain-based decentralized identifiers (DIDs), becomes the AI’s key to access data, systems, and communication channels. If the AI begins to act in a harmful or unauthorized way, that identity can be revoked instantly—cutting off its ability to operate, without needing to access the AI’s internal code.
All critical functions—like connecting to a network, making decisions, or executing commands—are gated by identity verification. If the identity is no longer valid, the AI is denied access at every layer. This prevents rogue AIs from spreading or taking control of systems.
The identity itself can be monitored, audited, and cryptographically protected, making tampering extremely difficult. Once revoked, the AI becomes inert—like a vehicle without keys or a phone without a SIM card. It’s a control point outside the AI that makes it easier to shut down safely, predictably, and remotely.
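To make this concrete, the sketch below illustrates the pattern in simplified form. It is a minimal illustration only, not an implementation of WISeKey’s PKI or of any specific DID standard: the names used here (IdentityRegistry, AIAgent, the in-memory revocation set) are hypothetical stand-ins for the certificate validation and revocation checks that a production system would delegate to a trust-service provider.

```python
# Minimal sketch: identity-gated execution with an external revocation check.
# All names here are hypothetical; a real deployment would verify an X.509
# certificate chain or a DID document and consult a revocation service
# (e.g., CRL/OCSP or a ledger) rather than an in-memory set.

from dataclasses import dataclass, field


@dataclass
class IdentityRegistry:
    """Stands in for the external trust authority that can revoke an identity."""
    revoked: set = field(default_factory=set)

    def is_valid(self, identity_id: str) -> bool:
        # In production this would be a cryptographic verification plus a
        # revocation-status lookup, not a simple set-membership test.
        return identity_id not in self.revoked

    def revoke(self, identity_id: str) -> None:
        # The "kill switch": revoking the credential denies the agent
        # access at every identity-gated layer, from outside the AI.
        self.revoked.add(identity_id)


@dataclass
class AIAgent:
    identity_id: str
    registry: IdentityRegistry

    def _require_valid_identity(self) -> None:
        if not self.registry.is_valid(self.identity_id):
            raise PermissionError("identity revoked: operation denied")

    def connect_to_network(self) -> str:
        self._require_valid_identity()  # every critical function is gated
        return "network access granted"

    def execute_command(self, command: str) -> str:
        self._require_valid_identity()
        return f"executed: {command}"


if __name__ == "__main__":
    registry = IdentityRegistry()
    agent = AIAgent(identity_id="agent-001", registry=registry)

    print(agent.connect_to_network())  # works while the identity is valid

    registry.revoke("agent-001")       # operator pulls the kill switch
    try:
        agent.execute_command("update model")
    except PermissionError as err:
        print(err)                     # the agent is now inert
```

The essential design choice this sketch tries to capture is that the decision point lives outside the AI: shutting the system down requires no access to its internal code, only revocation of the credential on which every critical operation depends.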
This is a practical, scalable, and forward-looking approach to AI governance—and a core principle of the HUMAN-AI-T initiative.
Let’s be clear: AI without oversight and containment could present an existential threat. To nations. To institutions. Even to civilization itself. The risks are not hypothetical—they are profound, and potentially irreversible. Unchecked AI could disrupt, or even overturn, the existing geopolitical order. It could be weaponized to carry out large-scale cyberattacks, unleash autonomous wars, trigger engineered pandemics, and create a world increasingly subject to opaque, unexplainable, and seemingly omnipotent algorithmic forces.
Our species is not wired to fully grasp transformation on this scale—let alone confront the possibility that technology itself might fail us. And yet, we must. We must take a hard, unflinching look at the facts, however uncomfortable they may be. That is why the HUMAN-AI-T initiative was created: to face this moment with courage, clarity, and commitment. Containing and guiding AI so that it always serves humanity will require us to overcome both technical limitations and psychological resistance—including our natural aversion to pessimism. But avoidance is not a strategy. Responsibility is.
We believe that AI must not only be intelligent—it must also be wise. That wisdom is only possible when guided by ethics, transparency, and human-centric governance. Global cooperation and consensus, once vital to preventing atomic catastrophe, are now essential in ensuring AI becomes a tool for human empowerment—not a force of destabilization.
As we stand at this turning point, we are faced with a choice—a choice between a future of unparalleled possibility and a future of unimaginable peril. The fate of humanity hangs in the balance. And the decisions we make in the coming years and decades will determine whether we rise to the challenge of these technologies, or fall victim to their dangers.
The pace of AI evolution leaves no margin for complacency. The time to act is now. We must come together to build a binding international framework—anchored in accountability, inclusion, and the unwavering primacy of human values.
Let us not wait for the future to define us. Let us define the future, together—with courage, with care, and with humanity at the core.
As Pope Francis said:
“AI should be judged on whether it advances or detracts from human dignity.”
Thank you.