International Data-Based Systems Agency IDA at the UN
IMAGINE: Humans and the planet flourish sustainably, and human rights are respected not only offline but also online and in the domain of “artificial intelligence (AI)” – which can more appropriately be called “data-based systems (DS)” …
WE CAN ACHIEVE THIS by sustainable and human rights-based data-based systems (HRBDS) and by establishing an International Data-Based Systems Agency (IDA) at the UN following the model of the International Atomic Energy Agency (IAEA).
These two concrete proposals for action – sustainable and human rights-based data-based systems (HRBDS) and the creation of an International Data-Based Systems Agency (IDA) at the UN – are based on a six-year research project by Peter G. Kirchschlaeger, begun at Yale University and completed at the University of Lucerne, and published in the book “Digital Transformation and Ethics: Ethical Considerations on the Robotization and Automation of Society and the Economy and the Use of Artificial Intelligence” (Nomos: Baden-Baden 2021, 537 pages).
Why
Humanity and the Planet Are In Imminent Danger
In the year 2024 it is still possible, e.g., to put an app on the market that sexualizes children’s images,[1] and the only consequence for the company concerned is that it makes a lot of money from it. We need to do something about this!
In the year 2024 it is now possible with “social media” to destabilize a peaceful, functioning, and wealthy country within a few hours (e.g., the 2024 UK far-right riots).[2]
In the year 2024 it is now possible with “Generative AI” to destroy a politician with a “Hollywood-quality” deepfake.[3]
Urgent action is needed.
Yale University: Keynote on “Striving for a Sustainable and Human Rights-Based Future – An International Data-Based Systems Agency (IDA)” by Professor Dr Peter G. Kirchschlaeger (Professor of Ethics and Director of the Institute of Social Ethics ISE at the University of Lucerne / Visiting Professor at ETH Zurich)
How
All Humans and the Planet Should Flourish with DS
Human Rights-Based Data-Based Systems (HRBDS)
The UN General Assembly has recently adopted the resolution “Seizing the opportunities of safe, secure and trustworthy artificial intelligence systems for sustainable development”,[4] aiming for ‘safe, secure and trustworthy artificial intelligence systems.’ It is now urgent to implement and build on this resolution by promoting sustainable and human rights-based data-based systems (HRBDS) and by establishing an International Data-Based Systems Agency (IDA) at the UN.
Innovation and Human Rights Going Hand in Hand
Concrete Example: Purpose-Driven Data Use
An Analogy Shows: It Works!
The feasibility of this approach can be illustrated by an analogy: when one goes to the doctor, one shares personal data so that the doctor knows whom she has in front of her, and one describes one’s illness in the hope of relief from suffering and of healing. The doctor is not allowed to resell this data, nor is the patient offered the option of selling it in exchange for better medical treatment. The doctor must also keep the patient’s file with the medical history strictly confidential, using it exclusively for the purpose of better treating the patient. It remains possible to share fully anonymized data for research purposes if the patient gives informed consent to this sharing.
What
Sustainable and Human Rights-Based Data-Based Systems (HRBDS)
Sustainable and human rights-based data-based systems (HRBDS) are meant to ensure that human rights serve as the basis of DS. In other words, HRBDS seek to ensure that human rights are respected, protected, implemented, and realized within the entire life cycle of DS and the complete value-chain process of DS (in the design, development, production, distribution, use, or non-use of DS on human rights grounds). HRBDS strive to protect the powerless from the powerful.
International Data-Based Systems Agency (IDA) at the UN
An International Data-Based Systems Agency (IDA) urgently needs to be established at the UN. It would serve as a global platform for technical cooperation in the field of DS, fostering human rights, safety, security, and the peaceful use of DS and promoting HRBDS. It would also act as a global supervisory, monitoring, and regulatory authority in the area of DS, responsible for market-access approval.
Given the areas of convergence between DS and nuclear technologies, the International Atomic Energy Agency (IAEA) model would seem the most appropriate one for responsible global AI governance, as it represents a UN agency with “teeth”.
The establishment of an IDA is feasible because humanity has already shown that we are able to avoid “blindly” pursuing and implementing everything that is technically possible, and that we can exercise caution when the welfare of humanity and the planet is at stake. For example, humans researched the field of nuclear technology but then substantially limited research and development in this field in order to prevent even worse consequences. This restraint was successful mainly due to an international regime with concrete enforcement mechanisms and thanks to the International Atomic Energy Agency (IAEA) at the UN.
[1] See Heikkilä, Melissa (2022): “The viral AI avatar app Lensa undressed me – without my consent”. In: MIT Technology Review (December 12, 2022). Online: https://www.technologyreview.com/2022/12/12/1064751/the-viral-ai-avatar-app-lensa-undressed-me-without-my-consent [21.8.2024].
See Snow, Olivia (2022): “’Magic Avatar’ App Lensa Generated Nudes From My Childhood Photos”. In: Wired (December 7, 2022). Online: https://www.wired.com/story/lensa-artificial-intelligence-csem/?bxid=5cc9e15efc942d13eb203f10 [4.7.2024].
[2] See Cadwalladr, Carole (2024): “‘A polarisation engine’: how social media has created a ‘perfect storm’ for UK’s far-right riots.” Online: https://www.theguardian.com/media/article/2024/aug/03/a-polarisation-engine-how-social-media-has-created-a-perfect-storm-for-uks-far-right-riots [21.8.2024].
[3] See Verma, Pranshu/Zakrzewski, Cat (2024): “AI deepfakes threaten to upend global elections. No one can stop them.” Online: https://www.washingtonpost.com/technology/2024/04/23/ai-deepfake-election-2024-us-india/ [21.8.2024].
[4] UN General Assembly (2024): “Seizing the opportunities of safe, secure and trustworthy artificial intelligence systems for sustainable development”. 24 March 2024. Online: https://daccess-ods.un.org/access.nsf/Get?OpenAgent&DS=A/78/L.49&Lang=E [4.7.2024].
[5] The Elders (2023): “The Elders urge global co-operation to manage risks and share benefits of AI.” Online: https://theelders.org/news/elders-urge-global-co-operation-manage-risks-and-share-benefits-ai [4.7.2024].
[6] Pope Francis (2024): “Artificial Intelligence and Peace”. Message of Pope Francis for the 57th World Day of Peace. 1 January 2024. Online: https://www.vatican.va/content/francesco/en/messages/peace/documents/20231208-messaggio-57giornatamondiale-pace2024.html [4.7.2024].
[7] UN Secretary General (2023): “UN Chief Backs Idea of Global AI Watchdog Like Nuclear Agency”. June 2023. Online: https://www.reuters.com/technology/un-chief-backs-idea-global-ai-watchdog-like-nuclear-agency-2023-06-12/ [4.7.2024]; https://press.un.org/en/2023/sgsm21832.doc.htm [4.7.2024].
[8] UN Secretary General (2023): “Secretary-General’s remarks to the Security Council on Artificial Intelligence”. July 18, 2023. Online: https://www.un.org/sg/en/content/sg/speeches/2023-07-18/secretary-generals-remarks-the-security-council-artificial-intelligence [4.7.2024].
[9] UN High Commissioner for Human Rights (2023): “Artificial intelligence must be grounded in human rights, says High Commissioner”. Statement Delivered by Volker Türk, UN High Commissioner for Human Rights, at the High Level Side Event of the 53rd Session of the UN Human Rights Council on July 12, 2023. Online:
[10] UN Human Rights Council (2023): “Resolution New and emerging digital technologies and human rights”. No. 41/11. 13 July 2023. Online: https://documents.un.org/doc/undoc/gen/g23/146/09/pdf/g2314609.pdf?token=dLWzJnULXDGNJTLOJg&fe=true [4.7.2024].
[11] UN General Assembly (2024): “Seizing the opportunities of safe, secure and trustworthy artificial intelligence systems for sustainable development”. 24 March 2024. Online: https://daccess-ods.un.org/access.nsf/Get?OpenAgent&DS=A/78/L.49&Lang=E [4.7.2024].
[12] Santelli, Filippo (2024): Sam Altman: “In pochi anni l’IA sarà inarrestabile, serve un’agenzia come per l’energia atomica” [“In a few years AI will be unstoppable; we need an agency like the one for atomic energy”]. Online: https://www.repubblica.it/economia/2024/01/18/news/sam_altman_in_pochi_anni_lia_sara_inarrestabile_serve_unagenzia_come_per_lenergia_atomica-421905376/amp/ [4.7.2024].