The UN Secretary-General’s Roadmap for Digital Cooperation provides a basis for discussing Artificial Intelligence (AI) issues around inclusion, coordination, and capacity-building for UN Member States and other stakeholders by outlining five pillars of AI: trustworthiness, human rights, safety, sustainability, and the promotion of peace. Looking at the first pillar, it is important to understand why AI must be trustworthy if we are to harness its full potential and ensure that AI design, development, and deployment are inclusive. At UNU Macau, we aim to design an AI governance model that embraces ethical values at its core and to conduct cross-cultural analyses of AI narratives to build an inclusive AI discourse.
Trustworthiness is generally defined as the ability to be relied on as honest or truthful. This speaks to the need for understanding, clarity, and trust in how AI systems are designed and how they operate. Generally, the inner workings of AI are neither visible nor accessible to users. This lack of understanding of how AI systems are designed and how they function gives rise to fear of the unknown and to distrust of what AI can deliver and how it can ensure user satisfaction. To build trust in AI, co-design and active engagement of users are needed from the early stages of planning through to implementation. Trust is a two-way street.
To harness the potential of AI to achieve the Sustainable Development Goals (SDGs), we need to build trust in AI. Currently, there are academic efforts to develop technical solutions such as ‘explainable AI’, whose ultimate objective is to provide an understandable explanation of how AI makes decisions in a given situation. In parallel, there are multidisciplinary efforts to establish global standards of AI ethics that guide the model of trustworthy AI. These efforts emphasize that ethics must be considered across the AI design, development, deployment, and operational phases collectively. All these efforts help users engage fully with AI instead of mystifying it.
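To make the idea of explainability concrete, the toy sketch below shows one simple form it can take: a model whose decision is decomposed into per-feature contributions, so a user can see what pushed the score up or down. Everything here is hypothetical (the feature names, weights, and "credit scoring" framing are invented for illustration); real explainable-AI methods handle far more complex models.

```python
# Illustrative sketch only: a toy linear scoring model whose decision
# can be broken down into additive per-feature contributions, in the
# spirit of explainable-AI techniques. All names and weights are hypothetical.

def explain_prediction(weights, bias, features):
    """Return the overall score and each feature's additive contribution."""
    contributions = {name: weights[name] * value for name, value in features.items()}
    score = bias + sum(contributions.values())
    return score, contributions

weights = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}  # hypothetical
bias = 0.1
applicant = {"income": 2.0, "debt": 1.5, "years_employed": 4.0}

score, contributions = explain_prediction(weights, bias, applicant)
print(score)          # overall decision score: 1.1
print(contributions)  # e.g. "debt" contributed -1.2, pulling the score down
```

An explanation of this kind lets a user see not just the outcome but the reasons behind it, which is the kind of transparency that builds trust.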
At UNU Macau, we are developing several research activities to build trustworthiness in AI. We approach AI systems and policies as socio-technical-environmental (STE) systems that are deeply interlinked with the environment, society, and the economy and can help achieve the SDGs on several levels. A multidisciplinary approach can help us understand the nature of these inter-relationships.
An interesting way to address AI ethical concerns is to engage citizens, especially marginalized people, in the design process of AI. Participatory agent-based simulation, in which participants collectively explore a complex reality, is among the tools that can help empower local communities by enhancing their trust in and understanding of AI. UNU Macau’s Sustainable Decision Making Tech research team is currently exploring which ethical concerns should be added to such simulations.
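For readers unfamiliar with agent-based simulation, the minimal sketch below illustrates the idea under invented assumptions: each agent holds a "trust in AI" level and, at each step, nudges it toward the average of its neighbours. In a participatory setting, community members would help define the agents and rules themselves; the model and parameters here are purely illustrative.

```python
import random

# Minimal agent-based simulation sketch, assuming a simple "trust diffusion"
# model: agents sit on a ring, each holding a trust-in-AI level in [0, 1],
# and repeatedly move toward the average of their two neighbours.
# The number of agents, steps, and influence strength are all hypothetical.

random.seed(42)

N_AGENTS = 20
STEPS = 50
INFLUENCE = 0.1  # hypothetical strength of peer influence

trust = [random.random() for _ in range(N_AGENTS)]
initial_spread = max(trust) - min(trust)

for _ in range(STEPS):
    for i in range(N_AGENTS):
        # Neighbours on a ring: the previous and next agent.
        neighbour_avg = (trust[i - 1] + trust[(i + 1) % N_AGENTS]) / 2
        trust[i] += INFLUENCE * (neighbour_avg - trust[i])

final_spread = max(trust) - min(trust)
print(initial_spread, final_spread)  # opinions drift toward a shared level
```

Even this toy model shows what such simulations offer: participants can change a rule or a parameter and immediately see how collective outcomes shift, which supports shared understanding of a complex reality.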
Trustworthy AI policies should be built on trustworthy data. This is especially true in Global South contexts, where accountable data are often lacking. Reusing datasets from Western countries, for example, to fill this gap would replicate Western fairness biases and only reinforce the lack of trust in digital technologies and AI among marginalized populations. UNU Macau’s Data and Sustainable Development team aims to rebuild trust in social-indicator data by promoting the role of bottom-up, citizen-generated data and people’s empowerment.
Finally, to build trust in AI systems, AI policies and discourse should be more inclusive and localized in developing countries. Our team aims to conduct a cross-cultural analysis of AI narratives to build an inclusive AI discourse.
This article is the first in a blog series on responsible AI. The series was inspired by the UN Secretary-General’s Roadmap for Digital Cooperation and its call for artificial intelligence that is trustworthy, human rights-based, safe, and sustainable, and that promotes peace.