The UN Secretary-General’s Roadmap for Digital Cooperation provides a basis for discussing Artificial Intelligence (AI) issues around inclusion, coordination, and capacity-building for UN Member States and other stakeholders by outlining the five pillars of AI (trustworthiness, human rights, safety, sustainability, and promotion of peace). One of the Roadmap’s key points is the need for AI that does no harm to human rights in its design, development, and deployment. At UNU Macau, we aim to expand the institute’s current work on digital inclusion by examining the role of AI in enhancing equality.
Emphasizing human rights puts humanity at the center of algorithmic decision-making processes and outcomes. Human rights are enshrined in the Universal Declaration of Human Rights (UDHR) and further detailed in the International Covenant on Civil and Political Rights (ICCPR) and the International Covenant on Economic, Social and Cultural Rights (ICESCR). These international human rights treaties uphold the basic rights and fundamental freedoms that are inherent to all human beings.
The core principles of human rights include universality, interdependence and indivisibility, and equality and non-discrimination. These core values should exist online, as they do offline, as well as in the design and implementation of AI. When human rights are not considered, AI ends up reflecting historical inequalities and biases; facial recognition technology that inaccurately identifies the gender of faces is one example. In addition, individuals are often not given the right to choose what AI systems can and cannot do with their personally identifiable data, such as profiling users for credit rating, risk assessment, or tastes in social media; this both extends existing human biases and magnifies their impact. It is concerning that surveillance is seamlessly being extended through AI capabilities.
A human rights-based approach to AI, like the one developed by UNESCO, is a useful reference tool for addressing a number of human rights requirements, but more practical frameworks and tools for on-the-ground implementation are also needed. The lack of integration of human rights in AI systems could bring about a number of issues:
1) Biases in the data used to train AI algorithms
2) Biases in the lifecycle of AI systems
3) Complex and unanticipated interactions of AI systems with the environment
The first issue relates to the quality of the data used to train AI algorithms. Anonymisation is often recommended as a best practice for respecting data privacy, but it is sometimes not enough. A recent study shows, for example, that even after anonymisation, 99.98% of Americans could still be correctly re-identified in datasets using demographic attributes. At UNU Macau, we are exploring ways to empower the people who are, most of the time, the source of these data. This is the approach of our institute’s small data research team.
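The intuition behind such re-identification can be sketched in a few lines of code. The example below uses entirely fabricated toy records and a hypothetical helper, `reidentifiable_fraction`, to show how combining a handful of quasi-identifiers (ZIP code, birth year, gender) can make most "anonymised" records unique, and therefore linkable to a named individual by anyone holding auxiliary data:

```python
from collections import Counter

# Toy "anonymised" records: names removed, quasi-identifiers kept.
# All values are fabricated for illustration only.
records = [
    {"zip": "99501", "birth_year": 1984, "gender": "F"},
    {"zip": "99501", "birth_year": 1984, "gender": "M"},
    {"zip": "10001", "birth_year": 1990, "gender": "F"},
    {"zip": "10001", "birth_year": 1990, "gender": "F"},
    {"zip": "60601", "birth_year": 1975, "gender": "M"},
]

def reidentifiable_fraction(rows, quasi_identifiers):
    """Fraction of rows whose quasi-identifier combination is unique,
    i.e. rows an attacker with auxiliary data could single out."""
    keys = [tuple(r[q] for q in quasi_identifiers) for r in rows]
    counts = Counter(keys)
    unique = sum(1 for key in keys if counts[key] == 1)
    return unique / len(rows)

# Combining all three attributes makes 3 of the 5 toy records unique.
print(reidentifiable_fraction(records, ["zip", "birth_year", "gender"]))  # 0.6
```

Real population datasets have many more attributes and many more rows, which is why studies find near-total re-identifiability once enough demographic attributes are combined.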
Biases occur not only in data but can arise along the typical lifecycle of AI systems, from the data collection phase through machine learning model development and deployment. To address the second issue, one idea is to introduce human values, as defined by Schwartz’s theory of basic values (for example, benevolence, security, and self-enhancement), into the software itself. Another idea that we are considering is to engage more deeply with stakeholders. This could be done through methods such as interviews or surveys, but we are looking for more engaging approaches, like participatory co-design or modelling, where stakeholders with different points of view can explore alternative AI visions and marginalized people can be involved from the beginning of the design process.
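Alongside stakeholder engagement, simple quantitative checks can flag bias at the model-evaluation stage of the lifecycle. As one illustration (the function name, predictions, and group labels below are hypothetical, and this is only one of many fairness metrics), demographic parity difference compares the rate of positive predictions across groups:

```python
def demographic_parity_difference(predictions, groups):
    """Gap between the highest and lowest positive-prediction rates
    across groups. 0.0 means parity; larger values mean more disparity."""
    rates = {}
    for g in set(groups):
        group_preds = [p for p, gr in zip(predictions, groups) if gr == g]
        rates[g] = sum(group_preds) / len(group_preds)
    values = sorted(rates.values())
    return values[-1] - values[0]

# Hypothetical binary model outputs (1 = approved) and group labels:
# group "a" is approved 75% of the time, group "b" only 25%.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, groups))  # 0.5
```

A check like this is cheap to run at every retraining, but it only detects disparity; deciding what counts as acceptable, and for whom, is exactly where the stakeholder engagement described above comes in.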
To address the third issue, we need to study the positive and negative interactions of AI with human rights at the level of the whole socio-technical-environmental system. To understand new, complex, and dynamic scenarios where AI systems have consequences not only for human rights but also for the labor market or climate change, we need to adopt systems thinking: a holistic approach that focuses on how a system’s constituent parts interrelate, and how systems work over time and within the context of larger systems.
This article is the second in our blog series on responsible AI. The series was inspired by the call in the UN Secretary-General’s Roadmap for Digital Cooperation for artificial intelligence that is trustworthy, human rights-based, safe and sustainable, and promotes peace.