Artificial intelligence and its impacts on our information societies have recently been in the spotlight of digital governance discussions. The development of these technologies offers both opportunities and challenges for responding to real-world issues. How can they be used to achieve sustainable development? And what are the implications of the use of AI for human rights? These are some of the questions raised by the 2019 Global Information Society Watch (GISWatch) report, themed “Artificial Intelligence: human rights, social justice, and development,” launched at the 2019 United Nations Internet Governance Forum (IGF), an annual multi-stakeholder meeting that discusses the challenges and advancements of internet governance-related issues.
GISWatch aims to monitor the efforts being taken towards a more inclusive information society. Its latest report was driven by the lack of inclusion of the Global South in discussions on AI and human rights. The United Nations University Institute in Macau (formerly UNU-CS) contributed the Thailand chapter on using AI to unmask situations of forced labour and human trafficking. Below, we share some key insights that emerged from the report.
Photo credit: Luísa Franco Machado
The challenges to internet governance: inclusion, data governance, security and safety
Who governs the internet? Persistent power asymmetries in global internet governance prevent the equal inclusion of all stakeholders. Connection and participation in digitalisation still follow a top-down approach in which the internet is widely seen as something to be delivered rather than something in which we are all actors. The technical, legal and organisational aspects of transferring data, known as data governance, have raised questions worldwide regarding privacy and data protection, but the stakeholders involved in this discussion are still far from reaching an agreement on where privacy should start and where it should end.
A Global South driven approach to decolonise AI
Although growing internet and mobile penetration in developing countries represents progress in getting everyone online, there is still a lack of digital inclusion. The GISWatch 2019 report highlights the urgency of decolonising AI and deploying local technologies for local issues. This raises questions about the governance model that should be used and the kind of regulation that should be imposed on these new technologies, especially in the Global South.
A human rights-based approach to AI
If opaque and unaccountable, AI technologies can unethically harvest and use our data to influence decisions about our lives, spread “fake news” and disinformation, or be used for surveillance and to discriminate against people. From job applications to voting behaviour or immigration requests, unregulated AI can negatively impact our human and digital rights to privacy, safety, fairness and non-discrimination. Dominant approaches to AI regulation, such as normative or technical approaches, have been criticised for failing to prioritise the protection of the human rights impacted by AI. Alternatively, as put forward by the United Nations University Institute in Macau in the GISWatch 2019 report, a human rights-based approach to AI could hold the design and deployment of such systems accountable and grounded in human rights, using frameworks such as the Universal Declaration of Human Rights (UDHR).
The Institute’s contribution to the report presents an evidence-based country report on Thailand, introducing Apprise, an AI tool for unmasking situations of forced labour and human trafficking. An estimated 24.9 million people today live in situations of forced labour and human trafficking, yet fewer than 1% of victims are identified. Apprise aims to improve the initial victim-identification stage by supporting frontline responders to screen for indicators of exploitation while enhancing the agency and self-awareness of potential victims, particularly migrant workers in vulnerable situations. Furthermore, there is potential to use machine learning in this system to predict changing patterns of exploitation from screening responses. To this end, the authors reflect on the use of the UDHR to pre-assess the human rights impact of machine learning, which could help mitigate the discrimination and harmful human rights impacts that result from opaque and unaccountable predictive AI systems.
But how can the ethical implications of systems such as Apprise, a multi-stakeholder and cross-border initiative, be effectively regulated? Who is responsible for enforcing the protection of human rights throughout the design and deployment of such AI systems, and for keeping them transparent and accountable? It is still unclear where the governance of AI ethics lies. More than ever, it is critical that Global South voices and views, including those of governments and civil society organisations, are at the heart of global responses to AI governance, in order to build truly democratised and inclusive information societies.
About the authors
Francisca Sassetti is a Research Assistant at the United Nations University Institute in Macau, where she conducts policy-oriented research in the Migrant Technology research lab. Her research interests include ICT for development, public policy, social justice and democracy.
Luísa Franco Machado is a senior Political Science student at Sciences Po Paris, and is currently a visiting student at Freie Universität Berlin. Her research focuses on digital policy, human rights, mediated information, and political processes.