Sustainable Decision-Making Tech

  • Socio-technical systems – such as telecommunication networks, power grids, and large-scale manufacturing systems – are interacting ensembles of engineered artifacts embedded in society, linked with economies, and connected with the environment. Algorithmic decision-making (ADM) refers to computerized implementations of algorithms, including those derived from machine learning, data processing, or Artificial Intelligence, that are used to make decisions or to assist in making them. Although algorithms are no longer novel, they are increasingly integrated into socio-technical systems and influence ever more aspects of people’s lives; as Marc Andreessen put it in 2011, “software is eating the world.” Many organizations and companies have sought new solutions to global challenges through emerging technologies, and every day we witness the emergence of new socio-technical systems such as Smart Cities, public e-health systems, Internet of Things (IoT) networks, Digital Twins, and autonomous car infrastructures.

    As these complex socio-technical systems emerge, Artificial Intelligence (AI) acquires an important societal dimension. AI has played an important role in driving development and innovation across the world, bringing positive impacts such as automated disease diagnosis, greater efficiency in the workplace, and assistive technologies for education. At the same time, however, AI could be used as a dangerous tool of oppression, discrimination, and surveillance. Many researchers point to ethical and legal concerns about accountability, transparency, and responsibility in designing and using AI across social sectors. In this context, multilateral organizations such as the United Nations (UN) need to play a key role in building frameworks and guidelines for the sustainable development of AI. Many UN agencies have established AI-related research programmes to open dialogues on emerging AI issues. According to the International Telecommunication Union (ITU)’s 2019 report “United Nations Activities on Artificial Intelligence,” over 37 agencies pursue activities related to AI, and according to the UN Secretary-General’s Roadmap for Digital Cooperation, more than 161 institutes and organizations participate in AI ethics initiatives.

    Despite the global attention on AI technologies and ethics, the societal effects of AI have not been sufficiently addressed in public discourse and research communities. The complex interaction between algorithmic decision-making systems and social contexts produces unanticipated, networked effects, yet there is limited discussion of those effects and of ways to mitigate their harmful consequences. To promote governance that ensures AI is responsible, we must apply explainability, accountability, and trustworthiness to AI systems not only at the technical level but also at larger scales.

    Like Artificial Intelligence, computer-based models are increasingly used in emerging socio-technical systems. For example, COVID-19 tracking applications collect data not only to detect cases but also to inform appropriate models and policies. The same kinds of models can be used to analyze and predict large-scale systems addressing global challenges, such as climate change or global health issues. Like AI, these models can serve sustainable development. A recently released manifesto identifies ways in which models can serve society more transparently.


    In a 2020 paper published in IEEE Software [1], we introduced a conceptual framework called Models & Data (MODA) to support a data-centric and model-driven approach that integrates heterogeneous models and their respective data across the entire life-cycle of socio-technical systems. We propose that more work is needed to address the issues mentioned above.

    In line with the UN-wide efforts to achieve the Sustainable Development Goals, this research team will explore sustainable and inclusive decision-making socio-technical system designs, conducted with the people, not only for the people. To tackle the challenges mentioned above, the team is conducting multi-disciplinary research at the intersections of the Humanities, Artificial Intelligence, Computer Science, and Modelling and Simulation, using methods and techniques from diverse areas such as:

    • Agent-based modelling & gamification
    • Complex systems
    • Design science
    • Participatory design
    • Cross-cultural discourse analysis
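
    To give a concrete flavour of the agent-based modelling approach listed above, the following is a minimal illustrative sketch of an agent-based SIR epidemic model in Python. The function name, parameter values, and random-mixing contact structure are assumptions for illustration only; they are not part of the MODA framework or of any specific system described here.

```python
import random

def run_sir_abm(n_agents=200, n_infected=5, p_transmit=0.05,
                p_recover=0.1, contacts_per_step=4, steps=50, seed=42):
    """Minimal agent-based SIR epidemic sketch (illustrative only).

    Each agent is in state 'S' (susceptible), 'I' (infected), or
    'R' (recovered). At every step, each infected agent contacts a
    few randomly chosen agents and may transmit the infection to
    susceptible ones; it may also recover. Returns the per-step
    counts of each state.
    """
    rng = random.Random(seed)
    states = ['I'] * n_infected + ['S'] * (n_agents - n_infected)
    history = []
    for _ in range(steps):
        # Snapshot the currently infected agents before updating states.
        infected = [i for i, s in enumerate(states) if s == 'I']
        for i in infected:
            # Random mixing: contact a few agents; susceptibles may be infected.
            for j in rng.sample(range(n_agents), contacts_per_step):
                if states[j] == 'S' and rng.random() < p_transmit:
                    states[j] = 'I'
            # Infected agents recover with a fixed probability per step.
            if rng.random() < p_recover:
                states[i] = 'R'
        history.append({s: states.count(s) for s in ('S', 'I', 'R')})
    return history

history = run_sir_abm()
print(history[-1])  # final counts of susceptible, infected, recovered agents
```

    Even a toy model like this shows the character of agent-based modelling: macro-level dynamics (an epidemic curve) emerge from simple micro-level interaction rules, which is exactly what makes the approach useful for studying socio-technical systems.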


    • AI Governance & Ethics: This research aims to provide a participatory approach to the design, implementation, and sustainability of AI governance in the health sector, one that embraces ethics and brings all key players from the technical community, civil society, the public sector, and the private sector to the same decision-making table. The main objective is to deliver an AI governance model with ethical values at its core.
    • Diversifying and operationalizing AI narratives: The team aims to conduct a cross-cultural analysis of AI narratives to build an inclusive AI discourse, focusing on government policy papers on AI in East Asia. The results of this research will be developed into a serious game that showcases alternative AI narratives.

    Attlee Gamundani, JeongHyun Lee, Serge Stinckwich


    Our research cuts across the majority of the SDGs, but will focus most on the following: