Blog | AI Governance Incomplete without Considering the Impact of Carbon and Social Footprints

  • 2020•08•21

    by Dr Attlee Gamundani

    Current insights from the research community on AI research, practice and policy need to be studied further so that they become inclusive of socio-technical issues such as ethics, bias, transparency, accountability and governance. Beyond these issues, we also need to bring the impact of AI on the environment into governance discussions.

    Though rarely addressed in the many discussions on AI, a number of key issues have been raised around the environmental and social impact of AI systems. In order for AI governance to be more comprehensive, it is important for the research community, policymakers and supporting entities to consider the following aspects:

    Non-Eco-Friendly and Power-Hungry Algorithms: We need to take into account the power sources of the computational capacity and machines needed to process data in AI systems. Are they renewable or sustainable sources of energy? On the social footprint, it is important to ask how many ethical violations, value conflicts, and privacy and security loopholes are being created by the large amounts of data powering our AI systems.
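    To make the energy question concrete, the carbon footprint of a training run can be roughly estimated with the commonly used back-of-the-envelope formula: energy (kWh) = hardware power draw × hours × data-centre PUE, and CO2-equivalent = energy × grid carbon intensity. The sketch below is illustrative only and is not from the article; the GPU wattage, PUE and grid-intensity figures are hypothetical placeholders that vary widely by hardware and region.

```python
def training_co2e_kg(gpu_power_watts: float,
                     num_gpus: int,
                     hours: float,
                     pue: float = 1.6,
                     grid_kg_co2e_per_kwh: float = 0.45) -> float:
    """Rough CO2-equivalent estimate (kg) for a model-training run.

    pue: power usage effectiveness of the data centre (overhead factor).
    grid_kg_co2e_per_kwh: carbon intensity of the local electricity grid;
    a renewable-powered grid would push this figure toward zero.
    """
    energy_kwh = (gpu_power_watts * num_gpus / 1000.0) * hours * pue
    return energy_kwh * grid_kg_co2e_per_kwh

# Hypothetical example: 8 GPUs at 300 W each, training for 240 hours.
print(f"{training_co2e_kg(300, 8, 240):.1f} kg CO2e")
```

Swapping in a low-carbon grid intensity shows why the power source, not just the total consumption, is the governance-relevant variable.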

    Profitable in the Short-Term: Policymakers should map the impact of machine learning solutions on the SDGs by weighing both their positive and negative effects on the realisation of the respective goals. How much will it cost to remedy the effects of choosing the short-term profits of business-inclined AI solutions and AI monetisation over the bigger picture: a true win-win approach that is profitable and sustainable in the long run?

    Centralised AI Systems Not Easily Accessible: Are AI solutions widely accessible to the majority of end-users? Given the computational power and supporting resources needed to make AI systems work and ready for consumption, AI solutions are currently accessible only to big organisations, because they have the resources. This is undoubtedly creating rifts among various stakeholders. It is time we pursued decentralised AI-driven solutions rather than centralised designs, to ensure that these solutions are accessible to those in the Global South.

    No Certification System in Place: In order for the AI governance framework to be strengthened, AI solutions being deployed in the market need to pass through an automated, standardised quality-check process that delivers a complete quality check and confirmation. A certification mandate would ensure due diligence is applied at all stages of the AI development life cycle.

    AI has enormous potential to address the challenges facing humanity and the environment, but to embrace its full potential we need to tread carefully, as there are grey areas yet to be addressed. In addition to the privacy and security concerns of AI, the environmental costs and social impact of current AI systems require immediate attention in policy discussions.

    Figure 1: Summary from SECure: A Social and Environmental Certificate for AI Systems (Gupta A, Lanteigne C and Kingsley A, 2020)

    About the author

    Dr Attlee Gamundani is a Young ICTD Fellow at the United Nations University Institute in Macau. His research interests revolve around the issues of Artificial Intelligence of Things (AIoT), Cybersecurity, ICT for development and Sustainable Development Goals (SDGs).