
Human-Like AI vs A Human Rights-Based Approach to AI

International communities need to consider AI narratives that offer sustainable ways of living with AI.

In the global effort to prepare for the post-pandemic world, artificial intelligence (AI) has been receiving more of the spotlight, and many anticipate that the COVID-19 pandemic will bring the age of artificial intelligence forward. The question of whether developing artificial intelligence is right or wrong has become outdated. Ahead of the post-pandemic world, we need to ask instead which AI narrative can properly demystify the potential of AI in our near future.

Human-Like AI

The term artificial intelligence was coined at the Dartmouth Summer Research Project on Artificial Intelligence in 1956. The initial proposal indicated that the project aimed to brainstorm how to develop a machine that “behaves in ways that would be called intelligent if a human were so behaving.” Even before the Dartmouth Conference, Alan Turing had designed the well-known Turing test, a way of judging whether a machine is ‘intelligent’ through a person’s interaction with it. This approach emphasized the simulation of the human mind and/or behaviour, implying an expectation of human-like AI. To develop a machine that acts like humans, researchers have studied how humans think, move, and feel. In the process, human beings are treated as signal-processing machines, similar to how a computer functions. These efforts have produced a narrow understanding of artificial intelligence: anthropomorphic AI that emulates and deceives humans.

A Human Rights-Based Approach to AI

Despite significant worldwide interest in human-like AI, it is important that international communities consider alternative AI narratives that offer sustainable ways of living with AI. The UN Secretary-General’s Roadmap for Digital Cooperation underlines that emerging technologies, including artificial intelligence, must be aligned with the human rights values enshrined in the UN Charter, the Universal Declaration of Human Rights, and the norms and standards of international law. These human rights frameworks emphasize fundamental values such as equality, security, privacy, respect and autonomy, all of which could potentially be undermined by several models of human-like AI.

UNESCO has embarked on a long process of setting the first global standards on the ethics of artificial intelligence. In addition, as of December 2019, more than 35 UN agencies had developed guidance on AI ethics, applications, and policy that emphasizes human rights.

Human rights-based AI initiatives have been overshadowed by human-like AI for the past 50 years. If we do not make proactive efforts to research and operationalize human rights-based AI frameworks, human-like AI will inevitably become a part of our daily lives and obstruct our sustainable coexistence with emerging technologies. We need to take concrete action to change or diversify the current AI narrative.

 

Suggested citation: Lee JeongHyun, "Human-Like AI vs A Human Rights-Based Approach to AI," UNU Macau (blog), 2020-09-04, https://unu.edu/macau/blog-post/human-ai-vs-human-rights-based-approach-ai.
