Blog | Human-Like AI vs A Human Rights Based Approach to AI

News
  • 2020•09•04

    by Dr JeongHyun Lee

    In the global effort to prepare for the post-pandemic world, artificial intelligence (AI) has been receiving more attention, and many anticipate that the COVID-19 pandemic will bring the age of artificial intelligence forward. The question of whether developing artificial intelligence is right or wrong has become outdated. Ahead of the post-pandemic world, we need to ask what the proper AI narrative is for demystifying the potential of AI in our near future.

    Human-Like AI
    The term artificial intelligence was coined at the Dartmouth Summer Research Project on Artificial Intelligence in 1956. The initial proposal indicated that the project aimed to brainstorm how to develop a machine that “behaves in ways that would be called intelligent if a human were so behaving.” Even before the Dartmouth Conference, Alan Turing had designed the well-known Turing test, a way of identifying an ‘intelligent’ machine through a person’s interactions with it. This approach emphasized the simulation of the human mind and/or behaviour, implying an expectation of human-like AI. To develop machines that act like humans, researchers have studied how humans think, move, and feel. In the process, human beings are treated as signal-processing machines, similar to how a computer functions. These efforts have produced a narrow understanding of artificial intelligence: anthropomorphic AI that emulates, and can deceive, humans.

    A Human Rights Based Approach to AI
    Despite the significant worldwide interest in human-like AI, it is important that international communities consider alternative AI narratives that provide ways in which we can live with AI sustainably. The UN Secretary-General’s Roadmap for Digital Cooperation underlines that emerging technologies, including artificial intelligence, must be aligned with the human rights values enshrined in the UN Charter, the Universal Declaration of Human Rights, and the norms and standards of international law. These human rights frameworks emphasize fundamental values such as equality, security, privacy, respect, and autonomy, which several models of human-like AI could undermine.

    UNESCO has embarked on a long process of setting the first global standards on the ethics of artificial intelligence. In addition, as of December 2019, more than 35 UN agencies had developed guidance on AI ethics, applications, and policy that emphasizes human rights.

    Human rights based AI initiatives have been overshadowed by human-like AI for the past 50 years. If we do not make proactive efforts to conduct research and operationalize human rights based AI frameworks, human-like AI will inevitably become a part of our daily lives and obstruct our sustainable coexistence with emerging technologies. We need to take concrete actions to change or diversify the current AI narrative.


    About the author

    Dr JeongHyun Lee is a Young ICTD Fellow at the United Nations University Institute in Macau. Her research critically examines political and ethical issues around the digital storage and algorithmic processing of emerging media operations. She currently investigates a set of connected histories around face-recognition AI and its algorithmic rules across Asian countries, focusing on how its algorithmic operations reinforce or transform inequalities of gender, class, and race in the context of globalization.