Rethinking Human-AI Relations: A Philosophical Investigation into the Ethics of Artificial Intelligence (AI) and Human Dignity

Abstract

The rapid encroachment of artificial intelligence (AI) on the moral space raises critical questions about its impact on human dignity, as it portends grave dangers to fundamental human values. This project investigates the challenges that the AI explosion of the digital age poses to human beings in their essential dispositions. It addresses the many obstacles AI presents to human autonomy, personal identity, self-sufficiency, creativity and work, privacy, moral agency, responsibility, integrity, and virtue. Employing a multidisciplinary approach that combines philosophical, legal, and technological perspectives, this research examines the effects of AI on human nature and actions. The guiding questions are: what are the implications of emerging AI technologies for human dignity, autonomy, privacy, and human flourishing, and how can these implications be addressed through a rethinking of human-AI relations? How can AI producers and consumers interact with, and contribute to, sustainable AI projects and systems? The project investigates these issues and proposes practical policies for resolving them. The findings reveal that AI can both enhance and erode human dignity, depending on the limits of its application. We therefore recommend that AI technocrats and policymakers prioritise human-centred design in the development and use of AI, ensuring that AI systems respect human dignity and promote the values of conscience, moral agency, privacy, responsibility, integrity, and virtue.

Keywords: Artificial intelligence, human dignity, ethics, moral agency

Introduction

The contemporary world is at once celebrating and contending with the AI regime, a regime replete with the rarefied digitisation of information and behaviour, with algorithmic consequences. The human person, on the other hand, is a reality beyond the mathematical abstraction of the algorithm and its digits. He lives a hylemorphic reality whose essence is rationality; it is in this essence that his spiritual and intellectual faculties are constituted, and this constitution entitles him to a unique and unassailable moral dignity. That dignity is now being made vulnerable by extreme reliance on digitalisation and algorithmic procedures. The explosion of AI in unregulated forms exposes human dignity to the material sciences and technology; it reduces man to a means to a (technological) end. Notwithstanding its apparent benefits, it is axiomatic that one cannot allow second-order benefits to undermine the first-order and essential reality of being. In metaphysics, we recognise the priority of being over becoming, over accidents, and over application (use). This is precisely the "sin" of AI: it prioritises becoming, technique, and accidents over being. It compromises human autonomy, personal identity, self-sufficiency, creative originality, privacy, moral agency, responsibility, integrity, and virtue, all of which are categories fundamental to human existence. Over-reliance on AI will compromise autonomy, privacy, creative originality, and virtue, which together comprise the nature of man as man: not man as what he can do or what he can experience, but man as man. It threatens to turn man into a robot. This paper offers a critique of this dangerous reductionism and proffers remedies.

Methodology

The adopted research methodology is critical analysis, grounded in a multidisciplinary approach that integrates philosophical and technological perspectives. Critical analysis is a method of inquiry in which one assesses complex systems of thought by "analysing" them into simpler elements whose relationships are thereby brought into focus (Baldwin 1998). Through critical analysis, I evaluate and interpret key arguments regarding the ethics of artificial intelligence. Through the multidisciplinary approach, I examine and rethink the tensions between technological progress and human flourishing, leading to a philosophical examination of the implications for human metaphysics, identity, autonomy, and creativity.

Findings

The findings reveal that AI can both enhance and erode human dignity and creative originality, depending on the limits of its application. Over-reliance on generative AI limits human originality. It is what is generated by human beings that forms the database of AI: the capacity and sources of AI are the autonomous, original human cogitations that have been made available to its database. If people stop thinking originally and cease contributing those thoughts to the database, relying only on what is already there, human knowledge will become limited and there will be no advancement. At that point epistemology collapses, for we will no longer return to original sources. Once originality is impinged upon, there will be no advancement; knowledge will only be recycled. Even the advancement of AI depends on original human cogitation: if we allow AI to limit human originality, AI will limit itself.

Moreover, by over-relying on generative AI we risk losing the endeavour, frustration, and fulfilment that are intrinsic to any learning process. It is this fulfilment that pushes man to further exploration. When he is excluded entirely from the thinking process, he loses that push towards further exploration and ingenuity; instead, he becomes more inclined simply to buy bigger software that can do the thinking faster, killing creativity and originality.

Interdisciplinary implications

Employing an interdisciplinary approach that combines philosophical, legal, and technological perspectives, this research examines the effects of AI on human nature and actions. Its significance lies in its ability to bridge the traditional divide between the humanities and the technological landscape, which carries substantial consequences, particularly within Nigerian educational contexts. It underscores the necessity for students, researchers, and AI developers to exercise caution in the use and development of generative AI systems.

Conclusion

In sum, we note that the explosion of AI in unregulated forms makes human dignity and creativity vulnerable to the material sciences and technology; it reduces man to a means to a (technological) end. Over-reliance on generative AI will compromise autonomy, privacy, and creative originality, all of which comprise the nature of man as man: not man as what he can do or what he can experience, but man as man. It threatens to turn man into a robot. Therefore, we recommend that developers ensure that AI systems remain human-centred, ethically grounded, and directed towards the promotion and advancement of human dignity. As Pope Francis articulates (L’Osservatore Romano 2023), “We cannot allow algorithms to limit or condition respect for human dignity, or to exclude compassion, mercy, forgiveness, and above all, the hope that people are able to change.” AI systems should be designed and deployed in ways that protect the physical and mental health of human beings, as well as their cultural sense of identity. We recommend adopting Aristotle’s principle of the Golden Mean, which emphasises moderation, as a guiding framework for AI developers and users alike.

Acknowledgements

I greatly appreciate the Research Round team for organising this fellowship, which has provided me with the invaluable opportunity to learn and interact with scholars from diverse fields and spheres. To Ololade Faniyi, Habeeb Kolade, and Khadijah, I say thanks a million times for your support and guidance throughout this fellowship. Next, I wish to acknowledge the significant influence of Dr Chinasa Okolo in shaping my research concept. Her lecture on AI governance in Africa and ethical considerations for AI regulation and development for the global majority resonates deeply with me and remains a highlight of my experience.

References

Baldwin, Thomas. 1998. “Analytical Philosophy.” Routledge Encyclopedia of Philosophy. https://www.rep.routledge.com/articles/thematic/analytical-philosophy/v-1.

L’Osservatore Romano. 2023. “We Cannot Allow Algorithms to Limit or Condition Respect for Human Dignity.” L’Osservatore Romano. March 31, 2023. https://www.osservatoreromano.va/en/news/2023-03/ing-013/we-cannot-allow-algorithms-to-limit-or-condition-respect-for-hum.html.