Why Artificial Intelligence will only be helpful if used ethically

April 11, 2024

In recent days, several reports related to artificial intelligence (AI) have caused more than one shock. The issue is no longer just content that spreads disinformation on social networks but real attacks on people's dignity, as in the case of minors in the Spanish town of Almendralejo who circulated AI-generated fake nudes of girls and young women on social networks. The debate over the limits that should be placed on AI is back on the table.

A few years ago, I attended a conference on AI and communication. I was struck by the attending journalists, who asked for an explicit label on every news story produced by AI.

The journalists’ request was not simply a defense of their jobs. AI, which can free them from tedious and repetitive tasks, has also become a threat because of its ability to generate fake content: faces that have never existed, speeches that have never been delivered, and bodies that have never been shown.

The emergence of new technologies always entails new risks, because new capabilities can be misused. With AI's omnipresence in our social fabric, those risks grow even greater, especially given the possibility of unprecedented destructive synergies when AI meets an ill-intentioned human purpose.

We are not only talking about the “alignment problem”: because of the complexity of AI algorithms and optimization strategies, their outputs may contain biases and pursue goals their creators never intended.

That problem has led to the formation of interdisciplinary groups debating how to make AI safer. The problem I am referring to is conceptually simpler but potentially more dangerous: what do we do when a very powerful tool is at the fingertips of any human being, who can use it to harm themselves or others?

Necessary legal frameworks

As a society, we seek ways to protect ourselves from the misuse of technology. We protect our personal data, fight piracy with copyright law, and put filters on the Internet to prevent access to harmful content.

The unstoppable development of AI demands new legal frameworks that many institutions have been working on for some time. There is a growing sensitivity in this regard in practically all sectors of society, and steps are being taken in the right direction.

However, establishing updated legal frameworks to address the potential risks of AI, while necessary and essential, should not make us forget what is at stake. No matter how well-intentioned it may be, the law alone cannot prevent every misuse of AI.

Law always comes after life, and here we are confronted with the misuse of AI within human life itself: a life permeated by AI, in which new possibilities are continually arising.

The empowering effect of AI on human activity, the positive impact of which very few doubt, makes the ethical dimension of our actions even more important. When we talk about ethics in AI, we are not simply considering how to implement some ethical rules in machines. In its most profound sense, the ethical dimension of AI refers to how we recognize and treat ourselves as people when using this powerful tool.

Ethics always concerns life and personal conduct, and this field is no exception. Paradoxically, AI challenges us to understand ourselves better as people. Its impressive potential makes us realize that every human being can use it for good or evil. As the philosopher Charles Taylor explains, it is impossible to be “selves” without a reference to good and evil. AI has no such reference, but we do.

The need for ethical education

At the beginning of the century, Benedict XVI prophetically warned of the imbalance between technological growth and our society’s ethical maturity. The challenge ahead of us, one that AI confronts us with and from which there is no escape, is ethical education. I am not referring only to teaching ethics to our children but to the ethical education of each one of us, which cannot be delegated in any way.

AI opens up a range of possibilities for action whose limits we cannot yet suspect. Implicit in each of them is the question of what it means to be a person and to do what is good here and now. Conversation among scientists, philosophers, and lawyers is necessary for the safe use of AI, but even more necessary is personal education, which cannot be imposed, only inspired. Educating is a perennial task: it means bringing out the best in each person. Can we rely on AI for this?


Javier Sánchez Cañizares is a researcher at the Culture and Society Institute and the ‘Science, Reason and Faith’ Group at the University of Navarra.

This article was initially published in Spanish in The Conversation. Read the original.
