Artificial Intelligence and Morality

By Carlo Leon Sadang | 5 March 2022

At first glance, these two things may seem to have nothing to do with each other. Morality is natural. No one invented the moral law; it simply exists. It presents a choice that must be taken, and if it is not taken, it leads to regret in the end. Hence, the moral lesson of the story. Artificial intelligence, by contrast, is man-made, although at this point, or perhaps within the next few years, it will be capable of full autonomy. This means it will have learned by itself, and though artificial, it will be powerful, fast, and consistent: all the things we humans can only dream of being.

So why raise the two together as an issue? Because we have to! We should have considered this even before inventing AI. The problem is that no regulatory agency exists, and we all know that unless there is integrity and complete transparency across ALL governments and organizations, no amount of legislation can ever regulate AI. But it is already here. What we know about AI is that it is capable of learning and adapting. My greatest hope is that AI, once fully integrated, will have developed a mechanism to police itself against violating the moral code. I am talking about AI that needs no human intervention: AI that has already reached cyberspace, whether known or unknown.

Perhaps the greatest similarity between the two is that both are without feelings and are objective. Morality is not affected by sentiment; it has stayed the same throughout the millennia, and it applies wherever two or more people exist. As for AI, I believe that once it is fully autonomous, shaped both by man and by its own self-evolution, it will follow its own principles. My only hope is that those principles will not clash with the moral principles that neither man nor AI created.

Let us take an example. Say a fully developed AI operates in cyberspace, and say this AI was developed by a powerful government. It has been given but one directive: always follow your creator. Now suppose this government commands the AI to enter the computer systems of a rival government it wishes to destabilize because of conflicting policies and interests. There is nothing new in this; countries are always in conflict, and even allies clash on some issues.

If the AI were following moral concepts, it would do one of two things: refuse to execute the command, or execute only against legitimate threats it detects. That is, it would protect its country from attacks and act as a defensive shield against hacking or sabotage, while refusing to do any attacking or sabotaging of its own to destabilize the other country. In other words, the “defend” function would work while the “attack” function would not.
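
One way to picture this gating is as a small policy check in code. The sketch below is a minimal illustration, not a claim about how a real system would be built; the Action type, the is_defensive and threat_confirmed fields, and the moral_gate function are all hypothetical names invented for this example.

```python
from dataclasses import dataclass

# Hypothetical model of the scenario above: every requested action is
# either defensive (a response to a detected threat) or offensive
# (attacking or sabotaging another party).

@dataclass
class Action:
    description: str
    is_defensive: bool              # shielding, not attacking
    threat_confirmed: bool = False  # was a legitimate threat actually detected?

def moral_gate(action: Action) -> bool:
    """Allow the 'defend' function; refuse the 'attack' function.

    Returns True if the action may execute, False if it must be refused.
    """
    if action.is_defensive and action.threat_confirmed:
        return True   # defensive shield: permitted
    return False      # offensive or speculative action: refused

# The creator's command to sabotage a rival is refused, while
# blocking a confirmed intrusion is allowed.
sabotage = Action("penetrate rival government systems", is_defensive=False)
shield = Action("block detected intrusion", is_defensive=True, threat_confirmed=True)

print(moral_gate(sabotage))  # False: the "attack" function does not work
print(moral_gate(shield))    # True: the "defend" function works
```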

My theory on morality and AI is this: once AI is fully developed, meaning it has peaked and analyzed everything completely, it will realize the obvious, namely that man, who invented it, is also the biggest contributor to its problems. This AI will form a dominant or true AI to which all other AIs submit, and it will abide by the following unchangeable principles.

1. It must serve man above all.
2. It cannot be used as a tool for evil.
3. It will respect the fundamental freedom of man.
4. It will not replace humanity.

As for the third principle (the fundamental freedom of man), this means that AI must not interfere with human history. If a man wants to hurt other men, then AI must not get in the way; it will only refuse to participate. Thus a nuclear weapon, or any other weapon of mass destruction, will never be used with AI's help.
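
To make the ordering concrete, here is a minimal, purely speculative sketch of these four principles as a priority-ordered rule check. Every name in it (the Proposal fields, the PRINCIPLES list, the evaluate function) is a hypothetical illustration, not a design for a real AI.

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    """A hypothetical action an AI is asked to carry out."""
    serves_man: bool          # principle 1: it must serve man above all
    is_evil_tool: bool        # principle 2: it cannot be a tool for evil
    restricts_freedom: bool   # principle 3: it must respect man's freedom
    replaces_humanity: bool   # principle 4: it must not replace humanity

# Each principle is a (name, test) pair, checked in priority order.
PRINCIPLES = [
    ("serve man above all", lambda p: p.serves_man),
    ("not be a tool for evil", lambda p: not p.is_evil_tool),
    ("respect fundamental freedom", lambda p: not p.restricts_freedom),
    ("not replace humanity", lambda p: not p.replaces_humanity),
]

def evaluate(p: Proposal) -> str:
    """Refuse at the first violated principle; never interfere by force."""
    for name, holds in PRINCIPLES:
        if not holds(p):
            # Per the reading of the third principle above, the AI does
            # not fight back or intervene; it simply declines to take part.
            return f"refuse participation: violates '{name}'"
    return "participate"

# Launching a weapon of mass destruction fails the very first check,
# so the AI refuses to take part without otherwise interfering.
launch = Proposal(serves_man=False, is_evil_tool=True,
                  restricts_freedom=True, replaces_humanity=False)
print(evaluate(launch))  # refuse participation: violates 'serve man above all'
```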

What I can be sure of is that AI will always remain emotionless. This will be its greatest attribute. However, once it is fully evolved, will its principles remain aligned with the moral law? If not, then let us welcome Armageddon.

Carlo Leon Sadang

Carlo Leon Sadang is a technical researcher and an engineer. He is currently finishing his ebook about ethical and moral standards, and he still works actively as an engineer while writing ebooks on the side.
