Ethics in Algorithms

"Artificial intelligence is a mirror of human intelligence: it reflects our ability to create, while testing the limits of our ethics in controlling that creation."


The sentence above was generated by Copilot, Microsoft's AI assistant embedded in the Windows 11 operating system, in response to a prompt typed into its command input: "make a philosophical sentence about artificial intelligence!"

Today, people are racing to learn artificial intelligence, or AI: how to craft effective prompts so that AI produces optimal output, and how to use it to lighten their workload.

AI has even expanded beyond serving as a personal chauffeur to acting as a personal doctor, capable of assisting in high-precision surgery.

On one hand, AI is revered; on the other, its rapid development has raised concerns about the potential for data misuse and the loss of human control over crucial decisions.

Indeed, AI is no longer limited to helping people work more productively. It is also being applied to far more consequential areas: making important, strategic decisions within organizations, from screening the curricula vitae of prospective employees to risk assessment and business planning.

Yet behind the efficiency and objectivity that AI offers lies a rarely discussed phenomenon: moral responsibility in algorithm-based leadership. A decision-maker may escape accountability when a decision made by artificial intelligence produces harmful outcomes.


Shielded from the immediate consequences of such decisions, that person may become more inclined to take greater risks or to ignore ethical considerations in the future.

In a philosophical context, policy decisions made by AI may lack moral responsibility in social life, because AI is not a human mind weighing other considerations when deciding something. The human mind carries moral values, empathy, and feelings, and these are woven into the art of decision-making itself.

This thinking machine, by contrast, has only numbers with which to present a menu of solutions to the problem it is asked to unravel. So how does a person learn the reason a bank refused to lend him money, when his loan application was rejected by a machine applying an algorithmic formula?

This is where ethics must remain anchored in human relationships. Deontology, or duty ethics, holds that policy considerations must not override individual rights. Algorithmic reasoning becomes unethical to rely on when it intrudes into the realm of rights that must be respected.


Therefore, the final decision on a policy, particularly one that affects the community, should still involve human judgment at its last stage, above all in contexts with major social or ethical impact.

It is also important to remember that machine intelligence is not the enemy of morality, but a tool that must be used wisely. Responsible leadership is not just about results; it is also about the process and the values underlying it. Amid these technological advances, leaders are challenged to be not only digitally savvy but also ethically wise.

Someone on social media once wrote something like this: "AI should be able to do my dishes and iron my clothes, so that I can keep writing and making art. Instead, AI writes and makes art, while I wash the dishes and iron the clothes." Well...