Artificial intelligence systems aim to make our lives better, and this intelligence is evolving day by day toward human-like mental capabilities: collecting and analyzing information and accumulating experience from situations, as self-driving vehicles do.
A robust AI system can learn on its own, without human assistance, generating its own knowledge; as a result, a person cannot fully know what such a system is capable of or what decisions it will make. This raises the question of ethics: is it possible to program the ethics of artificial intelligence? Can this intelligence respect human values? That is what we will discuss in this article.
The dangers of self-learning
Problems of learning algorithms
Data is the fuel of artificial intelligence, but if that data contains flaws, or if the learning algorithms are not properly tuned to evaluate it, things can take a completely wrong turn. Microsoft's chatbot Tay was designed to interact with Twitter users, but things did not go well.
A group of users deliberately exploited the bot's learning algorithm and corrupted it, feeding it many racist ideas and ideas degrading to the convictions and beliefs of others. How quickly it adopted such ideas was a hard lesson.
Black box problem
In the case of narrow artificial intelligence, engineers can set specific rules at the design stage and easily predict the system's behavior. With self-learning systems, however, results cannot be accurately predicted: they vary with the data, so the behavior is not always predictable.
In such a situation the smart machine behaves as a black box whose internal workings are unknown; that is, one cannot know why it made a particular decision. It is therefore necessary to have algorithms that play the role of artificial intelligence ethics, explaining the reasons behind decisions and distinguishing right from wrong.
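One way such explanation can work is illustrated by the following sketch: for a simple linear scoring model, each feature's contribution to the final score can be reported alongside the decision, so the system can say why it decided as it did. The domain, feature names, and weights here are purely hypothetical illustrations, not any real system.

```python
# Hypothetical weights of a simple linear decision model (e.g. loan approval).
WEIGHTS = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}

def decide_and_explain(applicant, threshold=1.0):
    # Each feature's contribution to the score is computed separately,
    # so the decision comes with its own explanation.
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    decision = "approve" if score >= threshold else "reject"
    # Sort features by the size of their contribution: the explanation.
    reasons = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return decision, reasons

decision, reasons = decide_and_explain(
    {"income": 3.0, "debt": 1.0, "years_employed": 2.0})
print(decision)       # the decision itself
print(reasons[0][0])  # the feature that influenced it most
```

Real self-learning models are far less transparent than a linear score, which is exactly why the black-box problem is hard; the sketch only shows what an explanation could look like.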
The problem of bias
The problem of bias toward a particular race, gender, or age is one of the most difficult ethical challenges facing AI. It arises mainly from the data itself, not the learning algorithms: the quality of a model is determined by the quantity and quality of the data presented to it, so the model reproduces whatever is in the historical data.
Take, for example, security systems used to predict perpetrators that end up discriminating on the basis of an individual's race or gender rather than their actions and movements, or face-recognition systems trained on samples lacking diversity that cannot identify people outside the races represented in the training data. Many other such examples lead to even bigger problems.
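The mechanism by which skewed data becomes a biased model can be shown with a toy sketch. The data below is entirely hypothetical: a naive classifier that simply predicts the most frequent historical outcome for each group will faithfully reproduce whatever imbalance the records contain.

```python
from collections import Counter, defaultdict

# Hypothetical historical records: (group, outcome). Group "A" was flagged
# far more often in the past, regardless of actual behavior.
history = ([("A", "flag")] * 80 + [("A", "clear")] * 20 +
           [("B", "flag")] * 20 + [("B", "clear")] * 80)

def train_majority_model(records):
    by_group = defaultdict(Counter)
    for group, outcome in records:
        by_group[group][outcome] += 1
    # The "model" is just the majority historical outcome per group.
    return {g: c.most_common(1)[0][0] for g, c in by_group.items()}

model = train_majority_model(history)
# Otherwise-identical individuals get different predictions based only
# on group membership: the bias in the data has become the model.
print(model["A"], model["B"])  # → flag clear
```

Real learning algorithms are more sophisticated than a per-group majority vote, but the underlying effect is the same: the model optimizes agreement with historical data, biases included.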
Ethics of artificial intelligence
Incorporating ethics into artificial intelligence is a very difficult process because there is no generally accepted ethical framework. Some people prefer the ethical principles contained in religious texts such as the Bible or the Qur'an, while others rely on other sources for judging what counts as behaving rightly.
Also, to build ethically behaving AI systems, ideas about values and about right and wrong must be precise enough to be encoded in computer algorithms, but current ethical principles lack this precision. What is right for some people may be wrong in the eyes of another group.
Mohamad Maaz, in a study on the ethics of artificial intelligence conducted by the Internet Society, suggests that the best solution is to maximize a utility function, which assigns a value to outcomes or decisions. The decision chosen is the one that maximizes utility and/or minimizes harm as far as possible, and in this way the AI algorithm makes the right decisions.
For example, suppose a self-driving car is in an unavoidable situation: it must either hit two men crossing the road or hit a wall, killing the passenger inside. Under utilitarian ethics, the right decision is the one that minimizes harm by killing as few people as possible. The car should therefore crash into the wall, sacrificing its occupant to save the two pedestrians, since the utility function assigns that decision the highest value.
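The utilitarian rule in this example can be sketched in a few lines: each action gets a utility score (here, simply the negative of the number of lives lost), and the algorithm picks the action with the highest utility. The action names and costs are hypothetical illustrations of the scenario above, not a real autonomous-driving system.

```python
def utility(lives_lost):
    # Higher utility means less harm.
    return -lives_lost

# Possible actions in the unavoidable situation, with lives lost by each.
actions = {
    "hit_pedestrians": 2,  # the two men crossing the road die
    "hit_wall": 1,         # the passenger inside the car dies
}

# Utilitarian choice: the action with the highest utility (least harm).
best_action = max(actions, key=lambda a: utility(actions[a]))
print(best_action)  # → hit_wall
```

In practice, of course, the hard part is not the maximization step but agreeing on the utility function itself, which is exactly the disagreement described above.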
Artificial intelligence can be more ethical than humans
To explain this, Maaz also discussed the case of military drones, saying:
The major ethical dilemmas stem from the fact that a democracy, given the emphasis on human values in its very concept, cannot use these drones without precise rules. The minor dilemmas concern the operations these aircraft carry out: for example, do the cameras provide a clear view of the target? If not, what should the operator do?
Military doctrine asserts that moral choices are ultimately a matter of values governed by military rules, so it may seem illogical to regard an artificial agent, rather than a human one, as capable of making an ethical decision, or to give it the right to kill, because ethics is precisely a product of human intelligence. Yet artificial intelligence could be a condition for achieving better morality, because a drone suffers far less pressure than a human pilot.