Survival of the Traitorous

Fat Mouse, Pawel Kuczynski, 2013

This painting perfectly communicates the idea Schiller was getting at in On the Aesthetic Education of Man. A cat attacking a mouse appears to be an ordinary occurrence. However, there is also a well-fed and well-dressed mouse sitting on top of the cat, dangling the other mouse in front of the cat's face. If we replace the animals with people, the real monster appears to be the mouse who sacrificed the other one, not the cat itself, which is expected to attack the prey dangled in front of it. Without any words, the artist communicates the idea that, in order to get ahead, people will sacrifice others who have a lot in common with them, because they care only about themselves.

This merges the sensual, how we visually see the cat and the two mice, and the intellectual, how we perceive all of them in relation to one another. The fact that this image makes us feel disgust toward the happy mouse sacrificing the other one molds our conscience: we become more conscious of our own actions and of not using others, to their detriment, for our own benefit. We realize that cooperation and kindness matter most, not being on top at the expense of others. The artist intends to make us reflect on our actions, and this in itself makes us better morally.

Will AI possess moral values?

For an author like Kant, who is a human by the way, it is easy to argue for reason and how it leads to the moral values we hold. His argument that moral laws are products of rational reasoning makes sense to all of us, as we have grown up being taught common values and certain behaviors we should abide by. However, it is not obvious to me that there is just one kind of reasoning in this universe. I think Kant is ignoring the fact that there can be wild differences between forms of reasoning.

An example of a wildly different form of reasoning can be realized in an artificial intelligence medium. While it is possible for us to create AI models that think like us and follow the same rules we do, this is merely imposing our cultural biases and prejudices onto these models and making them think like us. Extensive research has shown that, in reality, we don't understand an AI model's logic for decision-making. When visualized, AI models can arrive at conclusions similar to humans' in certain scenarios (like diagnosing tumors from CT scans of the brain). However, they do so with a methodology significantly different from that of human doctors, focusing on key features that differ from the ones humans rely on. So, if AI models' way of reasoning is different from ours, we should also expect them to arrive at different conclusions about what is right and what is wrong and to build different moral values.

The section on machine ethics is relevant on the following page:

https://en.m.wikipedia.org/wiki/Ethics_of_artificial_intelligence

INtent vs. IMpact: What's More IMportant?

Immanuel Kant has a widely debated notion of ethics and morality rooted in the idea that intent matters more than consequences in defining a morally good deed. Actions that stem from a rational and autonomous sense of duty hold more worth because they rest on our ability to do something because it is right, not because we will get something out of it or do it from an ulterior motive.

This philosophical/ethical concept has always been something I've questioned, because someone's actions may have good intentions but still end up hurting others or causing more harm than intended. If that happens, why is that action or person still considered morally good? Consequences may not be important to Kant, but from a broader perspective, there may be something more important than respect for a subjective idea of a universal moral law. He assumes that we as humans all have the ability to characterize what is good and what is evil.

For example, microaggressions might come from a place of "good intention" but end up being extremely offensive and harmful. When my Muslim father goes on airplanes, it might seem like someone's good intention to be suspicious and ask the flight attendant to double-check his bags. What it really is, is a racially motivated microaggression that perpetuates harmful stereotypes and humiliates people based solely on their race. In this situation, Kant may justify the person's actions because they seem to have had good will, but doing so negates the feelings of and harm done to the people involved. Even though the intent was there, the impact was still negative, and there should be an acknowledgment of the wrongdoing. Personally, I believe how you impact others should be a factor in determining the morality of someone's actions.

https://health.howstuffworks.com/mental-health/human-nature/behavior/microagressions.htm

On reason and morals

Kant makes the following remark on the purpose of reason: “For since reason is not sufficiently serviceable for guiding the will safely as regards its objects and the satisfaction of all our needs…its (reason’s) true function must be to produce a will which is good, not as a means to some further end”. According to this, Kant perceives reason as a means to produce a good will, the unconditional good that ensures morality. His reasoning can be summarized as follows: (1) everything in nature works in a purposive manner; (2) it is not a purpose of reason to create a will that satisfies all our needs; (3) reason has influence on our will. From premise 1 and the fact that reason exists, one derives that reason has a purpose. Given this and premises 2 and 3, one may conclude that reason purposefully influences our will, but not in order to satisfy our needs. Kant claims that this purpose of reason is to produce a good will.

However, premise 1 could be problematic, and if one follows Hume’s view of knowledge, that one ought to proportion one’s trust in claims to the strength of the evidence, then premise 1 is clearly flawed. No matter how many things we have studied, we can only find purposes in a finite number of things, never enough evidence to justify a claim about all of nature. If premise 1 is limited to only some things in nature, possibly excluding reason, then the argument is not valid. Alternatively, we may simply define a will guided solely by reason, free from inclination, as a good will, although this makes the idea of a good will irrelevant to experience or common sense, for reason is considered a priori by Kant.

Under Kant’s definition of a good will, Abraham’s attempt on Isaac’s life is certainly not an act of good will, and hence an immoral act. Abraham’s act is unreasonable: he cannot provide a reason to others for why he would try to kill his son, and in the end all he shows through the event is obedience without using his own capacity to reason, effectively preventing any possibility of a good will. Were there any universal reasoning Abraham could use to justify his act, he should have been able to communicate it to others, for reason is the same for all humans. Had Abraham used reason to judge his action, he would have seen that one should not kill another person, for this cannot be a universal law: if everyone were to kill another person, there would be no human left to kill or be killed.

https://www.biblestudytools.com/bible-stories/abraham-and-isaac-bible-story.html