For an author like Kant, who is, after all, a human, it is easy to argue for reason and how it leads to the moral values we hold. His argument that moral laws are products of rational reasoning makes sense to all of us, since we have grown up being taught a common set of values and behaviors that we should abide by. However, it is not obvious to me that there is just one kind of reasoning in this universe. I think Kant ignores the fact that there can be wild differences between forms of reasoning.
An example of a wildly different form of reasoning can be found in artificial intelligence. While it is possible for us to build AI models that think like us and follow the same rules we do, this merely imposes our cultural biases and prejudices onto those models and makes them think like us. Extensive research has shown that, in reality, we do not fully understand an AI model's decision-making logic. Visualization studies have shown that AI models can arrive at the same conclusions as humans in certain scenarios (such as diagnosing tumors from CT scans of the brain), yet they do so through a significantly different methodology, focusing on features that differ from those human doctors rely on. So, if AI models' way of reasoning is different from ours, we should also expect them to arrive at different conclusions about what is right and what is wrong, and to build different moral values.
The section on Machine Ethics on the following page is relevant:
https://en.m.wikipedia.org/wiki/Ethics_of_artificial_intelligence