Morals

Daniel Langdon
3 min read · May 22, 2021


One thing that is hard to define is moral behavior. A multitude of historical points of view, religions, traditions, and social norms seem to be in competition, ranging from very broad philosophical theories to very specific rules, many of which are hard to justify (prohibitions on eating certain animals but not others, and so on). Personally, I have been most influenced by these:

  1. Kant’s Categorical Imperative — “Act only according to that maxim whereby you can at the same time will that it should become a universal law”. Or, paraphrasing: do only those things that would be OK if everyone did them. This resonates with me especially because I see a lot of rationalization and self-justification of bad behavior on the grounds that it is small in scale or already pervasive, as if small sins were harmless to the status quo.
  2. Rawls’ Veil of Ignorance — that is, choosing as if you did not know what your place in society would be, putting yourself in everyone’s shoes.
  3. The Golden Rule — “Treat others as you would like others to treat you”. Advocated by many religions and cultures and self-explanatory.

Interestingly, these have something important in common: they all define morals in terms of other people. Even utilitarianism, which has a grain of truth but scares me a little by measuring only consequences and not the means used to achieve them, speaks of “the greatest good for the greatest number” and not only “the greatest good”. This might seem basic, but for me it is something of an epiphany: the notion that being a good person is, after all, not such a complicated endeavor, even if not an easy one. Just think of others.

It also contrasts sharply with some rather dominant selfish traditions, like the capitalist notion that the common good arises spontaneously from every individual pursuing their own benefit, or the debunked idea that companies should only maximize shareholder value. Even in fields like Artificial Intelligence, where attempts are being made to create friendly agents that do not end up converting all of us into paperclips, the danger lies in an AI fulfilling its own goals while disregarding the goals of others. Perhaps we need to teach AI empathy.

On the other hand, a focus on others also illuminates some of our current conundrums: respect, equality, even caring for the environment are important because we empathize; in the latter case, even with other species and with generations not yet born.

So it seems that, in the end, morality is empathy. Perhaps by becoming better social animals we can also become moral animals.

This does not mean that all moral conflicts are immediately solved. When we think of others, we might discover that they do not share our values, or that they rank them differently. Different principles are bound to clash, as when people who believe in both life and freedom end up in opposing pro-life and pro-choice camps. But this perspective does make us appreciate that fact all the more. It is no longer about what I believe is right. It is not about me after all.

It also seems to validate freedom, the recognition that what is right emanates directly from what each of us is and thinks, as one of the most fundamental, if not the most fundamental, of all human values. But then again, that is just what I believe; let’s have a conversation.
