I beseech you, in the bowels of Christ, think it possible you may be mistaken. - Oliver Cromwell

A lot of current politics, if not all, is a dialogue between Kant and Nietzsche.

Kant attempted to put Christian ethics on a rational basis: do unto others as you would have them do unto you; act only in ways you could will to be universal law.

Kant’s categorical imperative is a useful meta-morality heuristic. For instance, it solves the prisoner’s dilemma: everyone should just cooperate!

But it is open to critiques. For one thing, how should we respond to defection from the other party? Well, Jesus says, by turning the other cheek. But is this always optimal in the presence of bad people? If you always cooperate in prisoner’s dilemma, because that’s what you think should be the universal law, then you expose yourself to maximum exploitation by defectors. Maybe you even maximally incentivize them. So maybe the universal law really should be tit-for-tat? Or some mixed strategy that leads to cooperation if everyone cooperates? 1
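To make the tit-for-tat point concrete, here is a minimal sketch of a repeated prisoner’s dilemma in Python. The payoff numbers and strategy names are my own illustrative assumptions, not anything canonical: unconditional cooperation gets maximally exploited by a defector, while tit-for-tat loses only the first round to a defector and still gets full mutual cooperation with a cooperator.

```python
# Toy iterated prisoner's dilemma. Payoffs are the standard textbook values,
# chosen here purely for illustration: mutual cooperation = 3 each,
# mutual defection = 1 each, sucker's payoff = 0, temptation = 5.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def always_cooperate(my_history, their_history):
    return "C"

def always_defect(my_history, their_history):
    return "D"

def tit_for_tat(my_history, their_history):
    # Cooperate on the first move, then copy the opponent's last move.
    return their_history[-1] if their_history else "C"

def play(strategy_a, strategy_b, rounds=200):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

if __name__ == "__main__":
    # Unconditional cooperation is maximally exploited by a defector...
    print(play(always_cooperate, always_defect))  # (0, 1000)
    # ...tit-for-tat loses only the first round to a defector...
    print(play(tit_for_tat, always_defect))       # (199, 204)
    # ...and still gets full mutual cooperation against a cooperator.
    print(play(tit_for_tat, always_cooperate))    # (600, 600)
```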

Another critique of Kant’s categorical imperative is that it doesn’t get you to specific guidance for personal action, or policy / coordination problems, without a complete, valid world model. Should people wear masks and be vaccinated? This turns out to depend a lot on what you think the facts on the ground are. If some people think COVID is just a cold, and vaccines are a Bill Gates mind-control plot, it’s going to be hard to get everyone to agree to cooperate. You see others defect and ask why you should inconvenience yourself with a mask that mostly helps other people, and it all goes tits-up.

Then, universal law also depends on what you value. If you’re a fundamentalist Christian and the purpose of life is to embody God’s will, then a universal law might involve a dash of authoritarianism to make people follow it. If you’re a musician, a hippie, a swinger, what you value, and hence universal law, might be different.

And even if people agree on facts, and values, finding the universal law that maximizes the objective is… computationally intractable. For every action I might consider, I have to compute a global equilibrium to determine if everyone is best off in the long run under that sort of action? Does universal law favor big government or small government? Maybe there are multiple equilibria? Maybe there can be pretty good libertarian, small-government societies, like the old Hong Kong, and also pretty good welfare-state, big-government societies, like Sweden and France? Maybe it depends on historical and cultural endowments, education and socialization, which are maybe endogenous in the long run?

When you do multi-agent reinforcement learning to make agents play soccer or hide-and-seek, sometimes they evolve really weird but effective strategies. In the infinitely complex game of life, how can one even say a particular strategy is optimal? And maybe the perfect strategy for one state of affairs is brittle and doesn’t adapt well if technology or climate changes, and you need to back off from a local maximum to get to a long-term maximum?

Nietzsche’s main challenge to Kant is, man wants to achieve and create, not be poor and turn the other cheek. Man is crooked timber, but Kant and Jesus are trying to cut against the grain. Maybe building rockets is what I like to do for self-realization. Maybe that’s because extending the light of humanity, avoiding human extinction, is something that I think should be universal law. Or maybe I just like rockets shaped like penises. If I have to pay a lot of taxes, or give to charity, or occupy my mind with pronouns, and this prevents humans going to Mars, and humanity goes extinct, then that’s bad.

I’m no Nietzsche scholar; I had to rely on the moral equivalent of Cliff’s Notes to get through college. I view Nietzsche through the lens of Ayn Rand and John Galt as the apotheosis of this sort of thinking. Their critique of Kant is really: what about the value of human freedom and self-realization? If I have to view every action through the lens of what universal law will provide the greatest good for the greatest number of people, how can we progress? Aren’t great artists and engineers shackling themselves? What if I just want to go to Mars and those are my values? Why do I even have to justify it as long-termism or effective altruism for the greater good? In reinforcement learning terms, there is an exploration/exploitation tradeoff: if you want to improve humanity as quickly as possible, not every action has to conform to universal law as you currently know it; you must also sometimes boldly go where no one has gone before.
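To make the exploration/exploitation point concrete, here is a toy multi-armed bandit sketch; the arm payouts and the exploration rate are arbitrary numbers I chose for illustration. An agent that only exploits its current best estimate can get stuck on the first arm it tried, while a little exploration finds the genuinely better one.

```python
import random

# Toy epsilon-greedy bandit. The arm payout probabilities and epsilon are
# illustrative assumptions, not anything from the text above.
TRUE_MEANS = [0.3, 0.5, 0.8]  # arm 2 is actually the best

def run_bandit(epsilon, steps=5000, seed=0):
    rng = random.Random(seed)
    counts = [0, 0, 0]
    estimates = [0.0, 0.0, 0.0]
    total = 0.0
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(3)                 # explore: random arm
        else:
            arm = estimates.index(max(estimates))  # exploit: best known arm
        reward = 1.0 if rng.random() < TRUE_MEANS[arm] else 0.0
        counts[arm] += 1
        # Incremental average update of the chosen arm's estimated value.
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total += reward
    return total / steps

if __name__ == "__main__":
    print("pure exploitation:", run_bandit(epsilon=0.0))  # stuck near 0.3
    print("10% exploration: ", run_bandit(epsilon=0.1))   # mostly plays the 0.8 arm
```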

Nietzsche and Ayn Rand risk a slippery slope to no moral compass at all. Individuals can do whatever they want and assume spontaneous order will emerge, or whatever can-opener or spherical cow makes it globally optimal. Or not: maybe I’m just a superman, and what’s good for me is what matters and achieves progress.

There are a lot of problems with ‘long-termism’ or ‘effective altruism’. Prediction is hard, especially about the future, and more or less impossible about the ‘long term’ future. Who knows what will be most effective or what might happen in the long term? You can come up with any model you want to describe what will happen in the long term, or what will be ‘effective’. You can possibly justify almost anything. Maybe commingling client funds is good, because if you get away with it you can commit more resources to climate change, or mosquito nets. Or maybe, unless ‘wokeism’ is defeated, progress is shackled, so defeating ‘wokeism’ is so important that nothing else matters.

By Elon logic, telling people they can’t use certain language, or what pronouns to use, is ‘woke’ and therefore must be defeated at any cost. Free speech for me is good. I cannot be shackled in extending the light of humanity. So I can freely call anyone a groomer or pedo guy. Free speech for you is another matter, if you want to be woke, or to talk about where my private jet is, or apparently report on anything I don’t like.

SBF was apparently not a big believer in the Kelly Criterion, which says how much of your bankroll to risk to maximize long-term growth, assuming you have a given edge. It’s as close to Kant’s universal law as there is in finance.

You know what? Given his assumptions, his math checks out. If you think you have an infinite number of bets, or just a lot of bets, the Kelly Criterion is optimal, and you don’t bet all your marbles on a single throw of the dice. On the other hand, Fred Smith supposedly couldn’t meet the FedEx payroll in the early days, went to Vegas, bet it all, and won, and FedEx lived to fight another day. If the alternative is death, if you have seconds left on the clock, the Hail Mary is the rational play, the only play you have left.

If you have a finite number of bets and you need to save the world from climate catastrophe, then maybe the highest EV move, the most effective and moral, is to cut corners, commingle client funds, and go all-in.
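For concreteness, here is a rough sketch of the Kelly math in Python; the edge, odds, and number of bets are illustrative assumptions. The classic Kelly fraction for a bet that pays b-to-1 with win probability p is f* = p - (1 - p)/b. With many bets remaining, betting the Kelly fraction compounds while going all-in goes broke almost surely; it’s only when you’re down to a single do-or-die bet, with a target you can’t reach any other way, that all-in starts to look like the rational play.

```python
import random

# Toy comparison of Kelly sizing vs going all-in. The 60% edge at even
# odds and the 200-bet horizon are arbitrary illustrative numbers.
def kelly_fraction(p, b):
    # Classic Kelly: f* = p - (1 - p) / b for a bet paying b-to-1
    # with win probability p.
    return p - (1 - p) / b

def simulate(fraction, p=0.6, b=1.0, bets=200, seed=0):
    rng = random.Random(seed)
    bankroll = 1.0
    for _ in range(bets):
        stake = fraction * bankroll
        if rng.random() < p:
            bankroll += stake * b
        else:
            bankroll -= stake
        if bankroll <= 0:
            return 0.0  # busted: no coming back from zero
    return bankroll

if __name__ == "__main__":
    p, b = 0.6, 1.0                       # 60% chance to win at even odds
    f = kelly_fraction(p, b)              # 0.2: risk 20% of bankroll per bet
    print("Kelly fraction:", f)
    print("Kelly after 200 bets: ", simulate(f))    # compounds steadily
    print("All-in after 200 bets:", simulate(1.0))  # one loss and it's over
```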

I don’t have much of a synthesis, but one is this: humans are individuals, and also part of larger groups, and humanity at large. If you’re like most people, you think about yourself, and your family, and other groups you are part of, and humanity at large. There is a continuum: some people care mostly about their own experience and are hedonists, or psychopaths. And some care greatly about a collective good, about being part of a narrative larger than themselves. Maybe they become Marines, or communists – horseshoe theory is real. But humans exist as individuals and as part of larger social structures, contra Margaret Thatcher. ‘Wokeness’ seems essentially just a general awareness of this fundamental human condition, and of Kant’s categorical imperative in particular.

Also, ethics is NP-hard.

If you are saying everything is at stake, there is no long term unless you win in the short term… or the ends are so important, they justify any means… or you have to storm the cockpit and defeat the enemy because otherwise nothing matters… or any risk is justifiable, then you have fallen victim to extremism.

The political problem is how to create governance systems and social structures that durably support human values like freedom and fairness and progress.

If you think the only value that matters is fairness, you might be a socialist. If you think all that matters is progress and national greatness or exalting some great leader or cause, you might be a fascist. If you think all that matters is freedom, you might be a libertarian. They are all based on legitimate human values that can be taken too far. Most of the time, you actually do need to trade a little of one for another.

We are going to make it, but only if we avoid the sort of extremism that makes people bet it all on red, or blue, or black. These are weird times, but everyone needs to step back and take a deep breath.

  1. Why stop there? Maybe all of Jesus’s teachings should be interpreted asymptotically. Like, don’t literally turn the other cheek in repeated prisoner’s dilemma, but follow a mixed strategy that leads to maximal long-term cooperation. Similarly, don’t literally give everything to the poor, halting all the moonshots and spending on science and art, but follow policies that maximally improve the lives of the least advantaged over the long run, including incentives for the most productive to work hard, contribute to society, create excellent schools and other public goods, and do the science and art that enriches everyone’s lives?