There ain't no such thing as a free option
I would not give a fig for the simplicity this side of complexity, but I would give my life for the simplicity on the other side of complexity. - Oliver Wendell Holmes
Keep it simple, stupid - Anonymous
This post is motivated by this story of automakers up in arms about Trump deregulating emissions standards. Surely reducing regulation cannot harm automakers? What would that say about free markets?
Absent the regulatory rollback, manufacturers have to make low-polluting vehicles. If the Feds deregulate, they have a choice of making low-polluting or high-polluting vehicles. And in any optimization problem, removing or easing constraints can never make the optimum worse. So why would automakers object to a change that can only make them better off?
Here’s the rub: California will keep stringent standards. Some manufacturers will produce high-polluting vehicles and exit California, where the market will become less competitive and car prices may increase. The market will split into California-legal and non-California-legal. People on the border will register cars with their friends in Oregon. There will be multiple regulatory regimes, an unholy mess, and the likelihood that future administrations will keep tinkering with it.
It would be better for everyone, including the car manufacturers, if everyone could find a way to agree on the same standard.
Free-market fetishists are going to say, well, that’s an artifact of crazy regulation, a free option can only be a good thing, and if you give people freedom of choice, they will choose the right solution. WRONG! Time for some game theory…
Consider Braess’s Paradox. You have a road network and 4000 cars traveling from START to END over 2 different routes, say on either side of a body of water: START→A→END on one side and START→B→END on the other. On the A side, the START→A leg is congestion-sensitive and takes Traffic/100 minutes, while A→END takes a fixed 45 minutes; the B side is the mirror image, with START→B fixed at 45 minutes and B→END taking Traffic/100. Initially, the equilibrium is 2000 cars over each route and a travel time of 65 minutes (2000/100 = 20 minutes, plus 45 minutes). It’s a Nash equilibrium because no one has an incentive to switch: if for any reason more people take one route, it slows down and people have an incentive to switch back to the faster one.
Then, we add a bridge between A and B. Suppose it’s instantaneous to keep things simple. You can even suppose it goes both ways, but no one will choose the 45 + 45 = 90 minute option of crossing it in the other direction.
Initially, it’s faster to take the connection: with everyone else still split 2000/2000, the bridge route START→A→B→END takes about 40 minutes. But eventually, too many people take it. What happens when 3300 people take the new route, with the rest split over the old ones? The bridge route now takes about 73 minutes, slower than the original 65! And the people still on the old routes are even slower, at over 81 minutes, so they switch too and slow it down further! Amazingly, the new Nash equilibrium is 80 minutes: everyone takes the bridge.
- START→A→B→END (the bridge route): 4000/100 + 4000/100 = 80 minutes (4000 people)
- START→B→END: 45 + 4000/100 = 85 minutes (0 people)
- START→A→END: 4000/100 + 45 = 85 minutes (0 people)
- START→B→A→END (the bridge backwards): 45 + 45 = 90 minutes (0 people)
Adding this route option cost everyone 15 minutes. (If you keep adding cars, then above 4,500 the bridge route takes more than 90 minutes and all the new people switch to the 45 + 45 = 90 minute route. If you don’t follow or want to know more, read the Wikipedia article; also, Evolution of Trust is the most awesomest intro to game theory ever.)
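For the numerically inclined, here is a minimal sketch of the arithmetic above in Python. The leg costs are the ones stated (Traffic/100 minutes on the congested legs, a fixed 45 minutes on the others) and the bridge is assumed to cost zero minutes:

```python
# A minimal sketch of the Braess example above. Assumed leg costs:
# START->A and B->END take (cars on the leg)/100 minutes; A->END and
# START->B take a fixed 45 minutes; the added A->B bridge takes 0 minutes.

N = 4000  # total cars

def travel_times(n_top, n_bottom, n_bridge):
    """Minutes for each strategy, given how many cars use each one.

    n_top:    cars on START->A->END
    n_bottom: cars on START->B->END
    n_bridge: cars on START->A->B->END (only exists once the bridge is built)
    """
    load_start_a = n_top + n_bridge     # everyone on the congested START->A leg
    load_b_end   = n_bottom + n_bridge  # everyone on the congested B->END leg
    t_top    = load_start_a / 100 + 45
    t_bottom = 45 + load_b_end / 100
    t_bridge = load_start_a / 100 + load_b_end / 100  # the bridge itself is free
    return t_top, t_bottom, t_bridge

# Before the bridge: a 2000/2000 split gives 65 minutes on both routes.
print(travel_times(2000, 2000, 0)[:2])   # (65.0, 65.0)

# After the bridge: all 4000 on the bridge route is the new equilibrium.
# The bridge route takes 80 minutes and deviating to an old route takes 85,
# so nobody switches back: everyone is 15 minutes worse off than before.
print(travel_times(0, 0, 4000))          # (85.0, 85.0, 80.0)
```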
The thing is, changing the rules changes the game, which can change the whole equilibrium: the law of unintended consequences. Letting everyone act freely according to their own best interest does not necessarily lead to the best outcome. You also need to set the game up right, with a market design that is engineered to be fit for purpose. (And even then, there are no guarantees of an optimal outcome.) In this case, everyone should agree to blow up that bridge between A and B.
There ain’t no such thing as a totally free market. You have to come up with a market design that achieves the desired objectives. If you choose not to decide, and let the market be designed to protect the right of the stronger instead of other objectives, you still have made a choice. There is no spontaneous order except that which sensible people work very hard to engineer. (See also this on Silicon Valley pseudo-libertarianism.)
Apocryphally, Cortés burned his ships so he would not have the option to retreat. The history is more complicated, but eliminating your own options can be a winning strategy. If you are in a game of chicken with a maniac, both of you driving toward each other at 100 mph, the one who throws the steering wheel out the window first usually wins, by credibly eliminating the option to swerve.
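To make the steering-wheel point concrete, here is a small sketch of the chicken game with made-up payoffs (the numbers are mine, purely illustrative): throwing out the wheel deletes the "swerve" option for one player, and the only equilibrium left is the one where the committed driver wins.

```python
# A toy chicken game. Illustrative payoffs: mutual swerve 0, swerving alone -1,
# going straight while the other swerves +1, head-on crash -100 for both.

PAYOFFS = {  # (row action, column action) -> (row payoff, column payoff)
    ("swerve",   "swerve"):   (0, 0),
    ("swerve",   "straight"): (-1, 1),
    ("straight", "swerve"):   (1, -1),
    ("straight", "straight"): (-100, -100),
}

def pure_nash(row_actions, col_actions):
    """Pure-strategy Nash equilibria of the (possibly restricted) game."""
    eqs = []
    for r in row_actions:
        for c in col_actions:
            row_ok = all(PAYOFFS[(r, c)][0] >= PAYOFFS[(r2, c)][0] for r2 in row_actions)
            col_ok = all(PAYOFFS[(r, c)][1] >= PAYOFFS[(r, c2)][1] for c2 in col_actions)
            if row_ok and col_ok:
                eqs.append((r, c))
    return eqs

both = ["swerve", "straight"]
print(pure_nash(both, both))
# [('swerve', 'straight'), ('straight', 'swerve')]: a standoff with two equilibria.

# The row player throws the steering wheel out the window: "swerve" is gone.
print(pure_nash(["straight"], both))
# [('straight', 'swerve')]: the only equilibrium left is the one where the
# committed player goes straight and the other swerves.
```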
In college I took a course from Seymour Melman (Columbia’s Noam Chomsky) where I had to read the nuclear doomsday theorists, Herman Kahn, George Kennan and whatnot. I was a little shook up when I read about Kissinger saying “I would say in retrospect that I wish I had thought through the implications of a MIRVed world more thoughtfully in 1969 and 1970 than I did.” and “In retrospect, I think if one could have avoided the development of MIRVs, which means also the testing of MIRVs by the Soviets, we would both be better off.” You had one job, Henry. You’re not supposed to have a conscience but you’re supposed to understand strategy FFS.
The thing about the MIRV (Multiple Independently-targetable Reentry Vehicle) is … suppose each side has 1000 missiles and their warheads have 50% effectiveness. One side launches all its missiles, catches the other side napping… and only destroys 50% of the other side’s missiles, while using up all of its own.

Now suppose you put 10 warheads on each missile. The side that launches first can destroy essentially all of the opponent’s missiles while expending only half of its own.
The Nash equilibrium shifts from ‘nobody launches missiles’ to ‘launch your missiles first, or immediately and without fail at any detection of a launch from the other side.’
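Here is the back-of-the-envelope version of that arithmetic, assuming each warhead independently destroys its target silo with 50% probability and warheads are spread evenly over the enemy’s 1000 silos (a toy model, not real targeting doctrine):

```python
# Back-of-the-envelope MIRV arithmetic. Assumptions: 1000 enemy silos, each
# warhead independently kills its target with probability 0.5, warheads are
# spread evenly across the silos.

ENEMY_SILOS = 1000
P_KILL = 0.5  # per-warhead kill probability

def expected_survivors(missiles_launched, warheads_per_missile):
    warheads_per_silo = missiles_launched * warheads_per_missile / ENEMY_SILOS
    p_survive = (1 - P_KILL) ** warheads_per_silo  # silo survives every warhead aimed at it
    return ENEMY_SILOS * p_survive

# One warhead per missile: fire all 1000 and half the enemy force survives.
print(expected_survivors(1000, 1))   # 500.0 silos left, nothing of yours in reserve

# Ten warheads per missile: fire only 500 and almost nothing survives.
print(expected_survivors(500, 10))   # ~31 silos left, with 500 missiles still in reserve
```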
If the schmartest man in the world can screw this up, what hope is there for the rest of us?
You can’t just assume that giving yourself more options is a good thing. You build a bomb because the Nazis might do it first. You use it because it might save millions of lives compared to an invasion. You build a powerful military because the world is a dangerous place, you don’t want Germany and Japan re-arming, and an overwhelmingly superior military in the hands of a stable democracy is a great thing for global security. Then one day you have a Điện Biên Phủ and a commander in chief is faced with a very tough choice. You have to do everything in your power to save American lives, right? Even at the cost of escalation and possible retaliation down the road, right? Or one day a stable Western democracy isn’t quite so stable, and you have people in charge who say, what’s the point of having nuclear weapons if you can’t use them to get what you want? A no-first-use commitment seems like a good idea, but then the other side has more tanks … it all gets very complicated.
It pays to simplify.
To make the right move, you have to understand the game, the meta-game, the game beyond the meta-game. You need second-level thinking to succeed; you need to think strategically. You also want positive convexity: situations that have the potential to go really, really well but don’t cost you much on the downside. (Think the poker equivalent of calling from the big blind with 6-7 suited.)
But in poker, in investing, in life, you also need to keep it simple, stupid. You are often better off limiting your options. Even if you think you’re the smartest player at the table, you want to avoid marginal spots, where you may have to make a big decision in an unclear situation. And if you’re not the smartest or most confident person at the table, the people who are will force you to make a very tough decision at a time when they have the edge.
Reality is too complex, best-laid plans of mice and men, a pound of principle is worth a ton of guile.
And IMHO Western ethics are mostly strategic thinking carried to its ultimate conclusion. If you are strategic, but your goal, in the Kantian sense, is to do what would be best if everyone did it, then strategic thinking is indistinguishable from altruism. In an iterated game, on a long enough time horizon, the most altruistic play is the most strategic.
Ethics = strategic thinking + love. If you care so much about your fellow players that their payoff is your payoff, you get to altruism.
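A toy illustration of that claim, in the spirit of the Evolution of Trust link above: an iterated prisoner’s dilemma tournament with my own made-up payoffs and a small hypothetical population. With enough reciprocators around, the nice, retaliating strategy ends up with the highest total score, even though it never "beats" anyone head to head.

```python
# A toy iterated prisoner's dilemma tournament. Illustrative payoffs:
# mutual cooperation pays 3 each, defecting on a cooperator pays 5 vs 0,
# mutual defection pays 1 each.

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def always_defect(my_hist, their_hist):
    return "D"

def always_cooperate(my_hist, their_hist):
    return "C"

def tit_for_tat(my_hist, their_hist):
    # Cooperate first, then mirror the opponent's last move.
    return their_hist[-1] if their_hist else "C"

def play(s1, s2, rounds=200):
    h1, h2, total1, total2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        p1, p2 = PAYOFF[(m1, m2)]
        h1.append(m1); h2.append(m2)
        total1 += p1; total2 += p2
    return total1, total2

# A small population: one pure defector, one unconditional cooperator,
# and three reciprocators. Round-robin, each pair plays once.
PLAYERS = [("defector", always_defect), ("sucker", always_cooperate),
           ("tft_1", tit_for_tat), ("tft_2", tit_for_tat), ("tft_3", tit_for_tat)]

totals = {name: 0 for name, _ in PLAYERS}
for i, (n1, s1) in enumerate(PLAYERS):
    for n2, s2 in PLAYERS[i + 1:]:
        a, b = play(s1, s2)
        totals[n1] += a
        totals[n2] += b

print(totals)
# The tit-for-tat players come out on top (about 1999 each) and the pure
# defector trails: with enough reciprocity around, defection doesn't pay.
```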
In the real world, people are boundedly strategic and boundedly rational, but the more we can build market designs and institutions that unite the two, the better off we will be.
If you want cooperation, security, stability, you need strength but also honesty and clarity of purpose and communication, looking for win-win situations… you don’t go to the mat against friends for small victories. What goes around, comes around. Sow the wind, reap the whirlwind. But that is a topic for another day.