I'm sure in the future we'll find stuff to disagree about, but here I'm totally on board.
Lots of policies, I think, are fruitfully viewed through the lens of trying to better align the profit motive with the social good. The law and economics tradition basically takes this attitude towards law writ large. Pigouvian taxes and subsidies are one obvious example--because we think there are significant extra social benefits from installing solar panels beyond the benefits to homeowners of the energy they get, we subsidize solar panels. Another is tort law, where it's common to see the point as forcing people to internalize the social costs of their actions, so that it's unprofitable to cause harm.
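To put toy numbers on the solar example (mine, purely illustrative): the Pigouvian recipe is to set the subsidy equal to the marginal external benefit, which moves the homeowner from the privately optimal purchase to the socially optimal one.

```python
# Toy Pigouvian-subsidy arithmetic; all numbers are hypothetical.
# Private marginal benefit of the q-th unit of solar: 100 - 10*q
# Marginal external benefit (avoided emissions, etc.): 20, constant
# Market price per unit: 60

def quantity_chosen(price, subsidy=0.0):
    """Homeowner buys until private marginal benefit equals the effective price."""
    effective_price = price - subsidy
    return max(0.0, (100 - effective_price) / 10)

price = 60
mb_external = 20

q_private = quantity_chosen(price)                      # 4.0 units
q_social = quantity_chosen(price, subsidy=mb_external)  # 6.0 units

print(f"Without subsidy: {q_private} units")
print(f"With subsidy = marginal external benefit: {q_social} units")
```

With the subsidy set at the 20-per-unit external benefit, the homeowner's private optimum coincides with the social optimum, where private plus external marginal benefit equals price.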
While I admit there are perspectives from which this seems alien--it's a lot more natural to someone with a broadly consequentialist bent of mind than a deontological one--it strikes me as pretty attractive. I guess to the extent that I want to qualify it, it's for broadly public choice, non-ideal-theory reasons. There are lots of cases where the profit motive isn't *perfectly* aligned with social good. In principle, externalities are ubiquitous, as no transaction only affects the parties transacting. (And even setting aside externalities, individuals aren't perfect judges of their own good, as pageturner notes above.) But that doesn't mean I think government should try to price all those externalities. The perfect shouldn't be the enemy of the good, and given the realities of politics, absent a reasonably strong default *against* intervening in the market, I think what we'd likely actually get would be too close to central planning, with all its faults. While I generally want to avoid getting too topical here, tariffs are probably a good case in point. You can find pretty plausible, economically defensible rationales for this or that limited use of tariffs. (e.g., national security, or as part of some narrow, targeted industrial policy.) But when tariffs are a tool of real-world politicians, they tend to get used well beyond their defensible economic justifications, and we'd probably be better off with a strong norm against using them at all. So while I'm happy with some uses of targeted taxes and subsidies aimed at bringing the profit motive better into alignment with the social good, the general style of argument makes me uneasy, as I think it's often applied far beyond its rightful bounds.
I agree that misalignment plays some role in the explanation of the opioid crisis. But I am surprised by your claim that corporate values and behavior are misaligned with social or consumer values. Aren't you locating the misalignment in the wrong place? Misalignment is a defect of a system, a fault. But capitalism was not functioning in an unusual or aberrant way in the opioid crisis. The system efficiently and vigorously satisfied consumers' revealed preferences, as it is designed to do.
The more fundamental problem is that consumer desires are often misaligned with their objective interests, their good, and their flourishing. Once you recognize that there is misalignment at the level of the individual, phenomena like the opioid crisis follow immediately, as does your admonition about the danger of capitalism. The problem is that capitalism is just as good at supplying people what they shouldn't have as it is at supplying them with what they really need. The opioid crisis reached the magnitude it did because the supply-side systems were, if anything, too well aligned with consumer preferences, and too well-suited to the task assigned them.
Even if human beings were perfectly rational, fully informed self-interest maximizers, and even if there were no market failures, markets focused on willingness-to-pay would still be misaligned with social welfare maximization, so long as you allow for stuff like declining marginal utility of income and interpersonal comparisons of utility. You’re going to need some redistribution to maximize social welfare under these conditions, and focusing on WTP isn’t going to get you there.
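A minimal sketch of the point, with my own toy numbers and assuming log utility of income: one indivisible good, two buyers, where the poorer buyer would get the larger utility gain but the richer buyer has the higher WTP.

```python
import math

# Toy illustration; all numbers are hypothetical. Buyers have log utility
# of income, u(m) = log(m), so a dollar is worth less the richer you are.

buyers = {
    # name: (income, log-utility gain from consuming the good)
    "rich": (200_000, 0.10),
    "poor": (20_000, 0.15),   # values the good *more* in utility terms
}

def willingness_to_pay(income, du):
    """Largest payment p such that log(income - p) + du >= log(income)."""
    return income * (1 - math.exp(-du))

for name, (income, du) in buyers.items():
    wtp = willingness_to_pay(income, du)
    print(f"{name}: utility gain {du}, WTP = ${wtp:,.0f}")

# rich: utility gain 0.10, WTP = $19,033
# poor: utility gain 0.15, WTP = $2,786
# The market allocates the good to the rich buyer (higher WTP), even though
# selling to the poor buyer would raise total utility more.
```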
But if that redistribution is done via government coercion, rather than private charity, you will reliably reduce social welfare in the long run, because spending other people's money is more likely to lead to other (non-human-flourishing-maximizing) considerations dominating the process.
Interesting point about capitalism as an alignment problem. But isn’t government policy also an alignment problem? Surely the machinery of government isn’t automatically and always aligned with the public interest, no doubt because of lots of bad incentives (including voters having little incentive to be informed or rational about complex policies). Hayek is great but there’s also James Buchanan who won the Nobel prize for developing public choice theory—the idea that there is “government failure” just like market failure. That is, perhaps, the bigger alignment problem to solve.
This is similar to the idea I meant to be getting at in saying that I don't always want government to try to solve alignment problems of capitalism, since I think the cure is often (not always!) worse than the disease, for just the kind of reasons you're talking about.
Yeah, this is what I was thinking as well. Tyranny of the majority makes democratic government misaligned with caring for minorities. Regulatory capture makes regulators misaligned, serving the interests of corporations instead of consumers. Principal-agent problems mean politicians are misaligned with the voters they serve, agency heads are misaligned with the executive, and low-level government employees, at the bottom of the chain of misalignment, are very far from serving the interests of voters.
I feel like the biggest systemic problem in America is that democratic local government is misaligned. Local government centrally plans land use and transportation policy to serve the interests of a middle- or upper-middle-class median voter, and it is happy to use regulatory power to subtly push lower-income undesirables out of the community. This has created a massive housing crisis that drives up rents, and then the cost of everything else, as wages have to rise to meet high rents.
The way I’ve been thinking about it for years is that a corporation or a government *is* an artificial intelligence, just one made out of people rather than silicon. Profit or votes are the value function, and they are partially, but not fully aligned with our values.
One thing I like about the post’s analogy (roughly, market : profit :: AI : narrow goal) and framing (alignment problem vs. calculation problem) is that it presents capitalism as having purely instrumental value. The upshot is we should think of market interventions in terms of practical costs and benefits, not as something with the inherent downside of limiting freedom—though there are, as Greco points out, consequentialist reasons to have a thumb on the scale against intervention. This all seems uncontroversial to me, especially when you consider that “unregulated” markets are creatures of law designed to achieve goals, not natural phenomena.
The hell of it is, an AI oriented toward human flourishing could go a long way toward improving the alignment of capitalism. But someone would have to build it -- and the people building AIs right now are smack in the middle of a whole bunch of misalignments.
Great post!
Worth pointing out too that optimizing for willingness-to-pay instead of social welfare is not just an unintentional feature of market economies - sometimes, it’s a deliberate feature of policymaking! Whenever a policymaker relies on a cost-benefit analysis, they’re relying on something that measures only WTP, not social welfare. Sometimes the economist doing the CBA is careful to distinguish between the two; sometimes they’re not.
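To make the gap concrete, here's a toy sketch (hypothetical numbers, using the standard distributional-weighting idea) of one policy scored two ways: an unweighted CBA that just sums dollar-denominated WTP, and a weighted version where a dollar to a poorer group counts for more.

```python
# Toy sketch; all numbers are hypothetical.
# Each entry: (net dollar impact on a group, group income relative to the median)
policy_impacts = [
    (-300, 3.0),   # high-income group loses $300; income 3x the median
    (+200, 0.5),   # low-income group gains $200; income 0.5x the median
]

def unweighted_cba(impacts):
    """Standard CBA: just sum dollar-denominated WTP."""
    return sum(dollars for dollars, _ in impacts)

def weighted_cba(impacts, eta=1.0):
    """Distribution-weighted CBA: weight = (relative income)**-eta,
    so dollars to poorer groups count for more (eta=1 is roughly log utility)."""
    return sum(dollars * rel_income ** -eta for dollars, rel_income in impacts)

print(f"Unweighted net benefit: {unweighted_cba(policy_impacts):+.0f}")  # -100
print(f"Weighted net benefit:   {weighted_cba(policy_impacts):+.0f}")    # +300
```

Same policy, opposite verdicts: it fails the pure WTP test but passes once declining marginal utility of income shows up in the weights.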
Consumer desires are always a kind of forecast of future satisfaction, and like all forecasts, are often wrong. This is what is meant by misalignment. I look at this in some depth in https://open.substack.com/pub/kantandsmith/p/the-economics-of-desire-and-satisfaction
Your example illustrating the capitalist alignment problem is a prescription drug, a product from one of the least free parts of the "free market." (FDA is a government monopoly, doctors are gatekeepers, health insurance is highly regulated, etc). Don't you need to also defend the counterfactual, "and it would have been even worse if the system here had been even more capitalist"? Otherwise, this alignment problem could be an argument *for* capitalism.
I find I'm a little confused by this piece.
What is the key point here?
The importance of the price system has been known forever - thank you Hayek.
The fact that capitalism can result in the vigorous pursuit of profit in ways that are not necessarily aligned with what many would consider to be a social good is also long known.
So is the point just that an AI may be less aligned with any social good than most (?) human actors?
This is of course possible, but surely a pretty trivial observation. And the sheer immorality and complete disregard for the well-being of others that has been a commonplace of capitalist development, alongside its many benefits, is extremely well documented. I'm far from persuaded you could design an AI more morally unscrupulous and corrupt than some of the powerful actors inflicting harm in societies around the globe today.
So tbh, with the best will in the world I do assure you, I'm unclear where the filling is in this sandwich.