Now let’s think about AI. Is there a producer-specific constraint on the amount of AI we can produce? Of course there’s the constraint on energy, but that’s not specific to AI — humans also take energy to run. A much more likely constraint involves computing power (“compute”). AI requires some amount of compute each time you use it. Although the amount of compute is increasing every day, it’s simply true that at any given point in time, and over any given time interval, there is a finite amount of compute available in the world. Human brain power and muscle power, in contrast, do not use any compute. So compute is a producer-specific constraint on AI, similar to constraints on Marc’s time in the example above.
So the net value of using the AI as a doctor for that one-hour appointment is actually negative. Meanwhile, the human doctor’s opportunity cost is much lower — anything else she did with her hour of time would be much less valuable. In this example, it makes sense to have the human doctor do the appointment, even though the AI is five times better at it. The reason is that the AI — or, more accurately, the gigaflop of compute used to power the AI — has something better to do instead. The AI has an absolute advantage over humans in both electrical engineering and doctoring. But it only has a comparative advantage in electrical engineering, while the human has a comparative advantage in doctoring.
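The arithmetic behind that allocation can be made concrete with a toy calculation. All dollar figures below are invented for illustration (the essay's example doesn't give specific numbers); the AI is five times better at doctoring and even further ahead at engineering:

```python
# Toy comparative-advantage sketch with illustrative (made-up) values.
# Value produced per hour by each producer at each task:
ai = {"engineering": 1000.0, "doctoring": 500.0}   # per compute-hour
human = {"engineering": 50.0, "doctoring": 100.0}  # per human-hour

def opportunity_cost(producer, task):
    """Value of the best alternative task the producer gives up."""
    return max(v for t, v in producer.items() if t != task)

# AI doctoring earns $500 but forgoes $1000 of engineering: net negative.
ai_net = ai["doctoring"] - opportunity_cost(ai, "doctoring")
# Human doctoring earns $100 and forgoes only $50 of engineering: net positive.
human_net = human["doctoring"] - opportunity_cost(human, "doctoring")

print(ai_net)     # -500.0
print(human_net)  # 50.0
```

Even though the AI beats the human at both tasks in absolute terms, its net value from doctoring is negative once the forgone engineering work is counted — which is exactly the comparative-advantage result.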
In fact, if AI massively increases the total wealth of humankind, it’s possible that humans will be paid more and more for those jobs as time goes on.
As long as you have to make a choice of where to allocate the AI, it doesn’t matter how much AI there is. A world where AI can do anything, and where there are massive amounts of AI in the world, is a world that’s rich and prosperous to a degree that we can barely imagine. And all that fabulous prosperity has to get spent on something. That spending will drive up the price of AI’s most productive uses. That increased price, in turn, makes it uneconomical to use AI for its least productive uses, even if it’s far better than humans at those uses. Simply put, AI’s opportunity cost does not go to zero when AI’s resource costs get astronomically cheap. AI’s opportunity cost continues to scale up and up and up, without limit, as AI produces more and more value.
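The pricing logic can be sketched with a toy scarcity model. The task names and values here are invented, and real markets are far messier, but the mechanism is the same: scarce compute flows to its highest-value uses, the marginal use sets the price, and everything below that price falls to humans:

```python
# Toy model: a fixed pool of compute is allocated to its highest-value
# uses first. The marginal (last-funded) use sets the effective price
# of compute; any task worth less than that price is uneconomical for
# AI, even if AI is better than humans at it. All numbers are invented.

tasks = {               # value per unit of compute, per task
    "drug_design": 900.0,
    "chip_layout": 700.0,
    "tax_prep":    300.0,
    "doctoring":   200.0,
}
compute_units = 2       # only two units of compute exist

ranked = sorted(tasks.items(), key=lambda kv: kv[1], reverse=True)
funded = ranked[:compute_units]
price_of_compute = funded[-1][1]   # value of the marginal funded use

# Everything below the marginal price is left to humans.
left_to_humans = [t for t, _ in ranked[compute_units:]]

print(price_of_compute)  # 700.0
print(left_to_humans)    # ['tax_prep', 'doctoring']
```

Doubling the compute pool doesn't break the logic — it just moves the margin down the list, and the tasks below the new margin still go to humans.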
The open question is whether AI’s opportunity cost scales faster than human labor is displaced. Workers might even begin to organize around the sentimental and emotional value of human labor — placing a dollar value on qualities that are harder to quantify economically. The market may need to price in precisely what is inefficient about human connection. Love is inefficient.
