First, let’s cover the scary stuff. The stated goal of the top AI labs is to make systems that are “superintelligent.” This is envisioned as a step change over even the smartest humans, similar to the leap that humans made over chimpanzees and other hominids. When Homo sapiens made this leap, we quickly became the apex predator everywhere we went, leading to the mass death of other species. The birth of superintelligence, many reason, could result in a similar evolutionary outcome for humans.
We know that LLMs represent their training data in fantastically complicated ways that we do not know how to unravel. We also know that an LLM’s internal activities can be isomorphic to (having the same structure as) human language processing (h/t the wonderful Sam Hammond).
Google’s latest frontier model, Gemini, was reportedly trained on five times the computing power of GPT-4, yet it is only marginally better on standard performance benchmarks. Perhaps this will lead to a natural pause, where the frenetic pace of building ever-bigger models slows down, and the only way to improve LLM reliability and usability will be to advance our understanding of exactly what’s going on under the hood.
Many AI worriers will respond to this argument by saying that I am simply dismissing the concept of superintelligence. I am not. I am simply saying a) that I do not believe humans can create God (for reasons mathematical, thermodynamical, and theological) and b) that superintelligent AI will encounter the same problems that flummox all humans, including the geniuses: highly imperfect information, uncertainty about the future, and scarce resources.
The world isn’t like a video game, where the entity with the greatest “intelligence” simply wins by brute force. Nor is it like Go, with clear, unchanging rules. The world isn’t even like it was in prehistoric times, when humans conquered the earth. Because of human ingenuity and intelligence, we have built vast industrial, legal, social, and economic apparatuses that govern the resources we care about most. To achieve a feat like conquering the world, a superintelligent AI would have to contend with those apparatuses.
AI, on the other hand, will be subject to an altogether different kind of evolutionary pressure: the pressure of the marketplace. Humans will not choose to use unreliable or unsafe AI in situations where reputations and money are on the line. Behind all the hype, businesses are struggling to use current LLMs because of their propensity to hallucinate and otherwise behave in unexpected ways. These systems are valuable to the business world only insofar as they are both powerful and predictable.
Indeed, the only remotely plausible AI doom scenario I can imagine is one where AI research is cloistered inside a centralized institution (and thus not subject to market pressures) and powerful AI is not broadly dispersed (so humans have no way to fight back). Ironically, this is the exact regime many in the AI doom camp advocate.
