Are We Having the Wrong Nightmares About AI?
Abstract
Though seemingly opposite, doom and optimism about generative AI's spectacular rise both center on AGI, or even superintelligence, as a pivotal moment. But generative AI operates in a manner distinct from human intelligence, and it is not a less intelligent human on a chip slowly getting smarter, any more than cars were mere horseless carriages. It must be understood on its own terms. And even if the Terminator isn't coming to kill us and superintelligence isn't racing to save us, generative AI does bring profound challenges, well beyond the usual worries such as employment effects. Technology facilitates progress by transforming the difficult into the easy, the rare into the ubiquitous, the scarce into the abundant, the manual into the automated, and the artisanal into the mass-produced. While potentially positive in the long term, these inversions are extremely destabilizing during the transition, shattering the correlations and assumptions of a social order that relied on the superseded difficulties as mechanisms of proof, filtering, sorting, and signaling. For example, while few would dispute the value of the printing press or of books, their introduction led to such destructive upheaval that the resulting religious wars caused proportionally more deaths than all other major wars and pandemics since, combined. Historically, a new technology's revolutionary impact comes from making what is already possible and desired cheap, easy, fast, and large-scale, not from clearing the outdated or ill-fitting benchmarks that technologists tend to focus on. As such, Artificial Good-Enough Intelligence can unleash chaos and destruction long before AGI is reached, if it ever is. Today's AI is already good enough to blur or pulverize our existing mechanisms of proof of accuracy, effort, veracity, authenticity, sincerity, and even humanity. Countering the tumult of such a transition will require extensive technological, regulatory, and societal effort. But the first step is having the right nightmares.