I feel like it’s only going to be a matter of a few years before we start to see articles in The New Yorker and National Review about how America is falling behind China and others in its use of AI. But it won’t be about chips and rare earth minerals; it’ll be about the collapse of the humanities.
One of the things basically no one seems to be thinking about is that AI models actually need good prompts to grow as much as they need good data. In fact, I believe they will need more and more linguistic sophistication in their prompts over time. Data is consumable and processable; the prompting and imagination needed to draw out the connections among all these data have not, at least to my mind, been replicated by any AI model I have seen or read about.
I expect this crisis to happen soon. Already, The New Yorker and The Atlantic have both published long pieces on the decline of college students’ reading abilities. Anyone near a university-age student, let alone those of us who teach or study at universities, sees it too. It’s not news; there has been a crisis in student reading abilities for at least a decade, made much worse, by the way, by a nihilistic teaching philosophy that has just passed students on year after year. State and local governments, teachers unions, and administrators have all been complicit in the gradual decline of standards and standards enforcement in education. The right isn’t wrong about everything, despite what many on college campuses claim.
Artificial intelligence and large language models (LLMs) have already consumed much of the data on the internet. The next thing is that the processing abilities of those models, effectively their physical capacity to read and interpret data, are now the focus of big companies like Google, X, and OpenAI. That this is the new focus became clear a few months ago, when the cheaper Chinese model DeepSeek showed that these super-expensive, power- and chip-hungry behemoths may have this next phase wrong.
The phase after this processing-heavy one, though, is actually the toughest part of the AI mission, in my view. And it seems to be the one the companies themselves are effectively glossing over: training and adapting these models to anticipate and emulate human activities in the real world.
Agents, meaning AI models doing tasks and jobs in the real world, are how these companies will monetize their work. And make no mistake, whatever Sam Altman says, that is what this is about. (I firmly believe that Altman is much more of a Sam Bankman-Fried than he is an altruistic idealist, but more on that another time…)
Unless they can agent-ize all of these technologies, they won’t be able to sell them as replacement assistants, or replacement researchers, or replacement lawyers, and recoup the hundreds of billions Altman and others are claiming they need to make the AI revolution happen.
And the only way they will be able to make these agents really useful is through training and prompting by people who can, you know, hone and craft clear and sophisticated instructions.
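To make that concrete, here’s a minimal sketch, in Python, of what that difference looks like in practice. (I’m assuming the OpenAI Python SDK here; the model name, the prompts, and the `summarize` helper are my own illustrative choices, not anything these companies publish.)

```python
# A minimal sketch of the gap between a lazy prompt and a crafted one.
# Assumes the OpenAI Python SDK (pip install openai) and an API key in
# the OPENAI_API_KEY environment variable; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

# The lazy version: vague, no audience, no constraints, no structure.
vague_instructions = "Summarize this contract."

# The crafted version: the kind of instruction a trained reader and
# writer produces, with role, audience, scope, and format spelled out.
crafted_instructions = (
    "You are a paralegal preparing notes for a busy attorney. "
    "Summarize the contract below in at most 150 words. "
    "List: (1) the parties and their obligations, (2) key dates and "
    "deadlines, and (3) termination and liability clauses, citing "
    "section numbers. Flag any ambiguous language separately under "
    "the heading 'Needs review'. Do not paraphrase defined terms."
)

def summarize(contract_text: str, instructions: str) -> str:
    """Send the contract to the model under the given instructions."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any chat model would do
        messages=[
            {"role": "system", "content": instructions},
            {"role": "user", "content": contract_text},
        ],
    )
    return response.choices[0].message.content
```

The model is identical in both cases; the only variable is the quality of the human language going in. That’s the skill I’m talking about.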
That’s why, I’m convinced, the next jobs revolution will be in the humanities. Today’s art history major turned barista will be the next generation’s overpaid software engineer. Her future will be bright, and the universities that prepare these students should start to plan for a near future where they are showered with new grants and promises by governments all over the place freaking out over the “humanities gap.”
I know this all sounds crazy, but once the processing-intensive phase of AI development is over (and it looks to me like that’s coming pretty soon), the resurrection of the humanities employee will be upon us.