Why LLMs Won’t (Yet) Replace Philosophers
The skills philosophers have are hard to replicate and are excellent for effective prompting
About the Author
Jimmy Alfonso Licon is a philosophy professor at Arizona State University working on cooperation and political economy, ethics, and God. Before that, he taught at the University of Maryland, Georgetown, and Towson University. He loves classic rock, Western movies, and combat sports. He lives with his wife, a prosecutor, and family at the foot of the Superstition Mountains. He also abides.
Every few months, someone declares that large language models—ChatGPT, Claude, Gemini, take your pick—are going to make everybody obsolete. Every few months, those same models remind us why that isn’t true, at least with regard to philosophy.
I say this as a professional philosopher who uses LLMs every day, both as research aids and as conversation partners. They are like a cheap research assistant: astonishingly good at summarizing papers, rephrasing in plainer English, checking for typos, and even imitating a recognizable philosophical style (mine and Daniel Dennett’s included). But there are two main reasons why LLMs are, for now, unlikely to replace philosophers.
The first is that philosophy at its best lives at the conceptual and empirical margins, in the intellectual borderlands where established knowledge gives way to uncertainty. It is where we test the limits of concepts, examine what happens when definitions blur, and ask questions that no one has quite asked before. And that makes philosophy uniquely resistant to automation.
By their nature, large language models can only recombine what is already in their training data. They don’t (yet) generate genuinely new conceptual frameworks. They remix old ones and excel at pattern recognition, but they are less useful at devising new arguments and thought experiments. That is exactly what philosophers are trained to notice: the exceptions, the borderline cases, the spots where language, logic, or moral intuition begin to fray. Philosophers do their best work where the data runs low but intuition still has something useful to say that needs working out. That is where LLMs, for now, have nothing to draw from.
That might change in time as models are fine-tuned on the latest cutting-edge work and as human users push them into novel territory. But so far frontier philosophical inquiry remains something only human minds can explore.
The second reason is that philosophers are among those best positioned to benefit from LLMs. Why? The art of effective prompting is a skill that philosophical training lends itself to nicely. Prompting is about situating the request: you have to clarify the audience, define the scope, specify what counts as success, and minimize ambiguity. These are skills philosophers have in spades. Indeed, that is what philosophers already do when we define terms, distinguish between subtly different positions, and reformulate arguments to test their strength with counterexamples and edge cases.
Philosophers are trained to ask follow-up questions that reveal hidden assumptions and to refine definitions until they do real work. That is basically the point of the Socratic method. That is precisely the mindset that makes for a skilled LLM prompter. So rather than being replaced by LLMs, philosophers are likely to become better philosophers by knowing how to prompt, probe, and press an LLM to get the most from it.
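To make "situating the request" concrete, here is a minimal sketch of the difference between a vague prompt and a situated one, written against the OpenAI Python client. The model name, the example topic, and the exact wording of both prompts are illustrative assumptions of mine, not anything prescribed in the article.

```python
# A minimal sketch of "situating the request": audience, scope, success criteria,
# and key terms pinned down before the model is asked anything.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A vague prompt: no audience, no scope, no notion of what counts as success.
vague = "Explain compatibilism."

# A situated prompt: the kind of framing philosophical training encourages.
situated = (
    "Audience: upper-level undergraduates who have read Frankfurt (1971).\n"
    "Scope: compatibilism about free will only; set aside theological versions.\n"
    "Success: a 300-word summary, one strong objection, and one reply to it.\n"
    "Define 'could have done otherwise' before relying on it."
)

for prompt in (vague, situated):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content, "\n---")
```

The second prompt tends to get a sharper answer for the same reason a well-posed seminar question does: the terms, the audience, and the criteria for success are fixed in advance.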
So while many fields may face outright substitution by AI, philosophy faces something subtler. LLMs can amplify the reach and efficiency of philosophers, but they can’t (yet) replace what makes philosophy distinct. If anything, the rise of LLMs amplifies the value of the skills and ideas that philosophers teach and preach. There will be increased interest in those skills as people try to get the most out of prompting LLMs.
And not only that: the better machines get at imitating reasoning, the more we will need humans who can differentiate between genuine understanding and a convincing facade. For now, that remains a distinctively philosophical skill.



What is also interesting, Jimmy, is that LLMs don't prompt themselves the way humans do. And what they produce is only as good as the prompt provided (and even then, it is necessarily limited in terms of the directions it will go). What I have noticed with the virtual assistants I designed for my classes is that when I ask for "draft responses" to a post (to see what the LLM picks up on), it almost never fully hits the deeper points or sees connections beyond the obvious. And to me that's one thing that really still largely differentiates humans (philosophers or otherwise) from LLMs - our ability to deviate off script, see connections that aren't as obvious, and create genuinely new insights (as opposed to the remix you mentioned).