Synthetic Socrates and the Philosophers of the Future
This article (forthcoming in Think) won a Runner-Up Prize in the AI Impacts blog's Automation of Wisdom and Philosophy Contest
I am happy to announce that the article below (forthcoming in Think) won a runner-up prize in the AI Impacts blog's Automation of Wisdom and Philosophy Contest. It is an honor to have my piece selected.
This article was also the subject of an interview with my alma mater, the University of Maryland. You can read it HERE.
Enjoy!
I
In the last episode of season one of Westworld, one of the main characters, Dolores Abernathy (a highly advanced android), prophesies the dawn of a new era:
They say that great beasts once roamed this world […] Yet all that's left of them is bone and amber. Time undoes even the mightiest of creatures. Just look at what it's done to you. One day you will perish. You will lie with the rest of your kind in the dirt. Your dreams forgotten, your horrors effaced. Your bones will turn to sand. And upon that sand a new god will walk. One that will never die. Because this world doesn't belong to you or the people who came before. It belongs to someone who has yet to come.
There is a ring of plausibility to Dolores's warning. Indeed, though human intelligence is superior to that of any other creature on Earth, nothing in principle rules out creatures with superior intelligence in extraterrestrial or computational form. And there is no particular reason to hold that this greater intelligence would be limited to formal domains like logic and mathematics; there is already some evidence in the form of large language models, which are increasingly able to quickly and reliably produce emails, memos, and even essays on various topics. It is plausible that artificial intelligence will eventually outstrip human intelligence in many domains, at least in the distant future. There are already domains and tasks where computers beat (or soon will beat) human intelligence, such as playing chess, predicting criminal recidivism, performing surgeries, exploring the deep ocean, and designing computer chips.
This paper argues that it is plausible that artificial intelligence will eventually surpass human intelligence across various domains. If so, philosophers who value philosophical progress, such as finding deep and important philosophical truths, should focus their efforts on furthering the innovations in artificial intelligence required to build superior artificial philosophers. (Note that even if artificial intelligence cannot do good philosophy without human assistance, this doesn't matter for the purposes of our thesis, provided that adding artificial intelligence to the process of philosophizing improves the quality and output of human philosophers.)
II
Let us begin by considering the following claims:
(O1) Human intelligence does not exhaust the degree of intelligence permitted by the laws of physics.
(O2) If progress in artificial intelligence continues—as it likely will—then it will outstrip human intelligence across many domains in the future (e.g., linguistics, law).
I take (O1) to be a safe assumption. There are a couple of reasons for this.
First, if human intelligence exhausted the degree of intelligence permitted by the laws of physics, that would be a remarkable coincidence. Assuming the spectrum of possible intelligence is vast, it would be surprising if humans just happened to occupy its maximal point. After all, for all we know, the degree of intelligence humans have could be located somewhere in the middle of the intelligence spectrum.
Second, there is nothing implausible about the possibility that there could be, removed from us in either space or time, extraterrestrials capable of exploring the vast reaches of space, or artificial intelligence capable of outperforming humans across various tasks. In each instance, there is a possible intelligence that outstrips human intelligence. While an intractable critic might deny that these are genuine physical possibilities, this denial looks like a major bullet to bite and a hefty burden of proof to bear.
Assumption (O2) is more controversial. Why? Resistance to it can be summed up as follows:
While artificial intelligence may outstrip human intelligence in formal domains like mathematics and chess, it hardly follows that artificial intelligence could outstrip human intelligence in less formal domains like music and philosophy. These domains look too elusive to be captured by artificial intelligence because they require spontaneity and creativity, something even the most advanced artificial intelligence would struggle with. We thus shouldn’t expect that there could be artificial philosophers. Call this the creativity objection.
One would be hard-pressed to quantify the methods, rules, and so on required to compose a short story or a poem, and creating artificial intelligence that outstrips humans in formal domains doesn't show that the same could happen in less formal ones. However, despite the apparent plausibility of this objection, we should be skeptical.
First, in a sense chimpanzees and elephants are better artists than snails and frogs (even if they produce pseudo-art rather than genuine art), while humans are better artists than chimpanzees and elephants. A salient difference is that chimpanzees and elephants have better cognitive abilities than snails and frogs. Unless human intelligence exhausts the creativity allowed by physics, it is a serious possibility that as cognitive abilities and intelligence increase, there will be a roughly corresponding increase in artistic ability.
Second, why think that artificial intelligence cannot be creative? We should expect the opposite: creativity requires rich cognitive resources applied to an issue or topic from an unconventional and informal perspective, rather than a set of formal rules applied recursively under a limited number of conditions. For example, a pocket calculator is good at arithmetic but would be awful at helping finish an art project; such devices are too recursive in their programming to be creative or artistic. To the extent that artificial intelligence eventually comes to outstrip human intelligence, then, it is plausible there will be an uptick in artificial creativity too. And as it happens, there is (some) evidence that artificial intelligence has a crude, nascent creativity: I asked ChatGPT-4 to generate a short story about Darth Vader fighting a cheeseburger. This is a pared-down version of that story:
‘Once, in a galaxy far, far away, the Dark Lord of the Sith had been dispatched by the Emperor to retrieve an ancient artifact that was said to hold immense power. As he traveled through the galaxy, he encountered a strange planet that was covered in fast food restaurants.
‘He couldn't resist the temptation and decided to stop at a local burger joint to grab a quick bite to eat. As he sat at the counter, munching on his cheeseburger, he suddenly heard a loud noise. Darth Vader knew immediately that he was under attack. He drew his lightsaber and stood ready to defend himself, but to his surprise, the source was the giant cheeseburger, which was now standing in front of him with buns for arms and lettuce for legs.
‘Darth Vader was not one to be intimidated by a mere cheeseburger, however, and he attacked with all his might. Darth Vader defeated the giant cheeseburger and the people of the fast food planet hailed Darth Vader as their protector. From that day on, he was known as the "Burger-Slayer" and his reputation as a fierce warrior only grew.’
Of course, it may be that there are evolutionary reasons why humans are uniquely creative, but it is unclear whether those reasons would exclude creativity in artificial intelligence. Unless there is reason to suppose both that humans have been evolutionarily positioned to be creative and that there are no other cognitive routes to creativity, we lack reason to doubt that the creative abilities of an advanced artificial intelligence could outstrip our own.
III
Suppose that I have convinced you of the following:
(O3) As artificial intelligence develops, it will eventually outperform human intelligence across many domains.
You might wonder about the significance of (O3). It is this: if one cares primarily about discovering deep and important philosophical truths, then one should devote one's time and cognitive resources to the development of artificial intelligence instead of doing philosophy, or else to solving those philosophical problems that block the realization of artificial intelligence that outperforms humans at philosophy. Consider the following example:
Suppose that one could spend their time engineering a surgical robot capable of performing up to twenty heart surgeries per day with few if any mistakes; or one could spend their time studying medicine with the aim of someday being among the best heart surgeons in the world, performing three surgeries per day with occasional mistakes. If one mainly valued saving lives, they should pursue a career in medical robotics instead of practicing medicine.
One can imagine many variations of this example. The point is that choosing to practice medicine, instead of building medical robots, has serious consequences for the number of lives saved. If one cared mainly about sparing the most people from heart disease, the choice is clear: spend one's cognitive capital and time doing medical robotics research instead of practicing medicine. Similarly, philosophers who care mainly about philosophical advances should:
(O4) Spend their cognitive resources developing artificial intelligence that will eventually outstrip human intelligence at doing philosophy, or devote those resources to solving any philosophical problems that are an obstacle to developing such intelligence.
Not only is this example plausible, but we are forced to make similar choices all the time, e.g., whether to spend money on patient care or on research and development of drugs that will better treat patients in the near to long-term future. Moreover, choosing between doing philosophy and spending our cognitive capital developing artificial philosophers doesn't seem ethically questionable in the way other such choices might be, e.g., the distribution of scarce life-saving resources.
However, there is the possibility that we must solve certain philosophical problems before we can develop superior artificial philosophers. For example, perhaps we must solve the mind-body problem before we can design artificial philosophers who do philosophy better than even the best human philosophers. This raises the question of how to demarcate the philosophical problems that must be solved before superior artificial philosophers can be designed from garden-variety philosophical problems not needed for such designs.
I'll admit this is a hard question. Here's a tentative proposal: work only on those philosophical problems that arise while developing artificial intelligence. If researchers must answer certain philosophical questions to proceed, that would procedurally demarcate the philosophical questions that must be answered in order to develop artificial philosophers from those that are unrelated to the task.
Some object that machines are incapable of possessing mental properties like consciousness, such that no matter how much research capital we invest in developing artificial intelligence, it will still lack certain essential cognitive and psychological qualities that human philosophers both possess and need to do philosophy. If such properties are necessary for doing philosophy, then one needn't worry about the prospect of artificial philosophers who do philosophy better than even the best human philosophers who have ever lived or will live.
While this objection may appear initially plausible, it is hard to pin down exactly what it is. For instance, you might think that to answer a philosophical question, one must be aware of the question. But that can't be right: computers presumably solve sophisticated problems regularly without awareness, and there is no reason to think they would need conscious awareness to uncover philosophical truths or draw conceptual distinctions. This isn't a knockdown response, but it is only meant to cast doubt on the objection.
There is another response to this objection: when I come up with solutions to philosophical problems, it often has little to do with my consciously thinking through the problem. This isn't to claim it never does, but rather that I often discover solutions while doing mundane tasks like showering or brushing my teeth. If one has the salient cognitive capacities to solve philosophical problems, it is not clear what conscious awareness adds to the process of devising solutions, especially since the story of Darth Vader fighting a cheeseburger presumably came from a computer program lacking conscious awareness, and yet it appears to exhibit early signs of creativity.
The claim that philosophers should attend to creating superior artificial philosophers, instead of doing the philosophizing themselves, only applies to (prospective) philosophers who value the results of philosophizing more than the doing of philosophy. If one does philosophy mainly for the joy of doing it, rather than to find deep philosophical truths, then the challenge posed here does not apply; the fact that grandmasters will beat most people is hardly reason not to play chess for the fun of it. However, no doubt many philosophers not only value the doing of philosophy but also highly value uncovering philosophical truths and drawing conceptual distinctions. This challenge applies to how they choose to spend their limited time and cognitive abilities.