What is also interesting, Jimmy, is that LLMs don't prompt themselves the way humans do. What they produce is only as good as the prompt provided, and even then it is necessarily limited in the directions it will go. What I have noticed with the virtual assistants I designed for my classes is that when I ask for "draft responses" to a post (to see what the LLM picks up on), it almost never fully hits the deeper points or sees connections beyond the obvious. And to me that's one thing that still largely differentiates humans (philosophers or otherwise) from LLMs: our ability to deviate from the script, see connections that aren't obvious, and create genuinely new insights (as opposed to the remix you mentioned).
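For context, the "draft response" workflow is roughly the following. This is a minimal sketch, assuming the OpenAI Python SDK; the model name, the system prompt, and the student_post variable are placeholders I'm introducing for illustration, not the actual assistant setup:

```python
# Minimal sketch of asking an LLM for a "draft response" to a discussion post.
# Assumes the OpenAI Python SDK; model name and prompts are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

student_post = "..."  # the discussion post being responded to

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "You are a teaching assistant. Draft a thoughtful reply "
                    "to the student's discussion post."},
        {"role": "user", "content": student_post},
    ],
)

print(response.choices[0].message.content)  # the LLM's draft reply
```

The point of the exercise is that even with a carefully worded system prompt, the draft tends to stay close to the surface of the post; it reliably summarizes, but it rarely surfaces the deeper connections a human reader would.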
Agreed. I've noticed something similar using LLMs for work almost daily. I think it's because LLMs find patterns across enormous amounts of data, while humans find patterns in much smaller amounts of data by relying on heuristics.