Discussion about this post

Matt Grawitch:

What is also interesting, Jimmy, is that LLMs don't prompt themselves the way humans do. What they produce is only as good as the prompt provided (and even then, it is necessarily limited in the directions it will take). What I have noticed with the virtual assistants I designed for my classes is that when I ask for "draft responses" to a post (to see what the LLM picks up on), they almost never fully hit the deeper points or see connections beyond the obvious. And to me that's one thing that still largely differentiates humans (philosophers or otherwise) from LLMs: our ability to deviate from the script, see connections that aren't obvious, and create genuinely new insights (as opposed to the remix you mentioned).

