I would challenge the notion that LLMs are bullshitters. I don't use ChatGPT much; I use Claude. I've found that the information it gives me is getting more reliable all the time. When it's important that I get something right, I check Claude's answers for bullshit and rarely find any. You don't establish anywhere that LLMs are bullshitters, and doing so would make your article more credible.
I should have mentioned, for more information on Frankfurtian bullshit, see:
https://en.m.wikipedia.org/wiki/On_Bullshit
Getting something right reliably does not disqualify it as bullshit in the *Frankfurtian sense*. It could be completely accurate, and even that wouldn't rule out bullshit of the Frankfurtian type, just FYI.
I agree with some points, but the notion that we are bullshit does not seem right to me.
Consider that the populace might be being LED to believe that AI, people, and everything is bullshit — that it’s no longer POSSIBLE to discern fact from fiction. Then, I suppose the populace could be very easily ruled/bullied/pushed around. (We may very well already be at this point societally.)
I don't think that the author is saying "we are bullshit" but rather "we are bullshitters". And this is entirely accurate. Even the most honest person among us can still be (and most probably is) a bullshitter.
Thanks.
But for what it’s worth, the piece does state, “ChatGPT is bullshit. But the deeper point is that it is bullshit because we are.”
I like this:
We often talk about aligning AI with human values. But perhaps the more urgent task is aligning our values with truth.
Or better yet: figuring out how to design institutions and incentive-structures that reward truth telling and punish bullshit. I ain't holding my breath though.
The real bullshitters are the people in large corporations creating these systems that leave us with zero options but to use them and literally live with them. We're bullshitters because we are bullshitted by bullshitters.
I think this is true. B**shit comes across as pejorative in this post, but I don't think it's meant to be, right? Every output is a hallucination; it's just that a lot of them, reflected back at you, can be illuminating. But yes, once you've interacted with these tools for a while, it's clear they will just respond to your requests as opposed to exercising any kind of independent thinking. They can still be very helpful, but you have to know the limits and temper your expectations. I think both sides in the AI debate dig in and get things wrong: skeptics claiming its capabilities are useless are gaslighting the millions of people who use it effectively every day, while the boosters often look away from its very real deficits. Because the big question, how many of these problems can actually be solved, is unknown (despite both sides definitively stating otherwise; humans are almost always wrong), it causes a lot of disagreement in the present moment. Great post, though. I love Frankfurt's book.