I spend a lot of time calling students in for AI misuse (not my students; this is a job I do for my school). What depresses me is how *badly* they use it: made-up references and quotes, bullet-point lists pasted in wholesale, every other paragraph given its own heading. They literally just copy and paste. If we can at least catch the students who can't even be bothered to use it properly, we're doing some sort of service.
Universities tend to attract students of different dispositions. A large number of them are motivated purely by credentials in the way described. That's no surprise, because the structure of schooling their whole lives has encouraged this kind of thinking. But a good chunk are interested in ideas, in expanding what they know, in digging into the issues. They form a large part of the constituency of students who object to their peers using LLMs, just as they would have disapproved of those who used to buy their papers.
The latter group of students is largely why I teach. They unfortunately seem like the smaller group though.
This is fair. I finished my undergraduate degree over fifteen years ago, and I recall that even in philosophy, which almost no one studies for purely careerist reasons, there was a large contingent of people who essentially just wanted a degree because that's what was expected of them at their social class and life-stage, and it could have been in anything. They often wouldn't read around the subject or engage in class discussions, because their reason for being there was fundamentally an extension of going to school. Not to say that they didn't enjoy some parts of it; it was more that they had never developed a habit of intellectual curiosity.
Counterpoint: If the trend continues and huge numbers of students use AI to do nearly all of their coursework, the "high-functioning signal" will come from somewhere other than a four-year college degree. Worse yet, spending the time and money such a degree requires will come to signal something else: money to waste, aimlessness, poor initiative.
It's not really a counterpoint, since I'm talking more about a transition. I'm not convinced you're right -- a lot depends on how things shake out -- but I do fear that you are.
Fair enough.
I agree that simply banning AI use is not the way to go. I even agree that doing so favors those students who are sneaky enough to mask their usage. I just think the alternative is to design assessments that don't themselves allow for AI use.
Above all, I just really, really do not want college to become a training ground for AI-prompting.
Oh, I agree about the design part. It is something I'm working on for a journal. But I do think using AI to teach students about philosophical dialogue, as part of the curriculum, with proper oversight, could have a lot of potential.