In recent updates, OpenAI has drawn attention to a pressing issue for teachers and professors: the sites and apps that claim to detect AI-generated content in students’ work are less reliable than one might hope.

In their FAQ section aimed at educators, OpenAI admits that AI content detectors struggle to consistently differentiate between AI-generated and human-written text. Surprisingly, they even note that detectors of this kind have labeled human-written texts, including classic works by Shakespeare and the Declaration of Independence, as AI-generated.

To compound the problem, these content detectors often mistakenly flag work by students who do not speak English as their first language as AI-generated. The Markup previously reported this issue.

At the heart of this problem is ChatGPT, a popular tool among students due to its ability to generate text and provide human-like responses. It has become convenient for tasks like essay writing and research.

However, teachers are growing concerned about students potentially using ChatGPT to cheat by passing off its ideas and phrases as their own. They worry that students may come to rely too heavily on the tool despite its occasional errors and inaccuracies.

Professors began noticing students using ChatGPT to cheat on college essays just a little over a month after the chatbot’s release in November 2022. A survey earlier this year revealed that one in four teachers claimed to have caught students cheating by using ChatGPT.

OpenAI recognizes the challenges educators face when dealing with AI-generated content presented as students’ own work. They suggest a possible solution: asking students to keep records of their conversations with ChatGPT and submit them alongside their homework.

According to OpenAI, this approach can help students track their progress over time, allowing them to see how their skills in asking questions, analyzing responses, and integrating information have evolved.

It’s worth noting that OpenAI also acknowledges that ChatGPT is not free from biases and stereotypes, urging users and educators to review its content carefully.

As this issue gains more attention, educators and institutions may need to rethink their strategies to maintain academic integrity in the age of AI-powered assistance. OpenAI’s guidance and transparency are essential steps in addressing these challenges.

Possible Solutions Mentioned on Reddit

This challenge was also discussed in a Reddit thread, where users shared some practical and promising solutions that I genuinely believe can effectively address it. I want to share these comments because they are not only doable but could make a significant impact.

As we’re only beginning to tap into the vast potential of AI in education, it’s crucial that we adapt and strike the right balance for the future. These suggestions are promising steps in that direction.

Here are the comments:

This is the dumbest take and I still can’t believe people are falling for it. I’m a teacher, I can catch it; I caught two this week. I must be magic.

Here is my secret: I actually read their work. ChatGPT is a predictive language model, it literally works backward from natural human thought. Natural human thought begins with an idea and then tries to use language to express it. AI looks at words presented and then presents words that it has seen come after in response. It doesn’t know why those answers are given as a response, it just knows that similar prompts have been responded to with similar answers. So the end result is no clear thesis but lots of “keywords.” Even individual sentences often lack clear meaning. It doesn’t look like human language.

If I ever suspect a student turned in AI generated work, all I do is ask them about things they wrote. The students that turn in those fake writings are too lazy to even read what they copied so they never have any clue what you’re asking. Yesterday a student turned in an AI-generated essay about Thomas Paine’s religious beliefs (wasn’t even the question). I simply asked him in person, “so tell me about Thomas Paine’s religious beliefs,” he had no idea what I was talking about— minutes after supposedly composing 5 paragraphs on the subject.

If teachers are engaged with their students’ work, finding AI is easy.

By pomonamike on Reddit
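The commenter’s point about prediction can be sketched with a toy example. The snippet below is a deliberately crude bigram model, nowhere near what ChatGPT actually does, but it illustrates the core behavior described above: tally which words follow which, then “predict” the most frequent successor, with no notion of what anything means. All names and the sample corpus are mine, for illustration only.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words have been seen following it."""
    words = text.lower().split()
    followers = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        followers[current][nxt] += 1
    return followers

def predict_next(followers, word):
    """Return the word most often seen after `word`, or None if unseen."""
    if word not in followers:
        return None
    return followers[word].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat chased the cat"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" most often here
print(predict_next(model, "on"))   # "the" is the only word seen after "on"
```

The model has no idea why “cat” follows “the”; it only knows that it usually has. Scaled up enormously, that is the gap the teacher exploits: the text can look fluent while the “author” cannot explain a word of it.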

I think it’s much easier than this. Educators should use tests in class to get a baseline for a student every few months. GPT generated work will be quite obviously different both in the writing style and the grades received.

By ixid on Reddit

I agree. Just ask each kid to tell you what they wrote about when they hand it in. Sure there are a few outcomes, but, most of the students who did the assignment will have learned something and could tell you about what they wrote. The ones who used chatgpt will stand there slack jawed.

By vitium on Reddit

Start the semester having them write 2-3 writing assignments in class.

Keep a copy to compare to their writing style in the future, should anything look weird later on.

Easy way to catch 90% of lazy cheaters.

Cheaters that put in a ton of effort you’re probably never going to catch, oh well.

By penguished on Reddit

Note: In sharing these comments from Reddit, my aim was solely to disseminate valuable suggestions, with no intention of infringement. If any content owner wishes to have their material removed, please kindly contact us at [email protected], and we will promptly address your request.

However, if you’re interested in exploring even more innovative solutions, you can find a wealth of great suggestions on this Page. Let’s embrace the possibilities of AI while ensuring it serves as a force for positive change in education.
