AI continues to pose academic challenges

Instructors set standards for AI review and use, bringing student conduct into question

As the use of Artificial Intelligence (AI) tools has become widespread, Delta faculty have struggled to contend with the challenges these tools pose to student educational success and to expectations of ethical academic conduct.

Questions remain about how best to navigate disputes between faculty and students when evidence of AI use can be inconsistent.

Delta’s Student Conduct Handbook does not address AI directly but does give guidance on academic dishonesty, outlining disciplinary steps to be taken in cases of “plagiarism … misrepresenting work completed by others as their own.”

AI large language models such as ChatGPT are trained on datasets of writing produced by others, and a student taking credit for that writing as their own may be committing plagiarism.

“A professor usually sets within their syllabus what is allowable for academic dishonesty – i.e. plagiarism, i.e. the use of AI – and from there the instructor talks with students on what’s allowable,” said Associate Vice President of Student Services Dr. James Dalske.

English Professor Paula Sheil says her standards for AI use have shifted over time.

“When [AI] first started my attitude was zero-tolerance … if I discover it I’m gonna run it through some AI-detectors, and I would spend an inordinate amount of time proving it to them and I thought ‘I’m not doing that anymore’ I’m a professional, if I detect it I’m gonna say ‘zero’ and you have to prove it to me, I’m not gonna waste my time,” said Sheil.

A student accused of improperly using AI may be referred to student services, but has the right to appeal the Vice President of Student Services’ decision or file a grievance against the instructor, following the process outlined in the Student Discipline and Appeal Procedure.

“This is what’s called a preponderance of evidence … 51 percent one way or the other is how I make a decision. Student likes my decision, great, it’s closed. Student doesn’t like my decision; they have the right to appeal,” said Dalske.

A student is guaranteed due process rights and is required to be given a written notice of what conduct warrants discipline which must include a “short statement of facts supporting the accusation,” according to the Student Conduct Handbook. 

These facts may include documentation submitted by instructors showing evidence from AI-detectors submitted through a conduct incident report.

However, as one 2023 study published in the Journal for Educational Integrity found, “when applied to human-written control responses, the tools exhibited inconsistencies, producing false positives and uncertain classifications.”

The study goes on to recommend balancing the use of such tools with “manual review and contextual considerations,” concluding that, “varying performance underscores the intricacies involved in distinguishing between AI and human-generated text and the challenges that arise with advancements in AI text generation capabilities.”

“GPT detectors frequently misclassify non-native English writing as AI generated, raising concerns about fairness and robustness,” found another 2023 study, this one conducted by Stanford University researchers.

An average of 364 students enrolled each semester in Delta’s English as a Second Language program between fall 2021 and spring 2024, according to data from Delta’s program review dashboards.

What qualifies as evidence of AI-use may vary between instructors.

“Some teachers will say they may have seen a specific tone-of-writing/tone-of-voice that has changed,” said Dalske.

Delta has no policy either qualifying or disqualifying the use of AI-detection tools. Instructors may opt to use the tools and set their own standards in their syllabi.

“I’ve had some very serious run-ins with students … you say ‘here it is’ and they still filed a grievance and met with my dean and we all had to sit together and it’s just such a waste of my time. So, now when I put it on them I said ‘I’m just gonna give you a zero and no opportunities to redo it and it’s on you to prove to me that you didn’t’ and when you make it that point blank they just don’t – nobody’s challenged me this year,” said Sheil.

One student, Sheil said, was finally caught for writing copied directly from a website, an instance of direct plagiarism rather than detected AI use.

A peer-reviewed 2024 study published in the journal PLOS One found that 94 percent of AI-generated submissions went undetected by university graders.

If student conduct escalates to a recommendation of expulsion, the student may formally request a hearing before the disciplinary measure is imposed. While evidence is brought to bear by both sides, the hearing is not subject to strict legal standards and “formal rules of evidence shall not apply,” as the Student Conduct Handbook notes.

“Again this is alleged, student conduct is not a criminal court case. We are not guilty until innocent; innocent until proven guilty,” said Dalske.

The Handbook does not specify what evidence may qualify as acceptable, but notes that the burden of proof lies with the representative of the college, not on the student.

Following a hearing, the final decision is left to the Board of Trustees, which may reject the findings of both the hearing and the Superintendent/President.

While the Board “shall prepare a new written decision which contains specific factual findings and conclusions,” the Handbook does not elaborate further on standards for that evidence.

Delta has formed an AI Taskforce under Superintendent/President Dr. Lisa Aguilera Lawrenson, but no AI-specific policy yet exists beyond the general student conduct process already in place.

“The AI Taskforce, the Distance Education Committee, and the Academic Senate all seek to make this a requirement to help students understand what tools they can and cannot use in each of their courses,” says Dr. Lynn Hawley, Distance Education Committee Chair.

Chair of the AI Taskforce and Vice President of Administration Services Augustine Chavez says the goal is to train faculty and amend administrative procedures 4020.1-2 to require an AI policy on all course syllabi.

“The Academic Senate is producing standard syllabus language that professor[s] could use on their syllabus. I would urge students to review the syllabus for the AI policy and follow what the professor has stated,” says Chavez.

On April 2, the Academic Senate unanimously passed a motion recommending the adoption of standard course syllabus language that would codify instructors’ options of allowing “usage of AI with or without citation,” “some AI usage allowed with citation,” or the more restrictive standard, “AI tool usage not allowed.”

The proposal does not stipulate guidelines for handling improper AI use in the classroom or for determining what should constitute evidence of misconduct by students, giving instructors leeway to make that judgment.

Sheil, for her part, no longer takes as severe an approach. 

“While I’m dealing with this ambiguity, I’m probably gonna be more lenient this semester … without the earlier hardline repercussions,” said Sheil, adding that AI can be helpful if used “to help you brainstorm,” keeping students critically engaged with the material.

In some cases, a decision may come down to the effect stricter enforcement could have on students.

“Sometimes I wonder, am I gonna be the gatekeeper between this person getting a ‘C’ and them becoming a police officer? Do I want to die on that hill?” says Sheil.