New Guidance on Using ChatGPT and AI Tools in the Classroom

Natasha Burman

Professors and students consider what the future holds

ChatGPT and other generative artificial intelligence (AI) tools have taken the world by storm. These tools use large language models to provide natural, dialogue-like responses to queries. The public can use ChatGPT and similar AI tools to draft essays, write code, and edit work. The use of these tools in academic settings is currently the subject of widespread debate, with some arguing that they encourage cheating while others embrace their use.

While the University of Toronto’s Office of the Vice-Provost, Innovations in Undergraduate Education issued a resource page in January on using AI tools and AI-generated content in the classroom, the Faculty of Law remained silent on the issue until recently.

Ultra Vires reached out to the Dean’s Office and the law school administration on March 15 to understand the Faculty’s stance on the use of generative AI in written submissions. In response, on the evening of March 22, Associate Dean, JD Program Christopher Essert released a statement to the U of T Law community advising them of the law school’s position: absent express instructions from an instructor permitting their use, using generative AI tools on graded written submissions will constitute “the use of an unauthorized aid, which is an offence under the University’s Code of Behaviour on Academic Matters,” and may attract disciplinary action. While the Faculty “recognizes this technology can be used productively and can be an important part of a learning experience,” it is also concerned with academic integrity. This position is consistent with the guidance issued for undergraduate students.

Associate Dean Essert’s email links to the University’s resource page, which answers frequently asked questions, such as the extent to which students can use AI tools to complete their assessments and how faculty members can determine whether a student’s work includes AI-generated content.

The University states that, unless otherwise specified, students are expected to complete assignments on their own, without outside assistance. Professors are encouraged to tell students which AI tools, if any, they may use. The University notes that it is within professors’ discretion to decide whether students can use AI tools to create assignment outlines and first drafts. Consistent with Associate Dean Essert’s email, students are urged to ask their instructors if they are uncertain about the permissibility of any technology tool.

However, the University discourages professors from using AI detectors (such as GPTZero) on students’ work, as the reliability of these detectors has yet to be established. Further, the resource page notes that sharing a student’s work with one of these detectors may raise privacy and ethical issues. This raises the question of how a professor can determine whether a student’s work contains AI-generated content. Associate Dean Essert’s email is silent on this point; however, if the Faculty is largely adopting the University’s guidance in other respects, it can be assumed that it, too, discourages the use of AI detectors.

With exam season fast approaching, this guidance from the Faculty addresses the concerns of law students and faculty members alike. Ultra Vires also surveyed how other schools are responding to this issue and queried students and faculty for their thoughts on the policy.

ChatGPT makes the case for law students interested in using it. Credit: Sabrina Macklai

Beyond U of T 

New York City’s (NYC) Education Department has blocked the use of ChatGPT on school devices and networks, citing “negative impacts on student learning, and concerns regarding the safety and accuracy of content.” NYC is not the only school district to ban the AI-powered chatbot outright: as of January, Seattle Public Schools has banned ChatGPT from all school devices, reasoning that the district “does not allow cheating and requires original thought and work from students.” Similar controls have been implemented in Queensland and New South Wales, Australia.

In contrast, many Canadian universities have, like U of T, opted to release faculty guidance pages. The page issued by York University and the memo from the Associate Vice-President, Academic of the University of Waterloo both state that the unauthorized use of ChatGPT or similar AI tools to generate content that is submitted as one’s own work is a violation of academic honesty. A uOttawa page on academic integrity directs students to ask their professors whether using AI generators violates academic integrity provisions; further, uOttawa students must disclose any use of AI-generated content.

As of publication, McGill University has taken a different approach. Rather than issuing formal guidance, the Dean of Students sent out an internal newsletter describing ChatGPT as the beginning of a paradigm shift, akin to the release of the HP-35 calculator in 1972 and the arrival of spell check on the personal computer. The Dean’s Office is soliciting student and faculty comments on how the future should look.

How Should U of T Law Regulate the Use of AI-Generated Content?

Ultra Vires reached out to Professor Benjamin Alarie, a vocal proponent of allowing AI tools in the classroom, for his thoughts on what the law school’s policy should look like. Prof. Alarie used ChatGPT to help formulate his comment.

“First, I genuinely believe that AI and ChatGPT have the power to change education for the better,” commented Prof. Alarie. “New tech can help students and faculty in so many ways, like offering personalized explanations and sparking new ideas. But, of course, we’ve got to find the right balance between using these tools effectively and addressing academic integrity concerns.” 

In Prof. Alarie’s view, the Faculty should permit students to use AI tools like ChatGPT so long as they stay within clear guidelines. To Prof. Alarie, AI can be a “helpful sidekick” but not a replacement for students’ own thinking and work. 

Prof. Alarie also addressed the concern of academic integrity. He noted that “a reasonable guideline [for students using AI-generated content] is that they should treat it like any other source and give proper credit.” He emphasized that it is the job of faculty members, as academics, “to help [students] understand the responsible use of AI in their academic journey.”

Regarding the use of AI detectors, Prof. Alarie noted that he understands why the University would be cautious about allowing their use in classrooms; however, he thinks “they could be helpful in some circumstances for teachers who want to ensure a level playing field for all students.”

Further, Prof. Alarie advocated for finding a “middle ground,” such as “fostering a collaborative environment where students, teachers, and AI work together to maintain academic integrity. Open communication and a transparent evaluation process could be the way to go.” 

Ultimately, Prof. Alarie believes that “we can harness the power of AI to create an even more enriching learning experience if we approach it thoughtfully and responsibly.”

Professor Ariel Katz also commented on the use of AI detectors, noting that, at the very least, they would likely raise no student intellectual property concerns. Among other things, the situation could be analogized to the American case A.V. v iParadigms, where the use of plagiarism-detection software was held not to infringe copyright. Prof. Katz noted that the Faculty has other strong arguments against student intellectual property claims, such as arguing that “no reproduction of substantial part has been made,” or simply requiring students’ consent to the use of these detectors.

In addition to professors’ perspectives, Ultra Vires asked the U of T Law student community for their thoughts on using AI tools in the classroom. Of the students who responded, 91 percent had used ChatGPT or a similar AI tool, and 64 percent had used these tools in relation to their law school work. Many students noted that they used these tools “as a starting point” to formulate ideas and research topics; they also used them to help explain legal concepts. One student responded that they did not find AI useful for interpreting cases, stating that the language model seemed simply to make up sources that did not exist. This is a common concern: users online have noted that ChatGPT regularly generates fake academic references and should therefore be used with caution.

When asked whether students should be allowed to use ChatGPT or a similar AI tool to help with their law school work, most responses recommended permitting the use of AI tools to assist with research, summarize legal topics, and make writing clearer. Students agreed that users should be warned of the risks of plagiarism and that the tools should not replace students’ own work.

On the question of policing AI content generation in academia, the polled law students were divided over whether professors should be permitted to screen students’ work using AI detectors. One student noted that where AI-generated content is banned, the ban must be enforced in some way to uphold respect for academic rules. Another student, however, commented that procedural safeguards are required to ensure that detectors, if used by professors, do not lead to disciplinary action where a student’s work is not plagiarised. In that student’s view, the current standard for finding an academic offence, the balance of probabilities, is “far too low, given the reliability issues of detectors.” The student recommended that using AI-generated content on an assignment be made its own unique academic offence, assessed on a higher standard: “beyond a reasonable doubt.”

Given the lack of clarity surrounding the efficacy of AI detectors, how the law school’s policy on the use of AI content in assignments will be policed and enforced remains uncertain.
