Despite what students may think, McMaster’s restrictive AI guidelines will best prepare students for the workforce of tomorrow
McMaster’s AI advisors have not had an easy job dealing with the rise of AI. When ChatGPT was first released for public use, the university had to quickly throw together provisional guidelines that mostly prohibited the use of AI. Seemingly out of nowhere, a huge threat to honest academic work became available to all students, giving academic institutions little time to consider how to respond.
McMaster’s response has taken time, but the beginning of this year has marked the introduction of guidelines no longer considered provisional.
These guidelines are unpopular with many and the consulting process created rifts between the McMaster Students Union and the university administration. MSU president Jovan Popovic suggested that students need to be prepared to work with AI in a future workforce in which the use of AI is prevalent. Meanwhile, the university was greatly concerned about the significant risks that AI poses to university pedagogy by undermining students' engagement with their coursework and learning.
The final guidelines have fallen firmly on the side of mitigating educational risks, without a single mention of the AI skills that might be required for the future of work. While this may disappoint student union activists who fought for more permissive AI use, I think the guidelines are best for students entering an uncertain AI future.
The guidelines’ ultimate goal is to maintain the integrity of the university learning process. This process is one based on learning the methods relevant to a given field of study, rather than simple content-based learning. This process is under threat from generative AI’s capabilities to produce text indistinguishable from that written by a human, to analyze data and to interpret primary sources.
AI’s ability to do this work convincingly represents a fundamental threat to intellectual labour. The MSU’s position, informed by this belief, is that students need to familiarize themselves with using generative AI in order to prepare themselves for a workplace dominated by AI use. But this fails to account for experts' varied views on what a future with AI might look like.
Without denying its potential to change the landscape of work, MIT Sloan, the Massachusetts Institute of Technology's business school, has discussed what the direct impacts on workers might be. It suggests that subject-matter experts and experienced employees will be increasingly required to work alongside AI, judging the quality of its output and the appropriateness of its use.
McKinsey, a globally recognized management consultancy, argues that humans will still have to check the work of AI to ensure it is correct and accurate. So, maintaining and fostering our abilities to write, validate sources and ensure the quality of our work remains essential. Considering the errors that AI can and does make, AI is best used for well-defined, job- and company-specific tasks such as searching through proprietary data. It is vital that we maintain and foster our creative and critical thinking abilities and not blindly trust AI with such important tasks.
McMaster’s new guidelines’ continued focus on teaching core skills is best suited to creating knowledgeable experts who are ready to excel at the tasks they are assigned, who understand where AI might help their work and who recognize the importance of verifying the accuracy of AI's outputs. Additionally, the new guidelines’ promotion of the long, repetitive process of learning will produce students prepared for a workforce defined by lifelong learning.
I believe McMaster’s current guidelines, old-school as they are, are in fact the best model for creating students ready to work alongside AI, regardless of how it may develop. Anyone can write a prompt for ChatGPT, but only well-educated experts will be truly prepared for the work left over.
McMaster community members share what they believe this report means for the community, particularly with regard to possible use of GenAI Turnitin
To better understand how generative artificial intelligence could be used in educational settings at McMaster University, a Generative Artificial Intelligence in Teaching and Learning Task Force was created in May 2023. The finalized report was then released to the McMaster community by Susan Tighe, provost and vice-president (academic), in late September 2023.
Erin Aspenlieder, the coordinator for the task force and associate director at the Paul R. MacPherson Institute for Leadership, Innovation and Excellence in Teaching, first heard about ChatGPT in Nov. 2022 on a podcast. She had been fascinated with GenAI technology and was curious about what this could mean for educational settings. Since then she has been learning about GenAI and its many functions.
As Aspenlieder learned more and began to speak with the McMaster community, she found there were some who were excited about GenAI’s future while others were apprehensive.
Jovan Popovic, McMaster Students Union president, was brought onto the Task Force by Kim Dej, vice-provost (teaching and learning). Popovic and MSU vice president (education) Abigail Samuels were both task force members and were heavily involved in conversations surrounding implementing GenAI.
Popovic expressed in an interview with the Silhouette that the final report reads to him as a discouragement to the use of GenAI in classrooms. He believed that GenAI is one of the most powerful learning tools and he worried that, by discouraging its use, McMaster students may fall behind a society that is utilizing GenAI as a tool to assist learning.
Popovic also shared that he is disappointed that, despite students being discouraged from using GenAI, one of the items included in the final report is the possible integration of GenAI Turnitin. Turnitin is a software that is utilized around the world to detect plagiarism by comparing submitted work with resources that already exist.
Popovic has shared written statements of disagreement about the integration of GenAI Turnitin with both the task force and the broader McMaster community.
Popovic referenced a piece in the Washington Post that examined the negative effects of GenAI Turnitin in educational settings. He also highlighted his concern for students falsely accused of cheating by the software and wanted to make sure that something would be done to protect these students.
“The biggest concern at the immediate moment is the Turnitin AI detection software. The concern of academic integrity cases flying through the roof on students who really shouldn't be going through [it] . . . I strongly believe that this may not deter the dishonest from continuing to use such resources, but it will deter those who study with ethics, seeing it potentially as a frightening threat,” said Popovic.
Aspenlieder explained that McMaster is currently conducting a privacy impact assessment (PIA) and cost/benefit analysis for the use of GenAI Turnitin and acknowledged that the software does come with some uncertainties. She said its implementation at McMaster will depend on the results of the PIA and cost/benefit analysis.
Lucas Mei, a fourth-year linguistics student, shared in an interview with the Silhouette that he has been keeping up with the development of GenAI for a while. Despite being very impressed by the technology, he disliked its use in academics. He stated that he thought using GenAI tools, such as ChatGPT, in academics could often keep students from problem-solving through their work by themselves.
Mei also expressed that when he read the task force's report he felt that the person who wrote the report may not necessarily be the most knowledgeable about GenAI. He attributed this impression to the fact that many people in higher positions are often unaware of the applications of advanced technology.
Ultimately, Mei hoped that as the university continues to look into GenAI, there will be people on the task force who better understand newer AI technology.
“I'm hoping that someone [on the Task Force] is of our generation or a millennial . . . and can actually understand AI. I'm just really hoping for that. Because I've seen way too many times things completely fall through because of lack of expertise and poor management and egos getting in the way,” said Mei.
As the next steps begin to be explored, McMaster students are encouraged to attend the November town halls organized by the task force, which will be announced later in the semester.
For those unable to attend the town halls, Aspenlieder shared that the task force is working on an open feedback form, and Popovic encouraged MSU students to reach out to Samuels and himself by email with any comments or concerns.
In a future with AI, we need to harness ChatGPT’s potential as a tool for teaching and learning
Change is inevitable in our constantly shifting and unpredictable world. Whether that change is for better or for worse, we adapt. And we can expect to see the same with the increasing use of powerful AI tools like ChatGPT.
ChatGPT is a conversational chatbot available to users for free. It can perform a range of different functions with varying complexity based on simple prompts. The AI can answer thoughtful questions, prepare essays, write code and do so much more.
With the rise of AI in the realm of education, many academics are marking ChatGPT as a threat to teaching – but it doesn’t have to be. This premature fear is preventing us from appreciating the benefits of ChatGPT for education.
When the calculator was invented, it, too, wreaked havoc among educators. The calculator brought fear that students would no longer practice computational skills and would render themselves dependent on the device.
However, we adapted. Schools didn’t give up on teaching math. Instead, they began challenging students with more complex mathematical concepts. Working around the cheating-related concerns posed by calculators paved the way for smarter methods of teaching and learning. In the same way, ChatGPT holds incredible applications for both students and educators.
Industries and professionals are already using ChatGPT to perform and collaborate on a range of projects and tasks. For instance, many companies have begun implementing ChatGPT as a personal assistant to help with managing meetings and schedules, writing emails, generating code and completing a variety of other time-saving functions.
With the growing use of AI in industries, some educators are realizing the need to prepare graduates who are ready to navigate a world where AI is ubiquitous. Students need to be encouraged to develop their knowledge and skills surrounding AI tools like ChatGPT so that they are aware of the limitations and ramifications of their use and misuse.
In fact, the MacPherson Institute at McMaster University has already begun to address the potential benefits of ChatGPT in classrooms as a tool to enrich teaching and learning. One McMaster professor from the School of Interdisciplinary Science, Dr. Katie Moisse, plans to ask students to use ChatGPT to prepare scientific content and then edit and annotate the content to follow principles of inclusive science communication. Redesigning assignments in this way creates opportunities for students to use AI and demonstrate their critical thinking and course-related skills.
For students and educators, the applications of ChatGPT are truly limitless. Educators can use ChatGPT to enhance lesson plans, develop study resources and test students for critical concepts in innovative ways. Similarly, students can explore the AI as a personalized tool for creating study schedules, understanding challenging concepts and preparing their own study materials like flashcards, summaries of content and practice questions.
While we explore the benefits of ChatGPT and integrate it into education, we must be mindful of its limitations too. As a chatbot trained on heaps of text, the AI does not necessarily know what it’s talking about. It can generate inaccurate or biased information at times and remaining wary of these imperfections is necessary.
With ChatGPT in the arena, it’s time to rethink education.
We need to embrace AI technologies and thoughtfully apply them to create opportunities for teaching and learning in ways that are engaging, equitable and ethical.
This article is the second in a two-part series. Be sure to check out Part 1: ChatGPT is not your friend.
Sure, ChatGPT can do your homework for you - but that doesn't mean you should use it
As human beings, we are programmed to take shortcuts. We’re inherently lazy. And the story is no different when it comes to our schoolwork.
Last fall, educational institutions around the world grappled with a sudden surge in cases of academic dishonesty with the launch of the powerful AI system, ChatGPT.
If you haven’t heard of the viral AI yet, you’ve likely been living under a rock.
ChatGPT is a chatbot that can perform a range of different functions based on prompts for free. It can solve that tough physics problem you spent hours working through, or better yet, write your next 5-page essay in a matter of seconds.
As busy university students, it’s undoubtedly tempting to use ChatGPT to make our lives easier and buy us time that we feel is better spent elsewhere. But with growing concerns for the fate of education, we need to weigh the risks and benefits of using ChatGPT.
Though ChatGPT is ridiculously skilled, the AI has some critical limitations. For one, ChatGPT’s knowledge is outdated. The chatbot’s training data only extends to 2021, so it knows nothing about anything more recent. And the AI is imperfect in other ways as well.
Because the AI was trained on large sets of human-generated text, its knowledge is limited to what that text contains. As a result, it can make mistakes. For instance, the chatbot may experience issues with accuracy, grammar and biased content. ChatGPT also fails to provide references for the information it generates, leaving users to fact-check the content.
Beyond its structural imperfections, using ChatGPT robs you of a valuable learning experience. AI cannot teach you essential skills, like critical thinking and communication, which translate to the real world. It detracts from our motivation to learn, creativity, and ability to express ourselves. As dull and unoriginal as it sounds, it’s true. These are skills and qualities you need to succeed in any career, and more importantly, in life. When you choose to cheat on an assignment, you miss out on an important opportunity to improve your competencies and knowledge.
But quite frankly, you’ve already heard this spiel enough times to know that an education, or rather an honest education, matters.
At the end of the day, all we truly care about is passing our classes so that one day we might hold a really expensive piece of paper that validates our years of blood, sweat, and tears. Yet, it’s surprising to think that students are willing to jeopardize years of hard work over a few assignments.
AI output detectors are getting better at their jobs. In fact, Turnitin can now detect AI-generated text. This means you likely won't be able to get away with using ChatGPT to cheat. And getting caught for academic misconduct holds harsh consequences. It can leave a permanent stain on your transcript and reputation, lead to expulsion or suspension, and even destroy your opportunities for higher education or certain careers.
These are things to consider before you obliviously choose to use the chatbot for your next assignment. But don’t take my word for it. Just ask ChatGPT.
This article is the first of a two-part series. Stay tuned for the next article on the future of education in a world with growing AI.