Despite what students may think, McMaster’s restrictive AI guidelines will best prepare students for the workforce of tomorrow
McMaster’s AI advisors have not had an easy job dealing with the rise of AI. When ChatGPT was first released for public use, the university had to quickly assemble provisional guidelines that largely prohibited the use of AI. Seemingly out of nowhere, a major threat to honest academic work became available to all students, giving academic institutions little time to consider how to respond.
McMaster’s response has taken time, but the beginning of this year has marked the introduction of guidelines no longer considered provisional.
These guidelines are unpopular with many and the consulting process created rifts between the McMaster Students Union and the university administration. MSU president Jovan Popovic suggested that students need to be prepared to work with AI in a future workforce in which the use of AI is prevalent. Meanwhile, the university was greatly concerned about the significant risks that AI poses to university pedagogy by undermining student engagement with their coursework and learning.
The final guidelines have fallen firmly on the side of mitigating educational risks, without a single mention of the AI skills that might be required for the future of work. While this may disappoint student union activists who fought for more permissive AI use, I think the guidelines are best for students entering the uncertain AI future.
The guidelines’ ultimate goal is to maintain the integrity of the university learning process. This process is one based on learning the methods relevant to any given field of study, rather than simple content-based learning. These processes are threatened by generative AI’s capabilities to produce text indistinguishable from that written by a human, to analyze data and to interpret primary sources.
AI’s ability to do this work convincingly represents a fundamental threat to intellectual labour. The MSU’s position, informed by this belief, is that students need to familiarize themselves with using generative AI in order to prepare themselves for a workplace dominated by AI use. But this fails to account for experts' varied views on what a future with AI might look like.
Without denying its potential to change the landscape of work, MIT Sloan, the Massachusetts Institute of Technology's business school, has discussed what the direct impacts on workers might be. It suggests that subject-matter experts and experienced employees will be increasingly required to work alongside AI, judging the quality of its output and the appropriateness of its use.
McKinsey, a globally recognized management consultancy, argues that humans will still have to check the work of AI to ensure it is correct and accurate. So, maintaining and fostering our abilities to write, validate sources and ensure the quality of our work remains essential. Considering the errors that AI can and does make, AI is best used for well-defined, job- and company-specific tasks such as searching through proprietary data. It is vital that we maintain and foster our creative and critical thinking abilities and not blindly trust AI with such important tasks.
McMaster’s new guidelines’ continued focus on teaching core skills is best suited to creating knowledgeable experts, ready to excel at the tasks they are assigned, who understand where AI might help their work and the importance of verifying the accuracy of its outputs. Additionally, the new guidelines’ promotion of the long, repetitive process of learning will create students prepared for a workforce defined by lifelong learning.
I believe McMaster’s current guidelines, old-school as they are, are in fact the best model for creating students ready to work alongside AI, regardless of how it may develop. Anyone can write a prompt for ChatGPT; only well-educated experts will be truly prepared for the work left over.
Recently launched “AI Dialogues” podcast presents information and nuanced discussion about the use of AI in university
A new Spotify podcast titled AI Dialogues was recently launched by McMaster's MacPherson Institute, featuring discussions on the use of generative AI in higher education and at McMaster University.
The MacPherson Institute is McMaster's teaching and learning center and provides instructors and teaching assistants with resources and training. The podcast discusses practical and ethical questions regarding the use of AI in education and aims to present this discussion to both educators and students who may not be familiar with AI technology.
According to McMaster's provisional guidelines on the use of AI and the final report by the Task Force on Generative Artificial Intelligence in Teaching and Learning, the university's current policy is that students should assume they do not have permission to use generative AI unless otherwise specified by their instructor.
In an interview, Stephanie Verkoeyen, an educational developer at the MacPherson Institute and the host of the podcast, expressed thoughts on many instructors restricting the use of generative AI in the classroom. “A lot of instructors seem to be taking the approach right now of just banning use (of AI),” said Verkoeyen.
Verkoeyen stated that a reason for this approach may be a lack of resources and dedicated opportunities for educators to investigate the positive and negative implications of AI for themselves and their students. Verkoeyen hopes that the podcast will reach these educators and bring them different perspectives on the use of AI in higher education.
Four episodes have been released and have featured discussions with guests who take both supportive and more critical positions on the use of AI in education. For the second episode, MSU President Jovan Popovic was featured as a guest and discussed what he has been hearing from students on the topic.
On the fourth episode, guest Mat Savelli, an associate professor in the Department of Health, Aging and Society at McMaster, shared a more critical perspective of the use of AI. According to Verkoeyen, aspects of this perspective were rooted in concerns that it could undermine the critical thinking skills of students, such as when AI is used to summarize information.
For future episodes, Verkoeyen stated that some planned topics include discussing how educators can be better taught and trained to use and apply generative AI in their teaching, what potential to improve teaching AI possesses and how AI could improve the accessibility of learning in higher education.
Students, educators, and anyone interested in the discussion surrounding the use of AI in higher education can listen to the podcast on Spotify.
McMaster community members share what they believe this report means for the community, particularly with regard to possible use of GenAI Turnitin
To better understand how generative artificial intelligence could be used in educational settings at McMaster University, a Generative Artificial Intelligence in Teaching and Learning Task Force was created in May 2023. The finalized report was then released to the McMaster community by Susan Tighe, provost and vice-president (academic), in late September 2023.
Erin Aspenlieder, the coordinator for the task force and associate director at the Paul R. MacPherson Institute for Leadership, Innovation and Excellence in Teaching, first heard about ChatGPT in Nov. 2022 on a podcast. She had been fascinated with GenAI technology and was curious about what this could mean for educational settings. Since then she has been learning about GenAI and its many functions.
As Aspenlieder learned more and began to speak with the McMaster community, she found there were some who were excited about GenAI’s future while others were apprehensive.
Jovan Popovic, McMaster Students Union president, was brought onto the Task Force by Kim Dej, vice-provost (teaching and learning). Popovic and MSU vice president (education) Abigail Samuels were both task force members and were heavily involved in conversations surrounding implementing GenAI.
Popovic expressed in an interview with the Silhouette that the final report reads to him as a discouragement to the use of GenAI in classrooms. He believed that GenAI is one of the most powerful learning tools and he worried that, by discouraging its use, McMaster students may fall behind a society that is utilizing GenAI as a tool to assist learning.
Popovic also shared that he is disappointed that despite students being discouraged from using GenAI, one of the items included in the final report is the possible integration of GenAI Turnitin. Turnitin is a software that is utilized around the world to detect plagiarism by comparing work with resources that already exist.
Popovic has shared written statements of disagreement about the integration of GenAI Turnitin with both the task force and the broader McMaster community.
Popovic referenced a piece by the Washington Post that examined the negative influences of GenAI Turnitin in educational settings. He also highlighted his concern for students falsely accused of cheating by the software and wanted to ensure that something would be done to protect these students.
“The biggest concern at the immediate moment is the Turnitin AI detection software. The concern of academic integrity cases flying through the roof on students who really shouldn't be going through [it] . . . I strongly believe that this may not deter the dishonest from continuing to use such resources, but it will deter those who study with ethics, seeing it potentially as a frightening threat,” said Popovic.
Aspenlieder explained that McMaster is currently conducting a privacy impact assessment and a cost/benefit analysis for the use of GenAI Turnitin and acknowledged that the software does come with some uncertainties. She says that its implementation at McMaster will depend on the results of these assessments.
Lucas Mei, a fourth-year linguistics student, shared in an interview with the Silhouette that he has been keeping up with the development of GenAI for a while. Despite being very impressed by the technology, he disliked its use in academics. He stated that he thought using GenAI tools, such as ChatGPT, in academics could often prevent students from problem-solving through their work by themselves.
Mei also expressed that when he read the task force's report he felt that the person who wrote the report may not necessarily be the most knowledgeable about GenAI. He attributed this impression to the fact that many people in higher positions are often unaware of the applications of advanced technology.
Ultimately, Mei hoped that as the university continues to look into GenAI there are people on the task force who can better speak to the understanding of newer AI.
“I'm hoping that someone [on the Task Force] is of our generation or a millennial . . . and can actually understand AI. I'm just really hoping for that. Because I've seen way too many times things completely fall through because of lack of expertise and poor management and egos getting in the way,” said Mei.
As the next steps begin to be explored, McMaster students are encouraged to attend the November town halls organized by the task force, which will be announced later in the semester.
If you are unable to attend the town halls, Aspenlieder shared that the task force is working on an open feedback form, and Popovic encouraged MSU students to reach out to Samuels and himself by email with any comments or concerns.
New university task force works on clearer protocols around use of AI tools in the classroom, provides provisional guidelines ahead of the fall semester
The recent rise in generative artificial intelligence use has pushed universities to address the lack of definitive and researched protocols for its use in the classroom.
On May 1, 2023, the Paul R. MacPherson Institute for Leadership, Innovation and Excellence in Teaching launched its Generative Artificial Intelligence in Teaching and Learning Task Force. The task force’s goal is to better understand the impact of generative AI through an educational lens and develop recommendations for policies around its use at McMaster University.
“Task Force members representing all six Faculties included faculty, undergraduate and graduate students, staff and senior administrators. The efforts of this diverse group of experts are summarized in a Final Report. . .The Final Report will also include recommendations for continued work across all areas of the University, which may include research, teaching and learning and staff work,” said Kim Dej and Matheus Grasselli, co-chairs of the task force, in a written statement.
On Sept. 10, they will submit their recommendations to Susan Tighe, provost and vice president (academic), after which they will undergo further review before being released.
Until this process is complete, the university has released provisional guidelines to help guide the use of generative AI in the meantime.
As McMaster prepares to release its specific policies and guide for generative AI, everyone is encouraged to use the provisional guidelines and resources provided on the Generative Artificial Intelligence in Teaching and Learning website.
Transparency is at the core of these guidelines. Instructors are permitted to integrate generative AI tools, such as ChatGPT, into their courses if they so choose, but they must clearly communicate to their students the extent to which these tools are permitted to be used.
When it comes to student work and assessments, while instructors are again permitted to integrate generative AI tools into these tasks, unless told otherwise, students should operate with the assumption that the use of these tools is not permitted.
If members of the McMaster educational community have any comments or concerns about the provisional guidelines or future guidelines, they are encouraged to share them through the task force's feedback form.
In a future with AI, we need to harness ChatGPT’s potential as a tool for teaching and learning
Change is inevitable in our constantly shifting and unpredictable world. Whether that change is for better or for worse, we adapt. And we can expect to see the same with the increasing use of powerful AI tools like ChatGPT.
ChatGPT is a conversational chatbot available to users for free. It can perform a range of different functions with varying complexity based on simple prompts. The AI can answer thoughtful questions, prepare essays, write code and do so much more.
With the rise of AI in the realm of education, many academics are marking ChatGPT as a threat to teaching – but it doesn’t have to be. This premature fear is preventing us from appreciating the benefits of ChatGPT for education.
When the calculator was invented, it, too, wreaked havoc among educators. The calculator brought fear that students would no longer be able to practice computational skills and would become dependent on the device.
However, we adapted. Schools didn’t give up on teaching math. Instead, they began challenging students with more complex mathematical concepts. Working around the cheating-related concerns posed by calculators paved the way for smarter methods of teaching and learning. In the same way, ChatGPT holds incredible applications for both students and educators.
Industries and professionals are already using ChatGPT to perform and collaborate on a range of projects and tasks. For instance, many companies have begun implementing ChatGPT as a personal assistant to help with managing meetings and schedules, writing emails, generating code, and completing a variety of other functions that save time.
With the growing use of AI in industries, some educators are realizing the need to prepare graduates who are ready to navigate a world where AI is ubiquitous. Students need to be encouraged to develop their knowledge and skills surrounding AI tools like ChatGPT so that they are aware of the limitations and ramifications of their use and misuse.
In fact, the MacPherson Institute at McMaster University has already begun to address the potential benefits of ChatGPT in classrooms as a tool to enrich teaching and learning. One McMaster professor from the School of Interdisciplinary Science, Dr. Katie Moisse, plans to ask students to use ChatGPT to prepare scientific content and then edit and annotate the content to follow principles of inclusive science communication. Redesigning assignments in this way creates opportunities for students to use AI and demonstrate their critical thinking and course-related skills.
For students and educators, the applications of ChatGPT are truly limitless. Educators can use ChatGPT to enhance lesson plans, develop study resources and test students for critical concepts in innovative ways. Similarly, students can explore the AI as a personalized tool for creating study schedules, understanding challenging concepts and preparing their own study materials like flashcards, summaries of content and practice questions.
While we explore the benefits of ChatGPT and integrate it into education, we must be mindful of its limitations too. As a chatbot trained on heaps of text, the AI does not necessarily know what it’s talking about. It can generate inaccurate or biased information at times, and remaining wary of these imperfections is necessary.
With ChatGPT in the arena, it’s time to rethink education.
We need to embrace AI technologies and thoughtfully apply them to create opportunities for teaching and learning in ways that are engaging, equitable and ethical.
This article is the second in a two-part series. Be sure to check out Part 1: ChatGPT is not your friend.
Sure, ChatGPT can do your homework for you - but that doesn't mean you should use it
As human beings, we are programmed to take shortcuts. We’re inherently lazy. And the story is no different when it comes to our schoolwork.
Last fall, educational institutions around the world grappled with a sudden surge in cases of academic dishonesty with the launch of the powerful AI system, ChatGPT.
If you haven’t heard of the viral AI yet, you’ve likely been living under a rock.
ChatGPT is a chatbot that can perform a range of different functions based on prompts for free. It can solve that tough physics problem you spent hours working through, or better yet, write your next 5-page essay in a matter of seconds.
As busy university students, it’s undoubtedly tempting to use ChatGPT to make our lives easier and buy us time that we feel is better spent elsewhere. But with growing concerns for the fate of education, we need to weigh the risks and benefits of using ChatGPT.
Though ChatGPT is ridiculously skilled, the AI has some critical limitations. For one, ChatGPT’s knowledge is outdated. The chatbot can only process information from 2021 or earlier. And the AI is imperfect in other ways as well.
Because the AI was trained on large sets of human-generated text, it does not have access to all of the knowledge held by humans. As a result, it can make mistakes. For instance, the chatbot may experience issues with accuracy, grammar, and biased content. ChatGPT also fails to provide references for the information it generates, leaving users to fact check the content.
Beyond its structural imperfections, using ChatGPT robs you of a valuable learning experience. AI cannot teach you essential skills, like critical thinking and communication, which translate to the real world. It detracts from our motivation to learn, creativity, and ability to express ourselves. As dull and unoriginal as it sounds, it’s true. These are skills and qualities you need to succeed in any career, and more importantly, in life. When you choose to cheat on an assignment, you miss out on an important opportunity to improve your competencies and knowledge.
But quite frankly, you’ve already heard this spiel enough times to know that an education, or rather an honest education, matters.
At the end of the day, all we truly care about is passing our classes so that one day we might hold a really expensive piece of paper that validates our years of blood, sweat, and tears. Yet, it’s surprising to think that students are willing to jeopardize years of hard work for a few assignments.
AI output detectors are getting better at their jobs. In fact, Turnitin can now detect AI-generated text. This means you likely won't be able to get away with using ChatGPT to cheat. And getting caught for academic misconduct carries harsh consequences. It can leave a permanent stain on your transcript and reputation, lead to expulsion or suspension, and even destroy your opportunities for higher education or certain careers.
These are things to consider before you obliviously choose to use the chatbot for your next assignment. But don’t take my word for it. Just ask ChatGPT.
This article is the first of a two-part series. Stay tuned for the next article on the future of education in a world with growing AI.
The Silhouette: Please introduce yourself.
Tinson Chen: My name is Tinson Chen. I'm a fourth-year student in the arts and science program and combining with computer science. I use he and him pronouns and I am the President of the Students’ Association of Arts and Science Students and the [Vice President] of engagement of the McMaster AI Society.
How did you become interested in AI?
The pivot to the liberal arts was a decision I made near the end of high school. Once I'd gotten into the program and knew I wanted to stay, I got involved with the student politics of [the program]. I was a year [representative], senior program advisor and now the president. It was a good last opportunity to bring back a bunch of the traditions that the last pre-pandemic year of students knew. The reason I got into AI was that it's the most cutting-edge thing. The way I started with Mac AI was that I was a humanities and social science coordinator, since they all have different faculty coordinators. For science and engineering, it's clearer how it relates to AI. Whereas, in the humanities and social sciences, [there’s] a less obvious connection to machine learning. So, my big role was getting humanities and social science people to be interested in it.
Why did you make that turn to liberal arts?
I wanted to keep my options open. It was the end of high school and I was talking to my guidance counsellor. I was interested in a lot of stuff, into trivia too, and she told me: "Hey, there's this program that's pretty reputable and lets you pursue everything you want to do." She was talking about artsci. I also really wanted a well-rounded education and to avoid tunnel vision for AI. I think the liberal arts can really inform the philosophy and the ethics of AI.
Considering the breadth of your interests, do you know what you would like to pursue after your undergraduate degree?
My interests, academically at least, are to do with natural language and getting computers to create natural language. If we were to create a computer that could actually convince a human of its humanity, that is sort of equivalent to solving the problem. I feel like the channel of language is the key to what we call intelligence. So that's what motivates me and why I'm pursuing a minor in linguistics as well. Non-academically, I wouldn't mind taking a couple years to cook around different places, learn different techniques and travel a little bit. You know, just learn the ins and outs of cooking.
When did you become passionate about cooking?
Wow, this is really making me realize how much I've changed going into university. This was only for the last bit of high school. Once I got to university, I was in Bates and had a kitchen. This gave me the chance to cook a lot more and get the ingredients to experiment with.
Is there anything else you'd like to share?
Maybe Parkinson's Law: work expands to fill time. You can do as much as you'd like. You just have to do it all shoddily.
By: Evonne Syed
The topic of integrating artificial intelligence and robots into the workforce rouses the concern of anyone wishing to enter the job market, and the same goes for postsecondary students.
Fortunately, the future is optimistic for students as automation is not expected to prevent graduates from attaining their career goals.
In fact, the rise of automation actually improves career prospects for university graduates, as it is creating a new job market. Forbes Magazine reports that artificial intelligence is predicted to create 58 million jobs by 2022.
As the popularity of automation systems and the use of artificial intelligence in the workplace becomes more widespread, there will be more and more people required to actually build and develop these systems.
This will open up opportunities for those who wish to enter the fields of robotics and information technology. BBC News anticipates the prominence of data analysts, social media specialists and software developers, as a result.
For this reason, while one may argue that automation has resulted in the elimination of certain jobs, the introduction of automation in the workforce is actually creating more jobs and opportunities in our current digital age.
Luckily, McMaster University has many programs to equip students with the necessary skills to flourish in our digital age. The recent construction of the Hatch Centre is a testament to McMaster’s commitment to helping students advance in these fields.
Even if one is not interested in working in the field of automation, that does not mean they will be unable to obtain a job. There is an increasing demand for “human skills” in the workforce, since these skills are what distinguish actual human beings from robots.
University graduates tend to seek out careers that require a higher level of education, which simply cannot be programmed into automation systems. It would be far too costly and time-consuming to teach a robot the knowledge a person has acquired from their post-secondary education.
There are also plenty of skills, academic and otherwise, that students learn and develop through their time at university. Education and experiential opportunities prepare students to apply their knowledge in a variety of situations.
For example, critical thinking skills and problem solving are transferable “soft skills” that employers seek and students develop during their time at university.
Some jobs require humanistic qualities, which are simply not possible for a machine to replicate. For instance, no matter how much technology advances, robots may never be capable of understanding human emotions and experiences.
The interpersonal skills, empathy and compassion that people develop by interacting with one another are skills that are beneficial for the work environment. These skills equip anyone to thrive professionally as the future of the job outlook changes.
Technological advancements such as automation will inevitably impact life as we know it, and that includes changing our work environments. However, these changes are not inherently harmful and the possibilities for post-secondary graduates remain promising.
Students must be proactive, take initiative to educate themselves as much as possible and work on developing these skills. Provided that students make the most of their university experience, and are willing to undergo some extra training to keep their learning sharp, robots are sure to have nothing on them.