I, AI-bot: A classroom discussion in artificial intelligence

September 3, 2024 | Features

Artificial Intelligence (AI) is the seemingly paradoxical concept of non-living entities generating logic and reason, the very capacities that, since the days of philosophers such as Descartes, have been held up as the defining features of what it means to be human.

The first real attempt at simulating human intelligence came in 1943, when academics Walter Pitts and Warren McCulloch published a paper in The Bulletin of Mathematical Biophysics using simple mathematical functions to simulate a “neural network” such as those found in the human brain. Their model, however, had more in common with a logic circuit than with a mind, and AI as we know it is usually traced back to a conceptual experiment by Alan Turing in 1950.
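
To get a feel for how simple those 1943 “neurons” were, here is a minimal sketch in Python (my own illustration of the general idea, not the paper’s notation): each neuron sums weighted binary inputs and fires only if the total clears a threshold.

    # A McCulloch-Pitts-style neuron: binary inputs, fixed weights,
    # and a hard threshold. Values here are illustrative only.
    def mcp_neuron(inputs, weights, threshold):
        total = sum(i * w for i, w in zip(inputs, weights))
        return 1 if total >= threshold else 0

    # With weights (1, 1) and a threshold of 2, it acts as a logical AND:
    print(mcp_neuron((1, 1), (1, 1), 2))  # prints 1
    print(mcp_neuron((1, 0), (1, 1), 2))  # prints 0

Chain enough of these together and you can compute anything a logic circuit can, which is exactly why the comparison to circuitry stuck.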

In “Computing Machinery and Intelligence,” Turing chose to avoid getting hung up on defining a “machine” and what it means “to think.” Instead he focused on function, asking whether machines can do what we as thinking entities can do, and concluded that if a typewritten conversation with a machine could not be reliably told apart from one with a human, then the machine could be considered intelligent. This became known as the Turing Test.

Along the winding path that took us from mathematical algorithms to Siri, computer programmers eventually developed rudimentary language-parsing software, which identified pre-programmed combinations of keywords as containing “meaning.” This is probably best exemplified by the early text-based adventure games of the late ’70s and ’80s, wherein a player might instruct the computer to “take food” or “throw rock break window.”
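
For a sense of how shallow that “meaning” really was, here is a toy sketch in Python (my own approximation in the spirit of those parsers, not code from any actual game): the program simply scans the typed command for pre-programmed keyword combinations.

    # A toy adventure-game parser: every recognized verb-noun
    # combination must be spelled out by hand, one line at a time.
    RESPONSES = {
        ("take", "food"): "You pick up the food.",
        ("throw", "rock"): "The rock sails through the window.",
    }

    def parse(command):
        words = set(command.lower().split())
        for (verb, noun), response in RESPONSES.items():
            if verb in words and noun in words:
                return response
        return "I don't understand that."

    print(parse("throw rock break window"))  # matches ("throw", "rock")
    print(parse("hurl rock"))                # unknown verb: no "meaning"

Anything the programmer didn’t anticipate, right down to a synonym, simply doesn’t exist.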

This labour-intensive process quickly revealed a programmer’s vision of searing hellfire: if intelligence is built from explicit instructions, then complex intelligence requires manually pre-programming millions of commands to account for every possible eventuality. This rule-by-rule approach, known as “top-down” programming, was quickly found to be impracticable, and eventually its “bottom-up” counterpart evolved, which we now know as machine learning: the computer begins with a simple premise and uses trial and error to rewrite and refine its own rules and associations.
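
As a minimal sketch of that trial-and-error loop (my own toy example in Python, not any particular system), here a single artificial neuron teaches itself the logical AND function by nudging its own weights every time it guesses wrong, rather than being handed the rule:

    # A toy perceptron: it refines its own weights from mistakes
    # instead of following pre-programmed instructions.
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # logical AND

    w1, w2, bias = 0.0, 0.0, 0.0
    for epoch in range(20):                      # repeated trial and error
        for (x1, x2), target in data:
            guess = 1 if w1 * x1 + w2 * x2 + bias > 0 else 0
            error = target - guess               # -1, 0, or +1
            w1 += 0.1 * error * x1               # rewrite the "rules"
            w2 += 0.1 * error * x2               # whenever a guess is wrong
            bias += 0.1 * error

    print(w1, w2, bias)  # the learned weights now implement AND

Modern machine learning scales this same idea up to billions of weights, but the premise is unchanged: start simple, guess, measure the error, adjust.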

Fast forward to 2017, and our very perceptions of reality were thrown into question by the emergence of deepfakes, the process of using machine learning to generate a convincing video depiction of a real person; with that, the AI ball as we know it was set in motion. Since then, AI technology has become eerily sophisticated, and we now have AI personalities that can, in a superficial sense, pass the hypothetical test that Turing envisioned nearly 75 years ago.

There are three main avenues of AI that we may encounter: speech synthesis/cloning, image/video generation, and text-based artificial personalities. It’s the latter two that are causing the biggest stir, with visual AI algorithms capable of generating shockingly coherent artwork from complex instructions, and conversational AI algorithms producing (mostly) passable writing.

This article explores AI from an academic standpoint. Is procedurally generated artwork legally distinct from the real artwork the AI was trained on? Does it count as transformative fair use, and even if it does, is it ethical? Can AI-generated writing be trusted as accurate? And can the use of AI in assignments be considered cheating?

Unfortunately, the unprecedented nature of this technology leaves many of these questions in a murky grey area, but through discussions with some Camosun instructors, I’ll do my best to find some clarity.

Legality

The question of legality emerges primarily around the procedural generation of AI artwork, which is trained by assimilating millions of actual, human-created art pieces. Those opposed to AI art say that art must be original material, and because AI algorithms are trained on real artwork without the creators’ consent, this, in their opinion, amounts to stealing. Those on the other side counter that, just as human artists draw inspiration from pre-existing media, AI art falls under “transformative fair use”: it creates something similar but entirely distinct, with no permission necessary.

Even the concept of art itself is crumbling under scrutiny. Some argue that true art cannot be created by machines, because art is an emotional expression of the human experience. Others feel that art has always existed independently of the creator—that the meaning lies in the artwork itself, and if a piece causes an emotional reaction in the human viewing it, it has artistic merit. 

Camosun College Criminal Justice instructor Michel Legault says that as far as legality is concerned, the answer is undefined, because laws and policies are largely a product of unfortunate hindsight.

“The ‘legality’ is the term that probably creates the grey area, because there’s no legal framework per se. Laws are made as a result of ‘something went wrong.’ Yes, some laws are made for prevention, but most of the laws are made as a result of something that happened, an offence occurred or whatever,” he says. “So if AI is used in a way to create a criminal offence of some sort, then legislation will probably follow at that point. In the meantime, academic institutions or even companies, for that matter, have to create their own policy around the use of AI, saying, we expect a level of critical thinking, we expect you to do your research, understanding what you bring to the table.”

Integrity

The most rampant problem arising in schools is students using engines such as ChatGPT to procedurally generate the answers to their assignments and trying to pass them off as their own work. My initial opinion was that ChatGPT uses quite a lot of words to say nothing of actual substance, so it must be laughably easy to spot and weed out, and therefore not really a true academic threat.

However, that view was challenged by English instructor Kelly Pitman, who says that students who use AI put a heavy labour load on teachers and actively contribute to the erosion of trust between instructors and students as a whole.

“It requires, typically on average, several hours per assignment that may be dependent on AI, because I have to go through a process of checking through students’ writing, and I have to talk to the student, and I often have to schedule a rewrite for them,” she says. “That’s how it’s transformed my life, and I’d have to say my fear is that it’s also transforming my relationship with students, because your default is always, ‘Hmm, this seems like a kind of set essay from a book, I wonder if the student wrote it,’ and you’re constantly having to ask that question… and often the answer is that they did not.”

Detecting plagiarism used to be a simple matter of copying and pasting text into a search engine and checking for glaringly similar matches. Unfortunately, AI-generated text is too variable to be caught by such strict parameters, so instructors have to rely on their intuition, watching for conflicting writing styles. Accusing a student of academic dishonesty is no trivial matter, and while she has only ever been wrong once, Pitman lives with the constant anxiety of misreading the situation and falsely accusing an innocent student.

“I’m also very afraid of being wrong, and accusing someone who has really worked hard to do what they had to do, but I have had a lot of conversations with students who have used AI, and they are painful and difficult conversations where they tell me why, and I tell them why they can’t, and I feel for them,” she says. “Often they know it’s wrong, and they’re doing it because of extrinsic factors. They’re nervous about it, they’re panicking. They’re working a full-time job, they’re living in a dump with a roommate they don’t like, and worried about the future, and there’s a lot of motivation [to cut corners].”

Camosun College Faculty Association president Lynelle Yutani, who is also a Camosun instructor, says that, like Pitman, her primary focus is not blind, vindictive punishment but understanding a student’s justifications, in order to gain some context on their poor choice.

“Whenever I’ve had a student in my class who has bent those academic integrity boundaries, the reason behind that has always interested me more than the thing that’s actually happened,” says Yutani. “The pressure to get good grades, or the consequences of failing out of an expensive program, or sometimes there’s ego involved, all of these things that drive us to do things that are very survival-instinct based.”

Consequences

The repercussions of cheating extend far beyond the immediate consequences of getting caught or the perceived short-term benefits of taking the path of least resistance. Pitman thinks that students considering cheating should re-evaluate the very reason they are paying an arm and a leg to attend college in the first place.

“What I wish is that students ask themselves more often is whether it matters whether they know the content of the courses that they’re taking. Is it about a ticket to a job or is it about learning?” she says. “That’s the core question, because if it’s really just a bunch of hoops to go through so that you can start your future, you’ve got your teachers insisting that you learn and that’s not your priority. There will come a time when you will be expected to demonstrate the skills that it says on paper that you have. So you’re losing that opportunity to make sure that you will be able to do that.”

Yutani says that students who graduate without legitimately learning are not only doing themselves a disservice; it also reflects poorly on the instructors whose job it is to make sure students who pass through the education system emerge properly and thoroughly educated.

“If I graduate students into the workforce that don’t have the skills or capabilities that we’re certifying on the basis of credentials,” she says, “is that my fault because I didn’t catch them, or is that the student’s fault because they used AI and passed it off as their own work?”

To Pitman, the value of education does not lie solely in written knowledge, but also in the ability to reason, problem-solve, and think critically. Students who take the time to learn and understand are also improving how they resolve problems and challenges in their own lives and the lives of those around them.

“I’m surprised at how often I have to work with a student to get them to think about the knowledge, and what value that might have for them, and what value it might have for a culture that we have people who can think their own way out of problems,” says Pitman. “I see myself performing a civic duty as well as preparing people for employment, and I don’t want to send people away with a college education or a university degree who can’t read a book or who can’t spot a problem in an argument, can’t think their way through to a new solution that no one’s thought of.”

Perspectives

Yutani believes that AI is not a magic technological solution, and that it has the potential to do more harm than good if not used responsibly.

“It’s not a panacea to solve all your problems and make everything easier. Depending on how it can be used, it’s very possible for it to make things worse. And I think that’s what we’re seeing in general, in pop culture media,” says Yutani.

Pitman points out that AI can only ever work from popular points of view that already exist across many years of the internet, and that in order to move forward as a culture, we need to find new discussions and new ideas.

“Another thing that I think students need to think about with AI is we’re living in a time of great development of our ideas, about identity and power, and AI by default is reproducing dominant discourse,” she adds. “We can’t just keep rehearsing and revising the same old knowledge; that’s been getting us nowhere.”

AI algorithms have no sense of morality, and that subtle trait is part of what makes us human. Weighing multiple morally complex scenarios against each other is something AI will never be able to do, because it lacks empathy.

“The important question is, ‘What is the human element, and does it matter?’ If I ask someone to write an argument about the best political candidate for the 2025 federal election, AI could probably do a comparison,” says Pitman. “But a human being can have a subtlety to bring to that conversation, including putting together a moral subtlety that AI cannot do. A machine doesn’t make value judgements, and value judgements are a lot of what we do when we communicate and work together.” 

On the other hand, like technologies that have come before it, AI can play a positive role in the classroom and workplace, and shouldn’t be viewed as strictly negative or held to a double standard.

“I’m not sure if it’s so terrible when you look at it from an equity or accommodation perspective,” says Yutani. “Not everybody can have the same level of skills, but if we have tools that allow people to achieve a similar level, is that the worst thing in the world? If tomorrow you’re employed and your employer says, I need you to get this thing done, and you don’t know how to do it, well, for a lot of people the first step would be Google, YouTube, and AI. If you deliver a product that works for your employer and they’re satisfied with it, then you’re successful in the workplace, and if I’m in an applied learning classroom, and I say, ‘No, you can’t do any of those things,’ that’s a bit disingenuous, don’t you think?”

Yutani believes that open communication is the key to properly integrating AI into the classroom—there is no reason for a student to slink around in the shadows using this forbidden technology surreptitiously, like a child sneaking into the kitchen to get a midnight snack. The knowledge—and the snacks—are there to be consumed freely; we just need to communicate openly about it. 

“To me, it’s not AI that’s a problem, it’s our approach to incorporating it into our lives that is not being done in the most appropriate and collaborative way,” says Yutani. “I think it’s about disclosure and consent. Let’s agree to share when and how we’ve used AI on anything, whether it’s me producing material to you, whether it’s you turning in material to me. Let’s agree that we’re just going to be transparent about that.”

Legault agrees that the key to embracing AI is to treat it like any other information source: critically, with stringent verification and proper citation. It should be noted, however, that verifying AI sources can mean far more work than if you had just used Google, like our pre-Generation-Alpha ancestors did.

“There’s nothing wrong with doing research, the idea is what do you do and how do you bring that research? Do you properly quote it, do you cite it? If you’re asking AI to do your paper, there’s some very positive aspect to that, if you are prepared to say so, as well as make sure that you are able to bring your own knowledge to the table,” he says. “You have to be able to critically approach the information in front of you, and verify it. And if you verify everything that’s there to the right source, and then you’re able to cite the right source, it’s doing the work twice, in a sense, because if you did the research right from the get-go, you arrive at the same place.”

Verifying the info you receive from AI is of utmost importance since ChatGPT is notorious for haughtily telling bald-faced lies based on some obscure Onion article that it believes to be absolute fact, because mama never told her little chatbot that you shouldn’t believe every byte of data you scrape out of the dank recesses of the internet.

At the end of the day, AI is here to stay, and the sooner we can stop treating it like some grubby little thief whispering treacherous filth into our ears, the sooner we can gingerly embrace it before moving onto some other groundbreaking technology at which we can wave our torches and pitchforks.

Pitman reminds us that while hard work is a virtue that cannot be overstated, teachers can also do their part to help struggling students so they won’t feel driven to resort to cheating to pass.

 “I think we just have to accept that it’s not unfair or a bad thing if something takes work. AI is much easier, and that’s its attraction, right, but I think Thomas Edison said that there’s no replacement for hard work,” she says. “But I also think teachers have a role to play in assuring that they’re thinking about how to help students master the material, how to layer the knowledge so that they go into an assignment with a pretty good basis.” 

Yutani agrees that people shouldn’t get carried away by the negative connotations that AI has stirred up, and instead focus on using it in a way that advances our cause as mostly intelligent, sometimes-sentient living creatures.

“I’ve actually heard this ‘sky is falling’ scenario about every major technological advancement over the last 30 years, that’s been my whole life,” she says. “I recognize that it’s going to be difficult, we’re going to make mistakes, but in the end, society only moves forward, and we’re going to have to make the best of that and do so with the least collateral damage.”