Resisting “technopanic”: There Are Better Ways For Universities To Respond To ChatGPT

Since its release by OpenAI in November 2022, ChatGPT has generated considerable buzz and controversy, much of it concerning the tool's seeming threat to our classrooms. In particular, there have been concerns that ChatGPT enables (perhaps even encourages) plagiarism, that this plagiarism will go undetected, and that the result will be a (further) erosion of educational standards.

But what lies behind this anxiety? I want to caution against fuelling (yet another) technopanic. Instead, universities should respond to the newcomer constructively, rather than fearfully and defensively.

ChatGPT and its discontents

Chat Generative Pre-trained Transformer (ChatGPT) is a chatbot that provides detailed responses to questions and prompts. The technology can produce a variety of outputs. It can answer questions about quantum physics. It can write a poem. Oh, and it "not only spits out a persuasive essay on any topic, but does so faster than a human can write".

Like any technological innovation, ChatGPT is not perfect. Its knowledge of the world after 2021 is limited, so it will be of little help with questions about, say, the embattled US congressman George Santos. That said, its versatility and sophistication, especially compared with other AI tools, have made it the subject of public scrutiny and even ire.

Last month, for example, The Guardian quoted one vice-chancellor as being concerned about "the emergence of large, sophisticated text generators, most recently ChatGPT, which look very convincing and increase the difficulty of detection". The story goes on to report that, thanks to ChatGPT and similar advances in artificial intelligence, some Australian universities are returning to traditional "pen and paper" exams.

West, for his part, describes ChatGPT as "the latest major disruption in education" and a "new cheating threat" that "academic leaders" are trying to "fight back" against.

These concerns are not unfounded. Cheating, plagiarism, and contract cheating are significant problems in higher education institutions. As one 2022 study explains:

Assessment integrity is critical to academic integrity, and academic integrity is essential to the entire enterprise of higher education. If students gain credit for assessments that were actually completed by someone else, their credentials and the value of their degrees are at risk.

Threats to academic integrity are by no means rare. That study, which surveyed 4,098 students at six Australian universities and six colleges, found evidence of all of the practices of concern mentioned above.

Nor are these risks to the integrity of higher education unknown to academic staff and the public. Media reports of AI being used for contract cheating predate ChatGPT. Most teachers will be familiar with student work that copies passages from academic essays or Wikipedia and passes them off as the student's own. This can happen even after repeated class discussions on the importance of properly referencing and acknowledging the work of others.

Added to this is the difficult and often precarious environment in which university teachers work. A 2021 article in The Conversation found that around 80 per cent of Australian university teaching jobs are held by insecurely employed staff, such as those working on "fixed-term" and short-term contracts with little or no guarantee of ongoing employment. Academics at all levels work in an environment disrupted by job cuts, and teaching is one of the activities increasingly at risk.

Artificial intelligence did not cause these problems in the sector, but nor has it eased academics' workloads. Responding to a breach of academic integrity can be time-consuming and emotionally taxing for academics and students alike. Some breaches, such as those allegedly enabled by ChatGPT, may not be picked up by the software designed to detect them, and can be particularly difficult for a teacher to prove.

Beyond "technopanic"

My concern is that an exclusive or primary focus on the threats ChatGPT poses to academic integrity could ultimately fuel a technopanic. Technopanics are moral panics in which social problems and threats to public safety are blamed on technological advances, be they smartphones, social media, or artificial intelligence.

Technopanics serve several purposes. They provide convenient scapegoats for real and imagined social problems. These scapegoats are easily identified. They are not human, and therefore cannot answer back (ChatGPT may be an exception here). The sensationalism of technopanics suits the clickbait era, even though such panics predate Web 2.0, as the "video nasty" panic of the 1980s demonstrates.

Ultimately, technopanics are unproductive. By design, they have no interest in prescribing constructive ways of approaching technology, favouring instead punitive and often unrealistic measures (such as deleting social media accounts or banning AI from the classroom). Technological innovation remains framed as a defining and wholly negative force in human endeavour.

AI is, after all, only human. Its uses and abuses reflect and reinforce social issues, values, belief systems, and prejudices. A recent study found that addressing the ethical issues surrounding artificial intelligence "requires education at the earliest stages of our interaction with AI, whether as developers first encountering AI or as users beginning to interact with it".

Constructive paths forward

With this in mind, I would like to outline some ways that universities can respond constructively to the advent of ChatGPT. Some of these are already underway. All could be implemented in settings beyond the ivory tower, such as primary and secondary schools.

  • Host information sessions with AI experts (academic researchers, industry professionals) on ChatGPT and similar AI tools. These sessions could be tailored to students and staff respectively. They should provide a measured, non-alarmist view of what these technologies do, and of their potential risks and benefits. It is important to mention the benefits, because artificial intelligence is not entirely harmful, and to suggest otherwise would be naive, even paranoid. Ideally, these sessions would allow students and staff to voice their concerns and to learn something new. Members of both groups will have very different understandings of ChatGPT, from those who have used the technology to those who know it only through alarming headlines.
  • Develop clear and unambiguous institutional guidelines on the use of AI in student assessment.
  • Bring AI into the classroom as a means of enhancing learning, preparing students for the world of work, and teaching how AI can be used ethically. As Tama Leaver argued in a blog post about the WA Department of Education's decision to ban ChatGPT in public schools (Leaver was referring specifically to young people, though his point applies to students of all ages):

Education must equip our children with the skills to ethically use, evaluate and create with generative AI tools and their outputs. They should not be trying these tools behind closed doors at home because our education system is so paranoid that it presumes every student wants to use them to cheat in some way.

  • Establish mandatory ethics training across all academic degrees, especially in first year. This training could take the form of semester- or term-length courses, or could be incorporated into existing courses (for example, Introduction to Computer Science, Introduction to ICTs and Communication). The decision to compromise academic integrity by purchasing an essay or using a chatbot to write one is itself an ethical decision; it is a decision grounded in what a person believes to be right and wrong. The decision to use a technology well, or not to use it at all, is no different.

Each of these proposals faces obstacles, not least universities' notoriously constrained budgets and the limited time of students and academics. Even the finest and most generous AI researcher will not want to be constantly standing up in front of strangers to introduce chatbots when other important matters demand their time and attention.

Still, these suggestions are surely preferable to throwing up our hands and admitting defeat to our supposed technological overlords.

Jay Daniel Thompson is a Lecturer in Professional Communication in the School of Media and Communication at RMIT University. His research explores ways of promoting ethical online communication in an age of digital disinformation and online hostility. He also produces content for digital media. He is co-author of An Introduction to Fake News in Digital Cultures.
