GLOBAL

Developments and challenges around AI and research integrity
Divisions created between academic and research ethics are largely artificial and should be dismantled. Academia needs to work collectively and urgently to integrate academic and research integrity with AI, global ethics expert Professor Sarah Elaine Eaton told the 2024 World Conference on Research Integrity.

Further, European guidelines on AI and research are world-leading and highly valuable, but should extend their focus beyond researchers, research organisations and research funders to encompass others involved in knowledge production, including publishers, industry, and graduate students and academic supervisors.
Eaton, who is associate professor in the Werklund School of Education at the University of Calgary in Canada, was speaking in a symposium session at the World Conferences on Research Integrity Foundation’s 8th WCRI 2024, which was held in Athens, Greece, from 2 to 5 June.
The biennial, hybrid WCRIs are key events on the research integrity calendar, bringing together experts from diverse fields to discuss developments in and challenges to research integrity – this year there were a record 800 participants – and to produce a policy statement.
The Athens Statement, to be finalised later this year, will be on “catalysing the translation of research into trustworthy policy and innovation”. This, said a WCRI release, “encapsulates the conference’s dedication to fostering a seamless bridge between research and its real-world impact”. For Eaton, this bridging was at a strategic and theoretical level.
Back to the basics
Eaton is the editor of the Second Handbook of Academic Integrity, published in February 2024 by Springer. The handbook has 145 contributors and 112 chapters and, she said, “is considered a major reference work in the field of academic integrity. We are more and more connecting the fields of academic integrity and research integrity”.
Eaton drew on the handbook, the first edition of which was produced in 2020, to produce a Comprehensive Academic Integrity Framework that a colleague at the University of British Columbia, Kieran Forde, has called a “colourful swirly donut” of integrity.
There are eight overlapping ‘swirls’, or themes, around the core of Comprehensive Academic Integrity, within and between which to organise thinking and action on ethics: student academic conduct; publication ethics; research integrity and ethics; instructional ethics; ethical leadership; institutional ethics; everyday ethics; and professional and collegial ethics.
The key argument is that academic integrity must encompass but also extend beyond issues of student conduct: it should be a foundation of all aspects of education, and it includes research integrity.
Student academic conduct, Eaton said, is what has historically been positioned as academic integrity, involving issues such as student plagiarism and cheating on exams and assessments.
“Of particular interest are the ways in which the experiences of students are more complex than in any generation previously, including students becoming involved in research earlier, including at the undergraduate stages,” she noted.
Thus, for students, research ethics are of more concern than ever before. “The time to start training students is early in their career. Research integrity doesn’t start when you become a professor. Academic integrity is one of the foundations for research integrity,” she argued.
New guidance in Europe
Eaton turned to look at AI and ethics guidance in Europe, and said: “Europe is miles ahead of us in North America when it comes to AI guidance and policy.”
At the WCRI opening plenary on Sunday 2 June, Iliana Ivanova – commissioner for innovation, research, culture, education and youth for the European Commission – referred to recent developments in AI ethics policy and guidance.
There has been a slew of documents. The European Union Artificial Intelligence Act was approved by the European Council in March this year, creating a common regulatory and legal framework for all EU countries. It is the first comprehensive regulation of AI by a major regulator anywhere, and includes a focus on education and training.
Also in March came a more specific AI and research document from Ivanova’s Directorate-General for Research and Innovation, Living guidelines on the Responsible Use of Generative AI in Research.
Produced by the European Research Area Forum of countries and research and innovation stakeholders, the guidelines cover the use of generative AI in research for funding bodies, research organisations and researchers in both public and private research ecosystems, in an effort to clarify and consolidate the proliferation of AI guidance that has emerged.
The principles framing the new guidelines are based on existing frameworks such as the European Code of Conduct for Research Integrity, which was revised in 2023, and the Ethics Guidelines for Trustworthy AI.
Opening the conference, Ivanova said: “In a period where public trust in institutions is challenged and the use of generative artificial intelligence for scientific writing is gaining ground, we shouldn’t spare efforts in promoting research integrity.”
It was important to “shed light on the key challenges for research integrity and to identify how best to address them in an inclusive and effective way. We need to continue developing practical guidance to operationalise high-level ethics and integrity principles”, said Ivanova.
“These are values that will remain a top priority in EU research and innovation policy,” she noted.
Such documents, Eaton told the conference, are valuable and she has studied them, searching for guidance for groups such as researchers and research organisations.
“One of my takeaways is how generative AI provides many opportunities for different sectors. However, it also harbours risks such as large-scale generation of disinformation and other unethical uses with significant societal consequences. I particularly appreciate the use of the word disinformation,” she said.
There is a big difference between misinformation, the ignorant or accidental misuse of information, and disinformation, which is the intentional misuse of information.
Eaton said that individuals who participated in drafting European ethical guidelines were thoughtful and deeply informed on the complexities involved. Some of the key principles that emerged were reliability, respect, honesty and accountability.
Artificial divisions between academic and research ethics
“As somebody who studies academic integrity, I couldn’t help but notice the parallels between these and what are called the fundamental values of academic integrity espoused by the International Centre for Academic Integrity: courage, fairness, honesty, respect, responsibility and trust,” she said.
The divisions created between academic and research ethics are largely artificial and ought to be dismantled, said Eaton. Academia also needs to work collectively and holistically to integrate research integrity and AI.
“I hear echoes of the same values for academic and research integrity, further emphasising for me that the notion of integrity needs to extend beyond one sector or the other, and the divisions that we have created between academic and research integrity and ethics are by and large artificial.
“It’s time to start considering the broader implications of the terms that we use and to be more inclusive and holistic with them,” she stated.
Eaton said the European guidelines’ focus on individual researchers, research organisations and research funding bodies is very clear and well developed. However, the report is silent on the roles of publishers and others involved in knowledge mobilisation, and there is little guidance for supervisors, graduate students or research trainees.
All people involved in the large research ecosystem have a responsibility to be involved in conversations around AI. In no way should industry or publishers, supervisors and students, conferences and others “be absolved of responsibility regarding AI and research integrity”, she said.
“It is our responsibility to work collectively and holistically to integrate research integrity and artificial intelligence,” she advised.
Decouple policy from practice
Eaton said it can be useful to decouple policy from practice. The processes involved in discussing, designing, reviewing and approving policies can make them inflexible. “Thinking about policy in a way that can be evergreen, so that it can still be applicable five years from now, will be the challenge for those developing AI policies,” she stressed.
“Procedures and practices that don’t override a policy but rather operationalise it in a more nimble and agile way may provide us with the flexibility to address problems when they come up, so that the policy can remain intact and valuable until its next cycle, while the governance processes around policies remain important,” she explained.
“I come from a country where higher education is decentralised, and every province and territory has its own policy. So we’re a hot mess,” she said. In contrast to Canada, European countries have achieved some consistency. Eaton is not sure, though, whether a global policy on AI and research is possible – or even desirable.
Some ethical challenges around AI
The audience asked about the major challenges facing the use of AI in higher education and research, such as plagiarism. Eaton has written a book about how the concepts of plagiarism and originality are going to change due to technology.
In an article for University World News last year, on “Artificial intelligence and academic integrity, post-plagiarism”, she argued that in the age of post-plagiarism, people will use AI apps to enhance and elevate their creative outputs as a normal part of everyday life.
“We will soon be unable to detect where the human written text ends and where the robot writing begins, as the outputs of both become intertwined and indistinguishable,” she wrote.
“The key is that even though people can relinquish full or partial control to artificial intelligence apps, allowing the technology either to write for them or to write with them, humans remain responsible for the result,” she noted. Learners must be prepared for this reality.
In the face of moral panic around technology, Eaton recalled earlier technological leaps that sparked similar panic but soon became embedded in everyday use – such as radio in the 1920s. “We begin to wonder how we ever lived without the technology,” she told WCRI 2024.
Nobody has a crystal ball. “What we know is that if we’re worried about ChatGPT today, we’re being short-sighted and we’re worried about the wrong thing,” she said, referring to problems such as cut-and-paste plagiarism and the theft of ideas.
“Because these concepts will continue to be challenged as we move into more and more complexities around AI-generated text versus human-generated text and who owns what.
“But right now, we don’t have the answer. Understand that the technology will continue to evolve. To try to be one step ahead of it, whatever that might look like, will be important,” she concluded.
Email Karen MacGregor at macgregor.karen@gmail.com or karen@universityworldnews.com.