We discuss some of the challenges HE institutions need to consider, as well as some of the positive contributions AI could make in a research environment
Article written by Tamar Elderton-Welch, Head of eLearning at Marshalls, a Ciphr company
The integration of artificial intelligence (AI) into our daily lives is rapidly becoming more commonplace. With generative AI tools such as Midjourney, ChatGPT, Dall-E and Grok providing an impressive array of functions (including research gathering, code creation, and custom digital artwork), the potential for their use and misuse in academia is clear.
Through the skilful use of well-considered prompts, AI systems can produce convincingly written essays, articles and research that most individuals would be hard-pressed to identify as AI-authored. In this era of technological advancement, universities in particular face the challenge of recognising the growing presence of AI in coursework submissions and research projects.
In this article, we will discuss some of the challenges higher education (HE) institutions need to consider, as well as some of the positive contributions AI could make in a research environment.
- Positive uses of AI in academic settings
- The threat to academic integrity
- Proposed pathways for universities
Positive uses of AI in academic settings
While AI poses significant challenges in preserving academic integrity, it simultaneously empowers universities and students in the domain of research. AI has the potential to revolutionise research by streamlining data analysis, aiding in pattern recognition, and enhancing hypothesis generation.
For subject areas where researchers grapple with large volumes of data – such as physics, bioinformatics, genetics, climate science, and the social sciences (to name but a few) – AI tools can quickly process and analyse these datasets, offering insights that might take researchers years to uncover. AI algorithms also excel at recognising patterns within data. This capability is a boon for fields like medical diagnostics, where identifying subtle correlations in medical records can lead to earlier disease detection and improved patient outcomes.
AI can also be used as a tool to help generate hypotheses based on existing data, suggesting avenues of research that human researchers might overlook. This collaboration between human intellect and AI’s computational power can drive innovation and has the potential to break down silos between academic disciplines.
On a broader scale, AI can be a very useful tool to help students structure their essays and gather information, and act as a prompt for carrying out further research or considering a broader array of topics.
But AI should be used with several caveats in mind.
The threat to academic integrity
One of the most obvious and pressing issues universities face with the rise of AI is the persistent threat to academic integrity. While AI has enabled more robust plagiarism detection tools – making it increasingly difficult for students to submit someone else’s work as their own – it has also opened the doors to more sophisticated and harder-to-detect forms of academic dishonesty.
Students can employ AI to generate essays, reports, or answers that are more difficult to trace back to their sources. This form of ‘smart cheating’ challenges universities to adapt their anti-cheating strategies continuously, which is time-consuming and not always possible.
Universities need to confront the ethical quandaries posed by AI. Where does the line blur between legitimate assistance and unethical academic support? How do they strike a balance between leveraging AI for educational benefits and safeguarding the integrity of the learning process?
Proposed pathways for universities
Policy updates
The rise of AI is a double-edged sword for universities, and academic integrity faces new challenges as a result. To thrive in this digital landscape, universities must carefully review their existing policies on academic standards and expectations of students. They must also establish clear ethical guidelines for the use of AI in coursework and research, so that the boundaries between acceptable assistance and academic dishonesty are transparent and well-defined.
Student and staff training
Another essential aspect is staff and student training. As AI continues to evolve, universities should invest in faculty development to ensure that course leaders and students are well-versed in how to use AI tools effectively and ethically, while preserving academic integrity. Students must learn that research gathered by an AI must always be validated through rigorous scholarship and critical thinking, and never accepted at face value. Just as humans are susceptible to bias and stereotypes, so too is AI. While AI tools can generate information that may seem credible at first glance, they're only as good as the sources they've been trained on – leaving room for errors and fabricated information, often dubbed 'hallucinations'.
Guidance on AI markers
Alongside updating policies on AI use in academia and training staff and students, universities should ideally provide additional guidance on the 'red flags' that strongly indicate AI usage. Despite the existence of anti-plagiarism solutions like Turnitin, Copyscape, and GPT-3 Detector, there is as yet no system that can definitively detect the use of AI. There are, however, certain markers that course leaders can look for, signalling the need for closer scrutiny of the submitted coursework.
In this digital era, where AI and academia intersect, universities must navigate a path that preserves the core principles of learning and knowledge dissemination while harnessing the power of AI for the advancement of education and research. By doing so, higher education institutions can reap the benefits of AI while avoiding the worst pitfalls.
eLearning content: Delivered by Ciphr, powered by Marshalls
This content was initially published on Marshallelearning.com (November 2023) and has been uploaded to, and lightly amended on, Ciphr.com as part of the brand amalgamation in August 2024.