ChatGPT and Rapidly Evolving AI in Research (Part 2): An Interview with Christopher Leonard

by Arushi Gupta

The influence of artificial intelligence (AI) now extends across many domains, including research and publication. Its impact on research is undeniable, empowering researchers worldwide to pursue new lines of inquiry, improve their academic writing, and reach groundbreaking discoveries with unprecedented speed. Recently, one AI tool in particular has put these transformative capabilities in the spotlight: ChatGPT.

ChatGPT is a remarkable generative AI writing tool capable of producing human-like responses to a wide range of prompts. Researchers are increasingly turning to it to streamline tasks like literature review and data analysis. However, while the tool proves immensely helpful, concerns remain about the accuracy and reliability of AI-generated content in research, especially in sensitive domains such as healthcare and finance.

This is where assistive AI writing tools like Paperpal, an AI writing assistant designed specifically for PhD students and researchers, come into play. Paperpal provides researchers with a more reliable, viable option to enhance their academic writing and achieve faster success in publishing.

(If you haven’t already, read part 1 of this series here)

In this conversation with Christopher Leonard, Director of Products and Strategy, CACTUS, we cover crucial questions around AI in research and what the future holds for researchers seeking smarter ways to improve their research writing, academic publishing, and related processes.

Christopher Leonard – Director of Products and Strategy, CACTUS

Christopher Leonard has more than 20 years of experience in key editorial, technology, and product roles at Elsevier, BioMed Central, Qatar Foundation, and Emerald. He is passionate about building a better, more supportive publishing space for researchers, and his current role at Cactus Communications enables him to pursue that passion for redesigning academia by placing the researcher at the center. Like Schrödinger's cat, he is simultaneously optimistic and pessimistic about how AI will change the way we communicate research findings in the near future.

Question: Technology has been advancing in leaps and bounds, and AI writing tools such as ChatGPT are transforming how we write. Given your experience as an academic author and editor, and in creating AI products for researchers, how do you see the increasing reliance on tools like ChatGPT affecting research writing quality and skills in the long term? What impact do you think this will have on overall research quality, the need for innovation, and the reduction in human effort and oversight?

On the positive side, I’m sure that ready access to AI writing tools such as GPT-4 will result in much-improved research writing quality. Even though we cannot be sure of the reliability of the training data for these LLMs, they seem to produce a reasonable first draft of sentences and paragraphs that the author can then adapt for their manuscripts.

The danger comes when we rely on that output being correct and use it as-is. GPT-3 and GPT-4 have a tendency to invent references to non-existent articles and to pronounce confidently on things that turn out to be incorrect. They are also terrible at maths and data manipulation, so anything involving numbers should be double-checked (which is fair; they are language models, not data models).

The scourge of paper mills (ghost-writers of academic articles who fabricate data and results, as well as writing a passable article) will likely recede, which is good. But only because that power now sits in the hands of the individual, which is potentially bad. The same goes for image manipulation: Western blots, for example, can easily be created with the current generation of AI and passed off as real.

It’s hard to see a way for publishers to screen for all of this. ChatGPT output can be paraphrased to bypass plagiarism detectors, compelling manuscripts can be created with fake data, and peer review had not shown itself able to weed out these problems even before the advent of GPT. Some combination of weak signals from many indicators may allow humans to detect generative work, but it is hard to imagine that being automated in the near future.

So what is the answer? My personal sense is that the concept of the research article as the quantum of academic communication will come into question. Narratives around data can be invented, as can the data itself. We may see quantitative work communicated primarily through results and data, with little or no narrative. However, this isn’t relevant to all areas of academic endeavour, so it is possible that a whole new world of scholarly communication could emerge.

Question: ChatGPT has been listed as an author on some preprints and research papers. On the other hand, some top publishers have stated that they will not accept submissions that use such AI writing tools or credit them as authors. With your experience on both sides of academic publishing, what is your take on this? Do you think researchers can benefit from generative AI writing tools like ChatGPT, and if so, how?

I think ChatGPT in this instance is more like a language editor, creating alternative representations of your prompt until you see something that matches what you were trying to communicate. In that sense, the ‘author’ is still the human, since they created the prompt and selected the best match to their intention.

The criteria for someone to be listed as an author are specific, at least in medical publishing. WAME has some great guidelines here, variations of which have been cited by publishers when disallowing ChatGPT as a named author. It’s something I agree with strongly.

Question: How comfortable would you be in using generative AI tools like ChatGPT vs. assistive AI writing tools like Paperpal for your own writing?

As a native English speaker, I would always prefer to use an assistive writing tool such as Paperpal. If you have an idea in your head about what you want to say, and can take a first stab at it yourself, tools like Paperpal will suggest enhancements to that sentence, and you can get on with the next one. However, if you are uncertain about the nuances of English and the strange grammar of academic research articles, then it may be preferable to have GPT generate some versions of your sentence or paragraph and then run them through Paperpal. GPT will make the English correct; Paperpal will make sure it sticks to academic norms.

Paperpal is an AI writing assistant that helps academics write better, faster with real-time suggestions for in-depth language and grammar correction. Trained on millions of research manuscripts enhanced by professional academic editors, Paperpal delivers human precision at machine speed. Try it for free or upgrade to Paperpal Prime, which unlocks unlimited access to premium features like academic translation, paraphrasing, contextual synonyms, consistency checks, submission readiness and more. It’s like always having a professional academic editor by your side! Go beyond limitations and experience the future of academic writing. Get Paperpal Prime now at just US$12 a month!

