MDPI’s Peer Review Week 2023 Webinar joins the global celebration of peer review and its crucial role in academic publishing. This year’s theme, as selected by a global poll, is “Peer Review and the Future of Publishing”.
The webinar was an online roundtable discussion moderated by Dr Ioana Craciun, from the MDPI Scientific Office Board. It featured a distinguished panel of researchers from a diverse range of fields.
The speakers were Mr Agbotiname Imoize, who specialises in Electrical Engineering; Dr Haipeng Liu, whose work focuses on intelligent healthcare; Prof Dr Pabulo Henrique Rampelotto, a molecular biologist; and Dr Marcin Kulawiak, who specialises in geoinformatics.
The researchers shared their experiences and insights into the future of scholarly publishing and peer review’s essential role in it.
The need for publishing data openly
Prof Rampelotto began the discussion by describing how the global push towards Open Access needs to be extended to data too. He argues that this would increase transparency by letting readers check that data have not been manipulated to achieve a desired result.
He advocates depositing data in open repositories and including access details in the articles themselves.
Data peer review is a topic we have explored previously.
Privacy concerns about open data
Dr Kulawiak raised a key concern about open data: privacy. He described two currently clashing trends: on the one hand, the push for Open Access; on the other, global data protection rights.
Whilst anonymising data may sound straightforward, it is not always so. Dr Kulawiak gives the example of a survey conducted in a small, homogeneous classroom. If the survey records variables like gender or nationality, and the participants are very similar, those variables could be combined to identify individuals from their answers.
He explains that this is a particular issue in the medical field, where many different kinds of data can be combined to identify people, and scientists may not be qualified to judge which characteristics can safely be anonymised.
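The re-identification risk Dr Kulawiak describes is often measured as *k-anonymity*: the size of the smallest group of respondents sharing the same combination of quasi-identifiers. A minimal sketch, using a purely hypothetical classroom survey (the data and helper function below are illustrative, not from the webinar):

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Return the size of the smallest group of records sharing the same
    quasi-identifier values. A result of 1 means at least one respondent
    is uniquely identifiable from those attributes alone."""
    groups = Counter(
        tuple(record[q] for q in quasi_identifiers) for record in records
    )
    return min(groups.values())

# Hypothetical survey of a small, homogeneous class:
# the single male Dutch student is unique, so his answer is exposed.
survey = [
    {"gender": "F", "nationality": "DE", "answer": "yes"},
    {"gender": "F", "nationality": "DE", "answer": "no"},
    {"gender": "F", "nationality": "DE", "answer": "yes"},
    {"gender": "M", "nationality": "NL", "answer": "no"},
]

print(k_anonymity(survey, ["gender", "nationality"]))  # 1 -> re-identifiable
```

Even though no names are stored, a k-anonymity of 1 shows why simply dropping the name column is not enough to anonymise such a dataset.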
Dr Kulawiak supports open data, but he thinks it needs to be selective whilst we figure out the details of privacy. He concludes that this is not a matter for scientists but for lawyers, and thinks it is the responsibility of governments to set common standards for data privacy rules.
Using artificial intelligence for peer review
A technological revolution is underway as artificial intelligence (AI) grows increasingly sophisticated. ChatGPT took the world by storm, and researchers are responding by incorporating AI into their workflows.
The topic of using AI for peer review sparked an interesting debate. Dr Liu began the discussion by expressing optimism about AI being used as a pre-screening tool to reduce reviewers’ workloads.
Dr Kulawiak, an expert in IT, outlined some of the tools already or soon to be available to researchers:
- Algorithms for checking spelling and grammar.
- Reference-checking algorithms, for verifying that references are recent, correctly formatted, etc.
- Tools for collecting the abstracts of all the referenced articles in a text.
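As a flavour of the reference-checking tools Dr Kulawiak mentions, one can sketch a crude recency check that flags references older than a cutoff year (the function, sample references, and cutoff below are illustrative assumptions, not any specific tool he named):

```python
import re

def flag_old_references(references, cutoff_year=2015):
    """Flag references whose publication year appears to be older than
    the cutoff. The year is taken to be the first 4-digit number in the
    1900-2099 range found in the reference string -- a crude heuristic."""
    flagged = []
    for ref in references:
        match = re.search(r"\b(?:19|20)\d{2}\b", ref)
        if match and int(match.group()) < cutoff_year:
            flagged.append(ref)
    return flagged

refs = [
    "Smith, J. (2021). Deep learning in geoinformatics.",
    "Doe, A. (2003). Early GIS methods.",
]
print(flag_old_references(refs))  # flags only the 2003 entry
```

Real tools would parse structured metadata (DOIs, Crossref records) rather than raw strings, but the principle of automating such tedious checks before human review is the same.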
These tools, Prof Rampelotto explains, would all reduce the tedious labour involved in a review. Checking references can be a very time-consuming task, and poorly written articles slow down reviews by making it harder to understand the points being made.
He argues that tools like these should be applied before the reviewers even receive the paper. This would ensure that reviewers receive high-quality papers and lessen the likelihood of rejection based on poor grammar or referencing.
Most importantly, these tools would allow reviewers to focus on the scientific merit of papers above all else.
Issues of artificial intelligence
Dr Liu, however, warned about the limits of AI, stating that it should only be used for these preliminary and repetitive tasks rather than generating reports.
This is because AI tools such as ChatGPT are prone to errors and misinformation. Accordingly, Prof Rampelotto suggests, tools for detecting AI writing, which are already being applied to submitted papers, should be applied to review reports too.
Dr Kulawiak was less concerned about AI-generated reports. He claims that a bad report is a bad report, regardless of who (or what) writes it. As such, he believes we should focus on the review itself, potentially even scoring reports and reviewers.
Recognition for peer reviewers
This idea of scoring reports naturally led into a discussion about how we can give peer reviewers the recognition they deserve. Mr Imoize began by proposing that we need more non-financial incentives for peer reviewers.
Scoring reviews was one of his suggestions. This would recognise reviewers who produce high-quality reports, boosting their reputations, and such scores could even become metrics that universities and institutions consider when awarding grants or positions.
Dr Kulawiak was excited by this idea and suggested linking the scoring system with vouchers for article processing charges (APCs). Reviewers who produce higher-quality reports would receive larger discounts when publishing their own papers, hopefully creating a healthy ecosystem of peer review and publishing for scholars.
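The mechanics of such a scheme could be as simple as mapping review scores to discount tiers. A minimal sketch, where the 0–10 scale and the tier thresholds are entirely hypothetical (no specific scheme was proposed in the webinar):

```python
def apc_voucher(score, tiers=((9, 0.50), (7, 0.25), (5, 0.10))):
    """Map a review-quality score (0-10) to an APC discount fraction.
    Tiers are hypothetical: >=9 -> 50%, >=7 -> 25%, >=5 -> 10%, else 0%."""
    for threshold, discount in tiers:
        if score >= threshold:
            return discount
    return 0.0

print(apc_voucher(8))  # a score of 8 earns a 25% discount
```

The appeal of a tiered design is that it rewards quality rather than volume: a reviewer cannot earn larger discounts simply by producing more mediocre reports.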
Engaging young minds
Mr Imoize described the challenges of finding reviewers for journals. Workloads are increasing, especially for senior and experienced researchers. This is partly because journals seek reviewers that they know they can rely on. Often journals request that reviewers have a PhD or have reviewed several articles already.
This, he suggests, is hindering the reviewing process. He believes that we should trust in younger researchers, the future scientists, to review material. They are enthusiastic, so reviewing could be an avenue for them to prove themselves in their fields. And reviewing material is a helpful way to get to grips with the always-growing scientific literature on any given subject.
By including PhD students, or even MA students, in the reviewer pool, journals would face less pressure in finding reviewers and would be helping to train the next generation of scientists to value peer review.
The core value of peer review remains
Peer review is the primary means of assessing research rigour, credibility, and potential. It ensures research is trustworthy, applicable, and original, among many other things. It’s supported by countless dedicated scientists who work, often for little reward, to keep the process going. And, as Dr Liu concludes, good reviews show a journal is dedicated to publishing high-quality science.
Therefore, it’s a cause worth celebrating. Alongside highlighting the exciting future of peer review, MDPI’s Peer Review Week 2023 Webinar demonstrated an unwavering commitment to this process in science. We have plenty of content available celebrating and exploring peer review, so why not begin with our article Peer Review Week 2023?