Publication ethics: Annals of Forest Science will fight hard against sloppy science

This blog post is intended for editors, reviewers and authors of Annals of Forest Science. It may also be of use to other journals and scientists. Please feel free to comment on and complement it: the debate over publication ethics is far from closed.

The thoughts in this post originate from my personal experience as an editor (and as a researcher) and from a very enlightening presentation by Lex Bouter, Vrije Universiteit Amsterdam (lm.bouter@vumc.nl), given in Strasbourg in early June 2016.

This blog page will evolve over time as new material and new thoughts are made available by editors and authors.

Scientific misconduct is a matter of lively debate in the scientific community and even beyond, in the general press. Spectacular cases of plagiarism, fabrication of data and experiments, and falsification of data have been reported, cited and commented upon in numerous articles, papers, blog pages and comments. These cases raise many questions, the most frightening being: is there a (dramatic) increase in such cases of misconduct in the scientific community? Does such misconduct happen in all disciplines, on all continents, in all kinds of institutions? How often may an editor be confronted with such flagrant misconduct?

Such misconduct must indeed be severely combated and condemned. Progress has been made in detecting it. Very sophisticated tools are now available to detect plagiarism, and Springer Nature has provided us with an automatic check by iThenticate, integrated into Editorial Manager. From now on, every single manuscript submitted to Annals of Forest Science will be analysed for plagiarism, and all editors and referees will see the results of this check.

But are such cases of misconduct so frequent that editors really need to bother about them and make them a central issue in their activity? From our own limited experience, we might say “probably not”. We had to face one such problem during the last decade, with a paper from Annals of Forest Science being fully plagiarised, with only species names (and, of course, authors’ names) changed. The plagiarised paper was published in an obscure open access journal in Nigeria, which retracted it after evidence of plagiarism was handed over. The author of the plagiarism was identified as commonly practicing this exercise, and his scientific career was broken (though we do not know whether he cares about it…). Well, that makes a nice story, but is this the main problem science editors are confronted with? No, definitely not.

As underlined by Lex Bouter, and from the experience of many fellow editors, the problem lies not with such flagrant research misconduct, but with questionable research practices, which are much more common. Such practices combine “ignorance, honest errors and dubious integrity”. This, which can be qualified as “sloppy science”, is definitely the major problem editors and reviewers have to face. It is much more common and widespread than we would expect. As editors, we very clearly have to fight sloppy science and avoid publishing papers that fall into this category. Sloppy science stems much more from ignorance and honest errors than from typical misconduct, which makes it even more difficult to combat.

The list of questionable research practices is quite long, and this post does not aim to cover them all. It simply aims to draw the attention of editors, reviewers and authors to some aspects of “sloppy science” we should take into account: as a consequence, we should decline publication of such papers, or at least require that the authors revise the manuscript accordingly. The difficulty, of course, is that we are not in a black-and-white world but in the real one, with many nuances between recognised misconduct and responsible conduct of research, which makes it impossible to provide clear-cut, simple advice on how to deal with it. The flair and personal expertise of reviewers and editors are required. Our aim is to detect sloppy science while always assuming that it is mostly due to ignorance and honest errors, and only seldom to dubious integrity.

A few examples of frequently encountered aspects of “sloppy science”:

  1. Discussion and conclusions not supported by the presented data: this is quite a frequent situation, where the authors spend a lot of time discussing aspects of the question that are not substantiated by their results; it is easy to detect, and the answer to the authors is easy: “please stick to your data in the discussion before attempting to produce a more general conclusion”;
  2. Cherry picking of data and facts: quite frequent too, with authors reporting only the positive correlations among factors and avoiding any reference to the non-significant ones; the emphasis in the manuscript is put on the positive results, which may in reality be overwhelmed by a large number of inconclusive ones (the second simulation sketch after this list shows how easily such selective reporting manufactures false positives); only one answer to authors: “please provide an overview of your whole data set, best by making it available (see our blog page on data papers and open data)”;
  3. Unsuitable sampling and samples too small to draw any conclusion: this is unfortunately a very frequent case, and numerous manuscripts are based on small samples that simply cannot support any firm conclusion (see the power sketch after this list); suitable answer: “dear authors, return to the forest, the greenhouse or the climate chamber, and sample more data in support of the point you want to make. Please be aware that bad weather, greenhouse failure, freezer breakdown or whatever incident came across your research cannot be accepted as a valid excuse for publishing incomplete results, however painful this might be”;
  4. A variant of cherry picking is the long-recognised fact that negative results are only seldom published; only positive ones are. This creates important biases in some research areas. In this case, the answer lies with the editors: “please, editors, accept to publish negative results, provided the results are really negative, and not just non-significant because of erroneous or unsuitable sampling; Annals of Forest Science does”;
  5. One of the most frequent complaints we have about our authors is that they do not provide explicit research hypotheses, or at least explicit and focused research questions; as a result, the submitted manuscript describes results without any clear guideline and hence no clear-cut answer; even if we may understand that in some cases a purely observational approach may be relevant, in general experimental papers should be driven by explicit a priori hypotheses; answer to the authors: “please provide a limited set of explicit a priori hypotheses that guided your research; this is a prerequisite to publication of your manuscript in our journal”;
  6. A more subtle variant of the above, which is difficult to detect, is a change of the hypotheses (or sometimes just their first formulation) after the results are known; that is, presenting a data-driven hypothesis as an a priori hypothesis; this is sometimes called HARKing (Hypothesizing After the Results are Known); from our perspective, it seems difficult to detect in papers, unless the described data set does not fit the tested hypothesis at all, but do we often have real access to the data set?
  7. The use of unsuitable statistics, the lack of description of the statistical model used (instead, the authors state that they used a fancy and popular statistical software package) and other statistical weaknesses are unfortunately still too frequent; the answer then can only be: “dear authors, use a more suitable statistical model and seek some expertise in data analysis; please be aware that there are so many autocorrelations in your data set that your conclusion is not supported by the data”;
  8. The tendency to test different statistics until one provides a significant result, and then forget about all the earlier trials; this is probably not infrequent; for us, it remains difficult to detect, unless the authors provide a complete and honest description of their statistics (the second sketch after this list illustrates how quickly such repeated testing inflates false positives); answer: “dear authors, please provide an honest and precise description of the statistics you used and describe the statistical approach explicitly”;
  9. “Correlation is not causation” is hammered home many times in every training course on statistics and data analysis; yes, but… the reality is that we still struggle to avoid concluding about causes when only correlations are available;
  10. Unfair citations: this is an issue too, as in the age of the internet and immediacy, the seminal papers introducing still-valid concepts are sometimes forgotten, and only recent reviews highly visible on the web are cited; answer: “Dear authors, do not forget that research was already quite active in this field before the onset of the internet…”;
  11. Lack of access to the data sets in support of the claims made in the paper; we very strongly advocate that open access to data, whenever possible, is a prerequisite to good science; it will take a long and tedious effort to convince authors of the importance of this, but progress is underway (see the blog post devoted to open data in Annals of Forest Science); our answer: “Dear authors, please do provide access to the data set mobilised in support of your paper; by the way, data sets can be citable items provided with a Digital Object Identifier and deposited in a repository; ownership of the data set will remain yours (or your institution’s)”;
  12. … this could be an endless list, and we do not intend to provide a compendium of all potential aspects of questionable research practices.
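
To make item 3 concrete, here is a minimal simulation sketch in Python (the sample sizes and the effect size are our own illustrative assumptions, not values from any submitted manuscript): with five observations per group, a real, moderate effect is detected only about one time in ten, so a “non-significant” result from such a study tells us almost nothing.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def estimated_power(n_per_group, effect_size, n_sim=10_000, alpha=0.05):
    """Fraction of simulated experiments that detect a true effect of the given size."""
    hits = 0
    for _ in range(n_sim):
        control = rng.normal(0.0, 1.0, n_per_group)
        treated = rng.normal(effect_size, 1.0, n_per_group)  # the effect is real
        _, p = stats.ttest_ind(control, treated)
        if p < alpha:
            hits += 1
    return hits / n_sim

# A moderate effect (Cohen's d = 0.5) with only 5 samples per group...
print(estimated_power(5, 0.5))    # ~0.10: the real effect is usually missed
# ...versus a conventionally adequate sample of 64 per group.
print(estimated_power(64, 0.5))   # ~0.80: the usual target for statistical power
```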
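And as an illustration of items 2 and 8, a second minimal sketch (again in Python; the numbers of predictors, observations and simulated studies are arbitrary assumptions) of how cherry picking among many tests manufactures “significance” out of pure noise:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_obs, n_predictors, n_sim = 30, 20, 2_000
studies_with_false_alarm = 0

for _ in range(n_sim):
    outcome = rng.normal(size=n_obs)                     # pure noise
    predictors = rng.normal(size=(n_predictors, n_obs))  # also pure noise
    pvalues = [stats.pearsonr(x, outcome)[1] for x in predictors]
    if min(pvalues) < 0.05:  # report only the best-looking correlation
        studies_with_false_alarm += 1

# With 20 unrelated predictors, about 1 - 0.95**20 ≈ 64% of studies report
# at least one "significant" correlation, although no effect exists at all.
print(studies_with_false_alarm / n_sim)
```

A Bonferroni or similar correction, or simply reporting every test performed, removes most of this artefact; this is exactly why we ask authors to describe all of their statistical trials, not just the successful one.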

Our point is that, as editors of scientific journals and as reviewers, we have to be very careful about these matters; the difference between good and sloppy science is subtle, and relates not to the place of publication but to the care taken to gather the relevant data, analyse them properly and report on them as honestly as possible. This is the foundation of good science. Our duty is to use these criteria when assessing the suitability of submitted manuscripts for publication, to check whether the manuscripts avoid the pitfalls of sloppy science, and to provide the authors with advice and guidelines on how to avoid them. We certainly do not want to display the unreasonable behaviour of almighty editors and referees who take some sadistic pleasure in making poor authors suffer…

There is serious progress ahead, as well as a lot of work. The general credibility of science editing, and of science in general, may be at stake.

 
