We have all been there: you read a paper and wonder, how did this ever pass peer review? Who were these incompetent peer reviewers? The following email exchange gives some insight into the farcical quagmire that the traditional peer review process can be. It took place between the Editor-in-Chief of the Elsevier subscription journal Diabetes Research and Clinical Practice and a professor of physics and astronomy, who was invited to peer review a clinical trial study on gestational diabetes, his expertise assumed from some obscure “keywords”. Apparently any academic can be spontaneously invited to act as an Elsevier reviewer; actual expertise does not matter.

In the end, the indignant editor Antonio Ceriello, an Italian research clinician with an h-index of 80, appeared to threaten the physics professor with legal consequences from his own lawyer and from Elsevier’s legal department, should he not cease complaining about these editorial practices of recruiting inappropriate reviewers.

The academic who so staunchly refused to become an expert in gestational diabetes is Daniel Whiteson, a faculty member at the University of California, Irvine, USA. Whiteson is a particle physicist who studies experimental high-energy physics, using data from the Large Hadron Collider. All this is rather clear from his institutional webpage. That, however, did not stop Elsevier from checking whether he might still have other, hidden interests or hobbies. Whiteson was in fact invited to act as peer reviewer by these Elsevier subscription journals:

And, of course, by Diabetes Research and Clinical Practice, the official journal of the International Diabetes Federation. On November 16th, Whiteson received a peer review invitation for a paper about gestational diabetes biomarkers obtained from clinical trial data. The invitation came from an official Elsevier email address (EviseSupportATelsevierDOTcom) and was signed by Ceriello:

“Dear Professor Whiteson,

I would like to invite you to review the above-referenced manuscript. To maintain our journal’s high standards we need the best reviewers, and given your expertise in this area I would greatly appreciate your contribution.

I kindly ask you to give this review invitation the same consideration that you would want one of your own manuscripts to receive”.

The email provided links to access the editorial manager. The only condition was that he not share his peer review with anyone without permission from Elsevier. Had Whiteson wanted to, he could easily have reviewed that diabetes paper, perhaps even instructed the authors to cite some of his particle physics papers. Would anyone at least have checked his credentials had he rejected the paper?

Whiteson, who had had enough of Elsevier’s peer review shenanigans, wrote back, rather angrily:

“If your journal wants “high standards”, then you shouldn’t send review requests to people who are totally unqualified (me). 5 seconds on Google would tell you that.

Given the level of attention you are giving this critical element of peer review, I suspect your journal is garbage.

If you can’t do your job with appropriate diligence, don’t do it. You are harming science”.

Ceriello, using his private email address, replied:

“The words you used qualify you, who probably has not experience with the process of review selection”.

This was how the email exchange went afterwards:

Whiteson: “If you’re not willing to put forward the effort needed to do your job, step aside. Your sloppiness undermines the peer review process, and the credibility of science”.

Ceriello: “Why did I not find one paper with you as first author?”

Whiteson: “You should be ashamed of yourself. I’m reporting your sloppiness to your journal’s publisher”.

Ceriello: “I am now plenty of fear”.

incompetent referee

I approached Ceriello, who was appointed as EiC of Diabetes Research and Clinical Practice on July 1st 2016, and who is affiliated with two research institutions: the Institut d’Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS) in Barcelona, Spain, and the Department of Cardiovascular and Metabolic Diseases at IRCCS MultiMedica in Milan, Italy. The chairman of the “Diabetes and Cardiovascular Diseases” study group at the European Association for the Study of Diabetes (EASD) elaborated in his email to me:

“Any journal has a long list of potential referees, linked with several key words aiming to quick identify their fields of interest. The selection of potential referees is based on the use of these keywords and, generally, several potential referees are selected and invited. The number of referees invited needs, generally, to be enough large to have the real chance to have some positive answers, because people are asked to invest time without any remuneration and to keep the responsibility of evaluating the work of somebody else. Please, note that the keywords originate from the same that the potential referees have previously used in their papers or activities.

It happens sometime that, using such key words, are invited referees who are anyhow not expert of the field of the proposed paper to evaluate”.

So, Ceriello’s strategy to find the right reviewers for his diabetes journal is “keywords”. Not their publication record in field-relevant research. Not even their general association with that field. Never mind conflicts of interest, because the process never even reaches the stage of establishing appropriate scientific qualifications. Does a computer algorithm alone invite reviewers? Are there trained pigeons at Elsevier offices typing random things into computers with their beaks? Which keywords turned Whiteson into a qualified peer reviewer for diabetes (or cancer, or chemistry, or ocean engineering, or agriculture?), aside from his being a US professor of something? There are no other scientists of this name immediately searchable on the internet who might have been the correct addressees for that peer review; a mix-up can therefore be excluded.
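The failure mode Ceriello describes is easy to reproduce. Here is a minimal, entirely hypothetical sketch (all names, keywords and the matching rule are my assumptions, not Elsevier’s actual system) of how naive keyword matching can pair a manuscript with an unqualified referee: one shared generic term is enough to count as a “match”.

```python
# Toy illustration of keyword-based reviewer matching (hypothetical data).
# A single shared generic keyword, e.g. "statistical analysis", is enough
# to flag a particle physicist as a referee for a diabetes paper.

def match_reviewers(manuscript_keywords, reviewer_profiles):
    """Return all reviewers sharing at least one keyword with the manuscript."""
    ms = {k.lower() for k in manuscript_keywords}
    return [name for name, keywords in reviewer_profiles.items()
            if ms & {k.lower() for k in keywords}]

reviewers = {
    "particle physicist": {"Higgs boson", "statistical analysis", "LHC data"},
    "endocrinologist": {"gestational diabetes", "insulin", "biomarkers"},
}

paper = ["gestational diabetes", "biomarkers", "statistical analysis"]

print(match_reviewers(paper, reviewers))
# → ['particle physicist', 'endocrinologist']
# The physicist is "matched" purely on "statistical analysis".
```

Any real system presumably weights and ranks matches rather than treating them as binary, but as long as generic methodological terms sit in the same keyword pool as disease names, this kind of cross-field false positive is built in.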

Ceriello concluded his email to me with:

“It was my intention, and still it is, to end this story. However, it seems that this is not the case for Prof Whiteson.

I suppose you are in contact with him. Please, advice him that if he is still interested in continuing, the next step will be for him to explain to the legal department of Elsevier why he defined, in writing, the journal “garbage”, not being qualified for this (a physic professor evaluating a scientific medical journal…) and to my personal lawyer why he was, again in writing, so offensive with me, without any evident reason”.

When I asked him whether he and Elsevier were threatening Whiteson with legal action for libel, Ceriello’s final message was: “Threaten? No advice of where he can go“.

So I wrote to Elsevier, specifically to Andrew Miller, Senior Strategy Manager for Elsevier STM journals, who in 2016 penned the press release announcing Ceriello’s tenure as EiC of Diabetes Research and Clinical Practice. I received no reply, but this is most likely because Elsevier has declared me a “toxic individual” to its editors and announced it would not communicate with me again (read here). Neither did Elsevier ever write to Whiteson, even if only to apologise for what clearly looks like a legal threat issued on their behalf by their academic editor Ceriello.

One wonders: is it only Elsevier that invites peer reviewers at random like this? At Frontiers, it is not uncommon that enthusiasm for a topic counts for more in a peer reviewer, or even an academic editor, than actual scientific expertise. And an article about nanotechnology I previously published suggests that if any actual peer review is happening at Elsevier, the American Chemical Society, the Royal Society of Chemistry and others, it is a total mess. Maybe those awful nanotechnology papers were peer reviewed by bored Renaissance historians or eager polymath psychologists.


Donate!

If you are interested in supporting my work, you can leave a small tip of $5 here. Or several small tips, just increase the amount as you like. Your generous patronage of my journalism, however small it appears to you, will greatly help me with my legal costs.


20 thoughts on “How Elsevier finds its peer reviewers”

  1. “bored Renaissance historians or eager polymath psychologists.”

    As someone who has published on Renaissance history of thought, and (more frequently) in psychology, I want to reassure everyone that it’s not me who peer-reviews those nanotechnology papers.


  2. There is a tragic side to this tale. Two years back when the Canadian health research funding body (CIHR) was run by an incompetent political hack, he decided to buy Elsevier’s referee matching system to find referees for grant applications. It cost so much money that they eliminated face-to-face review panels and made the entire process “virtual”, or more accurately left it to a chatroom format not much different from the one we are using here. That is how a random collection of university librarians, lab techs and no-hoper researchers who had never held a grant in their lives ended up reviewing two entire competitions, with disastrous results. Another fact about the Elsevier system that is worth noting is that about 15% of researchers in the CIHR competition were sent their own grants to review. Some did it, too.


  3. As mentioned in the reply above, the procurement process for CIHR reviewer matching was atrocious. It resulted in a program that was so not fit for purpose and cost so much that it triggered an external audit (http://www.cihr-irsc.gc.ca/e/49862.html). There is a lot of material left out of the public documents but it was clearly a series of cascading mistakes and incompetencies. The idea for a matching algorithm was relatively easy to understand – though inherently flawed. The idea was to build an automated database of keywords from the publications and on-line data of researchers and then to screen submitted applications for instances of those words to develop possible matches. This was called Research and Reviewer Matching Solution (RRMS). It was a dating app for scientists. Elsevier presumably rebranded its reviewer search engine database and stuck on a multi-million dollar price tag. Unfortunately and predictably, it appears the product that was bought needed a lot of adaptation to CIHR’s purpose (who knows where the problems lay, likely on both sides), but one issue was that the system was incapable of understanding French. The system basically failed at first operation and staff scrambled to identify reviewers for the deluge of grants. This is where the librarians, admin assistants, etc came in, since the agency resorted to allowing eligibility to anyone named on a previous grant.

    The fundamental conceit here was that peer review of grant applications could be delegated to a series of micro-transactions and that errors inherent in peer review would be compensated by numbers. That is, increased scatter would be offset by more data and would self-correct. It took two competitions (with a major overhaul and attempted mitigation between them) to abandon the experiment. Of course, that was at the cost of hundreds of researchers losing funding.

    I wrote a letter to an international review committee convened to evaluate the peer review debacle; read it if you’d like advice on how not to run a funding agency:
    View story at Medium.com

    It can be difficult to identify reviewers and I’m sure most publishers try to make the task easier. But such systems depend on the quality and relevance of input data and such listings are far too large to allow automated assignment. I recently had a case of handling three reviews where one was glowing and the other two identified a series of serious flaws. I looked further into the glowing reviewer (who had been one of those recommended by the authors, but had a very common name so I hadn’t done a lot of fact-checking) and after 30 mins of filtering found the person had published several papers with the submitting authors within the past 5 years. That was my error in assignment, but it was a clear example of insufficient data and a clear deception by the authors. The manuscript was rejected. This is one reason I like ORCID.

    Lastly, the reputation of a journal is dependent on its fairness and quality. It sends precisely the wrong message when crappy papers get through the system and are published. The fact that journals and their editors have the perverse incentive that their profitability/existence depends on getting material published so as to charge publication fees creates an inherent tension that too often favours money over quality.


    1. This type of matching is something we encountered and dealt with multiple times in our projects (web tools for science networking etc). Usually we need to match researchers with content which is relevant to them. But each situation (problem) is different. So you need to use different approaches to solve them. Keyword matching is only one possible approach.
      In any case, to build a solution which does it well, I would estimate the cost in low tens of thousands USD, with less than 10 thousand USD maintenance per year. If they paid more than that – they were being ripped off.


  4. Hi Leonid
    Are you sure that Ceriello guy is a member of the species Homo sapiens?
    Those garbled replies do sound strangely like a chat bot.
    Not that it matters….
    Cheers, oliver


  5. “a physic professor evaluating a scientific medical journal…”

    Physic means medicine, so maybe he thought he was a medicine professor instead of a professor of physical sciences… =)

    Seriously though, Elsevier’s practices over the years have already been atrocious, especially concerning the subscription fees. But reviewing practices as well? It does render their published papers a tad suspicious.


  6. I get very off-topic review requests from MDPI. I consider them spammers, because they did not honor my request to be removed from their reviewers list.
    I guess the paper I was asked to review cited me. But that does not mean I am in the same domain, I get out of domain citations, too.
    Elsevier requests were recently only borderline. I was probably qualified to review, but I declined because it is too far from my personal interests (and the abstract already looked like a rather bad paper, too…)


  7. You have to (grudgingly) admire their business plan. Publish any fraudulent crap as long as the authors (including Tiwari, Sharma and their ilk) pay. Charge through the nose for your bogus referee matching software to cripple national science programs to the point where the remaining desperate researchers are willing to pay to publish in your crap journals. Brilliant in a Trumpishly nasty sort of way.
    Personally, I won’t submit to or review for Elsevier journals, and I avoid citing papers that appear therein, which nobody minds because they are all paywalled up the kazoo. Bloody shameless outfit.


  8. Whatever the final outcome of the Ceriello/Whiteson fight, the root of the problem is how to motivate qualified reviewers for the deluge of manuscripts submitted to the journals. See for example the following email we received 5 months after submitting a manuscript to an Elsevier journal (note: I co-authored the work, but was not the author for correspondence; the ms. was submitted in June 2017).

    ====================================

    Ms. Ref. No.: [reference redacted]
    Title: [Ms. title redacted]
    [Elsevier’s journal redacted]

    Dear Dr. [name redacted]

    I regret to inform you that I have been unable to obtain referees responses for your manuscript. This is an unusual but occasional occurrence. In such circumstances rather than to detail the publication process further I am returning the manuscript to you in case you wished to direct it elsewhere. I hope that you will understand this situation and I thank you for your patience and for giving us the opportunity to consider your work.

    Yours sincerely,

    [name redacted]
    Editor
    [journal redacted]

    ====================================


  9. Editors don’t need to select any reviewers. Many journals don’t accept submissions without the suggestion of 3-5 reviewers, and lazy editors simply use these suggestions. There were several stories recently about authors who wrote reviews of their own papers using fake e-mail addresses of “suggested reviewers”.


  10. And is it fair that an editor alone decides, secretly, on the basis of feedback from 2 or 3 unknown reviewers whose expertise in the subject we cannot verify, whether our work will be published or not? Wouldn’t it be more democratic and rigorous if both the peer reviews and the original data were available to the public? Wouldn’t a public debate about the contents of a scientific work be a healthier and more transparent one?


  11. Keywords? It is impossible to catch a particle physicist using a set of keywords for diabetes. So one can only conclude that their ‘system’ is bogus, though that didn’t stop them selling it to the Canadian government for a tidy profit.
    It doesn’t seem so difficult to meet review requests, which is why I find it strange that there are such delays. I always review on time and accept almost all requests from reputable organisations. The latter of course excludes the entire for-profit sector….


  12. I just did a quick search for Whiteson, diabetes, and UC Irvine and I found a researcher named Katrine Whiteson who does diabetes research as a faculty member at Irvine (https://news.uci.edu/2016/05/31/making-sense-of-microbiomes/). So…I suspect this was the simple matter of a typo! While I don’t blame Dr. Whiteson for getting annoyed about spam email, perhaps he should take his own advice and do a simple google search (and it only took 5 seconds for me to find it!).

