Guest post

Lack of transparency in ERC funding decisions, by Shravan Vasishth

Academic research depends on funding, and funding agencies, both public and charitable, play a crucial gatekeeper function in deciding who will go on to continue researching or even working in science, and who will not. With great power comes great responsibility. Unfortunately, funders traditionally end up serving the interests of a select scientific elite, confusing the needs of those few with the greater good of science as such. Money is dumped on the biggest pile: either on established star researchers or on their privileged academic scions. In this zero-sum game of science funding, many early career researchers see their grant applications rejected and are forced out of academia. The logic seems to be that this research proletariat would have spent the money on booze and candy anyway, while the high elite will invest it wisely to produce great science. Or whatever the funders, advised by that very elite, perceive to be great science. The guest post below by Shravan Vasishth, professor of psycholinguistics and neurolinguistics at the University of Potsdam, Germany, tells of his experience as a peer reviewer who ran up against this attitude at the most prestigious EU research funder, the European Research Council (ERC).

The assumption of elite-oriented funders like the ERC, given the traditional standards of research metrics, is this: with those scientist “geniuses” who once produced papers in Nature or Cell, one would be wise to invest another couple of millions, so they might produce another one or two of those cherished high-impact publications. The question of whether any of that stellar research is reliable or reproducible is rather irrelevant: in fact, the German central public funder DFG has just now declared irreproducibility to be a very minor issue which says nothing about a paper’s real scientific quality. Other science funders, however, have started to understand the importance of research reproducibility and introduced mandates for open data, in order to invite the wider scientific community to peer review and verify the published research these funders support. The Gates Foundation enforces unconditional data sharing and allows no backdoors or opt-outs, which are unfortunately a ready option with the European Union and its Horizon 2020 funding programme.

Research integrity, or, more importantly, the dangerous shortcomings in it, is yet another issue. Large funders are able to exercise their gatekeeper function to sanction bad science and misconduct. If they choose to do so, their grip on cheating scientists is significantly stronger than that of any journal or even academic employers like universities (in this regard, see this older opinion piece of mine). The European molecular biology organisation EMBO, which funds early career researchers and young investigators, proved itself very well able to bring the hammer down on those who manipulate data (see cases here, here and here). Elsewhere, again and again we hear of some long-discredited zombie scientist getting another piece of the funding cake, while their honest peers go hungry. Certain more traditional (or elite-oriented) funders seem, however, to think that research integrity is for losers, who should blame their pathetic lack of creativity for never publishing in the top journals.

The ERC certainly seems to toe this line. It keeps awarding money in chunks of €2-2.5 million to well-connected scientists whose appalling research integrity record is a secret to no one, and even continues to do so after those ERC grant recipients have had to retract papers or were officially found guilty of data manipulation and research misconduct by their host institutions (see examples I present here). Funnily enough, the ERC does have a “Standing Committee on Conflict of Interests, Scientific Misconduct and Ethical Issues”, but it is apparently loath to use it properly, as this bogged-down case from Spain demonstrates. Such an attitude damages not only the ERC’s own reputation, but also that of the many honest scientists it funds.

Now, here comes one peer reviewer’s own experience with the ERC. Their press office announced to me that it would issue a comment, which I will then provide as an update to this article. The following account was written so as to prevent the identification of the ERC grantee in question. Of course, if the ERC had any concept of transparency or accountability, all peer reviewer reports of funded projects would be published anyway.

Lack of transparency in ERC funding decisions, by Shravan Vasishth

I have been reviewing for journals and funding agencies since 2002, and have done perhaps 150 reviews so far. Something I really like about journal reviewing is that in most cases, the journal sends me the reviews of all the other reviewers, along with the action editor’s decision. This is extremely useful feedback for a reviewer to get, because it helps one understand how one’s own subjective judgement about a paper stacks up against other reviewers’. Was I too harsh? Was I too lenient? Did I miss something important in the paper? All these questions can be answered to some extent if one sees the rationale for the final decision on a paper, plus all the reviews.

With funding agencies, the situation is different. All funding agencies I know of (NSF, Swiss NSF, NWO, ANR, etc.) only tell the reviewer what the final decision was, and sometimes not even that. A case in point is the European Research Council’s Starting/Consolidator/Advanced Grant funding scheme. I have reviewed for the ERC several times now. The ERC hands out millions of Euros to researchers, and I am sure that most of it is money well spent. But given the reviewing I have done, I am not sure that the ERC is paying attention to detail.

An example is a review I did for the ERC. This was a proposal for an advanced grant. I had several problems with the proposal, the major one being that it was just a minor incremental advance over what the researcher had previously been funded for. Incremental advances are something the ERC explicitly does not fund, and something that, in my opinion, was definitely not worth 2 million Euros of funding. More seriously, I took the trouble to read the PI’s papers and found several serious mistakes in their work. For example, in one paper they write “t=1.24”, with a p-value well below 0.05. That p-value is a mathematical impossibility: there is no way that you can get a p-value that low with a t-score of 1.24. I explain this next.

A t-value is a number expressing how far the sample mean is from the hypothesized mean, standardized by the standard deviation of the sampling distribution of the sample mean (the standard error, or SE). If the hypothesized mean is mu0, the sample mean is xbar, and the standard error is SE, then the t-score is defined as t = (xbar – mu0)/SE. If the sample mean is far from the hypothesized mean (whether greater or smaller), the t-value will be far from 0; if the sample mean is near the hypothesized mean, the t-value will be near 0.
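
To make the definition concrete, here is a minimal sketch in Python of the one-sample t-score computation. The nine data points are invented for illustration (the actual data from the paper in question are not available); only the formula matters:

```python
import numpy as np

# Hypothetical sample of n = 9 data points (as in the paper discussed here);
# the values themselves are invented for illustration.
x = np.array([0.3, 0.5, 0.1, 0.4, 0.6, 0.2, 0.5, 0.3, 0.4])
mu0 = 0.0                             # hypothesized mean
se = x.std(ddof=1) / np.sqrt(len(x))  # standard error of the sample mean
t_score = (x.mean() - mu0) / se       # t = (xbar - mu0) / SE
print(t_score)
```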

One can determine the probability of obtaining the observed t-value or a value more extreme by plotting the t-distribution and computing the area under the curve that lies beyond the observed t-value. The shape of the t-distribution is roughly Gaussian, but it has somewhat fatter tails. The exact shape of the t-distribution for a particular data-set is determined by the sample size of the data. The relevant t-distribution then is parameterized by a number called the degrees of freedom, and this number is n-1, where n is the number of data points. For example, in the paper I mention above, the number of data points was 9, so the relevant t-distribution is t(8).

Given this t-distribution, if we observe a t-value of 1.24, we can calculate the probability of seeing a value like 1.24 or a value more extreme (in either direction) by adding up the area under the curve to the left of -1.24 and the area to the right of 1.24, in a t-distribution with degrees of freedom 8. The figure below shows the two areas we have to compute; the vertical lines mark -1.24 and 1.24 in the t(8) distribution. The t-distribution stretches from – infinity to + infinity, but I just show a truncated t-distribution here.

Once we have this probability (called the p-value), we can report the probability of observing an absolute t-value like 1.24 (or something more extreme) under the assumption that the hypothesized mean mu0 is the true mean. If this probability is below 0.05, we conventionally say that the hypothesized mean is unlikely to be the true mean, so we reject that hypothesis. If the p-value is larger than 0.05, we fail to reject that hypothesis. [More teaching material here, -LS]

[Figure: the t(8) density, shown over a truncated range, with vertical lines marking -1.24 and 1.24; the two shaded tail areas beyond these lines make up the two-sided p-value.]
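
A figure of this kind is straightforward to reproduce. The following is a minimal sketch using matplotlib and scipy (assumptions of convenience; any plotting and statistics libraries would do), with the cutoff 1.24 and the 8 degrees of freedom taken from the example above:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import t

df, t_obs = 8, 1.24
xs = np.linspace(-4, 4, 400)  # truncated range; the true support is infinite
ys = t.pdf(xs, df)

plt.plot(xs, ys, color="black")
plt.fill_between(xs, ys, where=xs <= -t_obs, alpha=0.5)  # left tail area
plt.fill_between(xs, ys, where=xs >= t_obs, alpha=0.5)   # right tail area
plt.axvline(-t_obs, linestyle="--")
plt.axvline(t_obs, linestyle="--")
plt.title("t(8) density; the shaded tails sum to the two-sided p-value")
plt.show()
```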

In the above case, where we have a t(8) distribution and the observed t-value is 1.24, the corresponding p-value is 0.25. Instead, the authors reported a p-value of less than 0.05. That is an impossibility. This takes exaggerating your results to a whole new level. It also shows that researchers are so obsessed with getting low p-values that they invent alternative facts in order to get that to happen.
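
Readers can verify this number themselves; for instance, with scipy’s t-distribution (any statistics package will give the same answer):

```python
from scipy.stats import t

# Two-sided p-value for t = 1.24 with 8 degrees of freedom:
# twice the upper-tail area (the survival function, 1 - CDF).
p = 2 * t.sf(1.24, df=8)
print(round(p, 2))  # 0.25 -- nowhere near "well below 0.05"
```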

This wasn’t an isolated example; I had found several such mistakes in the PI’s papers. For me, this completely disqualifies the researcher from getting 2 million Euros. The researchers were literally making up their p-values, and nobody (certainly not the reviewers and action editor of the paper) noticed. I doubt that the researchers were engaged in scientific misconduct; they were merely careless, or (more likely) just incompetent. But as Andrew Gelman says, any sufficiently crappy research is indistinguishable from fraud. At the very least, this kind of work needs to be discouraged. A strong signal needs to be sent to such researchers that they are not going to get millions of Euros unless they clean up their act.

Despite these—to my mind—serious objections, the ERC funded this person. The reason is not clear to me, but I suspect that the PI was simply too famous not to get funded. This is a very general problem that needs to be addressed. When a famous scientist, or a scientist from a brand-name university like Harvard, MIT, Stanford, Cornell or the like, submits a totally crappy paper to a top journal, it is more likely to get published. For example, as Tomkins et al (2017) recently noted:

“Once papers were allocated to reviewers, single-blind reviewers were significantly more likely than their double-blind counterparts to recommend for acceptance papers from famous authors and top institutions. The estimated odds multipliers are 1.76 and 1.67 respectively, so the result is tangible”.

If you are from a less famous university, your best bet for survival is to cultivate the editor-in-chief of a major journal by schmoozing with them on Facebook, or in person at conferences. I have personally seen several cases of the editor-in-chief being from the same university, or even the same department, as the author of a paper. In such cases there is a clear conflict of interest, but it is often ignored.

I hope that something similar is not happening at the ERC. But how would we know if it is? In the case of journals, we know, because as a reviewer, I see the decision of the editor, and I see the other reviews. I know how crappy the paper really is, and I can see when the editor overrode all the objections of the reviewers because he/she is best friends with the author. Similarly, I have seen action editors refuse to accept papers that go against the theory of their friend or colleague. All this is out in the open, and that’s great, because then we can identify the unethical people. At least they are doing it openly; I can respect that a little bit.

The ERC needs to similarly become more transparent and accountable for its decisions. It should (a) release all reviews to the reviewers, and (b) release the panel’s rationale for deciding to fund or not to fund. Instead of providing this level of detail, this week I received a letter from the President of the ERC containing an ungrammatically constructed sentence, which tells me the following [full letter here, -LS]:

“The European Research Council would like to thank you for your assistance as external referee in the scientific evaluation of proposals submitted to the ERC Advanced Grant Call…. Your reviews were very much appreciated by the evaluation panels, as it greatly helped it in their task to identify the truly excellent proposals submitted to these calls.

For an external reviewer, it is not always easy to conform the expression of your opinion to the panel’s demand. Also, we are conscious that providing reviews is a time-consuming task with often only little direct reward. This is one more reason for us to express the gratitude of the European Research Council for your contribution to the academic life.”

What does the president mean when he says: “For an external reviewer, it is not always easy to conform the expression of your opinion to the panel’s demand”? What do the phrases “conform the expression of your opinion” and “the panel’s demand” mean?

I would like to see more details: specifically, why the panel disregarded my verdict and the problems I raised with the proposal and with the PI’s previous work. I am willing to accept that I judged the researcher too harshly, and that I might indeed have been wrong to reject the proposal. However, I would like to learn from the other reviewers’ perspectives. This opportunity was never given to me.

These funding committees have a tough job to do, I understand that. They have limited resources and they have to use their best judgement to make their choices. I am fine with all that, and I am fine with the subjectivity of the decision-making process; after all, my own decisions are subjective and biased too. However, the lack of accountability of these decisions is a big issue. When a reviewer provides a review for a proposal, the funding agency should release all reviews and the rationale of the funding committee to each reviewer.

Incidentally, when I posted my complaints about the bland and incomprehensible message from the President of the ERC on Twitter, I heard from the ERC the same day. They claimed to have no knowledge of any letter sent to me:

“Dear Prof Vasishth, we noticed your tweets about ERC review process earlier today. We could not identify any letter or email that ERC president sent to you, so perhaps there is some confusion. If you have any comments, further questions, please send them by email to ERC-PRESS@ec.europa.eu. We’ll forward your message to the right person”.

In response, I sent them back the letter they had emailed me. It must have been sent by an automatic mailer, with no human intervention. What’s surprising is that they can’t find the automated email they themselves sent. If I hear from them, I will post their response.


Update 10.05.2017. ERC did not reply to me as promised (yet), but they did write to Shravan Vasishth. Everything is now perfectly clear and understandable. Or maybe not:

“Dear Professor Vasishth,

The letter you are referring to is a standard communication sent to all external reviewers of ERC proposals. Over the years we engaged more than 10,000 such reviewers. The purpose of this letter, if one reads it fully, is to express our gratitude for the work, which is essential to the functioning of the ERC, though usually very complex, time-consuming, and with little direct reward.

The complexity of the external reviewers’ task includes the fact that it is not always easy for them to draft a review in a way that can fully address the needs, questions and concerns that the panel members may have. This is a difficulty for any external reviewer in any evaluation process. The sentence you pointed out was perhaps formulated in a way that was not entirely clear for the readers, but we hope that this explanation brings more clarity.

Let us add here that it is a common practice in major funding agencies to share the evaluation reports among all panel members, but not with external reviewers. This approach guarantees both the necessary transparency, but also the necessary confidentiality of the work of panels. Remote referees are called upon by the panels for their specific expertise to help panel members to judge proposals. They do not take part in the discussions of the panels and don’t have access to other proposals being evaluated.

We hope to have satisfied your request and we thank you again for the […] occasions in which you have served as external reviewer for ERC proposals lately.

Best regards,

ERC press office”

21 comments on “Lack of transparency in ERC funding decisions, by Shravan Vasishth”

  1. Sounds familiar. Try Erwin Chargaff’s “In Praise of Smallness”: How can we return to small science?

  2. Great link, thanks!

  3. Pingback: Boletim de Notícias, 27/abr: Humanos há 100 mil anos nas Américas | Direto da Ciência

  4. Perhaps this article gives you an idea of what “conform (the expression of your) opinion to the panel’s demand” means.
    https://www.lifescience.net/news/661/funding-projects-with-european-commission-grants/
    It is to be taken literally. The project is already chosen, and you are advised to write your review accordingly.

    I find it amazing, the hypocrisy of the EU/EC. They preach open science. Article 13 of the EU Charter even states: “The arts and scientific research shall be free of constraint.” How the incompetent and corrupt EC bureaucracy is not in violation of this article is a mystery to me.

    For open science advocates, it is important to understand that this is the most critical issue of all. Everyone depends on research funding and unless transparency of funding allocation processes increases, other aspects of open science simply will not work.

    • Ana Pedro

      Is this really surprising?
      Nevertheless my apologies to a few scientists who really deserved getting an ERC grant

  5. “An example is a review I did for the ERC. This was a proposal for an advanced grant. I had several problems with the proposal, the major one being that the proposal was just a minor incremental advance over what the researcher had been previously funded for. Incremental advances are something the ERC explicitly does not fund, and something that, in my opinion, was definitely not worth 2 million Euros of funding. More seriously, I took the trouble to read the PI’s papers and found several serious mistakes in their work. For example, in one paper they write: “t=1.24“, with a p-value well below 0.05. That p-value is a mathematical impossibility. There is no way that you can get a p-value that low with a t-score of 1.24.”

    Why not disclose the problematic paper?
    People should know.

    • paper easily identified in pubmed by searching on the t value

      • Still the other papers and mistakes were mentioned. It would have been good if these reviews were detailed and publicly available.

        Although, I feel most scientists have to conform to another non-optional social convention: that after a paper is published, it should not be criticized or reviewed (unless it’s really completely wrong).

    • I criticize a lot of papers publicly, also incorrect analyses. See my home page.

      My beef here is not with this particular researcher who got the ERC award, and not even the research question. There are many other researchers like this one doing strange stuff (just read Andrew Gelman’s blog). My beef in this post is with the ERC decision process, which is flawed. Clearly, the ERC committee members don’t understand why it’s serious that a nonsensical p-value appeared in the paper I mention, otherwise they would have taken my review seriously. I would like to know how they make these funding decisions. It’s possible they had good reasons to fund this proposal, I could have been overly harsh. I can’t judge the extent of my misjudgement because I never saw the committee’s documented decision and the other reviews.

      In linguistics and psychology, it’s a low stakes game. Our mistakes will not kill anyone. But when the ERC starts to display lack of understanding about statistics in the medical field, for example, this is worrying.

  6. Following up on the ERC’s response, I guess my main problem here is this: how do I, as a reviewer, evaluate the panel’s competence to make a decision? It is important for me to know that a decision was made on good grounds. How can I know? I won’t just trust the ERC to make a good funding decision, because I have seen too many bad decisions (Human Brain Project, many others).

  7. “Secrecy breeds incompetence.” Julian Assange

  8. Frédéric

    Thank you to both authors for the post and their integrity. Like we say in French “On ne prête qu’aux riches.” (Only the rich get richer.) I have always been amazed (and pissed off) by how often conformism is met in the scientific community.
    The post actually reminded me of some occasions where I felt like a spectator watching self-important people perform amongst themselves while pretending that others had a word to say. I attended several workshops organized by prominent senior scientists, held at an elite institution. Although the workshops were intended for junior scientists like students and postdocs, most speakers clearly came not to educate, but to impress the first row of people who had invited them. Concepts, results and methods whose meaning could only be grasped by the latter people (who were in fact already convinced) were shown. What a waste of time and money. Later, on one occasion, the attendees received a message from the organizers asking why the questions never came from the students. Well, like with the ERC, the problem is that they were unable to see that there was a problem with the way they organize things, and suggested instead that the students were ungrateful idiots who wasted their effort and the time of such great colleagues. It is hard to believe that that kind of conformist and submissive behavior leads to breakthrough science.

  9. Ana Pedro

    If scientists were obligated to create wealth and jobs by developing useful applications for society, this and other types of issues wouldn’t be a problem

  10. From Latvia

    ERC is a typical product of neocolonialism

  11. Pingback: Janine Erler dossiers which ERC does not want – For Better Science

  12. Pingback: Does ERC help cheaters pay protection money? – For Better Science

  13. Pingback: Fousteri affair: Dutch integrity thwarted by academic indecency – For Better Science

  14. Pingback: New ERC President Mauro Ferrari was partner of Texas cheater Anil Sood – For Better Science

  15. Pingback: Gradito al regime - Ocasapiens - Blog - Repubblica.it

  16. My problem with ERC grants is that there does not seem to be any transparency in the procedures, and that when problems occur, there is either no one to contact, or the person to contact tells you that in the end it is about national laws and that they cannot help. I had two bad experiences with ERC projects and ended up not understanding why post-docs in the humanities are funded through them. I can accept that a scientist needs to frame a project very solidly, but in literature or philosophy, you cannot ask a scholar who is applying for a post-doc within your ERC project to write in three years a monograph on a topic which is actually one step in your own larger idea. Even more so when the humanities are also becoming hyper-specialized. For a recent interview for an ERC post-doc, I first answered a call for projects which described a project and asked for three referee letters (not just the emails of the referees), and only after I was “shortlisted” did I receive the actual description of the project (from the ERC application obviously, and marked “confidential”). This description had almost nothing to do with the one in the call. The shortlisted candidates were also asked to prepare a 10-minute presentation. I call this brainstorming, and I wonder how much of our intellectual property will be respected, when an ill-defined project is actually interviewing 5 candidates with very different backgrounds!! In addition to that, the PI had in fact been directly awarded a new ERC Consolidator grant, immediately following a previous ERC grant. How much freedom does that leave to the results of the participants?
    And do not look for anyone at the ERC to contact, there is no email to be found when you want to ask if this is all very legal…
    In comparison with the bad ERC experiences, I have enjoyed two Marie Curie fellowships, where my rights as an employee did exist. I would suggest that ERC grants be used only to fund PhDs and expert technical staff. It is ridiculous to lure post-doctoral researchers in the humanities with them, and the money would be better employed funding positions in research agencies at the national level.
