Book review

“Unreliable” by Csaba Szabo – book review and excerpt

Csaba Szabo's book "Unreliable: Bias, Fraud, and the Reproducibility Crisis in Biomedical Research" - review by Zoltan Ungvari and excerpt.

The Hungarian-born pharmacologist Csaba Szabo worked for many years in the USA and has been a professor at the University of Fribourg in Switzerland since 2018. He has now written a book, titled “Unreliable: Bias, Fraud, and the Reproducibility Crisis in Biomedical Research“. It first appeared in Hungarian; the English-language version will be available from Columbia University Press in March 2025.

I read the book’s pre-publication draft and strongly recommend it. Here I publish a review written by Zoltan Ungvari, associate professor at the University of Oklahoma. It is followed by an excerpt from Unreliable, namely its preface.

It is worth mentioning that Szabo did his PhD in the UK, at the William Harvey Institute in London, under the supervision of the late Nobel laureate Sir John Vane. Another mentee of Vane, and successor to his position, was Christoph Thiemermann, who in turn trained Salvatore Cuzzocrea (who later became rector of the University of Messina in Italy). Both men have since retracted a large number of publications for data forgery, with more retractions expected.

Queen Mary and John Vane’s Cowboys

Welcome to the William Harvey Research Institute in London. Meet two proteges of its founder, the late Nobelist Sir John Vane: Chris Thiemermann and Mauro Perretti. Then meet their own rotten mentees, especially Salvatore Cuzzocrea and Jesmond Dalli.

In fact, Cuzzocrea’s first retraction was initiated by Szabo, for their joint paper Cuzzocrea et al 1999 (read July 2023 Shorts). When reading the book, you may try to guess who that Italian student code-named “Antonio” in Szabo’s lab really was.

Szabo’s own choice of action proves that having retractions on your publication record is not a problem per se, but knowingly covering up fraud is. Anyone can unknowingly fall prey to a cheating lab member or collaborator; in that case, retracting your own papers can be a sign of personal integrity. In this regard, Szabo joins the very short list of exemplary researchers like Frances Arnold (read here), Jonathan G Jones (read here) and the recently deceased David Wasserman (read here). Unfortunately, most scientists instead prefer to do everything to prevent retractions of their papers, even if they know the data therein is falsified. Many lie and deny. Some even deploy lawyers. Some try to stealthily influence research misconduct investigations.

The Voinnet investigator and the tricky issue of conflict of interests

Great scientists never have any conflicts of interests, and in the case of the investigation of the research misconduct by the plant scientist Olivier Voinnet, led by his Swiss employer ETH Zürich, this was also apparently the case. Voinnet was found guilty of misconduct and admitted image manipulations in many papers. Yet his science remained…

Now follows the (slightly shortened) book review by Zoltan Ungvari, and an excerpt from Unreliable provided in agreement with the publisher.

A disclaimer: Unreliable uses as illustrations numerous cartoons authored by me, for which I was remunerated by Csaba Szabo prior to the book’s publication. I am however not paid for publishing this review, nor was it part of the cartoon remuneration agreement.


A Bold Reckoning with Fraud and Failure in Biomedical Research

By Zoltan Ungvari

I have had the privilege of knowing the author of this book for nearly 20 years. Over the decades, our paths have crossed countless times as colleagues, collaborators, and friends. Some might question whether my perspective is biased due to our history, but I will gladly embrace that bias to highlight what I believe is a vital and timely contribution to one of the most pressing issues in biomedical science.

In informal discussions among scientists, the topic of the reproducibility crisis often rises to the surface. These conversations, born out of frustration and bewilderment over the state of research, its underlying causes, and the potential solutions, tend to be heated and bitter. Yet, they rarely yield anything concrete: no position papers, no joint statements, and certainly no comprehensive analyses published in book form. That is what makes Dr. Szabo’s effort so remarkable. He took these raw, often chaotic discussions as a starting point, delved deeply into the problem, and translated his findings into a thorough and accessible critique.

Although not a certified “scientific integrity expert” or a “science journalist,” Dr. Szabo is uniquely qualified to tackle this subject. As a highly accomplished figure in his research field (gasotransmitter biology and pharmacology), he has spent most of his career in the United States and understands its scientific ecosystem intimately. Now based in Switzerland, his position grants him independence and shields him from potential repercussions that might deter others from addressing these issues so openly and unflinchingly. While Szabo’s book does not make him a “whistleblower” in the traditional sense — his analysis relies entirely on publicly available information — it nonetheless took courage, determination, and a tremendous amount of hard work to bring this project to fruition.

The book delivers a hard-hitting exposé of the structural flaws, ethical lapses, and systemic failures in modern biomedical science. Its central thesis — that the scientific enterprise has been undermined by hypercompetition, financial incentives, and a culture of publish-or-perish — resonates as both a warning and a call to action.

The opening chapters establish the context for this crisis, detailing the pressures that young scientists face in academia. Szabo recounts his own journey — from a budding scientist in Hungary to a tenured professor — as an illustrative backdrop. This personal narrative grounds the book in real-world experiences, providing a rare behind-the-scenes peek at the grueling demands placed on researchers. Long hours, scarce mentorship, and ever-increasing pressure to secure and maintain grants create an environment ripe for ethical shortcuts. These pressures, Szabo argues, do not merely foster mistakes but actively incentivize misconduct.

A standout section of Unreliable is its examination of “scientific fraud and the fraudulent fraudsters,” where Szabo dissects high-profile cases such as the infamous work of Piero Anversa at Harvard. These accounts reveal how systemic issues — from pressures applied through hierarchical lab structures to weak oversight mechanisms — allow fraud to fester unchecked. Szabo’s focus extends beyond individual bad actors to a broader critique of the scientific ecosystem that enables and often tacitly rewards such behavior. Szabo thus frames the reproducibility crisis as a product of deeper institutional failings. Drawing on alarming replication studies, he highlights that over two-thirds of landmark cancer studies could not be reproduced. Szabo’s discussion of “p-hacking” and selective reporting illustrates how even seemingly mundane practices contribute to an unreliable body of literature. These practices, he argues, are not isolated aberrations but systemic issues perpetuated by journals’ obsession with novelty and institutions’ fixation on metrics.

The book’s fifth chapter, on the “broken publishing system,” is particularly scathing and expands on a hidden but pervasive problem: the toxic commercial interests of scientific publishing. Szabo takes readers on a tour through the wild west of predatory journals, where fraud and outright fabrication run rampant. He describes how these “publishers” operate as little more than profit-hungry enterprises, often preying on desperate or unscrupulous authors who pay to see their names in print. Even the supposedly respectable publishers don’t escape his scrutiny. Szabo points out glaring conflicts of interest and journal editors who seem more interested in raking in revenue than ensuring the quality of the science they publish. Then there are the “papermills”—factories of academic fraud that churn out fake studies like clockwork. His description conjures an almost absurd image of fraudulent science assembly lines, complete with “pay-for-authorship” schemes that tarnish the integrity of academic literature. It’s a system, Szabo argues, that thrives on quantity over quality, polluting the scientific canon with unreproducible or outright fictitious junk studies.

A particularly troubling theme in the book is the near-total lack of institutional accountability when it comes to scientific misconduct. Universities, research institutions, and grant-giving bodies, which should be the first lines of defense in maintaining scientific integrity, show little interest in investigating allegations of misconduct. Journals, too, largely fail in their responsibilities. Despite being the gatekeepers of scientific knowledge, they rarely perform retrospective analyses to uncover instances of image manipulation or other forms of data fraud in already published papers. The task of cleaning up the scientific literature has been left to a handful of self-appointed “lobbyists” for scientific integrity, working without institutional support or adequate resources and in fear of litigation. When misconduct is exposed, the consequences are often nonexistent or laughably mild, focusing disproportionately on scapegoats rather than addressing the systemic issues or holding senior figures—those at the top of the pyramid—accountable for fostering an environment where such behavior thrives. This culture of avoidance and misplaced penalties perpetuates the very problems Szabo highlights, making his call for systemic reform all the more urgent.

Yet, for all its criticisms, Unreliable is not without hope. In its final chapters, Szabo outlines a way forward, and this is where his tone shifts from indictment to a call for action. He proposes a series of reforms to bring the system back to its ethical and intellectual roots. Among these suggestions are stricter oversight and mandatory data sharing—ideas that seem obvious but require a major cultural shift to implement. Transparency in peer review, which Szabo likens to letting some sunlight into a dark room, also takes center stage. He also advocates moving away from individually-based grant systems, such as typical R01 grants in the USA, in favor of institution-centered, larger funding schemes closely tied to strict attention to reproducibility and translatability. It is hard to disagree with his general notion that drastic reforms are necessary. 

Perhaps his boldest proposal, though, is the criminalization of deliberate scientific misconduct, a concept that has been raised over the years but has never really gained traction. “Why,” he asks, “shouldn’t knowingly faking data be treated like any other type of fraud?” This idea is bound to ruffle feathers, but Szabo argues that the fear of real consequences might be the wake-up call the field desperately needs.

Szabo is unafraid to call out hypocrisy, yet his genuine love for science — and his hope for its future — soften the edges. Many readers will find the book an insightful and engaging read, while others may find the tone somewhat bitter or too combative. The book’s extensive use of case studies and statistical analyses lends it credibility, though at times the level of detail may overwhelm some non-scientist readers.

Thus, Unreliable serves as both an indictment of the state of things and a roadmap draft for reform. It is a must-read for anyone concerned with the integrity of science — whether they are researchers, policymakers, or simply informed citizens. Szabo’s work underscores a sobering truth: The current system stifles innovative science by rewarding bad actors with undue advantages, while honest researchers struggle within a framework stacked against them. Without urgent reform, these issues will only escalate, jeopardizing not just the future of biomedical science but its very credibility. The book has already made waves in Szabo’s home country of Hungary, where it was first published in Hungarian a few months ago. It has sparked critical acclaim and spirited debate, drawing attention from scientists, policymakers, and the broader public alike. Now, with its English-language release, it is hoped that it will do the same on a global stage.


Unreliable

PREFACE

Almost twenty years ago the American biostatistician John Ioannidis published an article titled “Why Most Published Research Findings Are False.”1 His considered biostatistical opinion is that most of the published scientific literature is incorrect and therefore—by definition—not reproducible. Ioannidis is not a crackpot. He is the leading expert in a field that some people call “science about science.” He devoted his life to studying publication patterns, biostatistical issues, authorship issues, scientific citations, clinical trial designs, and much more. His research focuses on the hallowed biomedical scientific literature: a massive body of original papers and review articles, which are considered the bread and butter of everyone working in biomedical research and clinical medicine.

This literature is massive: PubMed, the largest database of biomedical literature, contains more than thirty-five million papers; and Europe PubMed Central (Europe PMC) contains more than forty-two million papers, abstracts, and patents. This database grows by about one million new papers per year. Subsets of this immense collection are the starting point for every scientist who wishes to embark on discovering something new; the foundation of new drug discovery and development; and the basis on which a physician can find out if a drug or medical procedure is effective or harmful. If this body of information is faulty or unreliable, how can anyone conduct meaningful research?

Basic research is the foundation of all biomedical science. This type of research is typically conducted in a research laboratory, for example, using cells cultured in a Petri dish or animal models of various diseases. These model systems are our workhorses to study how fundamental biological processes work—for example, how cells become cancerous or how various cells in the brain or the immune system communicate with each other. Some of this information leads to translatable concepts that can be the basis of designing new medicines, which—if things go well—eventually become available for the treatment of patients. But what if the entire body of biomedical literature cannot be trusted? Seems like there is an elephant in the room. The precious few who dare to mention its name meekly call it the reproducibility crisis.

The magnitude of the reproducibility crisis had not been tested systematically until recently. When it finally was tested, the results were mind-boggling. In 2011, scientists at the German pharmaceutical giant Bayer made a splash with news that they could not replicate 75 to 80 percent of the sixty-seven preclinical academic publications that they took on.2 In 2012, the hematology and oncology department of the American biotechnology firm Amgen selected fifty-three cancer research papers that were published in leading scientific journals and tried to reproduce them in their laboratory. In 89 percent of the cases, they were unable to do so.3 Several systematic reproducibility projects have been launched subsequently. The “Reproducibility Project: Cancer Biology” selected key experiments from fifty-three recent high-impact cancer biology papers. Although the replicators used the same materials and methods as the original paper, 66 percent of the published work could not be replicated.4 Several other replication projects are currently ongoing, and as you will see later in this book, the results aren’t looking too good either.

If you think it’s bad enough that scientists can’t reproduce other scientists’ data, then consider this. It turns out that most scientists can’t even reproduce their own data. In 2016, the prestigious scientific journal Nature published the results of its anonymous survey.5 More than 1,500 scientists replied, and in the fields of biology and medicine, 65 to 75 percent of the respondents said they were unable to reproduce data published by others and . . . 50 to 60 percent said they had trouble reproducing their own previous research findings.6

The implications are scary: it looks like the published body of biomedical literature is unreliable. I attempt to get to the bottom of this matter in this book. I cover some of the questionable but rather commonplace practices in research that exist in most published literature (for example, lack of blinding, lack of randomization, and selective publishing). Problems related to the inherent unreliability of some of the methods and reagents used in research are also covered. Questionable—but once again, commonplace—statistical practices (such as outlier exclusion and “p-hacking”) are also covered. I look into the dark world of so-called image correction as well as intentional data manipulation—all the way down into the deepest pockets of scientific fraud, to the so-called scientific paper mills where entire scientific papers are conjured up literally from thin air for paying . . . well, let’s call them . . . customers.

It is important to find out what goes on behind the scenes. What pushes people to resort to dishonest methods and approaches? Most of the public is unaware of the immense pressures and perverse incentives that are commonplace in biomedical research. In this book I examine the responsibility of individuals working at various levels in a research laboratory, but I also expose the significant pressures that the entire biomedical scientific ecosystem—the scientific institutions, the granting agencies, and the publishing industry—places on each individual investigator.

I introduce some of the “science sleuths” (also called data detectives). Official ones, such as the Office of Scientific Integrity (OSI) at the National Institutes of Health (NIH), as well as unofficial ones: the dedicated private investigators who—often working in anonymity and in fear of litigation—devote their time to exposing scientific fraudsters and scammers.

Winston Churchill once said: “Democracy is the worst form of government—except for all the others that have been tried.” Similarly, the current system is the worst system to conduct research—except for all the others. The current system has produced major advances and will continue to do so. Some of the brightest minds on the planet work in biomedical science. It is the only “game in town.”7 It is possible to do good, valid science, and there are many investigators to prove it. Even though some observers conclude that the current situation is hopelessly broken, I disagree. Yes, the system is wasteful, redundant, and littered with “scientific garbage.” But maybe it is not beyond help. In chapter 6, I propose a few ideas to improve the situation.

Initially, I planned to cover a broader scope of subjects, from basic laboratory studies to clinical trials and drug approvals. I soon realized that basic research alone can easily fill a whole book. This is that book.

Finally, a personal note. I am not a disgruntled scientist who is desperate to take down the institutions of science out of spite. I chose a life in science at a very young age, and I have a fulfilling life in it. I don’t have a personal agenda against any of the scientists mentioned in this book. Even with all the dark phenomena detailed in here, I continue to believe in biomedical science.

I just think it can be made better.

  • [1] John PA Ioannidis, “Why most published research findings are false,” PLoS Medicine 2, no. 8 (August 2005): e124.
  • [2] Florian Prinz, Thomas Schlange, and Khusru Asadullah, “Believe it or not: how much can we rely on published data on potential drug targets?” Nature Reviews Drug Discovery 10, no. 9 (August 2011): 712.
  • [3] C Glenn Begley and Lee M Ellis, “Drug development: Raise standards for preclinical cancer research,” Nature 483, no. 7391 (March 2012): 531.
  • [4] Timothy M Errington, Elizabeth Iorns, William Gunn, Fraser Elisabeth Tan, Joelle Lomax, and Brian A Nosek, “An open investigation of the reproducibility of cancer biology research,” eLife 3 (December 2014): e21627.
  • [5] Monya Baker, “1,500 scientists lift the lid on reproducibility,” Nature 533, no. 7604 (May 2016): 452-4.  
  • [6] Perhaps the selection of the responders may not be fully representative to the entire scientific community: it is possible that people who are more concerned or frustrated with this matter responded to the survey. 
  • [7] So, please, don’t get me started on the topic of pseudoscience called “alternative medicine”. (At least, not in this current book.)

This article is excerpted from Unreliable: Bias, Fraud, and the Reproducibility Crisis in Biomedical Research published by Columbia University Press. Copyright (c) 2025 Csaba Szabo. Used by arrangement with the Publisher. All rights reserved.


Donate!

If you would like to support my work, you can leave a small tip of €5 here. Or several small tips, just increase the amount as you like (2x=€10; 5x=€25). Your generous patronage of my journalism will be most appreciated!

€5.00

7 comments on ““Unreliable” by Csaba Szabo – book review and excerpt”

  1. felixsauvage

    Can’t wait to read it!


  2. Michael Jones

    Start bringing up the “reproducibility crisis” in your departmental meetings and bring up concrete examples, and then see what happens to your career.

    Re: the book—Exciting! Looking forward to reading it.


  3. Aneurus

    The magnitude of the reproducibility crisis had not been tested systematically until recently. When it finally was tested, the results were mind-boggling. In 2011, scientists at the German pharmaceutical giant Bayer made a splash with news that they could not replicate 75 to 80 percent of the sixty-seven preclinical academic publications that they took on. In 2012, the hematology and oncology department of the American biotechnology firm Amgen selected fifty-three cancer research papers that were published in leading scientific journals and tried to reproduce them in their laboratory. In 89 percent of the cases, they were unable to do so. Several systematic reproducibility projects have been launched subsequently. The “Reproducibility Project: Cancer Biology” selected key experiments from fifty-three recent high-impact cancer biology papers. Although the replicators used the same materials and methods as the original paper, 66 percent of the published work could not be replicated.

    These numbers say it all, and have been known for more than a decade now.

    Yet we hear from science stakeholders that dishonest scientists are a vast minority of the total. Well no, that’s not what the numbers suggest. And that’s not what we, as sleuths, experience by randomly checking people here and there.


  4. Klaas van Dijk

    Thanks a lot for publishing this review and a part of the English version of this book.

    Recently, LOWI has once again advised [in Dutch] that Dutch universities can / must throw out complaints when the complainant publishes (parts of) his complaint, even when the case concerns research that is 100% funded by public money. LOWI is itself (fully) funded by public money, albeit indirectly. I still fail to understand why such rules do not exist in other sections of society that are partly or fully funded by public money. https://lowi.nl/advies-2024-21/

