In medicine, academic performance is evaluated quantitatively, by the sheer number of papers. Promotions are granted according to publication output, often counted in the hundreds. Doctors love to throw around sentences like “I have more than 300 papers”, or 400, or 500, which is meant to put their clinician colleagues in their place. Such a high-throughput publishing culture relies first of all on the system of “honorary” authorships, i.e. people utterly unrelated to the actual research become co-authors solely by virtue of their higher rank in the hierarchy or their being friends or even family. Other questionable tactics are salami publishing (where even the tiniest dataset or analysis is stretched and re-used again and again across several consecutive publications) and good old self-plagiarism, or text re-use. To avoid being busted for duplicate publishing, clever doctors combine both methods to achieve some variation between their overlapping publications. At the end of the day, where others would publish only one measly paper, these tricksters get two, three or more. Guess whose publication list will look more impressive, and who will climb the academic career ladder. Another danger of self-plagiarism: it can lead to “proper” plagiarism and poor-quality research. Left untreated in one scientist, it also becomes contagious.
The US-based Croatian radiologist Hedvig Hricak is a master of this strategy (see my earlier report on self-plagiarism), in fact she is so good that her junior colleagues seem to have been learning it from her. Hricak leads the radiology department at the prestigious Memorial Sloan Kettering Cancer Center (MSKCC) in New York. Despite her previous retraction for text and data re-use, as well as her solid record of self-plagiarized papers, MSKCC decided to fully support Hricak: their research integrity officer declared in January 2016 that he had “not found any issues that have not already been satisfactorily addressed”. Not only this, MSKCC even moved to employ certain clinicians from under Hricak’s patronage, despite or maybe even because of their own unethical publishing and retractions.
In fact, Hricak seems to feel so safe that she has apparently started to plagiarize texts written by others as well. The paper by Kircher, Hricak and Larson, “Molecular imaging for personalized cancer care”, Mol Oncol. 2012, contains at least five sections which are very similar to the contents of the 2010 book chapter “National Cancer Clinical Trials System for the 21st Century: Reinvigorating the NCI Cooperative Group Program”, by Nass, Moses and Mendelsohn. A certain whistleblower alerted the journal to “plagiarism and copyright violation”; I make this detailed analysis available here. The Editor-in-Chief (EiC) of Molecular Oncology, Julio Celis, informed the whistleblower some days ago that he gave the authors 30 days to reply.
While plagiarism is usually quite a straightforward reason for retraction, self-plagiarism is much trickier. The “Text Recycling Guidelines” on the website of the Committee on Publication Ethics (COPE) are muddy at best. Basically, COPE seems to leave it up to the individual editors to decide whether any given self-plagiarized paper warrants a correction or even a retraction. Some editors are honest, some less so, and some bend the rules for those with power and connections. This is why one self-plagiarized Hricak paper was retracted and others were saved, occasionally with COPE’s explicit approval. Hricak is after all a very influential and well-connected clinician; even those “just” working for her enjoy an impressive degree of editorial well-wishing.
Presented below are some cases of Hricak’s and her colleagues’ self-plagiarism and salami-publishing (as forwarded to me by the whistleblower), and how the journal editors chose to condone them.
- Lakhman, Katz, Goldman, Yakar, Vargas, Sosa, Miccò, Soslow, Hricak, Abu-Rustum and Sala, “Diagnostic Performance of Computed Tomography for Preoperative Staging of Patients with Non-endometrioid Carcinomas of the Uterine Corpus”, Ann Surg Oncol. 2016
Lakhman, Yakar, Goldman, Katz, Vargas, Miccò, Zheng, Moskowitz, Soslow, Hricak, Abu-Rustum and Sala, “Preoperative CT-based nomogram for predicting overall survival in women with non-endometrioid carcinomas of the uterine corpus”, Abdom Imaging (now titled Abdominal Radiology), 2015
For these two papers, the whistleblower informed both journals that
“the patient cohort is identical, the imaging readers are identical, the imaging features are identical, the tables on interobserver agreement, baseline characteristics and CT features are all the same”.
The whistleblower’s detailed critique is available here. Deb Whippen, senior managing editor at Annals of Surgical Oncology, then replied as follows:
“Both journals carefully reviewed the articles and the claims you brought forth according to the COPE guidelines. The Editors investigated the matter and concluded that while the data studies share the same patient population, the questions are sufficiently different to warrant two publications. Each journal serves a different reader base and there is a valid need to educate both audiences. We consider this matter to be officially closed”.
The two journals issued on January 29th 2016 a similar-sounding editorial note, here and here, to inform their readers that self-plagiarism and data reuse is an ethical and actually useful practice, as long as a bit of “different rationales and criteria” is thrown in.
- Vargas, Burger, Donati, Andikyan, Lakhman, Goldman, Schöder, Chi, Sala, Hricak. “Magnetic resonance imaging/positron emission tomography provides a roadmap for surgical planning and serves as a predictive biomarker in patients with recurrent gynecological cancers undergoing pelvic exenteration”. Int J Gynecol Cancer. 2013
Burger, Vargas, Donati, Andikyan, Sala, Gonen, Goldman, Chi, Schöder and Hricak, “The value of 18F-FDG PET/CT in recurrent gynecologic malignancies prior to pelvic exenteration”, Gynecol Oncol. 2013
Donati, Lakhman, Sala, Burger, Vargas, Goldman, Andikyan, Park, Chi and Hricak, “Role of preoperative MR imaging in the evaluation of patients with persistent or recurrent gynaecological malignancies before pelvic exenteration”, Eur Radiol. 2013
The whistleblower alerted all three journals as well as COPE in January 2014; the criticisms were similar to those above: identical patient populations, hypotheses, and results “either identically published or slightly modified”. Detailed criticisms and communications are available here.
This is how the journals replied. The response of Gynecologic Oncology was understandable: they declared that theirs was “the original paper in the group of three” and that they are discussing the issue “with the Editors-in-Chief of European Radiology and the International Journal of Gynecological Cancer”. Uziel Beller, EiC of the latter journal, explained to the whistleblower:
“There is a grain of truth in your detailed observations however, in our opinion, we do not believe it warrants or justifies further action. We are certain that Dr. Vargas [Hricak’s subordinate at MSKCC, -LS] and colleagues appreciate and understand the significance of ethical publishing and will keep the rules and regulations in mind when producing original work in the future”.
The EiCs of European Radiology, Beth Y. Karlan and Adrian Dixon, drew the whistleblower’s attention to Beller’s letter and stated on behalf of their journal:
“Following internal investigation (which looked at the detailed submission letters explaining the background of these papers), discussion between the relevant editors and with the lead author who is ultimately responsible for their governance, it has been concluded that these separate publications were justified on the grounds that all three papers investigated a different technique or combination of techniques and tested different hypotheses.
While nobody condones excessive publications on any one group of patients, it is inevitable that large and growing data sets will be analysed in numerous different ways and that such analyses will become more common in the future. It is of course essential that the authors and editors make full reference to other papers published on related topics and similar data sets, especially so that the data do not get counted as separate studies in any subsequent meta-analysis”.
The common author to all above mentioned papers is the Albanian radiologist Evis Sala, at that time lecturer and honorary consultant radiologist at the Addenbrooke’s Hospital in Cambridge, UK. Sala had to retract one of her papers just 3 months after it was published in the journal The Obstetrician & Gynaecologist in January 2012. The retraction note for Moyle et al, “Magnetic resonance imaging of uterine abnormalities” gave only unspecified “factual errors and out-of-date terminology” as rationale.
Despite the above, Sala is now a proud member of Hricak’s radiology team at MSKCC. Another self-plagiarism-experienced Balkan colleague of Hricak’s, who also joined recently, is Ivana Blažić.
- Blažić, Maksimović, Gajić and Šaranović. “Apparent diffusion coefficient measurement covering complete tumor area better predicts rectal cancer response to neoadjuvant chemoradiotherapy”. Croat Med J. 2015
Blažić, Lilic and Gajić. “Quantitative Assessment of Rectal Cancer Response to Neoadjuvant Combined Chemotherapy and Radiation Therapy: Comparison of Three Methods of Positioning Region of Interest for ADC Measurements at Diffusion-weighted MR Imaging”. Radiology, 2016
Again the whistleblower laments a “major overlap in patient population”, and that the tables and two of the three figures are the same (detailed criticism here). Neither paper references the other, yet both claim that “to our knowledge, this is the first study” to apply the described methodology in order to predict tumor response to chemoradiotherapy.
Interestingly, the academic EiC of the journal Radiology, Herbert Kressel, was originally appointed as Editor-in-Chief by none other than Hricak herself. As I reported before, Kressel took great pains to cover up the issues of Hricak’s self-plagiarism and data re-use in his journal. He also disregarded peer criticism of yet another questionable Hricak publication in Radiology, namely Zhang et al. 2009. The Korean radiology professor Byung Kwan Park of Samsung Medical Center in Seoul was a staunch critic of that Hricak paper and demanded its retraction due to “too many critical flaws”. Kressel published Park’s Letter to the Editor in the same year, followed by a Response from the first and corresponding author Jingbo Zhang from MSKCC (link to both here).
In 2013, a second correction followed. Zhang, the original “guarantor of integrity of entire study”, disappeared as author completely (together with another co-author, Liang Wang). Instead, a totally new first and corresponding author from MSKCC’s Department of Radiology was surprisingly pulled out of the hat. Correcting a paper by a new corresponding author, with the original one banished, four years after the paper was first published and criticized, is certainly an unconventional event in academic publishing. Not only this, Kressel even recruited his own statistics expert to save the Hricak and Co paper. However, that expert apparently overlooked certain mathematical irregularities which the whistleblower alerted the journal to (detailed analysis here). Most strikingly, Hricak’s team proclaimed that the doubling time of patients’ tumours “ranged from −248 to 72 days (mean, 474 days; median, 811 days)”. Thus, both the mean and median values were beyond the range of the doubling time, a mathematically somewhat improbable situation. But surely not to Kressel and the journal’s publisher, the Radiological Society of North America (RSNA), which Hricak used to preside over.
To me, Park wrote in regard to Hricak’s paper:
“There are a lot of critical flaws in it. None of the authors can answer correctly the questions that readers pose. Their doubts will not be quenched even though any erratum will be published”.
It seems therefore that not all of Hricak’s output is so awe-inspiringly excellent that the medical research community should be thankful on its knees for her evangelical self-plagiarism. But with Sala, Blažić and others, this practice has already passed to the next generation, all thanks to Hricak’s protective tutelage.
Welcome to the wonderful world of clinical publishing.
Update 27.09.2016: the first author of Zhang et al., Radiology, 2009, has contacted me through my website and accepted my invitation to post a reply to the criticisms of her paper as a comment below.
The journal Radiology has the highest impact factor among diagnostic imaging journals. Why would such a major radiology journal publish a study that also appeared in the Croatian Medical Journal, which is a minor local journal? I do not understand – they are not in the same league.
Leonid – nice piece and a lot of valid points. How about “metastatic plagiarism” instead of “infectious plagiarism” to describe these people? It seems they all work in a cancer hospital – LOL
Dear radiology md – There are two possibilities as to why this was published in Radiology. Either the editors gave the authors a “pass” or the editors did not know.
I have to think that the editors did not know. I published in Radiology and there is a sole-submission policy. When you look at both articles, it looks like the Croat Med J. 2015 article came from Belgrade and the Radiology 2016 one came from New York and Belgrade, but the figures are the same. I doubt the patients flew transatlantic for this study!!
I don’t know how the authors got around the sole-submission policy and how the same figures appear in both.
Zhang and Hricak’s paper in Radiology should be REUSED in a journal like Annals of Improbable Research
“the doubling time of patients’ tumours “ranged from −248 to 72 days (mean, 474 days; median, 811 days)”. ”
How could that statement pass peer or editorial review?????
Read the link below…The editor Kressel warns people about what might happen to them if they plagiarize!!!
he even warns people that blogs like this may catch you!!!
Plagiarism and Redundant Manuscript Screening
This year we will introduce one additional new initiative in manuscript processing. Our journal and a number of others have been concerned about the issues of plagiarism and redundant manuscript submissions (2–7). A number of studies have suggested that as many as 10% of submissions to biomedical journals may be redundant (8). The availability of software to identify plagiarism and redundant publication has served to further heighten our awareness of the prevalence of this problem (9). Web sites such as “Deja vu: MEDLINE Duplicate Publication Database” (10) and “Retraction Watch” (11) serve to further underscore both the importance of the problem and the disappointing frequency with which it occurs. This year, as part of our pre–peer review process, we will be screening submitted manuscripts for plagiarism and/or redundancy by using a dedicated software package. Problematic manuscripts will be returned to the authors without peer review. As noted in our publication information for authors and on the author contributions disclosure forms that we require for every submission, Radiology accepts manuscripts of original research only with the understanding that they have been contributed solely to this journal; authors must attest that a manuscript on the same or similar material has not already been published by them and has not been and will not be submitted to another journal by them or by colleagues on their study before their work appears in Radiology. We hope that this initiative will serve to deter authors from this inadvisable practice and further enhance the scientific integrity of this journal.
and this is what Hricak (as first author) writes in Radiology!!! DO AS I SAY, NOT AS I DO….
Hricak warns people about “recent exposés in the press” :-))
A Statement about Authorship from Individual Members of the International Society for Strategic Studies in Radiology
There has been much recent discussion (1,2) about what constitutes appropriate authorship for scientific articles. Indeed, recent exposés in the press and elsewhere about “gift,” “honorary,” and even “ghost” authorship have led to lawsuits, other forms of litigation, and the ruin of the careers of some dedicated health professionals. In many cases, improper assignment of authorship is due to naivety or oversight; but in some cases, it is undertaken knowingly and, at worst, for ultimate financial gain. The International Committee of Medical Journal Editors (ICMJE) has issued stringent guidelines for appropriate assignment of authorship and other aspects of scientific publications (3). These guidelines also deal with issues concerning redundant publications.
The International Society on Strategic Studies in Radiology (IS3R) unanimously endorsed the statement at their 2011 annual symposium and, in addition, the following IS3R members have personally underlined their support for the ICMJE criteria for authorship and are determined to ensure that colleagues in their department/institution/working group adhere to them fully.
If Radiology uses special software to detect plagiarism and/or redundancy, why didn’t they catch the Blažić, Radiology, 2016 publication?
It has most of same patients from Blažić Croat Med J. 2015 paper.
These guys under the leadership of Hricak are on a roll — no wonder their chairman – Hricak – warns against “exposés” in the media. She and her team have been featured in several press stories.
As a reply to SPIN ECHO’s comment above.
First of all, software cannot catch all forms of plagiarized papers. Some “experts (!)” know how to find ways around it… such as re-wording the same thing in different ways.
Second, some big-shot authors obviously have “free passes” with their editor friends – an example is the kidney paper Leonid wrote about… the authors report means/medians that are outside the range of minimum and maximum, and all other nonsense.
Did anyone notice that in the first erratum – the authors DEFEND this and SAY “means and medians can be outside the minimum and maximum range” and Kressel (editor) accepts this explanation!!!
This was very strange to me until Leonid exposed that Kressel (editor) was actually appointed by Hricak – the author of the infamous paper.
Shockingly – 4 years after the first erratum, Hricak publishes another erratum on the same paper – which is more ridiculous than the original paper and the first erratum.
This time Kressel THANKS THE AUTHORS!!!
Well… why are the obvious problems and IMPOSSIBILITIES in this paper being swept under the rug… repeatedly?
And why are there so many plagiarized and redundant papers coming out from a single department in a world renowned hospital?
Hricak or Memorial Sloan Kettering should make a public statement to clarify this mess.
After all, Memorial Sloan Kettering is a tax-exempt institution and a big recipient of Federal grants. The Office of Research Integrity under the US Department of Health and Human Services takes research misconduct very seriously… https://ori.hhs.gov/
I understand the humor in some of the comments, but this is a very serious subject. I believe the accurate term for most of these papers is “salami-sliced publishing”. Salami-sliced publishing has been written about, and in my judgement it is far more serious than self-plagiarism. Self-plagiarism may pose a copyright problem and be self-serving, but misleading salami slicing is, in my opinion, scientific misconduct and dangerous to the health of the scientific community.
Salami-sliced publications have been written about in the scientific press.
There is a big cost, as written in Nature Materials.
It is well described in this publication (“Salami publication: definitions and examples”, Vesna Šupak Smolčić).
And it is even described by the international publisher Elsevier (Quick_guide_SS02_ENG_2015.pdf).
Elsevier writes “as a general rule, as long as the “slices” of a broken up study share the same hypotheses, population, and methods, this is not acceptable practice. The same “slice” should never be published more than once”
There are certain cases where publishing separate papers is tolerated, but with open access and internet publishing, the argument that there are separate audiences who read different journals no longer holds. In salami-sliced papers the text is different, which is why they are not flagged by plagiarism software.
There are major dangers to salami-sliced publications. The US Office of Research Integrity (http://ori.hhs.gov/plagiarism-16) summarizes them best:
“This type of misrepresentation can distort the conclusions of literature reviews if the various segments of a salami publication that include data from a single subject sample are included in a meta analysis under the assumption that the data are derived from independent samples. For this reason, the breaking up of a complex study containing multiple dependent measures into separate smaller publications can have serious negative consequences for the integrity of the scientific database. In certain key areas of biomedical research the consequences can result in policy recommendations that could have adverse public health effects.”
What is also clear is how authors should handle possible salami slicing. Nature Materials puts it thus: “Nature Materials explicitly requires that all authors provide details and pre-prints of all papers that are under consideration, in press or recently published elsewhere that could have any relevance to their submitted work. Ultimately, though, this relies on the integrity of authors”.
There are times when a complete data set can be republished, for instance if there are new methods to analyze the data that were not available at the time of the first paper. However, the decision whether two manuscripts are separate is made by the editors, not the authors. But the authors must be honest.
In these Hricak publications, the close submission dates within each group of papers seem very problematic. The detailed comments in this blog are helpful – they show that sometimes no information, or only a very tiny bit of information, was provided about the other paper. In PubMed, Hricak has 556 papers – it’s hard to think that she and her colleagues were not aware of the proper rules of publishing.
Yes, some of these papers are salami sliced, but others are simply plagiarized.
I don’t think that salami is worse than plagiarism. Neither should be permissible in scientific publications. This is so blatant.
Look at the Kircher MF, Hricak H, Larson SM paper in Mol Oncol. 2012 PMID: 2246961 on page 191 they wrote,
Polymerized nanoparticle platform technology, which allows nanoparticles to be loaded with different targeting moieties, contrast agents, and therapeutic agents, could allow the development of highly personalized treatment regimens (Li et al., 2002)
The text is a copy and paste from “A National Cancer Clinical Trials System for the 21st Century: Reinvigorating the NCI Cooperative Group Program”, ISBN 978-0-309-15186, where it says
Polymerized nanoparticle platform technology, which allows nanoparticles to be loaded with different targeting moieties, contrast agents, and therapeutic agents, could allow the development of highly personalized treatment regimens (Li et al., 2002).
I am the author of the Radiology paper on renal tumor growth rate. I would like to clarify a few things:
The “striking” “improbable” mathematic situation you mentioned, i.e., the doubling time of patients’ tumors “ranged from −248 to 72 days (mean, 474 days; median, 811 days)”, was taken out of context. The original results were written as such: “The median RDT of all renal tumors was 0.45, corresponding to a DT of 811 days, with a mean RDT of 0.77, corresponding to a DT of 474 days. The RDT range was −1.47 to 5.06, corresponding to a DT range of −248 to 72 days.”
Please note the repeated word “corresponding”. We ranked tumors by RDT values first, established the range and median of RDT second, and then finally found the DT values of tumors with such RDT range and median values. The RDT and DT values do not have a rank-preserving relationship, that is, when you rank the tumors based on one of these values, the other value will not have the same ranking. Therefore, the DT values quoted above were of the tumors with the smallest, greatest and median RDT values, not the actual smallest, greatest and median DT values. This can be confusing. Maybe the following simplified table will help:
Tumor 1: RDT −1.47 (lower end of range), DT −248 days (decrease over time)
Tumor 2: RDT 0.45 (median), DT 811 days (median growth)
Tumor 3: RDT 5.06 (higher end of range), DT 72 days (fastest growth)
On one end of the spectrum, tumor 1 shrank over time, with negative RDT and DT. Tumor 2 showed median growth, with an RDT of 0.45, well within the range of −1.47 to 5.06. This tumor would on average double every 2+ years. On the other end, tumor 3 was the fastest growing one, with the greatest RDT value of 5.06, and its doubling time was only 2+ months, much shorter than that of tumor 2, as it should be.
As can be seen from the simplified table, the tumors were sorted by RDT, not DT. The ranking of DT was different from that of RDT. If you had ranked these three DT values from the smallest to the largest: -248 days, 72 days, and 811 days, yeah, you could say the range was -248 to 811 days, with a median of 72 days, but that would be meaningless. Because the median DT of 72 days did not correspond to the tumor with the median growth rate, but rather, the fastest growing one.
Simply ranking the tumors based on their DTs would work only if there were no regressing tumors with negative DTs (imagine there were only tumor 2 and tumor 3, but no tumor 1). Because in that case, the RDTs and DTs would actually be rank-preserving to each other, although in an inverse order, and our paper probably wouldn’t have raised so many questions. Unfortunately we were dealing with a more complicated scenario. We had to rank tumors based on their growth first (and RDT is a good indicator for this purpose), and then identified the DTs corresponding to tumors that shrank the most, grew the fastest (thus the range), and the one that was in the middle (median).
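The rank-preservation point can be checked with a few lines of arithmetic. Below is a minimal sketch (the relation DT = 365/RDT is an assumption on my part, but it reproduces every value quoted above):

```python
# Illustrative tumors from the simplified table above.
# RDT = reciprocal doubling time (volume doublings per year);
# DT (days) = 365 / RDT -- an assumed relation matching the quoted values.
rdts = [-1.47, 0.45, 5.06]            # sorted by RDT: range ends and median
dts = [round(365 / r) for r in rdts]  # [-248, 811, 72]

# The map x -> 365/x is not monotonic once negative RDTs (shrinking
# tumors) appear, so ordering tumors by RDT does not order them by DT:
# the DT at the median RDT (811 days) falls outside the DTs at the two
# ends of the RDT range (-248 and 72 days).
assert dts == [-248, 811, 72]
assert dts != sorted(dts)
```

This shows only the arithmetic point: once shrinking tumors with negative RDT are in the sample, a “median” DT can legitimately fall outside the DT values at the ends of the RDT range.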
In my 2009 Response to Dr. Park, I explained the above, although not in such great detail. I also quoted a paper by Jennings et al., who had used the same method to calculate lung cancer growth rates. In their results, the DT corresponding to the median RDT was 207 days, also outside the DT “range” of −50 to 26 days. I felt that my Response had adequately answered this question, yet it seems that my Response was either largely ignored or not well understood. It is also quite strange that in the presence of other studies with similar findings and methodology, our study was singled out and kept being criticized. Whoever is interested can look up the references given in our original paper regarding every step of the calculations.
A probable explanation for the “mathematical irregularities” you mentioned regarding Figure 2 is that, so many data points are clustered in the left lower corner, if your “median change line” just deviated from 26.5% to 27.5%, the dots would be divided very differently. I cannot address your other comments regarding this figure, since you did not specify the problem.
In terms of Figure 3, the apparent discrepancy is due to the fact that, the 21 tumors with DTs of more than 730 days (RDT < 0.5) did not include the regressing tumors with negative DTs. But when RDT values were calculated, the 21 tumors with very long DTs corresponded to small RDTs < 0.5, and the regressing tumors with negative DTs corresponded to negative RDTs. These two groups now would have their RDTs combined together in the first two bars of Figure 3. However, since there were 7 tumors with negative RDTs, the total number of tumors in bars -1 and 0 should have been 28, whereas Figure 3 appears to show two to three tumors more than that. Without access to the database, my best speculation is that this was caused by a rounding error. Many data points were clustered around RDT of 0.5 (Figure 4). A few tumors with RDT slightly greater than 0.5 probably got rounded down to 0.5 and included in bar 0. But these few tumors with RDTs slightly greater than 0.5 did not account for the 21 tumors of RDT < 0.5.
Regarding the 2013 Correction, the reason for the “disappearance” of Dr. Wang and myself was that we had both relocated and could not be reached. I can only imagine how painstaking it must have been for my former colleagues to follow the digital trail I had left behind, and to retrace every step I had taken for the study. The Correction was written mainly by statisticians, to whom I did not have a chance to explain the science behind my study; therefore, what is confusing to you must have caused them even greater confusion.
I agree with their statement that “the relationship between DT and RDT is not rank-preserving; that is, ranking patients based on DT values does not correspond to ranking them based on RDT values” – this is exactly why the DT corresponding to the median RDT was not the actual median DT and appeared to be out of range. This statement is basically a reiteration of what I had tried to explain in my previous Response to Dr. Park. But unfortunately, just like my Response, this statement from the statisticians was also largely ignored or not understood. I could see that in the Correction, the DTs were ranked to make the median DT appear within range. As I discussed before, you can rank whatever you want to rank, but ranking DT like this is, in this case, meaningless. Nor do I understand some of the other comments in the Correction, such as “DT values were incorrectly calculated based on RDT values”, because RDT values were calculated from DT, as stated in our Methodology, not the other way round. Not having participated in the re-analysis, I do not know exactly what calculation error was found.
I hope the above comments help to clarify a few things. Thank you and good luck.
This is very interesting. I thank Dr. Jingbo Zhang for these clarifications. It makes a lot of sense that “the relationship between DT and RDT is not rank-preserving”. However, many of the other questions that Byung Kwan Park, MD, of the Department of Radiology and Center for Imaging Science, Samsung Medical Center, Sungkyunkwan University School of Medicine, asked remain unanswered.
The issues with the paper are even more confusing now. So much so that Byung Kwan Park, MD wrote back “There are a lot of critical flaws in it. None of the authors can answer correctly the questions that readers pose. Their doubts will not be quenched even though any erratum will be published”.
The patient population in this study is quite different from anything ever published in the medical literature in the last 50 years. As stated in the original paper, 7 of 53 (or 13.2%) patients had negative values due to “spontaneous regression of their tumor”.
The medical literature says: “The frequency of spontaneous regression in RCC patients is estimated to be approximately 1%”. “Spontaneous regression of primary RCC is extremely uncommon and (2) Most cases include spontaneous regression of metastatic RCC (particularly pulmonary metastases).1–3 Up to 2002, 95 documented cases of spontaneous regression in RCC metastases have been reported; only 22% of these were histologically demonstrated.1 Histologically proven regression of a primary RCC is exceedingly rare.”
So in all the world’s literature, across all the large series of thousands of patients up to 2002, 95 cases were reported, and Zhang had 7 in their tiny series of 53 patients??? This is statistically impossible. It is more likely that all the original and new authors of this paper would win a lottery at the same time!
Dr. Zhang wrote above: “Without access to the database, my best speculation is that this was caused by a rounding error. Many data points were clustered around RDT of 0.5 (Figure 4). A few tumors with RDT slightly greater than 0.5 probably got rounded down to 0.5 and included in bar 0”. So when a discrepancy cannot be explained, it may simply be a “rounding error”?
Why not communicate with the authors of the paper who are still at Memorial Sloan Kettering Cancer Center, such as Hedvig Hricak, MD, PhD?
“Regarding the 2013 Correction, the reason for the “disappearance” of Dr. Wang and myself was that we had both relocated away, and could not be reached”
While it is quite common for authors to move institutions, it is very unusual for multiple authors to leave no forwarding email, phone number, or institutional name for contact. Now that one of the authors has been reached: if all the corrections in the first erratum are accurate, why did the new set of authors correct further errors in the second erratum? Which erratum is accurate? For the sake of the accuracy of the published record, shouldn’t you write a letter to the editor correcting the corrections of the new group of authors of this paper? And what about Dr. Wang? Is he still unreachable? Can the authors get together and finally resolve this issue?
Why did the second group of authors write “DT values were incorrectly calculated based on RDT values”, if, as you state, “RDT values were calculated from DT” and not the other way round? And why didn’t Jingbo Zhang write to Radiology when the 2013 Correction was published, to correct the new authors’ corrections of the “mistakes” the original authors supposedly made?
The list continues
Thanks for the comments from Anonymous. Our paper was on renal “tumors”, not on renal “RCC”. I advise anybody who criticizes other people’s work to at least pay attention to such important details.
One may argue that, even after excluding the benign tumors included in the study, the number of regressing tumors is still much higher than 1%. I can think of numerous reasons for this: the patient population may differ between studies and over time; maybe more patients with solid renal tumors undergo surveillance now, whereas in the past they almost always had immediate surgery; and our study included only patients with surveillance CT over at least 3 months, which may itself have introduced a selection bias (benign-behaving renal tumors were more likely to have longer-term surveillance CTs, whereas clear-cut RCCs had immediate surgery, or very short-interval CT follow-ups before surgery).
I did leave a forwarding email address and phone number. But as far as I know, only one attempt was made to contact me by email, and then things moved forward without me. I don’t know exactly how they tried to contact Dr Wang, but I believe Dr Wang had already left the United States in 2013.
After I saw the 2013 Correction, I tried to find out what had happened, and was told that somebody had kept writing to the journal Radiology raising questions about the paper, yet would not identify themselves. It was really weird. I cannot understand why somebody asking justified scientific questions would feel they have to hide in the dark. Anyway, “the second group of authors” verified every step we had taken in the study, and found all our measurements saved in a digital system. Radiology published the Correction and considered the case closed. That is what I was told, anyway. It sounds like everybody was tired of some anonymous guy asking the same questions over and over while ignoring the answers.
As far as I am concerned, if anybody wants to reinvestigate, I will offer my full cooperation. Other than that, I will try to refrain from arguing with some anonymous guy in circles.
Dr. Zhang’s response is helpful in clarifying some of the issues and, most critically, the need for the data to be reviewed. I agree with Dr. Zhang: it doesn’t make sense that Hedvig Hricak and the second set of authors did not contact the first group of authors and allow them to answer critical questions; that they contacted Dr. Zhang only once, not repeatedly; that Dr. Zhang did not respond; and that we still don’t know where Dr. Wang is. Does anyone understand why an independent statistical expert was consulted to review the data? The problem is that the second set of authors disagreed with the first set in the second erratum, so it is hard to tell who is correct. Is the first erratum correct? Or is the second erratum, which disagrees with the first, correct?
Dr. Park has repeatedly questioned the accuracy of the data as well. He is a noted international authority in radiology. To make the medical literature accurate, it would be ideal for the first set of authors to reply in writing and publish in Radiology an erratum to the second erratum of the first erratum. When doing so, could the authors publish and share the scan data of the 7 total, or 5 (10%) RCC, cases that decreased in size without any therapy? Especially Tumors 1, 2 and 3, large renal cell cancers which spontaneously shrank by 35.8%, 35.4% and 18.1%. The entire scientific community would benefit from this.
Characteristics of the Renal Tumors That Did Not Demonstrate Growth between Initial and Final CT Examinations

Tumor No.  Tumor Type   Initial Volume (mm³)  Final Volume (mm³)  Interval between CT Examinations (d)  Decrease (%)
1          Chromophobe  2,050                 1,315.5             150                                   35.8
2          Clear cell   270,823.5             174,971.8           525                                   35.4
3          Clear cell   21,393.25             17,517.25           120                                   18.1
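For what it is worth, the percentage decreases quoted for Tumors 1, 2 and 3 can be reproduced directly from the tabulated volumes (volume figures as I read them from the table above; treat this as an illustrative check, not a verified transcription):

```python
# Initial and final tumor volumes as read from the table above
# (units appear to be mm^3, the only physically plausible reading).
tumors = {
    1: (2050.0, 1315.5),      # Chromophobe
    2: (270823.5, 174971.8),  # Clear cell
    3: (21393.25, 17517.25),  # Clear cell
}
# Percent decrease = (initial - final) / initial * 100, rounded to one decimal.
decreases = {no: round((vi - vf) / vi * 100, 1) for no, (vi, vf) in tumors.items()}
print(decreases)  # {1: 35.8, 2: 35.4, 3: 18.1}
```

The computed values match the 35.8%, 35.4% and 18.1% cited in the comment, so at least the arithmetic within the table is internally consistent.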
You have to acknowledge that the main reason you have repeatedly raised issues over our paper is that you failed to understand the concept of DT and RDT not being rank-preserving with respect to each other, despite our repeated explanations. This is a mathematical concept that takes some five minutes to grasp intuitively, yet takes others years of time and pages of teaching.
Therefore I am worried that whatever I try to explain to you may prove a difficult task. My time will be wasted just like so many people’s precious time has been wasted in the past over this. Nonetheless I will try.
First of all, I cannot answer for Dr Hricak or Radiology why they did certain things in a certain way. You are asking the wrong person. All I can say is that they emailed me once, and I did not see it. They moved forward without me. I contacted Radiology later on to figure out what had happened, and I have that email record as proof. I was told that the journal was fed up with some weird anonymous guy asking the same questions over and over again. Now that so many people had spent so much time and effort to clarify and document everything, they just wanted to get it over with. As for Dr Wang, I never said I still did not know where he was, but I don’t see why that is even relevant.
As for Dr Park, I have communicated with him directly. If he has any remaining questions, he is free to ask me directly or post here. He knows how to get in touch with me. Borrowing his reputation does not make your argument any stronger.
In terms of the paper, if you ask my opinion, after recent re-inspection I do not see any major flaw in it. I think you are the one causing all this confusion in the medical literature: simply because you could not understand a key concept, you cast doubt over the entire paper. Now you just want to pick at any error to prove yourself right. I am sorry that you don’t like our data. But data is data; I have to present it the way it is, whether it causes controversy or not. Please check out Jennings’ paper on lung cancer in Radiology, 2006. They found that 21 of 149 tumors did not show growth, a slightly higher proportion than ours; I have referenced this paper numerous times. You might want to read it, especially its Discussion. This is just another example of how hard it has been to get things through to you.
If it were up to me, I would make the entire database and images readily available to whomever is interested. Unfortunately, it is not my decision to make. You may not be familiar with all the rules and regulations a major US academic institution has to follow when it comes to patient data. So please don’t fault me for this.
Hello Dr. Zhang,
You said above that you did not see any major flaws in your data. If so, why did you and your co-authors make TWO SEPARATE CORRECTIONS?
You had a chance to fully correct or retract the data in the first erratum. Obviously you did not do this fully at that time and your co-authors further corrected you afterwards (WITHOUT YOUR ACKNOWLEDGEMENT IN THE SECOND CORRECTION).
Can you explain these issues?
Your logic seems to be going backwards. If I did not see any major flaws, why would I want to retract the data, and what would there be to “fully correct”?
Like I said, I do not agree with the 2013 Correction, therefore I would not say that they “further corrected” me.
BTW, the way you think is very similar to Anonymous. Can you tell me if you understand why I disagree with the 2013 Correction? If you have any specific scientific question, I am more than happy to answer. But it is a waste of time arguing in circles like this.
I have followed this article and its interesting notes and replies very intently. I agree with JZ: I do not understand why some authors and the editor would be allowed to correct this publication without approval from all the authors. But I also don’t know why this cannot be clarified with a letter to the editor by the first author of the paper. In my 10 years in the science field, I have never seen a publication corrected by some authors and not others, TWICE. It is very hard to understand how this could happen. But it could be fixed with another letter to the editor clarifying the disagreement.
In my primary language, we have an expression: “Cuando el río suena, agua lleva”; in English, “If the river makes a noise, it’s because water is running”. Why do I use this expression? Several people have objected to the science of this paper, in the journal and on this website, and the clarifications don’t appear scientifically sufficient. I also don’t understand the lack of civility from JZ, who describes one of the readers as “weird”. JZ wrote in the comments:
“My time will be wasted just like so many people’s precious time has been wasted in the past over this. Nonetheless I will try. FIrst of all, I cannot answer for Dr Hricak or Radiology why they did certain things in a certain way. You are asking the wrong person”
It is NEVER a waste of time to engage in scientific conversation with a reader, even if you don’t know the reader. Why would JZ not want to contact the other authors now to clarify the paper, and why did the other authors write about this paper without the first author knowing? Not answering for Dr. Hricak makes no sense in a scientific conversation. This is strange, since all are authors of the same paper from the same research institute. I also read Jennings’ paper on lung cancer, but it is not about the same disease or the same stage, and it doesn’t seem relevant or important to the study of renal tumors. Can JZ explain this further?
I am more concerned about a scientist who calls a reader “weird” for questioning a study. I think this is starting to sound like the ridiculous recent US Presidential Election, with all its name-calling. I hope the scientific discussion can remain civil.
As a scientist with an interest in human diseases, I think the last comments are the most important. There are subjects in this paper whose tumors shrank without any treatment. Shouldn’t the authors be doing genomic studies on these subjects? Could these subjects hold the cure for this disease? Isn’t this more scientifically important than name-calling? Why can’t this data be openly shared with the scientific community?
It is really frustrating talking to you, Anonymous and Andrew, because you guys share the same traits: you keep asking questions that have already been answered, and you sometimes make up your own facts. Oh, and your “scientific conversation” is often not about science at all.
You asked me “why did the other authors write about this paper without the first author knowing?”. I have already answered this question so many times that this is perfect proof I was right about “my time will be wasted”, AND it has been wasted on something that is not science-related.
You also assumed that I did not “want to contact the other authors to clarify the paper”; again, you are making up your own facts. I have contacted both Radiology and some of my key co-authors, on more than one occasion. I have also contacted both Radiology and MSKCC regarding data sharing. I wish I had made more progress on both issues, but right now it is too early to predict the outcome.
You said, “Several people have objected to the science of this paper, in the journal and on this website and the clarifications don’t appear scientifically sufficient.” I would say a more accurate description of events is this: several people failed to understand a key concept of the paper; they asked questions that had already been answered more than once; and now that no major flaw can be found with the paper, one of them just keeps questioning why the data is the way it is. Lung cancer is certainly different from renal tumors. But the fact that similar results have been reported in another type of cancer means that our results may not be as outlandish as you thought. In this sense, the Jennings paper is very relevant, much more relevant than most of your questions.
In terms of “weird”: the people who saw the anonymous letters to Radiology told me that they felt the situation was “bizarre”, and that they really saw no reason to send anonymous letters to the Editor about what were supposed to be scientific questions. I was basically describing what I had been told. If you felt offended by this word, you are being overly sensitive. People often use it to describe something they find strange and hard to understand.
It would certainly be nice to perform genomic studies. And there are already plenty of people far more qualified than radiologists working on that at MSKCC. Are you saying that if we as radiologists did not follow up our paper with a molecular biology study, then there is something wrong with us?!
I do not believe it is appropriate for me to answer for somebody else why they did certain things certain way. You can say whatever you want about this.
I certainly would like to have a civil scientific discussion. However, it is really frustrating to engage with somebody who asks questions without listening to the answers, who keeps circling around non-scientific issues, and who sometimes even makes up his own facts along the way. Do you call these actions civil?! I am more concerned about somebody in the science community who cannot separate facts from imagination.
Somebody close to the matter told me that they felt this was not about me or the manuscript; rather, it was clearly part of a personal grudge about something, or a vendetta, presumably against one of my senior authors. Again, I am just telling you what I was told. But if this were true, then it really doesn’t matter what I do, does it? Whatever I say will just feed your obsession. Even if I manage to retract the Correction or make our data public, you will not stop. You will most likely come up with something else. Who knows, you may even want to recruit a few buddies or create a few aliases to help out in your “civil scientific conversation”.
If I make progress in clarifying the 2013 Correction or data sharing, you should be able to find out in the literature. Otherwise, I believe I have made my points.
BTW, Manuel, for the record, saying “weird” is not name calling. This is a good example of you making unfounded accusations on a whim.
Dear Dr. JZ,
I cannot follow your story.
Please clarify the three simple points with YES or NO answers!
Did the first erratum correct all the errors in the paper? YES or NO.
Did the second erratum correct all the errors in the paper, and the errors in the first erratum? YES or NO?
Did all of the authors in the original paper and additional authors in the second erratum believe all the errors are now corrected? YES or NO.
If your data are solid, together with your MSKCC co-authors, you can write to the editor of Radiology and clarify this study in just three sentences as above.
Simple… transparent, and academically proper!
Why don’t you do this?