Views: 2370 | Replies: 8

[Repost] [NATURE] Nature: Basic cancer research alleged to be mostly unreliable

Posted on 2012-4-14 03:12

A former researcher at Amgen has found that much of the basic research on cancer -- a large share of it from university labs -- is unreliable. The finding casts a shadow over the prospects for developing new drugs.

C. Glenn Begley headed Amgen's global cancer research for a full decade. His team set out to verify 53 "landmark" research papers published in top journals by prestigious labs. Begley wanted to confirm that these findings were solid before new drug development was built on them. The result: 47 of the 53 could not be reproduced. He announced the finding in the latest issue of the British weekly Nature, published today.

"It was shocking," Begley said. Many factors have been blamed for the failure to win the war on cancer, such as experimental subjects or funding. Now yet another cause has been found: too many unreliable results from basic scientific research. The subjects of these studies are animals or cells in the laboratory.

Begley's finding echoes a report last year from scientists at Germany's Bayer AG. When the 100 scientists on Begley's team could not confirm a paper's results, they contacted the authors. The most common response was: "You didn't do it right." In fact, cancer biology is extremely complex, said Phil Sharp, a cancer biologist and Nobel laureate at the Massachusetts Institute of Technology.

At a cancer research conference, Begley met with the scientist who had led one of the problematic studies. "We went through the paper line by line, word by word," Begley said. "I told him we had redone their experiment 50 times and could not get their result. He said they had run it six times and got the result they wanted once, but still put it in the paper, because it would make a perfect story. It was utterly disillusioning."

Such selective publication is only one reason results are unreliable. Where basic research differs from clinical trials is that lab researchers know which cell line or which mouse received a treatment or had cancer. A researcher can thus construct a theory that better fits the evidence they want.

Ferric Fang of the University of Washington said: "Publishing in a high-profile journal is the surest guarantee of getting funding or a job. This unhealthy mindset can drive scientists to chase sensationalism and sometimes to act dishonestly."

OP | Posted on 2012-4-14 03:15
This reminds me of Professor Hu's teaching: we should not blindly follow foreign SCI papers, even high-impact ones, but take the evidence-based path. It truly deserves a deeper level of reflection.

OP | Posted on 2012-4-14 03:18
I also found the English original:
In cancer science, many 'discoveries' don't hold up

NEW YORK (Reuters) - A former researcher at Amgen Inc has found that many basic studies on cancer -- a high proportion of them from university labs -- are unreliable, with grim consequences for producing new medicines in the future.

During a decade as head of global cancer research at Amgen, C. Glenn Begley identified 53 "landmark" publications -- papers in top journals, from reputable labs -- for his team to reproduce. Begley sought to double-check the findings before trying to build on them for drug development. Result: 47 of the 53 could not be replicated. He described his findings in a commentary piece published on Wednesday in the journal Nature.

"It was shocking," said Begley, now senior vice president of privately held biotechnology company TetraLogic, which develops cancer drugs. "These are the studies the pharmaceutical industry relies on to identify new targets for drug development. But if you're going to place a $1 million or $2 million or $5 million bet on an observation, you need to be sure it's true. As we tried to reproduce these papers we became convinced you can't take anything at face value."

The failure to win "the war on cancer" has been blamed on many factors, from the use of mouse models that are irrelevant to human cancers to risk-averse funding agencies. But recently a new culprit has emerged: too many basic scientific discoveries, done in animals or cells growing in lab dishes and meant to show the way to a new drug, are wrong.
Begley's experience echoes a report from scientists at Bayer AG last year. Neither group of researchers alleges fraud, nor would they identify the research they had tried to replicate.

But they and others fear the phenomenon is the product of a skewed system of incentives that has academics cutting corners to further their careers.

George Robertson of Dalhousie University in Nova Scotia previously worked at Merck on neurodegenerative diseases such as Parkinson's. While at Merck, he also found many academic studies that did not hold up.

"It drives people in industry crazy. Why are we seeing a collapse of the pharma and biotech industries? One possibility is that academia is not providing accurate findings," he said.

BELIEVE IT OR NOT

Over the last two decades, the most promising route to new cancer drugs has been one pioneered by the discoverers of Gleevec, the Novartis drug that targets a form of leukemia, and Herceptin, Genentech's breast-cancer drug. In each case, scientists discovered a genetic change that turned a normal cell into a malignant one. Those findings allowed them to develop a molecule that blocks the cancer-producing process.

This approach led to an explosion of claims of other potential "druggable" targets. Amgen tried to replicate the new papers before launching its own drug-discovery projects.

Scientists at Bayer did not have much more success. In a 2011 paper published in Nature Reviews Drug Discovery and titled "Believe it or not," they analyzed in-house projects that built on "exciting published data" from basic science studies. "Often, key data could not be reproduced," wrote Dr. Khusru Asadullah, vice president and head of target discovery at Bayer HealthCare in Berlin, and colleagues.

Of 47 cancer projects at Bayer during 2011, less than one-quarter could reproduce previously reported findings, despite the efforts of three or four scientists working full time for up to a year. Bayer dropped the projects.

Bayer and Amgen found that the prestige of a journal was no guarantee a paper would be solid. "The scientific community assumes that the claims in a preclinical study can be taken at face value," Begley and Dr. Lee Ellis of MD Anderson Cancer Center wrote in Nature. It assumes, too, that "the main message of the paper can be relied on ... Unfortunately, this is not always the case."

When the Amgen replication team of about 100 scientists could not confirm reported results, they contacted the authors. Those who cooperated discussed what might account for the inability of Amgen to confirm the results. Some let Amgen borrow antibodies and other materials used in the original study or even repeat experiments under the original authors' direction.

Some authors required the Amgen scientists to sign a confidentiality agreement barring them from disclosing data at odds with the original findings. "The world will never know" which 47 studies -- many of them highly cited -- are apparently wrong, Begley said.

The most common response by the challenged scientists was: "you didn't do it right." Indeed, cancer biology is fiendishly complex, noted Phil Sharp, a cancer biologist and Nobel laureate at the Massachusetts Institute of Technology.

Even in the most rigorous studies, the results might be reproducible only in very specific conditions, Sharp explained: "A cancer cell might respond one way in one set of conditions and another way in different conditions. I think a lot of the variability can come from that."

Other scientists worry that something less innocuous explains the lack of reproducibility. Part way through his project to reproduce promising studies, Begley met for breakfast at a cancer conference with the lead scientist of one of the problematic studies.

"We went through the paper line by line, figure by figure," said Begley. "I explained that we re-did their experiment 50 times and never got their result. He said they'd done it six times and got this result once, but put it in the paper because it made the best story. It's very disillusioning."

Such selective publication is just one reason the scientific literature is peppered with incorrect results. For one thing, basic science studies are rarely "blinded" the way clinical trials are. That is, researchers know which cell line or mouse got a treatment or had cancer. That can be a problem when data are subject to interpretation, as a researcher who is intellectually invested in a theory is more likely to interpret ambiguous evidence in its favor.

The problem goes beyond cancer. On Tuesday, a committee of the National Academy of Sciences heard testimony that the number of scientific papers that had to be retracted increased more than tenfold over the last decade; the number of journal articles published rose only 44 percent.

Dr. Ferric Fang of the University of Washington, speaking to the panel, said he blamed a hypercompetitive academic environment that fosters poor science and even fraud, as too many researchers compete for diminishing funding.

"The surest ticket to getting a grant or job is getting published in a high-profile journal," said Fang. "This is an unhealthy belief that can lead a scientist to engage in sensationalism and sometimes even dishonest behavior."

The academic reward system discourages efforts to ensure a finding was not a fluke. Nor is there an incentive to verify someone else's discovery. As recently as the late 1990s, most potential cancer-drug targets were backed by 100 to 200 publications. Now each may have fewer than half a dozen.

"If you can write it up and get it published you're not even thinking of reproducibility," said Ken Kaitin, director of the Tufts Center for the Study of Drug Development. "You make an observation and move on. There is no incentive to find out it was wrong."

Posted on 2012-4-14 05:38
Don't follow blindly and don't believe too readily; reading papers with a critical eye is probably the attitude we ought to hold.

Posted on 2012-4-14 07:55
This is sad news. Even the results of papers with such high impact factors cannot be reproduced; enormous research funding is invested, and the output is nothing but papers whose results cannot be replicated. It gives one much to think about...

Posted on 2012-4-14 09:14

But under the meta-analysis workflow, you cannot even discard low-impact papers that meet the inclusion criteria, so how could you discard these high-impact ones?
Granted, the emphasis is on appraisal, but how do you appraise a high-impact paper when you already know it is flawed? Please advise.

Posted on 2012-4-14 09:20
A sad spectacle. It reminds me of a poem:

"Written on a Winter Night of Study, to Show My Son Yu"
(Southern Song) Lu You

The ancients spared no effort in their learning; what is begun in youth is completed only in old age.
What is learned from paper always feels shallow; to know a thing through and through, you must practice it yourself.

Posted on 2012-4-14 10:43
蓝鱼o_0 posted on 2012-4-14 03:15:
This reminds me of Professor Hu's teaching: we should not blindly follow foreign SCI papers, even high-impact ones, but take the evidence-based path. It truly deserves a deeper ...

For us, the most important thing right now is to lay a solid foundation for evidence-based research and to make that groundwork thorough. Blindly chasing whatever comes from abroad is also inadvisable. And throughout this process, we have to be able to endure the solitude.

OP | Posted on 2012-4-14 11:18
拙凌 posted on 2012-4-14 09:14:
But under the meta-analysis workflow, you cannot even discard low-impact papers that meet the inclusion criteria, so how could you discard these high-impact ones?
Granted, the emphasis is on appraisal ...

Criticizing specific people by name is certainly inappropriate, but the bias that these incentives introduce into results cannot be ignored.

All we can do is appraise the literature under existing frameworks such as PRISMA, across the PICOS dimensions, against defined criteria such as GRADE.

Run sensitivity analyses on the meta-analysis results and probe their heterogeneity, in order to assess how stable the pooled result is and where the heterogeneity comes from (a sketch of these checks is given below).
These, I think, are the points a meta-analysis should attend to.
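
To make the sensitivity and heterogeneity checks concrete, here is a minimal sketch in Python, assuming invented example data (the effect sizes and variances below are illustrative only, not from any study discussed in this thread). It computes a DerSimonian-Laird random-effects pooled estimate with Cochran's Q and the I^2 statistic, then runs a leave-one-out sensitivity analysis:

    import math

    # Hypothetical example data: per-study effect estimates (e.g. log odds
    # ratios) and their variances for five included studies.
    effects   = [0.42, 0.10, 0.55, -0.05, 0.30]
    variances = [0.04, 0.09, 0.05,  0.12, 0.06]

    def random_effects(effects, variances):
        """DerSimonian-Laird random-effects pooling with Cochran's Q and I^2."""
        w = [1.0 / v for v in variances]                      # fixed-effect weights
        fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
        q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
        df = len(effects) - 1
        c = sum(w) - sum(wi * wi for wi in w) / sum(w)
        tau2 = max(0.0, (q - df) / c)                         # between-study variance
        i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0 # heterogeneity, in %
        w_re = [1.0 / (v + tau2) for v in variances]          # random-effects weights
        pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
        se = math.sqrt(1.0 / sum(w_re))
        return pooled, se, i2

    pooled, se, i2 = random_effects(effects, variances)
    print(f"pooled = {pooled:.3f}, 95% CI +/- {1.96 * se:.3f}, I^2 = {i2:.1f}%")

    # Leave-one-out sensitivity analysis: re-pool with each study excluded.
    # A study whose removal shifts the pooled estimate sharply is exactly the
    # kind of "too good to be true" result this thread warns about.
    for i in range(len(effects)):
        p, _, _ = random_effects(effects[:i] + effects[i + 1:],
                                 variances[:i] + variances[i + 1:])
        print(f"without study {i + 1}: pooled = {p:.3f}")

If I^2 is high, or the pooled estimate swings sharply when a single study is dropped, that is the cue to go looking for the source of the heterogeneity rather than trusting the headline number.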

In the end it comes down to one question: what was the purpose of doing this meta-analysis, and what insight does it offer others?

Professor Hu put it well: if your research turns earlier research into waste paper, that too is a demonstration of its value.

Just my shallow opinion, for reference only!
