Science Policy For All

Because science policy affects everyone.

Posts Tagged ‘reproducibility’

Science Policy Around the Web – July 16th, 2019


By Allison Cross, PhD

Source: MaxPixel

Are dinosaur fossils ‘minerals’? The Montana Supreme Court will decide high stakes case

A property rights dispute over fossils found on a Montana ranch is now in the hands of the Montana Supreme Court, and the decision could have wide-ranging implications, affecting how fossil hunters operate and calling into question the ownership of fossils currently held in private and public collections around the world.

The dispute began over a piece of land in Garfield County, Montana previously owned by George Severson. The property lies within the Hell Creek Formation, a famous and extensively studied dinosaur fossil site spanning Montana, the Dakotas, and Wyoming. In 2005, Severson’s sons sold the surface rights to the property to another family, the Murrays, while retaining the mineral rights. After the sale, the Murrays and an amateur fossil hunter, Clayton Phipps, began excavating the land. They unearthed multiple rare fossils, including the complete skeletons of two dinosaurs that appear to have been fighting when they died, a Triceratops foot and skull, and a complete T. rex.

The discovery of these rare and valuable fossils sparked an ownership dispute between the Seversons (who hold the mineral rights to the land) and the Murrays (who hold the surface rights). Historically, fossils have been considered part of the surface property, and when the Murrays filed a lawsuit seeking ownership of the fossils, the district court ruled in their favor. The Seversons then appealed to the 9th Circuit, which, in a surprising decision, ruled for the Seversons. The decision concerned many, including the Society of Vertebrate Paleontology, the Field Museum of Natural History in Chicago, and the Museum of the Rockies. The 9th Circuit was asked to reconsider the case and, after granting the rehearing, vacated its earlier decision and sent the question up to the Montana Supreme Court.

The Severson vs. Murray dispute over fossil ownership has left many worried about ownership challenges to important fossils currently held in academic, museum, and private collections. In April, Montana enacted a law stating that fossils are not minerals and therefore belong to the surface estate. This law, however, does not apply to existing disputes.

(Jeremy P. Jacobs, E&E News, Science)

Potential Causes of Irreproducibility Revealed

Scientists and the public have long been concerned about how experiments performed in the lab will translate to patients.  These concerns are heightened by the recently acknowledged lack of reproducibility within science, particularly in the biological sciences.  If scientists in different labs are unable to reproduce the same in vitro data, we should not be surprised when these findings fail to translate to humans.  

In an attempt to explore some of the factors affecting reproducibility, five research labs in the NIH LINCS Program Consortium performed the same experiment and compared the results. Each lab aimed to quantify the responsiveness of cultured mammalian cells to anti-cancer drugs. Drug response assays like those performed by these labs are considered relatively simple and are standard during drug development.

The results of this multi-lab study, along with an analysis of the technical and biological factors affecting reproducibility between the five labs, were recently published in Cell Systems. In the published study, each lab received the same detailed protocol and was provided cells, media, and drugs from the same source. Despite this, initial experiments performed by the five groups reported drug potencies that varied as much as 200-fold.

Researchers were able to identify several technical factors contributing to the inconsistent data, including differences in cell-counting methods, edge effects, and non-uniform cell growth in the culture plates. The groups improved their replicability by using more standardized protocols and by randomizing the locations of controls and technical replicates in the culture plates, reducing the biases introduced by edge effects and uneven cell growth (see the sketch below). Though these changes did result in more consistency, replicability remained higher within groups than between groups.
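To make the fix concrete, here is a minimal sketch of that kind of plate-layout randomization, assuming a standard 96-well plate; the dose and replicate counts are illustrative, not taken from the consortium’s actual protocol.

```python
import random

# Illustrative sketch: scatter doses and controls randomly across a 96-well
# plate so edge effects (evaporation, uneven temperature at the rim) are not
# confounded with any particular treatment. Counts below are hypothetical.

ROWS, COLS = "ABCDEFGH", range(1, 13)
wells = [f"{r}{c}" for r in ROWS for c in COLS]          # 96 wells

# 10 drug doses x 9 replicates, plus 6 vehicle controls = 96 assignments.
samples = [f"dose_{d}_rep_{i}" for d in range(10) for i in range(9)]
samples += [f"control_{i}" for i in range(6)]

random.seed(42)            # fixed seed so the layout itself is reproducible
random.shuffle(samples)    # randomize placement instead of filling row by row

layout = dict(zip(wells, samples))
for well in ["A1", "D6", "H12"]:   # corner, center, and edge examples
    print(well, "->", layout[well])
```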

Though this study demonstrated that controlling for variability helps in obtaining reproducible data, James Evans, a University of Chicago researcher not involved in the study, argues that “the point isn’t just to get reproducible effects.” To improve the translation of preclinical findings, Evans explains, “We want reproducible effects that are going to be robust to subtle changes in the experiment.”

(Abby Olena, The Scientist) 

Have an interesting science policy link? Share it in the comments!


Written by sciencepolicyforall

July 16, 2019 at 4:42 pm

Science Policy Around the Web – October 16, 2018


By: Sarah L. Hawes, Ph.D.


Source: Pixabay

Transparency

AAAS CEO Defends Scientific Evidence, Urges EPA to Scrap “Transparency” Rule

On October 3, a Senate subcommittee heard testimony for and against the proposed “Transparency Rule”, which would guide what scientific evidence could be considered when forming EPA policy. The House version of the rule passed in March 2017, and the context within which the rule would be implemented is discussed in the May 8, 2018 Science Policy for All linkpost EPA Cites “Replication Crisis” in Justifying Open Science Proposal by Saurav Seshadri, PhD.

During the recent Senate hearing, American Association for the Advancement of Science CEO Rush Holt testified that, in his view, a requirement that research make all data publicly available would eliminate specific types of research from consideration. This, he argued, could be used to justify reliance on a subset of science supporting particular policy, producing politically motivated results “in order to loosen regulations” rather than increasing independent evaluation and reproducibility. He testified that in many cases within the EPA’s purview, such as analyses of the effects of natural disasters or of accidental human and environmental toxin exposures, reproducing results is neither realistic nor relevant. Furthermore, making all collected data public would violate privacy rules where medical records are involved, and studies conducted under conditions of confidentiality would become unusable even though data such as individual names are irrelevant to statistical outcomes.

Edward Calabrese, a professor of toxicology at the University of Massachusetts, and Robert Hahn of the Georgetown University Center for Business and Public Policy both testified in favor of the Transparency Rule. Calabrese additionally urged that all data initially considered in crafting policy be included in public documentation, along with explanations of why any were discarded, a requirement that could impose a substantial burden during policy formation and make the development of new policies prohibitively difficult. Hahn urged that the Transparency Rule be applied across all federal agencies.

(Anne Q. Hoy, AAAS News)

Antibiotic Resistance

New study links common herbicides and antibiotic resistance

Executive Order 13676 established the Presidential Advisory Council on Combating Antibiotic-Resistant Bacteria (PACCARB), which developed a five-year (2015–2020) National Action Plan emphasizing surveillance, identification of resistant bacterial characteristics, resistance prevention, and development of new antibiotics. Agriculture appears in the plan only in connection with surveillance of antibiotic resistance in livestock, transmission of resistant pathogens to humans, and development of appropriate livestock practices; despite thoroughly delineating the lines of inquiry expected of various agencies, the plan nowhere mentions agricultural crops or agricultural chemicals. However, a 2015 study found that antibiotic resistance developed significantly faster in pathogens exposed to common herbicides in conjunction with antibiotics. According to the paper, herbicides are routinely tested for toxicity “but not sublethal effects on microbes,” even though sublethal effects are known to contribute substantially to antibiotic resistance.

A new study finds bacterial resistance to antibiotics developing at rates up to 100,000 times faster in the presence of dicamba (Kamba) and glyphosate (Roundup), herbicides commonly used worldwide. The earlier paper found that the presence of herbicides could either increase bacterial resistance to antibiotics or increase the antibiotics’ effectiveness, depending on the combination of herbicide, bacterial type, and antibiotic. The present study finds that even when a herbicide increased the lethality of an antibiotic, the rate at which the bacteria became resistant still accelerated in the herbicide’s presence. Informal peer comments note that one of the antibiotics in the study (ciprofloxacin) has also recently been used as an herbicide, underscoring the importance of research into interactions between these categories of chemicals.

As scientists pursue the goals of the National Action Plan to reduce antibiotic resistance, it is becoming clear that even the most carefully delineated plan cannot entirely encompass the scope of influences on antibiotic resistance. Continuing research shows that there is much we do not yet know.

(Margaret Agnew, University of Canterbury News)

Have an interesting science policy link? Share it in the comments!

Written by sciencepolicyforall

October 17, 2018 at 3:15 pm

Science Policy Around the Web – October 5, 2018


By: Cindo O. Nicholson, Ph.D.


Source: Pixabay

Food & Nutrition

More evidence that nutrition studies don’t always add up

Nutrition studies are important to public health because the incidence of cardiovascular and other diseases can be reduced by educating the public on which foods to eat and in what proportions. For example, data from the National Health and Nutrition Examination Survey show that in 2015–2016 the prevalence of obesity in adults and children was 39.8% and 18.5%, respectively. Among the causes of obesity is a poor diet consisting mostly of high-calorie foods that are low in nutrients. These examples highlight why nutrition studies are needed and important for improving public health.

Despite their importance, nutrition studies have been plagued by inconsistencies. Most recently, a prominent food scientist at Cornell University, Dr. Brian Wansink, resigned after an investigation found that he committed “academic misconduct in his research and scholarship, including misreporting of research data.” The finding may create a substantial ripple effect because of Dr. Wansink’s prominence in the field: it led to his 14-month appointment as executive director of the U.S. Department of Agriculture’s Center for Nutrition Policy and Promotion in 2007, and his research prompted the government to spend $20 million redesigning school cafeterias.

So far, thirteen of Dr. Wansink’s papers have been retracted due to questions about their scientific validity. His lab has been accused of exhaustive statistical fishing, known as “data-dredging” or “p-hacking”, to detect any interesting relationship that would create a “big splash” with the public; the toy simulation below shows why dredging reliably produces false positives. Unfortunately, data-dredging in the food and nutrition sciences is fairly widespread. What is happening with Dr. Wansink and the field of food and nutrition science should be a wake-up call to all fields of health sciences research, because their findings are used as the rationale for public policies. Striving for consistency in research is necessary for the public to have faith in the scientific evidence used in public policy.
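To see why data-dredging is so treacherous, here is a small, self-contained simulation (unrelated to Dr. Wansink’s actual data): when a hundred pure-noise comparisons are each tested at p < 0.05, a handful come out “significant” purely by chance.

```python
import math
import random
import statistics

# Toy illustration of p-hacking: test many null (pure-noise) hypotheses and
# count how many look "significant" by chance. Uses a simple two-sample
# z-style test on simulated data; no real nutrition data is involved.

random.seed(0)

def fake_study(n=50):
    """Two groups drawn from the SAME distribution: any 'effect' is noise."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    se = math.sqrt(statistics.variance(a) / n + statistics.variance(b) / n)
    z = (statistics.mean(a) - statistics.mean(b)) / se
    return abs(z) > 1.96   # roughly p < 0.05

hits = sum(fake_study() for _ in range(100))
print(f"{hits} of 100 pure-noise comparisons came out 'significant'")
# Expect about 5: dredge enough variables and a "big splash" is guaranteed.
```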

(Anahad O’Connor, The New York Times)

 

Human Fetal Tissue Research

Trump administration launches sweeping review of fetal-tissue research

The U.S. government has cancelled a contract with the non-profit tissue supplier Advanced Biosciences Resources (ABR; Alameda, CA) under which ABR would provide the Food & Drug Administration (FDA) with human fetal-tissue samples. The samples were to be implanted into mice to create “humanized” mice, which could then be used in experiments to approximate how humans would respond to drug treatments. In a letter to FDA commissioner Scott Gottlieb, 85 members of the U.S. House of Representatives claimed that ABR might have violated federal law by profiting from the sale of the “body parts of children”. This letter prompted the Department of Health and Human Services (HHS) to cancel the contract with ABR. Furthermore, HHS is auditing “all acquisitions of human fetal-tissues” to ensure that all tissue providers adhere to federal regulations.

Though researchers support the regulations in place for the use of fetal tissues, some wonder whether this federal audit reflects a politicization of research done with human fetal tissues. Strongly emotive language was used by members of the House of Representatives in describing the sale of “body parts of children”, and that language inaccurately portrays the human fetal tissues used in research. As one researcher pointed out, the fetal tissues used in research are non-viable and would otherwise be discarded, which raises the question: is it better to discard these tissues or to use them to benefit human health? The use of human fetal tissue is indispensable for studying organ development, tissue regeneration, and human development as a whole. The regulation of human fetal-tissue use in research should be fair, sensible, and not politically motivated.

(Sara Reardon, Nature)

Have an interesting science policy link? Share it in the comments!

Written by sciencepolicyforall

October 5, 2018 at 9:08 pm

Science Policy Around the Web – April 27, 2018


By: Michael Tennekoon, PhD


Source: Pixabay

Productivity of Science

Is Science Hitting a Wall?, Part 1

Scientific research is hitting a wall: that is the view of a recent study published by four economists. Sustaining the famous metric by which the density of computer chips doubles every two years now takes 18 times as many researchers as it once did. The pattern extends to other areas of research as well. In medicine, for example, “the numbers of new drugs approved per billion U.S. dollars spent on R&D has halved every 9 years since 1950”. In general, while research teams appear to be getting bigger, the number of patents produced per researcher has declined. Alarmingly, critics argue that some fields may even be regressing; for example, the over-treatment of psychiatric and cancer patients may have caused more harm than good. (A quick calculation below shows how steeply that drug-approval decline compounds.)
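To get a feel for how quickly “halving every 9 years” compounds, here is a back-of-envelope calculation; the 2018 endpoint is an assumption chosen only for illustration.

```python
# Back-of-envelope check of the article's figure: if drugs approved per
# billion R&D dollars halve every 9 years starting in 1950, the cumulative
# decline by a given year is 2 ** (elapsed_years / 9).

def productivity_decline(year, start=1950, halving_period=9):
    """Fold-decline in drugs approved per R&D dollar relative to `start`."""
    return 2 ** ((year - start) / halving_period)

for year in (1980, 2000, 2018):
    print(year, f"~{productivity_decline(year):,.0f}x fewer drugs per dollar")
# By 2018 (an assumed endpoint) this is roughly a 190-fold decline vs. 1950.
```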

But why would science be hitting a wall? One major factor could be the reproducibility crisis: the problem that many peer-reviewed claims cannot be replicated, calling into question the validity of the original research findings. Researchers suggest that intense competition for funding and jobs has created pressure to conduct innovative “high risk” research in as short a time as possible. While this type of research can gain plenty of press, it often lacks the scientific rigor that ensures findings are reliable. However, the perceived slow-down in research productivity could also be a result of the natural advancement of science: the low-hanging-fruit problem. Said another way, most of the easier problems have already been solved, leaving only problems that require vast scientific resources to solve.

On the other hand, researchers in some fields can rightfully push back and argue that scientific progress is not stalling but is in fact accelerating. For example, technologies such as CRISPR and optogenetics have produced a multitude of new findings, particularly in neuroscience and genetics research. Even with these new technologies, however, the end products for society at large remain relatively disappointing.

Given these concerns, how scientific research moves forward raises some tough questions for the field. Given funding limitations, how much do we, as a society, value ‘pure science’, the effort to understand rather than manipulate nature? Scientific curiosity aside, in purely economic terms, is it worth testing the out-of-Africa hypothesis of human origins, or sending humans to different planets? Is it worth investing in the latest innovative technology that produces new findings with limited applicability to human health? Scientists and society at large must be open to weighing the costs and benefits of scientific enterprises and deciding which avenues of research are worth pursuing.

(John Horgan,  Scientific American)

Vaccine Ethics

The vaccine dilemma: how experts weigh the benefits for many against risks for a few

Cost-benefit analysis. Sure, it’s easy to do when you’re on an Amazon shopping spree. But what about when millions of lives are at stake? And what if those millions of lives belong to children, who cannot give informed consent? Not so easy anymore, but that is the job of the World Health Organization’s Strategic Advisory Group of Experts (SAGE), which last week decided to scale back the use of a new vaccine that protects against dengue.

Two years ago, SAGE concluded the vaccine was safe to use in children in places with high dengue infection rates, despite theoretical concerns that the vaccine might increase the risk of developing a severe form of dengue in some children. Toward the end of last year, the vaccine’s manufacturer, Sanofi Pasteur, released new data validating these concerns. How do the numbers work out? It was estimated that in a population where 70% of individuals had had dengue at least once, the vaccine would keep seven times as many children out of the hospital as it would put there; if 85% of individuals had had dengue, that figure becomes 18 to 1. Those ratios shrink as the background infection rate falls (the toy calculation below shows how sharply), and the numbers were ultimately deemed not worth the risk.
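A toy benefit-risk calculation, emphatically not SAGE’s actual methodology, shows how sensitive the ratio is to prior infection rates; the assumed benefit-to-harm rate ratio of 3 is chosen only because it roughly reproduces the article’s two figures.

```python
# Toy benefit-risk model, NOT SAGE's analysis: assume the vaccine prevents
# severe dengue in previously infected (seropositive) children and raises
# the risk in never-infected (seronegative) ones. The benefit rate of ~3x
# the harm rate is an assumption fitted to the article's quoted figures.

def benefit_to_harm(seroprevalence, benefit_rate=3.0, harm_rate=1.0):
    helped = seroprevalence * benefit_rate        # hospitalizations prevented
    harmed = (1 - seroprevalence) * harm_rate     # hospitalizations caused
    return helped / harmed

for p in (0.50, 0.70, 0.85):
    print(f"{p:.0%} previously infected -> {benefit_to_harm(p):.0f}:1")
# 70% -> 7:1 and 85% -> 17:1, close to the article's 7:1 and 18:1;
# at 50% the ratio collapses to 3:1, showing how fast the margin erodes.
```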

What goes into making these decisions?

One factor is the prevalence of the disease. For example, the oral polio vaccine could prevent millions of children from becoming paralyzed, but it could also cause paralysis in rare cases. In the 1950s and 1960s, when polio was highly prevalent, it made sense to recommend this vaccine; but as polio became nearly non-existent toward the end of the 20th century, using the oral vaccine was no longer prudent.

However, dengue is still rampant in today’s world, so what is different in this case?

Public perception. The modern world is highly litigious and has access to a wide variety of information, both factual and fake. The result is a very skeptical view of science, in which negative press for one vaccine can cause collateral damage for many other vaccines, unlike a few decades ago. For example, in the 1950s it was discovered that children had been given a polio vaccine that mistakenly contained live virus. This left 51 children in the US paralyzed and killed 5. Nevertheless, polio vaccinations resumed, the company responsible (Cutter Laboratories) continued operating, and polio was virtually eradicated.

RotaShield, a vaccine against rotavirus (a virus that causes severe diarrhea in young children), had a very different experience. Approved in 1998, it was suspended one year later after the CDC estimated that for every 10,000 children vaccinated there would be an extra 1 or 2 cases of intussusception (a type of bowel blockage) over what would normally be seen. Although in developing countries the number of lives saved would have far exceeded the extra cases of intussusception, the vaccine was still suspended. A safer rotavirus vaccine only reached the market in 2006; in the intervening years, an estimated 3 million children died from rotavirus infections. (Note: the risk of rotavirus infection persists even when the vaccine is given, but at far lower rates.)

Given the tremendously difficult decisions that must be made in implementing vaccines, and the impact that public perception can have on those decisions, society has a responsibility to become better informed about the potential benefits and drawbacks of vaccines and to actively tease apart fact from fiction.

(Helen Branswell, STAT)

Have an interesting science policy link? Share it in the comments!

Written by sciencepolicyforall

April 27, 2018 at 3:26 pm

Science Policy Around the Web – August 18, 2017


By: Nivedita Sengupta, PhD


Source: Pixabay

Climate Science

Effort backed by California’s flagship universities comes as US President Donald Trump shrugs off global warming

As US President Donald Trump announces his intention to withdraw from the Paris Agreement, renouncing climate science and policy, scientists in California are moving to develop a home-grown climate research institute, the California Climate Science and Solutions Institute. California has long tried to protect the environment through various initiatives, and this one has already been endorsed by California’s flagship universities and warmly received by Governor Jerry Brown. The initiative is still in the early stages of development and will need clearance from the state legislature.

The institute will aim to fund basic as well as applied research on topics related to climate change, ranging from ocean acidification to tax policy. Priority will be given to projects and experiments that engage communities, businesses, and policymakers. “The goal is to develop the research we need, and then put climate solutions into practice,” says Daniel Kammen, an energy researcher at the University of California, Berkeley, who adds that the work will have global impact. The California project may have an ally, too: Peter De Menocal, the science dean at Columbia University in New York City, plans to build an alliance of major universities and philanthropists to support research answering pressing questions about the impacts of climate change. De Menocal has already tested the idea on a smaller scale by launching the Center for Climate and Life at Columbia University last year, which raised US$8 million in private funding.

This is not the first time California has stepped in to support an area of science that fell out of favor in Washington DC. In 2004, after President George W. Bush restricted federal support for research on human embryonic stem cells, the state’s voters approved $3 billion to create the California Institute for Regenerative Medicine in Oakland. Since then, the center has funded more than 750 projects. The proposal for a new climate institute also started along a similar path, as a reaction to White House policies, but its organizers say the concept has evolved into a reflective exercise about academics’ responsibility to help create a better future. The panel members hope to put a complete plan for the institute before the California legislature this year, persuading lawmakers to fund the effort by September 2018, before Governor Brown’s global climate summit in San Francisco.

(Jeff Tollefson, Nature News)

Retractions

Researchers pull study after several failed attempts by others to replicate findings describing a would-be alternative to CRISPR

The high-profile gene-editing paper on NgAgo was retracted by its authors on August 2, citing the inability of scientists around the globe to replicate its main finding. The paper, published in Nature Biotechnology in May 2016, described an enzyme named NgAgo that could be used to knock out or replace genes in human cells by making incisions at precise regions of the DNA. The study was promoted as a better alternative to the CRISPR-Cas9 gene-editing system, which has revolutionized gene editing and has even been used to fix genes for a heritable heart condition in human embryos. Han Chunyu, a molecular biologist at Hebei University of Science and Technology in Shijiazhuang and the technique’s inventor, initially attracted wide acclaim for the findings. Within months, however, reports began emerging on social media of failures to replicate the results, and the doubts were confirmed when a series of papers was published stating that NgAgo could not edit genomes as described. Han had earlier told Nature’s news team that he and his team had identified a contaminant that could explain other groups’ struggles to replicate the results, and promised that revised results would be published within two months. Yet on August 2 they retracted the paper, stating, “We continue to investigate the reasons for this lack of reproducibility with the aim of providing an optimized protocol.”

The retraction puts in question the future of the gene-editing center that Hebei University plans to build with 224 million yuan (US$32 million), with Han as its leader. Moreover, Novozymes, a Danish enzyme manufacturer, paid the university an undisclosed sum as part of a collaboration agreement. Dongyi Chen, Novozymes’ Beijing-based press manager, told Nature’s news team in January that the technology was being tested and showed some potential, but was at a very early stage of development, making its relevance difficult to determine. Following news of the retraction, he stated that the company has explored the efficiency of NgAgo but so far has failed to detect any obvious improvement. Yet the company is not giving up hope, as scientific research takes time.

(David Cyranoski, Nature News)

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

August 18, 2017 at 5:11 pm

Science Policy Around the Web – June 06, 2017


By: Kseniya Golovnina, PhD

Source: Flickr, by USDA, via Creative Commons (CC BY 2.0)

Food Security

What if Food Crops Failed at the Same Time?

While one group of people is fighting climate change and another considers it “mythical”, researchers who specialize in the study of social-ecological systems are developing food-supply risk assessment models. Food crops are essential to human existence, yet less than one-fourth of the planet’s land (the “breadbaskets”) produces three-fourths of the staple crops that feed the world’s population, and climate change could cause crop losses in most of the breadbaskets.

Two important factors included in the models are shocks to major land-crop production and to the economy. Shocks like the droughts and heat waves in Ukraine and Russia in 2007 and 2009 almost wiped out wheat crops and caused global wheat prices to spike. Meanwhile, demand assessments project that food production may have to double by 2050 to feed a growing population. Together, these potential environmental and economic stresses are making the world food production system less resilient, and they will affect rich and poor nations alike. To measure the fragility of the system, researchers developed scenarios of small shocks (10 percent crop loss) and large shocks (50 percent crop loss). These were then applied to corn, wheat, or rice output using an integrated assessment model, the Global Change Assessment Model, developed by the U.S. Department of Energy. (The toy sketch below gives some intuition for why even a modest supply shock can move prices sharply.)
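For intuition only — this is far simpler than the Global Change Assessment Model, which also captures land use, trade, and substitution — a one-line constant-elasticity sketch shows why even the “small” 10 percent shock can spike prices; the elasticity value is an assumption chosen for illustration, not taken from the study.

```python
# Toy supply-shock sketch, NOT GCAM: with constant-elasticity demand
# Q = A * P**e, losing a fraction s of supply raises price by a factor of
# (1 - s) ** (1 / e). Staple-food demand is assumed inelastic (e = -0.3).

def price_spike(crop_loss, demand_elasticity=-0.3):
    """Fractional price rise after losing `crop_loss` of supply."""
    return (1 - crop_loss) ** (1 / demand_elasticity) - 1

for shock in (0.10, 0.50):   # the study's small and large shock scenarios
    print(f"{shock:.0%} crop loss -> ~{price_spike(shock):.0%} price rise")
# Because staple demand barely falls as prices climb, even a 10% shock
# produces a large spike; a 50% shock is catastrophic in this toy model.
```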

Among the critical findings is that “breadbasket” regions respond to shocks in different ways. For example, South Asia, where most of the arable land is already in use, is quite unresponsive to shocks occurring elsewhere in the world, because the total amount of land in agricultural production there cannot be changed significantly. In Brazil the situation is the opposite: it has great potential to bring new land into production if large shocks occur. However, clearing Brazil’s forests requires significant effort and would add significantly to global climate change. Within the research agenda of the Pardee Center, these risks and preventive actions are discussed in more detail. The warning is clear: humankind needs to be aware of and prepared for the potential failure of multiple “breadbaskets” if we want to reduce the potential for catastrophe. (Anthony Janetos, The Conversation)

Reproducibility in Science

Research Transparency: Open Science

Increasing amounts of scientific data, the complexity of experiments, and the hidden or proprietary nature of data have given rise to the “reproducibility crisis” in science. Reproducibility studies in cancer biology have revealed that only 40% or fewer of peer-reviewed analyses are replicable. Another large-scale project attempting to replicate 100 recent psychology studies succeeded in replicating fewer than half of the original results.

These findings are driving scientists to look for ways to increase study reliability and to make research practices more efficient and available for evaluation. A philosophy of open science, in which scientists share their primary materials and data, makes analytical approaches more transparent and allows common research practices and standards to emerge more quickly. For scientific journals and associations, open science methods enable new ways to store and utilize data. Some journals are specifically dedicated to publishing data sets for reuse (Scientific Data, Journal of Open Psychology Data); others require or reward open science practices like publicly posting materials and data.

The widespread use of online repositories to share study materials and data helps mitigate the problems of reproducibility by storing large data sets and physical materials. However, open science practice is still very much in development and faces some significant disincentives. Habits and reward structures are two major forces working against it. Researchers are used to being closed, guarding their data for fear of being scooped, while journal editors tend to favor papers that tell a tidy story with perfectly clear results. This leads researchers to omit “failed” studies that don’t clearly support their theories.

While efforts to overcome these obstacles are difficult, the development of fully transparent science should be encouraged, as openness helps improve understanding and acknowledges the truth that real data are often messy. (Elizabeth Gilbert and Katie Corker, The Conversation)

 

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

June 6, 2017 at 9:00 am

Science Policy Around the Web – January 27, 2017


By: Nivedita Sengupta, PhD

Source: NIH Image Gallery on Flickr, under Creative Commons

Human Research Regulation

US Agency Releases Finalized ‘Common Rule’, Which Governs Human-Subjects Research

On September 8, 2015, the US Department of Health and Human Services (HHS) proposed significant revisions to the Federal Policy for the Protection of Human Subjects, also known as the “Common Rule”, the set of federal regulations governing the conduct of clinical research involving human subjects. Among the proposed changes, an important one concerned obtaining people’s consent before using their biological samples in subsequent studies. On January 18, 2017, the final version of the rule was released, and the proposed change had been abandoned. This is a blow to patient-privacy advocates; however, the US National Academies of Sciences, Engineering, and Medicine had argued against that requirement and others, citing the undue burden the changes would impose on researchers, and had recommended that it be withdrawn.

The current version of the Common Rule has generated mixed feelings. Researchers are happy that the government listened to scientists’ fears about increased research burdens, whereas people like Twila Brase, president and co-founder of the Citizens’ Council for Health Freedom in St Paul, Minnesota, are disappointed because they believe these changes ought to have been made. The new version of the Common Rule does require that scientists include a description of the study, along with its risks and benefits, on the consent forms used by patients, and that federally funded trials post patient consent forms online. However, these requirements do not extend to trials conducted with non-federal funds. (Sara Reardon, Nature News)

Biomedical Research

An Open-Science Effort to Replicate Dozens of Cancer-Biology Studies is Off to a Confusing Start

The Reproducibility Project on Cancer Biology was launched in 2013 to scrutinize the findings of 50 cancer papers from high-impact journals, with the aim of determining what fraction of influential cancer biology studies are sound. In 2012, researchers at the biotechnology firm Amgen performed a similar study and announced that they had failed to replicate 47 of 53 landmark cancer papers, but they did not identify the studies involved. In contrast, the reproducibility project makes all its findings open. Full results should appear by the end of the year, and eLife is already publishing five fully analyzed reports in January. Of the five, one failed to replicate, and the remaining four showed replication results that are less clear.

These five results paint a muddy picture for anyone waiting on the outcome to gauge the impact of these studies. Though some researchers praised the project, others feared their work and careers would be unfairly discredited. According to Sean Morrison, a senior editor at eLife, the results were “uninterpretable” because things went wrong with the tests used to measure tumor growth in the replication attempts, and the replication researchers were not allowed to deviate from the protocols agreed at the start of the project in consultation with the original authors. “Doing anything else — such as changing the experimental conditions or restarting the work — would have introduced bias”, says Errington, the manager of the reproducibility project.

According to Errington, the clearest finding from the project is that the papers include very few details about their methods. The replication researchers had to spend hours working out the detailed protocols and reagents with the original authors. Even after following the exact protocols, the final reports list many reasons why the replication studies might have turned out differently, from variations in laboratory temperature to tiny differences in how a drug was delivered. Errington thinks the project helps bring such confounding details to the surface, which will be a great service for future follow-up work toward a cure for cancer. Other scientists, however, think such conflicts mean the replication efforts are not very informative, cannot fairly be compared to the originals, and will only delay future clinical trials. (Monya Baker and Elie Dolgin, Nature News)

 

Have an interesting science policy link?  Share it in the comments!