Science Policy For All

Because science policy affects everyone.

Posts Tagged ‘reproducibility’

Science Policy Around the Web – January 27, 2017

leave a comment »

By: Nivedita Sengupta, PhD

Source: NIH Image Gallery on Flickr, under Creative Commons

Human Research Regulation

US Agency Releases Finalized ‘Common Rule’, Which Governs Human-Subjects Research

On September 8, 2015, the US Department of Health and Human Services (HHS) proposed significant revisions to the Federal Policy for the Protection of Human Subjects, also known as the “Common Rule”: the set of federal regulations governing the conduct of clinical research involving human subjects. Among the proposed changes, a notable one would have required researchers to obtain people’s consent before using their biological samples in subsequent studies. On January 18, 2017, the final version of the rule was released, and that proposed change was abandoned. This is a blow to patient-privacy advocates; however, the US National Academies of Sciences, Engineering, and Medicine had argued that this requirement and others would impose an undue burden on researchers and recommended that they be withdrawn.

The current version of the Common Rule has generated mixed feelings. Researchers are happy that the government listened to scientists’ fears about increased research burdens, whereas people like Twila Brase, president and co-founder of Citizens’ Council for Health Freedom in St Paul, Minnesota, are disappointed because they believe these specific changes ought to have been made. Moreover, the new version of the Common Rule requires that consent forms include a description of the study along with its risks and benefits, and that federally funded trials post patient consent forms online. However, these requirements do not extend to trials conducted with non-federal funds. (Sara Reardon, Nature News)

Biomedical Research

An Open-Science Effort to Replicate Dozens of Cancer-Biology Studies is Off to a Confusing Start

The Reproducibility Project: Cancer Biology was launched in 2013 to scrutinize the findings of 50 cancer papers from high-impact journals. The aim is to determine the fraction of influential cancer biology studies that are sound. In 2012, researchers at the biotechnology firm Amgen performed a similar exercise and announced that they had failed to replicate 47 of 53 landmark cancer papers, but they did not identify the studies involved. In contrast, the Reproducibility Project makes all of its findings open. Full results should appear by the end of the year; eLife published the first five fully analyzed reports in January. Of the five, one failed to replicate, and the remaining four produced results that are less clear-cut.

These five results paint a muddy picture for people waiting on the project to gauge the impact of these studies. Though some researchers praised the project, others feared unfair discredit of their work and careers. According to Sean Morrison, a senior editor at eLife, the results were “uninterpretable” because things went wrong with the tests used to measure tumor growth in the replication attempts, and the replication researchers were not allowed to deviate from the protocols, which had been agreed upon at the start of the project in consultation with the original authors. “Doing anything else — such as changing the experimental conditions or restarting the work — would have introduced bias”, says Tim Errington, the manager of the Reproducibility Project.

According to Errington, the clearest finding from the project is that the papers include very few details about their methods. The replication researchers had to spend hours working out the detailed protocols and reagents with the original authors. Even after following the exact protocols, the final reports list many reasons why the replication studies might have turned out differently, ranging from variations in laboratory temperature to tiny differences in how a drug was delivered. He thinks the project does a great service by bringing such confounding details to the surface, which will aid future follow-up work toward a cure for cancer. However, some scientists think such conflicts mean the replication efforts are not very informative, cannot be compared to the originals, and will only delay future clinical trials. (Monya Baker and Elie Dolgin, Nature News)


Have an interesting science policy link?  Share it in the comments!

Science Policy Around the Web – August 26, 2016

leave a comment »

By: Leopold Kong, PhD

Adipose Tissue. Source: Wikimedia Commons, by staff, “Blausen Gallery 2014”.

Health Policy

Is there such a thing as ‘fat but fit’?

Nearly 70% of American adults are overweight or obese, raising their risk for health problems such as heart disease, diabetes, and high blood pressure. However, about a third of obese individuals appear to have healthy levels of blood sugar and blood pressure. Whether these ‘fat but fit’ individuals are actually “fit” has been controversial. A recent study published in Cell Reports sought to dissect differences in the fat cells of ‘unfit’ versus ‘fit’ obese individuals using tools that probe the patterns of genes being turned on or off. Fat from non-overweight people was also examined in the study. Interestingly, fat from non-overweight and obese individuals differed in over 200 genes, regardless of ‘fitness’. However, the fat of ‘fit’ versus ‘unfit’ obese individuals differed in only two genes. Dr. Mikael Rydén, the lead author of the study, commented: “We think that adds fuel to the debate. It would imply that you are not protected from bad outcomes if you are a so-called fit and fat person.” The study also highlights the complexity of fat’s influence on health, and raises the possibility of ‘fat’ biopsies. For example, fat from normal-weight individuals following an unhealthy lifestyle may show marked differences that are diagnostic of future obesity. With the rising cost of treating chronic diseases associated with being overweight, further studies are warranted. (Lindzi Wessel, Stat News)

Biomedical Research

Half of biomedical research studies don’t stand up to scrutiny

Reproducible results are at the heart of what makes science ‘science’. However, a large proportion of published biomedical research appears to be irreproducible. A shocking study by scientists at the biotechnology firm Amgen, aiming to reproduce 53 “landmark” studies, showed that only 6 of them could be confirmed. The stakes are even higher when it comes to pre-clinical cancer research. In fact, they are $30 billion higher, according to a recent study suggesting that only 50% of findings can be reproduced. Primary sources of irreproducibility can be traced to (1) poor study design, (2) instability and scarcity of biological reagents and reference materials, (3) unclear laboratory protocols, and (4) poor data analysis and reporting. A major stumbling block may be the present culture of science, which does not reward publishing replication studies or negative results. Higher-impact journals generally prioritize work that demonstrates something new and potentially groundbreaking or controversial. When winning grant money and academic posts hinges on impact factor, reproducibility suffers. However, with such high potential for wasting substantial funds in medically significant areas, radical changes in science policy toward publishing, peer review, and science education are urgently needed. The recent reproducibility initiative aiming “to identify and reward high quality reproducible research via independent validation” may be a step in the right direction. However, a paradigm shift in scientists’ attitudes towards what constitutes important research might be necessary. (Ivan Oransky, The Conversation)


In CRISPR fight, co-inventor says Broad Institute misled patent office

The intellectual property dispute over the multibillion-dollar CRISPR gene editing technology has grown increasingly heated in recent months. With the FDA giving the go-ahead for the first U.S. clinical trial using CRISPR and with China beginning a clinical trial this month using this technology, the tension is high. On one side of the dispute is the University of California’s Jennifer Doudna, whose initial work established the gene-editing technology in a test tube. On the other side is the Broad Institute’s Feng Zhang, who within one year made the technology work in cells and organisms, and therefore broadly applicable for biotechnology. Was Zhang’s contribution a substantial enough advance to warrant its own patents? Was Doudna’s work too theoretical and basic? This week, a potentially damning email from the dispute’s legal filings was made public. The email is from a former graduate student of Zhang’s, Shuailiang Lin, to Doudna. In addition to asking for a job, Lin wrote that Zhang was unable to make the technology work until the 2012 Doudna publication revealed the key conceptual advances. Lin adds: “I think a revolutionary technology like this […] should not be mis-patented. We did not work it out before seeing your paper, it’s really a pity. But I think we should be responsible for the truth. That’s science.” A spokesperson for the Broad Institute, Lee McGuire, said that Lin’s claims are false, and pointed out that Lin was in a rush to renew his visa and had sent his explosive email to Doudna after being rejected for a new post at the Broad Institute. With CRISPR technology promising to change the face of biotechnology, the drama over its intellectual property continues to escalate. (Antonio Regalado, MIT Technology Review)

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

August 26, 2016 at 9:00 am

Science Policy Around the Web – August 19, 2016

leave a comment »

By: Ian McWilliams, PhD

Photo source: pixabay

Climate Change

Melting ice sheet may expose cold war base, hazardous waste

During the Cold War, the US Army Corps of Engineers began a top-secret mission to determine the feasibility of launching nuclear missiles at Russia from a base in Greenland. The military base constructed for this mission, named Camp Century, lies approximately 125 miles inland from the Greenland coast and was abandoned in 1964 after the Joint Chiefs of Staff rejected the plans for a nuclear base. When soldiers abandoned the base, it was thought that the leftover fuel and waste material would be safely interred, buried under ice for thousands of years.

However, climate change now threatens those plans. Increased ice melt could expose the base as early as 2090, and it is estimated that tens of thousands of gallons of diesel fuel, wastewater, sewage, and other chemicals could be released. Adding to concerns is the nuclear generator housed in the frozen base. Although the base never became a site for nuclear weapons, low-level radioactive coolant from the generator is still stored there. If ice melt continues at an accelerated rate, some have expressed concern that these chemicals could seep into waterways, causing a potential environmental catastrophe. (Stephen Feller, UPI)


Mouse microbe may make scientific studies harder to replicate

Reproducibility is an issue that has been the subject of much debate in the scientific community recently. Now, scientists are concerned that the microbiome may further complicate matters. The collection of commensal microorganisms that reside on or within the body is referred to as the microbiota, and it is now well known to affect the health of the host. Although researchers take meticulous steps to ensure that experimental animals are housed in identical conditions, including sterile bedding, strict temperature control, and standard light cycles, determining the experimental variability due to differences in the animals’ microbiomes has remained elusive. As researchers explore the issue further, they have found that mice from different vendors have very different compositions of gut bacteria, which could explain some inconsistencies in researchers’ experiments.

Although it is not mandated, taking steps to control for the microbiome may help address the reproducibility crisis. Segmented filamentous bacteria (SFB) have been identified as a notable concern, and some vendors now provide SFB-positive and SFB-negative animals separately. Although it is unlikely that SFB is the only culprit behind differences between studies, researchers continue to explore new variables in rodent husbandry in an effort to improve the reproducibility of scientific results. To add to the dilemma, because the species that constitute the microbiome are constantly changing, the microbiome is difficult to characterize and impossible to standardize. Since mice share their microbes by eating each other’s feces, cage-mates can have similar microbiomes, which provides natural microbiota normalization for littermates. (Kelly Servick, Science)

Precision Medicine

Spiking genomic databases with misinformation could protect patient privacy

New initiatives, like the Precision Medicine Initiative (PMI), are helping to cultivate the human genome into usable sets of data for research purposes. This pursuit is founded upon the willingness of participants to allow their genetic information to be pooled for analyses, but many have expressed concerns over the privacy of this genetic information. It has previously been shown that individuals can be identified from their anonymized genomic data, which has prompted researchers to look for additional security measures. Computer scientists Bonnie Berger and Sean Simmons have developed a new tool to help achieve this goal using an approach called differential privacy. To increase privacy, a small amount of noise, or random variation, is added to the results of a user’s database query. The information returned would still provide useful results, but it would be much more difficult to conclusively connect the data to a patient’s identity. A similar method has been used by the US Census Bureau and the US Department of Labor for many years.

However, some scientists, including Yaniv Erlich, are concerned that adding noise to the dataset will reduce users’ ability to generate useful results. Erlich stated, “It’s nice on paper. But from a practical perspective I’m not sure that it can be used”. In the search for privacy, free-form access to the data is limited. A “privacy budget” limits the number of questions that can be asked and excludes hundreds or thousands of locations in a genome. Additionally, because noise naturally increases error, it weakens the conclusions that can be drawn from a query. Simmons expects that answers will be close enough to be useful for a few targeted questions. The tradeoff for the reduced precision is that databases protected this way could be made instantly accessible and searchable, cutting down the time needed to gain access to databases such as those managed by the National Institutes of Health. Simmons added that this method is “meant to get access to data sets that you might not have access to otherwise”. The group plans to continue refining the method to balance researchers’ need for access to these data sets with patient privacy. (Anna Nowogrodzki, Nature)
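The core idea is easy to sketch. A counting query ("how many participants carry a given variant?") changes by at most one when any single person joins or leaves the dataset, so adding Laplace-distributed noise with scale 1/ε to the true count makes the answer differentially private. The Python below is a minimal, hypothetical illustration written for this post; the names (`private_count`, the toy cohort) are invented and are not part of Berger and Simmons’s actual tool:

```python
import random

def laplace_noise(scale):
    """Sample from a Laplace(0, scale) distribution: a random sign
    times an exponentially distributed magnitude."""
    return random.choice([-1, 1]) * random.expovariate(1.0 / scale)

def private_count(records, predicate, epsilon):
    """Answer a counting query with epsilon-differential privacy.
    A count has sensitivity 1 (one person changes it by at most 1),
    so Laplace noise with scale 1/epsilon suffices."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical cohort: 1000 participants, some carrying a variant.
cohort = [{"id": i, "has_variant": i % 7 == 0} for i in range(1000)]

# Smaller epsilon = stronger privacy but a noisier answer; each query
# spends part of the overall "privacy budget".
noisy = private_count(cohort, lambda r: r["has_variant"], epsilon=0.5)
```

This makes the tradeoff Erlich describes concrete: every extra question either spends more of the privacy budget or must tolerate a less reliable count.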

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

August 19, 2016 at 11:08 am

Science Policy Around the Web – July 26, 2016

leave a comment »

By: Ian McWilliams, Ph.D.

photo credit: Newport Geographic via photopin cc

Infectious Diseases

Research charities help marry two major South African HIV/TB institutes

Two research charities, the Wellcome Trust and the Howard Hughes Medical Institute (HHMI), have announced that they are joining efforts to fund the fight against HIV and tuberculosis (TB) in South Africa. South Africa has the world’s largest population of people infected with HIV. Because TB thrives in HIV-infected individuals, South Africa is experiencing a co-epidemic that has been challenging to battle. This collaboration marks the first time that HHMI and the Wellcome Trust have worked together on a global health institution.

The new Africa Health Research Institute combines the Africa Centre for Population Health’s detailed population data, gathered from over 100,000 participants, with the basic laboratory science and medical research of the KwaZulu-Natal Research Institute for TB-HIV (K-RITH). Together, the organization will work toward eliminating HIV and TB by training African scientists and will “link clinical and laboratory-based studies with social science, health systems research and population studies to make fundamental discoveries about these killer diseases, as well as demonstrating how best to reduce morbidity and mortality.” Projects funded by the institute include maintaining the longest-running population-based HIV treatment as prevention (TasP) trial in Africa and using genomics to study drug-resistant TB.

The organization is funded by a $50 million grant from The Wellcome Trust that is renewable over the next five years. Additionally, HHMI has already spent $40 million for the construction of new facilities, including a new biosafety level 3 laboratory that is designed to handle dangerous pathogens. These new efforts aim to apply scientific breakthroughs to directly help the local community. Deenan Pillay, the director of the new institute, has expressed his support of the organization’s mission by stating “There’s been increasing pressure and need for the Africa Centre not just to observe the epidemic but to do something about it. How long can you be producing bloody maps?” (Jon Cohen, ScienceInsider)

Scientific Reproducibility

Dutch agency launches first grants programme dedicated to replication

While a reproducibility crisis is on the minds of many scientists, the Netherlands has launched a new fund to encourage Dutch scientists to test the reproducibility of ‘cornerstone’ scientific findings. The €3 million fund was announced on July 19th by the Netherlands Organisation for Scientific Research (NWO) and will focus on replicating work that has “a large impact on science, government policy or the public debate.”

The Replication Studies pilot program aims to increase transparency, quality, and completeness of reporting of results. Brian Nosek, who led studies to evaluate the reproducibility of over 100 reports from three different psychology journals, hailed the new program and stated “this is an increase of infinity percent of federal funding dedicated to replication studies.” This project is the first program in the world to focus on the replication of previous scientific findings. Dutch scientist Daniel Lakens further stated that “[t]his clearly signals that NWO feels there is imbalance in how much scientists perform replication research, and how much scientists perform novel research.” The NWO has stated that it intends to include replication in all of its research programs.

The pilot program will focus both on the reproduction of findings using datasets from the original studies and on the replication of findings with new datasets gathered using the same research protocols as the original studies. The program expects to fund 8-10 projects each year, and, importantly, scientists will not be allowed to replicate their own work. The call for proposals will open in September, with an expected deadline in mid-December. (Monya Baker, Nature News)

Health Care Insurance

US Sues to Block Anthem-Cigna and Aetna-Humana Mergers

United States Attorney General Loretta Lynch has announced lawsuits to block two mergers involving four of the largest health insurers. Co-plaintiffs in the suits include the states of Delaware, Florida, Georgia, Illinois, Iowa, Ohio, Pennsylvania, Virginia, California, Colorado, Connecticut, Maine, Maryland, and New Hampshire, as well as the District of Columbia. The lawsuits are an attempt by the Justice Department to block Humana’s $37 billion merger with Aetna and Anthem’s $54 billion acquisition of Cigna, the largest merger in the history of health insurance. The Justice Department says that the deals violate antitrust laws and could mean fewer choices and higher premiums for Americans. Antitrust officials also expressed concern that doctors and hospitals could lose bargaining power in these mergers.

Both proposed mergers were announced last year, and if these transactions close, the number of national providers would be reduced from five to three large companies. Furthermore, the government says that Anthem and Cigna control at least 50 percent of the national employer-based insurance market. Lynch further added that “competition would be substantially reduced for hundreds of thousands of families and individuals who buy insurance on the public exchanges established under the Affordable Care Act.” The Affordable Care Act (ACA) aimed to encourage more competition between insurers to improve health insurance options and keep plans affordable. The Obama administration has closely watched the health care industry since the passing of that legislation and has previously blocked the mergers of large hospital systems and stopped the merger of pharmaceutical giants, such as the proposed merger of Pfizer and Allergan.

Health insurers argue that these mergers are necessary to make the health care system more efficient, and would allow doctors and hospitals to better coordinate medical care. In reaction to the announcement by the Justice Department, Aetna and Humana stated that they intend to “vigorously defend” the merger and that this move “is in the best interest of consumers, particularly seniors seeking affordable, high-quality Medicare Advantage plans.” Cigna has said it is evaluating its options. (Leslie Picker and Reed Abelson, New York Times)

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

July 26, 2016 at 11:00 am

Psychology’s Reproducibility Crisis

leave a comment »

By: David Pagliaccio, Ph.D.


Findings from the collaborative Reproducibility Project, coordinated by the Center for Open Science, were recently published in the journal Science. The report raised a stir among both the scientific and lay communities. The study summarized an effort to replicate 100 studies previously published in three major psychology journals. The project found that only around one third of the replication studies yielded significant results, and the effects observed in the replications were, on average, about half the size of those in the original reports. The study was quickly met with statistical and methodological critique, and in turn by criticism of that critique. With the concerns raised by the Reproducibility Project and the intense debate in the field, various organizations and media outlets have begun to spotlight this issue, noting that psychology may be experiencing a “reproducibility crisis.”

The authors of the study importantly indicted systemic forces that incentivize “scientists to prioritize novelty over replication” and journals for disregarding replication as unoriginal. Typically, a scientist’s career is most easily advanced through high-profile and highly cited publications in well-respected journals. This rarely includes replication attempts, as they are generally dismissed as not novel or not progressing the field. Further, statistically significant results are prized while null results are often chalked up to insufficient power and are not given a forum to help shape the literature. These factors lead to a large publication bias towards often underpowered but significant findings, a rampant ‘file drawer problem’ of not being able to publish non-significant results, and what is known as “p-hacking”, where authors analyze and reanalyze a given dataset in different ways to push a desired result towards significance.
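One common form of p-hacking, optional stopping, is easy to demonstrate with a simulation. The Python sketch below is hypothetical code written for this essay, not anything from the Reproducibility Project: it runs many null experiments in which both groups are drawn from the same distribution, but re-tests after every new batch of data and stops as soon as the result looks “significant”, which pushes the false-positive rate well above the nominal 5%:

```python
import random

def t_stat(a, b):
    """Absolute Welch t statistic for two independent samples."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return abs(ma - mb) / (va / len(a) + vb / len(b)) ** 0.5

def false_positive_rate(trials=2000, peeks=5, batch=15):
    """Fraction of null experiments declared 'significant' when the
    analyst collects `batch` subjects per group, tests, and keeps
    adding batches (up to `peeks` times) until |t| crosses ~2.0,
    roughly the two-sided 5% cutoff."""
    hits = 0
    for _ in range(trials):
        a, b = [], []
        for _ in range(peeks):
            # Both groups come from the SAME distribution: no real effect.
            a += [random.gauss(0, 1) for _ in range(batch)]
            b += [random.gauss(0, 1) for _ in range(batch)]
            if t_stat(a, b) > 2.0:  # "significant" -- stop and publish
                hits += 1
                break
    return hits / trials

random.seed(42)
print(false_positive_rate(peeks=1))  # honest single test: near 0.05
print(false_positive_rate(peeks=5))  # peeking after each batch: inflated
```

Nothing real is being measured here, yet peeking a handful of times roughly doubles the chance of a spurious “discovery”; the same logic applies to trying multiple covariates, outlier rules, or subgroup analyses and reporting only the one that worked.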

Several initiatives have been put forth to try to alleviate this “reproducibility crisis.” For example, the American Statistical Association released a statement regarding the use of p-values in scientific research. This served both as a general clarification on the use and interpretation of p-values in null hypothesis significance testing and as an impetus to include other measures, such as effect size, in interpreting results. This type of reframing helps assure good statistical practice and resists the tendency to misinterpret the arbitrary statistical threshold of p<0.05 as a marker of truth, a tendency that often biases scientific findings and reporting. Additionally, the National Institutes of Health (NIH) has adopted new guidelines for grant submissions to try to enhance rigor and reproducibility in science, for example by increasing transparency and setting basic expectations for biological covariates. Yet the main investigator-initiated research grant from the NIH, the R01, still includes novelty as a main scoring criterion and does not have any specific provisions for including replication studies.

Publication-related initiatives have also begun to be designed to help incentivize replication. For example, the Association for Psychological Science has created a registered replication report format in which, before data collection even begins, scientists can pre-register a replication study to be published in Perspectives on Psychological Science. This saves scientists from struggling to publish a direct replication and reframes the focus of replication away from whether a prior study was ‘true’ or ‘false’ and towards the cumulative effect size across studies. While this is a step forward, few have yet made use of the opportunity. Importantly, while the rare journal, like PLOS ONE, explicitly states that it accepts replication submissions, top-tier journals have generally not joined in on allowing registered replications or creating specific article submission formats for replications that otherwise would not be considered ‘novel.’ Other interesting avenues for addressing this issue have begun to spring up; for example, one website was created as an archive of attempts to replicate psychology studies. While this provides a way to publicize failures to replicate that may otherwise not be publishable, these reports currently do not seem to be indexed by standard databases, like PubMed or PsycNET. Thus, while more failures to replicate can be made available and could help the field, the unofficial nature of such a website does not easily help or incentivize investigators in terms of publication counts, citations, or other metrics often considered for hiring, tenure, and the like.

Importantly, issues of reproducibility and publication bias can have vast consequences for society and policy, as well as potentially eroding public trust in science and the scientific process. While an extreme case involving falsification, the long-lasting consequences of Andrew Wakefield’s erroneous, retracted, and unreplicated paper linking the MMR vaccine to autism spectrum disorders truly underscore the potential impact of unchecked scientific findings. A much more benign example was profiled in The Atlantic, concerning findings that bilingual individuals show better cognitive performance than monolingual individuals. While many published studies confirmed this effect in various ways, several large studies found no significant difference, and many negative studies went unpublished. Similarly, as detailed in Slate, doubt has recently been cast upon a long line of research examining ego depletion. Failed replications of the ego depletion effect are now coming to a head. This comes after the research was, for example, turned into a book exploring willpower and how individuals can use this science to flex their self-control.

While these findings have not shaped major policy, it is not a far leap to see how difficult it may be to undo the effects of publication biases towards novel but unreplicated research findings on a variety of policies. For example, education research also suffers from replication issues. One study pointed out that replication studies represented less than 1% of published studies in the top 100 education research journals. While many of these replication studies did confirm the original work, most were conceptual rather than direct replications, and replication success was somewhat lower when performed by a third party rather than by teams including authors of the original work. While the relatively high replication success rate is encouraging, the study calls strongly for an increase in the number of replication studies performed.

Despite debates over the extent of the reproducibility problem, it is clear that psychology and science more broadly would benefit from greater attempts to replicate published findings. This will involve large-scale shifts in policies ranging from journal practices to tenure decisions and governmental funding to help alleviate these issues and to support and motivate high quality science and replication of published studies. These efforts will in turn have long-term benefits on the development of policies based on research in education, social psychology, mental health, and many other domains.

Written by sciencepolicyforall

March 30, 2016 at 9:00 am

Posted in Essays


Science Policy Around the Web – February 9, 2016

leave a comment »

By: Cheryl Jacobs Smith, Ph.D.

Photo credit: Contains LEAD via photopin (license)

Health Policy

What the Science Says About Long-Term Damage From Lead

Almost two years after the city of Flint, MI switched from Detroit water to its local water supply, the citizens are finally being listened to in regard to their drinking water. Time and time again, citizens and researchers were ignored when they tried to alert local officials to the poor water quality. Finally, Flint residents and researchers were able to get the message out: not only is the local water undrinkable, it is contaminated with lead.

Lead intoxication, or lead poisoning, does not necessarily lead to seizures, hospitalizations, or other acute medical events. However, health care professionals are still alarmed because lead levels in children have reached 5 micrograms per deciliter (5 ug/dL). The percentage of children in Flint, MI under the age of 5 with lead levels that high has since doubled (from 2.4 percent to 4.9 percent). Furthermore, in the areas with the highest levels of lead, more than 10 percent of children have blood lead levels at least that high.

The most worrisome statistics concern the long-term and lasting effects of lead poisoning. A study published in Pediatrics examining more than 3,400 children in Rhode Island found that children with blood lead levels between 5 and 9 micrograms per deciliter (5-9 ug/dL) fell below reading readiness for kindergarten. Additional studies examining lead levels and child development also report an increased likelihood of engaging in risky behaviors, such as smoking or drinking, at an early age.

Now that attention is centered on Flint, MI and its trouble with lead in the water, focus needs to turn to mitigating any long-term damage children and adults may suffer as a result of lead poisoning. Historically, lead has been used ubiquitously in manufacturing. Beyond pipes, lead has been an additive in gasoline and paint, and has also accumulated in soil. We should take a lesson from Flint and analyze the state of lead poisoning in our own communities. As Aaron E. Carroll comments, “Until we solve the lead problem for good, we may be condemning children to a lifetime of problems.” (Aaron E. Carroll, The New York Times)

Public Health and Infectious Disease

Governor, health officials sued over Ebola quarantines

During the Ebola epidemic in 2014, several people returning to the United States from West Africa were quarantined, meaning they could not resume their normal lives for at least 20 days. Several of them felt the quarantine was akin to imprisonment and have now filed a lawsuit.

The lawsuit was filed by Yale Law School students against Connecticut Governor Dannel P. Malloy and state health officials on behalf of formerly quarantined plaintiffs and plaintiffs still in West Africa. Those who were quarantined claim they had no Ebola symptoms that warranted isolation upon their return. The lawsuit seeks monetary damages and an order preventing similar future quarantines. As one plaintiff put it, "Being quarantined made me feel like a criminal. There was no scientific reason to confine me to my apartment, with no visitors and a police officer parked outside my door."

Like the governors of New York and New Jersey, who imposed quarantines on health workers returning from Ebola-affected areas of West Africa, Governor Malloy adopted the same stringent policies. Not only did health care workers feel undue prejudice and discrimination as a result of traveling to West Africa, but so did Liberians living in Connecticut. It will be interesting to see what the court rules. (Dave Collins, The Washington Post)

Drug Policy

Cancer drug’s usefulness against Alzheimer’s disputed

A 2012 study published in the journal Science reported that bexarotene, an FDA-approved cancer drug, cleared the protein A-beta from the brains of mice, reducing both plaque formation and levels of smaller, soluble forms of the protein. This matters because Alzheimer's disease is characterized by the accumulation of A-beta into brain plaques that degrade brain function. Excitingly, the mice treated with bexarotene showed signs of improved learning and memory, a reversal of Alzheimer's symptoms. However, a year after that work appeared, four reports, also in Science, disputed some of its findings.

In tests on rats, researchers at the pharmaceutical company Amgen found that bexarotene did not reduce levels of plaques or of smaller, soluble forms of A-beta. Dr. Landreth, an author of the original Science paper, countered that the Amgen study did not use a formulation of the drug that would persist in the brain at levels high enough to be effective.

"Larger trials would be more informative," says Landreth, who stands by his group's original findings. "When we published our Science paper, it took us five years and we did the best science we could," Landreth says. "And I am convinced that we are right." (Laura Sanders, Science News)

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

February 9, 2016 at 9:00 am

Science Policy Around the Web – October 31, 2015

leave a comment »

By: Courtney Pinard, Ph.D.

Photo credit: Novartis AG via photo pin cc


How Prevalent is Scientific Bias?

Scientists and clinicians conducting clinical trials must abide by rigorous standards to safeguard against bias. Biomedical animal research has not been held to the same standards, and advocates of robust science argue that this lack of rigor is why more than half of pre-clinical studies are irreproducible. A recent study from the University of Edinburgh in the U.K. shows that animal researchers are not applying the same bias-prevention standards in study design. Such standards include: 1) randomizing animals to groups, to prevent scientists from, for example, assigning unhealthy animals to the control group to inflate a drug's apparent effect in the treatment group; 2) ensuring that researchers are blinded when assessing the outcomes of an experiment; 3) calculating the required sample size before starting an experiment; and 4) disclosing any conflicts of interest. The authors examined 2,500 papers on drug efficacy published between 1992 and 2011, and the results were dismal. Only 30% of papers assessed outcomes in a blinded manner, 25% reported randomizing animals to groups, 12% included a conflict-of-interest statement, and less than 1% reported calculating the needed sample size in advance. When the authors tested whether institutional quality or journal impact factor predicted bias, they found no correlation. The U.K. study is one of many on scientific rigor that have fueled growing concern among scientists and the public about irreproducible results in pre-clinical biomedical research.
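The first three safeguards above are simple enough to automate. Here is a minimal sketch in Python using only the standard library; the effect size, alpha, and power values are illustrative assumptions, not figures from the Edinburgh study, and the normal-approximation formula slightly underestimates the exact t-test answer:

```python
import math
import random
from statistics import NormalDist

def sample_size_per_group(effect_size, alpha=0.05, power=0.80):
    """A priori sample size for a two-group comparison (normal approximation).

    effect_size is the standardized difference (Cohen's d) to be detected.
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

def randomize_and_blind(animal_ids, seed=None):
    """Randomly assign animals to control/treatment and return coded labels.

    The id -> group key is returned separately so it can be held by a third
    party, keeping outcome assessors blinded to group membership.
    """
    rng = random.Random(seed)
    ids = list(animal_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    key = {a: "control" for a in ids[:half]}
    key.update({a: "treatment" for a in ids[half:]})
    coded = {a: f"subject-{i:03d}" for i, a in enumerate(sorted(ids))}
    return coded, key  # assessors see only coded labels

# Detecting a medium effect (d = 0.5) at alpha = 0.05 with 80% power:
n = sample_size_per_group(0.5)  # 63 animals per group under these assumptions
```

Note that larger effects need fewer animals, which is exactly why the calculation must be done in advance: an underpowered study that happens to reach significance is more likely to be reporting noise.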

According to an NIH commentary published last year, the reasons why scientific bias in animal research is so prevalent are complex and have to do with the attitudes of funding agencies, academic centers, and scientific publishers. The commentary's authors, Francis Collins and Lawrence Tabak, discuss these attitudes: "Funding agencies often uncritically encourage the overvaluation of research published in high-profile journals. Some academic [centers] also provide incentives for publications in such journals, including promotion and tenure, and in extreme circumstances, cash rewards."

Given continuing budget constraints and Congress' awareness of the reproducibility problem, national funding agencies have started to act. The NIH, for example, organized a workshop with more than 30 basic and preclinical science journal editors to draft principles and guidelines for enhancing research rigor and reproducibility. One such principle, "Transparency in Reporting," includes the bias-safeguarding standards described above. Strengthening pre-clinical biomedical research will only happen when scientists and policy makers at funding agencies, academic institutions, and journals work together to put these principles into practice, and acknowledge that the "publish or perish" attitude rampant in scientific culture needs to change. The situation and its solution were described succinctly in a recent Nature editorial on cognitive bias: "Finding the best ways to keep scientists from fooling themselves has so far been mainly an art form and an ideal. The time has come to make it a science." (Martin Enserink, ScienceInsider)

Big Data

Proposed Study to Track 10,000 New Yorkers

A newly proposed longitudinal study will attempt to monitor thousands of households in New York City over a span of decades. Information will be gathered in intimate detail about how people in these households lead their lives, including their diet, exercise, social activities and interactions, purchases, education, health measures, and genetics. This ambitious project, called the Kavli Human Understanding through Measurement and Analysis (HUMAN) project, aims to quantify the human condition using rigorous science and big-data approaches to understand what makes us well and what makes us ill. According to project leaders, existing large-scale data sets have provided detailed catalogs of only narrow aspects of human health and behavior, such as cardiovascular health, financial decision-making, or genetic sequencing. By measuring the feedback mechanisms between biology, behavior, and environment over decades, researchers believe that much more will be understood about how these factors interact to determine human health over the life cycle. For example, according to articles written by scientists in support of the project, the new data could measure the impact of cognitive decline on activities of daily living, on family members and caregivers, and on healthcare utilization and end-of-life decisions. A further goal of the project is to provide data that policy makers can use to develop evidence-based public policies.

Anticipating the privacy and cybersecurity concerns inherent in such an invasive study, Kavli HUMAN project researchers have established a Privacy & Security Advisory Council composed of members from the private, public, and academic sectors, including bioethicists and patient-privacy advocates. Project leaders also conducted an opinion survey of a diverse group of Americans, asking whether they 1) think the study should be done, and 2) would be willing to participate. Nearly 80% of respondents thought the study should be done, and more than half were willing to participate. When questions arise about the ethics of collecting such information, Kavli HUMAN project researchers publicly argue that corporations already track Americans' spending habits, location, and use of technology, and that "people's data can be better used to serve them, their communities, and society." (Kelly Servick, ScienceInsider)

Nutrition and Cancer

A Diet High in Red Meat and Processed Meat Increases Risk for Colorectal Cancer

The World Health Organization's International Agency for Research on Cancer (IARC) announced on Monday that processed meat is carcinogenic to humans and that red meat is "probably carcinogenic to humans." Red meat is defined as all types of mammalian muscle meat, such as "beef, veal, pork, lamb, mutton, horse, and goat," and processed meat is defined as meat that "has been transformed through salting, curing, fermentation, smoking, or other processes to enhance flavor or improve preservation." The IARC reviewed 800 studies that examined the association between cancer and consumption of red or processed meat in people around the world, across diverse ethnicities and diets. The analysis revealed that the positive association between red and processed meat consumption and cancer was strongest for colorectal cancer. The Global Burden of Disease Project, an independent academic research organization, estimates that 34,000 cancer deaths per year worldwide are attributable to diets high in processed meat. Studies show that meat-processing techniques and cooking such meat at high temperatures can generate carcinogenic chemicals, and that these compounds appear in parts of the digestive tract. Specifically, the agency's experts concluded that each 50-gram portion of processed meat eaten daily increases the risk of colorectal cancer by 18 percent. Red meat was not as strongly associated with cancer as processed meat. Some public health experts criticized the bravado of the IARC announcement, and in response to public inquiries the agency has published a FAQ page noting that smoking and asbestos are more likely to be causal for lung and other types of cancer. The announcement did not mark a new discovery, since the underlying evidence has been available for several years; it was meant to attract public attention and help countries that look to the WHO for health advice. According to the director of the IARC, "these findings further support current public-health recommendations to limit intake of meat." (NPR; Anahad O'Connor, New York Times)
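To put the 18-percent figure in perspective, it is a relative increase, not an absolute one. A quick back-of-the-envelope calculation in Python illustrates this; note that the log-linear extrapolation to larger portions and the 4% baseline lifetime risk are assumptions chosen for illustration, not numbers from the IARC report:

```python
def relative_risk(grams_per_day, rr_per_50g=1.18):
    """Relative risk of colorectal cancer for a given daily processed-meat intake.

    Assumes the reported figure (18% per 50 g eaten daily) extrapolates
    log-linearly, i.e. risk multiplies by 1.18 per additional 50 g/day.
    """
    return rr_per_50g ** (grams_per_day / 50)

# Hypothetical 4% baseline lifetime risk, purely for illustration:
baseline = 0.04
one_portion = baseline * relative_risk(50)    # 0.0472, i.e. 4% -> ~4.7%
two_portions = baseline * relative_risk(100)  # ~5.6% under the same assumptions
```

The point: an 18% relative increase applied to a small baseline risk is a modest absolute change, which is consistent with the agency's FAQ noting that smoking and asbestos pose far larger cancer risks.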

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

October 31, 2015 at 9:00 am