Science Policy For All

Because science policy affects everyone.

Archive for March 2016

Psychology’s Reproducibility Crisis


By: David Pagliaccio, Ph.D.

Photo source: pixabay.com

Findings from the collaborative Reproducibility Project, coordinated by the Center for Open Science, were recently published in the journal Science. The report caused a stir in both the scientific and lay communities. The study summarized an effort to replicate 100 studies previously published in three major psychology journals. Only around one third of the replication attempts yielded significant results, and the effect sizes observed in the replications were, on average, about half those of the original reports. The study was quickly met with statistical and methodological critique, which was in turn criticized itself. With the concerns raised by the Reproducibility Project and the intense debate in the field, various organizations and media outlets have begun to spotlight this issue, noting that psychology may be experiencing a “reproducibility crisis.”

The authors of this study importantly indicted the systemic forces that incentivize “scientists to prioritize novelty over replication” and journals for disregarding replication as unoriginal. Typically, a scientist’s career is most easily advanced through high-profile, highly cited publications in well-respected journals. These rarely include replication attempts, which are generally dismissed as not novel and as not progressing the field. Further, statistically significant results are prized while null results are often chalked up to insufficient power and are not given a forum to help shape the literature. These factors produce a large publication bias towards often underpowered but significant findings, a rampant ‘file drawer problem’ in which non-significant results cannot be published, and what is known as “p-hacking”, where authors analyze and reanalyze a given dataset in different ways to push a desired result towards significance.
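
To make the mechanics of p-hacking concrete, here is a minimal simulation sketch (not drawn from the article; the sample size, the five outcome measures, and the true effect of exactly zero are illustrative assumptions). It shows how reporting only the best of several analyses of null data inflates the false-positive rate well beyond the nominal 5%.

```python
# Minimal illustration of p-hacking: each simulated study compares two groups on
# several outcome measures even though the true effect is zero for all of them.
# Reporting the smallest p-value ("the analysis that worked") inflates the
# false-positive rate far above the nominal 5%.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def smallest_p_in_null_study(n_subjects=30, n_outcomes=5):
    """Run one null study and return the best (smallest) of its outcome p-values."""
    group_a = rng.normal(0.0, 1.0, size=(n_subjects, n_outcomes))
    group_b = rng.normal(0.0, 1.0, size=(n_subjects, n_outcomes))
    p_values = [stats.ttest_ind(group_a[:, i], group_b[:, i]).pvalue
                for i in range(n_outcomes)]
    return min(p_values)

n_studies = 5000
false_positives = sum(smallest_p_in_null_study() < 0.05 for _ in range(n_studies))
print(f"'Significant' null studies: {false_positives / n_studies:.1%}")
# With 5 independent looks at the data, this lands near 1 - 0.95**5, roughly 23%.
```

Under these assumptions, roughly one in four studies with no true effect still yields a “significant” result to report, which is one reason flexible analysis of underpowered studies can fill a literature with findings that later fail to replicate.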

Several initiatives have been put forth to try to alleviate this “reproducibility crisis.” For example, the American Statistical Association released a statement on the use of p-values in scientific research. This served both as a general clarification of the use and interpretation of p-values in null hypothesis significance testing and as an impetus to include measures beyond p-values in our understanding of effect size. This kind of reframing helps ensure good statistical practice and resists the tendency to interpret the arbitrary statistical threshold of p<0.05 as a marker of truth, a tendency that often biases scientific findings and reporting. Additionally, the National Institutes of Health (NIH) has adopted new guidelines for grant submissions intended to enhance rigor and reproducibility in science, for example by increasing transparency and setting basic expectations for biological covariates. Yet the main investigator-initiated research grant from the NIH, the R01, still includes novelty as a main scoring criterion and has no specific provisions for funding replication studies.

Publication-related initiatives have also begun to emerge to help incentivize replication. For example, the Association for Psychological Science has created the Registered Replication Report, in which, before data collection even begins, scientists can pre-register a replication study to be published in Perspectives on Psychological Science. This spares scientists the struggle of publishing a direct replication and reframes the focus of replication away from whether a prior study was ‘true’ or ‘false’ and toward the cumulative effect size across studies. While this is a step forward, few have yet made use of the opportunity. Importantly, while the rare journal, such as PLOS ONE, explicitly states that it accepts replication submissions, top-tier journals have generally not allowed registered replications or created submission formats for replications that would otherwise not be considered ‘novel.’ Other avenues for addressing the issue have also begun to spring up; for example, the website www.PsychFileDrawer.org was created as an archive of attempts to replicate psychology studies. While this provides a way to publicize failures to replicate that might otherwise go unpublished, these reports do not currently appear to be indexed by standard databases, like PubMed or PsychNet. Thus, while more failures to replicate can be made available and could help the field, the unofficial nature of the website does little to help or incentivize investigators in terms of publication counts, citations, or the other metrics often considered for hiring, tenure, and promotion.
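
For readers unfamiliar with what “cumulative effect size across studies” means in practice, the fragment below is a minimal sketch of fixed-effect meta-analytic pooling (the effect sizes and variances are invented for illustration and are not taken from any registered replication report).

```python
# Fixed-effect (inverse-variance) pooling: each study's effect estimate is weighted
# by the reciprocal of its sampling variance, so precise studies count for more and
# the focus shifts from any single study's p-value to the combined estimate.
effects = [0.45, 0.10, 0.22, -0.05]   # hypothetical standardized effect sizes
variances = [0.04, 0.02, 0.03, 0.02]  # hypothetical sampling variances

weights = [1.0 / v for v in variances]
pooled_effect = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = (1.0 / sum(weights)) ** 0.5

print(f"pooled effect = {pooled_effect:.3f} +/- {1.96 * pooled_se:.3f} (95% CI)")
```

The point of pre-registered replications is that each new study simply adds another weighted term to a pooled estimate like this one, rather than delivering a binary verdict on the original finding.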

Importantly, issues of reproducibility and publication bias can have vast consequences for society and policy, and can erode public trust in science and the scientific process. Though an extreme case involving falsification, the long-lasting consequences of Andrew Wakefield’s erroneous, retracted, and unreplicated paper linking the MMR vaccine to autism spectrum disorders underscore the potential impact of unchecked scientific findings. A far more benign example was profiled in The Atlantic, concerning findings that bilingual individuals show better cognitive performance than monolingual individuals. While many published studies confirmed this effect in various ways, several large studies found no significant difference, and many negative studies went unpublished. Similarly, as detailed in Slate, doubt has recently been cast on a long line of research examining ego depletion. Failed replications of the ego depletion effect are now coming to a head, after the research had already been turned into a book exploring willpower and how individuals can use this science to strengthen their self-control.

While these findings have not shaped major policy, it is not a far leap to see how difficult it may be to undo the effects of publication bias towards novel but unreplicated research findings across a variety of policy areas. For example, education research also suffers from replication issues. One study pointed out that replication studies represented less than 1% of articles published in the top 100 education research journals. While many of these studies did replicate the original work, most were conceptual rather than direct replications, and replication success was somewhat lower when performed by a third party than when the original authors were involved. While the relatively high replication rate is encouraging, the study calls strongly for an increase in the number of replication studies performed.

Despite debates over the extent of the reproducibility problem, it is clear that psychology, and science more broadly, would benefit from greater attempts to replicate published findings. This will require large-scale shifts in policies, from journal practices to tenure decisions to governmental funding, to support and motivate high-quality science and the replication of published studies. These efforts will in turn have long-term benefits for the development of policies based on research in education, social psychology, mental health, and many other domains.


Written by sciencepolicyforall

March 30, 2016 at 9:00 am

Posted in Essays


Science Policy Around the Web – March 29, 2016


By: Thaddeus Davenport, Ph.D.

Source: Ashley Fisher / Flickr

Modernizing Scientific Publishing

Handful of Biologists Went Rogue and Published Directly to Internet

Peer-reviewed scientific journals are essential for science. They motivate and reward high-quality experimental design and facilitate the dissemination of knowledge that drives innovation. A recent article in the New York Times nicely captures some of the complexity of modern scientific publishing by examining a recent push by some researchers to publish their findings directly to ‘preprint’ servers – a practice already common in physics and mathematics.

Preprint publishing has the potential to significantly speed up publication, allowing faster and wider dissemination of ideas in a free, modern digital forum. Some researchers worry that bypassing the traditional peer-review process might eventually erode the quality of research. However, it could be argued that as long as articles posted to preprint servers are treated as preliminary findings (as, perhaps, we should treat all findings published in even the highest-tier journals), the online forum has the potential to be a more transparent and robust form of peer review than the current model, in which a small number of anonymous reviewers decide the value of research.

The article notes other potential hurdles to the widespread adoption of preprint publishing that are deeply embedded in the culture of research. For example, papers are the currency of science. If authors bypassed this system, they would also bypass the possibility of attaining the classic badges of honor associated with publishing in high tier journals, potentially decreasing their competitiveness when applying for jobs and grants.

A change in publishing practices will also, likely, need to coincide with a change in the culture and value system of scientific research, but it is exciting to watch publishing move into the modern world. Scientific progress thrives on new ideas, and the resources of the digital age have the potential to broaden the reach of ideas and to increase the speed of their communication. (Amy Harmon, New York Times)

Economic Policies

A “Circular Economy” to Reduce Waste and Increase Efficiency

Our current economy can largely be described as a linear flow of material in which natural resources are harvested, combined, refined, and converted into products. These products are purchased and, after some amount of use, ultimately recycled or discarded at the discretion of the owner. In a Nature special this week, Walter R. Stahel describes the potential economic and environmental benefits of a different sort of economy – a “circular economy” – that “replaces production with sufficiency” by encouraging reuse, repair, remanufacturing, and recycling over new production.

Originally conceived by Stahel and his colleague Geneviève Reday-Mulvey in the 1970s, the concept of a circular economy “grew out of the idea of substituting manpower for energy.” For example, Stahel observed that it requires “more labour and fewer resources to refurbish buildings than to erect new ones.” Applying this model to all products has the potential to reduce greenhouse gas emissions substantially and expand the workforce because “remanufacturing and repair of old goods, buildings and infrastructure creates skilled jobs in local workshops.”

To support a transition to a more circular economy, Stahel recommends – among other things in his article –  a change in the way economic success is measured. Rather than trying to maximize our gross domestic product (GDP), a measure of the flow of resources, perhaps we should attempt to optimize the “value-per-weight” or “labor-input-per-weight” of the manufactured products. Policies and tax structures designed to maximize these economic indicators might be effective in encouraging stewardship of the earth’s limited resources and cultivating job growth. (Walter R. Stahel, Nature News)

A Second Chance for Grants

New funding matchmaker will cater to NIH rejects

The majority of NIH grant applications do not receive funding, not necessarily because the applications are of poor quality, but because there are simply more good ideas than the government has the capacity to support. A recent news article in Science by Kelly Servick describes a pilot program started earlier this month by the NIH, in collaboration with Leidos, to address this funding gap.

The program, known as OnPAR, aims to establish a more open market in which NIH grant applications that score well (within the thirtieth percentile) but do not receive funding are made available to private organizations and funding agencies for consideration. This system could be of substantial benefit to grant writers, increasing the efficiency of grant writing and review by allowing the “recycling” of grants and their associated peer reviews, which are expensive to produce in terms of time and energy, and thus money.

Funding agencies may see value in this program through expanded access, possibly finding themselves in the position to fund and motivate inquiry for researchers who may not have applied to their organization directly. However, private funding agencies are often in a position similar to that of the federal government – they receive more good applications than they have resources to support, and Servick notes that “the success of the project will hinge on whether private funders see value in using OnPAR in addition to their existing grant review process.”

If funders do find value in OnPAR, it is conceivable that they might allocate a percentage of their annual budget for OnPAR grants. Time will reveal the ultimate value of OnPAR, but it is a step in the right direction. How else might we increase the efficiency of the scientific production cycle? (Kelly Servick, Science News)

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

March 29, 2016 at 10:00 am

Science Policy Around the Web – March 25, 2016


By: Nivedita Sengupta, Ph.D.

Photo source via pixabay

Genetically Engineered Foods

Policy: Reboot the debate on genetic engineering

Genetic engineering (GE) is a highly controversial topic today, partly because of its increasing impact on day-to-day life. In recent years, a great deal of progress has been made in the field, as demonstrated by the development of sophisticated modern tools like CRISPR. This has led to increasing public concern about GE and food safety laws.

One of the issues with respect to food safety laws has been determining whether regulatory policies should focus on the process by which GE organisms are made or on the GE products themselves. Most people in favor of product-based regulation believe that GE organisms are no different from conventionally bred organisms. In the United States, GE products have been overseen since the mid-1980s by the Coordinated Framework for Regulation of Biotechnology (CFRB). According to the CFRB, product-based regulation is the science-based approach, and hence GE organisms could be covered by existing policies without any need to formulate new laws; they could simply be channeled to particular government agencies depending on the category they fell into.

However, in the process of regulating GE products, the agencies realized that the process of engineering matters as well. The agencies recognized that, from a scientific standpoint, a product’s traits, harmful or beneficial, depend on the process by which it is made. For example, in human gene-therapy trials, new methods for delivering genes have removed the need for potentially harmful viral vectors. Thus, product and process issues are not distinct in regulation. Though regulating GE products rather than processes is accepted in many countries beyond the United States, others, like Brazil and Australia, have laws that mandate regulation of the mechanisms by which GE products are developed.

The inconsistency of views among GE developers and regulators in the product-versus-process argument demands a fresh start on formulating regulatory policies for GE. It is time to consider a mix of product and process issues in order to identify the product groups that are likely to be of concern and to require regulation. These efforts should keep in mind the polarization of product-versus-process and science-versus-values framings, so that the government can build a system informed by science as well as by the concerns and values of citizens. (Jennifer Kuzma, Nature Comment)

Infectious Diseases

Dengue vaccine aces trailblazing trial

Vaccine development is a long and complex process, and a vaccine can take decades to become available for clinical use. Scientists at the Johns Hopkins Bloomberg School of Public Health in Baltimore, Maryland, have developed what may be the most potent vaccine available to date for preventing dengue infections. These researchers employed a ‘human challenge’ strategy during the development and testing of the vaccine, a method that fell out of favor during the last century. A human challenge involves deliberately infecting healthy volunteers with a weakened form of the disease-causing virus. Concerns about the safety of deliberately infecting people have limited the use of human challenge studies, and researchers usually test developing vaccines on people who are already at risk of contracting the disease of interest.

The dengue virus is a difficult vaccine target because it has four serotypes. Infection with one serotype renders a person immune to that type for life but offers no protection against the others, and it may also increase the risk of hemorrhagic fever upon exposure to a different dengue serotype. The current vaccine study tested only dengue serotype 2, the most virulent serotype. Twenty-one volunteers were injected with the experimental vaccine and 20 with a sham vaccine. Six months later, all 41 volunteers were injected with a weakened version of the dengue virus that causes symptoms similar to a mild dengue infection, such as rash. The vaccine provided 100% protection against the challenge; only the individuals who received the sham vaccine showed mild symptoms, with 80% of them developing a rash.

Because the current dengue vaccines protect only a proportion of recipients, if these results hold up in larger populations this could be one of the most promising dengue vaccines yet developed. “This is a tremendous step forward, and something that has been desperately needed for 30 years,” says Duane Gubler, a disease researcher at the Duke NUS Medical School in Singapore who was not involved in this study. He also noted that the lack of human challenge studies is one of the things that has made the development of dengue vaccines so difficult. Scott Halstead, a virologist and vaccinologist at the Uniformed Services University of the Health Sciences in Bethesda, Maryland, stated that “this is an incredible paper that shows what is absolutely necessary to develop a vaccine against the dengue virus. It’s a really important demonstration of the kind of proof that you really need to have before you spend US$1.5 or 2 billion on a phase III [efficacy] trial.”

Meanwhile, investigators have already begun a second human-challenge study to test whether the vaccine protects against dengue serotype 3, and they hope to go on to test it against serotypes 1 and 2 using the human challenge strategy. They also intend to use human-challenge studies to develop a vaccine against Zika virus, which is related to dengue. Though scientists are enthusiastic about using the human challenge strategy to develop vaccines in the near future, doing so demands reconsideration of current policies and of the past incidents on which those policies are based. (Erika Check Hayden, Nature News)

Federal Science Funding

Biological specimen troves threatened by funding pause

Collecting biological specimens is an essential part of science and conservation; collections are used to identify species, track diseases, and study climate change. One important example is the collection of fish samples at the Burke Museum of Natural History and Culture in Seattle, which serves as the US National Oceanic and Atmospheric Administration’s (NOAA) repository for the North Pacific. NOAA uses the specimens collected each year to assess fish abundance and set fishing quotas for species conservation. In another case, a collection of eggs held by the Field Museum in Chicago led to the famous conservation discovery that the pesticide DDT caused widespread nesting failures in birds of prey, resulting in the near extinction of several species.

Despite their value to science, biological specimen collections recently lost a valuable source of funding and support. The US National Science Foundation (NSF) announced that it would indefinitely suspend a program that provides funding to maintain biological specimen collections. The NSF will maintain its current grants but will not accept any new proposals. Many researchers and curators find this disheartening and worrying, because the NSF is one of the only public providers of such funding, even though only roughly 0.06% of the agency’s $7.5-billion budget is allocated to maintaining biological specimen collections. According to the NSF, it is soliciting feedback on the program and evaluating the current grants in the collections program; decisions will depend on the results of that evaluation, and it remains unclear whether the funding hiatus is temporary or permanent.

This pause, however, has scientists dismayed, given the importance of these scientific collections. As mentioned above, preserved specimens play an immense role in understanding the historic ranges of species and provide information on species invasion and extinction. Biological collections also help researchers track species carrying human diseases in order to contain outbreaks. Moreover, as technology advances, specimens can be put to uses that were never anticipated when they were collected. For example, DNA sequencing of museum specimens collected before DNA identification existed has helped identify previously unknown species. With the sudden change in funding options, many museums are considering digitizing their collections, and indeed the NSF’s program to support digitization remains unchanged. But “there’s no point digitizing if we don’t take care of the collections themselves”, says Barbara Thiers, director of the William and Lynda Steere Herbarium at the New York Botanical Garden. “You certainly can’t get any DNA out of an image.” (Anna Nowogrodzki, Nature News)

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

March 25, 2016 at 9:00 am

Science Policy Around the Web – March 22, 2016


By: Emily Petrus, Ph.D.

Forensic science

Forensics gone wrong: When DNA snares the innocent

The 2015 TV series “Making a Murderer” has shed light on a disturbing issue in criminal justice – the dubious results of forensic scientists and the tests used to convict suspects. Though sensational, this is not a new problem in the field of forensic science. In 2012, a forensic scientist in Boston was arrested for tampering with evidence and recording positive tests for substances (such as drugs or blood) to ensure convictions. Other examples of forensic mismanagement include the Amanda Knox trial in which evidence was mishandled from the crime scene to the lab to the courtroom. Although Knox is likely innocent, the Italian justice system used poorly executed forensic “evidence” to keep her in jail for four years.

DNA evidence is now considered the gold standard in the courtroom, but before sequencing strategies were available, scientists often relied on microscopic characteristics of hair to make positive identifications. Just last month Santae Tribble, who served 28 years in jail for a murder he did not commit, was awarded $13.2 million. He had been convicted on the basis of a hair said to match his, with a claimed one in 10 million chance that it belonged to somebody else. DNA analysis has since shown that the stocking used to cover the murderer’s face contained hair from three other individuals and one dog – but none from Tribble.

What does this mean for policy? Although advances in forensics enable our justice system to link suspects to crimes in ways not previously possible, there must be more oversight of how the analyses are performed. We need more controls, blind testing, and supervisory oversight in crime labs. Additionally, new technology must be rigorously validated to ensure that detections and genetic analyses are accurate. For example, 13 positions in the genome (loci) are currently used to determine whether there is a genetic match, but the FBI will soon require analysis of 20 loci. This increases the discriminating power of genetic tests, and presumably the quality of evidence at trials. (Douglas Starr, Science News)
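
To illustrate why moving from 13 to 20 loci matters, here is a rough back-of-the-envelope sketch (the per-locus genotype frequency of 0.1 is an assumed round number, not an actual CODIS statistic): the probability of a coincidental match is roughly the product of the genotype frequencies at each independent locus, so it shrinks rapidly as loci are added.

```python
# Rough illustration: random match probability falls multiplicatively with each
# additional independent locus. The per-locus genotype frequency here (0.1) is an
# assumed round number for illustration only.
def random_match_probability(n_loci, per_locus_genotype_freq=0.1):
    return per_locus_genotype_freq ** n_loci

for n_loci in (13, 20):
    rmp = random_match_probability(n_loci)
    print(f"{n_loci} loci: about 1 in {1 / rmp:.1e}")
# With these illustrative numbers, 20 loci make a coincidental match roughly ten
# million times less likely than 13 loci do.
```

Real forensic calculations use measured allele frequencies and corrections for population structure, so the actual figures differ, but the multiplicative logic is the same.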

Infectious Diseases

Dengue Fever Vaccine is Effective – What About Zika?

Global warming gives us many reasons to lose sleep, including rising sea levels, mass extinctions, and dwindling resources, but perhaps the most immediately frightening is the increased prevalence of diseases spread by our least favorite organism, the mosquito. With the advent of summer we can expect more people to suffer from mosquito-borne pathogens such as Zika virus, the malaria parasite, and dengue virus. Pathogens transmitted by mosquitoes or between humans are nothing new, but global warming has expanded the territory in which these mosquitoes can be found. This increased threat to the United States population has made the production of vaccines an urgent priority for American scientists.

New hope is on the horizon, as scientists have demonstrated that a new vaccine for dengue fever is 100% effective. This brings optimism for more quickly producing vaccines for Zika and other mosquito-borne viruses, as the technology behind the successful dengue fever vaccine can be translated to other diseases. Scientists are characteristically hesitant to commit to a timeline for a Zika vaccine, with some predicting years before a safe and effective one is available. However, with the summer Olympics in Brazil just a few months away, now is the time for major funding and initiatives to produce a Zika vaccine for Brazilians and those traveling to the games. (http://www.RT.com)

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

March 22, 2016 at 9:00 am

Science Policy Around the Web – March 11, 2016


By: Sophia Jeon, Ph.D.

Photo source: pixabay.com

Patent law and Intellectual Property

Accusations of errors and deception fly in CRISPR patent fight

Clustered regularly-interspaced short palindromic repeats, better known as CRISPR, is getting a lot of attention as a promising molecular engineering technique that can easily edit genes in the laboratory and, potentially, for therapeutic use. Last year, Chinese researchers used the technique in human embryos, raising serious ethical concerns. Designing your own pets or human babies probably won’t happen in the immediate future, but before CRISPR can even be considered for commercial use, two research teams, at UC Berkeley and at the Broad Institute, will have to settle the issue of who gets to benefit financially from its use.

In May 2012, a team led by UC Berkeley’s Jennifer Doudna submitted a patent application for CRISPR-Cas9 technology. Several months later, in December 2012, Feng Zhang’s research team at the Broad Institute also filed for a patent, and it ended up receiving the patent before the Berkeley team because it used an expedited review program. The Berkeley team requested a patent interference, which will determine who actually invented the technology first. The issue is complicated by the fact that in March 2013, U.S. patent law switched from a system that awarded patents to whoever invented first to one in which whoever files first gets the patent.

So how does one go about proving that someone invented or thought of something first, especially in this age of open-access journals and public data sharing? The investigation could be messy and could take months, or even years. Both sides seem to have a number of strategies to weaken each other’s arguments, from revealing mistakes in the application process to pointing fingers at insufficient data or misrepresented information in the applications. Patent fights like this are not rare for biotechnologies with commercial potential (e.g. the recent lawsuit between Oxford Nanopore Technologies and Illumina, Inc. over DNA sequencing technology), but it is interesting to see such a huge legal dispute between researchers from academia. (Kelly Servick, ScienceInsider)

Abortion law and Social Science

The Return of the D.I.Y. Abortion

In recent years, abortion clinics have been vanishing from certain states (e.g. Texas, Mississippi, Missouri, North Dakota, South Dakota, Wyoming, and Florida) at a record pace. Many of those clinics are Planned Parenthood facilities, and the closures are partially due to bills defunding Planned Parenthood and other abortion restrictions in those states. The more important question, however, is whether these restrictions have actually resulted in lower abortion rates. Social scientists and health experts say there are multiple factors to consider. Some argue that abortion rates were falling even before clinic closings accelerated, due to increasing acceptance of single motherhood, the recession, and more effective use of birth control.

How does law affect public health, or more specifically, personal decisions regarding women’s bodies? Does limited access to abortion clinics make women turn to alternative methods such as self-induced abortion? It turns out that Google searches may provide some insight. Because there are no surveys large enough to track behavior in different states, and because surveys often don’t tell the real story (since people can lie), Seth Stephens-Davidowitz used Google searches to look for a correlation between the number of abortion clinics and interest in self-induced abortion. Sadly, the search terms he found related to self-induced abortion indicated that women might be driven to risky methods such as purchasing abortion pills online, punching one’s stomach, bleaching one’s uterus, or abortion using a coat hanger.

A previous study found that a vast majority of women would be willing to travel to other states with legal abortion if needed. However, underage girls or low-income women with unwanted pregnancies could be googling for, and trying, alternative abortion methods that could lead to adverse health outcomes. This June, the Supreme Court is expected to decide whether a Texas law that restricts access to abortion clinics places an “undue burden” on women’s right to abortion. The justices should make decisions based on hard evidence and well-balanced research. The Google search study has limitations, as it is difficult to determine the women’s health outcomes or whether they actually carried out abortions, but it is one way to examine human behavior and how law can affect public health. (Seth Stephens-Davidowitz, New York Times)

Clinical Trials and Data Sharing

STAT investigation sparked improved reporting of study results, NIH says

The results of clinical trials are required by federal law to be publicly reported on clinicaltrials.gov at the end of the trial. The goal is to promote transparency in clinical research, to share data among the research community and physicians, and to empower patients by returning the results to participants. However, according to a 2014 analysis published in JAMA, “a recent analysis of 400 clinical studies revealed that 30% had not shared results through publication or through results reporting in ClinicalTrials.gov within 4 years of completion.”

Last December, STAT conducted an extensive investigation of clinical trials led by companies, universities, hospitals, and even the NIH, to determine who actually reported their findings and how long after study completion. Many top research institutions failed to report on time, and the federal government has not imposed fines on a single trial, which NIH director Francis Collins called “very troubling.” Possible reasons for delays in reporting include investigators continuing to analyze data, which can take a long time even after a trial has ended; investigators waiting until they publish their findings in a peer-reviewed journal; and, in some cases, drug companies intentionally hiding negative results. Whatever the reason, there should be consequences for withholding data that could be useful for doctors and patients.

The STAT investigation named names, and it seems to have worked. Data released by the NIH showed that between December 2015 and January 2016 there was a 25 percent rise in new submissions and a 6 percent increase in reporting of corrected results for trial findings that had previously been submitted. Deborah Zarin, director of ClinicalTrials.gov, said the agency’s own outreach to researchers and training efforts are paying off as well. The NIH is currently developing a new policy to clarify, expand, and enforce the requirements for clinical trial registration and results submission. (Charles Piller, STATnews)

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

March 11, 2016 at 9:00 am

Broadening the Debate: Societal Discussions on Human Genetic Editing


By: Courtney Pinard, Ph.D.

Licensed via Creative Commons

In one of the most impressive feats of synthetic biology so far, researchers have harnessed the ability of bacteria to fight and destroy viruses, and have been able to precisely and cheaply edit genetic code using a genetic technology called clustered, regularly-interspaced short palindromic repeats (CRISPR) and CRISPR-associated endonuclease protein 9 (Cas9). CRISPR has been used to find and detect mutations related to some of the world’s most deadly diseases, such as HIV and malaria. Although CRISPR holds great promise for treating disease, it raises numerous bioethical concerns, which were sparked by the first report of deliberate editing of the DNA of human embryos by Chinese researchers. Previous blog posts have described scientific discussion surrounding the promise of CRISPR. At least three scientific research papers per day are published using this technique, and biotech companies have already begun to invest in CRISPR to modify disease-related genes. However, the use of CRISPR, or any genetic editing technology, to permanently alter the genome of human embryos is an issue of concern to a much broader range of stakeholders, including clinicians, policymakers, international governments, advocacy groups, and the public at large. As CRISPR moves us forward into the realm of the newly possible, the larger global, social and policy implications deserve thorough consideration and discussion. Policies on human genetic editing should encourage extensive international cooperation, and require clear communication between scientists and the rest of society.

There is no question that CRISPR has the potential to help cure disease, both indirectly and directly. CRISPR won Science’s Breakthrough of the Year for 2015 in part for the creation of a “gene drive” designed to reprogram mosquito genomes to eliminate malaria. Using CRISPR-Cas9 technology, investigators at the Universities of California (UC) have engineered transgenic Anopheles stephensi mosquitoes to carry an anti-malaria parasite effector gene. This genetic tool could help wipe out the malaria pathogen within a targeted mosquito population by spreading the dominant malaria-resistance gene to 99.5% of progeny. The gene-snipping precision of CRISPR could also treat certain genetic diseases directly, such as certain cancers and sickle cell disease. CRISPR can even be used to cut HIV out of the human genome and prevent subsequent HIV infection.
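
To see why a gene drive spreads so much faster than an ordinary Mendelian trait, here is a simplified simulation sketch (the 99.5% transmission rate comes from the article; the population size, number of released carriers, random mating, and non-overlapping generations are illustrative assumptions).

```python
# Simplified gene drive model: instead of the usual 50% chance of passing an allele
# on, a drive allele in a heterozygote converts the other chromosome, so ~99.5% of
# offspring inherit it (the transmission rate reported for the engineered mosquitoes).
# Population size, starting frequency, and random mating are illustrative assumptions.
import random

def next_generation(population, drive_transmission=0.995, pop_size=1000):
    """population is a list of genotypes: 'DD', 'Dd', or 'dd' (D = drive allele)."""
    def gamete(genotype):
        if genotype == "DD":
            return "D"
        if genotype == "dd":
            return "d"
        # Heterozygote: the drive copies itself onto the wild-type chromosome
        return "D" if random.random() < drive_transmission else "d"

    offspring = []
    for _ in range(pop_size):
        mom, dad = random.choice(population), random.choice(population)
        offspring.append("".join(sorted(gamete(mom) + gamete(dad))))
    return offspring

population = ["Dd"] * 20 + ["dd"] * 980  # release 20 heterozygous carriers among 1000
for generation in range(1, 11):
    population = next_generation(population)
    drive_freq = sum(g.count("D") for g in population) / (2 * len(population))
    print(f"generation {generation}: drive allele frequency = {drive_freq:.2f}")
```

Under these assumptions the drive allele climbs from about 1% toward near fixation within roughly ten generations, whereas an ordinary allele released at the same frequency would simply hover near 1%; this is what makes the approach so promising for suppressing malaria transmission.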

There are limitations to CRISPR, including the possibility of off-target genetic alterations and unintended consequences of on-target alterations. For example, the embryos used in the Chinese study described above were non-viable, fewer than 50% were edited, and some embryos started to divide before the edits were complete. Within a single embryo, some cells were edited while others were not. In addition, the researchers found a lack of specificity: the target gene was inserted into the DNA at the wrong locus. Little is known about the physiology of cells and tissues that have undergone genome editing, and there is evidence that complete loss of a gene could lead to compensatory adaptation in cells over time.

Another issue of concern is that CRISPR could lead scientists down the road to eugenics. On May 14, 2015, Stanford’s Center for Law and the Biosciences and Stanford’s Phi Beta Kappa Chapter co-hosted a panel discussion on editing the human germline genome, entitled Human Germline Modification: Medicine, Science, Ethics, and Law. Panelist Marcy Darnovsky, from the Center for Genetics and Society, called human germline modification a society-altering technology because of “the potential for a genetics arms race within and between countries, and a future world in which affluent parents purchase the latest upgrades for their offspring.” Because of its potential for dual use, genetic editing was even recently characterized as a potential weapon of mass destruction.

In response to ethical concerns, CRISPR co-inventor Dr. Jennifer Doudna called for a self-imposed temporary moratorium on the use of CRISPR in germline cells. Eighteen scientists, including two Nobel Prize winners, agreed on the moratorium, and policy recommendations were published in the journal Science. In addition to a moratorium, the recommendations include continuing research on the strengths and weaknesses of CRISPR, educating young researchers about them, and holding international meetings with all interested stakeholders to discuss progress and reach agreements on dual use. Not all scientists support such recommendations. Physician and science policy expert Henry Miller disagrees with a moratorium, arguing that it is unfair to restrict the development of CRISPR for germline gene therapy because doing so would deny families cures for monstrous genetic diseases.

So far, the ethical debate has been mostly among scientists and academics. In her article published last December in The Hill Congress Blog, Darnovsky asks: “Where are the thought leaders who focus, for example, on environmental protection, disability rights, reproductive rights and justice, racial justice, labor, or children’s welfare?” More of these voices will be heard as social and policy implications catch up with the science.

In early February, the National Academy of Sciences and National Academy of Medicine held an information-gathering meeting to determine how American public attitudes and decision making intersect with the potential for developing therapeutics using human genetic editing technologies. The Committee’s report on recommendations and public opinion is expected later this year. One future recommendation may be to require Food and Drug Administration (FDA) regulation of genetic editing technology as a part of medical device regulation. Up until recently, the FDA has been slow to approve gene therapy products. Given the fast pace of CRISPR technology development, guidelines on dual use, as determined by recommendations from the National Academies, should be published before the end of the year. So far, U.S. guidelines call for strong discouragement of any attempts at genome modification of reproductive cells for clinical application in humans, until the social, environmental, and ethical implications are broadly discussed among scientific and governmental organizations.

International guidelines on the alteration of human embryos are absolutely necessary to help regulate genetic editing worldwide. According to a News Feature in Nature, many countries, including Japan, India, and China, have no enforceable rules on germline modification. Four laboratories in China, for example, continue to use CRISPR in non-viable human embryonic modification. Societal concerns about designer babies are not new. In the early 2000s, a Council of Europe Treaty on Human Rights and Biomedicine declared human genetic modification off-limits. However, the U.K. now allows the testing of CRISPR on human embryos.

In a global sense, employing tacit science diplomacy to developments in synthetic biology may mitigate unethical use of CRISPR. Tacit science diplomacy is diplomacy that uses honesty, fairness, objectivity, reliability, skepticism, accountability, and openness as common norms of behavior to accomplish scientific goals that benefit all of humanity. The National Science Advisory Board for Biosecurity (NSABB) is a federal advisory committee that addresses issues related to biosecurity and dual use research at the request of the United States Government. Although NSABB only acts in the U.S., the committee has the capacity to use tacit science diplomacy by providing guidance on CRISPR dual use concerns to both American citizen and foreign national scientists working in the U.S.

Under tacit science diplomacy, scientific studies misusing CRISPR would be condemned in the literature, in government agencies, and in diplomatic venues. Tacit science diplomacy was used when the Indonesian government refused to give the World Health Organization (WHO) samples of the bird flu virus, which temporarily prevented vaccine development. After five years of international negotiations on this issue, a preparedness framework was established that encouraged member states to share vaccines and technologies. A similar preparedness framework could be developed for genetic editing technology.

Institutional oversight and bioethical training for the responsible use of genetic editing technology are necessary, but not sufficient on their own. Tacit science diplomacy can help scientists working in the U.S. and abroad develop shared norms. Promoting international health advocacy and science policy discussions on this topic among scientists, government agencies, industry, advocacy groups, and the public will be instrumental in preventing unintended consequences and dual use of genetic editing technology. 

Written by sciencepolicyforall

March 9, 2016 at 9:01 am

Science Policy Around the Web – March 8, 2016


By: Swapna Mohan, DVM, Ph.D.

Kris Krüg via Photo Pin cc

Public Health Surveillance

Mystery cancers are cropping up in children in aftermath of Fukushima

After the Fukushima Daiichi Nuclear Power Plant accident in Japan in 2011, a swift and efficient evacuation and containment plan ensured that human suffering was kept to a minimum. This included more thorough population surveillance for thyroid problems in Fukushima residents under the age of 18. However, this thyroid screening of children and teens in the months that followed showed an unexpectedly high rate of thyroid-related cancers. Anti-nuclear-power activists concluded that this is the result of inhaled and ingested radioactivity from the Fukushima incident. However, scientists unequivocally disagree and stress that a majority of the thyroid abnormalities did not result from radiation exposure. Others suggest the finding might be a result of overdiagnosis by public health officials. Since there are no baseline data from before the incident, it is impossible to verify whether the high rate of cases is a direct result of radiation or simply reflects a higher-than-expected background rate of thyroid carcinomas in children. Epidemiologists point out the error of comparing the results of this screening (which used advanced devices to detect even unnoticeable abnormalities) to more traditional clinical screenings (in which participants have already detected lumps or symptoms). To get a better idea of the baseline rate of thyroid abnormalities, scientists screened approximately 5,000 children of comparable ages from other areas of Japan. The data did not reveal a significant difference in the rate of thyroid abnormalities in these unexposed populations. This suggests that the background rate of thyroid abnormalities in children is higher than previously thought, which must be kept in mind when considering options such as complete or partial thyroidectomy. (Dennis Normile, Science News)

Global Health

A Zika breakthrough: Scientists detail how virus can attack fetal brain

The mechanism by which the Zika virus causes microcephaly in newborns has been described by scientists at Johns Hopkins University, Florida State University, and Emory University. Using lab-grown stem cells, the researchers demonstrated that the virus invades the brain cortex, killing the rapidly dividing stem cells there. This reduction in stem cell numbers in the cortex causes the brain to be malformed and underdeveloped. The study, published in Cell Stem Cell, is the first piece of evidence that conclusively ties Zika infection to microcephaly and developmental defects in newborns. Zika virus, known to cause only mild symptoms in adults, has been linked to an unprecedented increase in cases of microcephaly in babies born in Brazil last year, but the link between the two had so far been inconclusive. An alternative theory held that the incidence of microcephaly could be caused by pesticide use. This study showed the virus’s preference for neural stem cells over other cell types (such as fetal kidney cells or undifferentiated stem cells). The researchers observed that the virus exploits the rapidly dividing neural stem cells to replicate itself, ultimately leaving the cell population depleted and unable to grow properly. Scientists believe that better insight into the pathogenicity of the virus in neural cells is essential for developing preventative and therapeutic measures to fight the disease. (Lena H. Sun and Brady Dennis, Washington Post)

STEM diversity

NSF makes a new bid to boost diversity

To understand why women and certain minorities are underrepresented in science, the National Science Foundation (NSF) has launched an initiative aimed at increasing diversity in the scientific community. The 5-year, $75 million program, named INCLUDES (Inclusion across the Nation of Communities of Learners of Underrepresented Discoverers in Engineering and Science), solicits proposals for scaling up the involvement of underrepresented groups in science education and STEM fields. Proposals for this initiative are required to outline an effective strategy for broadening participation by working with industry, state governments, schools, and nonprofit organizations. While the NSF has funded several similar diversity initiatives over the years, this one is expected to test out novel ideas and approaches. The initial response has been largely positive, with commentators calling it “a bold new initiative” and expressing high expectations for its potential to strengthen the participation of underrepresented groups in science. (Jeffrey Mervis, Science)

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

March 8, 2016 at 9:00 am