Science Policy For All

Because science policy affects everyone.

Archive for March 2016

Psychology’s Reproducibility Crisis


By: David Pagliaccio, Ph.D.


Findings from the collaborative Reproducibility Project, coordinated by the Center for Open Science, were recently published in the journal Science. The report raised a stir in both the scientific and lay communities. The study summarized an effort to replicate 100 studies previously published in three major psychology journals. The project found that only around one third of the replication attempts yielded significant results, and the effects observed in the replications were, on average, about half the size of those originally reported. The study was quickly met with statistical and methodological critique, and in turn by criticism of that critique. With the concerns raised by the Reproducibility Project and the intense debate in the field, various organizations and media outlets have begun to spotlight the issue, noting that psychology may be experiencing a “reproducibility crisis.”

The authors of this recent study importantly indicted systemic forces that incentivize “scientists to prioritize novelty over replication” and journals that disregard replication as unoriginal. Typically, a scientist’s career is most easily advanced through high-profile and highly cited publications in well-respected journals. These rarely include replication attempts, which are generally dismissed as not novel or as failing to advance the field. Further, statistically significant results are prized while null results are often chalked up to insufficient power and given no forum in which to shape the literature. These factors lead to a large publication bias toward often underpowered but significant findings, a rampant ‘file drawer problem’ of unpublishable non-significant results, and what is known as “p-hacking,” where authors analyze and reanalyze a given dataset in different ways to push a desired result toward significance.
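Both mechanisms are easy to demonstrate numerically. The sketch below is a minimal Monte Carlo illustration in Python, assuming idealized normally distributed data and two-sample t-tests (the specific parameters are hypothetical, not drawn from the Reproducibility Project): trying five analyses on null data inflates the false-positive rate well above 5%, and publishing only the significant results of underpowered studies roughly doubles the apparent effect size, mirroring the halved effects seen in replications.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n = 10_000, 20  # simulated studies; subjects per group

# 1) "p-hacking": analyze null data five different ways (modeled here as five
#    independent outcome measures) and report only the best p-value.
false_pos = 0
for _ in range(n_sims):
    pvals = [stats.ttest_ind(rng.normal(0, 1, n), rng.normal(0, 1, n)).pvalue
             for _ in range(5)]
    false_pos += min(pvals) < 0.05
print(f"false-positive rate with 5 analyses: {false_pos / n_sims:.2f}")  # ~0.23

# 2) Publication bias: with a modest true effect (d = 0.3) and small samples,
#    few studies reach p < 0.05, and those that do overestimate the effect.
d_true, published = 0.3, []
for _ in range(n_sims):
    a, b = rng.normal(d_true, 1, n), rng.normal(0, 1, n)
    if stats.ttest_ind(a, b).pvalue < 0.05:
        published.append(a.mean() - b.mean())
print(f"true effect: {d_true}; mean published effect: {np.mean(published):.2f}")
```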

Several initiatives have been put forth to try to alleviate this “reproducibility crisis.” For example, the American Statistical Association released a statement regarding the use of p-values in scientific research. This served both as a general clarification of the use and interpretation of p-values in null hypothesis significance testing and as an impetus to supplement p-values with other measures, such as estimates of effect size. This type of reframing helps ensure good statistical practice and resists the tendency to misinterpret the arbitrary statistical threshold of p<0.05 as a marker of truth, a tendency that often biases scientific findings and reporting. Additionally, the National Institutes of Health (NIH) has adopted new guidelines for grant submissions to try to enhance rigor and reproducibility in science, for example by increasing transparency and setting basic expectations for biological covariates. Yet the main investigator-initiated research grant from the NIH, the R01, still includes novelty as a main scoring criterion and has no specific provisions for replication studies.
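Why p<0.05 cannot be read as a marker of truth comes down to base rates. A toy calculation (the prior and power values below are illustrative assumptions, not figures from the ASA statement) shows that when most tested hypotheses are false and studies are underpowered, a large share of “significant” results are false positives:

```python
# Positive predictive value of a significant result. All inputs are
# illustrative assumptions chosen to show the base-rate effect.
prior = 0.10   # fraction of tested hypotheses that are actually true
power = 0.50   # chance a true effect yields p < 0.05
alpha = 0.05   # chance a null effect yields p < 0.05

true_hits = prior * power            # true positives per hypothesis tested
false_hits = (1 - prior) * alpha     # false positives per hypothesis tested
ppv = true_hits / (true_hits + false_hits)
print(f"share of significant results reflecting real effects: {ppv:.0%}")  # 53%
```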

Publication-related initiatives designed to incentivize replication have also begun to appear. For example, the Association for Psychological Science has created a registered replication report format in which, before data collection even begins, scientists can pre-register a replication study to be published in Perspectives on Psychological Science. This spares scientists the struggle of publishing a direct replication and shifts the focus of replication away from whether a prior study was ‘true’ or ‘false’ and toward the cumulative effect size across studies. While this is a step forward, few researchers have made use of the opportunity so far. Importantly, while a rare journal like PLOS ONE explicitly states that it accepts replication submissions, top-tier journals have generally not joined in by allowing registered replications or by creating article formats for replications that would otherwise not be considered ‘novel.’ Other interesting avenues for addressing the issue have begun to spring up; for example, the website www.PsychFileDrawer.org was created as an archive of attempts to replicate psychology studies. While this provides a way to publicize failures to replicate that might otherwise go unpublished, these reports do not currently appear to be indexed by standard databases such as PubMed or PsycNET. Thus, while more failures to replicate can be made available to help the field, the unofficial nature of the website does little to help or incentivize investigators in terms of publication counts, citations, or the other metrics often considered for hiring, tenure, and promotion.

Importantly, issues of reproducibility and publication bias can have vast consequences for society and policy, and can erode public trust in science and the scientific process. Andrew Wakefield’s erroneous, retracted, and unreplicated paper linking the MMR vaccine to autism spectrum disorders is an extreme case involving outright falsification, but its long-lasting consequences truly underscore the potential impact of unchecked scientific findings. A much more benign example was profiled in The Atlantic, concerning findings that bilingual individuals show better cognitive performance than monolingual individuals. While many published studies confirmed this effect in various ways, several large studies found no significant difference, and many negative studies went unpublished. Similarly, as detailed in Slate, doubt has recently been cast on a long line of research examining ego depletion. Failed replications of the ego-depletion effect are now coming to a head. This comes after, for example, the research was turned into a popular book on willpower and how individuals can use the science to strengthen their self-control.

While these findings have not shaped major policy, it is not a far leap to see how difficult it may be to undo the effects of publication bias toward novel but unreplicated research findings across a variety of policies. For example, education research also suffers from replication issues. One study pointed out that replication studies represented less than 1% of published studies in the top 100 education research journals. While many of these studies did replicate the original work, most were conceptual rather than direct replications, and replication success was somewhat lower when performed by a third party rather than by teams including authors of the original work. While the relatively high replication success rate is encouraging, the study calls strongly for an increase in the number of replication studies performed.

Despite debates over the extent of the reproducibility problem, it is clear that psychology, and science more broadly, would benefit from greater attempts to replicate published findings. Achieving this will require large-scale shifts in policies, from journal practices to tenure decisions and governmental funding, to support and motivate high-quality science and the replication of published studies. These efforts will in turn have long-term benefits for the development of policies based on research in education, social psychology, mental health, and many other domains.

Written by sciencepolicyforall

March 30, 2016 at 9:00 am


Science Policy Around the Web – March 29, 2016


By: Thaddeus Davenport, Ph.D.


Modernizing Scientific Publishing

Handful of Biologists Went Rogue and Published Directly to Internet

Peer-reviewed scientific journals are essential for science. They motivate and reward high-quality experimental design and facilitate the dissemination of knowledge that drives innovation. A recent article in the New York Times nicely captures some of the complexity of modern scientific publishing by examining a recent push by some researchers to publish their findings directly to ‘preprint’ servers – a practice already common in physics and mathematics.

Preprint publishing has the potential to significantly speed up publication, allowing faster and wider dissemination of ideas in a free, modern digital forum. Some researchers worry that bypassing the traditional peer-review process might eventually erode the quality of research. However, it could be argued that as long as articles published to preprint servers are treated as preliminary findings (as, perhaps, we should treat all findings published in even the highest-tier journals), the online forum has the potential to be a more transparent and robust peer-review process than the current model, in which a small number of anonymous reviewers decide the value of research.

The article notes other potential hurdles to the widespread adoption of preprint publishing that are deeply embedded in the culture of research. For example, papers are the currency of science. If authors bypassed this system, they would also bypass the classic badges of honor associated with publishing in high-tier journals, potentially decreasing their competitiveness when applying for jobs and grants.

A change in publishing practices will also likely need to coincide with a change in the culture and value system of scientific research, but it is exciting to watch publishing move into the modern world. Scientific progress thrives on new ideas, and the resources of the digital age have the potential to broaden the reach of ideas and to increase the speed of their communication. (Amy Harmon, New York Times)

Economic Policies

A “Circular Economy” to Reduce Waste and Increase Efficiency

Our current economy can largely be described by a linear flow of material in which natural resources are harvested, combined, refined, and converted into products. These products are purchased and, after some amount of use, ultimately recycled or discarded at the discretion of the owner. In a Nature special this week, Walter R. Stahel describes the potential economic and environmental benefits of a different sort of economy – a “circular economy” – that “replaces production with sufficiency” by encouraging reuse, repair, remanufacturing, and recycling over the production of new goods.

Originally conceived by Stahel and his colleague Geneviève Reday-Mulvey in the 1970s, the concept of a circular economy “grew out of the idea of substituting manpower for energy.” For example, Stahel observed that it requires “more labour and fewer resources to refurbish buildings than to erect new ones.” Applying this model to all products has the potential to reduce greenhouse gas emissions substantially and expand the workforce because “remanufacturing and repair of old goods, buildings and infrastructure creates skilled jobs in local workshops.”

To support a transition to a more circular economy, Stahel recommends – among other things in his article – a change in the way economic success is measured. Rather than trying to maximize our gross domestic product (GDP), a measure of the flow of resources, perhaps we should attempt to optimize the “value-per-weight” or “labor-input-per-weight” of the manufactured products. Policies and tax structures designed to maximize these economic indicators might be effective in encouraging stewardship of the earth’s limited resources and cultivating job growth. (Walter R. Stahel, Nature News)

A Second Chance for Grants

New funding matchmaker will cater to NIH rejects

The majority of NIH grant applications do not receive funding, not necessarily because the applications are of poor quality, but because there are simply more good ideas than the government has the capacity to support. A recent Science news article by Kelly Servick describes a pilot program started earlier this month by the NIH in collaboration with Leidos to address this gap in funding.

The program, known as OnPAR, aims to establish a more open market in which NIH grant applications that score well (within the 30th percentile) but do not receive funding are made available to private organizations and funding agencies for consideration. This system could substantially benefit grant writers, increasing the efficiency of grant writing and review by allowing the “recycling” of grants and their associated peer reviews, which are expensive to produce in terms of time and energy, and thus money.

Funding agencies may see value in this program through expanded access, possibly finding themselves in the position to fund and motivate inquiry for researchers who may not have applied to their organization directly. However, private funding agencies are often in a position similar to that of the federal government – they receive more good applications than they have resources to support, and Servick notes that “the success of the project will hinge on whether private funders see value in using OnPAR in addition to their existing grant review process.”

If funders do find value in OnPAR, it is conceivable that they might allocate a percentage of their annual budget for OnPAR grants. Time will reveal the ultimate value of OnPAR, but it is a step in the right direction. How else might we increase the efficiency of the scientific production cycle? (Kelly Servick, Science News)

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

March 29, 2016 at 10:00 am

Science Policy Around the Web – March 25, 2016


By: Nivedita Sengupta, Ph.D.


Genetically Engineered Foods

Policy: Reboot the debate on genetic engineering

Genetic engineering (GE) is a highly controversial topic today, partly because of its increasing impact on day-to-day life. In recent years, a great deal of progress has been made in the field of GE, as demonstrated by the development of sophisticated modern tools like CRISPR. This has led to increasing public concern about GE and food safety laws.

One of the central issues for food safety law is whether regulatory policy should focus on the process by which GE organisms are made or on the GE products themselves. Most people in favor of product-based regulation believe that GE organisms are no different from conventionally bred organisms. In the United States, since the mid-1980s, GE products have been overseen by the Coordinated Framework for Regulation of Biotechnology (CFRB). According to the CFRB, product-based regulation is the science-based approach, and hence GE organisms can be covered by existing policies without any need to formulate new laws; they can simply be channeled to particular government agencies depending on the category into which they fall.

However, in the process of regulating GE products, the agencies realized that the process of engineering matters as well. The agencies recognized that, from a scientific standpoint, a product’s traits, harmful or beneficial, depend on the process by which it is made. For example, in human gene-therapy trials, new methods for delivering genes have removed the need for potentially harmful viral vectors. Thus, product and process issues are not distinct in regulation. Though regulating GE products rather than the process is accepted in many countries beyond the United States, others, like Brazil and Australia, have laws that mandate regulation of the mechanisms by which GE products are developed.

The inconsistency of views among GE developers and regulators in the product-versus-process argument demands a fresh start on formulating regulatory policies for GE. It is time to consider a mix of product and process issues in order to identify product groups that are likely to be of concern and to require regulation. These efforts should keep in mind the polarization of product-versus-process and science-versus-values framings, so that the government can build a system informed by science as well as by the concerns and values of citizens. (Jennifer Kuzma, Nature Comment)

Infectious Diseases

Dengue vaccine aces trailblazing trial

Vaccine development is a long and complex process; a vaccine can take decades to reach clinical use. Scientists at the Johns Hopkins Bloomberg School of Public Health in Baltimore, Maryland, have developed what may be the most potent vaccine to date for preventing dengue infection. The researchers employed a ‘human challenge’ strategy during the development and testing of this vaccine, a method that fell out of favor during the last century. A human challenge involves deliberately infecting healthy volunteers with a weakened form of the disease-causing virus. Concerns about the safety of deliberately infecting people have limited the use of human challenge studies, and researchers usually test candidate vaccines on people who are already at risk of contracting the disease of interest.

The dengue virus is a difficult vaccine target because of its four serotypes. Infection with one serotype renders a person immune to that type for life but offers no protection against the others, and may even increase the risk of hemorrhagic fever upon exposure to a different dengue serotype. The current study tested only dengue serotype 2, the most virulent. Twenty-one volunteers were injected with the experimental vaccine and 20 with a sham vaccine. Six months later, all 41 volunteers were injected with a weakened version of the dengue virus that causes symptoms similar to a mild dengue infection, such as rash. The vaccine provided 100% protection against the challenge; only individuals who received the sham vaccine showed mild symptoms, with 80% of them developing a rash.
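A back-of-the-envelope check, using only the counts reported above (0 of 21 vaccinated versus 16 of 20 sham recipients developing a rash), shows how unlikely such a split would be by chance; this is purely an illustration, not the trial’s actual statistical analysis:

```python
from scipy.stats import fisher_exact

# Reported counts: 0/21 vaccine recipients vs. 16/20 sham recipients had a rash.
table = [[0, 21],   # vaccine: rash, no rash
         [16, 4]]   # sham:    rash, no rash
odds_ratio, p = fisher_exact(table)
print(f"two-sided Fisher exact p = {p:.1e}")  # far below 0.001
```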

Because all current dengue vaccines protect only a proportion of recipients, if these results hold up in larger populations this could be one of the most promising dengue vaccines yet developed. “This is a tremendous step forward, and something that has been desperately needed for 30 years,” says Duane Gubler, a disease researcher at the Duke NUS Medical School in Singapore who was not involved in the study. He added that the lack of human challenge studies is one of the things that has made dengue vaccine development so difficult. Scott Halstead, a virologist and vaccinologist at the Uniformed Services University of the Health Sciences in Bethesda, Maryland, stated that “this is an incredible paper that shows what is absolutely necessary to develop a vaccine against the dengue virus. It’s a really important demonstration of the kind of proof that you really need to have before you spend US$1.5 or 2 billion on a phase III [efficacy] trial.”

Meanwhile, investigators have already begun a second human-challenge study to test whether the vaccine protects against dengue serotype 3, and they hope to go on to test the remaining serotypes with the same strategy. They also intend to use human challenge studies to develop a vaccine against Zika virus, which is related to dengue. Though scientists are enthusiastic about using the human-challenge strategy for vaccine development in the near future, doing so demands a reconsideration of current policies and of the past incidents on which those rules are based. (Erika Check Hayden, Nature News)

Federal Science Funding

Biological specimen troves threatened by funding pause

Collecting biological specimens is an essential part of science and conservation; collections are used to identify species, track diseases, and study climate change. One important example is the fish collection at the Burke Museum of Natural History and Culture in Seattle, which serves as a repository for the US National Oceanic and Atmospheric Administration (NOAA) for the North Pacific. NOAA uses the specimens collected each year to assess fish abundance and set fishing quotas for species conservation. In another case, a collection of eggs held by the Field Museum in Chicago led to the famous conservation discovery that the pesticide DDT caused widespread nesting failures in birds of prey, driving several species to near extinction.

Despite their value to science, biological specimen collections recently lost a valuable source of funding and support. The US National Science Foundation (NSF) announced that it would indefinitely suspend a program that provides funding to maintain biological specimen collections. The NSF will maintain its current grants but will not accept new proposals. Many researchers and curators find this disheartening and worrying, because the NSF is one of the only public providers of such funding, and only roughly 0.06% of the agency’s $7.5-billion budget is allocated to maintaining biological specimen collections. According to the NSF, it is soliciting feedback on the program while evaluating the current grants in the collections grant program. Decisions will depend on the results of that evaluation, and it remains unclear whether the funding hiatus is temporary or permanent.

This pause, however, has scientists dismayed, given the importance of these scientific collections. As mentioned earlier, preserved specimens play an immense role in understanding the historic ranges of species and provide information on species invasion and extinction. Biological collections also help researchers track species carrying human diseases, aiding the containment of disease outbreaks. Moreover, thanks to advances in technology, specimens can be put to uses that have not yet been anticipated. For example, DNA sequencing of museum specimens collected before DNA identification existed has helped to identify previously unknown species. With the sudden change in funding options, many museums are considering digitizing their collections, and indeed the NSF’s program to support digitizing collections remains unchanged. But “there’s no point digitizing if we don’t take care of the collections themselves,” says Barbara Thiers, director of the William and Lynda Steere Herbarium at the New York Botanical Garden. “You certainly can’t get any DNA out of an image.” (Anna Nowogrodzki, Nature News)

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

March 25, 2016 at 9:00 am

Science Policy Around the Web – March 22, 2016


By: Emily Petrus, Ph.D.

Forensic science

Forensics gone wrong: When DNA snares the innocent

The 2015 TV series “Making a Murderer” has shed light on a disturbing issue in criminal justice – the dubious results of forensic scientists and the tests used to convict suspects. Though sensational, this is not a new problem in forensic science. In 2012, a forensic scientist in Boston was arrested for tampering with evidence and recording positive tests for substances (such as drugs or blood) to ensure convictions. Other examples of forensic mismanagement include the Amanda Knox trial, in which evidence was mishandled from the crime scene to the lab to the courtroom. Although Knox was ultimately exonerated, the Italian justice system used poorly executed forensic “evidence” to keep her in jail for four years.

DNA evidence is now considered the gold standard in the courtroom, but before sequencing strategies were available, scientists often relied on microscopic characteristics of hair to make positive identifications. Just last month, Santae Tribble, who served 28 years in jail for a murder he did not commit, was awarded $13.2 million. He was convicted on the basis of a hair said to match his, with a claimed one-in-10-million chance that it belonged to somebody else. DNA analysis has since shown that the stocking used to cover the murderer’s face contained hair from three other individuals and one dog – but not from Tribble.
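Even taken at face value, a one-in-10-million match statistic is easy to misread (the classic ‘prosecutor’s fallacy’). What matters is how many people in the relevant population would match by coincidence. A hypothetical calculation, with an assumed metropolitan population:

```python
# Illustrative only: the population figure is an assumption, and hair-match
# probabilities like this one were never scientifically well founded anyway.
match_prob = 1 / 10_000_000   # claimed chance a random person's hair matches
population = 5_000_000        # hypothetical pool of plausible sources

expected_matches = match_prob * population
print(f"expected coincidental matches: {expected_matches:.2f}")  # 0.5
# With half an expected coincidental match, hair evidence alone leaves a real
# possibility that the matching hair came from someone other than the suspect.
```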

What does this mean for policy? Although advances in forensics enable our justice system to link suspects to crimes in ways not previously possible, there must be more oversight of how the experiments are performed. We need more controls, blind experiments, and supervisor oversight in crime labs. Additionally, new technology must be rigorously tested to ensure that detections and genetic analyses are accurate. For example, 13 positions in the genome (loci) are currently used to determine a genetic match, but the FBI will soon require analysis of 20 loci. This increases the discriminating power of genetic tests, and presumably the quality of evidence at trials. (Douglas Starr, Science News)
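The jump from 13 to 20 loci matters because random-match probabilities multiply across independent loci. A rough sketch with a hypothetical per-locus figure (real forensic loci have allele frequencies that vary by locus and by population):

```python
# Assumption: each locus alone matches a random person with probability 0.1,
# and loci are inherited independently. Real per-locus values differ.
per_locus_match = 0.1

for n_loci in (13, 20):
    random_match = per_locus_match ** n_loci
    print(f"{n_loci} loci: about 1 in {1 / random_match:.0e}")
# 13 loci: ~1 in 1e13; 20 loci: ~1 in 1e20 -- vastly stronger discrimination.
```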

Infectious Diseases

Dengue Fever Vaccine is Effective – What About Zika?

Global warming gives us many reasons to lose sleep at night, including rising sea levels, mass extinctions, and shrinking resources, but perhaps the most immediately terrifying result is the increased prevalence of diseases spread by our least favorite organism, the mosquito. With the advent of summer, we can expect more people to suffer from mosquito-borne pathogens such as Zika virus, malaria, or dengue fever. Pathogens transmitted by mosquitoes or between humans are nothing new, but global warming has expanded the territory in which these mosquitoes can be found. This increased threat to the United States population has made vaccine production an urgent priority for American scientists.

New hope is on the horizon, as scientists have demonstrated that a new vaccine for dengue fever is 100% effective. This brings optimism for quickly producing vaccines against Zika virus and other mosquito-borne pathogens, as the technology behind the dengue vaccine can be translated to other diseases. Scientists are characteristically hesitant to commit to a timeline for a Zika vaccine, with some predicting it will be years before one is both safe and effective. However, with the summer Olympics in Brazil just a few months away, now is the time for major funding and initiatives to produce a Zika vaccine for Brazilians and those traveling to the games. (http://www.RT.com)

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

March 22, 2016 at 9:00 am

Science Policy Around the Web – March 11, 2016


By: Sophia Jeon, Ph.D.


Patent law and Intellectual Property

Accusations of errors and deception fly in CRISPR patent fight

Clustered regularly-interspaced short palindromic repeats, better known as CRISPR, is getting a lot of attention as a promising molecular engineering technique that can easily edit genes in the laboratory and, potentially, for therapeutic use. Last year, Chinese researchers used the technique in human embryos, raising serious ethical concerns. Designing your own pets or human babies probably won’t happen in the immediate future, but before CRISPR can even be considered for commercial use, two research teams, at UC Berkeley and the Broad Institute, will have to settle who gets to benefit financially from it.

In May 2012, a team led by UC Berkeley’s Jennifer Doudna submitted a patent application for the CRISPR-Cas9 technology. Several months later, in December 2012, Feng Zhang’s research team at the Broad Institute also filed for a patent, but ended up receiving it before the Berkeley team because it used an expedited review program. The Berkeley team requested a patent interference, which will determine who actually invented the technology first. The issue is complicated by the fact that in March 2013, U.S. patent law switched from a system that awarded the patent to whoever invented first to one that awards it to whoever files first.

So how does one go about proving that someone invented or thought of something first, especially in this age of open-access journals and public data sharing? The investigation could be messy and could take months, or even years. However, both sides seem to have a number of strategies for weakening each other’s arguments, revealing mistakes in the application process and pointing fingers at insufficient data or misrepresented information in the applications. Patent fights like this are not rare for biotechnologies with commercial potential (e.g., the recent lawsuit between Oxford Nanopore Technologies and Illumina, Inc. over a DNA sequencing technique), but it is interesting to see such a huge legal dispute between researchers from academia. (Kelly Servick, ScienceInsider)

Abortion law and Social Science

The Return of the D.I.Y. Abortion

In recent years, abortion clinics have been vanishing from certain states (e.g., Texas, Mississippi, Missouri, North Dakota, South Dakota, Wyoming, and Florida) at a record pace. Many of those clinics are Planned Parenthood facilities, and the closures are partially due to legislation defunding Planned Parenthood and other abortion restrictions in those states. The more important question, however, is whether these restrictions have actually resulted in lower abortion rates. Social scientists and health experts say there are multiple factors to consider. Some argue that abortion rates were falling even before clinic closings accelerated, due to increasing acceptance of single motherhood, the recession, and more effective use of birth control.

How does law affect public health or, more specifically, personal decisions regarding women’s bodies? Does limited access to abortion clinics make women turn to alternative methods such as self-induced abortion? It turns out that Google searches may provide some insight. Because there are no surveys large enough to track behavior in different states, and because surveys often don’t tell the real story (people can lie), Seth Stephens-Davidowitz did an interesting study using Google searches to look for a correlation between the number of abortion clinics and interest in self-induced abortion. Sadly, the search terms he found related to self-induced abortion indicate that women may be driven to risky methods such as purchasing abortion pills online, punching one’s stomach, bleaching one’s uterus, or attempting abortion with a coat hanger.

A previous study found that a vast majority of women would be willing to travel to other states with legal abortion if needed. However, underage girls or low-income women with unwanted pregnancies could be googling for, and trying, alternative abortion methods that lead to adverse health outcomes. This June, the Supreme Court is expected to rule on a Texas law that restricts access to abortion clinics and on whether it places an “undue burden” on women’s right to abortion. The justices should base their decision on hard evidence and well-balanced research. The Google search approach has its limits, since it is difficult to learn the searchers’ health outcomes or whether they actually attempted an abortion, but it is one way to look at human behavior and how law can affect public health. (Seth Stephens-Davidowitz, New York Times)

Clinical Trials and Data Sharing

STAT investigation sparked improved reporting of study results, NIH says

The results of clinical trials are required by federal law to be publicly reported on clinicaltrials.gov at the end of the trial. The goal is to promote transparency in clinical research, to share data among the research community and physicians, and to empower patients by returning results to participants. Compliance, however, is poor: according to a 2014 analysis published in JAMA, “a recent analysis of 400 clinical studies revealed that 30% had not shared results through publication or through results reporting in ClinicalTrials.gov within 4 years of completion.”

Last December, STAT also conducted an extensive investigation of clinical trials led by companies, universities, hospitals, and even the NIH to determine who actually reported their findings, and how long after study completion. Many top research institutions failed to report on time, and the federal government has not imposed fines on a single trial, which NIH director Francis Collins called “very troubling.” Possible reasons for the delays are that investigators continue to analyze data, which can take a long time even after a trial has ended; that investigators wait until they publish their findings in a peer-reviewed journal; and that, in some cases, drug companies intentionally want to hide negative results. Whatever the reason, there should be consequences for withholding data that could be useful to doctors and patients.

The STAT investigation has named names and it seems to have worked. The data released by NIH showed that between December 2015 and January 2016, there was a 25 percent rise in new submissions and a 6 percent increase in reporting of corrected results for trial findings that had previously been submitted. Deborah Zarin, director of Clinicaltrials.gov, said the agency’s own outreach to researchers and training efforts are paying off as well. NIH is currently working on developing a new policy to clarify, expand, and enforce the requirements for clinical trial registration and results submission. (Charles Piller, STATnews)

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

March 11, 2016 at 9:00 am

Broadening the Debate: Societal Discussions on Human Genetic Editing


By: Courtney Pinard, Ph.D.


In one of the most impressive feats of synthetic biology so far, researchers have harnessed the machinery bacteria use to fight and destroy viruses, and have been able to precisely and cheaply edit genetic code using a technology called clustered, regularly-interspaced short palindromic repeats (CRISPR) and CRISPR-associated endonuclease protein 9 (Cas9). CRISPR has been used to detect and target genetic sequences associated with some of the world’s deadliest diseases, such as HIV and malaria. Although CRISPR holds great promise for treating disease, it raises numerous bioethical concerns, sparked by the first report of deliberate editing of the DNA of human embryos by Chinese researchers. Previous blog posts have described the scientific discussion surrounding the promise of CRISPR. At least three scientific research papers per day are published using the technique, and biotech companies have already begun to invest in CRISPR to modify disease-related genes. However, the use of CRISPR, or any genetic editing technology, to permanently alter the genome of human embryos concerns a much broader range of stakeholders, including clinicians, policymakers, international governments, advocacy groups, and the public at large. As CRISPR moves us into the realm of the newly possible, the larger global, social, and policy implications deserve thorough consideration and discussion. Policies on human genetic editing should encourage extensive international cooperation and require clear communication between scientists and the rest of society.

There is no question that CRISPR has the potential to help cure disease, both indirectly and directly. CRISPR won Science’s Breakthrough of the Year for 2015 in part for the creation of a “gene drive” designed to reprogram mosquito genomes to eliminate malaria. Using CRISPR-Cas9 technology, investigators at the University of California engineered transgenic Anopheles stephensi mosquitoes to carry an anti-malaria parasite effector gene. This genetic tool could help wipe out the malaria pathogen within a targeted mosquito population by spreading the dominant malaria-resistance gene to 99.5% of progeny. The gene-snipping precision of CRISPR can also treat certain genetic diseases directly, such as certain cancers and sickle cell disease. CRISPR can even be used to cut HIV out of the human genome and prevent subsequent HIV infection.
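The power of a gene drive lies in biased inheritance: instead of the usual 50% chance of passing on an allele, heterozygous carriers transmit the drive to nearly all offspring. Here is a minimal sketch of the resulting population dynamics, assuming random mating, no fitness cost, and the 99.5% transmission reported for the engineered mosquitoes (the 1% starting frequency is a hypothetical release size):

```python
# Drive-allele frequency under random mating with biased transmission.
# transmission = 0.5 would be ordinary Mendelian inheritance.
transmission = 0.995   # reported fraction of progeny inheriting the drive
p = 0.01               # hypothetical starting frequency after a small release

for generation in range(1, 11):
    # drive-carrying gametes come from DD homozygotes (p^2) plus
    # converted heterozygotes (2p(1-p) * transmission)
    p = p**2 + 2 * p * (1 - p) * transmission
    print(f"generation {generation:2d}: drive allele at {p:.1%}")
# The allele sweeps from 1% to >99% in about ten generations; with ordinary
# inheritance (transmission = 0.5) it would simply stay at 1%.
```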

CRISPR has limitations, including the possibility of off-target genetic alterations and unintended consequences of on-target alterations. For example, the embryos used in the Chinese study described above were non-viable, fewer than 50% were edited, and some embryos started to divide before the edits were complete. Within a single embryo, some cells were edited while others were not. In addition, the researchers found a lack of specificity: the target gene was inserted into the DNA at the wrong locus. Little is known about the physiology of cells and tissues that have undergone genome editing, and there is evidence that complete loss of a gene can lead to compensatory adaptation in cells over time.

Another concern is that CRISPR could lead scientists down the road to eugenics. On May 14, 2015, Stanford’s Center for Law and the Biosciences and Stanford’s Phi Beta Kappa chapter co-hosted a panel discussion on editing the human germline genome, entitled Human Germline Modification: Medicine, Science, Ethics, and Law. Panelist Marcy Darnovsky, from the Center for Genetics and Society, called human germline modification a society-altering technology because of “the potential for a genetics arms race within and between countries, and a future world in which affluent parents purchase the latest upgrades for their offspring.” Because of its potential for dual use, genetic editing was recently listed as a potential weapon of mass destruction.

In response to ethical concerns, a co-inventor of CRISPR, Dr. Jennifer Doudna, called for a self-imposed temporary moratorium on the use of CRISPR in germline cells. Eighteen scientists, including two Nobel Prize winners, agreed to the moratorium, and policy recommendations were published in the journal Science. In addition to a moratorium, the recommendations include continuing research on the strengths and weaknesses of CRISPR, educating young researchers about them, and holding international meetings with all interested stakeholders to discuss progress and reach agreements on dual use. Not all scientists support these recommendations. Physician and science policy expert Henry Miller disagrees with the moratorium, arguing that it is unfair to restrict the development of CRISPR for germline gene therapy because we would be denying families cures for monstrous genetic diseases.

So far, the ethical debate has been mostly among scientists and academics. In her article published last December in The Hill Congress Blog, Darnovsky asks: “Where are the thought leaders who focus, for example, on environmental protection, disability rights, reproductive rights and justice, racial justice, labor, or children’s welfare?” More of these voices will be heard as social and policy implications catch up with the science.

In early February, the National Academy of Sciences and National Academy of Medicine held an information-gathering meeting to determine how American public attitudes and decision making intersect with the potential for developing therapeutics using human genetic editing technologies. The Committee’s report on recommendations and public opinion is expected later this year. One future recommendation may be to require Food and Drug Administration (FDA) regulation of genetic editing technology as a part of medical device regulation. Up until recently, the FDA has been slow to approve gene therapy products. Given the fast pace of CRISPR technology development, guidelines on dual use, as determined by recommendations from the National Academies, should be published before the end of the year. So far, U.S. guidelines call for strong discouragement of any attempts at genome modification of reproductive cells for clinical application in humans, until the social, environmental, and ethical implications are broadly discussed among scientific and governmental organizations.

International guidelines on the alteration of human embryos are absolutely necessary to help regulate genetic editing worldwide. According to a News Feature in Nature, many countries, including Japan, India, and China, have no enforceable rules on germline modification. Four laboratories in China, for example, continue to use CRISPR in non-viable human embryonic modification. Societal concerns about designer babies are not new. In the early 2000s, a Council of Europe Treaty on Human Rights and Biomedicine declared human genetic modification off-limits. However, the U.K. now allows the testing of CRISPR on human embryos.

In a global sense, applying tacit science diplomacy to developments in synthetic biology may mitigate unethical use of CRISPR. Tacit science diplomacy uses honesty, fairness, objectivity, reliability, skepticism, accountability, and openness as common norms of behavior to accomplish scientific goals that benefit all of humanity. The National Science Advisory Board for Biosecurity (NSABB) is a federal advisory committee that addresses issues related to biosecurity and dual-use research at the request of the United States government. Although the NSABB acts only in the U.S., the committee has the capacity to practice tacit science diplomacy by providing guidance on CRISPR dual-use concerns to both American and foreign-national scientists working in the U.S.

Under tacit science diplomacy, scientific studies misusing CRISPR would be condemned in the literature, in government agencies, and in diplomatic venues. Tacit science diplomacy was used when the Indonesian government refused to give the World Health Organization (WHO) samples of the bird flu virus, which temporarily prevented vaccine development. After five years of international negotiations on this issue, a preparedness framework was established that encouraged member states to share vaccines and technologies. A similar preparedness framework could be developed for genetic editing technology.

Institutional oversight and bioethical training for the responsible use of genetic editing technology are necessary, but not sufficient on their own. Tacit science diplomacy can help scientists working in the U.S. and abroad develop shared norms. Promoting international health advocacy and science policy discussions on this topic among scientists, government agencies, industry, advocacy groups, and the public will be instrumental in preventing unintended consequences and dual use of genetic editing technology. 

Written by sciencepolicyforall

March 9, 2016 at 9:01 am

Science Policy Around the Web – March 8, 2016


By: Swapna Mohan, DVM, Ph.D.


Public Health Surveillance

Mystery cancers are cropping up in children in aftermath of Fukushima

After the Fukushima Daiichi Nuclear Power Plant accident in Japan in 2011, a swift and efficient evacuation and containment plan ensured that human suffering was kept to a minimum. This included more thorough population surveillance for thyroid problems in Fukushima citizens under the age of 18. However, this thyroid screening of children and teens in the months that followed showed an unexpectedly high rate of thyroid-related cancers. Anti-nuclear-power activists concluded that this was the result of inhaled and ingested radioactivity from the Fukushima incident. However, scientists unequivocally disagree, stressing that the majority of the thyroid abnormalities have not resulted from radiation exposure; others suggest the excess might be a result of overdiagnosis by public health officials. Since there are no baseline data from before the incident, it is impossible to verify whether the high number of cases is a direct result of radiation or simply reflects a previously unrecognized background rate of thyroid carcinomas in children. Epidemiologists point out the error of comparing the results of this screening (which used advanced devices to detect even unnoticeable abnormalities) with more traditional clinical screenings (in which participants have already noticed lumps or symptoms). To establish a better baseline, scientists screened approximately 5,000 children of comparable ages from other areas of Japan. The data did not reveal a significantly lower rate of thyroid abnormalities in these unexposed populations. This suggests that the background rate of thyroid abnormalities in children is higher than previously thought, which must be kept in mind when considering options such as complete or partial thyroidectomy. (Dennis Normile, Science News)
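The screening-comparison point is worth making concrete: when a highly sensitive screen is compared against symptom-driven detection, the apparent disease rate can jump dramatically even with no change in underlying disease. All the numbers below are illustrative assumptions, not Fukushima data:

```python
# Illustration of screening-driven inflation of apparent rates.
# All values are hypothetical assumptions for the sake of the example.
background_prevalence = 0.003   # children with a detectable thyroid abnormality
ultrasound_detection = 0.90     # fraction found by sensitive screening
clinical_detection = 0.03       # fraction ever noticed as lumps or symptoms

screened_rate = background_prevalence * ultrasound_detection
clinical_rate = background_prevalence * clinical_detection
print(f"apparent rate, sensitive screening: {screened_rate:.2%}")
print(f"apparent rate, clinical detection:  {clinical_rate:.3%}")
print(f"apparent inflation: {screened_rate / clinical_rate:.0f}x")  # 30x
```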

Global Health

A Zika breakthrough: Scientists detail how virus can attack fetal brain

The mechanism by which the Zika virus causes microcephaly in newborns has been described by scientists at Johns Hopkins University, Florida State University, and Emory University. Using lab-grown stem cells, the researchers demonstrated that the virus invades the brain cortex, killing the rapidly dividing stem cells there. This reduction in cortical stem cell numbers causes the brain to be malformed and underdeveloped. The study, published in Cell Stem Cell, is the first piece of evidence tying Zika infection mechanistically to microcephaly and developmental defects in newborns. Zika virus, known to induce only mild symptoms in adults, has been linked to an unprecedented increase in cases of microcephaly in babies born in Brazil last year, but the link between the two had so far been inconclusive; an alternative theory held that the microcephaly could be caused by pesticide use. This study showed the virus’s propensity for neural stem cells over other cell types (such as fetal kidney cells or undifferentiated stem cells). The researchers observed that the virus exploits the rapidly dividing neural stem cells to replicate itself, ultimately leaving the cell population depleted and unable to grow properly. Scientists believe that better insight into the virus’s pathogenicity in neural cells is essential for developing preventative and therapeutic measures against the disease. (Lena H. Sun and Brady Dennis, Washington Post)

STEM diversity

NSF makes a new bid to boost diversity

To understand why women and certain minorities are underrepresented in the sciences, the National Science Foundation (NSF) has launched an initiative aimed at increasing diversity in the scientific community. The 5-year, $75 million program, named INCLUDES (Inclusion across the Nation of Communities of Learners of Underrepresented Discoverers in Engineering and Science), solicits proposals for scaling up the involvement of underrepresented groups in science education and STEM fields. Proposals are required to outline an effective strategy for broadening participation by working with industry, state governments, schools, and nonprofit organizations. While the NSF has funded several similar initiatives over the years, this one is expected to test out novel ideas and approaches. The initial response has been largely positive, with commentators calling it “a bold new initiative” and expressing high expectations for its potential to strengthen the participation of underrepresented groups in science. (Jeffrey Mervis, Science)

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

March 8, 2016 at 9:00 am

Science Policy Around the Web – March 4, 2016


By: Valerie Miller, Ph.D.


Election Policy

How different polling locations subconsciously influence voters

In his final State of the Union address, President Obama discussed the need to revamp the voting process, stating, “We’ve got to make it easier to vote, not harder. We need to modernize it for the way we live now.” Now that we are fully entrenched in election season, many are questioning the fairness of voter ID laws, which can disproportionately affect minority voters. However, research demonstrates another phenomenon: the location where you vote can influence the choices you make in the voting booth. Priming, in which the subconscious identification of ideas and objects can shape thoughts or behaviors, may explain how voting patterns are influenced by outside stimuli. To curtail overt influence, most states prohibit campaign materials within 100 feet of polling locations, and some forbid wearing campaign paraphernalia while voting, yet a number of studies have shown that polling locations themselves can prime voters toward specific behaviors. For example, one study found that voting in a church or other religious location can prime attitudes associated with conservative values, such as negative attitudes toward the LGBT community or same-sex marriage. Social scientists also question voting in schools, which has been shown to increase voting in favor of education-related ballot measures. In all, six published studies have examined whether voting behavior can be influenced by polling location, and each one concluded that polling-place priming is real. As an alternative, it has been suggested that polling locations be eliminated altogether in favor of a ballot-by-mail system. While some argue against all-mail voting, out of tradition or fear of voter fraud, Colorado, Oregon, and Washington have already implemented voting systems conducted entirely by mail. All three states have seen significant increases in voter turnout. (Ben Pryor, Scientific American)

Global Health

In 2050, half the world will be nearsighted

In a new study published in the journal Ophthalmology, scientists predict that nearly 5 billion people will suffer from myopia (nearsightedness) by the year 2050, corresponding to half the projected global population. The researchers also found that almost 10 percent (nearly 1 billion people) could develop high myopia, in which severe nearsightedness raises the risk of cataracts, glaucoma, retinal detachment, and macular degeneration. By contrast, in 2010 just 28.3 percent (about 2 billion people) had myopia, with 4 percent (277 million people) experiencing high myopia; the projections therefore imply roughly a 140 percent increase in the number of people with myopia by 2050.

The researchers attribute the rise in myopia to lifestyle and environmental factors, most likely decreased time spent outdoors and increased time spent doing near-work activities on screens. Indeed, there are regional differences in the incidence of nearsightedness, with more cases in high-income countries in North America and Asia. In the United States, myopia has been increasing steadily, with one study finding a 66 percent increase between the early 1970s and the early 2000s. The researchers conclude that if current trends continue, with more time on screens and less time outdoors, their projections for 2050 will likely hold true, and the number of people experiencing vision loss from high myopia is likely to increase 7-fold from the year 2000, becoming a leading cause of blindness. (Julie Beck, The Atlantic)
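The headline figures are easy to sanity-check. Using the article’s rounded case counts (about 2 billion myopes in 2010 and about 4.76 billion in 2050) and an assumed 2050 world population of roughly 9.7 billion, for illustration:

```python
# Rounded case counts from the article; the 9.7-billion world-population
# projection for 2050 is an outside assumption used only for this check.
myopes_2010, myopes_2050 = 2.0e9, 4.76e9
world_2050 = 9.7e9

growth = (myopes_2050 - myopes_2010) / myopes_2010
print(f"growth in cases, 2010-2050: {growth:.0%}")              # ~140%
print(f"share of 2050 population: {myopes_2050 / world_2050:.0%}")  # ~49%
```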

Research Misconduct

Many surveys, about one in five, may contain fraudulent data

Scientists are often reminded of data fabrication and research misconduct in the laboratory, but what about in the social sciences and survey research? Last year, a high-profile case of research misconduct brought survey fraud to the forefront, when a researcher was found to have fabricated data purporting to show that short conversations could change people’s minds on same-sex marriage. But an important question remains: how widespread is the problem? Two researchers, Noble Kuriakose, a research scientist at SurveyMonkey, and Michael Robbins, a researcher affiliated with the University of Michigan and Princeton University, sought to address this issue by developing a statistical test designed to detect when a survey may contain fabricated data. The test is based on the likelihood that two independent respondents will give highly similar answers to survey questions, and it is meant for large-scale opinion surveys that cover broad topics and are designed to identify community differences.

Robbins, who is also the director of Arab Barometer, a project measuring opinions in the Middle East, notes that in developing countries, conducting survey research often requires in-person interviews, which can be dangerous and time-consuming. Thus, one problem that arises is that interviewers will sometimes invent survey responses to avoid risk and save time. Indeed, when Robbins and Kuriakose applied their test to 1000 sets of public data they found that nearly one in five of the surveys failed their test, indicating they were likely to contain significant portions of fabricated data. While only 5 percent of surveys conducted in westernized nations failed, 26 percent of studies conducted in developing nations were flagged by the test.
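The heart of the test is a simple quantity: for each respondent, the maximum share of answers duplicated by any other respondent in the same survey, with surveys flagged when too many respondents look like near-copies. The sketch below is a simplified, hypothetical rendering of that idea; the exact cutoffs and this implementation are assumptions, not Kuriakose and Robbins’ published code:

```python
import numpy as np

def max_percent_match(responses: np.ndarray) -> np.ndarray:
    """For each respondent, the highest fraction of answers shared with
    any other respondent. responses: (n_respondents, n_questions)."""
    n = len(responses)
    best = np.zeros(n)
    for i in range(n):
        overlap = (responses == responses[i]).mean(axis=1)
        overlap[i] = 0.0            # ignore the respondent's match with itself
        best[i] = overlap.max()
    return best

# Toy data: 200 honest respondents answering 50 five-option questions, plus
# 10 fabricated records cloned from one interview with a few answers tweaked.
rng = np.random.default_rng(1)
honest = rng.integers(0, 5, size=(200, 50))
clones = np.tile(honest[0], (10, 1))
tweak = rng.random(clones.shape) < 0.06
clones[tweak] = rng.integers(0, 5, size=tweak.sum())

scores = max_percent_match(np.vstack([honest, clones]))
# Honest respondents top out well below the 85% line; the clones sit near 95%.
print(f"respondents matching another above 85%: {(scores > 0.85).mean():.1%}")
```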

However, not all groups agree with Kuriakose and Robbins’ findings. The Pew Research Center, which has performed hundreds of international surveys, disputes the claims and has suggested that Kuriakose and Robbins retract their findings. Courtney Kennedy, director of survey research at Pew, stated that when Pew ran the test on its own data some surveys were flagged, but deeper digging revealed serious questions about only a few. Kennedy believes the test is prone to false positives and does not account for the number of survey questions, the number of respondents, or other factors that could skew results. Pew has since posted a rebuttal online. (John Bohannon, Science Magazine)

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

March 4, 2016 at 9:00 am

The importance of primate research and the responsibility it requires


By: Brian Russ, Ph.D.

A bill is currently circulating in the Australian Senate to ban the importation of primates into the country for the purpose of research. If enacted into law, it would effectively limit all non-human primate research within the country to that which is currently occurring. At this time, there are three breeding facilities for primates within Australia. The majority of these colonies contain macaque monkeys, the primary primate used in biomedical research. If the bill were to pass, the colonies could not bring in new animals, which in turn would restrict medical advances, as the colonies are the primary suppliers for all biomedical research on primates in the country. Interestingly, the bill targets a very small number of cases: the Senator who proposed it states that between 2000 and 2015, fewer than 800 primates were imported into the country. More likely, the goal of the bill is to push Australia toward the complete cessation of biomedical research on primates. The bill is supported by the Australian organization Humane Research, which advocates for a ban on most animal research within Australia, particularly primate research.

Scientists and advocacy groups in Australia are concerned that this bill would negatively impact biomedical research within the country. Dr. James Bourne, a biomedical researcher and member of the research committee of the Australian government’s National Health & Medical Research Council, recently testified at a Senate committee hearing on the importance of animal research in Australia and how the ban would harm Australia’s biomedical research as a whole. He pointed to the recent outbreak of the Zika virus to demonstrate how primate research will be critical in developing a vaccine for it and for similar future viral outbreaks. A number of international advocacy groups have also spoken out against the bill, declaring that its passage would place a stranglehold on biomedical research within the country. The advocacy group Speaking of Research, which is based in the United States but operates internationally, recently published an open letter to the Australian Senate from a number of biomedical researchers in Australia detailing how primate research has been critical to helping the people of Australia, and how this ban would drastically reduce its effectiveness in the future. The letter explains how biomedical research on primates directly led to the eradication of polio in Australia (and throughout most of the world), helped to alleviate the symptoms of Parkinson’s disease through the creation of deep brain stimulation, and was critical to the creation of most vaccines used today. While all of these are of great benefit to the world, they only begin to scratch the surface of how biomedical research on animal models benefits global health.

In addition to discussing the benefits that animal research has provided the world, the letter also points out that animal importation and research are already strictly regulated, for the better. Throughout the world, ethics committees must approve research before it is allowed to begin on any animal, and if a suitable non-animal alternative exists, it should be used instead. In the United States, all animal research must be approved by an Institutional Animal Care and Use Committee (IACUC), a body run at each local institution, with professional standards supported by the American Association for Laboratory Animal Science (AALAS). Additionally, institutions are overseen and accredited by the Association for Assessment and Accreditation of Laboratory Animal Care (AAALAC) International. Institutions that work with AALAS and AAALAC allow these organizations to verify compliance with the laws and regulations governing the care and treatment of animals. The majority of animal researchers welcome such third-party oversight, as it shows the public that biomedical researchers take the treatment of animals very seriously. These reviews revolve around what are termed the "Three Rs": Reduce, Refine, and Replace. That is, Reduce the number of animals needed in biomedical research through better techniques and practices, Refine procedures to minimize distress and any associated pain, and Replace animal models with alternatives when feasible. Organizations like Speaking of Research, the Foundation for Biomedical Research, and the National Association for Biomedical Research, along with many others, push not only for better funding and understanding of animal research, but also for the responsible and humane use of animals in that research. The research community therefore embraces the Three Rs and actively looks for ways to put them into practice.

Advocacy groups that seek to ban animal research often attack it by suggesting that researchers are not making an appropriate effort to replace animals with alternatives. For instance, Australia's Humane Research group has a webpage and videos asserting that numerous alternatives to animal research are available. Groups that advocate for the use of animal research, however, disagree that we are at a stage in biomedical research where we could truly stop using animals. Biomedical research is constantly evolving, and there may be a day in the future when the use of animals in research is no longer necessary for expanding our understanding of the world and improving the health of the populace; however, these groups state that we are not there yet.

One recent advance that will help to Reduce, Refine, and, in some cases, Replace animal research is the development of what is being called the mini-brain. While still in the early stages of development, these bundles of human neurons can mimic the functions and structures of parts of the human brain. The "brains" are grown by inducing stem cells, through a form of genetic programming, to develop into particular classes of brain cells. This breakthrough could be of huge benefit to the biomedical community, as it may allow drugs to be tested directly on human neural tissue. Advocacy groups looking to ban animal research and replace it with non-animal alternatives will likely point to these findings as further evidence that animals are no longer necessary for biomedical research. However, one must remember that the creation of these mini-brains would not have been possible without years of research into how stem cells operate, research that was conducted in animals. Additionally, while mini-brains may be useful for drug development, they do not have the capacity for perception or behavior, meaning that some animal testing will still be required to ensure a drug's efficacy and safety.

As research continues to progress, it may someday be possible to eliminate the need for animals in biomedical research. Currently, however, the state of biomedical research necessitates the use of animals, and banning such research, or even restricting the importation of primates, could severely hinder the development of cures and vaccines for many serious illnesses. Nevertheless, it is important that scientists and the public continue to police animal research to ensure that all animals are treated ethically and that every attempt is made to practice the Three Rs.

Written by sciencepolicyforall

March 2, 2016 at 9:00 am

Science Policy Around the Web – March 1, 2016

leave a comment »

By: Melissa Pegues, Ph.D.

Photo source: pixabay.com

European Science

Exit from the European Union could impact British research

As Britain considers its future with the European Union (EU), academics worry that an exit could jeopardize British research. Scientists in the United Kingdom (UK) are concerned that acquiring funding for their work may become more difficult. There is also concern that collaborations between British scientists and researchers in other member states, fostered through the EU, could be disrupted. Nobel Prize winner Professor Sir Paul Nurse has argued that all EU scientists have benefited from the union because it allows ideas and people to be easily shared. Science Minister Jo Johnson also believes a withdrawal would be detrimental to the future of British research, and in remarks at an event hosted by the Royal Society stated that "the risks to valuable institutional partnerships, to flows of bright students and to a rich source of science funding mean the Leave campaign has serious questions to answer."

It remains unclear whether scientific funding would be adversely affected by a British exit. Between 2007 and 2013, the UK supplied over 78 billion euros to the EU, of which 5.4 billion euros were earmarked for research and development. In that same period, UK researchers received 8.8 billion euros from the EU for research, amounting to approximately 16% of the UK's total research funding. However, it is unknown whether the UK could still apply for this funding if it chose to leave. Norway and Switzerland, both non-members, do receive EU funding for scientific research, suggesting that the same may be possible for the UK. An exit would also raise questions about how the UK's participation in current large-scale international collaborations, such as CERN and the European Space Agency, would proceed. Additionally, the UK has worked with other EU member states to reform clinical trial policies in ways that would ease the bureaucratic burden, through measures such as simplified reporting, lighter regulation of medicines that are already authorized, and greater sharing of data, while still protecting clinical trial volunteers. Opponents of staying in the EU, including Scientists for Britain, counter that the UK is not reliant on the EU for funding or for participation in collaborative projects. Still, British researchers might lose priority to EU members when trying to access funds, and would lose their political voice in discussions of the future of these projects.

While the potential effects of a British exit from the EU remain under debate, Britons will have much to consider before the referendum set for June 23rd. (BBC News)

Biotech and Intellectual Property

Illumina files suit over DNA sequencing technology

Illumina has recently filed a lawsuit against rival Oxford Nanopore Technologies, arguing that technology used in Oxford Nanopore's devices infringes upon patents that Illumina holds for sequencing technology developed by researchers at the University of Washington and the University of Alabama at Birmingham. California-based Illumina, a leader in the development of technologies for next generation sequencing (NGS), was once an investor in UK-based Oxford Nanopore, but that relationship ended in 2013 when Oxford Nanopore turned its focus towards technologies not covered by the agreement.

The suit centers on Oxford Nanopore's palm-sized MinION sequencer, which has been hailed for its size, speed, and low cost. Although the device's accuracy is not high enough for use in studying human genomics, it is well-suited to reading smaller sequences and to applications where data must be read in real time, such as diagnosing infections during epidemics. Indeed, the device was used to identify new infections during the recent Ebola epidemic in West Africa. Although Illumina does not currently market a similar device, it argues that it has made "substantial investments" in nanopores and that the pore used in the MinION infringes upon patents Illumina holds for pores used to read DNA.

Oxford Nanopore was the first company to commercialize nanopore technology for sequencing DNA and has planned the release of a higher-throughput device, the PromethION, for later this year. If successful, Illumina's suit could prevent Oxford Nanopore from selling its devices in the US. Some researchers, including Opinionomics author Mick Watson, worry that this could threaten the development of innovative sequencing methods.

Oxford Nanopore's CEO, Dr. Gordon Sanghera, responded to the litigation by stating that "[i]t is gratifying to have the commercial relevance of Oxford Nanopore products so publicly acknowledged by the market monopolist for NGS." (Erika Check Hayden, Nature News)

Public Health and Infectious Disease

Japanese encephalitis virus could have a new transmission route in pigs

Mosquitoes have recently been in the news as potent vectors of diseases like Zika. However, many questions remain about how mosquito-borne diseases are maintained in temperate regions when their vectors die off during the colder months. A recent study of the transmission of Japanese encephalitis virus (JEV), a mosquito-borne virus distantly related to Zika virus, provided a surprising answer: pigs. According to the World Health Organization (WHO), JEV causes approximately 68,000 clinical cases per year; while progression to encephalitis is rare, it can cause lifelong neurological damage or even death. It is well established that pigs act as a reservoir from which uninfected mosquitoes can acquire the virus before spreading it to other animals. A natural question arising from this well-accepted cycle is how the virus is maintained when mosquitoes are absent. The study found that pigs can pass JEV directly to one another, without mosquitoes, and that "the virus lingered for weeks in the pigs' lymphatic tissue and tonsils." This is the first time mosquito-free transmission of the virus has been documented in pigs, though the finding remains to be validated on the farms where natural transmission occurs. Interestingly, vaccines against the virus exist for both humans and pigs, but implementation on farms has proven difficult, since "it's not cost-effective to vaccinate pigs because they breed and turn over so quickly." As such, the WHO suggests on its site "that JE vaccination be integrated into national immunization schedules in all areas where JE disease is recognized as a public health issue." (Laurel Hamers, Science News)

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

March 1, 2016 at 9:00 am