Science Policy For All

Because science policy affects everyone.

Archive for the ‘Essays’ Category

Science For All – Effective Science Communication and Public Engagement


By: Agila Somasundaram, PhD

Image: By Scout [CC0], via Wikimedia Commons

         In 1859, Charles Darwin published On the Origin of Species, laying the foundation for the theory of evolution through natural selection. Yet more than 150 years after that publication, and despite a large volume of scientific evidence supporting it, only 33% of the American population believes that humans evolved solely through natural processes. Twenty-five percent of US adults believe that a supreme being guided evolution, and 34% reject evolution completely, saying that humans and all other forms of life have co-existed forever. Similarly, only 50% of American adults believe that global climate change is mostly due to human activity, with 20% saying that there is no evidence for global warming at all. A significant fraction of the public believes that there is substantial disagreement among scientists on evolution and climate change (when in reality there is overwhelming scientific evidence and consensus on both), and questions scientists’ motivations. Public skepticism about scientific evidence and scientists extends to other areas such as vaccination and genetically modified foods.

Public mistrust in the scientific enterprise has tremendous consequences, not only for federal science funding and the advancement of science, but also for the implementation of effective policies to improve public and global health and combat issues such as global warming. In her keynote address at the 2015 annual meeting of the American Society for Cell Biology, Dr. Jane Lubchenco described the Science-Society Paradox: scientists need society, and society needs science. How then can we build public support for science, and improve public trust in scientists and scientific evidence?

Scientists need to be more actively involved in science outreach and public engagement efforts. Communicating science in its entirety, not just as sensational news, requires public understanding of, and familiarity with, the scientific process – its incremental nature, breakthrough discoveries (which don’t necessarily mean a cure), failures, and limitations alike. Who better to explain that to the public than scientists – skilled professionals who are at the center of the action? In a recent poll, more than 80% of Americans agreed that scientists need to interact more with the public and policymakers. But two major hurdles need to be overcome.

Firstly, communicating science to the public is not easy. Current scientific training teaches researchers to communicate science, in written and oral formats, largely to their peers. As scientists become more specialized in their fields, the technical terms and concepts (jargon) they use frequently may be incomprehensible to non-experts (and even to scientists outside their field). The scientific community would benefit tremendously from formal training in public engagement. Such training should be incorporated into the early stages of professional development, including undergraduate and graduate school. Both students and experienced scientists should be encouraged to make use of workshops and science communication opportunities offered by organizations such as AAAS, the Alan Alda Center for Communicating Science, and iBiology, to name a few.

Secondly, federal funding agencies and philanthropic organizations should provide resources, and academic institutions should create avenues and incentives, for scientists to engage with the public. Both students and scientists should be allowed time away from their regular responsibilities to participate in public outreach efforts. Instead of being penalized for popularizing science, scientists should have their outreach efforts taken into consideration in promotion, grant, and tenure decisions, and exceptional communicators should be rewarded. Trained scientist-communicators will be better able to work with their institutions’ public relations staff and science journalists to disseminate their research findings accurately to a wider audience, and to educate the public about the behind-the-scenes world of science that is rarely seen outside. Engaging with the public could also benefit researchers directly by increasing their scientific impact, and could influence research directions to better serve society.

While increasing science outreach programs and STEM education may seem like obvious solutions, the science of science communication tells us that it is not so simple. The goals of science communication are diverse – they range from generating or sharing scientific excitement and increasing knowledge of a particular topic to understanding the public’s concerns and influencing people’s attitudes towards broader science policy issues. Diverse communication goals target diverse audiences, and require an assortment of communicators and communication strategies. Research has shown that simply increasing the public’s scientific knowledge does not accomplish these various communication goals. This is because people don’t rely solely on scientific information to make decisions; they are influenced by their personal needs, experiences, values, and cultural identity, including their political, ideological or religious affiliations. People also tend to adopt shortcuts when trying to comprehend complex scientific information, and to believe what aligns with their pre-existing notions or with the beliefs of their social groups, and what they hear repeatedly from influential figures, even if it is incorrect. Effective science communication requires identifying, understanding and overcoming these and other challenges.

The National Academies of Sciences, Engineering, and Medicine convened two meetings of scientists and science communicators, one in 2012 to gauge the state of the art of research on science communication, and another in 2013 to identify gaps in our understanding of science communication. The resulting research agenda outlines important questions requiring further research. For example, what are the best strategies to engage with the public, and how can those methods be adapted for different groups without directly challenging their beliefs or values? What are effective ways to communicate science to policymakers? How do we help citizens navigate misinformation on the rapidly changing internet and social media? How do we assess the effectiveness of different science communication strategies? And lastly, how do we build the science communication research enterprise? Researchers studying communication in different disciplines, including the social sciences, need to come together and partner with science communicators to translate that research into practice. The third colloquium in this series will be held later this year.

Quoting Dr. Dan Kahan of Yale University, “A central aim of the science of science communication is to protect the value of what is arguably our society’s greatest asset…Modern science.” As evidence-based science communication approaches are developed further, it is critical that scientists make scientific dialogue a priority and make use of existing resources to engage effectively with the public – meeting people where they are and explaining why each person should care – and bring people a step closer to science, so that ‘post-truth’ doesn’t go from being merely the word of the year to a scary new way of life.

Have an interesting science policy link?  Share it in the comments!


Written by sciencepolicyforall

July 22, 2017 at 11:27 pm

The Economic Impact of Biosimilars on Healthcare


By: Devika Kapuria, MD

          Biologic drugs, also known as large molecules, are an ever-increasing source of healthcare costs in the US. In contrast to small, chemically synthesized molecules – the classic active substances that make up 90 percent of the drugs on the market today – biologics are therapeutic proteins produced through biotechnological processes, some of which may require over 1,000 steps. The average daily cost of a biologic in the US is $45, compared with only $2 for a chemical drug. Though expensive, their advent has significantly changed disease management and improved outcomes for patients with chronic diseases such as inflammatory bowel disease, rheumatoid arthritis and various forms of cancer. In 2015-2016, biologics accounted for 20% of the global health market, and they are predicted to account for almost 30% by 2020. Worldwide revenue from biologic drugs more than quadrupled from US $47 billion in 2002 to over US $200 billion in 2013.

The United States’ Food and Drug Administration (FDA) has defined a biosimilar as a biologic product that is highly similar to the reference product, notwithstanding minor differences in clinically inactive components, and for which there are no clinically meaningful differences between the biosimilar and the innovator product in terms of safety, purity and efficacy. For example, CT-P13 (Inflectra) is a biosimilar to infliximab (a chimeric monoclonal antibody against TNF-α) that recently obtained FDA approval for use in the treatment of inflammatory bowel disease. CT-P13 has pharmacokinetics and efficacy that are similar, though not identical, to those of infliximab. With many biologics going off patent, the biosimilar industry has expanded greatly. In the last two years alone, the FDA approved four biosimilar medications: Zarxio (filgrastim-sndz), Inflectra (infliximab-dyyb), Erelzi (etanercept-szzs) and Amjevita (adalimumab-atto).

Unlike generic versions of chemical drugs, which are significantly cheaper than their branded counterparts, biosimilars are not priced dramatically below the original biologic. This is due to several reasons. First, the development time and cost for biosimilars are much greater than for generic medications: it takes 8-10 years and several hundred million dollars to develop a biosimilar, compared to around 5 years and $1-$5 million for the generic version of a small molecule drug. With only single entrants per category in the US, biosimilars are priced 15-20% lower than their brand-name rivals – prices that, though lower, can still amount to hundreds of thousands of dollars. By the end of 2016, estimated global sales of biosimilars amounted to US $2.6 billion, and they are projected to reach nearly $4 billion by 2019. Estimates of the cost savings of biosimilars for the US market vary: the Congressional Budget Office estimated that the BPCI (Biologics Price Competition and Innovation) Act of 2009 would reduce expenditures on biologics by $25 billion by 2018, while another analysis from the RAND Corporation estimated that biosimilars would result in a $44.2 billion reduction in biologic spending between 2014 and 2024.
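
To put the discount arithmetic above in concrete terms, here is a minimal sketch; the $200,000 reference annual cost is an illustrative assumption rather than a figure for any specific biologic, and the function name is hypothetical.

```python
def biosimilar_price(reference_annual_cost, discount):
    """Return (biosimilar cost, per-patient annual savings) for a given discount.

    Illustrative arithmetic only -- the reference cost plugged in below is a
    hypothetical round number, not a figure for any particular biologic.
    """
    cost = reference_annual_cost * (1 - discount)
    return cost, reference_annual_cost - cost


# A hypothetical biologic costing $200,000 per patient per year, discounted
# at the 15-20% range typical of current US biosimilars:
for discount in (0.15, 0.20):
    cost, saved = biosimilar_price(200_000, discount)
    print(f"{discount:.0%} discount: biosimilar ${cost:,.0f}, savings ${saved:,.0f} per patient per year")
```

Even at the larger discount, the hypothetical biosimilar still costs well over $100,000 per year, which is the point the paragraph above makes about biosimilar pricing.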

In the United States, a regulatory approval pathway for biosimilars was not established until the Patient Protection and Affordable Care Act of 2010. However, biosimilars have been used in Europe for over a decade, and this has led to the development of strategies for quicker adoption, including changes in manufacturing, scaling up production and adapting to local healthcare policies. These changes have made biosimilars competitive in the European market: first-generation biosimilars have captured 50-80% of the market across five European countries, with expected cost savings of $15 to $44 billion by 2020. One example of a significant discount involves the marketing of Remsima, a biosimilar of Remicade (infliximab). In Norway, Remsima was marketed aggressively at a 69% discount to the reference product; within two years, it had captured 92.9% of the country’s market share.

The shift to biosimilars may be challenging for both physicians and patients. While safety concerns related to biosimilars have been alleviated by post-marketing studies from Europe, there remains a significant lack of awareness about biosimilars among healthcare providers, especially about prescribing and administering them. Patient acceptance remains an important issue as well: patients loyal to the reference brand may not have the same level of confidence in the biosimilar. And, as with generics, patients may believe that biosimilars are in some way inferior to the reference product. Increased reporting of post-marketing studies and pharmacovigilance can play a role in alleviating some of these concerns.

In 2015, the FDA approved the first biosimilar in the US; since then, it has published a series of guidelines for biosimilar approval under the BPCI Act, including guidance on demonstrating biosimilarity and interchangeability with the reference product. These comprise a total of three final guideline documents and five draft guidance documents. Starting in September 2017, the World Health Organization will accept applications for prequalification into its Essential Medicines List for biosimilar versions of rituximab and trastuzumab for the treatment of cancer. This program ensures that medications purchased by international agencies like UNICEF meet standards for quality, safety and efficacy. Hopefully, this will increase competition in the biosimilar market, reducing prices and increasing access to medications in low-income countries.

Both human and economic factors need to be considered in this rapidly growing field. Increasing awareness among prescribers and patients of the safety and efficacy of biosimilars, as well as improving the regulatory framework, will be essential for the widespread adoption of biosimilars.

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

July 19, 2017 at 10:42 am

Growing Need for More Clinical Trials in Pediatrics


By: Erin Turbitt, PhD

Source: Flickr by Claudia Seidensticker via Creative Commons

      There have been substantial advances in biomedical research in the US in recent decades, yet children have not benefited from improvements in health and well-being to the same degree as adults. An illustrative example is that many drugs used to treat children have not been approved for that use by the Food and Drug Administration (FDA), whereas many more drugs have been approved for use in adult populations. As a result, some drugs are prescribed to pediatric patients outside the specifications for which they have been approved, referred to as ‘off-label’ prescribing. For example, some drugs approved for Alzheimer’s disease are used to treat autism in children: the drug donepezil, approved to treat dementia in Alzheimer’s patients, is used to improve sleep quality in children with autism. Another example is the use of the pain medication paracetamol in premature infants in the absence of knowledge about its effects in this population. While decisions about off-label prescribing are usually informed by scientific evidence and professional judgement, there may be associated harms. There is growing recognition that children are not ‘little adults’ and that their developing brains and bodies may react differently from those of fully developed adults. While doses for children are often calculated by scaling from adult dosing after adjusting for body weight, the child’s stage of development also affects responses to drugs: babies have difficulty breaking down drugs due to the immaturity of their kidneys and liver, whereas toddlers break down drugs more effectively.
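
As a rough illustration of the weight-based scaling described above, the sketch below applies simple linear (per-kilogram) scaling from an adult dose. The 70 kg adult reference weight, the function name, and the example numbers are illustrative assumptions; this is not a clinical dosing method, and it omits exactly the developmental factors the paragraph notes.

```python
def weight_scaled_dose(adult_dose_mg, child_weight_kg, adult_reference_kg=70.0):
    """Linear per-kilogram scaling of an adult dose to a child's body weight.

    Illustration only -- this is the naive scaling the article describes, and
    it deliberately ignores organ maturity (immature kidneys and liver in
    infants, faster clearance in toddlers), which is exactly why body-weight
    scaling alone is often inadequate in pediatrics.
    """
    return adult_dose_mg * (child_weight_kg / adult_reference_kg)


# A hypothetical 500 mg adult dose scaled for a 20 kg child:
print(f"{weight_scaled_dose(500, 20):.0f} mg")  # prints "143 mg"
```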

The FDA requires data about drug safety and efficacy in children before issuing approvals for the use of drugs in pediatric populations. The best way to produce this evidence is through clinical drug trials. Historically, the use of children in research has been ethically fraught, with some of the earliest examples coming from vaccine trials, such as the development of the smallpox vaccine in the 1790s. Edward Jenner, who developed the smallpox vaccine, has famously been reported to have tested it on several young children, including his own, without consent from the children’s families. Over the following centuries, many researchers tested new treatments, including drugs and surgical procedures, on institutionalized children. It was not until the early 20th century that these practices were criticized and debate began over the ethical use of children in research. Today, in general, the ethical guidance for inclusion of children in research specifies that individuals unable to exercise informed consent (including minors) are permitted to participate in research provided informed consent is obtained from their parent or legal guardian. In addition to a guardian’s informed consent, the assent (‘affirmative agreement’) of the child is also required where appropriate. Furthermore, research protocols involving children must undergo rigorous evaluation by Institutional Review Boards before researchers may conduct their studies.

Contributing to the lack of evidence on the effects of drugs in children is the fact that fewer clinical trials are conducted in children than in adults. One study reports that from 2005 to 2010, ten times fewer trials were registered in the US for children than for adults. Recognizing the need to increase the number of pediatric clinical trials, the FDA introduced incentives to encourage the study of interventions in pediatric populations: the Best Pharmaceuticals for Children Act (BPCA) and the Pediatric Research Equity Act (PREA). The BPCA delays approval of competing generic drugs by six months and encourages the NIH to prioritize pediatric clinical trials for drugs that require further evidence in children. The PREA requires companies to assess more of their drugs in children. Combined, these initiatives have improved the labeling of over 600 drugs to include pediatric safety information, such as approved use and dosing information. Noteworthy examples include two asthma medications, four influenza vaccines, six medications for seizure disorders and two products for treating migraines. However, downsides to these incentives have also been reported. Pediatricians have voiced concern over the increasing cost of some of these drugs developed specifically for children, which have involved minimal innovation. For example, approval of a liquid formulation of a drug used to treat heart problems in children has resulted in that formulation costing 700 times more than the tablet equivalent.

A further challenge in conducting pediatric clinical trials is the high dropout rate of participants and the difficulty of recruiting adequate numbers of children (especially for trials in rare disease populations), which sometimes leads to discontinuation of trials. A recent report indicates that 19% of pediatric trials conducted from 2008 to 2010 were discontinued early, with an estimated 8,369 children enrolled in trials that were never completed. While some trials are discontinued for safety reasons or because efficacy findings suggest changes in standard of care, many (37%) are discontinued due to poor patient accrual. There is insufficient research on the factors influencing parents’ decisions to enter their child in a clinical trial, and research in this area may lead to improvements in patient recruitment for these trials. This research must include, or be informed by, members of the community, such as parents deciding whether to enroll their child in a clinical trial, and disease advocacy groups. The FDA has an initiative to support the inclusion of community members in the drug development process. Through the Patient-Focused Drug Development initiative, patient perspectives are sought on the benefit-risk assessment process. For example, patients are asked to comment on what worries them the most about their condition, what they would consider to be meaningful improvement, and how they would weigh potential benefits of treatments against common side effects. This initiative involves public meetings held from 2013 to 2017 focused on over 20 disease areas. While the majority of the diseases selected more commonly affect adults than children, some child-specific disease areas are included. For example, on May 4, 2017, a public meeting was held on Patient-Focused Drug Development for autism. The meeting included discussions from a panel of caregivers about the significant health effects and daily impacts of autism and current approaches to treatment.

While it is encouraging that the number of pediatric trials is increasing, ultimately leading to improved treatments and outcomes for children, many challenges remain for pediatric drug research. Future research in this area must explore parental decision-making and experiences, which can shed light on the motivations and risk tolerances of parents considering entering their child in a clinical trial and potentially improve trial recruitment rates. Such research can also help ensure that clinical trials are ethically conducted, adequately balancing the need for more research against the potential for harm to pediatric research participants.

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

May 24, 2017 at 5:04 pm

How Easy is it to Access Health Care in the US?


By: Rachel F Smallwood, PhD

Source: pixabay

         Access to health care has been a concern as long as there has been health care, and it is one of the hot-button issues of health care policy debates. The recent passage in the House of Representatives of the American Health Care Act (AHCA), which would repeal and replace much of the Affordable Care Act, has again brought this debate front and center. The Congressional Budget Office’s analysis of the first iteration of the AHCA indicated that it would result in 24 million fewer people having health insurance by 2026. It would also place more of the financial burden on people making less than $50,000 per year. However, substantial changes were made to parts of the bill before it passed the House, and there will likely be more if it is to pass the Senate. There is much debate and dissension over what level of access to health care should be provided by the government and whether health care is a right or a privilege. In addition to that debate, there are other facets of the United States’ health care system that need examination and work to ensure access to health care.

There are many reasons a person may not have access to health care – not having health insurance is just one. To measure access to health care, one must first define it. Is there some quality standard that must be met for treatment to be considered health care? How do we determine whether one person’s health care is equivalent to another’s? With health care measures ranging from necessary, to recommended but not urgent, to completely elective, even these differences can be difficult to quantify. Most institutions collecting data on health care use a working definition like the one set by the Institute of Medicine in 1993: access to health care means a person is able to use health care services in a timely manner to achieve positive health outcomes. This implies that a person can enter the health care system, physically get to a place where they can receive health care, and find physicians whom they trust and who can provide the needed services.

Indeed, there are differing opinions on what constitutes “access”, and this heterogeneity is further compounded by the multiple barriers to access. For example, during the recent AHCA debate, many representatives spoke about separating the concepts of health care coverage and health care access, while others believe that the two are not separable. There are at least four factors that limit a person’s access to health care. The first is the availability of health services: if the necessary health care is not provided within reasonable traveling distance of a person seeking services, none of the other factors matter. The other three are personal barriers, such as a person’s perceptions, attitudes, and beliefs about their own health and health care; organizational barriers, such as referrals, waiting lists, and wait times; and financial barriers, such as the inability to afford insurance, copays, costs beyond deductibles, and lost wages.

The current policy in the United States is the Affordable Care Act, put into place under the Obama administration. One of the most contentious points of the law is its requirement that every person have health care coverage or pay a penalty. A 2015 survey released by the National Center for Health Statistics indicated a substantial drop in the percentage of the US population without insurance over the previous few years. There was a slight increase in the percentage of people with a usual place to go for health care (i.e., a primary care provider or clinic for regular check-ups) and a decrease in the number of people who failed to obtain needed health care due to cost. However, simply requiring everyone to purchase health insurance did not produce a commensurate rise in access to health care, as measured by the steps and metrics discussed by the Agency for Healthcare Research and Quality. Additionally, there have been substantial increases in premiums, which means that many consumers still face a significant financial barrier to health care.

The numbers and policies referenced above address the country as a whole, but statistics vary widely across regions of the United States. US News ranked states on their access to health care using six metrics: child wellness visits, child dental visits, adult wellness visits, adult dental visits, health insurance enrollment, and health care affordability. The ranges between states on these measures are wide: in the highest-ranked states, about 20% of adults do not have regular checkups, while in the lowest-ranked states around 40% do not. In the highest-ranked state for affordability, the fraction of people who needed to see a doctor but could not because of cost was around 7%, while in the lowest-ranked state it was just under 20%. While some of this is due to differing demographics and living conditions from state to state, the discretion and freedom that states have in applying health care laws also factor in.

When compared with other similar (high-income) nations, the United States falls short on access to health care. Although the Affordable Care Act improved access to health insurance, the US still lags when it comes to its residents receiving actual care. This is partly due to fewer physicians practicing general medicine in the US. In 2013, the US ranked below all other Organization for Economic Co-operation and Development (OECD) countries except Greece in the density of general practitioners per 1,000 people. A related measure showed that the US also had a lower percentage of physicians choosing general practice/primary care as their specialty than the other 35 countries. These countries are all World Bank-categorized high-income countries except for Mexico and Turkey, which are upper middle-income (and still had better statistics than the US). This disparity has been noted in the US and is driven by many factors, including physician salaries, patient loads, and medical education’s emphasis (or lack thereof) on primary care. The shortage also disproportionately affects rural areas, likely contributing to some of the state-to-state variability noted above.

The United States struggles, compared with similar nations, to provide health care access to its citizens. The reasons for this struggle are multifaceted, including access to health insurance, financial barriers, and a lack of primary care physicians. Political tensions and the opposing principles held by individuals can also be barriers to working toward a more accessible health care system. We should focus on developing a health care system in which everyone can reasonably obtain health insurance and health care costs are not prohibitively expensive, and a medical education system that emphasizes the importance of primary care to our nation’s health and communicates the need for practitioners in under-served areas. Shedding light on these areas for improvement will allow people to work together to address our weaknesses and create a system that improves and sustains the health of our nation.

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

May 19, 2017 at 10:16 am

How GMOs Could Help with Sustainable Food Production


By: Agnes Donko, PhD

World Population estimates from 1800 to 2100

           The world population has exceeded 7.5 billion, and by 2050 it is expected to reach 9.7 billion. The challenge of feeding this ever-growing population is exacerbated by global warming, which may lead to more frequent droughts and to the melting of Arctic sea ice and the Greenland ice sheet. The year 2016 was the warmest ever recorded, with the average temperature 1.1 °C above the pre-industrial period and 0.06 °C above the previous record set in 2015. According to the United Nations, the world faces its largest humanitarian crisis since the organization’s founding in 1945, with Yemen, South Sudan, Somalia and Nigeria hit hardest. In these countries, 20 million people face starvation and famine this year because of drought and regional political instability.

How could genetically modified organisms (GMOs) help?

The two main GMO strategies are herbicide-tolerant (HT) and insect-resistant crops. HT crops were developed to survive the application of specific herbicides (such as glyphosate) that would otherwise destroy the crop along with the targeted weeds. Insect-resistant crops contain a gene from the soil bacterium Bt (Bacillus thuringiensis) that encodes a protein toxic to specific insects, thus protecting the plant. Insect-resistant crops can reduce pesticide use, which decreases the ecological footprint of cultivation in two ways: it reduces the environmental impact of insecticide production, and it cuts fuel use and carbon dioxide (greenhouse gas) emissions through fewer spraying rounds and reduced tillage. Thus, adoption of GM technology by African nations and other populous countries like India could support sustainable agriculture that helps ameliorate the burden of a changing climate and growing populations.

In developed nations, especially in the US, GM technology has been widely used since the mid-1990s, mainly in four crops: canola, maize, cotton and soybean. GM crops accounted for 93 percent of cotton, 94 percent of soybean and 92 percent of corn acreage in the US in 2016. Although the emergence of glyphosate-resistant weeds has increased herbicide usage, in 2015 the global insecticide savings from using insect-resistant maize and cotton were 7.8 million kg (an 84% decrease) and 19.3 million kg (a 53% decrease), respectively, compared with the pesticide usage expected with conventional crops. Globally, these savings prevented the emission of more than 2.8 million kg of carbon dioxide, equivalent to taking 1.25 million cars off the road for one year.

Another way in which GM crops can help sustainable food production is by reducing food waste in developed nations. The Food and Agriculture Organization of the United Nations (FAO) estimates that one-third of all food produced for human consumption in the world (around 1.3 billion tons) is lost or wasted each year, including 45% of all fruit. For example, when an apple is bruised, an enzyme called polyphenol oxidase initiates the oxidation of polyphenols, which turns the apple’s flesh brown. But nobody wants to buy brown apples, so bruised apples are simply trashed. In Arctic apples, the level of this enzyme is reduced by gene silencing, thereby preventing browning. The Arctic apple obtained USDA approval in 2015 and is expected to reach the market in 2017.

In 2015, the FDA approved the first genetically modified animal for human consumption: a genetically engineered Atlantic salmon called AquAdvantage. Conventional salmon farming has serious environmental impacts. AquAdvantage contains a growth hormone-regulating transgene that accelerates growth, decreasing the farming time from 3 years to 16-18 months. This would dramatically reduce the ecological footprint of fish farming, leading to more sustainable food production. Even though the FDA did not find any difference in nutritional profile between AquAdvantage and its natural counterpart, AquAdvantage will not hit the U.S. market any time soon, because the FDA has banned its import and sale until guidelines on how the product should be labeled are published.

This FDA action was prompted by bill S. 764, signed by then-President Barack Obama in 2016. Bill S. 764 requires food companies to disclose GMOs, but not necessarily with a GMO text label on the packaging. They may choose to label GM ingredients with a symbol or a QR (quick response) code that, when scanned with a smartphone, leads the consumer to a website with more information on the product. But this requires the consumer to have both a smartphone and access to the internet. The bill has also been criticized for its ‘lax standards and broad definition’. For instance, if a product consists mostly of meat but some minor ingredient is produced from GM crops, the product need not be labeled. Oil extracted from GM soybeans and starch purified from GM corn are exempt from labeling because, although derived from GM sources, they no longer contain any genetic material. By contrast, in the European Union (EU), regulations require that the phrase “genetically modified” or “produced from genetically modified [name of the organism]” appear clearly next to the ingredient list. If the food is not packaged, the same phrase must appear on or next to the food display. The EU also sets an explicit threshold: conventional food or feed containing GMO material below 0.9% is exempt from labeling.
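
As a small illustration of the QR-code disclosure route the bill permits, the sketch below encodes a product-information link into a scannable code using the third-party qrcode Python package; the URL and file name are entirely hypothetical.

```python
# pip install "qrcode[pil]"
import qrcode

# Hypothetical disclosure page -- S. 764 leaves the destination and URL format
# to the manufacturer, so this address is only an example.
disclosure_url = "https://example.com/products/12345/bioengineered-ingredients"

img = qrcode.make(disclosure_url)  # build a QR code image encoding the link
img.save("disclosure_qr.png")      # an image that could be printed on packaging
```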

Despite its controversial guidelines for GMO labeling, bill S. 764 could end the long-fought battle of the Just Label It campaign. The bill is a significant step toward consumers’ right to know, letting individuals decide whether or not they want to consume GM foods. GMOs can significantly support sustainable food production and reduce humanity’s destructive environmental impact, but only if we let them.

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

May 12, 2017 at 5:13 pm

How Science Policy Affects Pandemic Pathogen Research


By: Samuel Porter, PhD

         In 2012, a pair of studies published in Nature and Science weeks apart ignited one of the biggest national debates about science in recent memory. These studies demonstrated that a few mutations in the highly pathogenic H5N1 strain of influenza virus (colloquially known as “bird flu”) could enable it to be transmitted through the air between mammals. At the heart of the controversy was the question of whether scientists should be creating more transmissible and/or virulent strains of deadly viruses in the lab. Such controversial experiments are known as “gain of function” studies.

Critics claimed that the research was so dangerous that the risk of an accidental or deliberate release of these lab strains far outweighed the scientific and public health benefits. In an attempt to respond to the growing concern over their work, the community of researchers working with these pathogens voluntarily agreed to suspend gain of function research for 60 days to discuss new policies for conducting it safely.

But that was not enough to satisfy critics of the research, who continued to lobby the Obama administration to take official action. On October 17, 2014, the White House Office of Science and Technology Policy (OSTP) abruptly announced a pause on all U.S. Government funding of gain of function research on influenza, Middle East respiratory syndrome (MERS), and severe acute respiratory syndrome (SARS) coronaviruses until the National Science Advisory Board for Biosecurity (NSABB) could make recommendations for policy regulating the research going forward. The NSABB was formed in 2005 (in the wake of the 2001 anthrax attacks) and is composed of scientists from universities around the nation and administrators from 14 separate federal agencies. The board reports to the Secretary of Health and Human Services (HHS) and is tasked primarily with recommending policies to the relevant government entities to prevent published research in the biological sciences from negatively impacting national security and public health.

The move drew harsh criticism from researchers in the field, many of whom thought that it was too broad. They claimed it would jeopardize their ability to predict, detect, and respond to potentially emerging pandemics. In the private sector, several companies said that the order would prevent them from working on new antiviral drugs and vaccines. Furthermore, many young scientists worried that an inability to do their experiments could jeopardize their careers. In an effort to bring attention to the issue, many scientists (including the two flu researchers whose work triggered the pause) formed the group Scientists for Science, which advocates against blanket bans on research. Researchers were especially upset by the NSABB’s recommendation to censor the publications resulting from the experiments, out of fear that the research could have a “dual use” that would threaten national security. However, not all researchers in the field support gain of function research; an opposing group, the Cambridge Working Group, maintains that the risks of the research outweigh its benefits.

The moratorium lasted until January 9th, 2017, when the OSTP released guidelines for funding this research in the future. The new rules are essentially the same recommendations put forth by the NSABB seven months earlier. The NSABB had concluded that these studies involving “potentially pandemic pathogens” (PPP) do have important benefits to public health, but warrant additional screening prior to funding approval. The policy directs federal agencies to create a pre-funding review mechanism using eight criteria (including whether the pathogen is likely to cause a naturally occurring pandemic, and whether there are alternative methods of answering the scientific question). The results of these reviews must be reported to the White House OSTP. Importantly, the policy was implemented in the final days of the Obama administration rather than left to the incoming Trump administration, which, as of this writing, has yet to fill nearly any top science positions and might not have issued guidance for months, if at all. Researchers welcomed the decision to finally lift the ban, but questioned when the projects would be allowed to resume.

What can we learn from this situation from a science policy perspective? First, we must not overreact to hysteria regarding the risks of this type of research. There are indeed risks in performing research on potentially pandemic strains of influenza and other pathogens, as there are with other types of research. But issuing overly broad, sweeping moratoriums that halt groundbreaking research for years is not the answer, nor is government censorship of academic publication. While the studies were eventually given the green light to resume and were published without modification, there is no making up for the lost time. These studies are not machines that can simply be turned on and off on a whim without repercussions. When we delay research into how viruses become pandemic, we hurt our ability to detect and respond to naturally occurring outbreaks. Additionally, when American scientists are prevented from doing research that other countries are still pursuing, American leadership in the biomedical sciences is put at a competitive disadvantage. (The European Academies Science Advisory Council also updated its recommendations for PPP research in 2015, but did not institute a moratorium.) What we learn from these studies could potentially save countless lives. Second, the freedom to publish without government censorship must be valiantly defended in any and all fields, especially under a new administration with an aggressively anti-science and anti-climate stance. Lastly, the scientific community must do a better job of educating the public both on the importance of these studies from a public health perspective and on the precautions put in place to ensure that they are conducted safely.

In the future, there will inevitably be debates over the safety or ethics of the latest experiments in a particular field. In wading through the murky waters of a complex controversy, science policy makers should make decisions that balance public health, safety, and ethics, rather than resort to reactionary policies like censorship and moratoriums.

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

April 21, 2017 at 8:47 am

Scientific Activism: Voting to Speed Up Discovery with Preprint Publishing


By: Thaddeus Davenport, PhD

Source: Public Library of Science, via Wikimedia

         The election of Donald Trump to the Oval Office and the early actions of his administration have sparked a wave of protests in support of women’s rights and immigration, among other issues. Like other citizens, scientists have cause to be concerned about the administration’s early actions, which reveal a general disregard for facts and scientific evidence. In response, organizers have planned the March for Science for this Saturday, April 22nd, as an opportunity for people to gather in cities around the world to voice their support for factual information and scientific research. And while it is important to denounce the actions of the Trump administration that are harmful to science and health, it may be even more critical to acknowledge the underlying partisan divisions that created a niche for his rhetoric and to begin the difficult work of bridging the divide. For example, a Pew Research Center poll from 2015 indicates that 89% of liberal Democrats believe government investment in basic science pays off in the long run, while only 61% of conservative Republicans feel the same way. Additionally, American adults with less knowledge of scientific topics are more likely to believe that government funding of basic science does not pay off. This suggests that improved science education and outreach will be important in building public support for scientific research. However, scientists often lead very busy lives and have little time outside of their professional activities to devote to valuable pursuits like science outreach. How, then, might scientists work towards building a better relationship with the public?

The products of science – knowledge, medicines, technology – are the clearest evidence of the value of research, and they are the best arguments for continued research funding. Efficiency in science is good not only for scientists hoping to make a name for themselves, but also for the public, who, as the primary funders of academic research, must benefit from the products of that research. If taxpayers’ demand for scientific inquiry dissipates because of a perceived poor return on their investment, then the government, which supposedly represents these taxpayers, will limit its investment in science. Therefore, in addition to communicating science more clearly to the public, scientists and funding agencies should ensure that science is working efficiently and working for the public.

Information is the primary output of research, and it is arguably the most essential input for innovation. Not all research will lead to a new product that benefits the public, but most research will yield a publication that may be useful to other scientists. Science journals play a critical role in coordinating peer review and disseminating new research findings, and as the primary gatekeepers to this information, they are in the difficult position of balancing accessibility to the content of their journals with the viability of their business. This position deserves some sympathy in the case of journals published by scientific societies, which are typically non-profit organizations that perform valuable functions including scientific outreach, education and lobbying. However, for-profit journals are less justified in making a significant profit out of restricting access to information that was, in most cases, obtained through publicly funded research.

Restricting access to information gathered in the course of research risks obscuring the value of research to a public that is already skeptical about investing in basic science, and it slows down and increases the cost of innovation. In light of this, there is growing pressure on publishers to provide options for open-access publishing. In 2008, the National Institutes of Health adopted a public access policy, which requires that “investigators funded by the NIH submit or have submitted for them to the National Library of Medicine’s PubMed Central an electronic version of their final, peer-reviewed manuscripts upon acceptance for publication, to be made publicly available no later than 12 months after the official date of publication: Provided, that the NIH shall implement the public access policy in a manner consistent with copyright law.” This policy was extended through a 2013 Obama Administration directive to include all federal agencies with research budgets greater than $100 million, with additional requirements to improve accessibility.
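
To make the 12-month embargo window concrete, here is a minimal, standard-library sketch that computes the latest date a manuscript could remain behind a paywall under the policy quoted above; the function name and the example publication date are illustrative.

```python
from datetime import date


def latest_public_release(official_publication_date):
    """Latest allowable public-release date under the NIH policy's 12-month
    window: one year after the official date of publication. (A February 29
    publication date is clamped to February 28 in the following year.)"""
    try:
        return official_publication_date.replace(year=official_publication_date.year + 1)
    except ValueError:  # source date was Feb 29 and the next year is not a leap year
        return official_publication_date.replace(year=official_publication_date.year + 1, day=28)


# An article officially published on 20 April 2017 must be publicly available
# in PubMed Central no later than:
print(latest_public_release(date(2017, 4, 20)))  # 2018-04-20
```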

These requirements are changing scientific publishing and will improve access to information, but they remain limited relative to the demand for access, as evidenced by the existence of paper-pirating websites and the success of open-access journals like PLoS and eLife. Additionally, other funding agencies like the Bill and Melinda Gates Foundation and the Wellcome Trust have imposed even more stringent requirements for open access. Indeed, researchers will find a spectrum of open-access policies among the available journals, with the most rapid access to information offered by so-called ‘preprint’ servers like biorxiv.org. Given that many research manuscripts require months or years of revision and re-revision during submission to (usually multiple) journals, preprint servers accelerate the dissemination of information that is potentially valuable for innovation by allowing researchers to post manuscripts prior to acceptance in a peer-reviewed journal. Many journals have now adopted explicit policies for handling manuscripts previously posted to bioRxiv, and many of them treat these manuscripts favorably.

Given that most journals accept manuscripts that have previously been posted on bioRxiv, and some journals even look to bioRxiv for content, there is little incentive to submit to journals without also posting to bioRxiv. If the goal is, as stated above, to improve the transparency and efficiency of research in order to make science work for the public, then scientists should take every opportunity to make their data as accessible as possible, and as quickly as possible. Similarly, funding agencies should continue to push for increased access by validating preprint publications as acceptable evidence of productivity in progress reports and grant applications, and by incentivizing grant recipients to submit manuscripts simultaneously to preprint servers and peer-reviewed journals. Scientists have many options when they publish, and by voting for good open-access practices with their manuscripts, they have the opportunity to guide the direction of the future of scientific publishing. These small but important actions may improve the vitality of research and increase the rate at which discoveries tangibly benefit taxpayers, and, in combination with science outreach and education, may ultimately strengthen the relationship between scientists and the public.

March for Science this Saturday, if it feels like the right thing to do, and then strive to make science work better for everyone by sharing the fruits of research.

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

April 20, 2017 at 11:44 am