Science Policy For All

Because science policy affects everyone.


Pharmaceutical Detailing: in the US the Details are Tied to the Prescriber’s Name


By: Allison Dennis B.S.


Source: pixabay

While U.S. privacy laws protect patients from direct pharmaceutical marketing and shield their personal information from data mining, physicians are routinely identified based on their prescribing habits and targeted by pharmaceutical companies through personalized marketing campaigns. By their very nature, these campaigns aim to influence the behavior of prescribers. In other countries, including those protected by the European Union’s Data Protection Act, the personal identification of prescribers through medical data is strictly forbidden. However, in the U.S. these personalized campaigns are made possible by a robust pipeline of data sharing.

The pipeline begins with pharmacies, which routinely sell data derived from the vast volume of prescriptions they handle. While the prescribers’ names are usually redacted, IMS Health, a key health information organization in the pipeline, can easily use the American Medical Association (AMA)-licensed Physician Masterfile to reassociate physician ID numbers with the redacted names. The physician ID numbers are issued by the U.S. Drug Enforcement Administration (DEA) and are sold to the AMA through a subscription service. IMS Health uses the prescription data to develop analytic tools for sale to pharmaceutical companies desperate to gain a marketing edge with individual prescribers. The tools consolidate the activity of nurse practitioners, dentists, chiropractors, and any other professionals who can legally write a prescription. Marketers can use these tools to determine how much each named physician is prescribing, how that compares to other named physicians, what their specialty is, and so on.
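Mechanically, the reassociation step described above is just a join between the “redacted” prescription records (which still carry a prescriber ID) and a licensed roster of names keyed by the same ID. A minimal sketch in Python; every record, name, and ID below is invented for illustration, since the real Masterfile and prescription feeds are proprietary:

```python
# Toy illustration of the re-identification join the essay describes.
# "Redacted" prescription records still carry a prescriber ID number:
prescriptions = [
    {"prescriber_id": "AB1234567", "drug": "DrugX", "count": 120},
    {"prescriber_id": "CD7654321", "drug": "DrugX", "count": 15},
]

# A hypothetical Masterfile-style roster maps those IDs back to names:
masterfile = {
    "AB1234567": {"name": "Dr. A", "specialty": "Internal Medicine"},
    "CD7654321": {"name": "Dr. B", "specialty": "Pediatrics"},
}

# Joining the two by ID defeats the redaction of names entirely:
report = [
    {**masterfile[rx["prescriber_id"]], "drug": rx["drug"], "count": rx["count"]}
    for rx in prescriptions
]

for row in report:
    print(row["name"], row["specialty"], row["drug"], row["count"])
```

The point of the sketch is that once both datasets exist, re-identification requires no sophistication at all; the privacy of the prescriber rests entirely on whether the roster is for sale.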

The data contained in the AMA’s Physician Masterfile are useful for informing research and conducting surveys of practicing physicians, yet identifying physicians by name is usually unnecessary for public health research and enables prescriber manipulation. The prescriber reports compiled by IMS Health enable pharmaceutical companies to take a data-driven approach to direct-to-physician advertising, a practice known as detailing. During a 17-month period between 2013 and 2015, pharmaceutical companies reported spending $3.5 billion in payments to physicians covering promotional speaking, consulting, meals, travel, and royalties. While many of these expenditures may be tied to legitimate collaborations between pharmaceutical companies and medical professionals, the U.S. Department of Health and Human Services warns that free samples, sham consulting agreements, subsidized trips, and industry-sponsored continuing education opportunities are all tools used by vendors to buy medically irrelevant loyalty. Indeed, physicians themselves seem conflicted over the significance of these relationships. When residents were asked if contact with pharmaceutical representatives influenced their prescribing practices, 61% believed they were unaffected. However, the same residents felt that only 16% of their peers were similarly immune to contact with pharmaceutical representatives.

Studies examining the role of detailing have found it associated with higher prescribing frequency, higher costs, and lower prescribing quality, with no contrasting favorable associations. Recent concerns over conflicts of interest arising from increased exposure of physicians to detailers led several academic medical centers to restrict sales visits and gift giving and to implement enforcement mechanisms. Compared to hospitals with no detailing limitations, hospitals with limitations underwent an 8.7% relative decrease in the market share of detailed drugs and a 5.6% relative increase in the market share of non-detailed drugs. Overuse of brand-name drugs, which are most commonly associated with detailing, cost the US approximately $73 billion between 2010 and 2012, one-third of which was shouldered by patients. Advocates of the practice lament the lack of formal academic opportunities for physicians to learn about new drugs, believing the educational materials provided by pharmaceutical representatives fulfill a need.

The most tragic example of the potential harms of detailing targeting individual prescribers comes from the early days of the prescription opioid crisis. Purdue Pharma, the maker of OxyContin, used prescriber databases to identify the most frequent and least discriminate prescribers of opioids. Sales representatives, enticed by a bonus system that tracked their success according to upswings captured in the prescriber database, showered their target prescribers with gifts while systematically underrepresenting the risk of addiction and abuse from OxyContin. Recruitment into Purdue’s national speaker bureau and subsequent paid opportunities were further used to entice lukewarm and influential prescribers.

The last decade has seen several attempts to address the influence of detailing at the institutional, professional, and executive levels. Individual hospitals have begun limiting the access of physicians to vendors. The American Medical Student Association began issuing a conflict-of-interest scorecard, allowing all U.S. medical schools to track and assess their own detailing-related policies, including those limiting gifts from industry, industry-sponsored promotional speaking relationships, and the permitted access of pharmaceutical sales representatives, as well as the overall enforcement and sanction of these policies. In 2016, 174 institutions participated. The AMA, which licenses the list of physician names used by health information organizations, has offered physicians the chance to block pharmaceutical representatives and their immediate supervisors from accessing their prescribing data. However, the Physician Data Restriction Program does not limit the ability of other employees at a pharmaceutical company to access the prescribing data of doctors who have opted out. Physicians must renew their request to opt out every three years and are automatically added to the Masterfile upon entering medical school. Five years after the program’s introduction in 2006, just 4% of practicing physicians listed on the file had opted out.

In 2007, the state of Vermont outlawed the practice of selling prescription data for pharmaceutical marketing without prescriber consent. The law was quickly challenged by IMS Health, the Pharmaceutical Research and Manufacturers of America, and other data aggregators and eventually struck down by the U.S. Supreme Court. Vermont legislators held that detailing compromises clinical decision making and professionalism and increases health care costs and argued that the law was needed to protect vulnerable and unaware physicians. However, the Court held that speech in the aid of pharmaceutical marketing is protected under the First Amendment and could not be discriminately limited by Vermont law.

Congress made the first federal attempt to address the issue by enacting the Physician Payment Sunshine Act in 2010, which required companies participating in the Medicare, Medicaid, and State Children’s Health Insurance Program markets to track and report their financial relationships with physicians and teaching hospitals. The transparency gained from the disclosures has allowed many researchers to systematically evaluate connections between conflicts of interest and prescribing behavior.

As policy makers and private watchdogs scramble to address the issues of detailing, the availability of physician names and prescription habits continues to facilitate the implementation of novel tactics. Limits on face time have pushed detailers to tap into the time physicians spend online. When the names of prescribers are known, following and connecting with prescribers through social media accounts is straightforward. Companies like Peerin have emerged that analyze prescriber Twitter conversations to learn whose conversations are most likely to be influential and which prescribers are connected. LinkedIn, Facebook, and Twitter all offer the ability to target a list of people by name or e-mail address for advertising. While all online drug ads are regulated by the U.S. Food and Drug Administration, pharmaceutical companies are experimenting with unbranded awareness campaigns to circumvent direct-to-consumer regulations.

While personalized prescriber marketing campaigns may be turning a new corner in the internet age, a simple opportunity exists at the federal level to de-personalize the practice of physician detailing. It is unclear how much the DEA stands to gain from selling physician ID subscriptions; however, in the context of the downstream costs of the overuse of name-brand drugs, forgoing that revenue may be an appropriate loss. The U.S. Government’s central role in the reassociation of prescribers’ prescriptions could be directly addressed through systematic implementation of revised policy in order to preempt downstream prescriber manipulation.

Have an interesting science policy link?  Share it in the comments!


Written by sciencepolicyforall

November 9, 2017 at 10:41 pm

Science For All – Effective Science Communication and Public Engagement


By: Agila Somasundaram, PhD

Image: By Scout [CC0], via Wikimedia Commons

In 1859, Charles Darwin published On the Origin of Species, laying the foundation for the theory of evolution through natural selection. Yet more than 150 years after that discovery, and despite a large volume of scientific evidence supporting it, only 33% of the American population believes that humans evolved solely through natural processes. 25% of US adults believe that a supreme being guided evolution, and 34% reject evolution completely, saying that humans and all other forms of life have co-existed forever. Similarly, only 50% of American adults believe that global climate change is mostly due to human activity, with 20% saying that there is no evidence for global warming at all. A significant fraction of the public believes that there is large disagreement among scientists on evolution and climate change (the reality being that there is overwhelming scientific evidence and consensus), and questions scientists’ motivations. Public skepticism about scientific evidence and scientists extends to other areas such as vaccination and genetically-modified foods.

Public mistrust in the scientific enterprise has tremendous consequences, not only for federal science funding and the advancement of science, but also for the implementation of effective policies to improve public and global health and combat issues such as global warming. In her keynote address at the 2015 annual meeting of the American Society for Cell Biology, Dr. Jane Lubchenco described the Science-Society Paradox: scientists need society, and society needs science. How then can we build public support for science, and improve public trust in scientists and scientific evidence?

Scientists need to be more actively involved in science outreach and public engagement efforts. Communicating science in its entirety, not just as sensational news, requires public understanding of science and familiarity with the scientific process: its incremental nature, breakthrough discoveries (which don’t necessarily mean a cure), failures, and limitations alike. Who better to explain that to the public than scientists, the skilled professionals at the center of the action? In a recent poll, more than 80% of Americans agreed that scientists need to interact more with the public and policymakers. But two major hurdles need to be overcome.

Firstly, communicating science to the public is not easy. Current scientific training prepares researchers to communicate science, in written and oral formats, largely to peers. As scientists become more specialized in their fields, the technical terms and concepts (jargon) that they use frequently may be incomprehensible to non-experts (even to scientists outside their field). The scientific community would benefit tremendously from formal training in public engagement. Such training should be incorporated into the early stages of professional development, including undergraduate and graduate school. Both students and experienced scientists should be encouraged to make use of workshops and science communication opportunities offered by organizations such as AAAS, the Alan Alda Center for Communicating Science, and iBiology, to name a few.

Secondly, federal funding agencies and philanthropic organizations should provide resources, and academic institutions should create avenues and incentives, for scientists to engage with the public. Both students and scientists should be allowed time away from their regular responsibilities to participate in public outreach efforts. Instead of penalizing scientists for popularizing science, institutions should take outreach efforts into consideration during promotion, grant, and tenure decisions, and reward exceptional communicators. Trained scientist-communicators will be able to work better with their institutions’ public relations staff and science journalists to disseminate their research findings more accurately to a wider audience, and to educate the public about the behind-the-scenes world of science that is rarely seen from outside. Engaging with the public could also benefit researchers directly by increasing their scientific impact and influencing research directions to better serve society.

While increasing science outreach programs and STEM education may seem like obvious solutions, the science of science communication tells us that it is not so simple. The goals of science communication are diverse, ranging from generating or sharing scientific excitement, to increasing knowledge of a particular topic, to understanding the public’s concerns, to actually influencing people’s attitudes towards broader science policy issues. Diverse communication goals target a diverse audience and require an assortment of communicators and communication strategies. Research has shown that simply increasing the public’s scientific knowledge does not accomplish these various communication goals. This is because people don’t rely solely on scientific information to make decisions; they are influenced by their personal needs, experiences, values, and cultural identity, including their political, ideological, or religious affiliations. People also tend to adopt shortcuts when trying to comprehend complex scientific information, and to believe more in what aligns with their pre-existing notions or with the beliefs of their social groups, and in what they hear repeatedly from influential figures, even if incorrect. Effective science communication requires identifying, understanding, and overcoming these and other challenges.

The National Academies of Sciences, Engineering, and Medicine convened two meetings of scientists and science communicators, one in 2012 to gauge the state of the art of research on science communication, and another in 2013 to identify gaps in our understanding of science communication. The resulting research agenda outlines important questions requiring further research. For example, what are the best strategies to engage with the public, and how can those methods be adapted for multiple groups without directly challenging their beliefs or values? What are effective ways to communicate science to policymakers? How do we help citizens navigate misinformation in a rapidly changing internet and social media landscape? How can the effectiveness of different science communication strategies be assessed? And lastly, how do we build the science communication research enterprise? Researchers studying communication in different disciplines, including the social sciences, need to come together and partner with science communicators to translate that research into practice. The third colloquium in this series will be held later this year.

Quoting Dr. Dan Kahan of Yale University, “A central aim of the science of science communication is to protect the value of what is arguably our society’s greatest asset…Modern science.” As evidence-based science communication approaches are developed further, it is critical that scientists make scientific dialogue a priority and make use of existing resources to engage effectively with the public (meet people where they are) and bring people a step closer to science (show why each person should care), so that ‘post-truth’ doesn’t go from being merely the word of the year to a scary new way of life.


Written by sciencepolicyforall

July 22, 2017 at 11:27 pm

The Economic Impact of Biosimilars on Healthcare


By: Devika Kapuria, MD

Biologic drugs, also known as large molecules, are an ever-increasing source of healthcare costs in the US. In contrast to small, chemically manufactured molecules, the classic active substances that make up 90 percent of the drugs on the market today, biologics are therapeutic proteins produced through biotechnological processes, some of which may require over 1,000 steps. The average daily cost of a biologic in the US is $45, compared with only $2 for a chemical drug. Though expensive, their advent has significantly changed disease management and improved outcomes for patients with chronic diseases such as inflammatory bowel disease, rheumatoid arthritis, and various forms of cancer. Between 2015 and 2016, biologics accounted for 20% of the global health market, and they are predicted to increase to almost 30% by 2020. Worldwide revenue from biologic drugs quadrupled from US $47 billion in 2002 to over US $200 billion in 2013.

The United States Food and Drug Administration (FDA) has defined a biosimilar as a biologic product that is highly similar to the reference product, notwithstanding minor differences in clinically-inactive components, and for which there are no clinically meaningful differences between the biologic product and the innovator product in terms of safety, purity, and efficacy. For example, CT-P13 (Inflectra) is a biosimilar to infliximab (a chimeric monoclonal antibody against TNF-α) that has recently obtained FDA approval for the treatment of inflammatory bowel disease. CT-P13 has similar but slightly different pharmacokinetics and efficacy compared to infliximab. With many biologics going off patent, the biosimilar industry has expanded greatly. In the last two years alone, the FDA approved four biosimilar medications: Zarxio (filgrastim-sndz), Inflectra (infliximab-dyyb), Erelzi (etanercept-szzs), and Amjevita (adalimumab-atto).

Unlike generic versions of chemical drugs (small molecules that are significantly cheaper than their branded counterparts), biosimilars are not dramatically cheaper than the original biologics, for several reasons. First, the development time and cost for biosimilars are much greater than for generic medications: developing a biosimilar takes 8-10 years and several hundred million dollars, compared to around 5 years and $1-$5 million for the generic version of a small molecule drug. With only single entrants per category in the US, biosimilars are priced 15-20% lower than their brand-name rivals, which, though cheaper, still amounts to hundreds of thousands of dollars. By the end of 2016, estimated global sales of biosimilars amounted to US $2.6 billion, and they are projected to reach nearly $4 billion by 2019. Estimates of the cost savings of biosimilars for the US market vary; the Congressional Budget Office estimated that the BPCI (Biologics Price Competition and Innovation) Act of 2009 would reduce expenditures on biologics by $25 billion by 2018. Another analysis, from the Rand Corporation, estimated that biosimilars would result in a $44.2 billion reduction in biologic spending between 2014 and 2024.
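The 15-20% discount figure makes the limited savings easy to see with back-of-the-envelope arithmetic. A minimal sketch; the $50,000 annual drug cost and the 50% switch rate below are illustrative assumptions, not figures from this article:

```python
# Rough per-patient-year savings when some patients switch to a
# biosimilar priced at (1 - discount) of the reference biologic.
# Only the 15-20% discount range comes from the article; the other
# numbers are invented for illustration.

def biosimilar_savings(annual_biologic_cost, discount, uptake):
    """Expected savings per patient-year given a discount fraction
    and the fraction of patients who switch to the biosimilar."""
    return annual_biologic_cost * discount * uptake

# A hypothetical $50,000/year biologic, 15% discount, 50% switch rate:
print(biosimilar_savings(50_000, 0.15, 0.50))  # -> 3750.0
```

At a 15% discount, even full uptake recovers only a small slice of the original cost, which is why the essay contrasts this with the far steeper price drops seen for small-molecule generics.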

In the United States, a regulatory approval pathway for biosimilars was not established until the Patient Protection and Affordable Care Act of 2010. However, biosimilars have been used in Europe for over a decade, and this has led to the development of strategies for quicker adoption, including changes in manufacturing, scaling up production, and adapting to local healthcare policies. These changes have made biosimilars competitive in the European market, with first-generation biosimilars capturing between 50% and 80% of the market across five European countries and expected cost savings of $15 to $44 billion by 2020. One example of a significant discount involves the marketing of Remsima, a biosimilar of Remicade (infliximab). In Norway, an aggressive approach to marketing Remsima was adopted, with a 69% discount relative to the reference product. After two years, Remsima had garnered 92.9% of the market share in the country.

The shift to biosimilars may be challenging for both physicians and patients. While safety concerns related to biosimilars have been alleviated by post-marketing studies from Europe, there remains a significant lack of awareness about biosimilars among healthcare providers, especially about prescribing and administering them. Patient acceptance remains an important aspect as well: many patients are loyal to the reference brand and may not have the same level of confidence in the biosimilar. Also, as with generics, patients may believe that biosimilars are in some way inferior to the reference product. Increased reporting of post-marketing studies and pharmacovigilance can play a role in alleviating some of these concerns.

In 2015, the FDA approved the first biosimilar in the US, after which it published a series of guidelines for biosimilar approval under the BPCI Act, including requirements for demonstrating biosimilarity and interchangeability with the reference product. These comprise a total of three final guideline documents and five draft guidance documents. Starting in September 2017, the World Health Organization will accept applications for prequalification into its Essential Medication list for biosimilar versions of rituximab and trastuzumab, for the treatment of cancer. This program ensures that medications purchased by international agencies like UNICEF meet standards for quality, safety, and efficacy. Hopefully, this will increase competition in the biosimilar market, reducing prices and increasing access to medications in low-income countries.

Both human and economic factors need to be considered in this rapidly growing field. Increasing awareness among prescribers and patients of the safety and efficacy of biosimilars, as well as improving regulatory aspects, is essential for the widespread adoption of biosimilars.


Written by sciencepolicyforall

July 19, 2017 at 10:42 am

Growing Need for More Clinical Trials in Pediatrics


By: Erin Turbitt, PhD

Source: Flickr by Claudia Seidensticker via Creative Commons

There have been substantial advances in biomedical research in recent decades in the US, yet children have not benefited from improvements in health and well-being to the same degree as adults. An illustrative example is that many drugs used to treat children have not been approved for that use by the Food and Drug Administration (FDA); comparatively, many more drugs have been approved for use in adult populations. As a result, some drugs are prescribed to pediatric patients outside the specifications for which they have been approved, referred to as ‘off-label’ prescribing. For example, some drugs approved for Alzheimer’s Disease are used to treat Autism in children: the drug donepezil, used to treat dementia in Alzheimer’s patients, is used to improve sleep quality in children with Autism. Another example is the use of the pain medication paracetamol in premature infants in the absence of knowledge of its effects in this population. While decisions about off-label prescribing are usually informed by scientific evidence and professional judgement, there may be associated harms. There is growing recognition that children are not ‘little adults’ and that their developing brains and bodies may react differently from those of fully developed adults. While doses for children are often calculated by scaling from adult dosing after adjusting for body weight, the stage of development of the child also affects responses to drugs. Babies have difficulty breaking down drugs due to the immaturity of the kidneys and liver, whereas toddlers are able to break down drugs more effectively.
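The linear weight-based adjustment mentioned above can be sketched in a few lines. This is purely an illustration of the calculation's naivety, not medical guidance; the 70 kg reference weight and the doses below are invented, and as the paragraph notes, real pediatric dosing must also account for organ maturity, which a linear scale ignores:

```python
# Naive weight-based scaling of an adult dose, the kind of linear
# adjustment the article says is often used for children. The
# reference adult weight is an assumption for illustration only.

ADULT_WEIGHT_KG = 70  # assumed reference adult weight

def scaled_dose(adult_dose_mg, child_weight_kg):
    """Linearly scale an adult dose by body weight.

    Ignores developmental stage (kidney/liver maturity), which is
    exactly the limitation the article describes."""
    return adult_dose_mg * child_weight_kg / ADULT_WEIGHT_KG

# A hypothetical 500 mg adult dose for a 14 kg toddler:
print(scaled_dose(500, 14))  # -> 100.0
```

A baby and a toddler of similar weight would receive similar doses under this formula even though, per the paragraph above, their capacities to break down the drug differ, which is one reason pediatric trial data are needed.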

The FDA requires data about drug safety and efficacy in children before issuing approvals for the use of drugs in pediatric populations. The best way to produce this evidence is through clinical drug trials. Historically, the use of children in research has been ethically fraught, with some of the earliest examples coming from vaccine trials, such as the development of the smallpox vaccine in the 1790s. Edward Jenner, who developed the smallpox vaccine, is famously reported to have tested the vaccine on several young children, including his own, without consent from the children’s families. Over the next few centuries, many researchers would test new treatments, including drugs and surgical procedures, on institutionalized children. It was not until the early 20th century that these practices were criticized and debate began over the ethical use of children in research. Today, in general, ethical guidance for the inclusion of children in research specifies that individuals unable to exercise informed consent (including minors) are permitted to participate in research provided informed consent is obtained from a parent or legal guardian. In addition to a guardian’s informed consent, the assent (‘affirmative agreement’) of the child is also required where appropriate. Furthermore, research protocols involving children must undergo rigorous evaluation by Institutional Review Boards before researchers can proceed.

Contributing to the lack of evidence on the effects of drugs in children is the fact that fewer clinical trials are conducted in children than in adults. One study reports that from 2005 to 2010, ten times fewer trials were registered in the US for children than for adults. Recognizing the need to increase the number of pediatric clinical trials, the FDA introduced incentives to encourage the study of interventions in pediatric populations: the Best Pharmaceuticals for Children Act (BPCA) and the Pediatric Research Equity Act (PREA). The BPCA delays approval of competing generic drugs by six months and encourages the NIH to prioritize pediatric clinical trials for drugs that require further evidence in children. The PREA requires more companies to have pediatric-focused drugs assessed in children. Combined, these initiatives have resulted in benefits such as improved labeling of over 600 drugs to include pediatric safety information, such as approved use and dosing information. Noteworthy examples include two asthma medications, four influenza vaccines, six medications for seizure disorders, and two products for treating migraines. However, downsides to these incentives have also been reported. Pediatricians have voiced concern over the increasing cost of some of these drugs developed specifically for children, which have involved minimal innovation. For example, approval of a liquid formulation of a drug used to treat heart problems in children has resulted in this formulation costing 700 times more than the tablet equivalent.

A further aspect that must be considered when conducting pediatric clinical trials is the large dropout rate of participants and the difficulty of recruiting adequate numbers of children (especially for trials in rare disease populations), which sometimes leads to discontinuation of trials. A recent report indicates that 19% of trials were discontinued early from 2008 to 2010, with an estimated 8,369 children enrolled in trials that were never completed. While some trials are discontinued for safety reasons or for efficacy findings that suggest changes in the standard of care, many (37%) are discontinued due to poor patient accrual. There is insufficient research on the factors influencing parental decision-making about entering a child in a clinical trial, and research in this area may lead to improvements in patient recruitment for these trials. This research must include or be informed by members of the community, such as parents of children deciding whether to enroll their child in a clinical trial, and disease advocacy groups. The FDA has an initiative to support the inclusion of community members in the drug development process. Through the Patient-Focused Drug Development initiative, patient perspectives are sought on the benefit-risk assessment process. For example, patients are asked to comment on what worries them most about their condition, what they would consider a meaningful improvement, and how they would weigh the potential benefits of treatments against common side effects. This initiative involves public meetings held from 2013 to 2017 covering over 20 disease areas. While the majority of the diseases selected more commonly affect adults than children, some child-specific disease areas are included. For example, on May 4, 2017, a public meeting was held on Patient-Focused Drug Development for Autism. The meeting included discussions from a panel of caregivers about the significant health effects and daily impacts of autism and current approaches to treatment.

While it is encouraging that the number of pediatric trials is increasing, ultimately leading to improved treatments and outcomes for children, many challenges remain for pediatric drug research. Future research in this area must explore parental decision-making and experiences, which can reveal the motivations and risk tolerances of parents considering entering their child in a clinical trial and potentially improve trial recruitment rates. This research can also help ensure that clinical trials are ethically conducted, adequately balancing the need for more research with the potential for harm to pediatric research participants.


Written by sciencepolicyforall

May 24, 2017 at 5:04 pm

How Easy is it to Access Health Care in the US?


By: Rachel F Smallwood, PhD

Source: pixabay

Access to health care has been a concern as long as there has been health care, and it is one of the hot-button issues of health care policy debates. The recent effort to repeal the Affordable Care Act and the passing of the American Health Care Act (AHCA) in the House of Representatives have again brought this debate front and center. The Congressional Budget Office’s analysis of the first iteration of the AHCA indicated that it would result in 24 million fewer people having health insurance by 2026. It would also place more of the financial burden on people making less than $50,000 per year. However, substantial changes were made to parts of the bill before it passed in the House, and there will likely be more if it is to pass in the Senate. There is much debate and dissension over what level of access to health care should be provided by the government and whether health care is a right or a privilege. Beyond that debate, there are other facets of the United States’ health care system that need examination and work to ensure access to health care.

There are many reasons a person may not have access to health care – not having health insurance is just one. To measure access to health care, one must first define it. Is there some quality standard that must be met for treatment to be considered health care? How do we determine whether one person’s health care is equivalent to another’s? With health care measures that range from necessary, recommended but not dire, to completely elective, even these differences can be difficult to quantify. Most institutions collecting data on health care use a working definition like that set by the Institute of Medicine in 1993: access to health care means a person is able to use health care services in a timely manner to achieve positive health outcomes. This implies that a person can enter the health care system, physically get to a place where they can receive health care, and find physicians whom they trust and who can provide the needed services.

Indeed, there are differing opinions on what constitutes “access”, and this heterogeneity is further compounded by the multiple barriers to access. For example, during the recent AHCA debate, many representatives spoke about separating the concepts of health care coverage and health care access, while others argued that the two are inseparable. At least four factors can limit a person’s access to health care. The first is the availability of health services: if the necessary care is not provided within reasonable traveling distance of the person seeking it, none of the other factors matter. The remaining three are personal barriers, such as a person’s perceptions, attitudes, and beliefs about their own health and health care; organizational barriers, such as referrals, waiting lists, and wait times; and financial barriers, such as inability to afford insurance, copays, costs beyond deductibles, and lost wages.

The current policy in the United States is the Affordable Care Act, put into place under the Obama administration. One of the most contentious points of the law is its requirement that every person have health care coverage or pay a penalty. A 2015 survey released by the National Center for Health Statistics indicated a substantial drop in the percentage of the US population without insurance over the previous few years. There was also a slight increase in the percentage of people with a usual place to go for health care (i.e., a primary care provider or clinic for regular check-ups) and a decrease in the number of people who failed to obtain needed health care because of cost. Still, judged by the steps and measures discussed by the Agency for Healthcare Research and Quality, simply requiring everyone to purchase health insurance did not produce a commensurate rise in people gaining access to health care. Additionally, there have been substantial increases in premiums, which means that many consumers still face a significant financial barrier to health care.

The numbers and policies referenced above address the country as a whole, but statistics vary widely across regions of the United States. US News ranked states on their access to health care using six metrics: child and adult wellness visits, child and adult dental visits, health insurance enrollment, and health care affordability. These measures span wide ranges: in the highest ranked states about 20% of adults do not have regular checkups, while in the lowest ranked states around 40% do not. In the highest ranked state for affordability, the fraction of people who needed to see a doctor but could not because of cost was around 7%, while in the lowest ranked state this percentage was just under 20%. Some of this variation reflects differing demographics and living conditions from state to state, but the discretion and freedom that states have in applying health care laws also factor in.

When compared with other similar (high-income) nations, the United States falls short on access to health care. Although the Affordable Care Act improved access to health insurance, the US still lags when it comes to its residents receiving actual care. This is partially due to fewer physicians practicing general medicine in the US. In 2013, the US ranked below all other Organization for Economic Co-operation and Development countries, except for Greece, in the density of general practitioners per 1,000 people. A related measure showed that the US also had a lower percentage of physicians choosing general practice/primary care as their specialty than all other 35 countries. These countries are all World Bank-categorized high-income countries except for Mexico and Turkey, which are upper middle-income (and still had better statistics than the US). This disparity has been noted in the US and is driven by many factors, including physician salaries, patient loads, and medical education’s emphasis (or lack thereof) on primary care. The shortage also disproportionately affects rural areas, likely contributing to some of the state-to-state variability noted above.

The United States is struggling, compared with similar nations, to provide health care access to its citizens. The reasons for this struggle are multifaceted, including access to health insurance, financial barriers, and a lack of primary care physicians. The political tensions and opposing principles held by individuals can also be barriers to working toward a more accessible health care system. We should focus on developing a health care system in which all can reasonably obtain health insurance and health care costs are not prohibitively expensive, and medical education should emphasize the importance of primary care to our nation’s health and communicate the need for practitioners in under-served areas. Shedding light on these areas for improvement will allow people to work together to address our weaknesses and create a system that improves and sustains the health of our nation.

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

May 19, 2017 at 10:16 am

How GMOs Could Help with Sustainable Food Production

By: Agnes Donko, PhD

World Population estimates from 1800 to 2100

The world population has exceeded 7.5 billion, and by 2050 it is expected to reach 9.7 billion. The challenge of feeding this ever-growing population is exacerbated by global warming, which may lead to more frequent droughts and the melting of Arctic sea and Greenland ice. The year 2016 was the warmest ever recorded, with an average temperature 1.1 °C above the pre-industrial level and 0.06 °C above the previous record set in 2015. According to the United Nations, the world faces the largest humanitarian crisis in East Africa since the organization’s founding in 1945, particularly in Yemen, South Sudan, Somalia and Nigeria. In these countries, 20 million people face starvation and famine this year because of drought and regional political instability.

How could genetically modified organisms (GMO) help?

The two main GMO strategies are herbicide-tolerant (HT) and insect-resistant crops. HT crops were developed to survive application of specific herbicides (such as glyphosate) that would otherwise destroy the crop along with the targeted weeds. Insect-resistant crops contain a gene from the soil bacterium Bacillus thuringiensis (Bt) that encodes a protein toxic to specific insects, thus protecting the plant. Insect-resistant crops can decrease the ecological footprint of cultivation in two ways: first, by reducing insecticide use, which in turn reduces the environmental impact of insecticide production; and second, by cutting the fuel consumption and carbon dioxide (greenhouse gas) emissions associated with spraying rounds and tillage. Thus, adoption of GM technology by African nations and other populous countries like India could support sustainable agriculture and help ameliorate the burden of a changing climate and growing populations.

In developed nations, especially in the US, GM technology has been widely used since the mid-1990s, mainly in four crops: canola, maize, cotton and soybean. GM crops accounted for 93 percent of cotton, 94 percent of soybean and 92 percent of corn acreage in the US in 2016. Although the emergence of glyphosate-resistant weeds has increased herbicide usage, in 2015 the global insecticide savings from insect-resistant maize and cotton were 7.8 million kg (an 84% decrease) and 19.3 million kg (a 53% decrease), respectively, compared with the pesticide usage expected with conventional crops. Globally, these savings corresponded to a reduction of more than 2.8 million kg of carbon dioxide emissions, equivalent to taking 1.25 million cars off the road for one year.

Another way in which GM crops can help sustainable food production is by reducing food wastage in developed nations. The Food and Agriculture Organization of the United Nations (FAO) estimates that one-third of all food produced for human consumption in the world (around 1.3 billion tons) is lost or wasted each year, including 45% of all fruits. For example, when an apple is bruised, an enzyme called polyphenol oxidase oxidizes polyphenols, turning the apple’s flesh brown. But nobody wants to buy brown apples, so bruised apples are simply trashed. In Arctic apples, the level of this enzyme is reduced by gene silencing, thereby preventing browning. The Arctic apple obtained USDA approval in 2015 and is expected to reach the market in 2017.

In 2015, the FDA approved the first genetically engineered animal intended for human consumption, a genetically modified Atlantic salmon called AquAdvantage. Conventional salmon farming has terrible effects on the environment. AquAdvantage, however, contains a growth-hormone-regulating transgene that accelerates growth, decreasing the farming time from three years to 16-18 months. This would dramatically reduce the ecological footprint of fish farming, leading to more sustainable food production. Even though the FDA found no difference in nutritional profile between AquAdvantage and its natural counterpart, AquAdvantage will not hit the U.S. market any time soon, because the FDA has banned import and sale until exact guidelines on how this product should be labelled are published.

This FDA action was prompted by bill S. 764, signed by former president Barack Obama in 2016. Bill S. 764 requires food companies to disclose GMOs, but not necessarily with a GMO text label on the packaging. They may instead label GM ingredients with a symbol or a QR (quick response) code that, when scanned by a smartphone, leads the consumer to a website with more information on the product. But this requires the consumer to have both a smartphone and access to the internet. Critics also charge that the bill has lax standards and a broad definition of what must be disclosed. For instance, if the majority of a product is meat but some less significant ingredient is produced from GM crops, the product need not be labelled. Oil extracted from GM soybean and starch purified from GM corn are likewise exempt from labeling, because although they are derived from GM sources, they no longer contain any genetic material. By contrast, European Union (EU) regulations require that the phrase “genetically modified” or “produced from genetically modified [name of the organism]” appear clearly next to the ingredient list. If the food is not packaged, the same phrase must appear on or next to the food display. The EU also sets an unambiguous threshold (0.9%) below which GMO content in conventional food or feed is exempt from labelling.

Despite its controversial labeling guidelines, bill S. 764 could end the long-fought battle of the Just Label It campaign. The bill was a huge step toward the right to know, letting individuals decide whether they want to consume GM foods. GMOs can significantly support sustainable food production and reduce humanity’s destructive environmental impact, but only if we let them.

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

May 12, 2017 at 5:13 pm

How Science Policy Affects Pandemic Pathogen Research

By: Samuel Porter, PhD

In 2012, a pair of studies published weeks apart in Nature and Science ignited one of the biggest national debates about science in recent memory. These studies demonstrated that a few mutations in the highly pathogenic H5N1 strain of influenza virus (colloquially known as “bird flu”) could enable it to be transmitted through the air between mammals. At the heart of the controversy was the question of whether scientists should be creating more virulent and/or transmissible strains of deadly viruses in the lab. Such controversial work is known as “gain of function” research.

Critics claimed that the research was so dangerous that the risk of an accidental or deliberate release of these lab strains far outweighed the scientific and public health benefits. In an attempt to respond to the growing concern over their work, the community of researchers working with these pathogens voluntarily agreed to suspend gain of function research for 60 days to discuss new policies on conducting the research safely.

But that was not enough to satisfy critics of the research, who continued to lobby the Obama administration to take official action. On October 17, 2014, the White House Office of Science and Technology Policy (OSTP) abruptly announced a pause on all U.S. Government funding of gain of function research on influenza, Middle East respiratory syndrome (MERS), and severe acute respiratory syndrome (SARS) coronavirus until the National Science Advisory Board for Biosecurity (NSABB) could make recommendations for policy regulating the research going forward. The NSABB was formed in 2005 (in the wake of the anthrax attacks of 2001) and is composed of scientists from universities around the nation and administrators from 14 separate agencies in the federal government. The board reports to the Secretary of Health and Human Services (HHS) and is tasked primarily with recommending policies to the relevant government entities on preventing published research in the biological sciences from negatively impacting national security and public health.

The move drew harsh criticism from researchers in the field, many of whom thought that it was too broad. They claimed it would jeopardize their ability to predict, detect, and respond to potentially emerging pandemics. In the private sector, several companies said that the order would prevent them from working on new antiviral drugs and vaccines. Furthermore, many young scientists worried that an inability to do their experiments could jeopardize their careers. In an effort to bring attention to the issue, many scientists (including the two flu researchers whose work triggered the pause) formed the group Scientists for Science, which advocates against blanket bans on research. In addition, researchers were especially upset by the recommendation of the NSABB to censor the publications resulting from the experiments, due to fears that this research could have a “dual use” that would threaten national security. However, not all researchers in the field support gain of function research; an opposing group, the Cambridge Working Group, maintains that the risks of the research outweigh its benefits.

The moratorium lasted until January 9th, 2017, when the OSTP released guidelines for funding this research in the future. The new rules are essentially the same recommendations put forth by the NSABB seven months earlier. The NSABB had concluded that these studies involving “potentially pandemic pathogens” (PPP) do indeed have important benefits to public health, but warrant additional screening prior to funding approval. The policy directs federal agencies to create a pre-funding review mechanism using eight criteria (including whether the pathogen is likely to cause a naturally occurring pandemic, and whether there are alternative methods of answering the scientific question). The results of these reviews must be reported to the White House OSTP. Importantly, the policy was implemented in the final days of the Obama administration rather than left to the incoming Trump administration, which, as of this writing, has yet to fill nearly any top science positions and might not have issued guidance for months, if at all. Researchers welcomed the decision to finally lift the ban, but questioned when the projects would be allowed to resume.

What can we learn from this situation from a science policy perspective? First, we must learn not to overreact to hysteria regarding the risks of this type of research. Indeed, there are risks in performing research on potentially pandemic strains of influenza and other pathogens, as there are with other types of research. But issuing overly broad, sweeping moratoriums that halt groundbreaking research for years is not the answer, nor is government censorship of academic publication. While in the end the studies were given the green light to resume, and were published without modification, there is no making up for the lost time. These studies are not machines that can simply be turned on and off on a whim without repercussions. When we delay research into how viruses become pandemic, we hurt our ability to detect and respond to naturally occurring outbreaks. Additionally, when American scientists are barred from research that other countries are still pursuing, American leadership in the biomedical sciences is put at a competitive disadvantage. (The European Academies Science Advisory Council also updated its recommendations for PPP research in 2015, but did not institute a moratorium.) What we learn from these studies could potentially save countless lives. Secondly, the freedom to publish without government censorship must be valiantly defended in any and all fields, especially under a new administration with an aggressively anti-science and anti-climate stance. Lastly, the scientific community must do a better job of educating the public both on the importance of these studies from a public health perspective, and on the precautions put into place to ensure that they are conducted safely.

In the future, there will inevitably be debates over the safety or ethics of the latest experiments in a particular field. In attempting to wade through the murky waters of a complex controversy, science policy makers should make decisions that balance public health, safety, and ethics, rather than resorting to reactionary policies like censorship and moratoriums.

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

April 21, 2017 at 8:47 am