Science Policy For All

Because science policy affects everyone.

Archive for the ‘Essays’ Category

Genetically Modified Animal Vectors to Combat Disease


By: Sarah L Hawes, PhD

Mosquito larvae: ©ProjectManhattan via Wikimedia Commons

Diseases transmitted through contact with an animal carrier, or “vector,” cause over one million deaths annually, many of these in children under the age of five. More numerous, non-fatal cases incur a variety of symptoms ranging from fevers to lesions to lasting organ damage. Vector-borne disease is most commonly contracted from the bite of an infected arthropod, such as a tick or mosquito. Mosquito-borne Zika made recent, regular headlines following a 2015-2016 surge in birth defects among infants born to women bitten during pregnancy. Other big names in vector-borne disease include Malaria, Dengue, Chagas disease, Leishmaniasis, Rocky Mountain spotted fever and Lyme.

Vaccines do not exist for many of these diseases, and the Centers for Disease Control and Prevention (CDC) Division of Vector-Borne Diseases focuses on “prevention and control strategies that can reach the targeted disease or vector at multiple levels while being mindful of cost-effective delivery that is acceptable to the public, and cognizant of the world’s ecology.” Prevention through reducing human contact with vectors is classically achieved through a combination of physical barriers (e.g. bed nets and clothing), controlling vector habitat near humans (e.g. dumping standing water or mowing tall grass), and reducing vector populations with poisons. For instance, the President’s Malaria Initiative (PMI), launched under President Bush in 2005 and expanded under President Obama, reduces vector contact through a complement of public education, distributing and encouraging the use of bed nets, and spraying insecticide. Now a $600-million-a-year program, PMI has been instrumental in preventing several million malaria-related deaths in the last decade.

But what if a potentially safer, cheaper and more effective way to reduce human-vector contact exists in the release of genetically modified (GM) vector species? Imagine a mosquito engineered to carry a new or altered gene that confers disease resistance or sterility, or otherwise impedes disease transmission to humans. Release of GM mosquitos could drastically reduce the need for pesticides, which can harm humans, are toxic to off-target species, and have driven pesticide resistance in heavily sprayed areas. Health and efficacy aside, it is impossible to overturn or poison every leaf cupping rainwater where mosquitos breed. GM mosquitos could reach and “treat” the same pockets of water as their non-GM counterparts. However, an insect designed to pass on disease resistance to future generations would mean persistence of genetic modifications in the wild, which is worrisome given the possibility of unintended direct effects or further mutation. An elegant alternative is the release of GM vector animals producing non-viable offspring – and this is exactly what biotech company Oxitec has done with mosquitos.

Oxitec’s OX513A mosquitos express a gene that interferes with critical cellular functions, but the gene is suppressed in captivity by including the antibiotic tetracycline in the mosquitos’ diet. Release of thousands of non-biting OX513A males into the wild results in a local generation of larvae which, in the absence of tetracycline, die before reaching adulthood. Release of OX513A has proven successful at controlling mosquito populations in several countries since 2009, rapidly reducing local numbers by roughly 90%. Oxitec’s OX513A line may indeed be a safe and effective tool. But who is charged with making this call for OX513A and, moreover, for future variations on GM vector release?
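To build intuition for why flooding an area with OX513A males suppresses the local population so quickly, a back-of-the-envelope model helps. The sketch below is only an illustration, not Oxitec’s actual model: the growth rate, release ratio, and the roughly 3% offspring survival (a figure discussed further below) are assumed parameter values chosen to show the qualitative effect.

```python
# Minimal, illustrative sketch of self-limiting male release dynamics.
# This is NOT Oxitec's model; the growth rate, release ratio, and offspring
# survival are made-up assumptions chosen only to show the qualitative effect.

def simulate(generations=10, wild_females=1000, growth=1.0,
             release_ratio=10, gm_offspring_survival=0.03):
    """Discrete-generation model: each female mates at random with either a
    wild male or a released GM male (in proportion to their abundance).
    Offspring fathered by GM males mostly die before adulthood."""
    females = wild_females
    history = [females]
    for _ in range(generations):
        wild_males = females                      # assume a 1:1 wild sex ratio
        gm_males = release_ratio * wild_males     # sustained over-flooding with GM males
        frac_wild_matings = wild_males / (wild_males + gm_males)
        females = females * growth * (
            frac_wild_matings                     # viable, wild-fathered offspring
            + (1 - frac_wild_matings) * gm_offspring_survival
        )
        history.append(round(females, 1))
    return history

print(simulate())  # population collapses toward zero over successive generations
```

With these made-up numbers, the modeled female population falls by roughly 90% in a single generation, in line with the field results cited above, and continues to collapse as releases are sustained.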

Policy governing the use of genetically modified organisms must keep pace with globally available biotechnology. Regulatory procedures for GM vector release are determined country by country, yet there is a high degree of international policy alignment. The Cartagena Protocol on Biosafety, a treaty currently joined by 170 nations (not including the US), governs the transport of “living modified organisms resulting from modern biotechnology” with potential to impact environmental or human health. The World Health Organization (WHO) and the Foundation for the National Institutes of Health (FNIH) published guidance in 2014 for evaluating the safety and efficacy of GM mosquitos.

Within the US, the 2017 Update to the Coordinated Framework for the Regulation of Biotechnology was published this January in response to a solicitation by the Executive Office of the President for a cohesive report from the Food and Drug Administration (FDA), Environmental Protection Agency (EPA), and US Department of Agriculture (USDA). Separately, the biotech industry has been given fresh guidance on whether to seek FDA or EPA approval (in brief): if a GM product is designed to reduce disease load or spread, including by reducing vector populations, it requires New Animal Drug approval by FDA; if it is designed to reduce a pest population but is unrelated to disease, it requires Pesticide Product approval by EPA under the Federal Insecticide, Fungicide, and Rodenticide Act.

Thus, before a biotech company can release GM mosquitos in the US with the intent of curbing the spread of mosquito-borne disease, it must first gain FDA approval. Oxitec gained federal approval to release OX513A in a Florida suburb in August 2015 on the strength of FDA’s “final environmental assessment (EA) and finding of no significant impact (FONSI).” These FDA assessments determined that the Florida ecosystem would not be harmed by eliminating the targeted, invasive Aedes aegypti mosquito. In addition, they affirmed that no mechanism exists for the modified gene carried by OX513A to affect humans or other species. Risks were judged negligible, and include the accidental release of a few disease-free OX513A females. For a human bitten by a rare GM female, there is zero risk of transgene transfer, and saliva allergens, and therefore the response to a bite, do not differ between GM and non-GM mosquitos. Finally, as many as 3% of OX513A offspring manage to survive to adulthood, presumably by spawning in tetracycline-treated water intended for livestock. These few survivors will not become a long-term problem because their survival is not a heritable loophole; it is analogous to a lucky few mosquitos avoiding contact with poison.

Solid scientific understanding of the nature of genetic modifications is key to creating good policy around the creation and use of GMOs. In an updated draft of Guidance for Industry 187 (GFI 187), the FDA advises industry seeking New Animal Drug approval to include a molecular description of the intentional genetic alteration, the method of alteration, a description of how it is introduced into the animal, whether the alteration is stable over time and across generations if heritable, and environmental and food safety assessments. Newer genome-editing techniques such as CRISPR offer improved control over the location, and thus the effect, of genetic revisions. In light of this, the FDA is soliciting feedback from the public on the GFI 187 draft until April 19, 2017, in part to determine whether certain types of genetic alteration in animals might pose no risk to humans or animals and thus merit reduced federal regulation.

Following federal clearance, the decision on whether to release GM vectors rests with local government. Currently, lack of agreement among Florida voters has delayed the release of OX513A mosquitos. As when GM mosquito release was first proposed in Florida following a 2009-2010 Dengue outbreak, voter concern today hinges on the perception that GM technology is “unproven and unnatural.” This illustrates both a healthy sense of skepticism among voters and the critical need to improve scientific education and outreach in stride with biotechnology and policy. Until we achieve better public understanding of GM organisms, including how they are created, controlled, and vetted, we may miss out on real opportunities to safely and rapidly advance public health.

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

February 16, 2017 at 9:46 am

Containing Emerging and Re-emerging Infections Through Vaccination Strategies


By: Arielle Glatman Zaretsky, PhD

Source: CDC [Public Domain], via Wikimedia Commons

Throughout history, humans have sought to understand the human body and remedy ailments. Since the realization that disease can be caused by infection and the establishment of Koch’s postulates, designed to demonstrate that a specific microbe causes a disease, humans have sought to identify and “cure” diseases. However, while we have been successful as a species at developing treatments for numerous microbes, viruses, and even parasites, true cures that prevent future reinfection have remained elusive. Indeed, the only human disease that has been eradicated in the modern era (smallpox) was eliminated through the successful development and application of preventative vaccines, not the implementation of any treatment strategy. Furthermore, the two next most likely candidates for eradication, dracunculiasis (guinea worm disease) and poliomyelitis (polio), are approaching this status through the use of preventative measures, via water filtration and vaccination, respectively. In fact, despite recent pushback from a scientifically unfounded anti-vaccination movement, the use of a standardized vaccination regimen has led to clear reductions in the incidence of numerous childhood diseases in the Americas, including measles, mumps, rubella, and many others. Thus, although the development of antibiotics and other medical interventions has dramatically improved human health, vaccines remain the gold standard of preventative treatment with the potential for disease elimination.

Recently, there have been numerous outbreaks of emerging or re-emerging infectious diseases. From SARS to Ebola to Zika virus, these epidemics have caused significant morbidity and mortality and have incited global panic. In the modern era of air travel and a global economy, disease can spread quickly across continents, making containment difficult. Additionally, the low incidence of these diseases means that few resources are devoted to developing treatments and interventions for them, and when such efforts are attempted, the low incidence further complicates the implementation of clinical trials. For example, though Ebola has been a public health concern since the first outbreak in 1976, no successful Ebola treatment or vaccine existed until the most recent outbreak of 2014-2016. That outbreak caused the deaths of more than 11,000 people across more than four countries and motivated the development of several treatments and two vaccine candidates, which have now reached human trials. However, these products remain unlicensed and are still undergoing testing, and they were not available at the start, or even the height, of the outbreak when they were most needed. More broadly, diseases that occur primarily in low-income populations in developing countries are understudied for lack of financial incentive. These pathogens can thus persist at low levels in populations, particularly in developing countries, creating a high likelihood of eventual outbreak and the potential for future epidemics.

This stream of newly emerging diseases and the re-emergence of previously untreatable diseases brings the question of how to address outbreaks and prevent global pandemics to the forefront for public health policymakers and agencies tasked with controlling infectious disease spread. Indeed, many regulatory bodies have adopted accelerated approval policies that can be invoked during an outbreak to hasten the bench-to-bedside process. Although the tools to rapidly identify new pathogens during an outbreak have advanced tremendously, the pathway from identification to treatment or prevention remains complicated. Regulatory and bureaucratic delays compound slow and complicated research processes, and the ability to conduct clinical trials can be hindered by rare exposures to these pathogens. Thus, the World Health Organization (WHO) has compiled a blueprint for the prevention of future epidemics, meant to inspire partnerships in the development of tools, techniques, medications and approaches that reduce the frequency and severity of these disease outbreaks. Through the documentation and public declaration of disease priorities and approaches to promote research and development in these disease areas, WHO has set up a new phase of epidemic prevention through proactive research and strategy.

This recently inspired a mixed group of public and private funding organizations, including the Bill and Melinda Gates Foundation, to establish the Coalition for Epidemic Preparedness Innovations (CEPI), motivated by the suggestion that an Ebola vaccine could have prevented the recent outbreak had a lack of funding not slowed research and development. The goal is to create a pipeline of solutions to control and contain outbreaks, thereby preventing epidemics. Instead of developing treatments in response to ongoing outbreaks, CEPI’s mission is to identify likely candidates for future outbreaks based on known epidemic threats and to lower the barriers to effective vaccine development by assisting with initial dose and safety trials and providing support through research and clinical trials as well as the regulatory and industry aspects. If successful, this approach could lead to a stockpile of ready-made vaccines that could easily be deployed to the site of an outbreak and administered to aid workers to reduce their mortality and improve containment. What makes this coalition both unique and exciting is its commitment to orphan vaccines, so called for their lack of financial appeal to the pharmaceutical industry that normally sets research and development priorities, and its prioritization of vaccine development over treatment or other prophylactic approaches. The advantage of a vaccination strategy is that it prevents disease through one simple intervention, with numerous precedents for adapting a vaccine to forms that tolerate the temperature fluctuations and shipping difficulties likely to arise in developing regions. Furthermore, it aids containment by preventing infection and can be quickly administered to large at-risk populations.

Thus, while the recent outbreaks have incited fear, there is reason for hope. The realization of these vaccination approaches, together with improved fast-tracking of planning and regulatory processes, could have far-reaching advantages for endemic countries as well as for global health and epidemic prevention.

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

January 26, 2017 at 9:47 am

Biosurveillance: Can We Predict And Prevent Infectious Disease Outbreaks?


By: Teegan A. Dellibovi-Ragheb, PhD

The increasing frequency and scope of infectious disease outbreaks in recent years (such as SARS, MERS, Ebola and Zika) highlight the need for effective disease monitoring and response capabilities. The question is, can we implement programs to detect and prevent outbreaks before they occur, or will we always be reacting to existing outbreaks, trying to control the spread of disease and mitigate the harm to people and animals?

In some cases, the science suggests that we can predict the nature of the public health threat. For instance, scientists at the University of North Carolina at Chapel Hill identified a SARS-like virus, SHC014-CoV, that is currently circulating in Chinese horseshoe bat populations. This virus is highly pathogenic, does not respond to SARS-based therapies, and can infect human cells without the need for adaptive mutations. Furthermore, there are thought to be thousands of related coronaviruses in bat populations, some of which could emerge as human pathogens. These findings suggest that circulating SARS-like viruses have the potential to cause another global pandemic, and resources need to be dedicated to surveillance and the development of more effective therapeutics.

What is biosurveillance?

In 2012 President Obama released the first-ever National Strategy for Biosurveillance, whose purpose is to better integrate the many disparate governmental programs and non-governmental organizations that collect and monitor public health data. The Strategy defines biosurveillance as “the process of gathering, integrating, interpreting, and communicating essential information related to ‘all-hazards’ threats or disease activity affecting human, animal, or plant health to achieve early detection and warning, contribute to overall situational awareness of the health aspects of an incident, and to enable better decision making at all levels”. The “threats” described by the Strategy include emerging infectious diseases, pandemics, agricultural and food-borne illnesses, as well as the deliberate use of chemical, biological, radiological and nuclear (CBRN) weapons.

The overall goal of the Strategy is “to achieve a well-integrated national biosurveillance enterprise that saves lives by providing essential information for better decision making at all levels”. This goal is broken down into four core functions: (1) scan and discern the environment; (2) identify and integrate essential information; (3) inform and alert decision makers; and (4) forecast and advise potential impacts.

How are these programs implemented?

A number of programs were launched in response to President Obama’s Strategy. For instance, USAID’s Emerging Pandemic Threats (EPT) program created four complementary projects (Predict, Prevent, Identify, and Respond) which together aim to combat zoonotic outbreaks in 20 developing countries in Africa, Asia and Latin America that are hotspots of viral evolution and spread. Predict focuses on monitoring the wildlife-human interface to discover new and reemerging zoonotic diseases. The Prevent project aims to mitigate risk behavior associated with animal-to-human disease transmission. Identify works to strengthen laboratory diagnostic capabilities, and Respond focuses on preparing the public health workforce for an effective outbreak response.

There are many other agencies besides USAID and the State Department that participate in biosurveillance and biosecurity, including the Department of Health and Human Services (through the Biomedical Advanced Research and Development Authority). The Department of Defense and the Department of Homeland Security both have biosecurity programs as well (the Defense Threat Reduction Agency and the National Biodefense Analysis and Countermeasures Center, respectively). These focus more on protecting the health of armed forces and combatting deliberate acts of terror; however, there is still substantial overlap with emerging infectious diseases and global health. A comprehensive disaster preparedness strategy requires coordination between agencies that may not be used to working together and that have very different structures and missions.

What are the challenges?

Global disease surveillance is a critical aspect of our biosecurity, due to accelerated population growth and migration, and worldwide movement of goods and food supplies. Political instability, cultural differences and lack of infrastructure in developing countries all present obstacles to effective global biosurveillance. These are complex issues, but are critically important to address, as rural populations in low- and middle-income countries can become hotspots of infectious disease outbreaks. This is in part due to the lack of sanitation and clean water, and the close contact with both domestic and wild animals.

Another challenge is determining the most effective metrics with which to monitor public health data. Often, by the time a new pathogen has been positively identified and robust diagnostic measures implemented, a disease outbreak is well under way. In some cases, the actions of health workers can make the situation worse, as in the tragic mishandling of the 2010 cholera outbreak in Haiti by the United Nations. One approach that has been shown to be effective for early detection is the use of syndromic surveillance systems, which aggregate data such as emergency room visits or sales of over-the-counter medication. Combined with advanced computing techniques and adaptive machine learning methods, this provides a powerful tool for the collection and integration of real-time data, and it can alert public health officials much earlier to the existence of a possible outbreak.
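As a concrete (and deliberately toy) example of how syndromic data could be monitored, the sketch below flags days when a daily count, such as emergency room visits for a given syndrome, rises well above an adaptive baseline. The data, the EWMA-style detector, and the threshold are all illustrative assumptions; operational systems use far more sophisticated statistics and many more data streams.

```python
# Toy sketch of syndromic surveillance: flag days when counts of a syndrome
# (e.g., ER visits for fever, or anti-diarrheal sales) spike above an
# adaptive baseline. The counts and thresholds here are invented.

def flag_anomalies(daily_counts, alpha=0.3, threshold_sd=3.0):
    """Exponentially weighted moving average (EWMA) detector: raise an alert
    when today's count exceeds the smoothed baseline by more than
    `threshold_sd` running standard deviations."""
    baseline, variance = daily_counts[0], 1.0
    alerts = []
    for day, count in enumerate(daily_counts[1:], start=1):
        sd = variance ** 0.5
        if count > baseline + threshold_sd * sd:
            alerts.append((day, count))
        else:
            # update baseline and variance only from non-alert days
            variance = (1 - alpha) * (variance + alpha * (count - baseline) ** 2)
            baseline = (1 - alpha) * baseline + alpha * count
    return alerts

# Hypothetical daily ER visit counts; the jump at the end mimics an outbreak.
visits = [52, 49, 55, 51, 48, 53, 50, 54, 47, 52, 68, 81, 95]
print(flag_anomalies(visits))  # -> [(10, 68), (11, 81), (12, 95)]
```

The design choice that matters here is adaptivity: because the baseline is learned from recent non-alert days, the detector tracks seasonal drift yet still reacts within a day or two of a sustained jump, which is exactly the early-warning property syndromic surveillance aims for.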

Scientific research on high-consequence pathogens is a key aspect of an effective biosecurity program. It is how we develop new diagnostic and therapeutic capabilities, as well as understand how pathogens spread and evolve. However, laboratories can also be the initial source of an infection, such as the laboratory-acquired tularemia outbreak, and research with the most dangerous pathogens (Select Agent research) must be carefully monitored and regulated. It has been an ongoing challenge to balance the regulation of Select Agents with the critical need to enhance our scientific understanding of these pathogens. Of particular concern are gain-of-function studies, a form of Dual Use Research of Concern (DURC). From a scientific standpoint, these studies are vital to understanding pathogen evolution, which in turn helps us to predict the course of an outbreak and develop broad-spectrum therapeutics. However, they also pose a security risk, since scientists are deliberately increasing the virulence of a given pathogen, as in the experimental adaptation of H5N1 avian influenza to mammalian transmission, which could pose a significant public health threat if deliberately misused.

How well are we doing?

The International Security Advisory Board, a committee established to provide independent analysis to the State Department on matters related to national and international security, published a report in May 2016 on overseas disease outbreaks. It makes a number of recommendations, including: (1) better integrating public health measures with foreign policy operations; (2) working with non-governmental organizations and international partners to increase preparedness planning and exercises; (3) increasing financial support for, and reforming structural issues at, the World Health Organization to ensure effective communication during crises; (4) bolstering lines of communication and data sharing across the federal government, in part through the establishment of interagency working groups; and (5) strengthening public health programs at the State Department and integrating public health experts into regional offices, foreign embassies and Washington for effective decision making at all levels.

The RAND Corporation, an independent think tank, conducted a review of the Department of Defense biosurveillance programs. They found that “more near-real-time analysis and better internal and external integration could enhance its performance and value”. They also found funding to be insufficient, and lacking a unified funding system. Improvements were needed in prioritizing the most critical programs, streamlining organization and governance, and increasing staff and facility resources.

RAND researchers also published an article assessing the nation’s health security research. They found that federal support is “heavily weighted toward preparing for bioterrorism and other biological threats, providing significantly less funding for challenges such as monster storms or attacks with conventional bombs”. In a study spanning seven non-defense agencies, including the National Institutes of Health (NIH) and the Centers for Disease Control (CDC), they found that fewer than 10% of federally funded projects address natural disasters. This could have broad consequences, especially considering that natural disasters such as earthquakes, hurricanes or tornadoes can create an environment for infectious diseases to take hold in a population. More work needs to be done to integrate biosurveillance and biosecurity programs across different agencies and allocate resources in a way that reflects the priorities laid out by the administration.

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

January 13, 2017 at 10:00 am

Mental Health Policy and its Impact on the American Population


By: Fatima Chowdhry, MD

In the last 50 years, the U.S. has seen a migration in which individuals diagnosed with a mental illness, defined by the Diagnostic and Statistical Manual of Mental Disorders as “a syndrome characterized by clinically significant disturbance in an individual’s cognition, emotion regulation, or behavior,” are treated not in a mental health institution but rather in prisons, nursing homes, and outpatient facilities. To understand the implications of this trend, it helps to frame the issue as a cascade of events. It may start with a member of law enforcement, not adequately trained to recognize someone in the throes of a manic episode or a person with schizophrenia who is off their medication, arresting an individual with a mental illness. Upon release, that individual receives no treatment, has trouble reintegrating into their community, and is unable to find gainful employment. The combination of a lack of treatment, stable community, and employment leads to continual run-ins with the law, a vicious cycle that has led to a prison population in which the majority has a mental illness.

The move to deinstitutionalize people with mental illness began in the 1960s and accelerated with the passage of the Community Mental Health Act of 1963. This bill was an important step forward in improving the delivery of mental health care because it provided grants to states to set up community mental health centers. In 1981, President Ronald Reagan signed the Omnibus Budget Reconciliation Act, which sent block grants to states to provide mental health services. Aside from these two bills, and the Mental Health Parity Act of 1996, which required insurance coverage parity between mental health care and other types of health care, there has been little significant mental health legislation. Mental health was put on the back burner, and the result is a mental health infrastructure in tatters.

During the Great Recession, states cut billions in funding dedicated to mental health. A vivid example of how decreased state funding affects mental health services can be seen in Iowa. The current governor has been put in the difficult position of balancing fiscal responsibility with maintaining access to mental health care. At one point, four state mental health hospitals provided care to each corner of the state. The governor closed two of the facilities to save the state money; while they were old facilities built in the 19th century that cost millions to maintain, many people in Iowa felt that he moved too quickly, before alternative services were in place. In addition to closing these mental health facilities, the governor obtained a waiver from the federal government to modernize the state’s Medicaid program and move from fee-for-service to managed care. Under fee-for-service, health care providers are paid for each service provided to a Medicaid enrollee. Under managed care, Medicaid enrollees get their services through a vendor under contract with the state. Since the 1990s, the share of Medicaid enrollees covered by managed care has increased, reaching about 72% as of July 1, 2013. The move can be difficult because hospital networks and providers have to contract with a vendor, and Medicaid beneficiaries may have to switch providers; needless to say, it can be an administrative nightmare. The transition in Iowa has been rocky, to say the least, with the vendors threatening to pull out because of tens of millions of dollars in losses. The vendors and the providers might not get paid as much as they want, but the people getting the short end of the stick are people on Medicaid, including individuals with mental illness.

Given the patchwork of mental health care across the country and the lack of funding, what can be done? According to the National Alliance on Mental Illness (NAMI), 43.8 million Americans experience a mental illness in a given year, and many do not receive the treatment they need. It is a multi-faceted problem facing families, employers, health care providers and community leaders. At the federal level, lawmakers have introduced several bills to address mental health. In the United States Senate, a bipartisan group of four Senators introduced S. 2680, the Mental Health Reform Act of 2016. This bill encouraged evidence-based programs for the treatment of mental illness, provided federal dollars to states to deliver mental health services for adults and children, and created programs to develop a mental health workforce.

It was encouraging to see that many components of S. 2680 were included in H.R. 34, the 21st Century Cures Act, which was signed into law on December 13, 2016. H.R. 34 faces some headwinds because some of its funding provisions are subject to Congressional appropriations, and if Congress is feeling austere, it can tighten the purse strings. Moving forward, a major concern for mental health is the future of the Affordable Care Act. Under the Affordable Care Act, states were initially mandated to expand their Medicaid rolls; a Supreme Court decision, however, made expansion optional. So far, 32 states (counting Washington, D.C.) have expanded. Some red states, like Iowa, Arkansas and Indiana, have used the ACA’s waiver process to expand their programs. If the ACA is repealed, policymakers will have to contend with the effects on the private insurance market as well as Medicaid.

Right now, the crystal ball is murky. Only time will tell.

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

December 22, 2016 at 10:45 am

Perspective on Climate Change: Supporters versus Skeptics


By: Nivedita Sengupta, PhD

A recent United Nations report shows that earth’s surface temperature is rapidly approaching a two-degree-Celsius increase. Scientists say that the world must stay below two degrees to avoid the worst effects of climate change. However, solving this issue can be challenging and overwhelming: the science used to generate the evidence for climate change is complicated, and the predictions carry many caveats and asterisks. Nonetheless, the major question that stands out is, “What is climate change, and why are people skeptical about it?”

The definition of climate change itself triggers a difference in opinion. According to the Intergovernmental Panel on Climate Change (IPCC), climate change refers to “A change in the climate that persists for decades or longer, arising from either natural causes or human activity.” This definition differs from that of the United Nations Framework Convention on Climate Change (UNFCCC), where climate change refers specifically to “A change of climate that is attributed directly or indirectly to human activity that alters the composition of the global atmosphere.” The UNFCCC instead defines a change in climate over comparable time periods due to natural causes as climate variability.

Keeping these definitions aside, many policymakers and major corporations worldwide have acknowledged climate change and expressed willingness to address it, believing that the scientific evidence generated so far demands action. But some scientists, economists, industry groups, and policy experts continue to insist that there is no need for policy changes, and many people agree with them, insisting that the entire problem is exaggerated. The debate between supporters and skeptics is entrenched, and both groups deride each other with countless claims and counterclaims about both the science and the proposed policy solutions.

Surprisingly, some climate-change skeptics do admit that the earth is warming, but they debate the cause, its potential impact, and whether human intervention is affecting it. As Myron Ebell, the president-elect’s pick to lead the Environmental Protection Agency transition, stated his views on climate change: “I agree that carbon dioxide is a greenhouse gas, and its concentrations in the atmosphere are increasing as a result of human activities—primarily burning coal, oil, and natural gas, where I disagree is whether this amounts to a crisis that requires drastic action.”

So what are the premises on which the skeptics insist that the current policies addressing the issue of climate change are unwarranted and dispensable? Broadly, this question can be answered by discussing the views of skeptics versus supporters on three major points of concern.

First, what is global warming and is it really happening?

Skeptics

The skeptics argue that the earth is not warming. They contend that satellite-based temperature measurements, taken across the earth’s surface, indicate no measurable change in the last 30 years, and that measuring standards differ from place to place, resulting in inconsistent readings. They also argue that the IPCC’s graph of “global” temperatures is incorrect because it omits the cool period around 1400 and a very warm period from about 900 to 1050, when temperatures in Europe were several degrees warmer than today. They further make the point that warming is natural: if the earth was warmer during those periods and subsequently cooled down via some natural mechanism, then that will happen in the future too.

Supporters

According to the IPCC and the National Aeronautics and Space Administration (NASA), records of temperature that date back to the distant past, generated by analysis of ice cores and sediments, are quite accurate and suggest that the warming in recent decades is far greater than in any period over the past millennium. Gavin Schmidt, director of NASA’s Goddard Institute for Space Studies, said, “It’s unprecedented in 1,000 years.” Fifteen of the 16 hottest years in NASA’s 134-year record have occurred since 2000.

Second, is there any real impact because of climate change?

Skeptics

Skeptics believe that climate change has no impact whatsoever and is not responsible for the extreme weather catastrophes of recent times; extreme weather, they argue, has happened in the past and has no connection with either global warming or increased levels of carbon dioxide.

Supporters

The supporters say that the impacts are everywhere, from the melting of polar ice sheets to endangered biodiversity, and will eventually put human health and society at risk. In the US alone, numerous billion-dollar weather and climate disasters occurred from 1980 to 2016, the most recent being the historic flood that devastated a large area of southern Louisiana.

Third, and most disputed: are human beings really responsible for climate change?

Skeptics

According to skeptics, carbon dioxide levels are not high enough to warrant concern, as current levels have been exceeded within the last 150 years. They further argue that water vapor, not carbon dioxide, is the significant greenhouse gas, because it absorbs more radiant heat than carbon dioxide and makes up about 3% of the atmosphere compared with 0.03% for carbon dioxide. By their accounting, carbon dioxide contributes about 3% of total warming, so the anthropogenic carbon dioxide contribution to total warming is, at most, about 0.1%; therefore carbon dioxide generated by “human interference” has no discernible role in global warming. They consider carbon dioxide beneficial for the environment and attribute climate change to other factors such as aircraft exhaust, cosmic rays, solar wind, magnetic fields and solar intensity. They state that no definitive cause of climate change has been established and that any assertive statements about current and future climates should be regarded with skepticism.

Supporters

IPCC in its 2014 climate change report states, “Human influence on the climate system is clear, and recent anthropogenic emissions of greenhouse gases are the highest in history.” Global warming is primarily a problem of too much carbon dioxide in the atmosphere. This carbon overload is caused mainly when we burn fossil fuels like coal, oil and gas or cut down and burn forests. Burning of fossil fuels to make electricity is the largest source of heat-trapping pollution. Though water vapor is the most abundant heat-trapping gas, it has a short cycle in the atmosphere and cannot build up in the same way carbon dioxide does. Preventing dangerous climate change requires very deep cuts in carbon dioxide emissions, as well as the use of alternatives to fossil fuels worldwide.

In 2015, the Paris Agreement was adopted within the UNFCCC to deal with climate change by reducing greenhouse gas emissions starting in 2020. So far, 114 of 197 countries have ratified the agreement and pledged to cut emissions. In September 2016, the United States joined the Paris Agreement along with China, another large emissions-producing country. President Obama called it a top concern and said, “For all the challenges that we face, the growing threat of climate change could define the contours of this century more dramatically than any other challenge.” In contrast, president-elect Donald Trump has taken a skeptical view, describing climate change as “bullshit” and a “hoax,” and has vowed to dismantle the EPA and withdraw the United States from the Paris Agreement to reduce what he sees as the economic damage created by climate change alarmists. However, a handful of newly elected officials offer some hope for the fight against climate change in the coming years: five candidates with strong climate credentials won seats in Congress, and they have impressive personal and political backgrounds. In the present situation it is critical that the world stays on course with rational, prompt and comprehensive action to mitigate climate change.

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

December 8, 2016 at 9:00 am

Streamlining Human Research by Centralizing Review: Could It Slow Things Down?


By: Leopold Kong, PhD

Source: NIH Image Gallery on Flickr, under Creative Commons

Human research in the United States, in the form of clinical trials and other scientific studies, has been regulated by Institutional Review Boards (IRBs) since 1974, following passage of the National Research Act. The initial policies were inspired by the Nuremberg Code, a set of international research ethics principles developed in the aftermath of the Second World War, during which Nazi medical officers carried out large-scale atrocities through human experimentation. Policies that regulate IRBs in the United States are codified in the Common Rule, which mandates requirements such as membership qualifications and guidelines for the protection of certain vulnerable research subjects. Although the Common Rule has not been modified since 1991, the changing face of medical research has led to recent proposals to improve the efficiency, accountability and qualifications of IRBs. What has motivated change? The following situations may be illustrative.

In November 2015, the consumer advocacy group Public Citizen and the American Medical Student Association contacted the Office for Human Research Protections (OHRP) to criticize two studies on how longer-than-21-hour shifts for first-year medical residents may affect 30-day patient mortality rates. Public Citizen noted that even though the studies forced new residents to work “dangerously long shifts,” placing all involved in danger, they were readily approved by IRBs. Similarly, IRBs approved a study on the hazards of pediatric exposure to lead paint in which researchers did not clearly reveal to households that they had detected high levels of lead in their homes, resulting in neurological problems for at least one child. In addition, a publication last year in the European journal Acta Informatica Medica found that only 26.5% of individuals serving on IRBs correctly answered 11 simple true-or-false questions designed to test understanding of study design and ethics. Part of the problem may be reviewer fatigue: according to OHRP, only about 3,500 registered IRBs review more than 675,000 research protocols annually. Inefficiencies in the review process may further exacerbate the situation.

Late last year, Kathy Hudson and Francis Collins, the Deputy Director for Science, Outreach and Policy at the National Institutes of Health (NIH), and the Director of the NIH, respectively, published a Perspective in the New England Journal of Medicine on the proposed revisions to the Common Rule. In order to bring the Common Rule into the 21st century, the revisions will focus on implementing broad biospecimen consent, enhanced privacy safeguards, streamlined IRB review, and requirements for more agencies to follow the Common Rule. One of the more interesting and key revisions to improve review efficiency, the requirement for a single IRB (sIRB) for multisite studies, will be implemented on May 25, 2017. The rest of this essay focuses on this proposed change.

The time it takes for a clinical trial protocol to be reviewed by an IRB depends on the type of review and varies from location to location. For example, a protocol can be deemed exempt, which might take only 1-2 weeks of review; receive expedited review, which might take a few weeks longer; or require full review, which takes even longer. Protocols that go through expedited or full review must be re-evaluated every year, after any change to the methods, or after any adverse event in the study. The review generally evaluates proof of human subjects’ training, consent, recruitment materials, and data collection instruments, as well as individual conflicts of interest, all of which may depend on the specific population studied and local restrictions. However, clinical trials are increasingly spread across multiple sites in order to recruit enough people. Under the current rule, each site must conduct a local review of the same protocol independently of the others, potentially causing delays due to unneeded redundancy. “The problem that this [proposed sIRB] policy was trying to solve was that we were seeing delays and complications in moving research forward in a way that wasn’t providing commensurate protections for human research participants,” said Carrie D. Wolinetz, NIH associate director for science policy, to Bloomberg BNA.

From December 3, 2014 to January 29, 2015, the NIH received 167 comments from individual researchers, academic institutions, IRBs, advocacy groups, scientific societies, healthcare organizations, Tribal National representatives and members of the general public on the sIRB proposal. Many of the comments were highly positive and supportive of the revision. For example, the Federation of American Societies for Experimental Biology (FASEB), which represents over 120,000 researchers across 27 scientific societies, stated that “[t]his change would facilitate collaborative review arrangements and reduce the obstacles that investigators encounter when embarking on multi-center projects.” David M Pollock, the president of the American Physiological Society, added further support, commenting that the current rule results in “lack of uniformity” while the proposed changes may reduce administrative burden, and improve efficiency and quality of review.

However, many of the comments expressed reservations and harsh criticism. For example, Harry W. Orf, representing Massachusetts General Hospital, was skeptical that the benefits of moving to the sIRB system would outweigh the costs, commenting that “there is currently little research or data to demonstrate that these potential benefits will materialize.” In much stronger terms, Curtis Meinert of the Johns Hopkins Bloomberg School of Public Health stated, “[t]he expectation is that the change will save money. Good luck on that. The reality is that the change will increase costs given what IRBs of record have to do to acquire the necessary assurances and certifications. The expectation also is that the single IRB will shorten the time to start, good luck on that one also.” Meinert and others, including the Human Subjects Protection Branch at Walter Reed Army Institute of Research, pointed out that the time it takes to start a study is mainly determined by other factors, such as the time it takes for investigators to agree on a protocol, not IRB review. Meinert also warns that, “A likely unintended effect of the one IRB requirement is to further diminish the means and incentives for individual investigators to propose and initiate multicenter studies.” Finally, some communities viewed the revision as a threat to local autonomy and representation. For example, Bill John Baker, the Principal Chief of the Cherokee Nation, commented, “Tribal IRB members have firsthand knowledge of local tribal customs, cultural values, and tribal sensitivities. If Tribal IRB members are not able to participate […] our citizens are affected by persons who are not sensitive to their distinctive needs.”

Analysis by the Council on Government Relations of all comments made regarding the sIRB proposal indicated that 51% opposed it, while 42% supported it and 6% offered qualified support. Interestingly, most commercial IRBs, which might be more favorably biased towards the needs of industry sponsors, supported the change. A breakdown of the numbers indicates that while the majority of advocacy groups, professional societies, disease registries and individual researchers supported the change, 89% of universities and medical centers, the organizations that are directly involved with clinical trials and represent thousands of researchers and medical support staff, opposed it. “The spirit of the changes are well intended, but it fails to address the fact that roles and responsibilities of the IRB have expanded beyond those initially dictated when the use of IRBs were first formed,” says Annika Shuali, a certified clinical research coordinator at the University of Virginia.

Clearly, reforms are needed to update the aging IRB system. In theory, centralization through the sIRB may improve efficiency. In practice, however, the complexities of conducting clinical trials at specific sites, such as resolving individual conflicts of interest, complying with local regulations, and accounting for the specific rights of certain populations, make centralization extremely difficult. To address these site-specific issues, local IRBs may still need to be in place, but now required to communicate with the sIRB, potentially increasing administrative burden and undermining the original motivation to streamline review. Hopefully, the sIRB revision to be implemented next year will be further refined to address the critiques from the majority of the community.

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

December 3, 2016 at 11:46 am

Entrusting Your Life to Binary: The Increasing Popularity of Robotics in the Operating Room


By: Sterling Payne, B.Sc.

Source: Flickr; by Medical Illustration, Welcome Images, under Creative Commons

Minimally invasive surgery has been around since the late 20th century; however, technological advancement has sent robotic surgeons to the forefront of medicine in the past 20 years. The term “minimally invasive” refers to performing a surgery through small, precise incisions far from the target site, thus having less of a physical impact on the patient in terms of pain and recovery time. As one can imagine, surgeons must use small instruments during a minimally invasive procedure and operate with a high level of control in order to perform a successful operation. In light of these requirements, and due to fast-paced advances in robotics in the last decade, robots have become more common in the operating room. Though their use benefits all parties involved when applied correctly, several questions of policy accompany the robotic advance and the goal of fully autonomous surgery.

The da Vinci system is one of the most popular devices used for minimally invasive surgeries and was approved by the FDA in 2000 for use in surgical procedures. The newest model, the da Vinci Xi® System, includes four separate robotic arms that operate a camera and multiple arrays of tools. The camera projects a 3D view of the environment onto a monitor for the surgeon, who in turn operates the other three arms to perform highly precise movements. The da Vinci arms and instruments give the surgeon more control over the subject via additional degrees of freedom (less restricted movement) and features such as tremor reduction.
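As an illustration of what a feature like tremor reduction can involve, the sketch below applies a simple low-pass filter and motion scaling to a stream of hand positions. This is only a plausible, generic approach, not the da Vinci system’s actual (proprietary) algorithm; the function name, smoothing factor, and scaling value are assumptions for demonstration.

```python
# Illustrative sketch of tremor filtering: exponential smoothing (a basic
# low-pass filter) plus motion scaling applied to console hand positions.
# The parameter values below are arbitrary assumptions.

def filter_motion(hand_positions, smoothing=0.2, motion_scale=0.3):
    """Return scaled-down, smoothed instrument positions.

    hand_positions: sequence of (x, y, z) samples from the master console.
    smoothing: 0..1; lower values suppress high-frequency tremor more strongly.
    motion_scale: maps large hand motions onto small instrument motions.
    """
    filtered = []
    state = hand_positions[0]
    for pos in hand_positions:
        # exponential moving average per axis removes rapid oscillations
        state = tuple((1 - smoothing) * s + smoothing * p
                      for s, p in zip(state, pos))
        filtered.append(tuple(motion_scale * s for s in state))
    return filtered

# A hypothetical hand trajectory with a small superimposed tremor.
samples = [(0.0, 0.0, 0.0), (1.0, 0.2, 0.0), (2.0, -0.2, 0.0), (3.0, 0.2, 0.0)]
print(filter_motion(samples))
```

The idea behind this kind of design is that smoothing attenuates high-frequency jitter while preserving the slower, deliberate trajectory, and scaling then maps centimeter-scale hand motion onto millimeter-scale instrument motion.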

Though the da Vinci system is widely used, its success still depends on the skill and experience of the operator. Surgical robotics engineer Azad Shademan and colleagues acknowledged this in a recent publication in Science, highlighting their successful design, manufacture, and use of the Smart Tissue Autonomous Robot (STAR). The STAR contains a complex imaging system for tracking the dynamic movement of soft tissue, as well as a custom algorithm that allows the robot to perform a fully autonomous suturing procedure. Shademan and colleagues demonstrated the effectiveness of their robot by having it perform stitching procedures on non-living pig tissue in an open surgical setting. Not only did the STAR succeed in these procedures, it outperformed the highly experienced surgeons it was pitted against. More information on the STAR can be found here.

In response to the da Vinci system, Google recently announced Verb Surgical, a joint-venture company with Johnson & Johnson. Verb aims to create “a new future, a future unimagined even a few years ago, which will involve machine learning, robotic surgery, instrumentation, advanced visualization, and data analytics.” Whereas the da Vinci system helps the surgeon perform small, precise movements, Verb will use artificial intelligence among other technologies to augment the surgeon’s view, providing information such as anatomy and the boundaries of structures such as tumors. A procedure assisted by the da Vinci system can increase the physical dexterity and mobility of the surgeon; Verb aims to achieve that and also give a “good” surgeon the knowledge and thinking modalities previously confined to expert surgeons, gathered over time through hundreds of surgeries. In a way, Verb could level the playing field in more ways than one, allowing all surgeons access to a vast knowledge base accumulated through machine learning.

As suggested by Tesla’s October announcement that its cars will ship with full self-driving hardware, autonomous robots are becoming integrated into society; surgery is no exception. A 2014 paper in the American Medical Association Journal of Ethics states that we can apply Isaac Asimov’s (author of I, Robot) three laws of robotics to robot-assisted surgery “if we acknowledge that the autonomy resides in the surgeon.” However, the policy discussion for fully autonomous robot surgeons is still emergent. In the case of malpractice, the doctor performing the operation is usually the responsible party; when you replace the doctor with an algorithm, where does the accountability lie? When a robot surgeon makes a mistake, one could argue that the human surgeon failed to step in when necessary or to supervise the surgery adequately. One could also argue that the manufacturer should bear responsibility for a malfunction during an automated surgery. Other possibilities include the programmers who designed the algorithms (like the stitching algorithm featured in the STAR), as well as the hospital housing the robot. This entry from a clinical robotics law blog highlights the aforementioned questions from a litigator’s standpoint.

A final talking point amid the dawn of autonomous surgical technology is the safeguarding of wireless connections to prevent hacking or unintended use of the machine during telesurgery. Telesurgery refers to the performance of an operation by a surgeon who is physically separated from the patient by a long distance, accomplished through wireless connections that are at times open and unsecured. In 2015, a team of researchers at the University of Washington probed the weaknesses of the procedure by hacking into a teleoperated surgical robot, the Raven II. The attacks highlighted vulnerabilities: flooding the robot with useless data made intended movements less fluid and even triggered an emergency stop mechanism. Findings such as these will help guide the future development and security of teleoperated surgical robots, their fully autonomous counterparts, and the policy that binds them.

When a web browser or computer application crashes, we simply hit restart, relying on autosave or some other mechanism to preserve our previous work. Unlike a computer, a human has no “refresh” button; any wrongful actions that harm the patient cannot be reversed, placing a far greater weight on all parties involved when a mistake is made. As it stands, the policy discussion for accountable, autonomous robots and algorithms is gaining much-needed momentum as said devices inch their way into society.

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

November 24, 2016 at 9:00 am

Posted in Essays
