Science Policy For All

Because science policy affects everyone.

Archive for November 2016

Science Policy Around the Web – November 25, 2016

leave a comment »

By: Alida Palmisano, PhD

Source: pixabay

Climate Change

2016 Set to Break Heat Record Despite Slowdown in Emissions

An article published in the Washington Post discusses recent news about climate change. Temperatures around the globe are reaching a record high this year, according to a report from the World Meteorological Organization (WMO), the U.N. weather agency. While emissions of a key global warming gas have flattened out over the past three years, the WMO's preliminary data through October show that world temperatures are 1.2 degrees Celsius (2.2 degrees Fahrenheit) above pre-industrial levels. That is getting close to the limit set by the global climate agreement adopted in Paris last year, which calls for keeping the temperature rise since the industrial revolution below 2 degrees C, or even 1.5 degrees C. Environmental groups and climate scientists said the report underscores the need to quickly reduce emissions of carbon dioxide and other greenhouse gases blamed for warming the planet.

Another recent report delivered some positive news, showing that global CO2 emissions have flattened out in the past three years. However, the authors of the study cautioned that it is far too early to declare the slowdown, driven mainly by declining coal use in China, a permanent trend. Even if China's emissions have stabilized, growth in India and other developing countries could push global CO2 levels higher again. The recent election in the United States, the world's No. 2 carbon polluter, could also have a significant impact.

Some researchers stressed that it’s not enough for global emissions to stabilize, saying they need to drop toward zero for the world to meet the goals of the Paris deal. “Worryingly, the reductions pledged by the nations under the Paris Agreement are not sufficient to achieve this,” said climate scientist Chris Rapley of University College London. (Karl Ritter, Washington Post)

Information and Technology

Facebook, Google Take Steps To Confront Fake News

Are we, as a society, really prepared for today’s way of receiving information from the web? In a recent article, NPR reporter Aarti Shahani talks about the issues related to viral fake news.

Facebook CEO Mark Zuckerberg has addressed, multiple times, the issue of fake news: inaccurate or outright false information that appears on the web in the guise of journalism. Zuckerberg said that the notion that fake news on his platform influenced the election in any way is "a pretty crazy idea." But many disagree, and former Facebook employee Antonio Garcia-Martinez says Zuckerberg sounds "more than a little disingenuous here." Facebook makes money by selling ad space inside its news feed. It also makes money as a broker between its advertisers and other online companies. A company spokesperson told NPR that it is not doing business with fake news apps, as these outside parties are not allowed to use the ad network. But the company did not address the reality that fake news in the Facebook news feed attracts people and clicks, which translate into money.

Google, another tech giant, said that it is working on a policy to keep its ads off fake news sites. Garcia-Martinez says that it’s “ambitious” of Google to make this promise. “Where does it end? Are they just going to limit it to advertising?” he asks. “Are they not going to show search results of things that are obviously false? I mean, even false content itself is free speech, even though it’s false speech.”

These issues are emerging in today's society because of technological advances; however, policy and legislation struggle to keep up with the evolving ways we interact with the world. (Aarti Shahani, NPR)

Public Health

Could the FDA be Dismantled Under Trump?

A recent article reflects on how the President-elect may change the work of the Food and Drug Administration (FDA). Shifts in public health policy may include a relaxation of the FDA's rules on off-label promotion of drugs, the importation of more drugs from other countries, and fewer requirements for clinical trials (the gold standard for determining whether medicines are safe and effective). "Between a Trump presidency and a radically pro-business Congress, the next few years may see a removal of numerous consumer protections," said Michael Jacobson, co-founder and president of the Center for Science in the Public Interest.

The FDA’s balancing act between patient protection and the drug and device industry’s push for a quicker path to market has never been easy. Over the past few years, spurred by patient advocacy groups and much of the pharmaceutical industry, lawmakers have fought over bills that would change how the country regulates prescription drugs and medical devices. Regardless of whether that legislation advances, Trump’s presidency is likely to enable the industry to get much of what it wants in terms of deregulation. “At the very least, President-elect Trump will support ‘Right-to-try’ laws that attempt to provide access to unapproved drugs,” the authors wrote.

One former FDA official, who spoke anonymously, said that the support for the right to try movement signals a broader disapproval of regulation. “The people who believe in that don’t believe there should be an FDA,” the former official said. Jacobson, of the Center for Science in the Public Interest, said that Congress could easily cut the FDA’s budget thereby “crippling programs to prevent foodborne infections, prevent dishonest food labels, and keep unsafe additives out of the food supply.” Others said even if he intends to overhaul the FDA, Trump may be surprised to find that there are limits to what he can do. “You can be against regulation all you want but the Food, Drug and Cosmetic Act is not something that is malleable within executive orders,” said Dr. Sidney Wolfe, founder and senior adviser to Public Citizen’s Health Research Group, which has long battled the agency for better patient protection. “There are laws, many laws, and it took a long time to get them.” (Sheila Kaplan, STAT)

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

November 25, 2016 at 9:00 am

Entrusting Your Life to Binary: The Increasing Popularity of Robotics in the Operating Room

leave a comment »

By: Sterling Payne, B.Sc.

Source: Flickr; by Medical Illustration, Wellcome Images, under Creative Commons

Minimally invasive surgery has been around since the late 20th century; however, technological advances have pushed robotic surgery to the forefront of medicine in the past 20 years. The term "minimally invasive" refers to performing a surgery through small, precise incisions, often at a distance from the surgical target, so that the procedure has less physical impact on the patient in terms of pain and recovery time. As one can imagine, surgeons must use small instruments during a minimally invasive procedure and operate with a high level of control in order to perform a successful operation. In light of these requirements, and due to fast-paced advances in robotics in the last decade, robots have become more common in the operating room. Though robotic systems can benefit all parties involved when used correctly, several policy questions accompany the robotic advance and the goal of fully autonomous surgery.

The da Vinci system is one of the most popular devices used for minimally invasive surgeries and was approved by the FDA in 2000 for use in surgical procedures. The newest model, the da Vinci Xi® System, includes four separate robotic arms that operate a camera and multiple arrays of tools. The camera projects a 3D view of the surgical field onto a monitor for the surgeon, who in turn operates the other three arms to perform highly precise movements. The da Vinci arms and instruments give the surgeon more control via additional degrees of freedom (less restricted movement) and features such as tremor reduction.

Though the da Vinci system is widely used, its success still depends on the skill and experience of the operator. Surgical robotics engineer Azad Shademan and colleagues acknowledged this in a recent publication in Science Translational Medicine, highlighting their successful design, manufacture, and use of the Smart Tissue Autonomous Robot (STAR). The STAR contains a complex imaging system for tracking the dynamic movement of soft tissue, as well as a custom algorithm that allows the robot to perform a fully autonomous suturing procedure. Shademan and colleagues demonstrated the effectiveness of their robot by having it perform stitching procedures on non-living pig tissue in an open surgical setting. Not only did the STAR complete these procedures, it outperformed the highly experienced surgeons it was pitted against. More information on the STAR can be found here.

In response to the da Vinci system, Google recently announced Verb Surgical, a joint venture with Johnson & Johnson. Verb aims to create "a new future, a future unimagined even a few years ago, which will involve machine learning, robotic surgery, instrumentation, advanced visualization, and data analytics". Whereas the da Vinci system helps the surgeon perform small, precise movements, Verb will use artificial intelligence, among other technologies, to augment the surgeon's view, providing information such as anatomical landmarks and the boundaries of structures such as tumors. A da Vinci-assisted procedure can increase the surgeon's physical dexterity and mobility; Verb aims to achieve that and also give a "good" surgeon the knowledge and thinking that expert surgeons accumulate over hundreds of operations. In a way, Verb could level the playing field in more ways than one, giving all surgeons access to a vast knowledge base accumulated through machine learning.

As Tesla's announcement in October of self-driving hardware for all of its new vehicles suggests, autonomous robots are becoming integrated into society; surgery is no exception. A 2014 paper in the American Medical Association Journal of Ethics states that we can apply Isaac Asimov's (author of I, Robot) three laws of robotics to robot-assisted surgery "if we acknowledge that the autonomy resides in the surgeon". However, the policy discussion around fully autonomous robot surgeons is still emerging. In a case of malpractice, the doctor performing the operation is usually the responsible party. When an algorithm replaces the doctor, where does the accountability lie? When a robot surgeon makes a mistake, one could argue that the human surgeon failed to step in when necessary or to supervise the surgery adequately. One could also argue that the manufacturer should bear responsibility for a malfunction during an automated surgery. Other candidates include the programmers who designed the algorithms (like the stitching algorithm featured in the STAR), as well as the hospital housing the robot. This entry from a clinical robotics law blog highlights the aforementioned questions from a litigator's standpoint.

A final talking point amid the dawn of autonomous surgical technology is the safeguarding of wireless connections to prevent "hacking" or unintended use of the machine during telesurgery. Telesurgery refers to the performance of an operation by a surgeon who is physically separated from the patient by a long distance, accomplished through wireless connections that are at times open and unsecured. In 2015, a team of researchers at the University of Washington addressed the weaknesses of the procedure by hacking into a teleoperated surgical robot, the Raven II. The attacks highlighted vulnerabilities: flooding the robot with useless data made intended movements less fluid and even triggered an emergency stop mechanism. Findings such as these will help guide the future development and security of teleoperated surgical robots, their fully autonomous counterparts, and the policy that binds them.

When a web browser or computer application crashes, we simply hit restart, relying on autosave or some other mechanism to preserve our previous work. Unlike a computer, a human has no “refresh” button; any wrongful actions that harm the patient cannot be reversed, placing a far greater weight on all parties involved when a mistake is made. As it stands, the policy discussion for accountable, autonomous robots and algorithms is gaining much-needed momentum as said devices inch their way into society.

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

November 24, 2016 at 9:00 am

Posted in Essays


Science Policy Around the Web – November 22, 2016

leave a comment »

By: Rachel Smallwood, PhD

Photo source: pixabay

Federal Research Funding

US R&D Spending at All-Time High, Federal Share Reaches Record Low

Recently released data from the National Science Foundation (NSF) show a continuing upward trend in US scientific research funding over the past several years. Estimates of the total funding for 2015 put the value at an all-time high for research and development (R&D) funding for any country in a single year. In 2009, President Obama set a goal of devoting 3% of the USA's gross domestic product (GDP) to research, and progress toward that goal has been slow; in 2015, 2.78% of GDP went to research. Businesses accounted for the largest portion of overall scientific funding, contributing 69% of the funds. The second largest contributor was the federal government; however, its percentage share of the total was the lowest since the NSF started tracking funding in 1953, and the actual dollar amount contributed has been declining since 2011. Therefore, although the overall percentage of GDP going to research is increasing, that increase is driven by businesses, whereas the share of GDP contributed by the federal government has dropped to roughly 0.6%.
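A rough back-of-the-envelope check, using only the figures quoted above (the exact NSF tables may differ slightly), shows how a record-high total can coexist with a record-low federal share:

```python
# Back-of-the-envelope check using only the figures quoted above;
# these are illustrative values, not the official NSF tables.
total_rd_pct_of_gdp = 2.78    # total US R&D in 2015, as % of GDP
federal_rd_pct_of_gdp = 0.6   # approximate federal R&D contribution, as % of GDP
business_share_of_total = 69  # % of total R&D funded by businesses

federal_share_of_total = 100 * federal_rd_pct_of_gdp / total_rd_pct_of_gdp
print(f"Implied federal share of total R&D: ~{federal_share_of_total:.0f}%")  # roughly 22%
print(f"Business share of total R&D: {business_share_of_total}%")
```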

When taking a closer look at types of research, the federal government is the largest funding source for basic science research, covering 45% of the total. However, businesses provide the majority of the funding for applied research (52% in 2014) and experimental development (82% in 2014). This disproportionality in funding types, combined with the decrease in federal research spending, is concerning for the basic science field. There is more competition for less money, and the concern is compounded by uncertainty about President-elect Trump's position on and plans for scientific funding. Aside from a couple of issues, primarily concerning climate change and the environment, he has said very little about science and research. Many scientists, institutions, and concerned citizens will be watching closely to see how science policy develops under Trump's administration and how it affects federal spending and beyond. (Mike Henry, American Institute of Physics)

Biomedical Research

‘Minibrains’ Could Help Drug Discovery for Zika and for Alzheimer’s

A group of researchers at Johns Hopkins University (JHU) is working on a promising tool for evaluating disease and drug effects in humans without actually using humans for the tests. 'Minibrains' are clusters of human neural cells that originated as skin cells, which were reprogrammed to an earlier stage of development and then induced to differentiate into neural cells. They mimic the human brain in terms of cell types and connections, but they will never be anywhere near as large as a human brain and can never learn or become conscious.

A presentation earlier this year at the American Association for the Advancement of Science conference showcased the potential utility of minibrains. A large majority of drugs that are tested in animals fail when introduced in humans. Minibrains provide a way to test these drugs in human tissue at a much earlier stage – saving time, money, and animal testing – without risking harm to humans. Minibrains for testing biocompatibility can be made from the skin cells of healthy people, while skin cells from people with particular diseases or genetic traits can be used to study disease effects.

A presentation at the Society for Neuroscience conference this month demonstrated one such application – Zika. The minibrains' growth is similar to fetal brain growth during early pregnancy. Using the minibrains, Dr. Hongjun Song's team at JHU was able to see how the Zika virus affected the cells; the affected minibrains were much smaller than normal, a result that appears analogous to the microcephaly observed in infants whose mothers were infected with Zika during the first trimester.

Other presentations at the meeting showcased work from several research groups that are already using minibrains to study diseases and disorders including brain cancer, Down syndrome, and Rett syndrome, and plans are underway to apply them to autism, schizophrenia, and Alzheimer's disease. Though there might be a bit of an acceptance curve with the general public, minibrains potentially offer an avenue of testing that better represents actual human cell behavior and response, is safer and more affordable, and reduces the need for animal testing. (Jon Hamilton, NPR)

Health Policy

A Twist on ‘Involuntary Commitment’: Some Heroin Users Request It

The opioid addiction epidemic has become a significant healthcare crisis in the United States. Just last week the US Surgeon General announced plans to target addiction and substance abuse. He also stated the desire for a change in perception of addiction – it is a medical condition rather than a moral or character flaw. Earlier this year, the Centers for Disease Control published guidelines that address opioid prescribing practices for chronic pain, strongly urging physicians to exhaust non-pharmacologic options before utilizing opioids. In response to the rising concern over prescription opioid abuse, steps have been taken to reduce prescriptions and access. This has resulted in many turning to heroin – which is usually a cheaper alternative anyway – to get their opioid fix.

One of the first steps in treatment and recovery for addiction and dependence is detoxing. However, opioids are highly addictive and many people struggle with the temptation to relapse. Additionally, many of the programs designed to help with the initial detox have long wait lists, are expensive, and may not be covered by insurance, further deterring those with addiction and dependence from getting the help they need. These factors have led many to turn to their states, asking to be civilly committed to a program on the basis that they are a danger to themselves or others because of their substance abuse. This is currently an option in 38 states. These programs can be housed in either privately run institutions or state prisons. The practice is controversial, however, because if a person's insurance does not cover the stay, it falls to taxpayers to foot the bill. While this is unpopular with some, advocates say civil commitment laws are an important option when there is no other immediate way for an individual to get help. (Karen Brown, NPR)

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

November 22, 2016 at 9:00 am

Science Policy Around the Web – November 18, 2016

leave a comment »

By: Thaddeus Davenport, PhD

Source: pixabay

2016 Presidential Elections

How the Trump Administration Might Impact Science

Donald Trump is now the President-elect of the United States of America. Mr. Trump's loose speaking (and tweeting) style, affinity for controversy, relative disregard for facts, and lack of experience in domestic and foreign policy led him to make a number of vague, and sometimes contradictory, statements about his specific policy positions over the course of his campaign. In light of this, there are few people on earth – and perhaps none, including Mr. Trump – who know exactly what to expect from his presidency. In Nature News last week, Sara Reardon, Jeff Tollefson, Alexandra Witze and Lauren Morello considered how Mr. Trump's presidency might affect science, focusing on what is known about his positions on biomedical research, climate change, the space program, and immigration. The authors' analyses are summarized below:

Biomedical Research – Mr. Trump will be in a position to undo the executive order signed by President Obama in 2009, which eased some restrictions on work with human embryonic stem cells, a decision criticized at the time by the current vice president-elect, Mike Pence. In his characteristically brash speaking style, Mr. Trump also called the NIH 'terrible' in a radio interview last year, but beyond this, he has said little about his plans for biomedical research.

Climate Change – Early signs suggest that Mr. Trump will dramatically shift the direction of the Environmental Protection Agency (EPA) and undo some of its work to curb greenhouse gas emissions under the Clean Power Plan implemented by President Obama. Mr. Trump has already appointed Myron Ebell, a denier of climate change, to lead the transition at the EPA and other federal agencies involved in climate change and environmental policy. Mr. Trump has also vowed to pull out of the Paris climate agreement; under the terms of the agreement, withdrawal cannot happen immediately, but the threat may influence how, and whether, other countries participate in the agreement in the future.

Space Program – Based on writings from Trump's campaign advisers, there may be continued support for deep space exploration, especially through public-private partnerships with companies such as Orbital and SpaceX, but not for earth observation and climate monitoring programs, which account for roughly one third of NASA's science budget.

Immigration – A central pillar of Mr. Trump's campaign was his strong and divisive stance on immigration. He has vowed to build a wall on the US border with Mexico, deport millions of illegal immigrants, defund 'sanctuary cities' throughout the United States, impose "extreme vetting" of immigrants, stop immigration from countries where "adequate screening cannot occur", which he believes include Syria and Libya, and set new "caps" on legal immigration into the United States. These proposals have drawn objections from human rights advocates, and scientists worry that they may discourage international students and researchers from working in, and contributing their expertise to, the United States.

It remains to be seen how Mr. Trump will shape the future of science in the United States and the world, but it is clear that he is taking office at a pivotal moment. He would do well to seriously consider how his policies and his words will impact research, discovery, and innovation within the United States, and more importantly, the long-term health of vulnerable populations, economies, and ecosystems around the globe. (Sara Reardon, Jeff Tollefson, Alexandra Witze and Lauren Morello, Nature News)

Public Health

Soda Taxes on the Ballot

Given the focus that has been placed on the outcome of the presidential election, you may not have heard about the results of smaller ballot items, including decisions to begin taxing sodas in four US cities – San Francisco, Oakland, and Albany, California, and Boulder, Colorado – as reported by Margot Sanger-Katz for the New York Times. These cities join Berkeley, California and Philadelphia, Pennsylvania, which passed soda taxes of their own in 2014 and June 2016, respectively. The victory for proponents of soda taxes came after a costly campaign, with total spending in the Bay Area on the order of $50 million. Former New York City mayor Michael Bloomberg and Laura and John Arnold spent heavily in support of taxing sodas, but did not equal the spending by the soda industry, which opposed the taxes. During his time as mayor, Mr. Bloomberg attempted to ban the sale of sodas larger than 16 ounces in New York City in 2012, but the ban was struck down by the New York State Court of Appeals in 2014.

Soda tax advocates see the outcome of this year's ballot initiatives as a sign of a sea change in public acceptance of programs intended to discourage soda consumption (and increase revenue for municipalities), but it is indisputable, especially in light of the results of the presidential election, that the set of relatively liberal cities that have adopted soda tax measures does not accurately represent the thinking of people throughout the United States. Though it is still too early to know whether soda tax programs lead to improvements in public health, evidence from Berkeley and Mexico – which passed a soda tax in 2013 – indicates that these programs have the potential to decrease soda consumption. Regardless of how similar initiatives may fare in other cities on future ballots, the increasing number of cities participating in soda tax programs will provide valuable data to inform policy decisions aimed at reducing obesity and diabetes. (Margot Sanger-Katz, New York Times)

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

November 18, 2016 at 9:00 am

How Much Neuroscience Funding is the Right Amount?

leave a comment »

By: Brian Russ, PhD

Source: pixabay

Scientific funding can be a very tricky proposition. Unfortunately, a finite amount of money is put towards science each funding cycle. This means that at any given time funding agencies need to decide where they believe their funds will be best spent. Every funding cycle, one can find different groups lamenting that their favorite topic is "being underfunded" while some other group is getting "too big a piece of the pie". There is often no right answer to how much funding different topics should receive, and the likelihood is that at the end of the day every group will feel that it is not getting its fair share of respect and funding.

This debate has come to the forefront recently in the fields of psychiatry and neuroscience with a change in leadership at the National Institute of Mental Health (NIMH). In September, Dr. Joshua Gordon became the new director of the NIMH. Dr. Gordon's directorship comes after a 13-year period of leadership by Dr. Tom Insel. During the previous administration, there had been an increasing focus on funding neuroscience-related work, often at the expense of purely behavioral work, such as cognitive behavioral therapies for psychiatric patients. It is important to point out that the NIMH's definition of neuroscience research includes basic, translational, and clinical neuroscience research. This direction led to a new research framework for studying mental health disorders, termed the Research Domain Criteria (RDoC), which has a very strong neuroscience component. The goal of RDoC is to provide a new framework in which researchers and clinicians can study and treat mental health disorders. The RDoC framework involves neuroscience components, such as brain circuits and physiology, and cognitive components, such as behavior and self-reports. The end goal is to provide a more comprehensive description of mental health disorders with the intention of developing cures and treatments. This push toward RDoC, and more neuroscience in general, has led to both praise and criticism of where the NIMH is directing its funding opportunities.

Recently, an opinion piece published in the New York Times argued that the NIMH needs to reverse its push towards more neuroscience. Specifically, Dr. Markowitz, a research psychiatrist at Columbia University, believes that the NIMH has been funding neuroscience at the expense of clinical psychological research that lacks a brain-oriented component. His argument is that in the current funding environment only 10% of the NIMH's research budget goes towards clinical research; from the content of his article, the research he means involves behavioral studies and interventions that contain no neuroscience component. Dr. Markowitz brings up many important points, and his main thesis, that we cannot forget about behavioral interventions while pursuing the biological bases of clinical disorders, is critical. For example, he makes the strong point that neuroscience research is unlikely to help solve the problem of suicide. His final argument is for a "more balanced approach to funding clinical and neuroscience research."

However, one can argue about what that balance should actually look like. Is ten percent of the budget actually a small amount? And does that number include the multitude of basic neuroscience studies that are investigating the neural underpinnings of a given disorder? For example, based on NIH RePORTER, schizophrenia research has been funded at approximately $250 million for each of the last three years. A quick look at the total budget ($32.3 billion in 2016, with roughly $25 billion going to research grants) suggests that this is on the order of 1% of the total NIH research budget. This is only one disease, and the calculation uses the whole NIH budget, not just the NIMH budget. Only a portion of that funding goes towards clinical research as Dr. Markowitz would define it; however, the rest goes to research that will in all likelihood provide clinical benefits to patients down the road, in the form of new physiological targets or potentially new drugs.
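The back-of-the-envelope arithmetic behind that estimate, using only the figures quoted in this essay (illustrative values, not official NIH accounting), looks like this:

```python
# Rough check of the budget arithmetic above; the dollar amounts are the
# approximate figures quoted in this essay, not official NIH accounting.
schizophrenia_funding = 250e6   # ~$250 million per year (NIH RePORTER estimate)
nih_research_grants = 25e9      # ~$25 billion of the $32.3 billion 2016 NIH budget

share = 100 * schizophrenia_funding / nih_research_grants
print(f"Schizophrenia research as a share of NIH research grants: ~{share:.0f}%")  # ~1%
```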

So how can one determine the correct amount of funding that should go towards different mental health fields? Should 25% or 50% of the budget go towards clinical research? Comparing the percentage of money going to clinical research versus neuroscience is simply a bad comparison. Neuroscience is not one homogenous topic; it includes tens of, if not over a hundred, different fields. The various mental health fields fighting each other over funding doesn't help anyone. Both neuroscience and clinical research need to be funded. It seems that the best way to divide NIMH funding would be not to specify which field gets priority but instead to fund the best grants, regardless of whether they include a neuroscience component. This would open the door to more clinical research while not requiring a shift in the priorities of the NIMH, whose mission is to understand and treat mental illnesses through both basic and clinical research. For instance, RDoC already contains both behavioral and self-report components; these should be given as much priority as the neuroscience components. If behavioral work ends up with 10% of the budget allocated this way, that would seem reasonable, and possibly even more than other individual areas receive.

On a final note, while we should always be looking internally at how we are funding different types of science, and at whether we, the public, are getting our money's worth out of projects, it is also important to ensure that science funding as a whole is increasing. The funding environment has been relatively static for years. We need, through advocacy and outreach, to get the public and government to provide more funding opportunities to the NIH. As the saying goes, "a rising tide lifts all boats".

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

November 17, 2016 at 6:53 pm

Science Policy Around the Web – November 15, 2016

leave a comment »

By: Sarah Hawes, PhD

Source: PHIL

Zika

Florida voters weigh in on GM mosquito releases: What are the issues?

Concern over the mosquito-borne Zika virus arriving in the United States this year spurred rapid allocation of resources toward identifying solutions. Clinical trials are just beginning for a traditional, attenuated vaccine, while parallel efforts include research into injecting small DNA segments that effectively vaccinate by engaging a patient's own cells to produce harmless, Zika-like proteins. However, the risk of severe birth defects in infants born to Zika-infected mothers is a powerful incentive for expediency. One answer lies in the use of genetically modified (GM) mosquitoes, which reduce vector numbers by breeding with the wild population. In August, the Food and Drug Administration (FDA) agreed for the first time to a release of GM mosquitoes in the U.S.

The GM mosquitoes in question are almost exclusively non-biting males of the Zika vector species Aedes aegypti, modified by the British biotech company Oxitec to carry a gene that prevents their offspring from reaching sexual maturity. Oxitec has used similar techniques successfully since 2009 in the Cayman Islands, Malaysia, Brazil, and Panama. A document prepared by the FDA Center for Veterinary Medicine examines myriad concerns and determines program risks to be negligible. It includes ecosystem reports showing a lack of predators reliant on the invasive Aedes aegypti, and explains that there is no recognized mechanism by which the genome-integrated transgene could affect or spread among other species. However, a small percentage of GM mosquitoes survive to adulthood and could transfer modified genes (or transgene resistance) to next-generation Aedes aegypti. In addition, some fear that reducing the population of one disease-carrying mosquito species will make way for another, such as Aedes albopictus, which is also capable of carrying Zika, dengue, and chikungunya.

On Election Day, the final word on whether or not to release Oxitec's GM mosquitoes was given to voters living in the proposed release site, the small peninsula neighborhood of Key Haven, Florida, and in surrounding Monroe County. Countywide, 58 percent of voters favored release. Within Key Haven, 65 percent opposed it. Given this divide, the decision now rests with the Florida Keys Mosquito Control District board. (Kelly Servick, Science Insider)

HIV Vaccine

Controversial HIV vaccine strategy gets a second chance

The first participants in a $130 million HIV vaccine study, funded primarily by the National Institute of Allergy and Infectious Diseases (NIAID) and the Bill & Melinda Gates Foundation, received injections last week in South Africa. The study is a modified repeat of a trial conducted in Thailand seven years ago, which enrolled nearly three times as many participants and reported a modest 31.2% risk reduction through vaccination. In a nation with 6 million HIV-positive people, even that result would be valuable if reproduced, but there is concern that alterations to the vaccine intended to boost its efficacy could have the opposite effect.

No mechanism has been found for the vaccine's efficacy in Thailand, making it hard to improve on. In hopes of extending the duration of protection, twice the amount of an HIV surface protein will be given. A canarypox virus that carried pieces of HIV strains common in Thailand seven years ago (targets on which to hone the body's immunity) has instead been loaded with strains common in South Africa. Finally, a stronger immune stimulant, or "adjuvant," is included in the injection. However, in May, a study by National Cancer Institute vaccine researcher Genoveffa Franchini found that monkeys were protected by the old adjuvant but not by the new one. Franchini suggests that the new adjuvant may even leave vaccinated persons more susceptible to infection. The South African study's leader, Glenda Gray, says that Franchini makes a "compelling" argument for adding a group that repeats use of the old adjuvant, if more money can be found.

The scale of South Africa's AIDS epidemic (18% of global cases) makes it easy to sympathize with the perspective held by Gray, who said, "Someone has to put their stake in the ground and have the courage to move forward, knowing we might fail." At the same time, one would hope that the use of $130 million in HIV research funds is driven more by quality medical science than by desperation and action bias. (Jon Cohen, Science Magazine)

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

November 15, 2016 at 9:45 am

Drug policy changes: Marijuana legalization and its therapeutic potential

leave a comment »

By: Kseniya Golovnina, PhD

Source: Flickr, under Creative Commons

On November 8th, 2016, nine states voted on legalizing recreational or medical marijuana (Cannabis L.). All US presidential candidates supported relaxing current restrictions on marijuana use. Since 2011, more than 50% of Americans have consistently shown positive attitudes towards legalizing marijuana. It is estimated that marijuana industry tax revenues for federal, state, and local governments could total up to $28 billion. In addition to tax revenues, the non-profit advocacy group the Drug Policy Alliance argues that marijuana legalization would reduce harm to young people and people of color, create new jobs, save money on law enforcement, and promote the development of tests for drug impairment.

One of the challenges in marijuana regulation is how it is generally perceived: either as a drug or as harmless recreation. On the one hand, it is a central component of the long-standing 'war on drugs' that is a primary part of US law enforcement. Under federal law, marijuana remains classified as a Schedule I substance under the Comprehensive Drug Abuse Prevention and Control Act of 1970, alongside heroin. On the other hand, prominent thinkers argue that it is a drug of choice with no known lethal case, one that can produce serenity and insight, and that it should be regulated like alcohol and tobacco. Recent policy shifts will strike a new balance between these views.

While marijuana is prohibited at the federal level, decriminalization laws have been passed in several states by lawmakers, and often through public ballot measures. Cannabis is legal for medical use in 25 states and for recreational use in 5. Of the 9 states that voted on November 8, only Arizona did not support its marijuana initiative. In 2013, the Obama administration clarified federal marijuana enforcement to deemphasize some criminal behavior and remain in harmony with new and evolving state laws. The US Congress is acting as well, with a bill introduced (the CARERS Act) intended to remove conflicts between state and federal laws.

Marijuana in science

As these recent political trends clearly show, public attitudes have been shifting rapidly, and legalization appears to be only a matter of time. From a scientific point of view, legalization of Cannabis will open the door for robust, federally approved research on marijuana's therapeutic value. The reasonable scientific question now is whether, and to what degree, Cannabis can be a real new frontier of therapeutics.

The chemical science of marijuana started with the identification of THC (delta-9-tetrahydrocannabinol) as its main active ingredient. Today, more than 460 chemicals are known to be Cannabis constituents, more than 60 of which are grouped under the name cannabinoids. In the early 1990s, cannabinoid (CB) receptors were discovered and cloned. Cannabinoids, along with their receptors, make up the endocannabinoid (EC) system, which participates in the regulation of neurotransmission. Surprisingly, a number of chocolate-derived chemicals can activate the human cannabinoid system, both directly and indirectly, suggesting that chocolate and marijuana can have overlapping effects. The identification of the natural agonists anandamide and 2-arachidonoylglycerol, which also act on CB receptors, has stimulated interest in the medical uses of Cannabis. On PubMed, the number of publications with the term "cannabis" has increased from 71 in 1990 to 1,195 in 2016, revealing both unexpected therapeutic horizons for Cannabis and warnings about its effects on the adolescent brain.

A 2003 review on cannabinoids as potential anticancer agents reported that "cannabinoids have favorable drug-safety profiles and do not produce the generalized toxic effects of conventional chemotherapies." Thirteen years later, in 2016, the evidence for cancer therapy using cannabinoids remains paradoxical but continues to accumulate. In 2006, based on an analysis of 72 controlled studies evaluating the therapeutic effects of cannabinoids, it was shown that "cannabinoids present an interesting therapeutic potential as antiemetics, appetite stimulants in debilitating diseases (cancer and AIDS), analgesics, and in the treatment of multiple sclerosis, spinal cord injuries, Tourette's syndrome, epilepsy and glaucoma". A potential antipsychotic effect of cannabidiol was also reported in 2012. At the 2015 AAAS Annual Meeting, researcher Mark Ware from McGill University Health Centre in Montreal, Canada, reported that "it's clear that the weight of evidence now is such that cannabinoids are analgesic drugs," while also emphasizing that more studies are needed to understand the best dosing and delivery methods for medical use.

A search of ClinicalTrials.gov, the registry maintained by the National Institutes of Health, showed 557 clinical trials with 'known status' for the term "cannabis" as of October 26, 2016. More than one hundred of them are currently open. Topics for these studies range from Cannabis abuse to new treatments for a variety of medical conditions such as schizophrenia, cancer, autoimmune diseases, epilepsy, and musculoskeletal diseases. For example, GW Pharmaceuticals Ltd. has been conducting clinical trials of nabiximols (trade name Sativex) to investigate its safety in treating cancer pain. However, out of ten cannabis-related drugs on the world market, only three (including Sativex) are approved for medical use in the US.
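For readers who want to reproduce that kind of search programmatically, here is a minimal sketch. It assumes the current ClinicalTrials.gov REST API (v2), which post-dates this post; the endpoint, parameter names, and counts are assumptions and will differ from the 2016 interface and from the 557 trials cited above.

```python
# Minimal sketch: count registered studies matching "cannabis".
# Assumes the ClinicalTrials.gov API v2; the field names are an assumption and
# the result will not match the 2016 figure quoted in the text.
import requests

resp = requests.get(
    "https://clinicaltrials.gov/api/v2/studies",
    params={"query.term": "cannabis", "countTotal": "true", "pageSize": 1},
    timeout=30,
)
resp.raise_for_status()
print("Registered studies matching 'cannabis':", resp.json().get("totalCount"))
```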

Legalization, public interest, and scientific research on Cannabis have prompted regulatory agencies such as the Food and Drug Administration (FDA) to develop new policies and guidance. The official FDA website states that "the FDA supports researchers who conduct adequate and well-controlled clinical trials which may lead to the development of safe and effective marijuana products to treat medical conditions." The non-profit US Pharmacopeial Convention (USP), a recognized leader in developing and controlling drug standards, has organized a Cannabis expert committee to develop USP standards for medical Cannabis, with the aim of setting quality specifications for the Cannabis used in clinical studies.

While the frontier of science appears to be opening for Cannabis in the US, the regulatory regime will need to keep pace. As medical use legalization proliferates, there will be a strong, even urgent need to revamp regulation to accommodate and emphasize research and best uses. Until the regulations are properly developed there will be some uncomfortable unknowns from a public health perspective, leading to greater risks and missed benefits.

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

November 14, 2016 at 9:00 am

Posted in Essays


Science Policy Around the Web – November 8, 2016

leave a comment »

By: Saurav Seshadri, PhD

Source: pixabay

Mental Health Research

FET Flagships: lessons learnt

The European Commission (EC), the executive body of the European Union, launched two major Future and Emerging Technologies (FET) Flagship projects in 2013, with funding of about 1 billion euros each. Both aim to foster collaboration and scientific innovation over a period of 10 years, in the fields of neuroscience (the Human Brain Project or HBP) and material chemistry (the Graphene Flagship). As these projects transition to the infrastructure construction phase, which has been funded for the next two years, the EC has released a statement reflecting on the first, ‘ramp-up’ phase of these high-level initiatives.

The assessment of the Flagships' success is uniformly positive; the Directors write that they 'create amazing collaboration opportunities', 'mobilis[e]…enthusiastic young researchers', and 'spread an innovation mind-set in Europe'. The EC expects this evaluation to be corroborated by an independent review by a panel of experts, due to be published in early 2017. Key insights from the initial phase include the power of flagships to foster international community-building, the importance of balanced and transparent governance and management, and the need to fine-tune the size and composition of the Consortium of funding entities over time.

A driving force behind some of these lessons is the controversy that has faced the HBP almost since its inception: in 2014, a group of leading neuroscientists sent a protest letter to the EC stating that the HBP was 'not a well-conceived or implemented project'. The letter now has more than 800 signatures and led to the formation of a mediation committee, based on whose recommendations the HBP dissolved its executive board and significantly changed its scientific focus in 2015. The recent release of long-gestating computational tools has also helped address criticism. In navigating these challenges and moving forward, the HBP merits attention for its similarity to the United States' own BRAIN Initiative in scope, methodology, and scale. (European Commission)

HIV/AIDS

HIV’s patient zero exonerated

Gaetan Dugas was a French Canadian airline steward whose cooperation with CDC researchers helped identify sexual contact as a key step in HIV transmission in 1984. Unfortunately, this contribution earned him the label of ‘Patient Zero’ for HIV in the United States, which, along with an influential book that portrayed him as an unrepentant and malicious spreader of the disease, led to his widespread condemnation. On a larger scale, this characterization of the epidemic was a setback in the fight against homophobia, even at the policy level: in 1988, a Presidential Commission on HIV recommended that behavior among gay men that ‘fail[s] to comply with clearly set standards of conduct’ be criminalized.

However, a recent study in Nature has found ‘neither biological nor historical evidence’ that Dugas was the primary case of HIV in the US. The authors used a highly sensitive method to recover and sequence viral RNA from samples collected in the late 1970s, which revealed that HIV most likely jumped from Africa to the US via the Caribbean in approximately 1971, and that Dugas’ HIV genome was typical of US cases far downstream of ancestral strains. Ignorance may explain how scientists and the public in the 1980s came to scapegoat Dugas: with our current understanding of HIV’s long incubation period, it appears possible that many of Dugas’ partners could have contracted the disease years before they met him.

According to Dr. Anthony Fauci, director of NIAID, “The history of diseases has always been, in part, that someone needs to be blamed.” This study highlights the scientific and ethical pitfalls inherent to this mentality. (Sara Reardon, Nature News)

Schizophrenia

Schizophrenia secrets found hidden in the folds of DNA

Schizophrenia is known to be highly heritable, but the individual genes conferring risk for the disease have remained elusive. Advances in sequencing capabilities have allowed researchers to vastly increase the statistical power of studies aimed at identifying these genes: one such large-scale effort, the Psychiatric Genomics Consortium (PGC), identified over 100 common variants associated with schizophrenia, by using more than 36,000 cases and 100,000 controls. While progress has been made in understanding how some of these mutations contribute to changes in gene expression and brain network development, the majority remain unexplained. One obstacle is the fact that many of the loci are in regulatory regions, often without any obvious nearby target gene.

A recent study from Dr. Daniel Geschwind’s group at UCLA addresses this problem by showing that many non-coding variants identified by the PGC actually do contact genes involved in brain development, when the 3-dimensional structure of chromatin is taken into account. The authors used a cutting-edge technique called chromosome conformation capture to generate high-resolution maps of physical interactions between regulatory regions and genes. This approach revealed that loci of previously indeterminate function may in fact influence pathways linked to schizophrenia, including neurogenesis and cholinergic signaling.

Coming on the heels of another study, which used whole-exome sequencing in about 5,000 cases to show that rare variants contribute to risk for schizophrenia, these findings represent significant progress towards understanding the mechanistic implications of genome-scale data in psychiatric disorders. This understanding is a key step towards using such data to develop personalized treatment strategies, which may be a priority for Dr. Geschwind, as he was appointed head of precision medicine efforts in the UCLA Health System in March. The above approach can also be generalized to other neurodevelopmental disorders (a prime candidate is autism, for which Dr. Geschwind helped establish the world's largest gene bank), and it holds great promise for the future of care for these devastating diseases. (Tim Newman, Medical News Today)

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

November 8, 2016 at 9:00 am

Science Policy Around the Web – November 4, 2016

leave a comment »

By: Courtney Kurtyka, PhD

Source: Flickr, by Wellington College, under Creative Commons

Science Education

Unexpected results regarding U.S. students’ science education released

The National Assessment of Educational Progress (NAEP) is a nationwide exam and survey used in the United States to ascertain student knowledge and education in key areas. Recently, the 2015 science results from fourth, eighth, and twelfth graders were released and showed some surprising outcomes. Out of seven hands-on activities that students were asked whether they completed as part of their curriculum, only one (simple machines) showed a positive correlation between activity participation and scores on the exam. Some activities (such as using a microscope or working with chemicals) showed no correlation with exam scores, while students who engaged in activities such as handling rocks and minerals actually performed worse than students who did not. Furthermore, fewer students engage in scientific activities as part of their curriculum than one might expect. For example, 58% said that they never used simple machines in class, while 62% said they never or rarely work with "living things".

An anonymous expert on the assessment suggested that one potential explanation for these unexpected results is that the assessment asks whether students completed any of these activities "this year". For the twelfth-grade results, therefore, students who use rocks and minerals in class tend to be in lower-level science courses and are less likely to perform as well on the exam as students in higher-level courses, which would not include that activity. However, this does not account for the low rate of reported scientific activities overall.

Another concerning aspect of the exam is the reporting of the results. The National Center for Education Statistics (NCES), which manages the NAEP, operates a website that is both difficult to use and incomplete. In fact, the drop-down menu of survey results lists only the activities that have positive correlations with test scores. NCES has said that it shows results based on what it thinks will be of greatest interest to the public.

While some cite the positive results as a reflection of the success of active-learning techniques, others note that 40% of twelfth graders who took the NAEP did not have a "basic" knowledge of science. Additionally, these results are interesting to many because the twelfth graders are the first students to have spent their entire education under No Child Left Behind, which mandated annual assessment of reading and math for third through eighth graders. Since many have argued that this law leaves less room for teaching topics that are not tested (such as science), examining students' scientific performance under these guidelines is important. (Jeffrey Mervis, Science Magazine)

Health Disparities

Sexual and gender minorities are officially recognized as a minority health population

The National Institute on Minority Health and Health Disparities (NIMHD), one of the institutes and centers within the National Institutes of Health, recently officially recognized sexual and gender minorities (SGM) as a distinct minority health population. The SGM population is very diverse, including lesbian, gay, bisexual, and transgender communities, as well as people whose sexual orientation or gender identity differs from traditional, cultural, or other norms.

Multiple health disparities (meaning that the likelihood of disease and of death from particular diseases and disorders in that group differs from that of the general population) have been identified in the SGM population. Examples include a lower likelihood that women who have sex with women will get Pap smears and mammograms, and higher rates of depression, panic attacks, and psychological distress among gay and bisexual men.

The NIH previously commissioned a report on SGM health, published in 2011, and later created the Sexual and Gender Minority Research Office (SGMRO) in response to the report's findings. Now, the official designation will allow researchers focused on SGM health to apply for health disparity funding from the NIH, and Karen Parker, director of the SGMRO, said she hopes it will lead to increased interest in applications to support health research related to this population. (Nicole Wetsman, STAT)

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

November 4, 2016 at 9:00 am

Science Policy Around the Web – November 01, 2016

leave a comment »

By: Eric Cheng, PhD

Source: pixabay

Climate Change

Nations, fighting powerful refrigerant that warms planet, reach landmark deal

Over 170 nations agreed to limit the use of hydrofluorocarbons (HFCs), key climate change-causing pollutants found in air conditioners and refrigerators. The deal, reached in Kigali, Rwanda, could help prevent a 0.9°F (0.5°C) rise in temperature by the year 2100. Although the negotiations did not generate the same publicity as last year's climate accord in Paris, the outcome may have an equal or even greater impact on efforts to slow the warming of our planet.

Adopting an ambitious amendment to phase down the use and production of HFCs is “likely the single most important step that we could take at this moment to limit the warming of our planet,” Secretary of State John Kerry said in Kigali, in remarks before the passage of the agreement. President Obama called the deal “an ambitious and far-reaching solution to this looming crisis.”

Total global HFC emissions still contribute far less to climate change than the combined emissions of other greenhouse gases like carbon dioxide and methane. However, HFCs are thousands of times more potent than carbon dioxide on a pound-for-pound basis, making them an obvious target for international efforts to combat climate change.

Many experts still believe that international efforts have moved too slowly as research continues to show the significant effects and large scale of global warming. Scientists say 2016 will top last year as the hottest year on record, with some months showing a temperature rise close to the 3.6°F (2°C) benchmark. (Coral Davenport, New York Times)

Wildlife Conservation

Nations agree to establish world’s largest marine reserve in Antarctica

Twenty-four countries and the European Union agreed to establish the world’s largest marine sanctuary in Antarctica’s Ross Sea. This area is home to “50 per cent of ecotype-C killer whales (also known as the Ross Sea orca), 40 per cent of Adélie penguins, and 25 per cent of emperor penguins,” according to a statement from the United Nations Environment Programme.

“The significance of this is that most of the marine protected area is a no-take area,” acknowledged the State Department’s Evan Bloom, head of the U.S. delegation to the meeting. More than 600,000 square miles of the Ross Sea around Antarctica will be protected under the deal. This means that an area about the size of Alaska will be set aside as a no-take “general protection zone”.

No-take areas are zones set aside by authorities where any action that removes or extracts any resource is prohibited. These actions include fishing, hunting, logging, mining, drilling, shell collecting and archaeological digging. (Merrit Kennedy, NPR)

Science Funding

Budget cap would stifle Brazilian science, critics say

Brazil's interim President Michel Temer proposed a constitutional amendment to limit public spending growth for up to 20 years as a solution to curb a rise in public debt. The proposal, known as PEC 241, would prohibit all three branches of Brazil's government from raising yearly expenditures above the inflation rate. This would essentially freeze spending at current levels for two decades. The bill, if passed, would put Brazilian science in a budgetary straitjacket. "It will be a disaster," says Luiz Davidovich, president of the Brazilian Academy of Sciences in Rio de Janeiro.

The 2016 federal budget for science, technology, and innovation was approximately $1.5 billion, the lowest in 10 years when corrected for inflation. (The National Institutes of Health in the United States currently has a budget of about $30 billion.) Agencies have been reducing scholarships and grants to adjust for the lack of funding. For example, the Brazilian Innovation Agency has slashed funding for national programs and is delaying payments on research grants. As a result, even finding money to pay electricity bills has become a struggle. "There is no way we can survive another 20 years like this," says Davidovich, who is also a physicist at the Federal University of Rio de Janeiro.

“Smart countries increase funding for science, technology, and innovation to get out of a crisis,” says Helena Nader, president of the Brazilian Society for the Advancement of Science in São Paulo. “We are doing the opposite.” (Herton Escobar, ScienceInsider)

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

November 1, 2016 at 9:14 am