Science Policy For All

Because science policy affects everyone.

Posts Tagged ‘EPA’

Science Policy Around the Web – May 11, 2018


By: Mohor Sengupta, PhD


source: Max Pixel

Drug prices

Why Can’t Medicare Patients Use Drugmakers’ Discount Coupons?

With drug prices high, the affordability of specialized medicines is a concern for many individuals, especially those who depend on life-saving brand-name drugs.

Manufacturers of brand-name medicines provide discount coupons to people with private health insurance, but such discounts are denied to people on federal healthcare plans such as Medicare or Medicaid. For example, one patient on Repatha (a cholesterol-lowering drug) faces a co-payment of $618 per month under the Medicare drug plan, while patients with commercial insurance pay only $5. Critics call this a “double standard,” because the discount is arguably denied to the people who need it most: retirees who are required to enroll in federal healthcare programs.

Drug manufacturers have an incentive to offer discounts on branded medicines: coupons increase the likelihood of purchase and generate greater access to and demand for their products. While these coupons are immensely beneficial for life-threatening conditions for which no generic exists, a 2013 analysis showed that a lower-cost generic alternative or FDA-approved therapeutic equivalent was available for 62% of 374 brand-name drugs examined.

The federal government has argued that discount coupons might lead patients to overlook, or discourage them from buying, cheaper equivalents of a brand-name drug. Even when a patient uses a coupon to choose a brand-name drug over a cheaper alternative, the health insurance plan still has to pay for the drug, and that amount may be more than Medicare or Medicaid is willing to pay. This concern underlies the federal anti-kickback statute, which prohibits drug manufacturers from providing “payment of remuneration (discounts) for any product or service for which payment may be made by a federal health care program”.

An important question is why drug makers price brand-name drugs so much higher when cheaper generic options are available. In the current system, insurance companies must judge whether they are willing to cover brand-name drugs for which generic alternatives exist. Doctors often prescribe brand-name drugs without considering whether patients can afford them over the long term; it is the responsibility of doctors and insurance providers alike to determine the best possible drug option for each patient.

Weighing both sides, discount coupons should be handled on a case-by-case basis. They should be allowed for specialized drugs that have no generic alternative and that typically treat severe or life-threatening conditions. For people with such conditions who are on federal healthcare plans, affordability remains a major challenge.

(Michelle Andrews, NPR)

 

EPA standards

EPA’s ‘secret science’ rule could undermine agency’s ‘war on lead’

Last month, Environmental Protection Agency (EPA) administrator Scott Pruitt issued a “science transparency rule” under which studies that are not “publicly available in a manner sufficient for independent validation” cannot be used in crafting regulations. This rule is at loggerheads with Pruitt’s “war on lead,” because the majority of studies on lead toxicity are older observational studies that cannot be validated without deliberately exposing study subjects to lead.

Lead is a potent neurotoxin with long-term effects on central nervous system development, and it is especially harmful to children. There are many studies demonstrating lead toxicity, but a number of them do not meet the inclusion standards set by the EPA’s new science transparency rule. Computer models developed to assess lead toxicity, which have played an important role in the EPA’s past regulations on lead, amalgamate all of these studies, including the ones that cannot be validated. If the science transparency rule is applied retroactively, these models would have to be discarded: an entire computer model can be rendered invalid if just one of its component studies fails the transparency criteria.
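
The retroactivity concern can be made concrete with a short sketch: under a strict reading of the rule, a composite model is usable only if every one of its component studies has publicly available data. The function and study names below are hypothetical and purely illustrative.

```python
# A minimal sketch of the retroactivity concern: a composite model is treated
# as usable only if every underlying study meets the "publicly available"
# transparency criterion. All names here are hypothetical.

def model_usable(component_studies):
    """Return True only if every component study has publicly available data."""
    return all(study["data_public"] for study in component_studies)

# Hypothetical mix of studies feeding one lead-toxicity model
studies = [
    {"name": "cohort_1978", "data_public": False},   # older observational study
    {"name": "cohort_1995", "data_public": True},
    {"name": "exposure_survey_2004", "data_public": True},
]

print(model_usable(studies))  # False: one non-public study invalidates the whole model
```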

Critics say the transparency measure will be counterproductive for lead regulation. “They could end up saying, ‘We don’t have to eliminate exposure because we don’t have evidence that lead is bad,’” says former EPA staffer Ronnie Levin. Another hurdle is the proposed data-sharing requirement: studies of lead exposure tend to be epidemiological, and authors may be unwilling to share confidential participant data.

Bruce Lanphear of Simon Fraser University in Canada is skeptical of the EPA’s intentions, because the agency has not imposed similar transparency measures on chemical companies such as pesticide producers.

Finally, the rule could set different standards for lead safety levels across federal agencies. Currently the Centers for Disease Control and Prevention (CDC) and the Department of Housing and Urban Development (HUD) use 5 micrograms of lead per deciliter of blood as the reference level. The EPA rule could lead to a different reference level, creating discrepancies for those trying to comply with agencies across the U.S. government.

(Ariel Wittenberg, E&E News)

 

Have an interesting science policy link? Share it in the comments!


Written by sciencepolicyforall

May 11, 2018 at 10:24 pm

Science Policy Around the Web – May 8, 2018


By: Saurav Seshadri, PhD


source: pixabay

Environment

EPA Cites ‘Replication Crisis’ in Justifying Open Science Proposal

The U.S. Environmental Protection Agency (EPA) may soon be using far less scientific evidence to inform its policy positions.  EPA administrator Scott Pruitt recently announced that, in an effort to promote reproducibility and open access to information, the EPA will no longer consider studies whose underlying data or models are not publicly available.  However, such studies often represent the ‘best available’ data, which the EPA is legally obliged to consider, and form the basis of, among others, policies limiting particulate matter in the air.  Several studies that support the health and economic benefits of lower particulate limits do so by using detailed medical information whose disclosure would compromise patient confidentiality.  The so-called HONEST (Honest and Open New EPA Science Treatment) Act, put forth by House Republicans, aims to suppress such ‘secret science’; its detractors say that it’s a poorly disguised gift to industry interests, conveniently timed to take effect just before a scheduled review of pollution limits.

Opposition to the policy has been building steadily.  A letter signed by 63 House Democrats, asking for an extension to the open comment period for the policy, has so far been unsuccessful. A separate letter, signed by almost a thousand scientists, and comments from several professional associations, have also been ignored – perhaps unsurprisingly, given Pruitt’s parallel effort to bar relevant scientists from EPA advisory boards.  The scientist behind the article calling attention to the ‘reproducibility crisis’ cited by Pruitt has also spoken out, writing that simply ‘ignoring science that has not yet attained’ rigorous reproducibility standards would be ‘a nightmare’.

Perhaps the most effective response has come from scientists who are outpacing the bureaucracy.  In a pair of papers published last year, a biostatistics and public health group at Harvard used air quality data, Medicare records, and other public sources to reiterate the health risks posed by air pollution.  Such studies could not be excluded by the new EPA policy and may influence regulators to keep particulate limits low.  Another potential roadblock to implementing changes could be the controversy surrounding Pruitt himself.  The administrator has been the target of several federal probes, following a series of scandals regarding his use of government funds for purposes such as a 24-hour security detail, soundproof office, and first class travel.  Bipartisan calls for his resignation have made his future at the EPA, and the quick implementation of a Republican agenda there, uncertain.

(Mitch Ambrose, American Institute of Physics)

Science funding

NIH’s neuroscience institute will limit grants to well-funded labs

With a budget of $2.1 billion, the National Institute of Neurological Disorders and Stroke (NINDS) is the fifth largest institute at NIH.  Yet each year many investigators are constrained by a lack of funds, while some large labs have accumulated so many grants that their principal investigator can only spend a few weeks per year on a given project.  To address this disparity, NINDS recently announced a plan to revamp implementation of an existing NIH policy, in which grant applications from well-funded labs must go through an additional review by a special council. While the current secondary review rarely rejects such applications, NINDS’ policy takes two steps to make the process more stringent: first, it increases the number of labs that would undergo review, to include labs that would cross the $1 million threshold with the current grant; second, it sets higher standards for review, requiring applications from such labs to score in the top 7% of all proposals to be successful.
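
A rough sketch of the policy’s two-step logic, as described above, is given below. The $1 million threshold and top-7% cutoff come from the article; the function names, data shapes, and the example lab are hypothetical.

```python
# Illustrative sketch of the two-step NINDS review logic described above.
# Thresholds come from the article; everything else is hypothetical.

FUNDING_THRESHOLD = 1_000_000    # annual funding level that triggers special review
TOP_PERCENTILE_CUTOFF = 7        # application must score in the top 7% of proposals

def needs_special_review(current_funding, new_grant_amount):
    """Step 1: flag labs that would cross the $1M threshold with the new grant,
    not only labs already above it."""
    return current_funding + new_grant_amount >= FUNDING_THRESHOLD

def passes_special_review(score_percentile):
    """Step 2: under the stricter standard, a flagged application succeeds only
    if it scores in the top 7% of all proposals (lower percentile = better)."""
    return score_percentile <= TOP_PERCENTILE_CUTOFF

# Hypothetical lab with $800k in current funding applying for a $300k grant
if needs_special_review(800_000, 300_000):
    print("Special council review required; funded only if in top 7%:",
          passes_special_review(5.0))
```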

Responses to the idea have been tentative, despite widespread support for its objective.  One potential cause for concern is its perceived similarity to the Grant Support Index (GSI), a previous NIH initiative with a similar goal (i.e., reallocating resources to sustain less-established but deserving researchers). The GSI sought to achieve this goal by placing a cap, via a point system, on the number of grants a lab could receive. However, this caused an uproar among scientists who, among other issues, saw it as punishing or handicapping labs for being productive, and it was quickly revised to create the Next Generation Researchers Initiative, a fund earmarked for early- and mid-stage investigators for which each institute is responsible for finding money.  The new policy appears to be a step towards meeting this obligation, and not, NINDS insists, a return to the GSI.

The impact of the new policy will probably be clearer after NINDS’ next round of grant reviews takes place, in January 2019.  So far, only the National Institute of General Medical Sciences (NIGMS) has a comparable policy, which has been in place since 2016.  The success of these approaches may well shape future cohorts of NIH-funded scientists – cutoffs and uncertainty are not unique to neuroscience, and other institutes are likely to be paying close attention.

(Jocelyn Kaiser, Science)

Have an interesting science policy link? Share it in the comments!

Written by sciencepolicyforall

May 8, 2018 at 6:11 pm

Science Policy Around the Web – January 30, 2018


By: Kelly Tomins, BSc


By RedCoat (Own work) [CC-BY-SA-2.5], via Wikimedia Commons

Cloning

Yes, They’ve Cloned Monkeys in China. That Doesn’t Mean You’re Next.

Primates have been cloned for the first time, with the births of two monkeys, Zhong Zhong and Hua Hua, at the Chinese Academy of Sciences in Shanghai. Despite being born from two separate mothers weeks apart, the two monkeys share exactly the same DNA: they were cloned from the cells of a single fetus using somatic cell nuclear transfer (SCNT), the same method used to clone more than 20 other animal species, beginning with the now-famous sheep, Dolly.

The recently published study has excited scientists around the world by demonstrating the potential for expanded use of primates in biomedical research. The impact of cloned monkeys could be tremendous, giving scientists a model far closer to humans for understanding genetic disorders. Gene editing of the monkey embryos was also possible, indicating that scientists could alter genes suspected of causing certain genetic disorders. These monkeys could then be used as models to understand disease pathology and test innovative treatments, eliminating the differences that can arise from even the smallest natural genetic variation between individuals of the same species.

Despite the excitement over the first cloning of a primate, much work remains before this technique can broadly impact research. The efficiency of the procedure was limited, with only 2 live births resulting from 149 early embryos created by the lab, and the lab could only produce clones from fetal cells; it is still not possible to clone a primate from cells taken after birth. Moreover, the future of primate research is uncertain in the United States. Research on the sociality, intelligence, and DNA similarity of primates to humans has raised ethical concerns about their use in research. The US has banned the use of chimpanzees in research, and the NIH is currently in the process of retiring all of its chimps to sanctuaries. There are also concerns about the proper treatment of many primates in research studies: the FDA recently ended a nicotine study and had to create a new council to oversee animal research after four squirrel monkeys died under suspicious circumstances. With further optimization, it will be fascinating to see whether this primate cloning method expands the otherwise waning use of primates in research in the United States.

The successful cloning of a primate has also heightened ethical concerns over the possibility of cloning humans. Beyond the many safety concerns, several bioethicists agree that human cloning would demean a person’s identity and should not be attempted. In any case, Dr. Shoukhrat Mitalipov, director of the Center for Embryonic Cell and Gene Therapy at Oregon Health & Science University, stated that the methods used in this paper would likely not work in humans anyway.

(Gina Kolata, New York Times)

Air Pollution

EPA ends clean air policy opposed by fossil fuel interests

The EPA is ending the “once-in, always-in” policy, which governed how emissions standards differ between various sources of hazardous pollutants. The policy concerns Section 112 of the Clean Air Act, which regulates sources of hazardous air pollutants such as benzene, hexane, and DDE. “Major sources” are those with the potential to emit 10 tons per year of a single pollutant or 25 tons per year of a combination of air pollutants; “area sources” are stationary sources of air pollutants that are not major sources. Under the policy, once a source was classified as a major source it remained permanently subject to the stricter pollutant control standards, even if its emissions later fell below the threshold. The policy was intended to ensure that reductions in emissions continue over time.
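
The thresholds in the paragraph above can be illustrated with a short sketch that classifies a source and contrasts the old and new policies. The 10- and 25-ton figures come from the article; the function and parameter names are hypothetical, not EPA logic.

```python
# Illustrative classification under the Section 112 thresholds described above.
# Figures are from the article; the code itself is a hypothetical sketch.

SINGLE_POLLUTANT_LIMIT = 10.0   # tons/year of any one hazardous air pollutant
COMBINED_LIMIT = 25.0           # tons/year of all hazardous air pollutants combined

def is_major_source(emissions_tpy):
    """emissions_tpy maps pollutant name -> potential emissions in tons per year."""
    values = list(emissions_tpy.values())
    return (max(values, default=0.0) >= SINGLE_POLLUTANT_LIMIT
            or sum(values) >= COMBINED_LIMIT)

def classify(emissions_tpy, ever_major, once_in_always_in):
    """Old policy: a source that was ever 'major' stays major.
    New policy: it can drop to 'area source' once below the thresholds."""
    if is_major_source(emissions_tpy):
        return "major source"
    if once_in_always_in and ever_major:
        return "major source"   # locked in under the old rule
    return "area source"

# A facility that was once major but now emits 8 tons/year of benzene
print(classify({"benzene": 8.0}, ever_major=True, once_in_always_in=True))   # major source
print(classify({"benzene": 8.0}, ever_major=True, once_in_always_in=False))  # area source
```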

The change means that major sources of pollution that dip below the emissions threshold can be reclassified as area sources and thus held to less stringent air safety standards. Fossil fuel companies have petitioned for this change for years, and the recent policy shift is being lauded by Republicans and by states with high gas and coal production. The EPA news release states that the outdated policy disincentivized companies from voluntarily reducing emissions, since they would be held to major-source standards regardless of how much they emitted. Bill Wehrum, a former lawyer representing fossil fuel companies and current Assistant Administrator of the EPA’s Office of Air and Radiation, stated that reversing the policy “will reduce regulatory burden for industries and the states”. In contrast, environmentalists believe the change will drastically increase pollution because standards soften once a plant drops below the threshold: as long as sources remain just below the major-source threshold, there is no incentive or regulation pushing them to lower emissions further.

(Michael Biesecker, Associated Press)

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

January 30, 2018 at 3:30 pm

Science Policy Around the Web – December 8, 2017


By: Roger Mullins, Ph.D.


source: pixabay

Chemical Safety

Chlorpyrifos Makes California List of Most Dangerous Chemicals

Last Wednesday, the California Office of Environmental Health Hazard Assessment (OEHHA) voted to add the organophosphorus pesticide Chlorpyrifos to Proposition 65, an extensive list of over 900 chemicals known to cause cancer, birth defects, or reproductive harm. While Chlorpyrifos was previously considered for inclusion on the list in 2008, updated scientific information gave the OEHHA cause for reassessment.

This new data included further information on the neurodevelopmental toxicity of Chlorpyrifos in humans and wildlife. Of particular concern to this board was its harmful effect on fetal brain development. Central to this decision was the extensive review of scientific evidence provided in the 2014 and 2016 EPA Human Health Risk Assessments, as well as new and additional findings not previously reviewed in these assessments.

At the national level, the findings of earlier EPA risk assessments resulted in a ban on homeowner use as far back as 2000. The 2014 and 2016 reports further cemented the evidence for pervasive neurodevelopmental toxicity and also highlighted the danger of dietary exposure from residues in drinking water and crops. An all-out ban on Chlorpyrifos, revoking all pesticide tolerances and cancelling its registrations, was proposed in 2015, but this was ruled out by the EPA in 2017. The pesticide is still under registration review by the EPA, which re-evaluates its decision on a 15-year cycle.

Inclusion on California’s Proposition 65 list does not amount to a ban within the state, though products containing Chlorpyrifos will have to be labeled as such starting in late 2018. This state-level action stands in contrast to federal decisions and is a revealing lesson in the complexity of the national response to scientific evidence.

(Sammy Caiola, Capital Public Radio)

Gene Drives

US Military Agency Invests $100m in Genetic Extinction Technologies

Gene drives, a powerful emerging gene-editing technology, have been drawing considerable attention and controversy over their proposed use in disease vector control. The method involves releasing genetically modified animals into a wild population with the aim of breeding in genes designed to reduce the species’ ability to spread disease. The introduced genes are preferentially inherited, so they eventually dominate in the population. For example, a gene could be designed and introduced to provide resistance to a particular parasite or to reduce fertility. The technique has been proposed for controlling mosquito-borne diseases such as malaria and Zika virus infection, as well as for halting the spread of invasive species.

The controversy over this technique, however, also hinges on its strengths. The primary concerns are the likelihood of animals carrying the modifications crossing international borders, downstream effects on dependent species, and the possibility of irreversible harm to the ecosystem if the technique is misapplied. Appropriately, much of this concern comes from fellow scientists. In light of this, scientists and policy-makers alike have been proactive about addressing the safety and ethical issues, drawing up a set of specific guidelines to advance quality science for the common good. These entail promoting stewardship, safety, and good governance; demonstrating transparency and accountability; engaging thoughtfully with affected communities, stakeholders, and publics; and fostering opportunities to strengthen capacity and education. Consensus on these issues is intended to help move this promising field forward in the face of growing public scrutiny.

Recently, a trove of emails from US scientists working on gene drive technology was obtained under the Freedom of Information Act and disseminated to the media. Some of these emails revealed that the Bill and Melinda Gates Foundation had engaged a public relations company to influence the UN moratorium on the use of this technology. The Foundation has long been a financial supporter of the Target Malaria research consortium, which seeks to develop gene drives for the eradication of malaria. The fallout from the release of these emails realizes a common fear among scientists whose research may fall under the public eye: ironically, even attempts to recruit expertise in portraying one’s research favorably may be seen as damning.

This will inevitably be true of any powerful emerging technique in the future as well. As science becomes better able to address problems effectively, new technologies will face obstacles to implementation and concerns from the communities they may affect. Some of these will be valid grounds for moratorium and introspection, and some will be more attributable to sensationalism. Understanding and navigating these differences will be an increasing and ever-present concern for policy-minded scientists.

(Arthur Neslen, The Guardian)

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

December 8, 2017 at 1:35 pm

Science Policy Around the Web – September 26, 2017


By: Rachel F Smallwood, PhD


source: pixabay

Public health

Air Pollution Tied to Kidney Disease

A new study has reported a link between kidney disease and air pollution. Using data collected from over 2.4 million veterans together with NASA and EPA pollution data, researchers examined this relationship and found that glomerular filtration rate, a measure of kidney function that quantifies how much blood passes through the kidneys to be filtered, decreased as levels of fine particulate matter less than 2.5 microns in diameter (PM 2.5) increased. These particles are small enough to enter the bloodstream, from which they can reach the kidneys. The authors estimate that 44,793 cases of chronic kidney disease and 2,438 cases of end-stage renal disease can be attributed to PM 2.5 levels that exceed EPA standards.

Kidney disease is only the latest condition to be partially attributed to air pollution. Air pollution is already established as a contributing and exacerbating factor in pulmonary conditions, cardiovascular disease, and stroke, and earlier this year it was reported to increase the risk of Alzheimer’s disease and dementia. With researchers reporting such a wide variety of adverse health effects from air pollution, it is becoming ever more imperative to address these issues and not lose ground in the struggle for cleaner air. Citizens and policymakers need to be educated about the importance of clean air, stay vigilant about the risks of pollution, and work together to find ways to reduce pollution levels, not just for the planet and future generations but for the health of people living today.

(Nicholas Bakalar, The New York Times)

Biomedical Research

Scientists grow bullish on pig-to-human transplants

Speaking of kidney disease, the list of people in the United States waiting to receive a kidney transplant has almost 100,000 entries. One solution that scientists and physicians have long considered is xenotransplantation: harvesting donor kidneys and other organs from animals such as pigs, whose anatomy is naturally similar to that of humans. Until now, no demonstration has come close to being acceptable for trials in humans. That may be changing: a few research groups report that they are close to moving into human trials and have begun early talks with the FDA.

The groups have been testing their methods by implanting pig kidneys and hearts into monkeys, which typically mount an extreme immune response that the researchers have been attempting to ameliorate. They have not been able to eliminate the immune response completely, but they recently reported that a transplanted kidney lasted over 400 days in a macaque before rejection, and a transplanted heart lasted 90 days in a baboon before the experimental protocol required that the experiment be stopped. An earlier experiment demonstrated that a pig heart could be kept viable for over two years when implanted into the abdomen of an immune-suppressed baboon, though this only tested biocompatibility and the baboon retained its own heart.

This success is partially attributable to CRISPR technology, which has allowed scientists to remove portions of the pig genome that intensify the immune response. The International Society for Heart and Lung Transplantation has laid out bare-minimum requirements for moving xenotransplantation into human trials: at least 60% survival of life-supporting organs for at least 3 months, with at least 10 animals surviving, and evidence that survival could be extended to 6 months. These experimental results and minimum requirements set expectations for xenotransplantation: it is not a permanent solution, at least not any time soon, but it may provide a temporary one that gives patients more time while they wait for transplants from human donors. That is good news; over 7,000 people died on an organ transplant waiting list in 2016. For those just trying to get by on dialysis or who need only a few more months before receiving a heart, these xenotransplants could mean the difference between life and death.
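
As a rough illustration, the check below encodes the minimum criteria summarized above (at least 60% of life-supporting organs surviving at least 3 months, at least 10 surviving animals, and evidence that survival could extend to 6 months). The function name and example survival data are hypothetical.

```python
# Illustrative check of the ISHLT minimum criteria summarized above.
# The example survival data are invented for demonstration only.

def meets_minimum_criteria(survival_days, six_month_evidence):
    """survival_days: graft survival in days for each animal in the series."""
    three_months = 90
    survivors = [d for d in survival_days if d >= three_months]
    survival_rate = len(survivors) / len(survival_days) if survival_days else 0.0
    return (survival_rate >= 0.60
            and len(survivors) >= 10
            and six_month_evidence)

# Hypothetical series of 15 animals, 11 with grafts lasting past 90 days
days = [400, 120, 95, 100, 91, 150, 110, 98, 92, 105, 130, 60, 45, 80, 30]
print(meets_minimum_criteria(days, six_month_evidence=True))  # True
```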

(Kelly Servick, Science)

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

September 26, 2017 at 4:58 pm

Science Policy Around the Web – August 1, 2017


By: Sarah L. Hawes, PhD


Source: pixabay

Climate Science

Conducting Science by Debate?

Earlier this year, an editorial by former Department of Energy Under Secretary Steven Koonin suggested a “red team-blue team” debate between climate skeptics and climate scientists. Koonin argued that a sort of tribalism segregates climate scientists while a broken peer-review process favors the mainstream tribe. Science history and climate science experts published a response in the Washington Post reminding readers that “All scientists are inveterate tire kickers and testers of conventional wisdom” and that, while “the highest kudos go to those who overturn accepted understanding, and replace it with something that better fits available data,” the overwhelming consensus among climate scientists is that human activities are a major contributor to planetary warming.

Currently, both Environmental Protection Agency Administrator Scott Pruitt and Department of Energy Secretary Rick Perry cite Koonin’s editorial in pushing for debates on climate change. Perry said, “What the American people deserve, I think, is a true, legitimate, peer-reviewed, objective, transparent discussion about CO2.” That sounds good, doesn’t it? However, we already have this: it’s called climate science.

Climate scientists have been forthright with politicians for years. Scientific consensus on the hazards of carbon emissions led to the EPA’s endangerment finding in 2009, which was upheld by EPA review again in 2015. A letter to Congress in 2016 expressed the consensus of over 30 major scientific societies that climate change poses real threats and that human activities are the primary driver, “based on multiple independent lines of evidence and the vast body of peer-reviewed science.”

Kelly Levin of the World Resources Institute criticizes the red team-blue team approach for “giving too much weight to a skeptical minority,” since 97% of actively publishing climate scientists agree that human activities are contributing significantly to recent climatic warming. “Re-inventing the wheel” by continuing the debate needlessly delays crucial remediation. Scientific conclusions and their applications are often politicized, but that does not mean the political processes of holding debates, representing various constituencies, and voting are appropriate methods for arriving at scientific conclusions.

(Julia Marsh, Ecological Society of America Policy News)


source: pixabay

Data Sharing, Open Access

Open Access Science – getting FAIR, FASTR

Advances in science, technology, and medicine are often published in scientific journals with costly subscription rates, despite originating from publicly funded research. Yet public funding justifies public access, and shared data catalyzes scientific progress. Peter Suber, director of the Harvard Office for Scholarly Communication and of the Harvard Open Access Project, has been promoting open access since at least 2001. Currently, countries such as the Netherlands and Finland are hotly pursuing open-access science, and the U.S. is gearing up to do the same.

On July 26th, bipartisan congressional representatives introduced the Fair Access to Science and Technology Research Act (FASTR), intended to enhance the utility and transparency of publicly funded research by making it open access. Within the FASTR Act, Congress finds that the “Federal Government funds basic and applied research with the expectation that new ideas and discoveries that result from the research, if shared and effectively disseminated, will advance science and improve the lives and welfare of people of the United States and around the world,” and that “the United States has a substantial interest in maximizing the impact and utility of the research it funds by enabling a wide range of reuses of the peer-reviewed literature…”; the FASTR Act mandates that findings be publicly released within 6 months. A similar memorandum was released under the Obama administration in 2013.

On July 20th, a new National Academies committee concluded its first meeting in Washington, D.C., by initiating an 18-month study on how best to move toward a default culture of “open science.” The committee is chaired by Alexa McCray of the Center for Biomedical Informatics at Harvard Medical School, and most members are research professors. They define open science as free public access to published research articles, raw data, computer code, algorithms, and other products of publicly funded research, “so that the products of this research are findable, accessible, interoperable, and reusable (FAIR), with limited exceptions for privacy, proprietary business claims, and national security.” Committee goals include identifying existing barriers to open science, such as discipline-specific cultural norms, professional incentive systems, and infrastructure for data management. The committee will then recommend solutions to facilitate open science.

Getting diverse actors – for instance funders, publishers, scientific societies and research institutions – to adjust current practices to achieve a common goal will certainly require new federal science policy. Because the National Academies committee is composed of active scientists, their final report should serve as an insightful template for federal science agencies to use in drafting new policy in this area. (Alexis Wolfe & Lisa McDonald, American Institute of Physics Science Policy News)

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

August 1, 2017 at 7:38 pm

Science Policy Around the Web – June 10, 2017


By: Allison Dennis, BS

Source: pixabay

Animal Testing

Lack of Clarity Puts Chemical Safety and Animal Welfare at Odds

In the lineup of American stereotypes, the health nut who cares about the chemicals in his shampoo is often the same person who cares whether that shampoo was tested on animals. However, a bill signed on June 22, 2016, known as the Frank R. Lautenberg Chemical Safety for the 21st Century Act, may be placing those two views at odds. The bill requires the U.S. Environmental Protection Agency (EPA) to implement a risk-based process to evaluate the safety of chemical substances currently in the marketplace and to approve new chemicals before their introduction. The bill passed with bipartisan support and gave the EPA new-found power to fully regulate the use of well-known carcinogens like asbestos.

Yet the path forward for the EPA is daunting. More than 62,000 substances find their way into and onto our bodies through the products we use and our environment. While many of these substances have become associated with disease over time, how can the EPA certify the risks of different exposures, at varying amounts, for each substance on such an extensive list? The Act itself suggests that once the EPA has evaluated the existing information on the 62,000 substances currently in use, it spend the next twelve months triaging chemicals according to their potential risk. The highest-priority chemicals will then be evaluated on a three-year deadline to develop knowledge of their toxicity and guidelines for their regulation. Ultimately, by clearly cataloging the risk of common chemicals, the Frank R. Lautenberg Chemical Safety for the 21st Century Act promises to greatly reduce the amount of animal testing needed in the long term.

In the meantime, however, companies that use to-be-regulated substances in their products may be inclined to undertake independent toxicity testing, collecting enough data to guarantee that their favorite substances meet the low-risk criteria and avoid a drawn-out evaluation. Defining toxicity requires careful experimentation, which can sometimes be carried out in human cells outside the body but often requires evaluation in animals. Animal rights groups like the Humane Society are concerned by the lack of transparency in the pre-prioritization process. They fear that companies’ eagerness to provide data, without clear guidelines about how that data will be evaluated or which substances will require extensive evaluation, could result in extensive and unnecessary animal testing. Further, they suggest that the EPA require any new pre-approval data collected by companies to be obtained using non-animal methods. (Maggie Koerth-Baker, FiveThirtyEight)

CRISPR

Small Study may Reveal Big Concerns over CRISPR-Based Therapy

A one-page letter published in Nature Methods last week reports unexpectedly high levels of unintended changes to the genomes of mice that underwent a CRISPR-based therapy. Since its emergence as a therapeutic tool in 2012, CRISPR has occupied the imaginations of scientists, doctors, patients, investors, and ethicists. CRISPR technology provides a relatively straightforward and reproducible means of gene editing at the cellular level, but its application to create heritable mutations in the human germline is on hold until more is understood about the long-term effects such treatments would have.

The original study sought to explore potential long-term effects of germline manipulation by CRISPR in a mouse model. Guide RNA and the Cas9 enzyme were injected into mouse zygotes, correcting a mutation in the rd1 gene of otherwise blind mice. Making this change before the first cell division allowed the corrected gene to be inherited by every cell of the developing mouse, consequently restoring normal eye development. In a follow-up experiment described in their one-page letter, the researchers searched the genomic DNA of two CRISPR-treated adult mice and a control mouse for mutations, revealing over 2,000 unintended mutations following CRISPR treatment. None of these mutations appeared to affect the mice, suggesting that deep genomic sequencing may be required to reveal unanticipated changes in an outwardly healthy mouse. Further, the nature of these unintended mutations offered few clues as to how they might have occurred.

This result stands in contrast to other reports quantifying the extent of such unintended changes, which found CRISPR to be highly specific. While the CRISPR-Cas9 system has been observed to sometimes alter off-target regions of the genome, this activity can usually be curbed through careful design and evaluation of the guide RNA. The limitations of this small study have been discussed extensively since its publication. Nevertheless, the findings underscore the need for further investigation into the long-term, whole-animal effects of germline editing by CRISPR. As human germline editing creeps closer to reality, the FDA will be tasked with developing an entirely new means of evaluating the safety of such technologies. (Megan Molteni, Wired)

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

June 10, 2017 at 11:33 am