Science Policy For All

Because science policy affects everyone.

Is Novelty Killing Research Science?


By: Aaron Rising, PhD


source: Sean MacEntee via flickr

Within the past decade or so, researchers have become aware of the prevalence of scientific studies whose results cannot be replicated, a problem that has been dubbed the ‘reproducibility crisis.’ While the phrase may be a slight hyperbole, there is real concern about how many published scientific studies are valid and reproducible. One of the most eye-opening studies of this issue was conducted in the field of psychology in 2015, in which 100 different experimental and correlational studies were redone in order to reproduce the conclusions of the original experiments. Overall, this study found that only 36% of the replication experiments had significant results, despite 97% of the original set reporting significant results. Combining the two sets of studies (the original and the replication experiments) yielded significance for nearly 70% of the studies, essentially meaning that ~25% of the original results were potentially false positives. While the specific reproducibility percentages are debated (1, 2) and discussed in the media (3, 4), it is worrying that a sizeable fraction of published research, through no fault of the original researchers involved in the study, may turn out to be inaccurate. In a poll by Nature, 70% of researchers reported that they have failed to replicate another group’s work. What’s even more troubling? More than half could not replicate their own findings.
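
To make the arithmetic behind that ~25% figure explicit, here is a minimal back-of-the-envelope sketch in Python. The inputs are simply the percentages quoted above, and the subtraction mirrors this article’s reasoning, not the original study’s statistical methodology.

```python
# Back-of-the-envelope illustration using the figures quoted above.
# Inputs come from the text; this mirrors the article's reasoning,
# not the original study's meta-analysis.
originally_significant = 0.97  # fraction of original studies with p < 0.05
combined_significant = 0.70    # fraction still significant after pooling
                               # original and replication data ("nearly 70%")

# Studies that were significant originally but not after pooling are the
# candidates for false positives.
potential_false_positives = originally_significant - combined_significant
print(f"~{potential_false_positives:.0%} of original results potentially false positives")
# -> ~27%, i.e. roughly a quarter
```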

There are multiple reasons why the reproducibility of a single study can be called into question. Different labs use different mouse strains or cell lines, reagents from different companies or lot numbers can vary, and even two people can inherently perform the same experiment differently. Any of these differences could result in the exact same study ending up on either side of the significance threshold (usually set at p < 0.05). And it shouldn’t be ignored that there are laboratories that use more nefarious methods to get positive results, such as p-hacking, excessive removal of outliers, or just plain making up or editing data. But a more fundamental issue may be driving this ‘crisis,’ and it is one of novelty. As humans, we love novelty. We are attracted to things that are innovative, new, or never been done or seen before. It is ‘boring’ to rehash the same topic constantly. This desire or need for novelty flows into how we fund ideas and how we publish scientific results.
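
To see how easily sampling noise alone can push identical experiments to opposite sides of that threshold, here is a minimal simulation sketch in Python; the effect size, group size, and number of runs are assumed purely for illustration and do not come from any study discussed here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
true_effect, n_per_group, runs = 0.4, 25, 1000  # assumed illustrative values

# Repeat the exact same two-group experiment many times; nothing changes
# between runs except the random sampling noise.
significant = 0
for _ in range(runs):
    control = rng.normal(0.0, 1.0, n_per_group)
    treated = rng.normal(true_effect, 1.0, n_per_group)
    if stats.ttest_ind(treated, control).pvalue < 0.05:
        significant += 1

print(f"{significant / runs:.0%} of identical experiments reached p < 0.05")
# With these assumptions only roughly a quarter to a third of runs reach
# significance: a real effect lands on either side of p = 0.05 by chance alone.
```

In other words, an underpowered but entirely honest replication of a true effect will ‘fail’ much of the time, without any variation in reagents, animals, or personnel.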

To get funding to do research, scientists must apply for and receive grants. In the United States, roughly 40 to 50% of science R&D funding comes from federal, state, or local governments (5, 6, 7). Federally funded grants generally require building on previous work with new and innovative ideas that have not been tried before. For instance, the U.S. National Institutes of Health (NIH) requires proposed projects to be unique and cannot, by law, use taxpayer money to pay for research that has already been done. While not bound by law, funding sources outside of the government, such as the Alzheimer’s Association, The Heart Foundation, and the Leukemia and Lymphoma Society, all emphasize that the research funded by their grants be ‘novel’ in concept, approach, and/or strategy. Everything proposed in these grant applications, for the most part, is new and innovative, and assumes all prior work is correct and can be reliably built upon.

On the other end of the research pipeline is the publication of results. Just as when seeking funding, researchers must show novelty and innovation to get published. As examples, two top-tier journals, Cell and Nature, require that research submitted for publication be ‘novel.’ Cell states that it is looking for papers ‘that report results that prompt new thinking about a biological problem or therapeutic challenge—work that will inspire others to want to build on it.’ Nature has two criteria points stating that the work must be ‘of extreme importance to scientists in the specific field’ and ‘ideally, interesting to researchers in other related disciplines.’ These criteria obviously promote and result in good, high-quality papers, but such policies also box out research publications that might be important but merely confirmatory.

There are journals such as PLOS One that judge submissions on the quality and rigor of the science itself, but these types of journals are not nearly as common. PLOS One states: ‘Judgments about the importance of any particular paper are then made after publication by the readership, who are the most qualified to determine what is of interest to them,’ and the journal accepts studies with negative results. However, in the standard Impact Factor rankings as of 2017, Nature and Cell come in at 10th and 22nd respectively, while PLOS Medicine, the highest-ranked journal from PLOS (the publisher of PLOS One), is ranked 167th. It’s easy to see where a scientist would rather publish, considering that most job and tenure-track promotion decisions look not just at the number but also the quality (impact factor) of scientific publications.

A few groups have been attempting to solve both the funding and the publishing issues described here. The Netherlands Organisation for Scientific Research, an organization similar to the NIH in the U.S., has begun to fund grants for replication research (8, 9). Grants can be either for reanalysis of data already collected or for a complete repeat of a study to confirm its results. This initiative helps on the front end of the science pipeline by specifically allocating grant money for repeating a set of experiments already done by other groups, rather than relying on confirmatory experiments performed with ‘extra’ money that was intended for innovative, new work. On the publication end of the scientific pipeline, there are foundations and groups of concerned scientists working on publishing replication studies (10, 11). In addition, the journal PLOS Biology has recently announced that it will take ‘scooped’ work as long as it is submitted within 6 months of the original article. As a well-articulated article in The Atlantic points out, this will help the first group that publishes by allowing the ‘second place’ group to confirm its results and thus add to the reproducibility of the original study. The ‘second place’ group still receives recognition through publication of work it likely spent months, if not years, doing, and the money and resources spent on work that would normally never see the light of day are not wasted.

While the overemphasis of novelty by the scientific community is not the only reason for the ‘reproducibility crisis,’ it is part of the underlying culture that might be contributing to it. Other factors alluded to above, such as the pressure to publish in high-impact journals, variable cell and mouse lines, and differences among lab personnel, also contribute to the problem. The Dutch initiative, PLOS Biology, and the Open Science Collaboration are all examples of ongoing attempts to help solve part of this ‘crisis.’ Another way to further the effort to increase scientific reproducibility in the United States would be a policy change at the NIH funding level. With a slight tweak to current policy, the NIH could allow one of a grant’s specific aims to be dedicated to verifying something pivotal or groundbreaking in the field. This explicit allowance would start to make replication studies more acceptable and perhaps make researchers more apt to perform them and publicly verify or dispute previous studies. Another idea would be for other journals to take PLOS Biology’s lead and allow ‘scooped’ research to be published. Depending on the prestige (impact factor) of the journal, the time frame for said ‘scooped’ research could be shortened from PLOS Biology’s 6 months and carry more stringent review requirements. An additional policy that all journals could adopt, and that would greatly strengthen scientific confidence in pivotal papers, is to attach short communications or addenda showing peer-reviewed replication attempts of that work. These addenda would add to the strength of the original paper if confirmatory, or suggest there is more nuance and a need for further study if they do not confirm the original paper. All found in one place to boot!

Implementation of further replication policies would take a real push by the scientific community, but would be beneficial to ongoing efforts to solve the ‘reproducibility crisis.’ While it may take time before we see any tangible or measurable results from the current endeavors, we should look to other ideas and concepts that enhance scientific reproducibility. We can’t afford to squander the public’s great faith in the scientific community on highly touted papers that turn out to be false positives or simply wrong.

Have an interesting science policy link? Share it in the comments!



April 25, 2018 at 9:32 pm

Science Policy Around the Web – April 24, 2018


By: Kelly Tomins, BSc


source: pixabay

NASA

Trump’s NASA Nominee, Jim Bridenstine, Confirmed by Senate on Party-Line Vote

The Senate has confirmed Jim Bridenstine, a Republican Oklahoma congressman and former Navy pilot, as the new administrator of NASA. The Senate confirmed Bridenstine along party lines, with 50 Republicans for and 47 Democrats and two independents against. His confirmation ends a 454-day stretch during which NASA operated without a permanent leader, the longest such period in the organization’s history. Despite Bridenstine’s long-time interest in space, his lack of technical expertise and bureaucratic leadership experience has left many legislators skeptical of his ability to run an $18.5 billion agency.

Bridenstine’s background differs greatly from that of past NASA administrators. He is a three-term Oklahoma congressman and the first elected official ever to hold the top position at NASA. Bridenstine’s science experience is limited to sponsoring the American Space Renaissance Act, an unpassed outline of the future of NASA, and serving for two years as the executive director of the Tulsa Air and Space Museum and Planetarium. The NASA administrator under Barack Obama, Charles F. Bolden, Jr., was an astronaut at NASA for 14 years before returning to the Marine Corps. The current acting administrator, Robert M. Lightfoot, is a mechanical engineer who has worked for NASA for nearly 20 years. Bridenstine will be only the third of 22 NASA administrators or acting administrators without previous NASA experience or formal science/engineering training. In addition, Bridenstine has no experience running a government bureaucracy and has come under fire for questionable dealings during his brief tenure at the Tulsa museum.

Democratic Senator Bill Nelson of Florida was one of the most outspoken opponents of the confirmation, denouncing Bridenstine’s political background as a potential conflict of interest. Bridenstine has made controversial and conservative statements in the past, including criticisms of climate change funding and opposition to same-sex marriage. Even Republican Marco Rubio expressed concerns about Bridenstine’s lack of science expertise, and was only swayed to vote yes after the current acting NASA administrator announced his retirement.

Bridenstine’s confirmation follows the current administration’s trend of appointing non-scientists to lead scientific agencies. Rick Perry was appointed Secretary of Energy despite his lack of scientific expertise, his questioning of climate change, and his onetime proposal to eliminate the agency altogether. The current administrator of the EPA, Scott Pruitt, entered the position without a scientific background. Additionally, he was a well-known critic of the EPA, as demonstrated when he sued the agency more than a dozen times during the Obama presidency.

NASA is a historically nonpartisan agency, and its best interests would not be served by shifting political winds. There has historically been little partisan divide over the NASA administrator appointment; the administrators under both Barack Obama and George W. Bush were unanimously confirmed by the Senate. Despite his unconventional political background, Bridenstine assured the Senate during his confirmation hearing that he “want[s] to make sure that NASA remains, as you said, apolitical”. Let’s hope that’s the case.

(Kenneth Chang, New York Times)

 

Ethical Research

African scientists call for more control of their continent’s genomic data

New guidelines published by the Human Heredity and Health in Africa Initiative (H3Africa) aim to clarify the ethical standards of studies, give African scientists more autonomy, and ensure that Africans benefit from the research they participate in. The African continent contains a wealth of human genetic diversity, and overseas researchers are increasingly utilizing this diversity to discover more about our species’ history and health. Despite the wealth of information African samples can provide, there is a lack of infrastructure to support African scientists. African genomic samples are often shipped to the global north to be analyzed, a practice driven by superior computational facilities and faster computing times. African scientists often have to collaborate with researchers overseas, reducing their autonomy. In addition, there are ethical questions regarding the secondary use of African biobank data by researchers not involved in the original study.

H3Africa is an NIH-funded health-genomics consortium that works to increase genomic infrastructure by funding African-led projects and training bioinformaticians. The new guidelines were written by an ethics working group and are aimed at all stakeholders involved in the design, participation, and regulation of genomic research throughout Africa. The guidelines’ four core principles are summarized as:

  1. Research should be respectful of African culture
  2. Research should benefit the African people
  3. African investigators/stakeholders should have intellectual leadership in research
  4. Research should promote fairness, respect, equity, and reciprocity

H3Africa hopes that these guidelines will help guide research-ethics committees toward best practices for research in Africa, and eventually spark the creation of national regulations for genomics research and biobanks. Brenna Henn, a population geneticist at the University of California, Davis, is optimistic about the framework, although somewhat worried about heightened tensions with Western scientists. She states, “The guidelines could be a rude awakening for scientists who seem to believe they can fly into an African country, study a genetically unique population and export the samples in a few months”. It is a necessary awakening: African populations should not be exploited for their genomic data, and hopefully these guidelines will pave the way for more ethical, consensual, African-led research studies.

(Linda Nordling, Nature)

Have an interesting science policy link? Share it in the comments!

Written by sciencepolicyforall

April 24, 2018 at 10:21 pm

Science Policy Around the Web – April 20, 2018


By: Jennifer Patterson-West, Ph.D.



source: USEPA via flickr

Food waste

Grocery Stores Get Mostly Mediocre Scores On Their Food Waste Efforts

Food waste is often thought of as unavoidable; everyone creates it. However, steps can be taken to minimize or even eliminate it.

The Environmental Protection Agency (EPA) has issued simple guidelines for reducing food waste. The guidelines are organized into a hierarchy of actions ranked by their effectiveness at preventing food waste. The most effective tier is ‘Source Reduction,’ which entails reducing the total volume of food generated. Source reduction cuts the pollution and costs associated with the growth, preparation, transport, and disposal of excess food. Producers can save money by reducing the labor and other resources (such as water and pesticides) spent on unused food.

The second tier focuses on ‘Feeding the Hungry’ by donating extra food. In 2016, an estimated ~15.6 million American households faced low or very low food security at some point. Low food security describes households that obtained enough food by participating in food assistance programs, such as community food pantries, whereas very low food security applies to those that experienced a disruption in normal eating patterns due to insufficient money or other resources for food. Considering that over 38 million tons of food were wasted in 2016 alone, the donation of excess food could significantly reduce food insecurity in America. Food donation programs have already been implemented by the 10 largest U.S. supermarkets. To further promote corporate donations, tax deductions are available to companies that donate food, and donors are protected from liability by the Bill Emerson Good Samaritan Food Donation Act.

The third tier promotes diverting food scraps to ‘Animal Feed.’ Converting food scraps to animal feed is often cheaper than transporting them to a landfill. Although farmers have followed this practice for centuries, corporations can also participate by donating extra food to producers of animal feed or to zoos. The fourth and fifth tiers are ‘Industrial Uses’ and ‘Composting,’ respectively. For industrial purposes, food can be converted into biofuel or other bio-products. Composting, which creates nutrient-rich soil amendments, is a great option for the inedible food waste that remains after all other actions are taken.

These guidelines were recently used by the Center for Biological Diversity and The Ugly Fruit and Veg Campaign to score the 10 largest U.S. supermarkets on their handling of food waste, and a report of their findings was recently released. They found that the surveyed companies focused on donating and recycling food waste instead of preventing it, with none of them achieving an A score. A limitation of the survey is incomplete tracking and reporting of the amount of food waste throughout an entire company. Practices specifically noted as reducing food waste include Whole Foods’ use of produce pulled from shelves to make prepared meals, Walmart’s replacement of individual eggs within partially damaged packages, and Walmart’s standardization of expiration labels.

(Menaka Wilhelm, NPR)

The opioid crisis

Nursing homes routinely refuse people on addiction treatment – which some experts say is illegal

Opioids account for more than 50% of all drug overdose deaths; even so, total deaths are likely underestimated due to undercoding in mortality data. The opioid epidemic, which was largely isolated to Appalachian communities and minority populations in the 1990s, has rapidly spread across the United States into more affluent suburban communities. The surge in opioid use correlates with an acceleration in the prescription of legal opioid pain relievers, such as OxyContin. For this reason, many individuals with opioid use disorder (OUD) became addicted through long-term use of prescription pain medication. This link between prescription drugs and addiction is likely why evidence-based medication-assisted treatment (MAT) is treated skeptically by the public.

MAT has been shown to reduce symptoms of withdrawal, thereby significantly reducing the risk of relapse and overdose. These drugs, such as methadone or buprenorphine, reduce the cravings associated with withdrawal by activating the same receptors in the brain without producing the euphoria associated with other opioid use. Contrary to the evidence, many patients are directed away from these medications and toward treatment programs with no scientific or medical evidence supporting their efficacy. In fact, only 1 out of 5 OUD patients receives MAT of any kind.

There are two major barriers to MAT: prescribing restrictions and difficulty finding extended-care facilities. Currently, authorized physicians can use buprenorphine to treat a maximum of 275 patients for opioid dependency. To get authorization to prescribe buprenorphine, physicians must apply for a waiver from the Substance Abuse and Mental Health Services Administration. However, the physician must already have been authorized under the Drug Addiction Treatment Act of 2000 to prescribe buprenorphine to up to 30 patients for one year prior to applying. These restrictions are thought to be essential to limit overuse of these drugs; however, they increase the administrative burden on physicians and decrease access to MAT. In an effort to expand access to treatment, the declaration of a public health emergency under the Trump administration in 2017 gave doctors the ability to prescribe medications for addiction remotely through telemedicine services.

In addition to limited access to MAT, patients also face the possibility that, if they receive MAT, they may be refused admittance to nursing home facilities. For instance, a trade group in Ohio released a written statement that none of its more than 900 member facilities will accept patients receiving either methadone or buprenorphine for addiction. Experts assert that nursing facilities’ refusal of OUD patients receiving MAT is illegal under the Americans with Disabilities Act (ADA). Although the prevalence of such restrictions is unknown, the Massachusetts Department of Public Health released a circular letter in 2016 providing guidance for nursing facilities caring for patients on medications for addiction. Other states can expand on similar efforts to educate nursing facilities about their legal obligations and to provide guidance for proper care.

(Allison Bond, STAT news)

Have an interesting science policy link? Share it in the comments!


April 20, 2018 at 9:29 pm

Old Wounds and Shifting Tides: Potential Consequences of and Remedies for Health Disparities and Inequity in the United States


By: Calais S. Prince, Ph.D.


By Jsonin [CC-BY-4.0], via Wikimedia Commons

By the year 2060, the percentage of racial and ethnic minorities in the United States is expected to increase by 49%. As the country becomes more diverse, it will become imperative to understand the genetic/epigenetic, molecular, cellular, and environmental differences associated with increased risk for disease onset. It is clear that certain minority groups currently have a greater propensity for several diseases, including diabetes, stroke, heart disease, and cancer. Health disparities are preventable differences in disease manifestation that can be attributed to social, political, and environmental factors. These factors include, but are not limited to, discrimination, poverty, access to education, and exposure to hazardous chemicals.

Segregation in health care and the potential influence on participation in biomedical research

Although commonly perceived as a relic of the past, health care segregation in the United States persists and can be attributed to the Jim Crow laws that were designed and implemented from the end of the Civil War through the 1960s. For example, “[m]any hospitals, clinics, and doctor’s offices were totally segregated by race, and many more maintained separate wings or staff that could never intermingle under threat of law,” contributing to “subpar health care standards.” A glaring, present-day example is that of Boston City Hospitals and Mass. General, which reflects both “the referral system that dates back five decades” and the type of care that will be covered by insurance. Another powerful, and personally relevant, example that demonstrates the importance of understanding how environment influences the risk for disease was discussed in a recent article. In African American/Black women, the consequences of racism had a significant impact on intrauterine stress, as there are higher incidences of complicated pregnancies, miscarriages, premature births, and infant deaths, which correlate with self-reported experiences of racism and discrimination. Conversely, African-born immigrant women were reported to have birth outcomes similar to those of Caucasian/White women. However, the maternal health, pregnancy, and neonatal health of the grandchildren of African immigrant women born in the United States trend toward the patterns described in African American/Black women. These disparities are believed to contribute to the low percentages of minorities that participate in clinical and biomedical research, as some of the barriers to participation are “distrust, provider perceptions, and access to care.” The cyclical nature of disparities (disparate living environments, disproportionate access to education and health care, postnatal complications, wealth inequalities, accelerated aging and morbidity) warrants a multifaceted solution to a pervasive, generational problem.

Mechanisms that can potentially facilitate health care integration and improve participation in research

In 2010, the redesigned National Institute on Minority Health and Health Disparities (NIMHD) was established with a vision in which “all populations will have an equal opportunity to live long, healthy, and productive lives.” To accomplish this, NIMHD raises national awareness about the prevalence and impact of health disparities and disseminates effective “individual-, community-, and population-level interventions to reduce and encourage elimination of health disparities.” This vision recognizes the need to study health disparities through a variety of modalities, ranging from the biomedical to the social sciences, as the majority of clinical and translational studies have been conducted in Caucasians/Whites. Specifically, four major NIMHD-sponsored programs provide funding to address the components of health disparities, inequity, and inequality at the level of academe (Research Endowment Program), the community (Community Based Participatory Research Program, Small Business Innovation Research/Small Business Technology Transfer Program), and international training (Minority Health and Health Disparities International Research Training Program). It is also essential to facilitate mentoring of up-and-coming scientists and clinicians from underrepresented groups. The National Research Mentoring Network is a consortium of biomedical and clinical professionals that provides “evidence based mentorship professional development” for everyone from undergraduates through professionals; this serves as an important way to make inroads toward increasing diversity in the biomedical sciences. Earlier exposure to the sciences for underprivileged youth, along with parental and community support, could serve as a valuable avenue to combat health inequity.

Concluding thoughts: Demographic changes in the United States and the impact on biomedical research

The conversations surrounding disparities can be difficult; however, they are necessary. A concerted effort to improve the lives of those who are at risk or underserved has the potential to improve individual lives as well as strengthen the scientific community. The projected increase of minorities in the United States warrants improved access to life-saving treatment and encouragement of participation in biomedical research, as there is mounting evidence that environmental factors can influence the cellular and physiological response to stress. We also need to examine methodologies that will build trust in the scientific community, which starts with continuing to dismantle the remnants of systematic discrimination, introducing science to underrepresented minorities earlier in their didactic training, providing community support, and training future researchers and clinicians to be more sensitive and responsive to the needs of the communities they serve.

Have an interesting science policy link? Share it in the comments!


April 16, 2018 at 9:57 pm

Science Policy Around the Web – April 13, 2018


By: Maryam Zaringhalam, Ph.D.


source: pixabay

Public Health

Flint school children to be screened for effects of lead after agreement

April 25th will mark four years since Flint, Michigan last had clean drinking water. In that time, the water has been contaminated with lead at levels above the hazardous waste threshold and with pathogens like Legionella, which caused an outbreak of Legionnaires’ disease that left 12 dead. The mishandling of Flint’s water crisis has resulted in a number of lawsuits and several felony convictions, with charges ranging from conspiracy to involuntary manslaughter.

Most recently, a judge approved a $4 million legal agreement on Thursday to screen children for lead exposure and evaluate their cognitive development, memory, and learning. The lawsuit was first filed in 2016 by a coalition of local and national groups that sued the Michigan Department of Education and school districts in Flint. Childhood exposure to lead, a neurotoxin, can have long-term adverse effects on cognitive and physical development. As a result, children exposed to lead may require special education services. The results of these evaluations, which will begin in September, will be used to better provide services to the children affected by lead exposure. Dr. Mona Hanna-Attisha, director of the Michigan State University-Hurley Children’s Hospital Pediatric Health Initiative and an early advocate for the Flint community, will oversee the program. The lawsuit will continue in Michigan federal court to seek increased special education services and reforms, with representation by the ACLU of Michigan.

The Flint water crisis began in 2014 when the city switched its water supply from Lake Huron to the Flint River, which had long been polluted by industrial byproducts. Flint residents immediately reported poor-tasting water; however, their complaints were ignored by government officials despite robust community advocacy efforts. Finally, in September 2015, scientists at Virginia Tech published an extensive report (made possible by collaboration with members of the Flint community) documenting dangerous levels of lead in Flint residences, followed by a report from the Environmental Protection Agency (EPA). Pollution in the river had created a fertile breeding ground for bacteria, so the water was treated with chlorine; this made the water acidic, which in turn leached lead from Flint residents’ plumbing. The crisis could have been prevented if appropriate corrosion control measures had been taken.

On April 6, Michigan Governor Rick Snyder announced Flint’s water is once again safe for drinking, terminating the free bottled water program designed to give Flint residents safe water as part of a $450 million state and federal aid package. Nevertheless, mistrust remains.

(Alex Dobuzinskis, Reuters)

Healthcare

Trump administration rewrites ACA insurance rules to give more power to states

After several unsuccessful Congressional attempts to repeal the Affordable Care Act (ACA) last year, the Trump administration has taken steps to roll back ACA regulations with 523 pages’ worth of new and revised rules. The new regulations will take effect for ACA health plans sold this fall for 2019 coverage.

Perhaps the most significant change comes from a new rule aimed at shrinking the authority of the individual mandate, the ACA provision that every individual must have health coverage or face a penalty. Individuals can seek exemption from that requirement through one of two broad channels. On Monday, April 9, the Centers for Medicare & Medicaid Services issued a final notice that individuals living in counties with only one or no ACA insurers qualify for a “hardship exemption” because the marketplace is not competitive in their region. Notably, in 2018, around half of US counties had only one or no ACA insurers. Individuals opposing abortion can also qualify for exemption if their only ACA plan options cover abortion. In November, the Congressional Budget Office projected that a straight repeal of the individual mandate would increase premiums by ten percent, so even a partial effective repeal could lead to increased premiums for customers opting to stay on ACA plans.

The new rules also grant states much more authority and flexibility in determining whether health plans meet ACA standards. The old ACA rules required insurers to provide a standard set of ten essential health benefits, ensuring that customers had access to the same core set of benefits and allowing them to comparison shop. Previously, states were required to base these ten categories on a single benchmark plan within state borders. The rule has now been changed so that states can select benchmark standards from across state lines a la carte (e.g., a maternity care standard from New Jersey paired with a laboratory services standard from Arkansas).

CMS Administrator Seema Verma told reporters: “Ultimately the law needs congressional action to repeal.” But in the meantime, the above examples are only two of several changes that will rein in the ACA’s powers.

(Amy Goldstein, The Washington Post)

Have an interesting science policy link? Share it in the comments!


April 13, 2018 at 4:02 pm

Science Policy Around the Web – April 10, 2018


By: Allison Dennis, B.S.


source: pixabay

Mental Health

Many People Taking Antidepressants Discover They Cannot Quit

Fifteen million American adults have taken antidepressants for longer than five years, despite the fact that these drugs were originally approved for short-term treatment lasting less than nine months. Many doctors agree that a lifetime prescription may be necessary for some patients. However, many are concerned that other patients may simply be accepting long-term use of antidepressants when faced with the challenge of stopping.

Surveys have shown that stopping long-term medications is not a straightforward process, with many patients reporting withdrawal effects. Some antidepressants take weeks to break down and leave the body, and their absence can induce anxiety, insomnia, nausea, “brain zaps,” and even depression itself. Antidepressants are among the most frequently prescribed therapeutics, yet the drugs’ labels do not outline how to end a prescription safely. Patients may have to turn to online resources such as The Withdrawal Project, which offers a community-based approach to support but whose writers describe themselves as “laypeople who have direct personal experience or who have supported someone else in the process of reducing or tapering off psychiatric medication,” not medical professionals.

The benefits of antidepressants in the treatment of depression are undeniable, leaving government regulators cautious about limiting their availability. Antidepressant manufacturers appear unwilling to dive into research characterizing the discontinuation syndrome experienced when patients try to stop, feeling that their efforts to demonstrate the drugs are safe and effective are sufficient. Academic and clinical researchers have occasionally tackled the issue, but few studies have looked at the barriers facing holders of open-ended antidepressant prescriptions.

(Benedict Carey and Robert Gebeloff, The New York Times)

Alzheimer’s Disease

Scientists Push Plan To Change How Researchers Define Alzheimer’s

Currently, the 5.7 million Americans living with Alzheimer’s are identified through a panel of symptoms, including memory problems and fuzzy thinking. However, these symptoms are the product of biological changes that scientists believe may be an earlier and more accurate marker of disease. On the biological level, Alzheimer’s can be characterized by the accumulation of several characteristic structures in brain tissue: plaques, abnormal clusters of protein that accumulate between nerve cells; tangles, twisted fibers that form inside dying cells; and the buildup of glial cells, which ordinarily work to clear debris from the brain. It is unclear whether these changes drive the widespread disconnection and destruction of neurons in Alzheimer’s patients’ brains, first in the parts involved in memory and later in those responsible for language and reasoning, or are just a byproduct of a yet-to-be-discovered process.

A work group formed by collaborators at the National Institute on Aging and the Alzheimer’s Association is putting forward a research framework that defines Alzheimer’s by the progression of a panel of risk factors, including neuropathology, tangles, plaques, and neurodegeneration. By allowing these biomarkers to fall along a continuum, the group is accommodating the observation that the exhibition of these traits can vary widely between individuals and may not always co-occur with symptoms. Yet the framework is intended to “create a common language with which the research community can test hypotheses about the interactions between Alzheimer’s Disease pathologic processes.”

Although much of the research is preliminary, specialized brain scans and tests of spinal fluid are already being designed to identify these biomarkers directly. The biomarkers included on the continuum can be observed 20-30 years prior to symptoms, fostering the hope that early interventions could be implemented to slow disease progression or even prevent it in the first place.

(Jon Hamilton, NPR)

Have an interesting science policy link? Share it in the comments!


April 11, 2018 at 6:11 pm