Science Policy For All

Because science policy affects everyone.

Posts Tagged ‘brain research’

Science Policy Around the Web – January 4, 2019


By: Saurav Seshadri, Ph.D.


Source: Pixabay

U.S. shutdown begins: ‘It’s disheartening, … discouraging, … deflating’

As the third government shutdown of 2018 enters its third week, the extent of its impact on the scientific community is still unclear. The shutdown affects about a quarter of the federal government, including agencies such as the EPA, NIST, and NASA.  While some agencies, such as the NIH and CDC, have budgets approved through 2019 and can operate normally, many others have had to halt activities and place non-essential employees on leave without pay.  Such employees can’t even use their official email addresses until the government reopens. One of the hardest hit agencies is the National Science Foundation (NSF), whose current contingency plan only retains sixty out of about two thousand employees.  NSF has been forced to stop grant reviews and payments, and will have to reschedule many of its deadlines, which has real consequences for investigators – following a 16-day shutdown in 2013, NSF stated outright that because of the lapse in funding, ‘scientists experienced setbacks in furthering their research objectives’.  Other agencies are less affected; for example, drug reviews conducted by the FDA are largely funded by pharmaceutical companies and will continue without disruption.  However, the American Association for the Advancement of Science (AAAS) summarized many scientists’ concerns by stating that ‘any shutdown of the federal government can disrupt or delay research projects, lead to uncertainty over new research, and reduce researcher access to agency data and infrastructure’.

With their party’s majority in the House of Representatives made official on Thursday, Democrats immediately passed spending bills that would fund the government through early February.  However, these bills do not include $5.6 billion in funding for a border wall, and are therefore unlikely to end the shutdown: President Trump has insisted upon this figure, and Senate Republicans have indicated that they will not move forward any legislation that doesn’t meet Trump’s approval, despite previously supporting the same stopgap bill.  Post-holiday media coverage of struggling furloughed workers and lapsed public services has started to stoke public pressure, which may ultimately force an end to the shutdown.  But for now, both sides are more deeply entrenched in their positions than ever, and important science is falling by the wayside.

(Science News Staff, Science)

Brain circuits of compulsive drug addiction identified

Misuse of prescription opioid painkillers has reached epidemic proportions in America, and was declared a public health emergency by the Department of Health and Human Services in 2017.  According to recent estimates, there were roughly 72,000 drug overdose deaths that year, most involving opioids, a 10% increase over the previous year.  The National Institute on Drug Abuse (NIDA) reports that about 20-30% of people prescribed opioids misuse them, and about 10% develop an opioid use disorder.  These statistics highlight the desperate need to understand the neurobiology of addiction, and specifically, how it can be reversed.  A recent study published in Nature makes significant progress in this direction.

The authors began by modeling a core feature of human addiction: compulsive behavior, or drug seeking in the face of negative consequences.  Mice were allowed to stimulate the release of dopamine (the neurotransmitter that signals ‘reward’ in response to drugs of abuse) by pressing a lever, which sent a laser pulse through an optical fiber to light-sensitized dopaminergic neurons in their brain.  Within a few days, mice were functionally addicted, and were self-stimulating up to 80 times per hour.  At this point, mice started to receive punishing electric foot shocks on every third stimulation.  This intervention caused the mice to split into two groups: those that stopped self-stimulating (40%, ‘renouncers’) and those that continued (60%, ‘perseverers’).

Having thus identified mice that showed characteristics of compulsive behavior, the authors turned their focus to the underlying circuitry.  They traced a neural pathway connecting the orbitofrontal cortex and the dorsal striatum, regions involved in decision making and action selection, which showed opposite patterns of activity in renouncers and perseverers.  Remarkably, reducing activity in this pathway caused persevering mice to temporarily renounce stimulation, and weakening the connection itself extended this effect (up to about six days).

While the validity of comparing brain stimulation to drug abuse remains to be established, these results are still highly encouraging.  They provide a physical basis for aspects of addiction that are often poorly understood and harshly judged by society: why some addicts engage in seemingly irrational compulsive behaviors, and why individuals differ in their ability to break self-destructive cycles of addiction. Since several methods for modulating brain activity already exist, pinpointing exactly which circuits to target to combat addiction, as this study does, may be the key to recovery for many patients.

(Patricia Janak, Nature)


Have an interesting science policy link? Share it in the comments!


Written by sciencepolicyforall

January 4, 2019 at 10:01 am

Science Policy Around the Web – April 10, 2018


By: Allison Dennis B.S.


source: pixabay

Mental Health

Many People Taking Antidepressants Discover They Cannot Quit

Fifteen million American adults have taken antidepressants for longer than five years, even though these drugs were originally approved for short-term treatment lasting less than nine months. Many doctors agree that a lifetime prescription may be necessary for some patients. However, many are concerned that other patients may simply be accepting long-term use of antidepressants when faced with the challenge of stopping.

Surveys have shown that stopping long-term medications is not a straightforward process, with many patients reporting withdrawal effects. Some antidepressants take weeks to break down and leave the body, and their absence can induce anxiety, insomnia, nausea, “brain zaps,” and even depression itself. Antidepressants are among the therapeutics most frequently prescribed by physicians, yet the drugs’ labels do not outline how to end a prescription safely. Patients may have to turn to online resources such as The Withdrawal Project, which takes a community-based approach to support, but whose writers describe themselves as “laypeople who have direct personal experience or who have supported someone else in the process of reducing or tapering off psychiatric medication” rather than medical professionals.

The benefits of antidepressants in the treatment of depression are undeniable, leaving government regulators cautious about limiting their availability. Antidepressant manufacturers appear unwilling to dive into research characterizing the discontinuation syndrome patients experience when they try to stop, considering their efforts to demonstrate that the drugs are safe and effective to be sufficient. Academic and clinical researchers have occasionally tackled the issue, but few studies have examined the barriers facing holders of open-ended antidepressant prescriptions.

(Benedict Carey and Robert Gebeloff, The New York Times)

Alzheimer’s Disease

Scientists Push Plan To Change How Researchers Define Alzheimer’s

Currently, the 5.7 million Americans living with Alzheimer’s are identified through a panel of symptoms, including memory problems and fuzzy thinking. However, these symptoms are the product of biological changes that scientists believe may be an earlier and more accurate marker of disease. On the biological level, Alzheimer’s can be characterized by the accumulation of several characteristic structures in brain tissue: plaques, abnormal clusters of protein that accumulate between nerve cells; tangles, twisted fibers that form inside dying cells; and a buildup of glial cells, which ordinarily work to clear debris from the brain. It is unclear whether these changes drive the widespread disconnection and destruction of neurons seen in Alzheimer’s patients, first in the parts of the brain involved in memory and later in those responsible for language and reasoning, or are merely a byproduct of a yet-to-be-discovered process.

A work group formed by collaborators at the National Institute on Aging and the Alzheimer’s Association is putting forward a research framework that defines Alzheimer’s by the progression of a panel of biomarkers: neuropathology, tangles, plaques, and neurodegeneration. By allowing these biomarkers to fall along a continuum, the group accommodates the observation that these traits can vary widely between individuals and may not always co-occur with symptoms. The framework is intended to “create a common language with which the research community can test hypotheses about the interactions between Alzheimer’s Disease pathologic processes.”

Although much of the research is preliminary, specialized brain scans and tests of spinal fluid are already being designed to identify these biomarkers directly. The biomarkers included on the continuum can be observed 20-30 years prior to symptoms, fostering the hope that early interventions could be implemented to slow disease progression or even prevent it in the first place.

(Jon Hamilton, NPR)

Have an interesting science policy link? Share it in the comments!

Written by sciencepolicyforall

April 11, 2018 at 6:11 pm

Science Policy Around the Web – November 22, 2016

leave a comment »

By: Rachel Smallwood, PhD

Photo source: pixabay

Federal Research Funding

US R&D Spending at All-Time High, Federal Share Reaches Record Low

Recently released data from the National Science Foundation (NSF) show a trend of increasing scientific research funding in the US over the past several years. Estimates put total funding for 2015 at an all-time high for research and development (R&D) funding by any country in a single year. In 2009, President Obama set a goal of devoting 3% of the USA’s gross domestic product (GDP) to research, and progress toward that goal has been slow but steady; in 2015, 2.78% of GDP went to research. Businesses accounted for the largest portion of overall scientific funding, contributing 69% of the funds. The second largest contributor was the federal government; however, its percentage share of the total was the lowest since the NSF started tracking funding in 1953, and the actual dollar amount it contributes has been declining since 2011. Therefore, although the overall percentage of GDP going to research is increasing, that increase is driven by businesses, while the GDP percentage contributed by the federal government has dropped to roughly 0.6%.
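The relationship among these figures can be checked with quick arithmetic. The values below are rounded, illustrative numbers taken from the shares quoted above, not exact figures from the NSF release:

```python
# Rough arithmetic behind the quoted R&D funding shares (2015, rounded).
gdp_share_total = 2.78     # total US R&D as a percent of GDP
business_share = 0.69      # business fraction of total R&D funding
gdp_share_federal = 0.6    # federal R&D as a percent of GDP (approximate)

# Business R&D as a share of GDP: about 1.92% of GDP.
print(round(gdp_share_total * business_share, 2))

# Federal share of total R&D funding: about 0.22, i.e. roughly a fifth.
print(round(gdp_share_federal / gdp_share_total, 2))
```

In other words, the federal government now supplies only about a fifth of US R&D spending, which is consistent with its record-low share of the total.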

Taking a closer look at types of research, the federal government is the largest funding source for basic science research, covering 45% of the total. However, businesses provide the majority of funding for applied research (52% in 2014) and experimental development (82% in 2014). This disproportionality in funding types, combined with the decline in federal research spending, is concerning for the basic science field. There is more competition for less money, and the concern is compounded by uncertainty about President-Elect Trump’s position on and plans for scientific funding. Aside from a couple of issues, primarily climate change and the environment, he has said very little about science and research. Many scientists, institutions, and concerned citizens will be watching closely to see how science policy develops under Trump’s administration and how it affects federal spending and beyond. (Mike Henry, American Institute of Physics)

Biomedical Research

‘Minibrains’ Could Help Drug Discovery for Zika and for Alzheimer’s

A group of researchers at Johns Hopkins University (JHU) is working on a promising tool for evaluating disease and drug effects in humans without actually testing on humans. ‘Minibrains’ are clusters of human neural cells grown from skin cells that were reprogrammed to an earlier stage of development and then induced to differentiate into neural cells. They mimic the human brain in cell types and connections, but will never be anywhere near as large as a human brain and can never learn or become conscious.

A presentation earlier this year at the American Association for the Advancement of Science conference showcased the potential utility for minibrains. A large majority of drugs that are tested in animals fail when introduced in humans. Minibrains provide a way to test these drugs in human tissue at a much earlier stage – saving time, money, and animal testing – without risking harm to humans. Minibrains to test for biocompatibility can be made from skin cells of healthy humans, but skin cells from people with diseases or genetic traits can also be used to study disease effects.

A presentation at the Society for Neuroscience conference this month demonstrated one such disease – Zika. The minibrains’ growth is similar to fetal brain growth during early pregnancy. Using the minibrains, Dr. Hongjun Song’s team at JHU was able to see how the Zika virus affected the cells; the affected minibrains were much smaller than normal, a result that appears analogous to the microcephaly observed in infants whose mothers were infected with Zika during the first trimester.

Other presentations at the meeting showcased work from several research groups that are already using minibrains to study diseases and disorders including brain cancer, Down syndrome, and Rett syndrome, and plans are underway to utilize it in autism, schizophrenia, and Alzheimer’s disease. Though there might be a bit of an acceptance curve with the general public, minibrains potentially offer an avenue of testing that is a better representation of actual human cell behavior and response, is safer and more affordable, and reduces the need for animal testing. (Jon Hamilton, NPR)

Health Policy

A Twist on ‘Involuntary Commitment’: Some Heroin Users Request It

The opioid addiction epidemic has become a significant healthcare crisis in the United States. Just last week the US Surgeon General announced plans to target addiction and substance abuse. He also stated the desire for a change in perception of addiction – it is a medical condition rather than a moral or character flaw. Earlier this year, the Centers for Disease Control published guidelines that address opioid prescribing practices for chronic pain, strongly urging physicians to exhaust non-pharmacologic options before utilizing opioids. In response to the rising concern over prescription opioid abuse, steps have been taken to reduce prescriptions and access. This has resulted in many turning to heroin – which is usually a cheaper alternative anyway – to get their opioid fix.

One of the first steps in treatment and recovery for addiction and dependence is detoxing. However, opioids are highly addictive and many people struggle with the temptation to relapse. Additionally, many of the programs designed to help with the initial detox have long wait lists, are expensive, and may not be covered by insurance, further deterring those with addiction and dependence from getting the help they need. These factors have caused many to turn to their states, asking to be committed to a program on the basis that they are a danger to themselves or others because of their substance abuse. This is currently an option in 38 states. These programs can be housed in either privately-run institutions or state prisons. However, the practice is controversial because if a person’s insurance does not cover their stay, it falls to taxpayers to foot the bill. While this is unpopular with some, advocates say the civil commitment laws are important options when there may be no other immediate way for an individual to get help. (Karen Brown, NPR)

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

November 22, 2016 at 9:00 am

Science Policy Around the Web – September 23, 2016


By: Emily Petrus, PhD

Source: pixabay

Biomedical Research

BRAIN Initiative might get a global boost

While politicians met at the UN General Assembly in New York this week, a meeting of a more scientific variety was going on nearby at Rockefeller University. The US National Science Foundation (NSF) hosted a gathering of neuroscientists from across the globe to develop new ideas for organizing their field of research. The US BRAIN Initiative was launched in 2013 as an effort to study key issues in neuroscience, such as how the brain connects and functions at the cellular and systems levels. Other countries worldwide have similar initiatives in place or in planning, so the NSF wanted a sense of how data and resources could be shared between scientists regardless of country. For example, Japan and China are investing heavily in primate research, while America and Europe tend to shy away from those models and put more focus on basic research and clinical applications.

One problem neuroscientists encounter when comparing research findings is differences in data acquisition and processing, with each lab having its own in-house protocols and analyses. A global repository of data, with access to supercomputers and/or powerful microscopes for all, could be a boon for future neuroscience research. Other researchers voiced concerns that a global project would redirect funds from local and national sources. This new neuroscience “club” could also create yet another economic hurdle for scientists in developing nations to overcome.

Politicians at the UN General Assembly voiced their support for an International Brain Initiative, and were met by cautious enthusiasm from neuroscientists. Time will tell if a truly global approach to neuroscience materializes, but political and financial support for neuroscience research makes this an exciting time to be a scientist. (Sara Reardon, Nature)

2016 Presidential Elections

How do the candidates stack up on science?

With the first presidential debate scheduled for Monday, September 26, our nation continues a heated election season with two powerful candidates. Although science is generally low on the priority list for the voting public, it remains an integral part of how our educated nation works. Research influences broad issues in public policy, and policy influences how science gets funded and moves forward.

The candidates have some points of agreement and some of contention on various scientific topics. For example, both Trump and Clinton support NASA and space exploration, although Trump is more eager for a private-sector endeavor. Both support childhood vaccination, though Trump has expressed some reservations. On other issues of public health, such as funding for biomedical research, Clinton has clear proposals for increasing funding, while Trump seems more skeptical than supportive of funding the NIH.

Neither candidate has voiced strong opinions on the use of genetically modified foods. However, Clinton does support food labeling, citing a “right to know”, while the Republican Party opposes making labels mandatory. In addition, neither candidate has made a clear statement about gun research; while Clinton has proposed many changes to gun control, Trump supports a right to carry at the national level. Improving Science, Technology, Engineering and Mathematics (STEM) education is a topic about which Clinton is passionate, while Trump’s stance is less clear. He maintains that education should be on a locally managed level, which means geography would impact the availability of quality STEM programs.

The strongest point of contention is regarding climate change, where Clinton proposes creating clean energy jobs and cutting greenhouse gas emissions, while Trump considers climate change a hoax and vowed to use American-produced natural gas and oil and reverse the EPA’s moratorium on new coal mining permits.

Overall the candidates have said little regarding these top scientific issues, but based on what they have said in the past, there are certain issues they agree on, while others are divisive in both politics and for the general public. (Science News Staff, ScienceNews)

Biomedical Training

It’s postdoc appreciation week!

In 2009 the US House of Representatives officially declared a week of appreciation for a driving force of scientific research: the postdoc. Postdoctoral fellows/researchers (postdocs) are research scientists who have completed a PhD and continue their training under a more established principal investigator in order to expand their research experience and launch their careers. The National Postdoctoral Association (NPA) pioneered the celebrations in 2010, giving postdocs perks such as career fairs, ice cream socials, and free tickets to local events. Although some of these perks may seem superficial, the larger goal of the week is to bring attention to the plight of these mid-career scientists.

Recently postdocs have become an increasingly vocal part of the research community, as their numbers swell and job prospects appear bleak. Under the organization of the NPA, postdocs have won increases in the stipend (pay) levels dictated by the NIH. The NPA has also provided recommendations, information, and guidance to the White House and other policy branches of the government. Its goals are to enhance postdoctoral training experiences and opportunities for postdocs in academic and government research settings. The US is placing more focus on getting students to study Science, Technology, Engineering and Math; however, biomedical PhDs are being produced at a rate that academia, government, and industry cannot sustainably employ. By celebrating postdoc appreciation week, the focus briefly shifts to the other end of the pipeline, where conditions must improve if more people are to be inspired to join at the entry point.

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

September 23, 2016 at 3:29 pm

Science Policy Around the Web – July 22, 2016


By: Nivedita Sengupta, Ph.D.

photo credit: Alex E. Proimos via photo pin cc

The Common Rule and human testing

Science academies blast US government’s planned research-ethics reforms

Advances in human health and welfare ultimately depend on research with human subjects. Properly controlled studies are therefore imperative, both to prevent abuse of human subjects and to protect their data. To address these concerns, the “Common Rule” was established in 1991, influenced by the Belmont Report, a 1978 document that laid out principles for ethical research with humans, such as minimizing patient harm and maximizing the benefit to society. The Common Rule remains the current set of human-subject regulations, addressing ethical issues such as informed consent and the storage of study participants’ biological specimens, among many others. However, technological advances have made achieving these goals more complicated, particularly with respect to maintaining patient privacy. Hence, in September 2015 the US government proposed revisions to the regulations governing studies of human subjects.

Recently, however, an independent advisory panel concluded that the US government’s proposed overhaul of the Common Rule is flawed and should be withdrawn. On June 29th the US National Academies of Sciences, Engineering, and Medicine said that the proposed changes are “marred by omissions and a lack of clarity”, that they would slow down research, and that they would do little to improve protections for patients enrolled in studies. The panel recommended that the government appoint an independent commission to craft new rules for such research.

The changes proposed by the US Department of Health and Human Services (HHS) attempt to address concerns that have arisen since the Common Rule was established. For instance, the HHS reforms would require participants’ consent to use stored samples, such as blood or tissue, for future research. But the US academies’ panel pointed out that the new consent requirements would slow research unnecessarily, because little harm is likely to come to a person from the use of stored samples. Moreover, the extra consent forms would link the samples to the person’s name, thus increasing the risk of identification.

HHS is currently reviewing more than 2,100 public comments on its proposal, many of them critical. However, the US academies’ panel says that the proposal should be scrapped and that HHS should start fresh by appointing an independent commission to recommend reforms to the Common Rule. Meanwhile, an HHS spokesperson said that the government is still considering the public’s comments and the report. She added that the proposal comes after “many years of work,” and “that starting over would require many more.” (Sara Reardon, Nature News)

Scientific Publishing

Beat it, impact factor! Publishing elite turns against controversial metric

The journal impact factor (JIF), one of the most heavily promoted and controversial metrics in science, is facing a backlash from the scientific community. The impact factor, calculated by various companies, measures the average number of citations that articles published by a journal in the previous two years have received in the current year. It is aimed solely at indicating the quality of journals, said Heidi Siegel, a spokesperson for Thomson Reuters, the major publisher of JIFs. The irony, however, is that researchers often use the JIF to judge individual papers instead, and in some cases even their authors.

On July 5th, several leading science publishers posted a paper to the preprint server bioRxiv asking all journals to consider a different metric, one that captures the range of citations that a journal’s articles acquire. The American Society for Microbiology in Washington DC also announced plans to drop the impact factor from its journals and website, as well as from its marketing and advertising.

Stephen Curry, a structural biologist at Imperial College London and lead author on the bioRxiv preprint, said that, sadly, many researchers evaluate papers by the impact factor of the journal, and that this can also influence decisions made by hiring committees and funding agencies. Curry’s team highlighted some of the metric’s limitations by plotting the distribution of citations (used to calculate the 2015 impact factors) for articles published in 2013–14 in 11 journals, including Science, Nature, and eLife. They showed that most papers gathered fewer citations than the journal’s impact factor: 74.8% of Nature articles were cited below its impact factor of 38.1, and 75.5% of Science papers were cited below its impact factor of 34.7. Highly cited papers cause this disconnect: Nature’s most cited paper in the analysis was referenced 905 times, and Science’s 694 times. Curry and his team recommend using citation distribution curves instead of the JIF, as they provide a more informative snapshot of a journal’s standing.
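Because citation counts are heavily skewed, the mean (which is all the JIF is) sits well above what most papers actually achieve. A small sketch with invented citation counts for a hypothetical journal, not data from any real one, illustrates the effect:

```python
# Invented per-article citation counts for a hypothetical journal's
# 200 articles published over the two-year JIF window.
per_article = [0] * 50 + [1] * 60 + [3] * 60 + [10] * 25 + [200] * 5

# The impact factor is the mean citation count over that window.
jif = sum(per_article) / len(per_article)
print(jif)  # 7.45

# Fraction of articles cited less often than the impact factor suggests.
below = sum(c < jif for c in per_article) / len(per_article)
print(below)  # 0.85
```

Here five blockbuster papers drag the mean above 85% of the journal’s output, which is exactly the pattern Curry’s team found in real journals.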

Ludo Waltman, a bibliometrics researcher at Leiden University in the Netherlands, agrees that citation distributions are more relevant than impact factors for decisions about hiring and promotion, but feels that impact factors can still be useful to researchers in some cases. Nonetheless, anti-impact-factor campaigners believe that it will take time and pressure from many directions to diminish the influence of a metric that has become a fixture of scientific culture. (Ewen Callaway, Nature News)

Brain research advancements

Human brain mapped in unprecedented detail

Neuroscientists have long sought to divide the brain into smaller pieces to better understand how it works as a whole. On July 20th, Nature published an unprecedented map of the brain’s outermost layer, the cerebral cortex, subdividing the folds of each hemisphere into 180 separate areas. Ninety-seven of these areas had never been previously described, despite showing clear differences in structure, function, and connectivity from their neighbors.

“Until now, most brain maps were based on a single type of measurement, which provides an incomplete view of the brain’s inner workings,” says Thomas Yeo, a computational neuroscientist at the National University of Singapore. The new map is based on multiple types of MRI measurement, including scans of blood flow in response to different mental tasks, which Yeo says “greatly increases confidence that they are producing the best in vivo estimates of cortical areas.”

The map was constructed by a team led by neuroscientist Matthew Glasser at Washington University Medical School, using imaging data from 210 healthy young adults participating in the Human Connectome Project, a National Institutes of Health-funded initiative to map the brain’s structural and functional connections. The team collected information on cortical thickness; brain function; connectivity between regions; topographic organization of cells in brain tissue; and levels of myelin, a fatty substance that speeds up neural signaling. Borders on the map were delineated where two or more of these properties showed significant changes. Analysis of all the data confirmed the existence of 83 previously reported brain areas and identified 97 new ones. The researchers then validated the map by looking for these regions in the brains of an additional 210 people. The size of the areas varied from person to person, and these differences may yield new insights into individual variability in cognitive ability, opening up the possibility of exploring how individual anatomy intersects with intellectual and creative talents.
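The border-drawing rule can be caricatured in a few lines of code. This is a toy sketch, not the actual analysis pipeline from the study: the property values, the threshold, and the one-dimensional strip of cortex are all invented for illustration.

```python
# Toy border detection along a 1-D strip of cortex: a border is drawn
# wherever two or more properties change sharply between neighbors.
myelin    = [1.0, 1.0, 1.0, 0.4, 0.4, 0.4]
thickness = [2.5, 2.5, 2.5, 3.1, 3.1, 3.1]
function  = [0.2, 0.2, 0.3, 0.3, 0.9, 0.9]

def sharp_changes(values, threshold):
    """Indices where the property jumps by more than `threshold`."""
    return {i for i in range(1, len(values))
            if abs(values[i] - values[i - 1]) > threshold}

changes = [sharp_changes(myelin, 0.3),
           sharp_changes(thickness, 0.3),
           sharp_changes(function, 0.3)]

# Keep only positions where at least two properties jump together.
borders = [i for i in range(1, len(myelin))
           if sum(i in c for c in changes) >= 2]
print(borders)  # [3] -- myelin and thickness both jump at index 3
```

The blood-flow ("function") signal also jumps, but at a different position, so by itself it does not produce a border; requiring agreement between measurements is what the multi-modal approach adds.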

But the map is limited in some important ways: it reveals little about the biochemical basis of the brain or about the activity of single neurons or small groups of them. However, Glasser says, “We’re thinking of this as version 1.0; that doesn’t mean it’s the final version, but it’s a far better map than the ones we’ve had before.” (Linda Geddes, Nature News)

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

July 22, 2016 at 9:00 am