Science Policy For All

Because science policy affects everyone.

Posts Tagged ‘Alzheimer’s’

Science Policy Around the Web – November 14, 2017


By: Saurav Seshadri, PhD


Source: pixabay

Alzheimer’s Disease

Bill Gates sets his sights on neurodegeneration

Microsoft co-founder Bill Gates has announced a new funding initiative for research into Alzheimer’s Disease, starting with a personal donation of $50 million to the Dementia Discovery Fund (DDF).  The DDF is a UK-based, public-private collaboration, launched in 2015 and designed to encourage innovative research into treatments for dementia, of which Alzheimer’s Disease is a leading cause.  Initial investment in the DDF, which came from the pharmaceutical industry and government entities, was $100 million, meaning Gates’ contribution will be significant.  Gates says his family history makes him particularly interested in finding a cure for Alzheimer’s Disease.  The DDF has already taken steps in this direction: its first investment was in the biopharma company Alector, which is moving forward with immune system-related research to combat Alzheimer’s Disease.

Gates is already famous for his philanthropy through the Bill and Melinda Gates Foundation, which funds efforts to fight poverty and disease throughout the world.  However, the Foundation has traditionally focused on infectious diseases, such as HIV and malaria, making Alzheimer’s Disease Gates’ first foray into neuroscience.  In this regard, he has some catching up to do to match philanthropic contributions and business pursuits by other tech billionaires.  These include his Microsoft co-founder Paul Allen, who started the Allen Institute for Brain Science with $100 million in 2003.  The Allen Institute provides a range of tools for basic researchers, including comprehensive maps of brain anatomy, connectivity, and gene expression generated in mouse models.  More recently, Tesla founder Elon Musk started Neuralink, a venture that aims to enhance cognitive ability using brain-machine interfaces.  Kernel, founded by tech entrepreneur Bryan Johnson, has a similar goal.  Finally, while the Chan Zuckerberg Initiative (started by Facebook CEO Mark Zuckerberg in 2015) doesn’t explicitly focus on neuroscience, its science program is led by acclaimed neuroscientist Cori Bargmann.

As pointed out by former National Institute of Mental Health Director Tom Insel, this infusion of money, as well as the fast-moving, results-oriented tech mindset behind it, has the potential to transform neuroscience and deliver better outcomes for patients.  As government funding for science appears increasingly uncertain, such interest and support from private investors is encouraging.  Hopefully the results will justify their optimism.

(Sanjay Gupta, CNN)

 

Physics

Elusive particles create a black hole for funding

The Large Hadron Collider (LHC) enabled a scientific breakthrough in 2012 when it was used to produce evidence for the Higgs boson, a physical particle that endows matter with mass.  In the wake of the worldwide excitement generated by that discovery, physicists finalized plans for a complementary research facility, the International Linear Collider (ILC), to be built in Japan.  While the LHC is circular and collides protons, the ILC would collide positrons and electrons, at lower energy but with more precise results.  Unfortunately, anticipated funding for the $10 billion project from the Japanese government has failed to materialize.  Following recent recommendations by Japanese physicists, the group overseeing the ILC has now agreed on a less ambitious proposal, for a lower energy machine with a shorter tunnel.  Though physicists remain optimistic that the ILC will still provide useful data, it will no longer be able to produce high-energy quarks (one of its planned uses), and will instead focus on previously detected particles and forces.  The ILC’s future is currently in limbo until the Japanese government makes a concrete financial commitment, and it is unlikely to be completed before 2030.

After the Higgs boson, the LHC struggled to find proof of the existence of other new particles.  One such high-profile disappointment was the search for dark matter.  When dark matter was hypothesized to be the source of unexplained gamma radiation observed with NASA’s Fermi Space Telescope, the search for a dark matter particle became a top priority for the LHC’s second run.  Such evidence would also have supported supersymmetry, a key theory in particle physics.  However, these efforts, as well as multiple others using different detectors, have thus far failed to find any signs of dark matter.  These unsuccessful experiments certainly contributed to scaling back the ILC, and illustrate larger problems with setting realistic expectations and/or valuing negative results among scientists, government officials, and the public.  As a result, in order to advance our understanding of the basic building blocks of our universe, particle physicists will now have to do more with less.

(Edwin Cartlidge, Nature News)

Have an interesting science policy link?  Share it in the comments!


Written by sciencepolicyforall

November 14, 2017 at 5:40 pm

Science Policy Around the Web – September 27, 2016


By: Nivedita Sengupta, PhD

Source: pixabay

Alzheimer’s Disease

Larger studies are under way to test whether the promising early data holds up

A recent clinical trial reported that the drug aducanumab might clear from the brain the toxic amyloid-β proteins thought to trigger Alzheimer’s disease. In the study, which involved 165 people, 103 patients received the drug once a month for 54 weeks while the remainder received a placebo. Patients receiving infusions of aducanumab experienced a reduction in the amount of amyloid-β in their brains, consistent with the findings of a pretrial mouse study in which the drug cleared amyloid-β plaques from the animals’ brains. “This drug had a more profound effect in reversing amyloid-plaque burden than we have seen to date,” says psychiatrist Eric Reiman, executive director of the Banner Alzheimer’s Institute in Phoenix, Arizona.

Whether aducanumab also ameliorates the memory and cognitive losses associated with Alzheimer’s is now being tested in phase III clinical trials. Scientists have debated for years whether accumulation of amyloid-β causes memory loss and other symptoms of Alzheimer’s. This trial supports the “amyloid hypothesis”, and suggests that eliminating the protein might alleviate the symptoms. In the past, other Alzheimer’s drugs have looked promising in early-stage trials but ended in failure, in some cases causing patient deaths from brain inflammation. Aducanumab was also associated with abnormalities on brain-imaging scans, though in less than one-third of the patients. To protect participants, researchers closely monitored these anomalies throughout the trial. All of the reported imaging abnormalities resolved within about 4 to 12 weeks, and no patients were hospitalized.

Patients who received higher doses of the drug, or who had genetic risk factors for Alzheimer’s, were more likely to develop the brain anomalies. Accordingly, Biogen — the company that makes aducanumab — adjusted the drug’s dosage and the monitoring schedule for people with genetic risks for Alzheimer’s in its phase III trials.

Aducanumab is a bright spot in the field of Alzheimer’s therapeutics after years of failed antibody and other types of drug trials. “This is the best news we’ve had in my 25 years of doing Alzheimer’s research, and it brings hope to patients and families affected by the disease,” says neurologist Stephen Salloway of Butler Hospital in Providence, Rhode Island, who was on the team that ran the initial trial. (Erika Check Hayden, Nature)

Clinical Trials

Investigators are now required to disclose all clinical trials, whether successful or not

On 16th September 2016, the US Department of Health and Human Services (HHS) and the US National Institutes of Health (NIH) announced new rules for clinical-trial disclosure. The new rules require researchers to report the design and results of all clinical trials, whether successful or not, and empower the government to impose penalties on those who fail to comply. The rules take effect on 18th January, and researchers will have 90 days to comply. Disappointing clinical-trial results will no longer remain unreported: the new rules are intended to crack down on the large number of clinical trials that are conducted but never reported. Robert Califf, head of the US Food and Drug Administration (FDA), says, “A lot of major universities just miss the point that if you do an experiment on a person and get consent, you really have the obligation to make the results known.”

The old rule mandated that researchers conducting trials with human subjects register their study with the HHS website, ClinicalTrials.gov, before starting their work and follow up with information about their methods and results. But there were many exceptions and loopholes that created considerable ambiguity, allowing researchers to avoid reporting some trials, particularly failed ones. Christopher Gill, a health researcher at Boston University in Massachusetts, says, “This can bias the literature and obscure important information on whether an experimental therapy is harmful. From the perspective of consumers and science, failures are as important as successes”.

Under the new rule, all trials must be registered on ClinicalTrials.gov within 21 days of enrolling their first patient, and researchers can no longer wait for the results of their trials before reporting their data. Additionally, the NIH’s companion rule dictates that NIH-funded researchers must register phase I trials as well as trials that do not involve an FDA-regulated product, such as behavioral interventions. Further changes require researchers to report the details of plans to conduct trials, outline the statistics to be used to analyze the results, and disclose any changes to the protocol over the course of the study. The final HHS rules will give regulators a greater ability to enforce existing regulations, because many studies of drugs that are eventually licensed are still not reported. (Sara Reardon, Nature)

Income Inequality

Wages for top scientists are shooting skywards while others are being left behind

Income inequality in science is on the rise and is evident at universities across several countries. The salary gap between elite scientists and those toiling at the bench has been expanding over the past few decades. Only limited data on scientists’ salaries are available, making it difficult to determine the full extent and causes of income inequality. “But the gap in wages has reached a point at which it could be driving talented young people away from careers in academic science”, says Richard Freeman, an economist at Harvard University in Cambridge, Massachusetts. The results of Nature’s 2016 salary survey also support this concern.

One metric used to measure disparities in salaries is the Gini coefficient, in which 0 means everyone earns the same and 1 indicates maximum inequality. In 2012, economist Paula Stephan found that the Gini coefficient had more than doubled between 1973 and 2006 in most fields and faculty ranks in science, with the biggest increases in the life sciences. In contrast, it grew by only 35% for full-time male earners in the United States and 18% for US households.
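
For readers unfamiliar with the metric, here is a minimal sketch of how a Gini coefficient can be computed from a list of salaries, using the standard mean-absolute-difference formulation. The salary figures are invented for illustration only and are not drawn from Stephan’s analysis.

```python
# Minimal illustration of the Gini coefficient (0 = perfect equality, 1 = maximal inequality).
# The salary figures below are invented for illustration only.

def gini(values):
    """Gini coefficient via the mean-absolute-difference formulation:
    sum of |x_i - x_j| over all pairs, divided by 2 * n^2 * mean."""
    n = len(values)
    mean = sum(values) / n
    total_diff = sum(abs(x - y) for x in values for y in values)
    return total_diff / (2 * n * n * mean)

equal_pay = [80_000] * 10              # everyone earns the same
skewed_pay = [50_000] * 9 + [500_000]  # one star earner pulls far ahead

print(gini(equal_pay))   # 0.0
print(gini(skewed_pay))  # ~0.43
```

In this toy example, concentrating pay in a single top earner pushes the coefficient from 0 to roughly 0.43, which is the qualitative pattern behind the doubling Stephan describes.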

A major driver of the rising Gini coefficient was the doubling of the National Institutes of Health’s budget during the late 1990s and early 2000s, which created competition among institutions for a small pool of top-ranked, grant-winning scientists. Everybody wanted to employ the most productive scientists who could bring in grants, thus driving up their salaries. “One way for universities to minimize risk is to pick someone who is a demonstrated winner,” says Donna Ginther, a labour economist at the University of Kansas. As in the US, the salaries of top-earning professors in the UK have been pulling away from the pack since the late 1990s. An analysis of full-professor salaries in the UK, published in July, suggested that lower-ranking universities are offering high salaries to recruit researchers with high-quality papers in order to boost their scores in the REF (Research Excellence Framework, an assessment done by UK funding agencies roughly every five years). A similar trend is seen in other countries, such as China and Germany.

On the other end of the salary spectrum, there is little pressure to boost pay. With grants getting harder to win, labs are employing a low-cost workforce to maximize research output. This labor environment benefits from the willingness of postdocs to sacrifice income for a chance at an academic research career. Even those lucky enough to land offers for tenure-track junior faculty positions find that starting salaries are not very negotiable.

High salaries at the top can attract productive workers, but low pay at the bottom signals that there may not be a good future in this career. If big rewards become concentrated among a smaller group of people in a highly competitive field, then others who could still have become productive scientists lose a disproportionate amount in earnings and career prospects, and that could keep promising people from pursuing a research career at all. (Corie Lok, Nature)

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

September 27, 2016 at 11:47 am

Science Policy Around the Web – September 9, 2016


By: Thaddeus Davenport, PhD

Source: pixabay

Biotechnology

DNA Data Storage

In a recent Nature News article, Andy Extance described the growing need for novel data storage methods and materials. It is estimated that between 2013 and 2020 there will be a tenfold increase in digital information, requiring 44 trillion gigabytes of storage. This is a number that is difficult to comprehend, but its magnitude and the rapid rate of digital data growth are put in context by a second, more shocking, estimate: if the expansion of digital information continues at the forecasted rates, the amount of data requiring storage in 2040 will require “10 to 100 times the expected supply of microchip-grade silicon.” For this reason, researchers have begun considering alternative data storage materials, including DNA, which is able to store information at an impressive density; it is estimated that 1 kg of DNA would be sufficient to store the world’s digital archives. DNA is also stable – while there is data loss from hard disks after less than ten years of storage, Nick Goldman, a researcher pioneering DNA data storage at the European Bioinformatics Institute (EBI), notes that in 2013, researchers successfully read the genome of a horse that had been trapped in permafrost for 700,000 years. But there are a number of hurdles that must be overcome before we are able to stream our favorite show out of a test tube. These hurdles include: 1) reading and (especially) writing DNA sequences is slow, 2) DNA synthesis is error-prone, 3) DNA synthesis is currently expensive, and 4) it is difficult to selectively access desired information stored within DNA. There have been exciting advances over the last few years from researchers at EBI, Harvard, the University of Washington, and Microsoft that begin to address these problems. This year, researchers at Microsoft and the University of Washington reported successfully storing and retrieving 200 megabytes of data in DNA. This is a far cry from the 44 trillion gigabytes of storage we will require in 2020, but progress in science is non-linear and the need for alternative storage media will motivate the growth of this exciting field. (Andy Extance, Nature News)
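
To make the underlying idea concrete, here is a toy sketch of how binary data could be mapped onto DNA bases. It is an illustration only, not the actual coding scheme used by the EBI or Microsoft/University of Washington teams, which add error correction, avoid problematic base runs, and include addressing for random access.

```python
# Toy illustration of DNA data storage: map each pair of bits to one of the
# four bases. Real schemes add error correction, avoid long runs of the same
# base, and attach address sequences so specific files can be retrieved.

BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    """Turn bytes into a synthetic DNA strand (4 bases per byte)."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    """Recover the original bytes from a strand produced by encode()."""
    bits = "".join(BASE_TO_BITS[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = encode(b"Hello")
print(strand)                 # CAGACGCCCGTACGTACGTT
assert decode(strand) == b"Hello"
```

Even in this toy form, the sketch makes the hurdles above tangible: every byte costs four synthesized bases, and retrieving a specific file means sequencing and decoding the relevant strands.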

Environment

Oklahoma Shuts Down Wastewater Injection Wells Following Earthquake

There is a significant amount of wastewater that is released in the process of extracting oil and gas from traditional and hydraulic fracturing (“fracking”) wells. One way to dispose of this wastewater is to inject it deep into the earth’s crust. As oil production has increased within the continental United States within the last few years, wastewater injection has increased in stride. Recent evidence suggests that wastewater injection into rock formations alters pre-existing stresses within faults, in some cases leading to slippage that results in an earthquake. A recent article by Niraj Chokshi and Henry Fountain for the New York Times reported that on September 3rd, Oklahoma experienced a 5.6-magnitude earthquake – tying the state’s previous record for its most severe earthquake set in 2011. In response, Oklahoma government officials ordered the shutdown of three dozen wastewater injection wells in the area most affected by the earthquake. The quake comes amid an impressive increase in earthquake frequency for the state. In 2009, there were only three earthquakes of magnitude 3 or greater, but in 2015, this number increased to over 900. To address this increase, state officials ordered a reduction in wastewater injection last year with the hope of decreasing earthquake activity. To date in 2016 there have been over 400 earthquakes of magnitude 3 or greater in Oklahoma. While it is widely accepted that oil and gas production and the associated wastewater injection have set off a number of earthquakes in Oklahoma and other states, it remains unclear if last Saturday’s earthquake was the result of this activity. In the future, additional monitoring of injection wells will provide valuable data to inform decisions on the placement and operation of wastewater injection wells. (Niraj Chokshi and Henry Fountain, New York Times)

Health

Early Support for Amyloid Plaques as the Causative Agent of Alzheimer’s Disease

As humans are living longer, Alzheimer’s disease is becoming an increasingly significant public health problem. The prevailing hypothesis is that aggregation of proteins such as amyloid-β (Aβ) into larger “plaques” leads to Alzheimer’s disease, but there is still no direct evidence to demonstrate that Aβ plaques cause Alzheimer’s disease. In a Nature News & Views article this week, Eric M. Reiman summarizes the results of an article published in the same journal, which showed that a human antibody, called aducanumab, was able to reduce Aβ plaques in a dose-dependent manner in a small, 12-month placebo-controlled human trial. Though other Aβ-targeting therapies have successfully reduced Aβ aggregates, the most tantalizing result of this study comes from early exploratory analysis of the trial data, which suggested – based on a study population that is too small to make definitive conclusions – that higher doses of aducanumab and larger reductions in Aβ plaques were associated with slower cognitive decline. Before accepting the hypothesis that Aβ plaques cause Alzheimer’s disease, it will be critical to repeat the experiment in larger clinical trials appropriately powered to measure the impact of antibody treatment and plaque reduction on cognitive decline. The study authors also noticed that high doses of antibody were sometimes associated with inflammation within the brain, causing them to limit the maximum antibody dose tested. Overall, these are exciting results, which, if confirmed in larger clinical trials, would provide much-needed clarity about the mechanism of Alzheimer’s disease and inform future treatments. (Eric M. Reiman, Nature News & Views)

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

September 9, 2016 at 9:20 am

Science Policy Around the Web – September 25, 2015


By: Elisavet Serti, Ph.D.

Image courtesy of NIH Image Bank

Drug Policy

The new hepatitis C virus bottleneck: Can delaying therapy be justified?

Chronic Hepatitis C infection affects more than 3.2 million people in the United States and can result in severe liver disease, cirrhosis, hepatocellular carcinoma and death. The FDA approval of the new anti-Hepatitis C direct-acting antivirals marked a revolution in HCV therapeutics; it is now possible to offer patients safe and highly effective (cure rates of more than 90%) alternatives to pegylated interferon and ribavirin. However, the high cost of these new treatment regimens has proven to be a major obstacle to their delivery. Twelve weeks of oral anti-Hepatitis C treatment costs between $80,000 and $95,000, and it has been estimated that total health care costs related to Hepatitis C therapy could soon reach $27 billion per year. As a consequence, many state-funded and private insurance programs have restricted access to direct-acting antiviral-based therapy to patients with advanced fibrosis and extra-hepatic manifestations. In Texas, Medicaid has elected not to cover this type of therapy at all.

In a recent Hepatology editorial, the editors considered the potential harms of this Hepatitis C therapy bottleneck, which excludes the majority of chronically infected patients because of high costs. The editors argue that “while persons with advanced fibrosis are clearly at higher risk for short-term complications, it is not clear that persons with lesser degrees of fibrosis are not at risk for harm.” A recent meta-analysis has demonstrated that rates of fibrosis progression may be far more rapid than previously thought, meaning that delaying therapy runs the risk of progression to cirrhosis and development of hepatocellular carcinoma. The editors also argue that delaying therapy could introduce the added burden of implementing hepatocellular carcinoma or portal hypertension screening. In addition, chronic Hepatitis C has been associated with a variety of extra-hepatic manifestations such as diabetes, cardiovascular disease, psychiatric disorders, depression, renal dysfunction and rheumatologic conditions, and these extra-hepatic complications should be factored in when calculating the cost effectiveness of the new Hepatitis C therapies. For these and several other reasons, the American Association for the Study of Liver Diseases (AASLD) – Infectious Diseases Society of America (IDSA) HCV guidance recommends that all infected persons be treated.  (Tracy G. Simon and Raymond T. Chung, Hepatology)

Biomedical Research

Is the Alzheimer’s protein contagious?

A study recently published in Nature concluded that human transmission of amyloid-β pathology and cerebral amyloid angiopathy is possible. In simpler terms, it concluded that Alzheimer’s disease may be transmissible from person to person via particular iatrogenic (medical treatment) routes. The researchers examined the brains of eight people who had died of iatrogenic Creutzfeldt-Jakob disease (CJD) as a result of treatment with human cadaveric pituitary-derived growth hormone contaminated with prions. Prions are misfolded proteins with incubation periods that can exceed five decades. Human transmission of prions has occurred via medical and surgical procedures worldwide as well as via cannibalism in Papua New Guinea. Although treatment with cadaveric-derived growth hormone stopped in 1985, iatrogenic CJD continues to be found.

The researchers found evidence that amyloid-β, the protein implicated in the development of Alzheimer’s disease, had spread in the brains of these subjects, supporting the theory that the contaminated growth hormone injections could have contributed to the development of iatrogenic Alzheimer’s disease as well. It has never been reported that amyloid-β protein can be transmitted through other medical procedures, such as brain surgery or blood transfusion. These results and their interpretation spurred criticism from several neuro-specialists and researchers, who noted that the prions causing CJD can also trigger the formation of amyloid deposits in these brains. Still, there is no epidemiological connection between the contaminated growth hormone injections and the development of Alzheimer’s disease (Emily Underwood, Science Latest News).

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

September 25, 2015 at 9:00 am



Science Policy Around the Web – June 23, 2015


By: Rebecca A. Meseroll, Ph.D.

Water conservation

Satellite data reveals depletion of underground water supply

A decade-long study using NASA satellite data reveals that 21 of the world’s 37 largest aquifers have passed their sustainability tipping points, i.e., more water was removed from the aquifers than was replenished over the course of the ten years the study was conducted. These underground water supplies account for 35 percent of human water usage around the globe, with increased use during times of drought, when above-ground water sources are diminished. Stress on the aquifers varies around the world, but was found to be highest in areas of dense human population, especially in dry environments. The most drastically depleted aquifer is the Arabian Aquifer, which services more than 60 million people, followed by the Indus Basin in India and Pakistan. Aquifer depletion rates are also high in locations where water-demanding activities, including various types of mining and drilling for oil and natural gas, take place, such as the Canning Basin in western Australia. Although water basins do become replenished with rainwater and snow melt, this is a process that can take thousands of years. According to Jay Famiglietti, the principal investigator of the study, the stress on the aquifers is anticipated to get worse with global warming, as areas around the equator receive less rain and people living in those areas need to get more water from underground sources. Population growth is also expected to put strain on the aquifers. Water conservation efforts will be required to prevent further depletion of aquifers worldwide. (Todd Frankel, Washington Post)

Supplement regulation

Senator calls for scrutiny of Alzheimer’s supplements

Senator Claire McCaskill (D-Missouri), a ranking member of the Senate Special Committee on Aging, sent letters last week to fifteen major retailers asking what efforts they were making to prevent the sale of any fraudulent or potentially dangerous supplements that claim to protect against Alzheimer’s disease and dementia. The letters requested documentation of retailers’ policies relating to sale, marketing, removal, and reporting of adverse effects of dietary supplements. Federal law, as laid out by the Dietary Supplement Health and Education Act of 1994, allows supplement manufacturers to make general claims about their products’ health benefits, but prevents them from claiming a supplement can treat a specific disease. Despite this legislation, supplements claiming to protect against diseases still make it into the marketplace. In her letter to Amazon, McCaskill specifically targets a supplement called Brain Armor, touted by its manufacturer to protect against Alzheimer’s disease, dementia, and several other hallmarks of cognitive decline.   Although this particular supplement was removed from Amazon following notification by the FDA, which had been made aware of the supplement’s illegal claims during a staff meeting with McCaskill, the letter raises the question of why it had been for sale in the first place. Supplement regulation has been in the news previously this year when the New York State attorney general, Eric T. Schneiderman, sent cease and desist letters to GNC, Target, Walmart, and Walgreens, upon discovering that these retailers were selling herbal supplements that contained unlisted contaminants or did not even contain what was on the label. New legislation may be necessary to protect consumer safety if current regulations are not sufficient. (Anahad O’Connor, The New York Times)

Ebola drug development

Trials for potential Ebola drug halted

Tekmira Pharmaceuticals and the Wellcome Trust announced late last week that they had ceased enrollment in a trial of the potential Ebola drug TKM-Ebola-Guinea because the study had reached a statistical endpoint, meaning the trial would not likely be improved by increased enrollment. Although the drug was found to be an effective anti-Ebola therapeutic in rhesus monkeys, it has not proven beneficial for human patients. TKM-Ebola-Guinea is composed of several small RNA molecules, which interfere with Ebola proteins to counteract replication of the virus, packaged inside a lipid nanoparticle. The version tested on humans uses an older type of lipid nanoparticle than the one used in the monkeys, because the newer lipid nanoparticle has not yet undergone safety trials, and this difference may have influenced the efficacy of the drug. Trial design has also presented some difficulties in analysis of the data, owing to the ethical considerations of treating infected patients with a placebo instead of the drug. Everyone enrolled in the study received the drug, and outcomes were compared with those of patients at treatment centers not involved in the trial, rather than with a control group receiving a placebo or a different drug. Data that have been collected from the trial thus far will still be analyzed to learn more about the tolerability of the drug and its effect on the patients who were treated with it. Clinical trials for other potential Ebola treatments and preventions, such as the antibody cocktail ZMapp and two candidate vaccines, are ongoing. (Gretchen Vogel and Kai Kupferschmidt, ScienceInsider)

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

June 23, 2015 at 9:00 am

Inching Forward – An Initiative to Understand the Brain


By: Varun Sethi, MD, Ph.D.

On April 2nd 2013, the Twitter handle @BRAINinitiative re-tweeted a White House announcement that stated “Today we announce the next great American Project – the BRAIN initiative”. Since then, punctuated tweets have told the story of the evolution of this initiative to an audience of over 1400 followers. The Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative was launched in 2013 as a 12-year journey towards BRAIN 2025; it has been identified as one of the major Grand Challenges of the 21st century and aims to improve our understanding of the brain in action.

Neurological diseases, developmental and degenerative alike, are disabling, expensive and chronic conditions. The physical and economic burden of neurological diseases is only expected to increase as the population ages. At a time when the prevalence of neurological conditions is increasing across the spectrum, this initiative has been a catalyst for neurological research. The initiative aims to develop and apply new technologies, create maps of brain circuits, and improve the comprehension of behavior and cognition. With paradigm shifts in the practice of medicine, gravitating towards preventative and personalized medicine, this initiative allows neurologists and neuroscientists to evolve in their practice of care.

Coordination of the BRAIN Initiative at the National Institutes of Health (NIH) spans ten NIH Institutes and Centers, with regular meetings to integrate strategic planning, management and support. Multi-council working groups ensure a coordinated and focused effort across NIH and amongst other Federal agencies. At the most recent meeting of the BRAIN multi-council working group, on March 4, 2015, the proposed agenda included a discussion of BRAIN research supported by NIH, neuroethics, and presentations of BRAIN-related activities from the federal agencies involved.

The BRAIN Initiative is to neuroscience what the Human Genome Project was to genomics; this analogy has been stated often and sets a high bar, which the Initiative has continuously aimed to surpass. At its inception, it was announced with $100 million in funding. In 2014, the administration announced the growth of the BRAIN Initiative to include five participating federal agencies: the NIH, National Science Foundation (NSF), Defense Advanced Research Projects Agency (DARPA), Food and Drug Administration (FDA), and Intelligence Advanced Research Projects Activity (IARPA). Members of the National Photonics Initiative, together with companies such as GE, Google, GlaxoSmithKline and Inscopix, announced plans to leverage over $30 million in support of the BRAIN Initiative. Other organizations, including patient advocacy groups, universities (e.g. the University of Pittsburgh), and the Simons Foundation, proposed contributions of $240 million towards research efforts.

The FY2014 BRAIN investments at NIH included the first wave of BRAIN awards, in which $46 million was invested in 58 projects encompassing more than 100 investigators in 15 states and 3 countries. Data sharing and integration across projects were emphasized. The grants focused on transformative technologies and included, among other goals, classification of the myriad cell types in the brain and creation of next-generation human brain imaging technology to monitor circuit activity. In 2015, $65 million in funding was secured from the aforementioned five federal agencies. Five new funding announcements were issued, as were two new opportunities through the small business research program.

In spite of the above, there is a need for additional funding. Neuroscientists themselves insist that BRAIN be funded; Thomas Insel, director of the National Institute of Mental Health, said in 2014 that his institute might be willing to redirect funds from other neuroscience projects so as to support BRAIN. The BRAIN Initiative was also among the important agenda items discussed by members of the American Academy of Neurology (AAN) at the recent “Neurology on the Hill” event earlier this month. Inching forward towards its goal, President Obama’s FY2016 budget proposes increasing federal funding for BRAIN from about $200 million in FY2015 to more than $300 million in FY2016. On March 3rd 2015, 156 members of the AAN met and urged members of Congress to sign a letter of support for the BRAIN Initiative at NIH, authored by Rep. Chaka Fattah (D-PA). The AAN members met with staff in 226 congressional offices and with 80 members of the House and Senate. This highlights the interest in, and the need for, the continuous dedicated funding required to support the BRAIN Initiative. A strong advocate for the initiative, former Indianapolis Colts player Ben Utecht spoke about his personal experience with traumatic brain injury and how increasing awareness through education is very important to changing the standards of care.

The gradual evolution of the Initiative has been guided by analysis of the scientific and tool-development goals from preceding years, together with the incorporation of new goals towards the larger BRAIN 2025 objectives. The long-term scientific vision of the NIH BRAIN Initiative focuses on circuits and networks, calling for $4.5 billion in brain research funding over the next 12 years. Interim recommendations included ramping up support to $400 million per year by FY2018 and plateauing at $500 million per year by FY2021. Seven areas of research have been identified, all aiming to collectively map brain circuits and measure fluctuating patterns of electrical and chemical activity within those circuits, so as to deepen our understanding of cognition and behavior.

The United States is not alone in prioritizing ‘brain health’. China has launched a similar project, the Brainnetome. The European analogue, the Human Brain Project, was launched with an ambitious 1.5 billion euros of funding over ten years and aims to improve digital technologies in collaboration with neuroscientists. Often described as the Apollo program for neuroscience, BRAIN has steadily taken steps. However, is the inception of such programs enough? A broader consensus on what neuroscientists deem important, together with a tangible improvement in health care, is vital for the success of such an initiative. Recent discontent with the management of the Human Brain Project in Europe has led to calls to disband its three-member executive committee. This discontent stemmed from concerns about removing cognitive neuroscience as a priority from the initiative. Such trends highlight the need for dynamic, continuous evaluation of such a vision and the need to be more inclusive. The phased, step-ladder approach towards funding seems to have set the right trend, but increasing funding to meet the goals remains a challenge and has not been without criticism. John Horgan has discussed the militarization of brain science and questioned the role of the Pentagon in funding BRAIN. On the other extreme, Cori Bargmann, a co-chair of the advisory committee, provided an economic rationale for the project, stating, “To use numbers, the entire cost of the space program to put a man on the moon added up to about one six pack of beer for every person in America living at the time. And the entire cost of the Brain Initiative proposed here adds up, inflation corrected, to about one six pack of beer for each American over the entire 12 years of the program”. I don’t drink beer. But I feel the dilemma of rationalizing the ‘expense’ or ‘investment’. Depending on whether or not you can relate to a personal story of neurological disease, your opinion may vary, but the argument cannot be ignored. The proposed increase in funding for FY2016 presents itself as a litmus test that, if successful, will validate the trajectory of the project and provide impetus for accelerated growth.
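
Bargmann’s six-pack comparison above is easy to sanity-check with back-of-the-envelope arithmetic. The sketch below uses the advisory committee’s $4.5 billion, 12-year figure cited earlier and an approximate mid-2010s US population; both are round numbers for illustration, not official estimates.

```python
# Back-of-the-envelope check of Bargmann's comparison.
# $4.5 billion is the 12-year funding recommendation cited above;
# the US population figure is an approximate mid-2010s value.

total_cost = 4.5e9       # dollars over 12 years
us_population = 318e6    # approximate US population

per_person = total_cost / us_population
print(f"${per_person:.2f} per person over 12 years")  # ~$14, roughly a six-pack
```

At roughly $14 per person spread over 12 years, the order of magnitude of the comparison holds.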

Alzheimer’s, Parkinson’s, multiple sclerosis, stroke – each is a devastating reality, with patient advocacy organizations and highly specialized neuroscientists painstakingly looking for answers and therapies to improve, treat and someday cure these conditions. Is the larger vision of wiring diagrams and maps of activity in the brain the correct end point? With a project such as this, can there be a clear end point? The brain–machine interface might be too far-fetched and futuristic. Nonetheless, in shifting from a disease-specific goal to a broader vision of understanding the circuitry of the brain, BRAIN encourages dialogue across disciplines and helps scientists overcome one of the largest obstacles of being highly specialized with a unique skill set – ‘compartmentalization’. And yet, it cannot be overstated that a breakthrough therapeutic option would be much more of an advertisement for the initiative than a brain activity map. Areas of research that are not outlined as being of paramount importance are likely to be left behind, causing researchers in some areas to feel insecure and limited in their pursuit of science. The relatively myopic view of the Initiative is thought by many to be its biggest shortcoming.

“To keep the body in good health is a duty, otherwise we shall not be able to keep our mind strong and clear”, said the Buddha. The Roman poet Juvenal (Satire X, 10.356-64) wrote “orandum est ut sit mens sana in corpore sano,” meaning “you should pray for a healthy mind in a healthy body”. So is a healthy mind as important, or perhaps more important, for a healthy and fulfilling life? Neurological disease is feared. Movement, perception and memory are all important in ensuring we can lead healthy, productive lives. However, is the global obsession with understanding the brain justified? Neurological health is undoubtedly important, relevant and an increasing economic and physical burden. With a brain activity map, would we know the seat of the mind by 2025? Perhaps, perhaps not. It is, however, a good time to be a neuroscientist.

Written by sciencepolicyforall

March 11, 2015 at 9:00 am