Science Policy For All

Because science policy affects everyone.

Archive for the ‘Linkposts’ Category

Science Policy Around the Web – September 19, 2017


By: Nivedita Sengupta, PhD


source: pixabay

Global Warming

The world may still hit its 1.5 °C global warming mark, if modelers have anything to say about it

On 18 September, Nature Geoscience published a paper delivering what could be good news about climate change and global warming. The team of climate scientists behind the paper claims that it is still possible to limit global warming to 1.5 °C above pre-industrial levels, as targeted in the 2015 Paris climate agreement. According to their analysis, the global climate models used in a 2013 report from the Intergovernmental Panel on Climate Change (IPCC) overestimated the extent of warming that has already occurred. After adjusting for this discrepancy in their model-based calculations, the authors concluded that the amount of carbon humanity can emit from 2015 onward while keeping temperatures from rising above 1.5 °C is almost three times greater than the IPCC's previous estimate. The conclusions have significant implications for global policymakers and suggest that the carbon budget can be met with modest strengthening of the current Paris pledges up to 2030, followed by sharp cuts in carbon emissions thereafter.

Some scientists are already challenging the paper's conclusions, questioning the analysis's reliance on a period of slower warming, a so-called climate hiatus, that ran from the beginning of this millennium until 2014. They argue that natural variability in the climate system produced lower temperatures during that period, so calculations anchored to it can be artificially low because they underestimate the human contribution to warming. Moreover, the oceans and the land were probably absorbing more carbon than normal during this period, and natural processes could return some of that carbon to the atmosphere. Taking these factors into account shrinks the predicted amount of carbon that can be released while keeping atmospheric temperatures under the 1.5 °C limit. The authors of the paper counter that the climate hiatus should not significantly affect their conclusions: the multiple methodologies they used to estimate the actual warming due to greenhouse gases should keep the calculations accurate irrespective of any short-term climate variability.

Nonetheless, humanity's rapid ascent toward the global warming threshold is muddled by modelling scenarios, making 1.5 °C a very small target for policymakers and scientists to try to hit by 2030. The fine details of carbon emissions matter when scientists are looking for the precise effects of the different greenhouse gases on global warming. But even if the paper's prediction proves accurate, huge efforts to curb greenhouse-gas emissions will still be necessary to limit warming. As the author of the paper, Dr. Millar, says, "We're showing that it's still possible. But the real question is whether we can create the policy action that would actually be required to realize these scenarios."

(Jeff Tollefson, Nature News)

Debatable Technology

What’s in a face…prediction technology

A paper by genome-sequencing pioneer Craig Venter, published on 5 September, has drawn heavy criticism and stirred fears about genetic privacy. The paper, published in the Proceedings of the National Academy of Sciences (PNAS), claims to predict people's physical traits from their DNA. Dr. Venter and his colleagues sequenced the whole genomes of 1,061 people of varying ages and ethnic backgrounds at Human Longevity, Inc. (HLI). They applied artificial intelligence to analyze each participant's genetic data in combination with a high-quality 3D photograph of the participant's face. The analysis revealed single-nucleotide differences in the genetic code between participants that corresponded with facial features such as cheekbone height, as well as with height, weight, age, vocal characteristics, and skin color. Using this approach, they could correctly pick an individual out of a group of ten people randomly selected from HLI's database 74% of the time. Such technology could be tremendously consequential for any agency handling human genome data: simply removing personal identifying information, as is routinely done in practice, would not eliminate the possibility that individuals could still be identified from the data itself.

However, reviewers of the paper say that its claims are vastly overstated and that the ability to identify an individual from their genes is hugely overblown. According to the skeptics, knowing age, sex, and race alone can eliminate most of the individuals in a randomly selected group of ten people drawn from a data set as small and diverse as HLI's. Computational biologist Yaniv Erlich of Columbia University in New York City provided evidence for this by examining the age, sex, and ethnicity data from HLI's paper. According to his calculations, knowing only those three traits was sufficient to identify an individual out of a group of ten people in the HLI data set 75% of the time, without any information from the genome. He concluded that the paper does not demonstrate that individuals can be identified by their DNA, as it claims to. HLI countered that it used multiple parameters to identify someone, of which a person's face is just one.
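Erlich's back-of-the-envelope argument is easy to reproduce. The toy simulation below is only a sketch with made-up, uniform demographics (not HLI's actual data): it samples a hypothetical cohort and checks how often a person's age, sex, and ethnicity alone single them out in a random group of ten.

```python
import random

def simulate(n_people=1061, group=10, trials=20_000, seed=0):
    """Estimate how often (age, sex, ethnicity) alone uniquely
    identifies one person within a random group of `group` people."""
    rng = random.Random(seed)
    # Hypothetical demographics loosely echoing a small, diverse cohort:
    # ages 18-80, two sexes, five ethnicity categories.
    cohort = [(rng.randint(18, 80),
               rng.choice("MF"),
               rng.choice("ABCDE"))
              for _ in range(n_people)]
    hits = 0
    for _ in range(trials):
        target, *others = rng.sample(cohort, group)
        # The target is identifiable if nobody else in the group
        # shares all three traits.
        if all(o != target for o in others):
            hits += 1
    return hits / trials
```

In this uniform toy world there are 63 × 2 × 5 = 630 trait combinations, so a random group member collides with the target only about once in 630 draws and the triple is unique in a group of ten the vast majority of the time. Real cohorts have skewed, clustered trait distributions, which is why Erlich's 75% figure from the actual HLI data is lower than this idealized sketch produces.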

The review process the paper underwent is not standard for most journals. By submitting to PNAS as a member of the US National Academies of Sciences, Engineering, and Medicine, Venter was allowed to hand-select the three reviewers who would evaluate his paper. While the issues surrounding the paper are being hotly debated within the scientific community, some fear that Venter's stature will give it undue weight with policymakers, who may become overly concerned about DNA privacy and thereby influence rule- and regulation-making processes.

(Sara Reardon, Nature News)

Have an interesting science policy link?  Share it in the comments!


Written by sciencepolicyforall

September 21, 2017 at 5:45 pm

Science Policy Around the Web – September 15, 2017


By: Liu-Ya Tang, PhD


source: pixabay

Public Health

A new flavor in cancer research: sugar

Sugar is an important energy source for fueling our bodies. However, eating too much sugar does our health no good: it can cause obesity or diabetes, both of which are considered risk factors for cancer. Moreover, there is evidence that sugar may play a direct role in cancer development and progression. Here's why.

The first study concerns how sugar protects cancer cells from attack by the immune system. The immune system is the body's safeguard, clearing out "foreign" invaders and aberrant cells. When a cell becomes cancerous, it may disguise itself and escape the immune system's challenges. Dr. Carolyn Bertozzi's group at Stanford University found that cancer cells carry a denser coating of sialic acid, a type of sugar, on their surface than normal cells do. This sugary coating makes cancer cells invisible to the immune system, so they can divide freely in the body. To enable the immune system to attack cancer cells, Dr. Bertozzi proposed using drugs to strip away the sugary coating.

In addition to masking cancer cells, sugar can also directly promote their growth. Glucose, a metabolic product of sugar, is important for the growth not only of normal cells but also of cancer cells. One distinct phenotype of cancer cells is uncontrolled growth, which may require more glucose. This notion is supported by a recent study from Dr. Jung-whan Kim's group at The University of Texas at Dallas. The researchers found that high levels of a protein called glucose transporter 1 (GLUT1), which is responsible for transporting glucose, are associated with lung squamous cell carcinoma (SqCC). They did not observe similar results in lung adenocarcinoma (ADC), which indicates that different cancer cells may adopt different mechanisms to satisfy their energy needs. They further found that a GLUT1 inhibitor can suppress tumor growth in an SqCC mouse model, but not in an adenocarcinoma mouse model. Their findings will help the development of GLUT1-targeting treatment plans specific to SqCC patients.

The American Heart Association recommends a maximum daily sugar intake of 25 grams for women and 36 grams for men. The reality, however, is that in 2015 the average American consumed more than 93 grams of sugar a day. Reducing sugar consumption and eating a balanced diet would not only help lower the high rate of obesity in the US but also aid cancer prevention.

(Erin Blakemore, Washington Post and University of Texas at Dallas, ScienceDaily)

Innovative Technology

Stem cells could help Parkinson’s patients get the dopamine they need

Parkinson's disease (PD) is a chronic degenerative disorder of the central nervous system that mainly affects movement. It is progressive, and patients can develop very severe symptoms such as the inability to walk or talk. The cause of PD is the death of dopamine-producing neurons; dopamine, a neurotransmitter, is essential for motor neurons to function properly.

Medications and surgery can help alleviate the symptoms of PD, but there is no cure. Recently, a study published in Nature brought hope to doctors, PD patients, and their families. The study was led by Dr. Jun Takahashi, a stem-cell scientist at Kyoto University in Japan. Since the loss of dopamine is the root cause of PD, implanting dopamine-producing cells into the brains of PD patients would be the most direct way to treat it. Embryonic stem cells have the versatile ability to develop into different cell types, but ethical issues surround their use in research. Dr. Takahashi's group instead generated induced pluripotent stem (iPS) cells derived from both healthy people and those with PD, transformed the iPS cells into dopamine-making neurons, and implanted them into monkeys with neurodegenerative disorders. Two years later, the monkeys were still alive and their symptoms were greatly mitigated. Dr. Takahashi hopes to begin a clinical trial by the end of next year.

In theory, deriving iPS cells from a patient's own cells would allow them to avoid the immune-suppressing drugs that are usually necessary when introducing non-native tissue. But generating customized iPS cells is expensive and requires a couple of months of propagation. A practical solution, planned by Dr. Takahashi, is to establish iPS cell lines from healthy people and match them to PD patients using immune-cell biomarkers. This approach is likely feasible, as it has already been applied successfully in monkeys.

In addition to Dr. Takahashi, other scientists are conducting stem-cell research on PD. Dr. Jeanne Loring, at the Scripps Research Institute in La Jolla, California, prefers to transplant iPS-derived neurons made from a patient's own cells; she hopes to start a clinical trial in 2019. Another stem-cell expert, Dr. Lorenz Studer of the Memorial Sloan Kettering Cancer Center in New York City, is working on a trial that will use neurons derived from embryonic stem cells. Although issues remain in this field, all these efforts should ultimately lead to better treatments for PD patients. (Ewen Callaway, Nature News)


Written by sciencepolicyforall

September 15, 2017 at 4:05 pm

Science Policy Around the Web – September 12, 2017


By: Saurav Seshadri, PhD


Source: maxpixel

Infectious Diseases

Is a Zika vaccine worth the effort?

A collaboration between pharmaceutical giant Sanofi and the US Army to develop a vaccine for the Zika virus has come to an end.  About a year ago, Sanofi received $43.2 million in funding from the Biomedical Advanced Research and Development Authority (BARDA, a division of DHHS) to move a Zika vaccine candidate, generated by the Walter Reed Army Institute of Research, into Phase II development.  BARDA has now decided to ‘de-scope’ the project, leading Sanofi to abandon its efforts to develop or license the candidate.

The number of cases of Zika has declined sharply since its peak in early 2016.  While this ‘evolving epidemiology’ has hampered Zika-related clinical research and drug development, it may actually be a welcome relief for Sanofi. The French company has endured months of political pressure to agree to pricing assurances for any vaccine produced from the collaboration, with lawmakers, including Senator Bernie Sanders, arguing that it would be a ‘bad deal‘ for a private company to profit from research funded in part by American taxpayers.  In particular, the exclusivity of Sanofi’s license,  uncommon for such agreements, has been singled out as ‘monopolistic’.  Sanofi has been defending itself vigorously against this characterization, pointing out that it took on significant risk itself for a vaccine that was far from approval, and that it has already discussed reimbursing the US government for its investment through milestone and royalty payments.  Ultimately, ending the collaboration puts this PR-damaging debate to rest, while also providing Sanofi a face-saving opportunity to avoid committing to a drug with limited prospective demand and profitability (as recently transpired with the dengue fever vaccine Dengvaxia, which only reached 55 of its projected 200 million euros in sales in 2016).

In its statement, Sanofi says that it continues ‘to believe that public-private partnerships are the right model to address…public health challenges’ posed by infectious diseases.  Indeed, several pharmaceutical companies responded to the WHO’s declaration of Zika as a public health emergency in 2016; of these, Takeda and Moderna appear to still have ongoing large-scale collaborations with BARDA to develop Zika vaccines.  While the drop in Zika prevalence is clearly a good thing, it’s unclear how it will affect the economic and scientific feasibility of such collaborations in the future.  One solution is to promote vaccine development before an outbreak occurs: groups such as the Coalition for Epidemic Preparedness Innovations (CEPI) hope to facilitate this approach, but the need to allocate limited resources makes its practicality questionable.  However, the alternative is usefully illustrated by the Ebola epidemic of 2014.  Despite concerted global efforts that led to successful vaccine development by Merck, current outbreaks are small enough that the deployment of vaccines may not even be warranted.  Barring an overhaul of regulatory processes and/or financial priorities, it seems likely that when the next epidemic emerges, we’ll be playing catch-up again.

(Eric Sagonowsky, FiercePharma)


Is every human brain study a clinical trial?

Basic research into the mechanisms underlying cognition and their impairment in a range of brain disorders is the primary focus of hundreds of neuroscience laboratories.  While such studies feed into drug discovery for diseases such as autism, schizophrenia, and bipolar disorder, they do not directly test any treatments and so are not commonly considered clinical trials.  This perception became technically incorrect in 2014, following an NIH announcement broadly redefining 'clinical trial' to include any study in which 'one or more human subjects' receive an intervention and 'health-related biomedical or behavioral outcomes' are observed.  Last year, the NIH revised its data reporting policies for such trials.  These more stringent policies are now being implemented, and will affect grant applications submitted in 2018.

Several members of the scientific community have begun to voice concern about the changes.  The Association for Psychological Science (APS) and the Federation of Associations in Behavioral & Brain Sciences (FABBS) have both sent critical letters to the NIH, and a petition by neuroscience researchers pushing back against the policy has garnered over 3,400 signatures.  Opponents argue that the requirements imposed by the 'clinical trial' label are overly burdensome and would impede basic research.  These requirements include timely study registration and public disclosure of results through, as well as Good Clinical Practices training for all staff; investigators dread the bureaucracy involved in complying with these mandates.  Perhaps most concerning for scientists is the constraint that new proposals must respond to a Funding Opportunity Announcement, which carries specific stipulations about study objectives, design, oversight, and evaluation.  While these rules are intended to promote scientific rigor and transparency, their more immediate effects may be to stifle exploration and creativity and to deter basic researchers who may not know how to tailor their applications to reflect clinical values.

For its part, the NIH is steadfast that the broad redefinition is ‘intentional’ and that current standards of data reporting are ‘unacceptable’.  Policymakers argue that they are simply asking scientists to inform the public about the existence and outcome of their research.  While this sounds unimpeachable in theory, scientists are already reporting practical challenges: for example, asking potential study participants to sign a clinical trial consent form can scare them away.  While the NIH is making efforts to provide guidance to the community, it is running out of time to stamp out confusion before next January, let alone achieve enthusiastic compliance.  Neuroscientists are likely to face setbacks in funding and progress as a result.

(Sara Reardon, Nature News)


Written by sciencepolicyforall

September 12, 2017 at 5:52 pm

Science Policy Around the Web – September 8, 2017


By: Emily Petrus, PhD


source: pixabay

Science funding

Congress Returns and Funding Anxiety Continues for Scientists

The summer recess is over, which means Congress needs to get to work and pass funding bills to keep the government running past the end of the fiscal year on September 30th. The agenda is full: funding Hurricane Harvey relief, raising the debt ceiling, and allocating funds for 2018. The budget fight is bound to be full of surprises; just last night President Trump sided with Democrats on these three issues, throwing most conservative GOP members for a loop. It remains to be seen how the 2018 budget will affect research, but here's what we know so far.

The 2017 budget was considered a positive one for scientists, because the large cuts demanded by the president went unheeded by Congress. The president requested cuts to most federal agencies, with the EPA (-31%), NOAA (-22%), and FDA (-31%) the largest targets.  However, most research institutions did not see major cuts, and although the president requested a 22% reduction for the National Institutes of Health (NIH), the agency instead received a $2 billion raise.  The upcoming 2018 fight pits the president's proposed agenda against the Senate's and the House's, a three-way contest that leaves scientists in potentially hostile waters.

Proposed budgets from the House of Representatives and the Senate are still being formulated, but there are already discrepancies between the two. For example, the House proposes increasing NASA's budget by $94 million (+1.6%), while the Senate would cut it by $193 million (-3.3%). The discrepancies run even deeper within NASA's budget, with reversed support for planetary science (increased spending from the House) and earth science research (cuts from the House, maintained spending from the Senate). These cuts could affect our ability to study distant planets and moons that might be suitable for human life. For example, an unmanned mission to Jupiter's moon Europa, slated to launch in 2020 and land in 2024, could be stalled. Flyby missions from 1995-2003 found that this moon has brown sediment, a warm core, and probably a salty ocean under an icy surface, making it similar to, albeit colder than, our planet.

Back on Earth, our ability to design new ways of producing renewable, sustainable energy could also take a hit, as funding may be cut from the Department of Energy's Advanced Research Projects Agency – Energy (ARPA-E). This agency funds "high-risk, high-reward" projects and has been in operation for only 8 years, which makes it difficult to judge whether the investment is worth the so-far limited outputs. The Senate proposes increasing its funding by 1.1%, while the House would scrap the agency entirely.

Finally, the National Oceanic and Atmospheric Administration (NOAA) is on the chopping block, with the House following the president in seeking a 22% funding decrease, while the Senate would cut the budget by only 1%. Controversial projects overseen by NOAA include the Polar Follow-On program, which monitors weather in collaboration with NASA. Cutting this program could impair our ability to predict hurricanes, something not likely to sit well with voters and representatives in states hit by the current weather catastrophes.

Although there are big discrepancies among the budgets proposed by the president, the House, and the Senate, time will tell how much cooperation Republicans and Democrats can achieve by the end of the month to avoid a government shutdown. On a positive note, the NIH can hope for a boost from both the House and the Senate, as funding human health is an issue that usually enjoys bipartisan support.

(Rachael Lallensack, Nature News)

The science of education

School's Back in Session: Get your learning on!

School is back in session: teachers are teaching, students are learning, and education is supposed to be breaking down socioeconomic barriers. What can science do to help educators have the greatest impact on students? Teaching strategy, learning research, and education policy intersect in ways that can be put to work for better student outcomes.

A recent report in Science News describes new strategies developed in the lab to enhance student learning. However, researchers are finding that studies performed in a lab setting with college students do not yield the same results when applied to a bustling classroom of younger students. For example, when college students were asked to read a passage and jot down notes, their recall of the reading assignment was improved a week later. Younger grade-school students, however, were shown to need an extra cue to help connect associations and make memories "stick". This strategy teaches students how to recall information, providing extra support until they can perform the task without a second thought. Another ongoing study aims to improve executive function in students as young as middle schoolers. Researchers designed a video game that requires players to shift strategies as rules change mid-game, which thus far has positively affected the students' performance on cognitive tests.

Being able to adapt to new situations is a cornerstone of learning, and neuroscience has long been searching for the magic that makes this task sometimes easy and sometimes challenging. The methods for studying this process are becoming more sophisticated: researchers can now watch single synapses come and go, and in some cases receptors on those synapses popping in and out. But understanding brain-wide learning requires zooming out to neural network activity. It seems intuitive that to learn something new, connections must be formed between brain areas; these associations "stick" that memory or fact somewhere in the brain. Indeed, people who are learning something new display greater "brain flexibility": the ability not only to make new connections but also to let others fall apart. Children with low math performance actually showed higher connectivity in brain scans taken while they did math problems. Forgetting unimportant information to make room for new ideas, it seems, is as important as simply making more new connections.  In addition, a researcher who scanned himself three times per week for a year found that his brain displayed greater flexibility on days when he was in a good mood. The balance between making new connections and letting others go may be the key to better learning.

As science puts more pieces together on how learning best occurs, some ways to enhance student learning are coming into focus. People who can make new connections and lose old ones in a dynamic fashion tend to be better learners. Being in a good mood, which implies a stable home and school environment with food and housing security, can lead to better brain flexibility. Teachers trying new strategies to enhance brain flexibility could help students learn how to absorb and use new information. All of this can inform policy on what makes for a successful student as we proceed through the academic year.

(Susan Gaidos, Science News; Laura Sanders, Science News )



Written by sciencepolicyforall

September 8, 2017 at 3:54 pm

Science Policy Around the Web – September 5, 2017


By: Sarah L. Hawes, PhD


Image: By Simon Caulton [CC BY-SA 3.0], via Wikimedia Commons

Gene therapy

FDA approves breakthrough gene therapy for childhood leukemia

Last week, the FDA approved a gene therapy for the first time, for use against resistant or relapsed acute lymphoblastic leukemia (ALL) originating in B-cells. The treatment, called Kymriah, was made by Novartis Pharmaceuticals in collaboration with the University of Pennsylvania. It is a form of CAR T-cell therapy, in which a patient's own immune cells are extracted and genetically modified to better identify and attack cancerous B-cells before being infused back into the patient.

Because the cellular feature that modified T-cells use to seek and destroy cancerous cells is also present on healthy B-cells, treatment carries risks including hypoxia, hypotension, and suppressed immune function. A life-threatening immunological reaction called cytokine release syndrome appears more commonly in adults, which may explain the patient age restriction (25 and under) on the FDA's approval.

For patients with otherwise intractable cancer, Kymriah may be a literal life-saver. In a recent clinical trial of 63 patients with drug-resistant or recurring ALL, Kymriah led to remission in 83% of cases three months post-treatment.

While announcing approval of Kymriah, FDA Commissioner Scott Gottlieb asserted that the FDA is “committed to helping expedite the development and review of groundbreaking treatments that have the potential to be life-saving.” This has been substantiated for Kymriah in particular using both Priority Review and Breakthrough Therapy mechanisms. These speed FDA approval, thereby shortening pharmaceutical companies’ delay to profit, and have helped to drive activity in the promising CAR T-cell research arena in recent years.

Despite the success of these mechanisms in bringing a breakthrough cancer therapy to market faster, Novartis insists that the $475,000 price tag for the one-time treatment is conservative, given the high cost of drug development and the small number of candidate patients. This sobering figure is made worse by the fact that in some cases the cancer does recur several months after Kymriah treatment. Novartis is currently working with Medicare on an outcome-based pricing plan, under which the company would be paid only if patients respond to the therapy.

(FDA News Release; Jessica Glenza, The Guardian)


Image: By NOAA, via Wikimedia Commons

Emergency preparedness

Hurricane Harvey illustrates the importance of disaster preparedness for research institutions (again) 

The US National Academies of Sciences, Engineering, and Medicine released a report just last month highlighting weaknesses in disaster preparedness at biomedical research facilities and issuing recommendations to enhance the resilience and continuity of research in the face of adversities including natural disasters, fires, and cyber threats. The costs of unpreparedness are high. In 2012, Hurricane Sandy is estimated to have cost NYU more than $20 million in research equipment and killed thousands of mice housed in New York laboratories, including many transgenic strains that took decades to develop and existed nowhere else on earth.

Hurricane Harvey’s toll on the scientific community is similarly, incalculably high. University of Houston’s infant rhesus monkeys ran out of formula and had to be weaned early. Loss of refrigeration capability jeopardized precious tissue and reagents, not to mention rendering some agents hazardously unstable. The University of Texas at Austin Marine Science Institute lost the roof off a microbial-ecology lab, forcing trainees to abandon their work and move to alternative institutions.

Some fared better thanks to advance planning. For instance, Baylor College of Medicine was protected from Harvey by a wall installed around its campus after Tropical Storm Allison cost it 60,000 breast-cancer samples and thousands of laboratory animals in 2001. To support less fortunate Texas researchers, the broader scientific community has used the hashtag #SciHelpTX on Twitter to advertise sharable resources such as open lab space, computers, and animal colony husbandry.

Hopefully Harvey has driven home the message that preparedness is a necessary investment going forward. Enacting preparations remains up to individual institutions’ policies; a list of recommendations by the National Academies can be found here.

(Emma Marris, Nature News)


Image: Wikimedia Commons

Gene therapy

Correction of a pathogenic mutation in human embryos? Maybe! The exploration continues

An August 2017 publication in Nature reports success in using CRISPR-Cas9 to delete, from human zygotes, targeted sections of the gene responsible for familial hypertrophic cardiomyopathy. The study, led by Dr. Shoukhrat Mitalipov, was a collaboration among the Salk Institute, Oregon Health and Science University (OHSU), and Korea's Institute for Basic Science. By introducing a short-lived version of the CRISPR enzyme and a repair template into a healthy egg prior to fertilization, simultaneously with sperm bearing the targeted genetic defect, the authors believe they ensured that the gene excisions would take place early and be carried through all subsequent cell divisions. They believe this technique avoids unintended edits and mosaicism, in which both diseased and repaired cells exist side by side in the organism. The team found both the deleted genes and the replacement template absent, and believe the genome repair instead used the healthy genes from the egg, which they attribute to certain evolutionary resiliencies associated with early-stage eggs.

Other researchers responded by emphasizing the remaining uncertainties and the importance of maintaining a focus on research, as opposed to pushing too quickly toward application of germline-editing techniques with the potential to produce heritable genetic changes. Complex ethical questions remain around germline editing even if the techniques were perfected for any specific section of the genome. This research could not receive government funding because it involved the creation and destruction of human embryos.

Within three weeks, a preprint article questioned the likelihood of the egg serving as a template for repair of the genome's deleted genes. The authors state that after fertilization the egg and sperm DNA are not in close enough contact for such borrowing, and propose two alternative scenarios: either the egg failed to incorporate the sperm DNA at all, as is sometimes seen with in vitro fertilization, or it failed to replace the missing segment. Either would have resulted in an absence of the targeted paternal or template genes. Mitalipov has promised to respond point by point.

(Kelly Servick, Science Magazine)

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

September 5, 2017 at 5:29 pm

Science Policy Around the Web – September 1, 2017


By: Kseniya Golovnina, PhD



Science funding

Priorities emphasize both military and civil uses in R&D spending

On August 17, 2017, the White House established Research and Development (R&D) budget priorities for the Pentagon’s fiscal year (FY) 2019. According to an Office of Management and Budget (OMB) memo outlining these priorities, the Pentagon should emphasize investments in technology with both military and civilian uses in its next budget.

The Department of Homeland Security (DHS) is evidently under a similar push. DHS guidelines state that special attention should be paid to R&D in the field of safety, and that integration of new technologies can significantly contribute to U.S. economic and technological leadership. Moreover, the need to develop new technologies to protect critical infrastructure and to increase border security is a key Trump administration priority.

An important emphasis of the OMB memo is that Defense Department R&D investments should continue to be spent in a way that supports the military of the future. The document urges the Pentagon to develop and use new quantitative metrics to evaluate the effectiveness of its R&D. Monitoring these statistics is intended to eliminate redundancies with effective private-sector R&D programs: if research can be done more effectively in the private sector, thereby attracting private investment, federal involvement should no longer be considered needed or appropriate, providing a mechanism for transforming or closing redundant programs. These changes would allow priority in R&D funding to go to basic and early-stage applied research that carries too much risk and uncertainty for the private sector.

Two agencies stand to benefit from the new budget priorities: the Pentagon’s Defense Innovation Unit-Experimental (DIUx) group, which has the lead role in interacting with the commercial sector, specifically in Silicon Valley, Boston and Austin, and the Strategic Capabilities Office (SCO), which is charged with taking existing technologies and developing new capabilities for their use. With their newly received authorities, these agencies will be able to get private companies on contract more quickly.

(Aaron Mehta, Defense News)

Science education

State-by-state interest in STEM indicators revealed by draft education plans

STEM (science, technology, engineering and mathematics) is a curriculum that emphasizes educating students in these four disciplines through an interdisciplinary and applied approach, and it is becoming an important part of K-12 schooling in the United States. According to the U.S. Department of Education (DOEd), 14 of the 17 states’ draft education plans submitted so far (82%) include STEM-related school performance indicators. The proposed programs were developed to meet the requirements of the Every Student Succeeds Act (ESSA) of 2015. That law, signed by former President Obama, is the successor to the Bush-era No Child Left Behind program, which encouraged the establishment of science-oriented performance metrics to augment school achievement metrics. ESSA also directed new federal funding streams toward STEM education and established a new professional development program for teachers, the STEM Master Teacher Corps.

Including STEM indicators in school program evaluation gives states the flexibility to incorporate science metrics into their accountability systems, along with the option to launch new STEM initiatives through ESSA funding programs. Currently, however, only a small minority of school districts are leveraging ESSA dollars to fund innovative STEM education efforts, mostly because of the small amounts of money available. Despite increased awareness of the need, the US primary and secondary education system has relatively few established and successful STEM educational initiatives.

The future of new STEM initiatives is unclear because of ongoing congressional negotiations over the fiscal year 2018 budget. On one side, the Trump administration has proposed eliminating three major DOEd grants it deems duplicative, poorly structured, or of little demonstrated impact. On the other, the House Appropriations Committee is recommending both cuts and increases to relevant education programs.

In early September, once the remaining 34 states have submitted their draft education plans, it will become clearer how much interest each state has in STEM education and how much funding each will receive under the final 2018 budget.

(Alexis Wolfe, Science Policy News from AIP)


Written by sciencepolicyforall

September 1, 2017 at 1:52 pm

Science Policy Around the Web – August 29, 2017


By: Allison Dennis, BS


Source: pixabay

Science funding

1 Million fewer dollars available for studying the health impacts of coal mining

On August 18, 2017, the U.S. Department of the Interior instructed the National Academies of Sciences, Engineering, and Medicine to stop its ongoing research into the potential health effects of surface mining. The US$1 million study had been established on August 3, 2016, “at the request of the State of West Virginia,” by the Office of Surface Mining Reclamation and Enforcement (OSMRE). OSMRE, an office within the Department of the Interior, selected the National Academies to systematically review current coal extraction methods, the framework regulating those methods, and potential health concerns. Critics of the study point to the findings of a similar review undertaken by the National Institute of Environmental Health Sciences, made public on July 21, 2017, which determined that the current body of literature was insufficient to reach any conclusions regarding the safety of mountaintop removal for nearby communities.

Mountaintop removal, a form of surface mining, uses explosives to efficiently expose coal deposits that would otherwise require a large workforce to extract over time. The excess soil and rock blasted from the mountain is placed in adjacent valleys, altering stream ecosystems, with consequences including increases in selenium concentrations and declines in macroinvertebrate populations.

The people of rural Appalachia experience significantly higher rates of cancer than people in the rest of the U.S., and environmental exposures are only one potential risk factor. Widespread tobacco use, obesity, and lack of accessible medical care are all believed to contribute to the cancer epidemic in Appalachia, creating a tangled web of risk.

It is unclear how the money from this study will be repurposed. The Obama administration cancelled a study of surface mining to redirect funds toward examining the little-known effects of hydraulic fracturing.

(Lisa Friedman and Brad Plumer, The New York Times)

Cancer treatments

For breast cancer patients the cost of peace of mind may be both breasts

Between 2002 and 2012, the rate of women with a breast cancer diagnosis opting for a double mastectomy increased from 3% to 12%; today, 1 in 8 women with invasive cancer in a single breast elects to remove both breasts. In a majority of these cases, a lumpectomy may be medically sufficient, and for many women the choice stems from a personal pursuit of peace of mind rather than the advice of their doctors. A mastectomy extends recovery time from a few days, in the case of a lumpectomy, to 4 to 6 weeks, yet for many women a lumpectomy followed by 5 to 7 weeks of radiation therapy would offer the same long-term survivorship.

The reasons for this increase are unknown. While the procedure has not been demonstrated to increase survivorship, it is relatively risk-free: breasts are not vital organs, and improvements in reconstruction methods have provided women with natural-looking cosmetic replacements. For many women, the price of feeling that their struggle with breast cancer is behind them is the removal of both breasts. Double mastectomies, along with the reconstruction surgeries they normally require, are usually covered by insurance.

Breast cancer is the most commonly diagnosed cancer type in the U.S., and mortality from the disease decreased by 1.9% per year from 2003 to 2012. Yet for many women facing breast cancer, a double mastectomy may feel like the only empowering choice, and one their doctors are willing to let them make.

(Catherine Caruso, STAT News)


Written by sciencepolicyforall

August 30, 2017 at 8:57 pm