Science Policy For All

Because science policy affects everyone.

Science Policy Around the Web – November 17, 2017


By: Janani Prabhakar, PhD 


source: pixabay

Public Health

The ‘Horrifying’ Consequence of Lead Poisoning

Changes in water treatment practices in Flint, Michigan in 2014 introduced elevated levels of lead into the water supply, eventually culminating in a state of emergency in January 2016. The contamination affected over 10,000 residents, forcing them to avoid the city’s water supply until 2020. Because state officials may have been aware of lead contamination in the water supply for months before it became public, those officials are now facing criminal charges. This negligence is particularly troubling given recent evidence of persisting effects of lead contamination on health outcomes in Flint residents. In a working paper, Daniel Grossman of West Virginia University and David Slusky of the University of Kansas compared fertility rates in Flint before and after the change in water treatment practices that led to the crisis, and compared post-change fertility rates in Flint to those of unaffected towns in Michigan. They found that the fertility rate declined by 12 percent and the fetal death rate increased by 58 percent. Similar shifts have been observed in other cities after comparable incidents of lead contamination in the water supply. Furthermore, given that the number of children with elevated blood lead levels doubled after the change in treatment practices, the long-term effects of this contamination on cognitive, behavioral, and social outcomes are only beginning to be examined and understood. The circumstances in Flint are an example of how misplaced high-level policy decisions can negatively impact local communities, particularly low-income black neighborhoods. Black neighborhoods are disproportionately affected by lead contamination, yet the lack of sufficient attention, along with the false suggestion propagated by industry leaders and policy makers that affected individuals were to blame, has deterred progress in addressing critical issues in at-risk and underserved communities.
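The comparison Grossman and Slusky describe is, in form, a difference-in-differences design: the change in Flint is measured against the change in unaffected towns over the same period. A minimal sketch with entirely hypothetical rates (the working paper reports only percentage changes; the numbers below are invented for illustration):

```python
# Difference-in-differences sketch of the Flint fertility comparison.
# All rates below are hypothetical illustrations, not figures from the paper.

rates = {
    # births per 1,000 women, before/after the 2014 water switch
    ("flint", "before"): 62.0,
    ("flint", "after"): 54.6,    # roughly 12% lower
    ("control", "before"): 60.0,  # unaffected Michigan towns
    ("control", "after"): 60.5,
}

def did_estimate(r):
    """Change in Flint minus change in the control towns."""
    flint_change = r[("flint", "after")] - r[("flint", "before")]
    control_change = r[("control", "after")] - r[("control", "before")]
    return flint_change - control_change

effect = did_estimate(rates)
print(f"Estimated effect on fertility rate: {effect:+.1f} births per 1,000 women")
```

Subtracting the control towns’ change nets out statewide trends, so the remaining difference is attributed to the water switch itself.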

(Olga Khazan, The Atlantic)

Climate Change

Why China Wants to Lead on Climate, but Clings to Coal (for Now)

With 1.4 billion people, China is one of the world’s largest coal producers and carbon polluters. Yet it aims to spearhead the international agreement to address climate change. Despite this contradiction, China is already on track to meet its commitments under the Paris climate accord. Its move to reduce coal dependence is a necessity, driven by internal pressure to curb air pollution. But, according to NRDC climate and energy policy director Alvin Lin, given China’s size and population, phasing out coal will be a long process with many ups and downs. For instance, while China has shown progress in meeting its commitments, a recent report projects higher emissions this year, which may reflect an uptick in economic growth and a reduction in the rainfall needed to power hydroelectric plants. While Lin portrays this uptick as an anomaly, competing interests within the Chinese government make the future unclear. In efforts to increase its presence abroad, China has built coal plants in other countries; at the same time, it is the world’s largest producer of electric cars. President Xi Jinping has derided the United States for being isolationist and reneging on the Paris climate accord, but how his government plans to hold up its end of the deal has not been revealed. An important caveat is that even if every country achieves its individual Paris pledge, the planet will still warm by 3 degrees Celsius or more. Given that this increase is large enough to have catastrophic effects on the climate, adherence to the Paris pledges serves only as a baseline for what is necessary to combat global warming.

(Somini Sengupta, The New York Times)

Have an interesting science policy link?  Share it in the comments!


Written by sciencepolicyforall

November 17, 2017 at 4:51 pm

Science Policy Around the Web – November 14, 2017


By: Saurav Seshadri, PhD


source: pixabay

Alzheimer’s Disease

Bill Gates sets his sights on neurodegeneration

Microsoft founder Bill Gates has announced a new funding initiative for research into Alzheimer’s Disease, starting with a personal donation of $50 million to the Dementia Discovery Fund (DDF).  The DDF is a UK-based, public-private collaboration, launched in 2015 and designed to encourage innovative research into treatments for dementia, of which Alzheimer’s Disease is a leading cause.  Initial investment in the DDF, which came from the pharmaceutical industry and government entities, was $100 million, meaning Gates’ contribution will be significant.  Gates says his family history makes him particularly interested in finding a cure for Alzheimer’s Disease.  The DDF has already taken steps in this direction: its first investment was in the biopharma company Alector, which is moving forward with immune system-related research to combat Alzheimer’s Disease.

Gates is already famous for his philanthropy through the Bill and Melinda Gates Foundation, which funds efforts to fight poverty and disease throughout the world.  However, the Foundation has traditionally focused on infectious diseases, such as HIV and malaria, making Alzheimer’s Disease Gates’ first foray into neuroscience.  In this regard, he has some catching up to do to match philanthropic contributions and business pursuits by other tech billionaires.  These include his Microsoft co-founder Paul Allen, who started the Allen Institute for Brain Science with $100 million in 2003.  The Allen Institute provides a range of tools for basic researchers using mouse models, generating comprehensive maps of brain anatomy, connectivity and gene expression.  More recently, Tesla founder Elon Musk started Neuralink, a venture which aims to enhance cognitive ability using brain-machine interfaces.  Kernel, founded by tech entrepreneur Bryan Johnson, has a similar goal.  Finally, while the Chan Zuckerberg Initiative (started by Facebook CEO Mark Zuckerberg in 2015) doesn’t explicitly focus on neuroscience, its science program is led by acclaimed neuroscientist Cori Bargmann.

As pointed out by former National Institutes of Mental Health Director Tom Insel, this infusion of money, as well as the fast-moving, results-oriented tech mindset behind it, has the potential to transform neuroscience and deliver better outcomes for patients.  As government funding for science appears increasingly uncertain, such interest and support from private investors is encouraging.  Hopefully the results will justify their optimism.

(Sanjay Gupta, CNN)



Elusive particles create a black hole for funding

The Large Hadron Collider (LHC) enabled a scientific breakthrough in 2012 when it was used to produce evidence for the Higgs boson, a physical particle that endows matter with mass.  In the wake of the worldwide excitement generated by that discovery, physicists finalized plans for a complementary research facility, the International Linear Collider (ILC), to be built in Japan.  While the LHC is circular and collides protons, the ILC would collide positrons and electrons, at lower energy but with more precise results.  Unfortunately, anticipated funding for the $10 billion project from the Japanese government has failed to materialize.  Following recent recommendations by Japanese physicists, the group overseeing the ILC has now agreed on a less ambitious proposal, for a lower energy machine with a shorter tunnel.  Though physicists remain optimistic that the ILC will still provide useful data, it will no longer be able to produce high-energy quarks (one of its planned uses), and will instead focus on previously detected particles and forces.  The ILC’s future is currently in limbo until the Japanese government makes a concrete financial commitment, and it is unlikely to be completed before 2030.

After the Higgs boson, the LHC struggled to find proof of the existence of other new particles.  One such high-profile disappointment was the search for dark matter.  When dark matter was hypothesized to be the source of unexplained gamma radiation observed with NASA’s Fermi Space Telescope, the search for a dark matter particle became a top priority for the LHC’s second run.  Such evidence would also have supported supersymmetry, a key theory in particle physics.  However, these efforts, as well as multiple others using different detectors, have thus far failed to find any signs of dark matter.  These unsuccessful experiments certainly contributed to scaling back the ILC, and illustrate larger problems with setting realistic expectations and/or valuing negative results among scientists, government officials, and the public.  As a result, in order to advance our understanding of the basic building blocks of our universe, particle physicists will now have to do more with less.

(Edwin Cartlidge, Nature News)

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

November 14, 2017 at 5:40 pm

Science Policy Around the Web – November 10, 2017


By: Vanessa L.Z. Gordon-Dseagu, PhD


Source: Pexels


House Passes CHIP Funding, Sends Bill to Uncertain Future in Senate

The Children’s Health Insurance Program (CHIP) extends healthcare coverage to children living in low-income households. Funded jointly by the federal government and the states, the program currently covers around 8.9 million children and 370,000 pregnant women. Since its start in the 1990s, the program has helped reduce the proportion of uninsured children from 13.9% in 1997 to 5.3% at the start of 2017. Federal funding for CHIP expired on October 1, 2017, although most states have enough money to continue the program into 2018.

On Friday, November 3rd, the House passed (by a 242-174 vote) the Healthy Kids Act, extending funding for CHIP for the next five years. Although passed, the act is not without contention: the majority of Democrats voted against the bill over concerns about how the program would be funded. The act aims to fund CHIP by increasing premiums for older, wealthier individuals on Medicare, narrowing the grace period for those under the Affordable Care Act (ACA) to pay their premiums, and taking money from the ACA’s Prevention and Public Health Fund – the first fund of its kind mandated to focus on prevention and improvements in public health.

As reported in a number of sources, the issue of funding is likely to delay the bill once it reaches the Senate, with Democrats and Republicans accusing each other of delay tactics. Rep. Frank Pallone (D-N.J.) stated that the bill “will go to the Senate, and it will sit there,” while Rep. Greg Walden (R-Ore.) accused the Democrats of “Delay, delay, delay.” A number of commentators suggest that funding for CHIP will be rolled into an end-of-year budget bill seeking to prevent a government shutdown.

(Laura Joszt, The American Journal of Managed Care)


Maine Voters Approve Medicaid Expansion, a Rebuke of Gov. LePage

The ballot question was a relatively simple one:

“Do you want Maine to expand Medicaid to provide healthcare coverage for qualified adults under age 65 with incomes at or below 138% of the federal poverty level, which in 2017 means $16,643 for a single person and $22,412 for a family of two?”

And on November 7th, in the first successful vote of its kind, residents of Maine demonstrated overwhelming support for the policy, with 59% voting yes. The expansion, carried out under the umbrella of the Affordable Care Act, would extend coverage to an estimated 70,000 state residents; the program is specifically for those who are currently uninsured and earning ≤138% of the federal poverty line. Thirty-one states already have similar programs in place.
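The dollar figures in the ballot question follow from the 2017 federal poverty guidelines ($12,060 for one person, $16,240 for a household of two in the contiguous states). A quick arithmetic check, assuming the cutoffs are rounded up to the nearest dollar, reproduces the ballot’s numbers:

```python
import math

# 2017 HHS poverty guidelines for the 48 contiguous states
fpl_2017 = {1: 12_060, 2: 16_240}  # household size -> annual income, USD

def medicaid_cutoff(household_size, pct=138):
    """Income cutoff at the given percentage of the poverty level,
    rounded up to the nearest dollar (an assumption matching the ballot)."""
    return math.ceil(fpl_2017[household_size] * pct / 100)

print(medicaid_cutoff(1))  # 16643, the $16,643 single-person figure
print(medicaid_cutoff(2))  # 22412, the $22,412 family-of-two figure
```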

A key stumbling block to implementing the program is strong opposition from Maine’s Governor Paul LePage. The governor, who had previously vetoed health insurance expansion bills from the state legislature on five occasions, responded to the vote in a November 8th statement calling it “fiscally irresponsible” and “ruinous to Maine’s budget.” In defense of the expansion, Anne Woloson of Maine Equal Justice Partners commented that, with 90% of the funding coming from the federal government, “This will improve the Maine economy. It’s going to create good-paying jobs.”

LePage also stated that his administration would not expand Medicaid unless the state found a way to pay for it at the levels calculated by the Department of Health and Human Services. However, the governor’s ability to veto the implementation of such legislation is limited, and the expansion is likely to become law in 2018.

Following the ballot’s success, and with increasing support for Medicaid expansion nationally, advocates in Utah and Idaho are planning similar votes in their own states.

(Abby Goodnough, The New York Times)


Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

November 10, 2017 at 6:55 pm

Pharmaceutical Detailing: in the US, the Details are Tied to the Prescriber’s Name


By: Allison Dennis B.S.


Source: pixabay

While U.S. privacy laws protect patients from direct pharmaceutical marketing and shield their personal information from data mining, physicians are routinely identified based on their prescribing habits and targeted by pharmaceutical companies through personalized marketing campaigns. By their very nature, these campaigns aim to influence the behavior of prescribers. In other countries, including those covered by the European Union’s data protection rules, the personal identification of prescribers through medical data is strictly forbidden. However, in the U.S. these personalized campaigns are made possible by a robust pipeline of data sharing.

The pipeline begins with pharmacies, which routinely sell data derived from the vast volume of prescriptions they handle. While the prescribers’ names are usually redacted, IMS Health, a key health information organization in the pipeline, can easily use the American Medical Association (AMA)-licensed Physician Masterfile to reassociate physician ID numbers with the redacted names. The physician ID numbers are issued by the U.S. Drug Enforcement Administration (DEA) and sold to the AMA through a subscription service. IMS Health uses the prescription data to develop analytic tools for sale to pharmaceutical companies eager to gain a marketing edge with individual prescribers. The tools consolidate the activity of nurse practitioners, dentists, chiropractors, and any other professionals who can legally write a prescription. Marketers can use these tools to determine how much each named physician is prescribing, how that compares to other named physicians, what their specialty is, and so on.
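Mechanically, the reassociation step described above is just a join on the shared physician ID. A deliberately toy sketch, with fabricated records and invented field names standing in for the real pharmacy feed and Masterfile:

```python
# Toy illustration of reassociating redacted prescription records with
# prescriber names via an ID-keyed roster. All records are fabricated.

# Redacted prescription data as sold by a pharmacy: name removed, ID kept.
prescriptions = [
    {"prescriber_id": "AB1234563", "drug": "oxycodone", "count": 48},
    {"prescriber_id": "CD7654321", "drug": "atorvastatin", "count": 112},
]

# Roster keyed by the same ID (standing in for the licensed Masterfile).
roster = {
    "AB1234563": {"name": "Dr. A. Example", "specialty": "pain medicine"},
    "CD7654321": {"name": "Dr. B. Sample", "specialty": "cardiology"},
}

# The "reassociation" is a simple lookup-join on prescriber_id.
detailed = [
    {**rx, **roster[rx["prescriber_id"]]}
    for rx in prescriptions
    if rx["prescriber_id"] in roster
]

for row in detailed:
    print(row["name"], row["drug"], row["count"])
```

The point of the sketch is how little work the join requires once both datasets carry the same identifier: redaction of the name alone provides no real protection.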

The data contained in the AMA’s Physician Masterfile are useful for informing research and conducting surveys of practicing physicians, yet identifying physicians by name is usually unnecessary for public health research and enables prescriber manipulation.  The prescriber reports compiled by IMS Health let pharmaceutical companies take a data-driven approach to direct-to-physician advertising, a practice known as detailing. During a 17-month period between 2013 and 2015, pharmaceutical companies reported spending $3.5 billion on payments to physicians covering promotional speaking, consulting, meals, travel, and royalties. While many of these expenditures may be tied to legitimate collaborations between pharmaceutical companies and medical professionals, the U.S. Department of Health and Human Services warns that free samples, sham consulting agreements, subsidized trips, and industry-sponsored continuing education opportunities are all tools used by vendors to buy medically irrelevant loyalty. Indeed, physicians themselves seem conflicted over the significance of these relationships. When residents were asked whether contact with pharmaceutical representatives influenced their prescribing practices, 61% believed they were unaffected. However, the same residents felt that only 16% of their peers were similarly immune.

Studies examining the role of detailing have found it associated with higher prescribing frequency, higher costs, and lower prescribing quality, with no contrasting favorable associations. Recent concerns over conflicts of interest arising from physicians’ increased exposure to detailers led several academic medical centers to restrict sales visits and gift giving and to implement enforcement mechanisms. Compared to hospitals with no detailing limitations, hospitals with limitations underwent an 8.7% relative decrease in the market share of detailed drugs and a 5.6% relative increase in the market share of non-detailed drugs. Overuse of brand-name drugs, which are most commonly associated with detailing, cost the US approximately $73 billion between 2010 and 2012, one-third of which was shouldered by patients. Advocates of the practice lament the lack of formal academic opportunities for physicians to learn about new drugs, believing the educational materials provided by pharmaceutical representatives fulfill a need.

The most tragic example of the potential harms of detailing targeted at individual prescribers comes from the early days of the prescription opioid crisis. Purdue Pharma, the maker of OxyContin, used prescriber databases to identify the most frequent and least discriminating prescribers of opioids. Sales representatives, enticed by a bonus system that tracked their success according to upswings captured in the prescriber database, showered their target prescribers with gifts while systematically underrepresenting the risk of addiction and abuse from OxyContin. Recruitment into Purdue’s national speaker bureau, and the paid opportunities that followed, was further used to entice lukewarm and influential prescribers.

The last decade has seen several attempts to address the influence of detailing at the institutional, professional, and executive levels. Individual hospitals have begun limiting physicians’ access to vendors. The American Medical Student Association began issuing a conflict-of-interest scorecard, allowing all U.S. medical schools to track and assess their own detailing-related policies, including those limiting gifts from industry, industry-sponsored promotional speaking relationships, and the permitted access of pharmaceutical sales representatives, as well as overall enforcement and sanctions under these policies. In 2016, 174 institutions participated. The AMA, which licenses the list of physician names used by health information organizations, has offered physicians the chance to block pharmaceutical representatives and their immediate supervisors from accessing their prescribing data. However, the Physician Data Restriction Program does not limit the ability of other employees at a pharmaceutical company to access the prescribing data of doctors who have opted out. Physicians must renew their opt-out request every three years and are automatically added to the Masterfile upon entering medical school. Five years after the program’s introduction in 2006, just 4% of practicing physicians listed on the file had opted out.

In 2007, the state of Vermont outlawed the practice of selling prescription data for pharmaceutical marketing without prescriber consent. The law was quickly challenged by IMS Health, the Pharmaceutical Research and Manufacturers of America, and other data aggregators and eventually struck down by the U.S. Supreme Court. Vermont legislators held that detailing compromises clinical decision making and professionalism and increases health care costs and argued that the law was needed to protect vulnerable and unaware physicians. However, the Court held that speech in the aid of pharmaceutical marketing is protected under the First Amendment and could not be discriminately limited by Vermont law.

Congress made the first federal attempt to address the issue by enacting the Physician Payment Sunshine Act in 2010, which required companies participating in the Medicare, Medicaid, and State Children’s Health Insurance Program markets to track and report their financial relationships with physicians and teaching hospitals. The transparency gained from these disclosures has allowed many researchers to systematically evaluate connections between conflicts of interest and prescribing behavior.

As policy makers and private watchdogs scramble to address the issues of detailing, the availability of physician names and prescription habits continues to facilitate novel tactics. Limits on face time have pushed detailers to tap into the time physicians spend online. When prescribers’ names are known, following and connecting with them through social media accounts is straightforward. Companies like Peerin have emerged that analyze prescriber Twitter conversations to learn whose conversations are most likely to be influential and which prescribers are connected. LinkedIn, Facebook, and Twitter all offer the ability to target a list of people by name or e-mail address for advertising. While all online drug ads are regulated by the U.S. Food and Drug Administration, pharmaceutical companies are experimenting with unbranded awareness campaigns to circumvent direct-to-consumer regulations.

While personalized prescriber marketing campaigns may be turning a new corner in the internet age, a simple opportunity exists at the federal level to de-personalize the practice of physician detailing. It is unclear how much the DEA stands to gain from selling physician ID subscriptions; in the context of the downstream costs of overusing name-brand drugs, however, forgoing that revenue may be an appropriate loss. The U.S. Government’s central role in reassociating prescribers with their prescriptions could be directly addressed through revised policy, preempting downstream prescriber manipulation.

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

November 9, 2017 at 10:41 pm

Science Policy Around the Web – November 7, 2017


By: Rachel F Smallwood Shoukry, PhD 


source: pixabay

Science and Society

What’s Your (Epistemic) Relationship To Science?

A recently published paper presented the results of a meta-analysis of studies published in the journal Public Understanding of Science. The goal of the paper was to determine whether these studies correctly distinguished scientific knowledge from scientific understanding. While the exact definitions are a topic of debate in the field of epistemology, understanding of a concept is generally considered to require a deeper level of comprehension than knowledge of a concept; this level of comprehension should allow one to draw inferences and make judgments. For the purposes of their paper, the authors set a specific definition of understanding, which included the ability to grasp “how a constellation of facts relevant to that subject are related to one another.” They found 67 papers that used the term “understanding” or similar, and compared those papers’ definitions and descriptions of understanding with their specified one. Only one paper defined understanding in line with their definition, and only six used the term consistently without explicitly defining it. Forty-seven papers were unclear about their definition of understanding, two used alternate definitions, and 11 conflated knowledge and understanding.

In a follow-up analysis, the authors examined papers that evaluated an “epistemic state” to determine whether the evaluation correctly targeted the specified definition of understanding. Of the 13 papers they found, only one correctly measured understanding, while two others attempted to. Those two attempts, along with the other ten papers, more closely targeted an epistemic state resembling knowledge than one resembling understanding.

The authors expressed concern over these findings and urged scientists and educators to think carefully about the difference between knowledge and understanding and to be more deliberate in targeting understanding. The United States has consistently ranked behind many countries in science and mathematics. Beyond the international ranking of the general population’s scientific understanding, there are other important reasons for the public to pursue understanding over knowledge. Voters elect officials who determine the policy, priorities, and funding of scientific concerns and research. Additionally, social media plays an increasingly prominent role in the public’s consumption of science news and information, and a public with a better understanding of science can only help in shaping society’s actions regarding scientific discoveries and recommendations for health, environmental, technological, and social advancement.

(Tania Lombrozo, NPR)


Public Health

US government approves ‘killer’ mosquitoes to fight disease

The US Environmental Protection Agency (EPA) has approved the start-up company MosquitoMate to release lab-bred, non-biting mosquitoes infected with the bacterium Wolbachia pipientis in order to infect wild mosquito populations. The goal is to decrease the Asian tiger mosquito population by releasing infected males, which then produce nonviable progeny with wild females. The Asian tiger mosquito can transmit diseases such as dengue, yellow fever, West Nile virus, and Zika. The EPA has approved the release of these mosquitoes in 20 states plus Washington, DC, stating that the approved states have climates similar to those of the regions where MosquitoMate performed its test trials.

The biggest obstacle MosquitoMate currently faces is production time: males must be separated from the females, and the company currently separates them both by hand and mechanically. Improvement is possible, however, as demonstrated by a research team from Michigan State University and Sun Yat-sen University in China, which releases millions of mosquitoes infected with the same bacterium weekly in China and mechanically separates the males and females with 99% accuracy.

Another company using lab-altered mosquitoes to fight wild mosquitoes, Oxitec, has met resistance in the US. Although its mosquitoes are widely used in Brazil, voters in Florida were wary of releasing the genetically modified insects into the wild; people seem to fear the unanticipated consequences of having a large population of GM organisms on the loose. MosquitoMate appears to have avoided much of this controversy because its mosquitoes are infected with a bacterium common in insects rather than genetically engineered. There has been little attention to MosquitoMate’s activities in trial areas, and most of the feedback has been positive. The company will begin by selling its mosquitoes locally, then expand to nearby areas and beyond as it is able to boost production.

(Emily Waltz, Nature News)

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

November 7, 2017 at 5:21 pm

Science Policy Around the Web – November 3, 2017


By: Liu-Ya Tang, PhD


source: pixabay

Cancer Research

Genomic studies track early hints of cancer

Does getting cancer mean the end of the world? No, if it is found early and treated early. Early detection of cancer greatly increases the chances for successful treatment and survival. Researchers have been striving to develop new approaches for diagnosing cancer, as effective screening tests for early detection do not exist for many types of cancer.

Liquid biopsy is one of the new techniques developed in recent years. Unlike traditional biopsies, which are usually tissue-based, a liquid biopsy is a test done on blood, urine, or sputum samples. Blood-based biopsies, for example, detect circulating tumor cells, DNA, or other substances released by tumor cells. One prominent advantage of liquid biopsy over traditional biopsy is that it is less invasive. Beyond cancer diagnosis, liquid biopsy can also monitor a patient’s response to a treatment plan. After several years of research at the bench, liquid biopsy is now being applied to diagnose and treat cancer patients.

Liquid biopsy can detect cancer at an early stage, but some researchers are aiming to detect cancer before its onset. Two recently funded genomics projects use DNA sequencing techniques similar to those used in liquid biopsy, but apply the technology to get one step ahead of cancer. To understand the molecular events that trigger benign tumors to become malignant, researchers are interested in creating a “pre-cancer genome atlas” by sequencing DNA from precancerous tissues. This approach will be applied to lung, breast, prostate, and pancreatic cancer. It is particularly important for pancreatic cancer, for which no early detection method is available, leading to high mortality. Moreover, many pancreatic tumors seem to be driven by mutations in the same genes, which could make the disease predictable if known mutations can be detected at the precancerous stage.

As part of the National Cancer Moonshot Initiative, cancer prevention and early detection are treated as just as important as treatment in fighting cancer. If researchers can successfully identify the early hints of cancer development, cancer could become more predictable or preventable. Avrum Spira of Boston University in Massachusetts, a leader of both projects, believes his work “could herald a change in how researchers approach cancer prevention.”

(Heidi Ledford, Nature News)


The Scientific Workforce

NIH-Funded Network to Foster Diversity: Achievements and Challenges

In October 2014, the National Institutes of Health launched a $250 million diversity initiative, which included a five-year, $22 million grant to support the National Research Mentoring Network (NRMN). The purpose of NRMN is to increase workforce diversity by bringing more members of historically underrepresented populations, including blacks, Hispanics, and Native Americans, into biomedical research. The mentoring network covers the full spectrum of the research community, from undergraduates to senior faculty members. The mentoring is not limited to training in grant writing and applications, but also covers various aspects of professional development.

One advantage of NRMN is its expansion of the traditional model of mentoring, in which young scientists learn the profession through contact with their supervisors. Mentees can suffer when good mentoring doesn’t come naturally to their mentors or when the extent and quality of mentoring are not tracked. Mentoring needs are particularly high in less research-intensive institutions, where potential mentors might be scarce. To overcome these problems, NRMN was created to provide a mentoring resource to mentees, which is especially important for minority students with less access to high-quality mentoring. After signing up online, a mentee is matched by the NRMN system based on the mentee’s interests and professional goals. As of June 30, NRMN had 3,713 mentees and 1,714 mentors, compared to only 37 mentees and 16 mentors who signed up during the first year. Demographically, blacks and Hispanics make up 57% of the mentee group, compared to only 11% of Bachelor of Science holders nationwide. NRMN has featured success stories from Rachel Ezieme, the daughter of Nigerian immigrants, and Crystal Lee, a Native American from the Navajo tribe.

Despite these successes, challenges remain. One big challenge is recruiting more mentors. Because good mentoring is not directly tied to professional achievement, senior faculty members may not make the extra effort to seek out mentoring opportunities. Dr. Hannah Valantine, the National Institutes of Health’s chief officer for scientific workforce diversity, commented that “the burden is enormous”: NRMN data show that 58% of the mentors are women, even though women hold less than 40% of tenured faculty positions in the United States. One proposed solution, suggested by Dr. Karen Winkfield of Wake Forest University in Winston-Salem, North Carolina, is to “tie NIH funding to a university’s commitment to mentoring.” A second challenge is evaluating the program. Dr. Keith Norris, who leads an official assessment team for NRMN, is trying to identify what constitutes good mentoring and determine which elements of NRMN benefit professional development. He noted that there is a large degree of variation in how mentees and mentors interact; a mentoring relationship can be very brief or can last for several years. To address this, Dr. Norris plans to apply “high-touch interventions” and hopes that the results will generalize to the entire population.

(Jeffrey Mervis, ScienceInsider)

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

November 3, 2017 at 6:44 pm

Science Policy Around the Web – October 31, 2017

leave a comment »

By: Michael S. Tennekoon, PhD


source: pixabay

Forensic DNA testing

Using DNA to Sketch What Victims Look Like; Some Call it Science Fiction

“CSI on Steroids.” That is how the latest forensic tool used by law enforcement agencies has been described. The tool is called phenotyping. But is it all that it promises to be?

Phenotyping is a technique that uses human DNA, recovered from materials such as skin or blood, to predict an individual’s appearance. By analyzing the genetic sequence, scientists can look for genes that code for physical characteristics such as skin color, eye color, geographic ancestry, and many others. This information is then fed into a computer algorithm that generates a predicted likeness. Today, dozens of law enforcement agencies from New York to Louisiana use phenotyping in cases where traditional forensics has produced no leads.

However, critics argue that this technique is reaching far beyond its means, especially as there is a lack of peer-reviewed research to back up its claims. Indeed, Parabon Nanolabs, one of the pioneering companies offering phenotyping to law enforcement agencies, has yet to publish the methods underlying its techniques. Furthermore, Dr. Yaniv Erlich, a computer scientist who studies genetics at Columbia University, states that apart from basic predictions like human ancestry, phenotyping of faces is “on the verge of science fiction.”

In addition to concerns about the reliability of phenotyping, there are other ethical and legislative concerns. For example, the New York Civil Liberties Union points out that using ancestry to identify potential suspects in a criminal case will place many innocent people without any connection to the incident under suspicion. Theoretically, this could be used in a similar way to ‘stop and frisk’. These concerns are in addition to the already well-documented susceptibility of DNA testing to human error and bias.

States are still in the process of establishing laws and guidelines to regulate DNA testing, including phenotyping, for use in criminal cases. For example, in New York, one must have authorization from state officials before DNA testing is done.

Ethical issues notwithstanding, supporters such as Deputy Chief Katranakis, commander of the New York Forensics Investigation Division, believe that phenotyping offers more benefits than drawbacks, especially in cases where there are no other alternatives. However, as the use of phenotyping becomes more prevalent, caution must still be urged when weighing its contribution to criminal cases.

(Ashley Southall, The New York Times)



Research Misconduct

The Cookie Crumbles: A Retracted Study Points to a Larger Truth

By generous estimates, the chances of a PhD student getting a job in academia are less than 15%, so the pressure to publish has never been higher. Some would argue that this pressure has increased the quantity of lower-quality research. Perhaps unsurprisingly, then, the social sciences face a serious replication problem, and the number of retracted papers has risen sharply over the past decade.

On Friday, October 20th, another study that appeared to offer a cheap and simple tool in the fight against national obesity was retracted as well. The study suggested that simply placing cartoon Elmo stickers on apples could nudge more children to pick an apple over cookies when offered the choice. Other researchers, however, noted discrepancies in the paper’s numbers, which led the original authors to submit a replacement article. The problems deepened when it became known that the study had actually been performed on children much younger than originally reported (3-5 years old rather than 8-11 years old). The situation is exacerbated by the fact that similar concerns may extend to other published reports from the same lab.

Studying ways to change complex eating behaviors in children is no easy task. Children are considered a vulnerable population, and working specifically with them carries several additional regulatory requirements: obtaining parental permission, working on school premises, and getting additional approval from Institutional Review Boards, to name but a few. These hurdles, however, are no excuse to bypass scientific rigor. Given the ease with which scientific findings can reach the masses through social media and the press, scientists must take responsibility for being extra vigilant that their findings are accurate, or risk losing the public’s trust and, ultimately, public funding for the wider scientific community.

(Aaron E. Carroll, The New York Times)


Climate Change

Fighting Poverty Might Make it Harder to Fight Climate Change

At first glance, the goal of tackling poverty appears noble and entirely unrelated to tackling climate change. However, new research shows that eradicating poverty may actually make it harder to address climate change. Why? People lifted out of extreme poverty may travel more and increase their energy consumption, creating a larger carbon footprint.

Given this potential conflict, researchers at the University of Maryland in College Park modeled the impact that eradicating poverty would have on climate change. The authors found that eradicating extreme poverty (i.e., raising incomes from less than $1.90 a day to between $1.90 and $2.97 a day) would not jeopardize current targets for tackling climate change. However, lifting everyone to the next income level (the global middle-income level, defined as living on $2.97 to $8.44 per day) would have a significant impact, adding an extra 0.6°C of warming.

This leaves global society in the precarious moral position of deciding what level of poverty is acceptable to ensure the sustainability of the planet. If we want not only to eradicate poverty but also to bring everyone into the middle class, we would need to devote almost seven times more resources to tackling climate change than we currently do.

However, all hope is not lost. If clean energy becomes cheaper than fossil fuels, developing nations could use it to fuel economic growth, reducing future carbon emissions. There are encouraging signs that this may be possible: in recent years, carbon dioxide emissions have not grown in step with the global economy, and emissions in the United States, Europe, and China have actually fallen, even as the amount of carbon dioxide accumulated in the atmosphere has continued to rise. In the meantime, however, the authors of the current study call for lifestyle changes, such as taking public transportation, living in smaller houses, and eating less meat.

(Allie Wilkinson, Science Magazine)

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

October 31, 2017 at 8:43 pm