Science Policy For All

Because science policy affects everyone.

Archive for November 2017

Science Policy Around the Web – November 28, 2017

By: Patrice J. Persad, PhD

source: pixabay

Mental Health

Brain Patterns May Predict People at Risk of Suicide

Suicide is the second most common cause of death for adults under 27 years old in the United States. In light of this, being able to identify individuals who are more likely to attempt suicide could transform the outcomes of suicide prevention initiatives.

At the end of last month, scientists reported that machine learning (ML) algorithms could use functional magnetic resonance imaging (fMRI) profiles to differentiate subjects with suicidal thoughts (17 cases) from those without (17 controls). Each participant viewed a display of thirty words classified as positive (for example, "kindness," "carefree," and "innocent"), negative (for example, "guilty," "evil," and "gloom"), or pertaining to suicide (for example, "lifeless," "hopeless," and "apathy"), and fMRI captured each participant's brain activity, or "neural signature," for each word.

The algorithm identified individuals who had experienced suicidal thoughts with 91% accuracy. Upon stratifying the cases, nine of the 17 individuals with suicidal thoughts had previously attempted to end their lives, and the ML algorithm distinguished past attempters from non-attempters with 94% accuracy. The study's authors envision clinical applications, including improved patient monitoring, which could lower the suicide rate, as well as treatments or therapies concentrating on the cerebral areas identified by the algorithm. However, the researchers caution that the study must be replicated with more subjects, particularly across different demographic groups and including individuals who have been diagnosed with psychiatric disorders but do not experience suicidal thoughts.
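
For readers curious how such a classification is typically set up, below is a minimal, purely illustrative Python sketch of training and evaluating a classifier on word-evoked neural signatures with leave-one-out cross-validation. The random features, the feature dimensions, and the choice of a Gaussian Naive Bayes model are assumptions for illustration; this is not the published study's pipeline.

```python
# Illustrative only: random stand-in "neural signatures" for 17 cases and
# 17 controls, classified with Gaussian Naive Bayes and scored by
# leave-one-out cross-validation (one subject held out per fold).
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
n_subjects, n_features = 34, 30 * 5      # e.g., 30 words x 5 brain regions (assumed)
X = rng.normal(size=(n_subjects, n_features))   # stand-in for neural signatures
y = np.array([1] * 17 + [0] * 17)               # 1 = suicidal ideation, 0 = control

scores = cross_val_score(GaussianNB(), X, y, cv=LeaveOneOut())
print(f"leave-one-out accuracy: {scores.mean():.2f}")  # ~0.5 on random data
```

On real fMRI features, an accuracy well above this chance level, such as the 91% reported, is what suggests the neural signatures carry diagnostic information.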

The application of this research raises various ethical concerns, including complications surrounding valid consent and potential use of the data by the life insurance industry. Classification of thoughts by ML may intrude on privacy beyond what volunteers feel they are consenting to when they agree to be scanned by fMRI or similar technologies. Secondary uses of neural signature classification, which can extend well outside of medicine and healthcare, must also be evaluated. One hypothetical scenario is misuse by life insurance companies, inviting discrimination against prospective policy holders on the basis of true or false classification of suicidal ideation from fMRI.

(Jon Hamilton, National Public Radio)

Species Conservation

‘Gene Drives’ Are Too Risky for Field Trials, Scientists Say

Consider the following illustration. On the island of New Zealand, non-native stoats, a type of short-tailed weasel, are devastating populations of the endemic takahe, a flightless bird. Settlers in the 1800s released stoats across the terrain to prey on rabbits, which were disturbing and devouring agricultural crops; the stoats were, in effect, intended as posts in a living rabbit-proof fence. The impact on the native ecosystem, however, was and remains negative, underlying the decline in and harm to takahe and other flightless bird species, such as the kiwi, New Zealand's national bird. This raises the question: how can population sizes of invasive species, such as these stoats, be reduced?

A proposed solution to this question uses the germline genome editing system CRISPR to introduce genetic elements designed to undermine the reproductive success of a particular species. The idea of a gene drive system is to bias inheritance so that a particular variant of a gene becomes more frequent than ordinary Mendelian inheritance would allow. CRISPR can be used to introduce such a variant into a gene essential for a given species' reproduction. When an animal carrying a CRISPR-edited copy of the gene mates with an animal carrying a functional copy, the drive machinery copies the edit onto the intact allele in the offspring's germline, so the altered copy is passed on to far more than half of subsequent offspring. Eventually, as the variant becomes frequent enough, two individuals carrying altered copies are likely to mate, producing infertile offspring, and the number of individuals in the invasive population, the stoats for example, would drop as a result.
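
The suppression dynamics are easier to see with a toy model. The short Python sketch below is a deliberately simplified illustration, not the model used by researchers in the field: it assumes an invented homing efficiency and release frequency, treats generations as discrete, and uses the shrinking fraction of fertile offspring as a crude proxy for population decline.

```python
# Toy model of a CRISPR homing gene drive that disrupts a fertility gene.
# Individuals with two drive copies (DD) are sterile; in heterozygotes (WD),
# homing copies the drive onto the wild-type allele, so D is transmitted to
# more than half of gametes. All parameter values are assumptions.

def gene_drive_sketch(generations=15, release_freq=0.05, homing=0.9):
    f_wd = 2 * release_freq * (1 - release_freq)  # heterozygote share among fertile adults
    output = 1.0                                  # reproductive output relative to start

    for gen in range(1, generations + 1):
        q = f_wd * (1 + homing) / 2               # gamete fraction carrying the drive
        dd, wd, ww = q * q, 2 * q * (1 - q), (1 - q) ** 2   # random mating
        fertile = wd + ww                         # DD offspring cannot reproduce
        output *= fertile                         # crude proxy for suppression
        f_wd = wd / fertile                       # heterozygote share in next generation
        print(f"gen {gen:2d}: drive gamete freq {q:.2f}, relative output {output:.3f}")

gene_drive_sketch()
```

Even from a small release, the drive allele climbs toward the homing efficiency and the fraction of fertile offspring falls, which is exactly why an accidental escape into a species' native range is worrying, as the study discussed below found.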

Are there any caveats to the CRISPR-based gene drive system for species conservation? A research team headed by Kevin M. Esvelt, the first to propose gene drives for wild populations, used population models and simulations to reveal a vital finding: if even a few gene-drive-modified organisms of an invasive species came into contact and bred with wild-type members of the same species in its native range, the population of the species as a whole in its native habitat would decrease over time. In other words, an encounter between a wild-type organism and a gene drive organism is inauspicious if mating occurs in the species' native habitat. Based on these findings, the researchers stressed prudence before moving this application into field studies, echoing discussions led by the National Academy of Sciences in 2016.

One alternative may be to engineer gene drives that deteriorate after a certain number of generations. However, according to Esvelt, in the context of eradicating vector-borne diseases such as malaria, restricted gene drive systems that affect only a few generations (out of many) may be ineffective in vector populations.

Although the opening example's setting is New Zealand, the future development and deployment of CRISPR-based gene drive systems depend on international cooperation and participation, because ecosystems, and the impacts of modifying them, do not stop at national borders. To formulate proper safeguards before field testing begins, voices from the scientific community, the political arena, the general public, and other domains across potentially affected countries must be heard, so that the protection of all species is maximized.

(Carl Zimmer, New York Times)

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

November 28, 2017 at 7:03 pm

Science Policy Around the Web – November 17, 2017

By: Janani Prabhakar, PhD 

source: pixabay

Public Health

The ‘Horrifying’ Consequence of Lead Poisoning

Changes in water treatment practices in Flint, Michigan in 2014 resulted in elevated levels of lead in the water supply and eventually culminated in a state of emergency in January 2016. The contamination affected over 10,000 residents, forcing these individuals to refrain from using the city's water supply until 2020. Because state officials may have been aware of lead contamination in the water supply for months before it became public, these officials are now facing criminal charges. This negligence is particularly troubling given recent evidence showing persistent effects of lead contamination on health outcomes in Flint residents. In a working paper, Daniel Grossman at West Virginia University and David Slusky at the University of Kansas compared fertility rates in Flint before and after the change in water treatment practices that led to the crisis, and also compared post-change fertility rates in Flint to those of unaffected towns in Michigan. They found that the fertility rate declined by 12 percent and the fetal death rate increased by 58 percent. Similar changes have been observed in other cities after comparable incidents of lead contamination in the water supply. Furthermore, given that the number of children with elevated blood lead levels doubled after the change in treatment practices, the long-term effects of this contamination on cognitive, behavioral, and social outcomes are only beginning to be examined and understood. The circumstances in Flint are an example of how the misplaced focus of high-level policy decisions can negatively impact local communities, particularly low-income black neighborhoods. Black neighborhoods are disproportionately affected by lead contamination, but the lack of sufficient attention, as well as the false suggestion propagated by industry leaders and policy makers that affected individuals were to blame, has deterred progress in addressing critical issues in at-risk and underserved communities.
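
For methods-minded readers, the before-and-after comparison of Flint against unaffected Michigan towns described above is a difference-in-differences design. The sketch below illustrates that design only; the town labels and rates are invented placeholders, not the working paper's data.

```python
# Difference-in-differences illustration with made-up numbers: the coefficient
# on the flint:post interaction estimates how much more Flint's fertility rate
# changed after the water switch than the comparison towns' rate did.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "city": ["Flint", "Flint", "Other", "Other"] * 25,
    "post": [0, 1, 0, 1] * 25,                       # 0 = before switch, 1 = after
    "births_per_1000": [62.0, 55.0, 60.0, 59.0] * 25,  # placeholder rates
})
df["births_per_1000"] += rng.normal(0, 1.0, len(df))   # noise so the fit is not degenerate
df["flint"] = (df["city"] == "Flint").astype(int)

model = smf.ols("births_per_1000 ~ flint * post", data=df).fit()
print(model.params["flint:post"])   # difference-in-differences estimate (about -6 here)
```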

(Olga Khazan, The Atlantic)

Climate Change

Why China Wants to Lead on Climate, but Clings to Coal (for Now)

Home to 1.4 billion people, China is one of the world's largest coal producers and carbon polluters. Nevertheless, it aims to spearhead the international agreement to address climate change, and despite this contradiction, China is already on track to meet its commitment under the Paris climate accord. Its move toward reducing dependence on coal is driven in part by internal pressure to curb air pollution. But, according to NRDC climate and energy policy director Alvin Lin, given its size and population, phasing out coal dependence will not only be a long process for China, but one with many ups and downs. For instance, while China has shown progress in meeting its commitments, a recent report projects higher emissions this year, which may reflect an uptick in economic growth and a reduction in the rainfall needed to power hydroelectric plants. While Lin portrays this uptick as an anomaly, competing interests within the Chinese government make the future unclear. In its efforts to increase its presence abroad, China has built coal plants in other countries; at the same time, it is the largest producer of electric cars. President Xi Jinping has derided the United States for being isolationist and reneging on the Paris climate accord, but how his government plans to hold up its end of the deal has not been revealed. An important caveat is that even if every country achieves its individual Paris pledge, the planet will still heat up by 3 degrees Celsius or more. Given that this increase is large enough to have catastrophic effects on the climate, adherence to the Paris pledges serves only as a baseline for the far larger effort needed to combat global warming.

(Somini Sengupta, The New York Times)

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

November 17, 2017 at 4:51 pm

Science Policy Around the Web – November 14, 2017

By: Saurav Seshadri, PhD

source: pixabay

Alzheimer’s Disease

Bill Gates sets his sights on neurodegeneration

Microsoft founder Bill Gates has announced a new funding initiative for research into Alzheimer’s Disease, starting with a personal donation of $50 million to the Dementia Discovery Fund (DDF).  The DDF is a UK-based, public-private collaboration, launched in 2015 and designed to encourage innovative research into treatments for dementia, of which Alzheimer’s Disease is a leading cause.  Initial investment in the DDF, which came from the pharmaceutical industry and government entities, was $100 million, meaning Gates’ contribution will be significant.  Gates says his family history makes him particularly interested in finding a cure for Alzheimer’s Disease.  The DDF has already taken steps in this direction: its first investment was in the biopharma company Alector, which is moving forward with immune system-related research to combat Alzheimer’s Disease.

Gates is already famous for his philanthropy through the Bill and Melinda Gates Foundation, which funds efforts to fight poverty and disease throughout the world.  However, the Foundation has traditionally focused on infectious diseases, such as HIV and malaria, making Alzheimer’s Disease Gates’ first foray into neuroscience.  In this regard, he has some catching up to do to match philanthropic contributions and business pursuits by other tech billionaires.  These include his Microsoft co-founder Paul Allen, who started the Allen Institute for Brain Sciences with $100 million in 2003.  The Allen Institute provides a range of tools for basic researchers using mouse models, generating comprehensive maps of brain anatomy, connectivity and gene expression.  More recently, Tesla founder Elon Musk started Neuralink, a venture which aims to enhance cognitive ability using brain-machine interfaces.  Kernel, founded by tech entrepreneur Bryan Johnson, has a similar goal.  Finally, while the Chan Zuckerberg Initiative (started by Facebook CEO Mark Zuckerberg in 2015) doesn’t explicitly focus on neuroscience, its science program is led by acclaimed neuroscientist Cori Bargmann.

As pointed out by former National Institute of Mental Health Director Tom Insel, this infusion of money, as well as the fast-moving, results-oriented tech mindset behind it, has the potential to transform neuroscience and deliver better outcomes for patients.  As government funding for science appears increasingly uncertain, such interest and support from private investors is encouraging.  Hopefully the results will justify their optimism.

(Sanjay Gupta, CNN)

 

Physics

Elusive particles create a black hole for funding

The Large Hadron Collider (LHC) enabled a scientific breakthrough in 2012 when it was used to produce evidence for the Higgs boson, the particle associated with the field that endows matter with mass.  In the wake of the worldwide excitement generated by that discovery, physicists finalized plans for a complementary research facility, the International Linear Collider (ILC), to be built in Japan.  While the LHC is circular and collides protons, the ILC would collide positrons and electrons, at lower energy but with more precise results.  Unfortunately, anticipated funding for the $10 billion project from the Japanese government has failed to materialize.  Following recent recommendations by Japanese physicists, the group overseeing the ILC has now agreed on a less ambitious proposal, for a lower energy machine with a shorter tunnel.  Though physicists remain optimistic that the ILC will still provide useful data, it will no longer be able to produce high-energy quarks (one of its planned uses), and will instead focus on previously detected particles and forces.  The ILC's future is currently in limbo until the Japanese government makes a concrete financial commitment, and it is unlikely to be completed before 2030.

After the Higgs boson, the LHC struggled to find proof of the existence of other new particles.  One such high-profile disappointment was the search for dark matter.  When dark matter was hypothesized to be the source of unexplained gamma radiation observed with NASA’s Fermi Space Telescope, the search for a dark matter particle became a top priority for the LHC’s second run.  Such evidence would also have supported supersymmetry, a key theory in particle physics.  However, these efforts, as well as multiple others using different detectors, have thus far failed to find any signs of dark matter.  These unsuccessful experiments certainly contributed to scaling back the ILC, and illustrate larger problems with setting realistic expectations and/or valuing negative results among scientists, government officials, and the public.  As a result, in order to advance our understanding of the basic building blocks of our universe, particle physicists will now have to do more with less.

(Edwin Cartlidge, Nature News)

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

November 14, 2017 at 5:40 pm

Science Policy Around the Web – November 10, 2017

By: Vanessa L.Z. Gordon-Dseagu, PhD

Source: Pexels

Healthcare

House Passes CHIP Funding, Sends Bill to Uncertain Future in Senate

The Children's Health Insurance Program (CHIP) extends healthcare coverage to children living in low-income households. Funded jointly by the federal government and the states, the program currently covers around 8.9 million children and 370,000 pregnant women. Since its start in the 1990s, the program has helped reduce the proportion of uninsured children from 13.9% in 1997 to 5.3% at the start of 2017. Federal funding for CHIP expired on October 1, 2017, although the majority of states have enough money to pay for the program until 2018.

On Friday, November 3rd, the House passed the Healthy Kids Act (by a 242-174 vote) to extend funding for CHIP for the next five years. Although it passed, the bill is not without contention. The majority of Democrats voted against it over concerns about how the program would be funded. The act aims to fund CHIP by increasing premiums for older, wealthier individuals on Medicare, narrowing the period in which those covered under the Affordable Care Act (ACA) can pay their premiums, and taking money from the ACA's Prevention and Public Health Fund, the first fund of its kind mandated to focus on prevention and improvements in public health.

As reported in a number of sources, the issue of funding is likely to delay the bill once it reaches the Senate, with Democrats and Republicans accusing each other of delay tactics. Rep. Frank Pallone (D-N.J.) stated that the bill "will go to the Senate, and it will sit there," while Rep. Greg Walden (R-Ore.) accused the Democrats of "Delay, delay, delay." A number of commentators suggest that funding for CHIP will be rolled into an end-of-year budget bill aimed at preventing a government shutdown.

(Laura Joszt, The American Journal of Managed Care)

Healthcare

Maine Voters Approve Medicaid Expansion, a Rebuke of Gov. LePage

The ballot question was a relatively simple one:

“Do you want Maine to expand Medicaid to provide healthcare coverage for qualified adults under age 65 with incomes at or below 138% of the federal poverty level, which in 2017 means $16,643 for a single person and $22,412 for a family of two?”

On the 7th of November, in the first successful ballot initiative of its kind, residents of Maine demonstrated overwhelming support for the policy, with 59% voting yes. The measure expands Medicaid under the umbrella of the Affordable Care Act, extending coverage to an estimated 70,000 people in the state who are currently uninsured and earning at or below 138% of the federal poverty line; 31 states already have similar expansions in place.
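
As a quick arithmetic check, the dollar figures quoted in the ballot question follow from taking 138% of the 2017 federal poverty guidelines and rounding up to the next dollar; the guideline values used below ($12,060 for one person, $16,240 for two) are assumed here for illustration.

```python
import math

# Assumed 2017 federal poverty guideline values (contiguous US).
fpl_2017 = {1: 12_060, 2: 16_240}

for household_size, guideline in fpl_2017.items():
    threshold = math.ceil(1.38 * guideline)   # 138% of the guideline, rounded up
    print(f"household of {household_size}: ${threshold:,}")
# household of 1: $16,643
# household of 2: $22,412
```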

A key stumbling block to implementing the program is strong opposition from Maine's governor, Paul LePage. The governor, who had previously vetoed health insurance expansion bills proposed by the state legislature on five occasions, responded to the vote in a statement on the 8th of November calling it "fiscally irresponsible" and "ruinous to Maine's budget." In defense of the expansion, Anne Woloson of Maine Equal Justice Partners commented that, with 90% of the funding coming from the federal government, "This will improve the Maine economy. It's going to create good-paying jobs."

LePage also stated that his administration was not prepared to expand Medicaid unless the state found a way to pay for it at the levels calculated by the Department of Health and Human Services. However, the governor's ability to veto the implementation of a voter-approved measure is limited, and the expansion is likely to become law in 2018.

Following the ballot’s success, and with increasing support for Medicaid expansion nationally, advocates in Utah and Idaho are planning similar votes in their own states.

(Abby Goodnough, The New York Times)

 

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

November 10, 2017 at 6:55 pm

Pharmaceutical Detailing: In the US, the Details are Tied to the Prescriber's Name

By: Allison Dennis B.S.

Source: pixabay

While U.S. privacy laws protect patients from direct pharmaceutical marketing and shield their personal information from data mining, physicians are routinely identified based on their prescribing habits and targeted by pharmaceutical companies through personalized marketing campaigns. By their very nature, these campaigns aim to influence the behavior of prescribers. In other countries, including those protected by the European Union’s Data Protection Act, the personal identification of prescribers through medical data is strictly forbidden. However, in the U.S. these personalized campaigns are made possible by a robust pipeline of data sharing.

The pipeline begins with pharmacies, which routinely sell data derived from the vast volume of prescriptions they handle. While the prescribers' names are usually redacted, IMS Health, a key health information organization in the pipeline, can easily use the American Medical Association (AMA)-licensed Physician Masterfile to reassociate physician ID numbers with the redacted names. The physician ID numbers are issued by the U.S. Drug Enforcement Administration (DEA) and sold to the AMA through a subscription service. IMS Health uses the prescription data to develop analytic tools for sale to pharmaceutical companies desperate to gain a marketing edge with individual prescribers. The tools consolidate the activity of nurse practitioners, dentists, chiropractors, and any other professionals who can legally write a prescription. Marketers can use these tools to determine how much each named physician is prescribing, how that compares to other named physicians, what their specialty is, and so on.
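
To make the reassociation step concrete, here is a minimal, hypothetical sketch of the kind of join involved; the column names, IDs, and records are invented for illustration and do not reflect IMS Health's actual data or code.

```python
# Hypothetical illustration: prescription records with names redacted but a
# prescriber ID intact, joined to a licensed masterfile that maps IDs back to
# names and specialties. All data below are fabricated.
import pandas as pd

prescriptions = pd.DataFrame({
    "prescriber_id": ["AB1234563", "AB1234563", "CD9876541"],
    "drug": ["drug_x", "drug_y", "drug_x"],
    "fills": [42, 10, 3],
})

masterfile = pd.DataFrame({
    "prescriber_id": ["AB1234563", "CD9876541"],
    "name": ["Dr. Jane Doe", "Dr. John Roe"],
    "specialty": ["Pain Medicine", "Family Medicine"],
})

# Joining on the shared ID re-identifies each "redacted" record by name.
detailed = prescriptions.merge(masterfile, on="prescriber_id", how="left")

# A marketer could then rank named prescribers by volume for a given drug.
ranking = (detailed[detailed["drug"] == "drug_x"]
           .groupby(["name", "specialty"])["fills"].sum()
           .sort_values(ascending=False))
print(ranking)
```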

The data contained in the AMA's Physician Masterfile are useful for informing research and conducting surveys of practicing physicians, yet identifying physicians by name is usually unnecessary for public health research and enables prescriber manipulation. The prescriber reports compiled by IMS Health enable pharmaceutical companies to take a data-driven approach to direct-to-physician advertising, a practice known as detailing. During a 17-month period between 2013 and 2015, pharmaceutical companies reported spending $3.5 billion in payments to physicians covering promotional speaking, consulting, meals, travel, and royalties. While many of these expenditures may be tied to legitimate collaborations between pharmaceutical companies and medical professionals, the U.S. Department of Health and Human Services warns that free samples, sham consulting agreements, subsidized trips, and industry-sponsored continuing education opportunities are all tools used by vendors to buy medically irrelevant loyalty. Indeed, physicians themselves seem conflicted over the significance of these relationships. When residents were asked whether contact with pharmaceutical representatives influenced their prescribing practices, 61% believed they were unaffected. However, the same residents felt that only 16% of their peers were similarly immune.

Studies examining the role of detailing have found it associated with higher prescribing frequency, higher costs, and lower prescribing quality, with no contrasting favorable associations. Recent concerns over conflicts of interest arising from increased exposure of physicians to detailers led several academic medical centers to restrict sales visits and gift giving and to implement enforcement mechanisms. Compared to hospitals with no detailing limitations, hospitals with limitations saw an 8.7% relative decrease in the market share of detailed drugs and a 5.6% relative increase in the market share of non-detailed drugs. Overuse of brand-name drugs, which are most commonly associated with detailing, cost the US approximately $73 billion between 2010 and 2012, one-third of which was shouldered by patients. Advocates of the practice lament the lack of formal academic opportunities for physicians to learn about new drugs, believing the educational materials provided by pharmaceutical representatives fulfill a need.

The most tragic example of the potential harms of detailing aimed at individual prescribers comes from the early days of the prescription opioid crisis. Purdue Pharma, the maker of OxyContin, used prescriber databases to identify the most frequent and least discriminate prescribers of opioids. Sales representatives, enticed by a bonus system that tracked their success according to upswings captured in the prescriber database, showered their target prescribers with gifts while systematically underrepresenting the risk of addiction and abuse from OxyContin. Recruitment into Purdue's national speaker bureau and subsequent paid opportunities were further used to entice lukewarm and influential prescribers.

The last decade has seen several attempts to address the influence of detailing at the institutional, professional, and executive levels. Individual hospitals have begun limiting vendors' access to physicians. The American Medical Student Association began issuing a conflict-of-interest scorecard, allowing all U.S. medical schools to track and assess their own detailing-related policies, including those limiting gifts from industry, industry-sponsored promotional speaking relationships, and the access permitted to pharmaceutical sales representatives, as well as the overall enforcement and sanctioning of these policies. In 2016, 174 institutions participated. The AMA, which licenses the list of physician names used by health information organizations, has offered physicians the chance to block pharmaceutical representatives and their immediate supervisors from accessing their prescribing data. However, the Physician Data Restriction Program does not limit the ability of other employees at a pharmaceutical company to access the prescribing data of doctors who have opted out. Physicians must renew their opt-out request every three years and are automatically added to the Masterfile upon entering medical school. Five years after the program's introduction in 2006, just 4% of practicing physicians listed on the file had opted out.

In 2007, the state of Vermont outlawed the practice of selling prescription data for pharmaceutical marketing without prescriber consent. The law was quickly challenged by IMS Health, the Pharmaceutical Research and Manufacturers of America, and other data aggregators and eventually struck down by the U.S. Supreme Court. Vermont legislators held that detailing compromises clinical decision making and professionalism and increases health care costs and argued that the law was needed to protect vulnerable and unaware physicians. However, the Court held that speech in the aid of pharmaceutical marketing is protected under the First Amendment and could not be discriminately limited by Vermont law.

Congress made the first federal attempt to address the issue by enacting the Physician Payment Sunshine Act in 2010, which requires companies participating in the Medicare, Medicaid, and State Children's Health Insurance Program markets to track and report their financial relationships with physicians and teaching hospitals. The transparency gained from these disclosures has allowed many researchers to systematically evaluate connections between conflicts of interest and prescribing behavior.

As policy makers and private watchdogs scramble to address the issues raised by detailing, the availability of physician names and prescribing habits continues to facilitate novel tactics. Limits on face time have pushed detailers to tap into the time physicians spend online. When the names of prescribers are known, following and connecting with them through social media accounts is straightforward. Companies like Peerin have emerged that analyze prescriber Twitter conversations to learn whose conversations are most likely to be influential and which prescribers are connected. LinkedIn, Facebook, and Twitter all offer the ability to target a list of people by name or e-mail address for advertising. While online drug ads are regulated by the U.S. Food and Drug Administration, pharmaceutical companies are experimenting with unbranded awareness campaigns to circumvent direct-to-consumer regulations.

While personalized prescriber marketing campaigns may be turning a new corner in the internet age, a simple opportunity exists at the federal level to de-personalize the practice of physician detailing. It is unclear how much the DEA stands to gain from selling physician ID subscriptions; however, in the context of the downstream costs of overusing name-brand drugs, forgoing that revenue may be an acceptable loss. The U.S. government's central role in allowing prescriptions to be reassociated with named prescribers could be addressed directly through revised policy, preempting downstream prescriber manipulation.

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

November 9, 2017 at 10:41 pm

Science Policy Around the Web – November 7, 2017

By: Rachel F Smallwood Shoukry, PhD 

source: pixabay

Science and Society

What’s Your (Epistemic) Relationship To Science?

A recently published paper presented the results of a meta-analysis of studies published in the journal Public Understanding of Science. The goal of the paper was to determine whether these studies correctly distinguished scientific knowledge from scientific understanding. While the exact definitions are a topic of debate in the field of epistemology, understanding of a concept is generally considered to require a deeper level of comprehension than having knowledge of a concept; this level of comprehension should allow one to draw inferences and make judgments. For the purposes of their paper, the authors set a specific definition of understanding, which included the ability to grasp "how a constellation of facts relevant to that subject are related to one another." They found 67 papers that used the term "understanding" or a similar one, and compared those papers' definitions and descriptions of understanding with their specified one. Only one paper defined understanding in line with their definition, and only six used the term consistently with it without explicitly defining it. Forty-seven papers were unclear about their definition of understanding, two used alternate definitions, and 11 conflated knowledge and understanding.

In a follow-up analysis, the authors examined papers that evaluated an "epistemic state" and asked whether the evaluation correctly targeted the specified definition of understanding. Of the 13 papers they found, only one correctly measured understanding, while two others attempted to do so. Those two attempts, along with the other ten papers, more closely targeted an epistemic state resembling knowledge than one resembling understanding.

The authors expressed concern over these findings and urge scientists and educators to think carefully about the difference between knowledge and understanding, and to be more deliberate in targeting understanding. The United States has consistently ranked behind many countries in science and mathematics. Beyond international rankings of the general population's scientific understanding, there are other important reasons for the public to pursue understanding over knowledge. Voters elect officials who determine the policy, priorities, and funding of scientific concerns and research. Additionally, social media plays an increasingly prominent role in the public's consumption of science news and information, and a public with a better understanding of science can only help in shaping society's actions regarding scientific discoveries and recommendations for health, environmental, technological, and social advancement.

(Tania Lombrozo, NPR)

 

Public Health

US government approves ‘killer’ mosquitoes to fight disease

The US Environmental Protection Agency (EPA) has approved the start-up company MosquitoMate to release lab-bred, non-biting mosquitoes infected with the bacterium Wolbachia pipientis in order to infect wild mosquito populations. The goal is to decrease the Asian tiger mosquito population by releasing infected males, which produce nonviable progeny when they mate with wild females. The Asian tiger mosquito can transmit diseases such as dengue, yellow fever, West Nile virus, and Zika. The EPA has approved the release of these mosquitoes in 20 states plus Washington, DC, stating that the approved states have climates similar to those of the regions where MosquitoMate performed its test trials.

The biggest obstacle MosquitoMate currently faces is production time: males must be separated from females, and the company presently separates them both by hand and mechanically. Improvement is possible, however, as demonstrated by a research team from Michigan State University and Sun Yat-sen University in China, which releases millions of mosquitoes infected with the same bacterium in China every week and mechanically separates males from females with 99% accuracy.

Another company using lab-altered mosquitoes to fight wild mosquitoes, Oxitec, has met resistance in the US. Although Oxitec's genetically modified (GM) mosquitoes are widely used in Brazil, voters in Florida were wary of releasing them into the wild; people seem to fear the unanticipated consequences of having a large population of GM organisms on the loose. MosquitoMate appears to have avoided much of this controversy because its mosquitoes are infected with a bacterium common in insects rather than being genetically engineered. There has been little attention to MosquitoMate's activities in trial areas, and most of the feedback has been positive. The company will begin by selling its mosquitoes locally and then expand to nearby areas and beyond as it is able to boost production.

(Emily Waltz, Nature News)

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

November 7, 2017 at 5:21 pm

Science Policy Around the Web – November 3, 2017

By: Liu-Ya Tang, PhD

source: pixabay

Cancer Research

Genomic studies track early hints of cancer

Does getting cancer mean the end of the world? Not if it is found and treated early. Early detection of cancer greatly increases the chances of successful treatment and survival. Researchers have been striving to develop new approaches for diagnosing cancer, as effective screening tests for early detection do not exist for many types of cancer.

Liquid biopsy is one of the new techniques developed in recent years. Unlike traditional biopsies, which are usually based on tissue, a liquid biopsy is a test done on blood, urine, or sputum samples. Blood-based biopsies, for example, detect circulating tumor cells, DNA, or other substances released by tumor cells. One prominent advantage of liquid biopsy over traditional biopsy is that it is much less invasive. In addition to its use in cancer diagnosis, liquid biopsy can also monitor a patient's response to a treatment plan. After several years of research at the bench, liquid biopsy is now being applied to diagnose and treat cancer patients.

Liquid biopsy can detect cancer at an early stage, but some researchers are aiming to detect cancer before its onset. Two recently funded genomic studies also use DNA sequencing techniques similar to those used in liquid biopsy, but they are using the technology to get one step ahead of cancer. To understand the molecular events that trigger benign tumors to become malignant, researchers are interested in creating a "pre-cancer genome atlas" by sequencing DNA from precancerous tissues. This approach will be applied to lung, breast, prostate, and pancreatic cancer. It is particularly important for pancreatic cancer, for which no early detection method is available, leading to high mortality. Moreover, many pancreatic tumors seem to be driven by mutations in the same genes, which could make the disease predictable if known mutations can be detected at the precancerous stage.

Under the National Cancer Moonshot Initiative, cancer prevention and early detection are considered as important as treatment in fighting cancer. If researchers can successfully identify the early hints of cancer development, it could make cancer more predictable or preventable. Avrum Spira at Boston University in Massachusetts, a leader of both projects, believes this work "could herald a change in how researchers approach cancer prevention."

(Heidi Ledford, Nature News)

 

The Scientific Workforce

NIH-Funded Network to Foster Diversity: Achievements and Challenges

In October 2014, the National Institutes of Health launched a $250 million diversity initiative, which included a 5-year, $22 million grant to support the National Research Mentoring Network (NRMN). The purpose of NRMN is to increase workforce diversity by bringing more members of historically underrepresented populations, including blacks, Hispanics, and Native Americans, into biomedical research. The mentoring network covers the full spectrum of the research community, from undergraduates to senior faculty members. Mentoring is not limited to training in grant writing and applications; it also covers various aspects of professional development.

One advantage of NRMN is that it expands the traditional model of mentoring, in which young scientists learn the profession through contact with their supervisors. Mentees can suffer if good mentoring doesn't come naturally to their mentors or when the extent and quality of mentoring are not tracked. Mentoring needs are particularly high in less research-intensive institutions, where potential mentors might be scarce. To overcome these problems, NRMN was created to provide a mentoring resource to mentees, which is especially important for minority students with less access to high-quality mentoring. A mentee who signs up online is matched by the NRMN system based on his or her interests and professional goals. As of June 30, NRMN had 3,713 mentees and 1,714 mentors, compared to only 37 mentees and 16 mentors who signed up during the first year. Demographically, blacks and Hispanics make up 57% of the mentee group, compared to only 11% of Bachelor of Science holders nationwide. NRMN has featured success stories from Rachel Ezieme, the daughter of Nigerian immigrants, and Crystal Lee, a Native American from the Navajo tribe.

Despite its success, some challenges remain. One big challenge is recruiting more mentors. Because good mentoring is not directly tied to professional achievement, senior faculty members may not make the extra effort to seek out mentoring opportunities. Dr. Hannah Valantine, the National Institutes of Health's chief officer for scientific workforce diversity, commented that "the burden is enormous." NRMN data show that 58% of the mentors are women, even though less than 40% of tenured faculty positions in the United States are held by women. One solution, suggested by Dr. Karen Winkfield at Wake Forest University in Winston-Salem, North Carolina, could be to "tie NIH funding to a university's commitment to mentoring." A second challenge is evaluating the program. Dr. Keith Norris, who is leading an official assessment team for NRMN, is trying to identify what constitutes good mentoring and determine which elements of NRMN benefit professional development. He said that there is a large degree of variation in how mentees and mentors have interacted; in addition, mentoring relationships can be very short or can last for several years. To address this, Dr. Norris plans to apply "high-touch interventions" and hopes that the results will generalize to the entire population.

(Jeffrey Mervis, ScienceInsider)

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

November 3, 2017 at 6:44 pm