Science Policy For All

Because science policy affects everyone.

Archive for May 2018

Science Policy Around the Web – May 29, 2018


By: Cindo O. Nicholson, Ph.D.


Biohacking

As D.I.Y. gene editing gains popularity, ‘Someone is going to get hurt’

The tools to delve into gene editing and engineering as a hobby have become more accessible to the public. This can be attributed to the necessary equipment becoming cheaper and to widely shared expertise in molecular biology techniques like the polymerase chain reaction (PCR), DNA restriction mapping, and the newly popular CRISPR-Cas9 gene editing. All of this has resulted in the growth of a biohacking community, i.e. a community of citizen scientists with a shared interest in do-it-yourself genetic engineering projects.

Though members of the biohacking community share the belief that there should be open access to genetic engineering technology, some believe there is the potential for something catastrophic to occur. The biggest fear is that someone will develop and unleash a fast-spreading, rapidly-mutating, and lethal biological agent. The knowledge to make an infectious virus starting from DNA fragments pasted together has been published in the open-access journal PLOS One. However, it should be noted that the same knowledge could be used to engineer life-saving vaccines from synthesized DNA fragments instead of extracting and passaging infectious agents from infected tissue. Nevertheless, the question becomes: how are U.S. authorities regulating the use of gene-editing technologies by individuals who are not federally funded?

There are currently multiple agencies responsible for regulating various types of research, and each would be responsible for mandating the ethical use of gene-editing technologies by the labs funded through its grants. However, not all scientific endeavors rely on government funding. In 2013, a public crowdfunding campaign on Kickstarter raised almost half a million dollars to engineer a glowing plant. The F.B.I. has reached out to some “whitehat” biohacking labs, and many of these labs have guidelines that members must adhere to or risk being kicked out. However, once kicked out, an individual is still free to continue their activities on their own and in secret. With no real way to track the unregulated use of synthetic biology, the U.S. and the world are vulnerable to those who would use these technologies nefariously.

(Emily Baumgaertner, The New York Times)

Food Science

As lab-grown meat advances, the US calls for regulation

The regulation of lab-grown meat (also known as “clean meat”) is getting serious consideration by the U.S. House of Representatives. A draft spending bill from the House appropriations panel includes a statement instructing the U.S. Department of Agriculture (USDA) to issue rules on the manufacturing and labeling of lab-grown meats. Lab-grown meat is made from cells taken from live animals, such as poultry or cattle, that are grown into muscle tissue that can be pressed into burger patties or breaded to make nuggets. Advocates of lab-grown meat say its benefits include sparing the lives of animals and its environmental friendliness, since lab-grown meat does not generate greenhouse gases like methane and requires less land.

The impending arrival of lab-grown meats on the market brings a few questions to the fore, such as what actually counts as meat, and whether responsibility for regulating these products lies with the USDA or with the FDA (Food & Drug Administration). Lab-grown meats are made from the cells of animals and as such are more similar to the cell-based products already regulated by the FDA. In fact, inspecting the cell culture facilities where lab-grown meat is made would lie in the realm of expertise of FDA inspectors. By contrast, USDA inspectors are more familiar with inspecting animal slaughterhouses.

Some argue that the proposal for the USDA to regulate cellular agriculture is premature because of insufficient knowledge of the strengths and weaknesses of this method of food production. Others believe that using a spending bill to mandate agencies to come up with new regulations is wrong, especially without input from the small businesses that will be regulated.

This debate about who should regulate the marketing of lab-grown meats, and how, is another example of regulation lagging behind innovation. Why is this so frequently the case? The first lab-grown beef patty was taste-tested in 2013, which means there were at least five years to preemptively brainstorm how to regulate, decide which federal agency is best suited to issue rules, and come up with the language necessary for a proposal.

(Kelly Servick, Science Magazine News)

Have an interesting science policy link? Share it in the comments!


Science Policy Around the Web – May 22, 2018


By: Patrice J. Persad, PhD

Species Conservation

Massive Eradication Effort Ends Rodents’ Reign Of Terror On Forbidding Isle

In an era when biotechnologies such as gene drives and in-vitro fertilization pulsate as pending alternative strategies for species conservation, two seemingly outdated tactics emerged victorious: human power and canine power. Thanks to almost ten years of collaboration between the South Georgia Heritage Trust (SGHT) and Friends of South Georgia Island, the South Georgia pipit, a native avian species, regained habitat from plaguing invaders—rodents, predators of both chicks and adult birds. After covering approximately 1,500 miles of the South Atlantic island’s icy, merciless terrain (“a blessing incognito”), the conservation project’s human members and three dogs, expert “rodent sniffers,” confirmed the region to be free of rats and mice. That gives the South Georgia pipit something to merrily sing about.

How exactly did the rodents’ 200-year dynasty on South Georgia Island collapse? With helicopters furnished by the Friends of South Georgia Island, an American-headquartered organization, pilots distributed poison targeted at the invasive species. Geographical barriers also trapped the whiskery mammals; Goliath-sized glaciers stopped rodents from scurrying to, and populating, other parts of the island. Two years later, project members positioned low-tech chewable devices smeared with tantalizing bait—sticky substances like vegetable oil and sweet peanut butter. These served as checks for any remaining rodents; captured tooth impressions would signal continued infestation. The trinity of dogs—Will, Ahu, and Wai—roamed with their handlers and sniffed amongst the other native wildlife (elephant seals, penguins, and fur seals) on their quest to determine whether the deadly invaders had survived. Fortunately, the rats were history, and the conservation effort was marked a success.

Where one chapter ends, another starts; such is the neverending book of conservation. To keep rodents off the island permanently, the SGHT prudently enforces safeguards. Travellers to South Georgia face examination of their persons and belongings, and government officials transfer them from major sea vessels to miniature vessels before landing. This makes it easier to keep eyes (and canine noses) on any vagabond rats and mice, since smaller vessels offer fewer places to hide.

As human beings (and dogs) work to return the South Georgia pipits’—and other seabirds’—home to their wings, hope washes over and renews the wildlife conservation front. Given that the triumphant primary actors, the SGHT and Friends of South Georgia Island, are non-profit organizations, federal agencies and other government institutes are clearly not the only ones that can fly to a species’ rescue. With funding, proper planning, perseverance, and global cooperation [in this case, networks spanning the United Kingdom (primarily Scotland, the SGHT’s home), the United States, New Zealand, and South Georgia], the inconceivable transforms into the imminent. As Professor Mike Richardson of the SGHT envisions, the win over the rodents on South Georgia will inspire others—yes, even “mere” citizens—to take a stand in protecting both native species and their habitats across the hemisphere.

(Colin Dwyer, National Public Radio)

Environment

Air pollution inequality widens between rich and poor nations

Injustice again accompanies the impoverished throughout the world. According to the World Health Organization (WHO), poor air quality (more air pollutants) equals poor health: of the roughly 7 million pollution-linked deaths worldwide, the highest percentage (45%) corresponds to chronic obstructive pulmonary disease, and another 25% corresponds to stroke, the second leading cause of death globally. An interactive map of annual mean air pollution measurements [PM2.5: micrograms of particulate matter less than 2.5 micrometers in diameter per cubic meter] highlights lower-income regions of Southeast Asia, Africa, and the Middle East with the greatest numbers (PM2.5 > 70 micrograms per cubic meter). Individual cities with monstrous PM10 peaks (micrograms of particulate matter less than 10 micrometers in diameter per cubic meter) include Delhi (292), Cairo (284), Dhaka (104), and Mumbai (104). North America, specifically the United States and Canada, on the other hand, breathes better-quality air overall (PM2.5 < 10 micrograms per cubic meter).

When comparing rich areas to poor areas, what accounts for the disparate distributions of air pollution? In economically struggling communities, dwellers can only afford cheap fuels for fire and heat for cooking and other everyday uses: coal, wood, or kerosene. Governmental policy setting standards and restrictions on PM10 and PM2.5 levels impacts air quality too, such as the United States’ long-standing Clean Air Act and China’s recent air pollution regulations. However, despite high-income countries’ regulations and air quality management, these dominions are not immune to miasma either; well-to-do cities such as Manchester and London fail to fall under the WHO-recommended PM2.5 threshold (10 micrograms per cubic meter). Thus, existing acts must be evaluated for shortcomings and amended, if not rewritten, for improvement. Jenny Bates, a Friends of the Earth member, suggests championing more research. Studies of air pollution levels over time, and of how certain practices affect those levels, pave the trail for effective policy measures. Research will also uncover pollutant levels in countries—mainly in Africa—currently missing these data.

(Jonathan Watts, The Guardian)


Science Policy Around the Web – May 18, 2018


By: Patrick Wright, Ph.D.

Suicide Prevention

Gaps Remain in U.S. State Policies on Suicide Prevention Training

Suicide is the 10th leading cause of death in the United States, with 45,000 people dying by suicide in 2016, according to the Centers for Disease Control and Prevention. Despite this, there is no universal requirement or standard for suicide prevention training across states, especially among healthcare professionals, according to a recent study in the American Journal of Public Health (AJPH) that aimed to assess the effectiveness of national guidelines released in 2012 by the U.S. Surgeon General and the National Action Alliance for Suicide Prevention. Given the proximity and dynamic at the healthcare professional-patient interface, clinicians and mental health experts are in a unique, critical position to explicitly address suicide risk in at-risk individuals. As of October 2017, all 50 states had a suicide prevention plan, but only 10 states—California, Indiana, Kentucky, Nevada, New Hampshire, Pennsylvania, Tennessee, Utah, Washington, and West Virginia—require healthcare professionals to complete suicide prevention training and to follow up with appropriate intervention. Policies in seven states only encourage training but do not require it. Even the duration and frequency of training vary extensively.

Jane Pearson, chair of the National Institute of Mental Health’s Suicide Research Consortium, stated, “When there’s someone in crisis you have to gather information very quickly and if you’re not asking the exact right questions you can miss someone’s intentions. The most pressing goal is to increase the person’s will to live so it’s greater than their will to die and buy time to get past the crisis, so they have a chance to work on problem solving.” Earlier work has shown that a majority of people who attempt suicide have seen a healthcare professional in the weeks and months prior to their attempt, emphasizing the significance of the potential opportunity in these healthcare professional-patient interactions.

The 2012 National Strategy for Suicide Prevention created by the Office of the U.S. Surgeon General and the National Action Alliance for Suicide Prevention outlined four strategic directions, including creating “supportive environments that will promote the general health of the population and reduce the risk for suicidal behaviors and related problems”, developing and implementing clinical and community-based preventive programs, providing treatment and care for high-risk patients, and surveying and evaluating suicide and its prevention nationwide.

Washington was the first state to mandate suicide assessment, treatment, and management training for healthcare providers, through the Matt Adler Suicide Assessment, Treatment, and Management Act of 2012 (House Bill 2366), with the state defining suicide assessment, treatment, and management training as one “of at least six hours in length that is listed on the Best Practices Registry of the American Foundation for Suicide Prevention and the Suicide Prevention Resource Center including, but not limited to: Applied suicide intervention skills training; assessment and management of suicide risk; recognizing and responding to suicide risk; or question, persuade, respond, and treat.”

The AJPH study posits that ensuring suicide prevention training is disseminated universally among health care professionals is not a task for legislation alone; accrediting bodies (e.g. the American Psychological Association) share the burden of guaranteeing that graduates are prepared to identify and aid patients who may be at risk for suicide. The study concludes, “Better equipping health care professionals to assess and provide care to patients at risk for suicide may contribute to a meaningful decline in the rate of suicide across the nation, and it is the responsibility of policymakers, health care professionals, and citizens to advocate change.”

(Cheryl Platzman Weinstock, Reuters)

Animal Welfare

Animal Tests Surge Under New U.S. Chemical Safety Law

The Frank R. Lautenberg Chemical Safety for the 21st Century Act of 2016 (H.R. 2576) amended the 1976 Toxic Substances Control Act (TSCA) (S. 3149), the primary chemicals management law in the United States, to require the Environmental Protection Agency (EPA) to “minimize, to the extent practicable, the use of vertebrate animals in testing chemicals” and states, “Any person who voluntarily develops information under TSCA must first attempt to develop the information by an alternative or nonanimal test method or testing strategy before conducting new animal testing.” It also required the EPA to explicitly develop a strategic plan to promote the development and implementation of alternative test methods that do not require the use of animals. However, despite the goals of the Lautenberg Chemical Safety Act, there has reportedly been a recent increase in the number of animal tests requested or required by the EPA.

In March 2018, the EPA released for public comment a draft of its strategic plan—its proposed long-term strategy for increasing the use of animal research alternatives, including computer modeling, biochemistry, and cell culture approaches. In response, People for the Ethical Treatment of Animals (PETA) and the Physicians Committee for Responsible Medicine (PCRM) quantified the number of EPA TSCA-related animal tests and animals used over the last three years. They found that the number of animal tests requested or required by the EPA increased substantially last year, with the total number of tests and animals involved jumping more than an order of magnitude, from approximately 6,500 animals across 37 tests requested or required to over 75,000 animals across 331 tests. They issued a response letter stating, “The dramatic increase we have documented indicates that EPA is failing to balance its responsibilities to determine whether chemicals present unreasonable risks with its Congressional mandate to reduce and replace the use of vertebrate animals in chemical testing.”

Unfortunately, the underlying cause of this trend is not known. It is possible that the Lautenberg Chemical Safety Act’s stricter requirements on a larger range of chemicals compared to the original TSCA may be driving additional testing and subsequent data collection in order to comply. Moreover, Kristie Sullivan, PCRM’s vice president of research policy, said that EPA staff may need more training in, and funding for, animal-research alternatives, and “to stay abreast of new developments in toxicology, so that they can quickly incorporate new methods and kinds of data into their decision-making process.”

In contrast, implementation may be slow due to the EPA’s need to adequately pursue alternatives while adapting to the new law. Daniel Rosenberg, an attorney with the Natural Resources Defense Council, emphasized the importance of taking whatever time is necessary to validate alternative testing strategies: “We need to ensure that the alternative testing methods that are implemented are able to actually identify toxicity, exposure and potential adverse effects of chemicals.”

The comment period on EPA’s draft strategy for reducing animal tests closed earlier this month, with the agency required to release its final plan by the end of June 2018.

(Vanessa Zainzinger, Science)


Science Policy Around the Web – May 11, 2018


By: Mohor Sengupta, PhD


Drug prices

Why Can’t Medicare Patients Use Drugmakers’ Discount Coupons?

With high drug prices, affordability of specialized medicines is a matter of concern for many individuals, especially those on life-saving brand-name drugs.

Manufacturers of brand-name medicines provide discount coupons to people with private health insurance. Such discounts are denied to people with federal healthcare plans such as Medicare or Medicaid. For example, for one patient on Repatha (a cholesterol-reducing drug), the co-payment is $618 per month under the Medicare drug plan, but it is only $5 for patients with commercial insurance plans. This discrepancy amounts to a “double standard” because the discount is arguably denied to the people who need it most: the retired population enrolled (compulsorily) in federal healthcare programs.

Drug manufacturers have an incentive to offer discounts on branded medicines, as the coupons increase the likelihood of purchase and result in greater access to and demand for the products. While these discount coupons are immensely beneficial for life-threatening conditions for which generic drugs are not available, a 2013 analysis showed that a lower-cost generic alternative or FDA-approved therapeutic equivalent was available for 62% of 374 brand-name drugs.

The federal government has argued that with the discount coupons, patients might overlook or be discouraged from buying cheaper variants of the brand-name drug. Even if a patient chooses a brand-name drug with a discount coupon over a cheaper alternative, their health insurance plan still has to pay for the drug, and that amount may be more than Medicare or Medicaid is willing to pay. This concern underlies the federal anti-kickback statute, which prohibits drug manufacturers from providing “payment of remuneration (discounts) for any product or service for which payment may be made by a federal health care program”.

One important question is why drug makers sell brand-name drugs at much higher prices when cheaper, generic options are available. In the present scenario, insurance companies should make the judgement about whether they are willing to cover brand-name drugs for which generic alternatives exist. Doctors often prescribe brand-name drugs without considering their long-term affordability for patients. It is the responsibility of doctors and insurance providers alike to determine the best possible drug option for a patient.

Taking in both sides of the picture, discounts should be applied on a case-by-case basis. They should be permitted for specialized drugs for which generic alternatives are not available and which are usually used for severe or life-threatening conditions. Currently, for people with such conditions who are on federal healthcare plans, affordability is a major challenge.

(Michelle Andrews, NPR)

 

EPA standards

EPA’s ‘secret science’ rule could undermine agency’s ‘war on lead’

Last month, the Environmental Protection Agency (EPA) administrator, Scott Pruitt, issued a “science transparency rule” according to which studies that are not “publicly available in a manner sufficient for independent validation” cannot be used when crafting a regulation. This rule is at loggerheads with Pruitt’s “war on lead” because a majority of studies on lead toxicity are observational, old, and impossible to validate without consciously exposing study subjects to lead.

Lead is a potent neurotoxin with long-term effects on central nervous system development, and it is especially harmful to children. There are several studies showing lead toxicity, but many do not meet the inclusion standards set by the EPA’s new science transparency rule. Computer models developed to assess lead toxicity, which played an important role in the EPA’s past regulations on lead, amalgamate all of these studies, including the ones that cannot be validated. If the science transparency rule is retroactive, it would mean trashing these models: an entire computer model can be rendered invalid if just one of its component studies fails the transparency criteria.

Critics say that the transparency measure will be counterproductive as far as lead regulations are concerned. “They could end up saying, ‘We don’t have to eliminate exposure because we don’t have evidence that lead is bad,’” says former EPA staffer Ronnie Levin. Another hurdle is the proposed data-sharing requirement: lead studies tend to be epidemiological, and authors might be unwilling to share confidential participant data.

Bruce Lanphear of Simon Fraser University in Canada is skeptical of the EPA’s intentions because the agency has not imposed similar transparency measures on chemical companies such as pesticide producers.

Finally, this rule could set different standards for lead safety levels across federal agencies. Currently, the Centers for Disease Control and Prevention (CDC) and the Department of Housing and Urban Development (HUD) use 5 micrograms of lead per deciliter of blood as the reference level. The EPA rule could produce a new reference level, creating discrepancies for those complying with different agencies across the U.S. government.

(Ariel Wittenberg, E&E News)

 

Have an interesting science policy link? Share it in the comments!


Science Policy Around the Web – May 8, 2018


By: Saurav Seshadri, PhD


Environment

EPA Cites ‘Replication Crisis’ in Justifying Open Science Proposal

The U.S. Environmental Protection Agency (EPA) may soon be using far less scientific evidence to inform its policy positions. EPA administrator Scott Pruitt recently announced that, in an effort to promote reproducibility and open access to information, the EPA will no longer consider studies whose underlying data or models are not publicly available. However, such studies often represent the ‘best available’ data, which the EPA is legally obliged to consider, and they form the basis of, among others, policies limiting particulate matter in the air. Several studies that support the health and economic benefits of lower particulate limits do so by using detailed medical information whose disclosure would compromise patient confidentiality. The so-called HONEST (Honest and Open New EPA Science Treatment) Act, put forth by House Republicans, aims to suppress such ‘secret science’; its detractors say that it is a poorly disguised gift to industry interests, conveniently timed to take effect just before a scheduled review of pollution limits.

Opposition to the policy has been building steadily. A letter signed by 63 House Democrats asking for an extension of the open comment period has so far been unsuccessful. A separate letter signed by almost a thousand scientists, along with comments from several professional associations, has also been ignored – perhaps unsurprisingly, given Pruitt’s parallel effort to bar relevant scientists from EPA advisory boards. The scientist behind the article calling attention to the ‘reproducibility crisis’ cited by Pruitt has also spoken out, writing that simply ‘ignoring science that has not yet attained’ rigorous reproducibility standards would be ‘a nightmare’.

Perhaps the most effective response has come from scientists who are outpacing the bureaucracy.  In a pair of papers published last year, a biostatistics and public health group at Harvard used air quality data, Medicare records, and other public sources to reiterate the health risks posed by air pollution.  Such studies could not be excluded by the new EPA policy and may influence regulators to keep particulate limits low.  Another potential roadblock to implementing changes could be the controversy surrounding Pruitt himself.  The administrator has been the target of several federal probes, following a series of scandals regarding his use of government funds for purposes such as a 24-hour security detail, soundproof office, and first class travel.  Bipartisan calls for his resignation have made his future at the EPA, and the quick implementation of a Republican agenda there, uncertain.

(Mitch Ambrose, American Institute of Physics)

Science funding

NIH’s neuroscience institute will limit grants to well-funded labs

With a budget of $2.1 billion, the National Institute of Neurological Disorders and Stroke (NINDS) is the fifth largest institute at NIH.  Yet each year many investigators are constrained by a lack of funds, while some large labs have accumulated so many grants that their principal investigator can only spend a few weeks per year on a given project.  To address this disparity, NINDS recently announced a plan to revamp implementation of an existing NIH policy, in which grant applications from well-funded labs must go through an additional review by a special council. While the current secondary review rarely rejects such applications, NINDS’ policy takes two steps to make the process more stringent: first, it increases the number of labs that would undergo review, to include labs that would cross the $1 million threshold with the current grant; second, it sets higher standards for review, requiring applications from such labs to score in the top 7% of all proposals to be successful.
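As a rough sketch of how those two steps combine, the hypothetical Python below encodes the screen described above in simplified form. The function names and example numbers are invented for illustration; the real NINDS review involves far more than a funding total and a percentile score.

```python
# Minimal, hypothetical sketch of the two-step NINDS screen described above.
# Simplified: real review considers many more factors than these two numbers.

def needs_special_council_review(current_funding: float, new_grant: float,
                                 threshold: float = 1_000_000) -> bool:
    """Step 1: flag applications from labs that would cross the $1 million
    funding threshold if the new grant were awarded."""
    return current_funding + new_grant > threshold

def passes_stricter_standard(percentile_score: float, cutoff: float = 7.0) -> bool:
    """Step 2: flagged applications must score in the top 7% of all proposals."""
    return percentile_score <= cutoff

# Example: a lab holding $900k applies for a $250k grant and scores in the top 10%.
flagged = needs_special_council_review(900_000, 250_000)
# Ignoring the ordinary payline that non-flagged applications still face:
funded = (not flagged) or passes_stricter_standard(10.0)
print(f"flagged for special review: {flagged}, funded under new policy: {funded}")
```

Under this toy example, the application is flagged and, scoring outside the top 7%, would not be funded under the new policy even though it might have succeeded before.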

Responses to the idea have been tentative, despite widespread support for its objective.  One potential cause for concern is its perceived similarity to the Grant Support Index (GSI), a previous NIH initiative with a similar goal (i.e., reallocating resources to sustain less-established but deserving researchers). The GSI sought to achieve this goal by placing a cap on the number of grants that a lab could receive, using a point system. However, this caused an uproar among scientists, who, among other issues, saw it as punishing or handicapping labs for being productive – it was quickly revised to create the Next Generation Researchers Initiative, a fund earmarked for early and mid-stage investigators, for which each institute is responsible for finding money.  The new policy appears to be a step towards meeting this obligation, and not, NINDS insists, a return to the GSI.

The impact of the new policy will probably be clearer after NINDS’ next round of grant reviews takes place, in January 2019.  So far, only the National Institute of General Medical Sciences (NIGMS) has a comparable policy, which has been in place since 2016.  The success of these approaches may well shape future cohorts of NIH-funded scientists – cutoffs and uncertainty are not unique to neuroscience, and other institutes are likely to be paying close attention.

(Jocelyn Kaiser, Science)

Have an interesting science policy link? Share it in the comments!


Science Policy Around the Web – May 4, 2018


By: Rachel Smallwood Shoukry, PhD


Environment

Hawaii might be about to ban your favorite sunscreen to protect its coral reefs

The state legislature in Hawaii has just passed a bill banning over-the-counter sale of sunscreens that contain oxybenzone and/or octinoxate, which encompasses about 70% of sunscreens. The ban was proposed and passed due to concern over the effect of the two chemicals on coral reefs. If the bill is signed into law by the governor, it will be the first law banning sunscreens to protect a marine environment. Lawmakers hope that by preventing these sunscreens from being purchased in Hawaii, they can greatly reduce the amount of the chemicals that end up in the water after being washed off of swimmers or rinsed off in showers.

Proponents of the ban cite emerging research showing these two chemicals to be harmful to coral reefs. Though only a few studies have been published at this point, their data indicate that the chemicals deplete the reefs of nutrients and cause them to be bleached white; the chemicals also pose a potential threat to other aquatic life. Researchers found oxybenzone in the waters off Maui at levels sufficient to cause bleaching in warm waters. Coral reefs are vital for marine ecosystem health, protect coastlines from waves, and are important for tourism, contributing billions of dollars to the tourism industry each year. They have been greatly damaged and threatened in recent decades, with scientists predicting they will soon disappear completely unless significant interventions take place. Locations around the world are implementing measures to preserve their reefs.

There are many opponents of the ban, however. Some scientists feel that the effect of the chemicals is a small contribution among a large number of factors: rising ocean temperatures, overfishing, other ocean pollutants, and invasive aquatic species all contribute to dying reefs. The Hawaii Medical Association and sunscreen manufacturers believe that more scientific study is needed to merit such drastic action, especially since oxybenzone and octinoxate are among the most common sunscreen ingredients for blocking UVA and UVB rays, which are known to cause skin cancer. There are sunscreens without these chemicals, however, and the increasing awareness of the threats to coral reefs has inspired the development of natural alternatives.

(Lindsey Bever, The Washington Post)

Privacy

In Hunt For Golden State Killer, Investigators Uploaded His DNA To Genealogy Site

Progress in the field of genetics brings numerous exciting possibilities and implications. These possibilities are accompanied, however, by many difficult questions. Police recently used a publicly available genetic database to find and arrest the Golden State Killer – one of the country’s most notorious serial killers and rapists. Although DNA forensics was only in its earliest stages when the Golden State Killer was winding down his activity, police obtained his DNA from several of the crime scenes.

After having no luck with law enforcement genetic databases, police uploaded his DNA profile to GEDmatch.com. The site allows people to upload raw genetic data and find others whose data match theirs to some degree, and it warns users that it does not guarantee the privacy or security of their data. The matches on the site led the police to a relative, and they were able to look through that person’s family tree and identify a likely suspect based on a profile built during the investigation of the crimes. They then obtained the suspect’s DNA from a discarded item; when tested, it came back as a match, and Joseph James DeAngelo was arrested.
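To make the matching step concrete, here is a minimal, hypothetical Python sketch of the kind of relative search such a site performs. Every detail—the marker names, genotypes, and user IDs—is invented for illustration, and real services compare shared DNA segments (measured in centimorgans) rather than simple marker overlap.

```python
# Toy sketch of genealogy-style DNA matching; all data below is made up.

def shared_fraction(profile_a: dict, profile_b: dict) -> float:
    """Fraction of common markers at which two profiles carry the same genotype."""
    common = set(profile_a) & set(profile_b)
    if not common:
        return 0.0
    same = sum(profile_a[m] == profile_b[m] for m in common)
    return same / len(common)

# Hypothetical user database and a query profile built from crime-scene DNA.
database = {
    "user_123": {"rs001": "AA", "rs002": "AG", "rs003": "CC", "rs004": "TT"},
    "user_456": {"rs001": "AA", "rs002": "AG", "rs003": "CT", "rs004": "TT"},
    "user_789": {"rs001": "GG", "rs002": "GG", "rs003": "CC", "rs004": "CC"},
}
query_profile = {"rs001": "AA", "rs002": "AG", "rs003": "CC", "rs004": "TT"}

# Rank users by similarity; close relatives of the DNA's source would score high.
matches = sorted(database.items(),
                 key=lambda kv: shared_fraction(query_profile, kv[1]),
                 reverse=True)
for user, profile in matches:
    print(user, round(shared_fraction(query_profile, profile), 2))
```

The highest-scoring users point investigators toward a family, after which conventional genealogy narrows the search to individual suspects.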

This story has a good ending – a murderer brought to justice – but it does pose questions that have to be considered in an era of easily obtainable genetic data, when millions of people are submitting their DNA to be analyzed. The family member of DeAngelo who submitted a genetic sample probably did not realize that it would lead to the arrest of a relative, and DeAngelo likely had no idea his relative had done this or that his identity could be traced through it. Some geneticists have cautioned against this very thing, as well as other scenarios that would be equally undesirable (and a point of concern even for law-abiding individuals). While identifying criminals is advantageous, this situation does highlight the fact that users probably do not consider all of the potential ways their data could be used and all the potential parties who could access it, either legitimately or through illegal means. There are federal and state laws that prohibit discrimination based on genetics, and an individual’s genetic information is now considered protected health information and is thus protected under HIPAA. However, beyond that, the large databases holding millions of users’ genetic data primarily regulate themselves, even more so when an individual chooses to share their genetic information. Experts recommend reading the terms of service very carefully and giving serious thought to any decision regarding sharing of your genetic information.

(Laurel Wamsley, NPR)

Have an interesting science policy link? Share it in the comments!


Science Policy Around the Web – May 1, 2018


By: Liu-Ya Tang, PhD


Artificial Intelligence

With €1.5 billion for artificial intelligence research, Europe pins hopes on ethics

While artificial intelligence (AI) brings convenience to modern life, it can also raise ethical issues. For example, AI systems are generated through machine learning: systems usually have a training phase in which scientists “feed” them existing data and they “learn” to draw conclusions from that input. If the training dataset is biased, the AI system will produce biased results. To put ethical guidelines on AI development and to catch up with the United States and China in AI research, the European Commission announced on April 25 that it would spend €1.5 billion on AI research and innovation through 2020.
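To illustrate the training-phase concern, below is a minimal, hypothetical sketch in plain Python. The “hiring” scenario, groups, and approval rates are all invented; the only point is that a model which simply learns rates from biased historical decisions reproduces that bias in its recommendations.

```python
# Minimal sketch: a toy "hiring" model trained on biased historical data.
# All names and numbers are hypothetical; the point is only that a model
# trained on skewed labels reproduces the skew.
import random
from collections import defaultdict

random.seed(0)

def make_biased_history(n=10_000):
    """Generate synthetic past decisions where equally qualified candidates
    from group 'B' were approved far less often than those from group 'A'."""
    rows = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        qualified = random.random() < 0.5            # same skill distribution in both groups
        if group == "A":
            approved = qualified and random.random() < 0.9
        else:
            approved = qualified and random.random() < 0.3   # historical bias against B
        rows.append((group, qualified, approved))
    return rows

def train(rows):
    """'Learn' the approval rate for every (group, qualified) combination."""
    counts = defaultdict(lambda: [0, 0])              # key -> [approvals, total]
    for group, qualified, approved in rows:
        counts[(group, qualified)][0] += int(approved)
        counts[(group, qualified)][1] += 1
    return {k: a / t for k, (a, t) in counts.items()}

model = train(make_biased_history())

# The trained model recommends approval far less often for qualified 'B'
# candidates, even though qualification was identical by construction.
print("P(approve | A, qualified):", round(model[("A", True)], 2))
print("P(approve | B, qualified):", round(model[("B", True)], 2))
```

Run on this synthetic history, the learned approval rate for equally qualified candidates differs roughly threefold between the two groups—exactly the kind of inherited unfairness that ethical guidelines for AI aim to address.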

Although the United States and China have made great advances in the field, the ethical issues stemming from AI may have been neglected there, as both practice “permissionless innovation”, said Eleonore Pauwels, a Belgian ethics researcher at the United Nations University in New York City. She spoke highly of Europe’s plan, which is expected to enhance fairness, transparency, privacy and trust, but the outcome is still unknown. As Bernhard Schölkopf, a machine learning researcher at the Max Planck Institute for Intelligent Systems in Tübingen, Germany, put it, “We do not yet understand well how to make [AI] systems robust, or how to predict the effect of interventions”. He also cautioned that focusing only on potential ethical problems would impede AI research in Europe.

Why does European AI research lag behind the United States and China? First of all, Europe has strong AI research but a weak AI industry. Startup companies with innovative technologies, which are often risky ventures, cannot obtain enough funding because old industrial policies favor big, risk-averse firms; the Commission’s announcement therefore underscores the importance of public-private partnerships to support new technology development. The second reason is that academic salaries are not high enough to keep AI researchers from leaving for the private sector. To solve this problem, a group of nine prominent AI researchers has asked governments to set up an intergovernmental European Lab for Learning and Intelligent Systems (ELLIS), which would be a “top employer in machine intelligence research” and offer attractive salaries as well as “outstanding academic freedom and visibility”.

(Tania Rabesandratana, Science)

Public health

Bill Gates calls on U.S. to lead fight against a pandemic that could kill 33 million

Pandemic diseases—historically caused mainly by cholera, bubonic plague, smallpox, and influenza—can be devastating to world populations. Several outbreaks of viral diseases have been reported in scattered areas around the world, including the 2014 Ebola epidemic, leading to growing concerns about the next pandemic wave. During an interview conducted last week, Bill Gates discussed the issue of pandemic preparedness with a reporter from The Washington Post. Later, he gave a speech on the challenges associated with modern epidemics before the Massachusetts Medical Society.

The risk of a pandemic is high, as the world is highly connected and new pathogens constantly emerge through naturally occurring mutations; modern technology has also brought the possibility of bioterrorism attacks. In less than 36 hours, an infectious pathogen can travel from a remote village to major cities on any continent and become a global crisis. During his speech, Gates cited a simulation by the Institute for Disease Modeling, which estimates that nearly 33 million people worldwide could be killed by a highly contagious and lethal airborne pathogen like the 1918 influenza. He said “there is a reasonable probability the world will experience such an outbreak in the next 10-15 years.” The risk becomes higher when government funding for global health security is inadequate: the U.S. Centers for Disease Control and Prevention is planning to dramatically downsize its epidemic prevention activities in 39 out of 49 countries, which would make these developing countries even more vulnerable to outbreaks of infectious diseases.

Gates expressed this urgency to President Trump and senior administration officials at several meetings, and he also announced a $12 million Grand Challenge, in partnership with the family of Google Inc. co-founder Larry Page, to accelerate the development of a universal flu vaccine. He highlighted scientific and technical advances in the development of better vaccines, antiviral drugs, and diagnostics, which could provide better preparation for, prevention of, and treatment of infectious disease. Beyond this, he emphasized that the United States needs to establish a strategy to utilize and coordinate domestic resources and to take a global leadership role in the fight against a pandemic.

(Lena H. Sun, The Washington Post)

Have an interesting science policy link? Share it in the comments!
