Science Policy For All

Because science policy affects everyone.

Posts Tagged ‘environment’

Science Policy Around the Web – April 2, 2019


By: Patrice J. Persad, Ph.D.

Image by Jason Gillman from Pixabay

Worrisome nonstick chemicals are common in U.S. drinking water, federal study suggests

What lurks in our drinking water, and what it does to organismal health, may be more of a mystery than what resides in the deep recesses of our oceans. In a recent investigation conducted by the United States Geological Survey and the Environmental Protection Agency (EPA), drinking water samples tainted with manmade per- and polyfluoroalkyl substances (PFAS) were analyzed. PFAS, which put the “proof” in waterproof items, are substances of concern, or, more aptly, contaminants of emerging concern (CECs), given their potential carcinogenicity and permanence in ecosystems. Perfluorooctanoic acid (PFOA), a PFAS no longer produced domestically, was found in one sample at a concentration over 70 nanograms per liter (ng/l), and a trio of other PFAS surpassed this concentration as well. Federal agencies have yet to issue an enforceable standard. However, the Centers for Disease Control and Prevention (CDC) attests that the existing cut-off of 70 ng/l is unacceptable in that it is not sufficiently low, or conservative, with respect to human health.

The Environmental Working Group (EWG) suspects that over 100 million individuals in the U.S. drink water containing PFAS. Citizens are currently advocating for authorities to test drinking water samples and disclose PFAS concentrations. Without set standards, accountability for future harms to health is up in the air. Only through discussion among the public, policy makers, the research community, and parties that formerly or currently produce PFAS can we set safeguards to protect both our water supply and our well-being.

(Natasha Gilbert, Science)


To Protect Imperiled Salmon, Fish Advocates Want To Shoot Some Gulls

In recreating the fundamental question “Who stole the cookies from the cookie jar?”, nature’s version spins off as “Who stole the juvenile salmon from Miller Island?” In this spiraling whodunit, an unexpected avian culprit surfaces: the gull. According to avian predation coordinator Blaine Parker, surveys revealed that a fifth of imperiled juvenile salmon near channels flowing out of dams were whisked away by gulls. Gulls also spirited these juvenile fish away from other avian predators, such as Caspian terns. Parker maintains that not every gull contributes to the species’ decline; gulls can assist with population control of other birds that feast on the juveniles. He therefore supports lethal management: killing the individual gulls that prey on juvenile salmon.

Although there is precedent for sacrificing avian species to secure juvenile salmon, several entities denounce lethal management of wayward gulls affecting the young fish’s survival rates. The Audubon Society of Portland points out that the Army Corps of Engineers’ modifications to dams for warding off gulls and other airborne predators are slipshod and ineffective, if not nonexistent. The Army Corps, despite this criticism, avows that killing specific gulls is only a last resort. From Parker’s and these organizations’ opposing viewpoints, a new mystery migrates to the surface: will killing avian predators populating dams and waterways have a significant impact on the endangered salmon’s survival? Collaborative research on ecological impacts may be a way to tell, or to reassess, the futures of both juvenile salmon and gulls.

(Courtney Flatt, Northwest Public Broadcasting/National Public Radio)



Have an interesting science policy link? Share it in the comments!


Written by sciencepolicyforall

April 3, 2019 at 10:32 am

Science Policy Around the Web – March 18, 2019


By: Allison Cross, Ph.D.

Source: Pixabay

Scientists track damage from controversial deep-sea mining method

The extraction of rare and valuable metals and minerals from the deep sea is highly attractive to mining companies.  Scientists, however, have long raised concerns about potential harmful effects of these activities on marine ecosystems.  Next month, the mining company Global Sea Mineral Resources is scheduled to harvest precious metals and minerals on the seafloor in the remote Pacific Ocean for eight days with a team of scientists working alongside them.  The scientists will be using deep-sea cameras and sensors to monitor sediment plumes created by the mining activity.  

Scientists are concerned that sediment plumes created during deep-sea mining could extend tens or hundreds of meters above the seafloor and “bury, smother and toxify” the marine communities in these regions.  The research expedition scheduled for next month is intended to help scientists understand the potential impact of deep-sea mining and inform the development of an international code of conduct for deep-sea commercial mining.

The code of conduct for deep sea commercial mining will be created by the International Seabed Authority (ISA), an organization founded in 1994 to organize, regulate and control all mining activity in international waters.  The ISA is planning to finalize the code by 2020, allowing companies that have been granted licenses to extract minerals from the deep sea to begin full scale mining in the Pacific Ocean.  

Though the experiment scheduled for next month will provide key insight into how long it takes for sediment plumes to resettle and how far they can travel, it is simply too short to gauge the potential long-term effects of mining. Craig Smith, an oceanographer at the University of Hawaii at Manoa in Honolulu, cautions, “We will not really understand the actual scale of mining impacts until the effects of sediment plumes from full-scale mining are studied for years.”

(Olive Heffernan, Nature Briefing)

U.S. blocks U.N. Resolution on Geoengineering

Last week, during the fourth session of the UN Environment Assembly (UNEA) in Nairobi, the United States, Saudi Arabia, and Brazil joined together to block a resolution aimed at studying the potential risks of geoengineering.  “Geoengineering”, also referred to as climate engineering or climate intervention, aims to mitigate effects of global warming using techniques like solar radiation management and carbon dioxide removal.

Geoengineering technologies are not yet operational, and while proponents believe these techniques could help curb the impact of climate change, opponents worry about their potential risks to both people and nature. Notably, one proposed method of solar radiation management involves using aerosols to reflect a portion of inbound sunlight back out to space. Research in this area is still in its infancy, and some worry that infusing the atmosphere with aerosols could lead to undesired side effects, like severe weather.

The proposal raised at the UNEA meeting last week, backed by Switzerland and nine other nations, aimed to direct the U.N. Environment Programme to study the implications of geoengineering and compile a report by next year on current scientific research in this area. 

While there is some consensus that the issues surrounding geoengineering technologies need to be explored, countries disagree on who should be overseeing these efforts. It has been reported that the United States prefers questions about geoengineering to be dealt with by the Intergovernmental Panel on Climate Change (IPCC), rather than by UNEA. The IPCC is reported to be assessing geoengineering as part of its next report, set to be published in 2021 or 2022.

(Jean Chemnick, Scientific American)

Have an interesting science policy link? Share it in the comments!

Written by sciencepolicyforall

March 19, 2019 at 7:45 pm

Intellectual property theft and its effects on US-China trade relations


By: Neetu Gulati, Ph.D.

Source: Wikimedia

China and the US are currently in the midst of a trade war that, if not resolved by March 1, 2019, will lead to another increase in tariffs by the US. This trade war, which started over the US accusing China of stealing intellectual property from American companies, has already affected the economies of the two countries and could have global effects. The US has evidence that information including biomedical research breakthroughs, technological advances, and food product formulations has been stolen. In response to these illicit trade practices, the US imposed tariffs on Chinese imports, marking the beginning of the trade war.

So how did we get here? 2019 marks forty years of diplomatic relations between the United States and China, which officially began on January 1, 1979. Since relations began, the two countries have benefited from ongoing trade, and China has become the largest goods trading partner with the US. Bilateral economic relations have increased from $33 billion in 1992 to over $772 billion in goods and services in 2017.  Despite strong economic ties, relations between the two countries have come under strain in recent years. The US State Department has identified concerns over military conflict in the South China Sea, counter-intelligence and security issues, and the trade deficit, among other issues. These issues came to a head in April 2018 when President Donald J. Trump issued a statement that China had stolen America’s intellectual property and engaged in illegal trade practices. In response, the US imposed additional tariffs on approximately $50 billion worth of Chinese imports. China then countered with tariffs on US imports, and thus a trade war between the two countries began.

To understand how intellectual property, or IP, fits into the trade war, it is important to first understand what it is. According to the World Intellectual Property Organization, IP “refers to creations of the mind, such as inventions; literary and artistic works; designs; and symbols, names and images used in commerce.” More simply, IP is something created or invented through human intellect, but not necessarily a tangible product. These products often have important scientific implications, as the umbrella of IP can cover genetically engineered crops, newly developed technologies and software, and new therapeutics, just to name a few. IP is legally protected through means such as patents, trademarks, and copyright, which allow people to gain recognition and financial benefits from their creations. These protections are country-specific, and the US Patent and Trademark Office gives guidance about protecting IP overseas, including in China. The process of transferring IP from the creator to another entity, often for distribution purposes, is known as technology transfer. This process is at the heart of the accusation of theft of American IP.

According to a seven-month long investigation done by the United States Trade Representative (USTR), China’s unreasonable technology transfer policies meant they did not live up to the commitments made when joining the World Trade Organization. The report found that Chinese laws require foreign companies to create joint ventures with domestic Chinese companies in order to sell goods within the country. The investigation by USTR found that “China’s regulatory authorities do not allow U.S. companies to make their own decisions about technology transfer and the assignment or licensing of intellectual property rights.  Instead, they continue to require or pressure foreign companies to transfer technology as a condition for securing investment or other approvals.” By pushing for technology transfer, these laws opened up American companies to theft of their IP. Stolen IP has included things like software code for a wind turbine, genetically modified corn seeds, the idea behind a robot named Tappy, and even the formulation for the chemical that makes Oreo filling white.

Beyond the theft of information tied to goods entering China, it is also possible that Chinese workers in the United States may be stealing IP and sending it back to their home country. For example, a Chinese scientist known as ‘China’s Elon Musk’ was accused by his former research advisor of stealing research done at Duke University and replicating it in China for his own gain. A former assistant director of counterintelligence at the FBI suspects that the Chinese scientist was sent by the Chinese government intentionally to steal IP. This was not an isolated incident, either. According to a report from an advisory committee to the National Institutes of Health (NIH), research institutions in the US may have fallen victim to a small number of foreign researchers associated with China’s “Talents Recruitment Program,” which the National Intelligence Council identified as an effort “to facilitate the legal and illicit transfer of US technology, intellectual property and know-how.” This comes mere months after the NIH announced that it had identified undisclosed financial conflicts between US researchers and foreign governments. Without giving details of specific countries, NIH Director Francis Collins reported to a Senate Committee hearing that “the robustness of the biomedical research enterprise is under constant threat.” Nevertheless, these threats should not hinder the research enterprise. During a hearing in April 2018, House Science Committee Chair Lamar Smith remarked, “on the one hand, we must maintain the open and collaborative nature of academic research and development. On the other, we must protect our research and development from actors who seek to do us harm.”

The balance between research collaboration and theft is delicate. Information sharing is increasingly necessary as scientific pursuits become more interdisciplinary in nature, and can lead to more productivity in research. However, voluntary collaboration is different from unwilling or coerced transfer of ideas. The ability of US scientists and entrepreneurs to innovate and create new IP is an important driver of the American economy, and further allows for the ability to research new scientific pursuits. Not only does IP theft undermine the incentive and ability for Americans to innovate, it has had drastic negative effects on the American economy, with annual losses estimated to be between $225 billion and $600 billion according to a report put out by the IP Commission. These losses directly affect those who own and/or license IP, as well as those who are associated with these companies or individuals. This can then lead to downsizing or cutting jobs, further harming American science and technology industries. It is for this reason that the US responded so strongly against the evidence of IP theft.

In response to the accusations from the US, Chinese President Xi Jinping promised to resolve the “reasonable concerns” of the US regarding IP practices. The Chinese government announced punishments that could restrict Chinese companies from state funding support over IP theft, and at the G20 Summit in December 2018 the Presidents of the two nations agreed to a 90-day truce, which will end March 1, 2019.

The two countries are currently working on a trade deal to end the escalating tariffs, which would lessen tensions between the world’s two largest economies. The US wants China to commit to buying more American goods and services, and to agree to end the practice of requiring American companies to give technology transfers in order to do business in China. Without hashing out details, China has agreed to increase imports of U.S. agriculture, energy, industrial products and services. Delegations from the two countries will meet again in mid-February in China to continue negotiating. Trump was optimistic that the two nations would be able to make a deal before the deadline, saying, “I believe that a lot of the biggest points are going to be agreed to by me and him.”  

Have an interesting science policy link? Share it in the comments!

Written by sciencepolicyforall

February 7, 2019 at 9:39 pm

Saving the Chesapeake Bay – Home to 18 million people


By: Hsiao Yu Fang, Ph.D.

Source: Flickr

The Chesapeake Bay is the largest U.S. estuary, where freshwater from rivers and streams flows into the ocean and mixes with seawater, making it a rich environment with abundant wildlife. Every year, the Bay produces 500 million pounds of seafood. The entire Chesapeake Bay watershed, which includes six states (New York, Pennsylvania, Maryland, Virginia, West Virginia, and Delaware) and the District of Columbia, is home to 3,600 species of plants and animals and more than 18 million people. Importantly, the actions of these 18 million people directly affect the health of the Bay. To quote the movie Finding Nemo, “All drains lead to the ocean.” Due to waterborne nutrient pollution from human-produced waste and runoff from cities and farms, the Bay has been listed on the country’s “impaired waters” list for decades. Thankfully, recent policy measures to regulate the environmental impact of human activity on the Bay have shown promising results, suggesting that with further efforts the Bay's health could be fully restored.

At one point, the conditions of the Chesapeake Bay seemed almost irreversible. Years of population growth and pollution led to a significant decline in animal species, affecting commercial and recreational fishing as well as tourism. Scientists have shown that about one-third of the nitrogen in the Chesapeake comes from air pollution. Pollution in the air emitted from power plants and vehicles is carried over long distances via weather conditions and eventually deposits into the Bay’s waters. As air pollution can travel thousands of miles, the region over which air pollutants are capable of impacting the Bay is known as the airshed; this area is about nine times as large as the Bay watershed. Excess nitrogen and phosphorus pollution in the Chesapeake cause a biological chain reaction that results in “dead zones” – areas with minimal amounts of oxygen. This phenomenon worsens in the summer, when heat and pollution fuel algae blooms, blocking sunlight and depleting life-sustaining oxygen underwater. Aquatic life including fish, crabs, and oysters suffocate in these areas of the Bay affected by dead zones. The Bay used to yield tens of millions of bushels of oysters. Today the annual catch has fallen to less than one percent of historic records.

There have been several attempts through the years to restore the Bay. The Clean Water Act of 1972 reduced industrial pollution to the Bay, though it fell short of its promises of transforming the Bay into “fishable, swimmable” waters. In 1984, the six states within the Bay watershed embarked on another cleanup plan, which again failed to show lasting improvements. In 2010, the Chesapeake Clean Water Blueprint was established, which is the largest water cleanup plan ever managed by the US government. Using the powers granted by the Clean Water Act, the Environmental Protection Agency (EPA) issued new pollution limits for nitrogen, phosphorus, and sediment feeding into the Bay. Subsequently, the six Bay states and the District of Columbia announced formal plans to meet the EPA limits by 2025. What makes the Blueprint unique compared to previous failed attempts is that it will impose penalties on states that fail to act.  Each state is required to reach two-year incremental milestones of pollution reduction. Ideally, once the Blueprint fully achieves its goals, the Bay should no longer be on the impaired waters list.

Almost a decade has passed since restoration efforts under the Chesapeake Clean Water Blueprint began, and already the Bay shows the potential to become a transformative environmental success story. Today, the Bay appears more resilient and capable of adapting to excess pollution loads. Recent studies have shown that the Bay is beginning to replenish oxygen in its waters, repairing what were once underwater dead zones. The Chesapeake Bay Foundation’s (CBF) 2018 State of the Bay Report’s Habitat Indicator Scores show that the resilience of the Bay, quantified as the growth of underwater grasses and resource lands, is slowly increasing from 2016 levels, despite the record-breaking summer storms of 2018.

While progress has been made in restoring the Bay, more is needed. Bipartisan support from the federal government and from federal-state collaborations is essential to the Bay’s further recovery. The Bay’s overall health remains fragile, and additional improvement is not assured. In fact, CBF’s 2018 State of the Bay Report, released this month, showed a decline in the Bay’s health for the first time in a decade. This was due to extreme storm-related weather in 2018 that carried high concentrations of nitrogen, phosphorus, and debris into the Bay.

The Chesapeake Bay’s health has vital impacts on people’s health, jobs, and access to clean drinking water. The forests in the Bay watershed produce safe, filtered drinking water for 75 percent of the watershed’s residents, which is nearly 13 million people. If more action is not taken now, the future cost of inaction will be more dire and expensive than current restoration efforts. The Chesapeake Clean Water Blueprint might be the best and last chance to restore the Bay. Simple, individual actions like conserving water and energy in our daily activities, volunteering in stream and river cleanups, and contacting local representatives and advocating for the importance of protecting the Bay can also go a long way towards contributing to the well-being of the Bay. “Treasure the Chesapeake” is not just a slogan on a license plate – these words underlie a great environmental recovery project, as well as a potential model for water pollution clean-up projects around the world.

Have an interesting science policy link? Share it in the comments!

Written by sciencepolicyforall

January 23, 2019 at 3:54 pm

Science Policy Around the Web – October 26, 2018


By: Mohor Sengupta, Ph.D.


Source: Pixabay

Environmental Problems

A 14-year-long oil spill in the Gulf of Mexico verges on becoming one of the worst in U.S. history

In 2004, Hurricane Ivan leveled an oil production platform in the Gulf of Mexico owned by Taylor Energy. The storm’s striking magnitude destroyed the colossal platform, which had drilled into several oil wells. The result was a huge mound of muck filling the broken steel structure and leaking oil. To date, efforts to seal off the leak have not been successful.

Taylor Energy at first denied that there was any leak and then underreported its extent. According to current estimates, about 700 barrels of oil are leaking per day, with each barrel holding 42 gallons. The company kept this information secret for many years, and few people are aware of the actual level of spillage. The Taylor Energy spill in fact pre-dates the Deepwater Horizon oil spill (also called the BP leak), so far the largest marine oil spill in history at 168 million gallons. While BP has coughed up $66 billion for fines, legal settlements, and cleanup, Taylor Energy is a comparatively small operation, too cash-strapped to afford cleanup on such a scale.
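To see why this 14-year leak "verges on becoming one of the worst," a back-of-the-envelope calculation helps. The sketch below is an upper-bound estimate only: it assumes the current ~700-barrels-per-day figure held over the full 14 years, which earlier, lower estimates suggest it did not.

```python
# Rough cumulative total for the Taylor Energy leak.
# Assumption (not a measured value): a constant 700 barrels/day
# over all 14 years, so this is an upper-bound sketch.
BARRELS_PER_DAY = 700
GALLONS_PER_BARREL = 42
YEARS = 14

total_gallons = BARRELS_PER_DAY * GALLONS_PER_BARREL * 365 * YEARS
deepwater_horizon_gallons = 168_000_000  # per the article

print(f"Taylor Energy sketch: {total_gallons:,} gallons")
print(f"Share of Deepwater Horizon: {total_gallons / deepwater_horizon_gallons:.0%}")
```

Even under this crude assumption, the cumulative leak comes out on the order of 150 million gallons, approaching Deepwater Horizon's 168 million.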

In these actions, Taylor Energy flouted both the Oil Pollution Act of 1990, which mandates that spills be reported to the U.S. Coast Guard National Response Center (NRC), and the Clean Water Act of 1972, which created a structure for regulating water pollutants. Environmentalists took Taylor Energy to court, and Taylor Energy and the NRC were found jointly accountable for presenting false numbers and data. An assessment submitted to Taylor Energy in 2009 by Waldemar S. Nelson and Company, a private firm, discussed the risks of ingesting fish from the affected area. A recent independent analysis commissioned by the Justice Department showed that the original estimate of 1 to 55 barrels of leakage per day provided by the NRC was inaccurate. After several spillage tests, Oscar Garcia-Pineda, the author of that analysis, concluded that his results did not tally with those reported by the NRC and that the actual rate of spillage was 48 to roughly 1,700 barrels per day.

These disturbing findings have arrived at a delicate time for environmental protection policy. Earlier this year, the Trump administration proposed a wide expansion of leases to the oil and gas industry. This would render all off-shore areas on the continental shelf, including those along the Atlantic coast, amenable to drilling. Oil and gas representatives are lobbying for this cause and have provided financial justifications including billions of dollars’ worth of annual economic growth, increased jobs and lower heating costs. However, multiple governors representing states across the four planning areas, from Maine to the Florida Keys, are opposed to this proposal.

Reports show that, on average, there are 20 uncontrolled releases of oil per 1,000 wells under state or federal jurisdiction. In Louisiana alone, approximately 330,000 gallons of oil are spilled from off-shore and on-shore rigging platforms. With changing climate patterns, Atlantic hurricanes are predicted to become more intense, and given the government’s plans to extend rigging along the Atlantic coast, a bleak prospect looms ahead.

(Darryl Fears, Washington Post)

Health Supplements

The Problem with Probiotics

The healthy balance of the natural flora of the gut, also called the gut microbiome, is essential for a healthy digestive system. Antibiotics have been shown to disrupt the gut microbiome, resulting in conditions such as diarrhea and infection with Clostridium difficile. As an antidote, it has become common practice to take “good bacteria”, or probiotics, while on antibiotic treatment. These probiotics are essentially a mixture of supposedly healthy gut microbes meant to replace those disrupted by the antibiotic.

Although people commonly take probiotics, this class of product is not regulated by the FDA, and there are rising concerns about the manufacturing standards and quality of these commonly sold over-the-counter health supplements. Most recently, Dr. Pieter A. Cohen cautioned against overlooking the harmful effects of widely marketed probiotics in an article published in “JAMA Internal Medicine”.

There have been several studies discussing the benefits of probiotics, so much so that the journal “Nutrition” recently published a systematic review of systematic reviews. In a nutshell, studies of probiotic efficacy have produced very limited positive results, and only pure microbial strains were used as supplements in those studies. Meanwhile, there is no evidence that probiotics are beneficial in treating conditions such as Crohn’s disease, chronic diarrhea, ulcerative colitis, or liver disease, all related in some way to the gut microbiome.

Safety assessment studies have found probiotics to be contaminated with unwanted microbial strains, and without FDA regulation, manufacturing often does not follow a well-defined pipeline. It is not known what health hazards these contaminants might cause, warns Dr. Cohen, and they can be lethal: in one notorious case, the death of an infant was attributed to a contaminated dietary supplement.

Unfortunately, none of these events has deterred Americans from using probiotics. Almost four million people, or 1.6 percent of adults in the United States, used probiotics in 2012, and the global market for probiotics is steadily rising. Given this, it is important that dietary supplements be given the rigorous assessment and quality-control checks that a prescription drug undergoes, and that consumers be made more aware of adulteration in probiotics.

(Aaron E. Carroll, New York Times)

Have an interesting science policy link? Share it in the comments!

Written by sciencepolicyforall

October 26, 2018 at 12:36 pm

Science Policy Around the Web – September 28, 2018


By: Patrice J. Persad, Ph.D.


Source: Pixabay

The Environment

Long-banned toxin may wipe out many killer whales

“The past can come back to haunt, or hurt, you,” one adage forewarns. If “the world” replaces “you” in this line, then the saying aptly describes recent findings regarding the enduring effects of polychlorinated biphenyls (PCBs) on marine species, namely killer whales (Orcinus orca), or orcas. In the 1970s, the United States banned PCBs, organic compounds used in products such as flame-resistant insulation and hydraulic fluids. According to research studies, these compounds led to immune- and reproductive-compromised conditions, along with cancer, in organisms including humans. However, it took nearly half a century after PCBs went into commercial use for the country to halt their use. Other countries followed suit in banning PCBs, with the last such bans taking effect at least a decade ago.

From a Science report published last month, we learned one harrowing fact: although most nations have eschewed PCBs, the pollutants’ negative impacts on endangered killer whale populations live on. PCBs take a long time to break down, and consequently they amass over time in prey species and the predators that eat them. The levels are especially high in the killer whale, an apex predator at the top of the food chain. PCB concentrations increase exponentially from lower to upper trophic levels through a process known as biomagnification. Killer whales’ prey, ranging from seals, sea lions, penguins, dolphins, sharks, and smaller fish to even other whales, accumulate PCBs as they digest the microorganisms that absorb PCBs from runoff from industrial plants or insecure dumping sites near waterways.
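The exponential build-up across trophic levels can be illustrated with a toy model in which the PCB concentration multiplies by a fixed factor at each step up the food chain. The baseline concentration and the five-fold per-level factor below are illustrative assumptions only, not measured values from the study.

```python
# Toy biomagnification model: concentration multiplies by a fixed
# factor with each trophic transfer. Both numbers are hypothetical.
def concentration_at_level(baseline, factor, level):
    """PCB concentration (mg/kg lipid weight) after `level` trophic transfers."""
    return baseline * factor ** level

BASELINE = 0.001  # hypothetical concentration in microorganisms
FACTOR = 5        # hypothetical per-level magnification

for level, organism in enumerate(["plankton", "small fish", "seal", "orca"]):
    print(f"{organism:10s} {concentration_at_level(BASELINE, FACTOR, level):.3f} mg/kg")
```

Because the concentration grows geometrically, an apex predator only a few levels up the chain carries orders of magnitude more PCB than the organisms at the base, which is why orcas are hit hardest.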

Dr. Jean-Pierre Desforges and his team, the study’s authors, constructed statistical models based on global killer whales’ PCB concentrations in blubber (mg/kg lipid weight) and the PCB concentrations corresponding to mortality from immune- and reproductive-related disorders. From surveys of 19 killer whale populations around the planet, the group predicted declines in population size stemming from PCB-induced reproductive and immune complications over the next century. Overall, the results revealed that health complications arising from PCBs will contribute to the decline of more than half of killer whale populations. For the killer whales in the highest PCB exposure groups, those living near the United Kingdom, Brazil, Japan, the Strait of Gibraltar, and the Northeast Pacific (Bigg’s killer whales), Desforges and colleagues predict a “complete collapse.”

Humans, too, are at risk of PCB contamination and subsequent health complications, including cancer. Some countries are prominent consumers of dolphins, sharks, other fish, and whale species, all higher-trophic-level organisms with elevated PCB concentrations. Garbage and contaminants released into the environment cycle back as garbage and contaminants in wildlife and in people’s bodies.

(Elizabeth Pennisi, Science)

Wildlife Conservation

Discovery of vibrant deep-sea life prompts new worries over seabed mining

September’s Deep-Sea Biology Symposium highlighted the biotic treasure trove that is the underwater Clarion-Clipperton Zone (CCZ). The CCZ, a six-million-square-kilometer plot of sea floor in the Pacific Ocean, harbors a series of thriving ecosystems, veritable “Atlantises.” As testament to its biodiversity, Dr. Craig Smith’s team uncovered 154 marine worm species (most unknown to science), gummy squirrels (wispy-looking sea cucumbers), and squid-like worms. Another biologist, Dr. Adrian Glover, encountered rare, minuscule invertebrates (including Porifera) and xenophyophores (organisms whose running moniker may well be “slimeballs”). At the symposium, Dr. Diva Amon discussed images of whale skull fossils adorned with metal remnants; these fossils may be 1 to 16 million years old and represent six different whale species. The metal noted on the skull fossils hints that these mammals may have consumed trace metals to maintain buoyancy mechanisms.

Although researchers are steadily unearthing the eastern CCZ’s biological secrets, many companies wish to mine the zone’s seabeds, which are thought to contain precious metals (manganese and cobalt), for economic profit. The International Seabed Authority (ISA), the regulatory entity with jurisdiction over underwater mining, granted 29 companies permission 17 years ago to investigate mining in seabeds, 17 of which are part of the CCZ. The year 2020 is the anticipated deadline for the ISA to issue definitive regulations on global seabed mining. Even though companies must evaluate the environmental impact mining might have on deep-sea life, outside scientists continue to advocate for the establishment of wildlife preserves in the eastern CCZ.

With the life and fossil record that have been, and are yet to be, surveyed, the eastern CCZ presents an opportunity for nations, researchers, and companies to work together. Dr. Amon champions policies directing companies to disclose fossil discoveries at mining sites for future scientific analysis, potentially in coordination with the United Nations Educational, Scientific and Cultural Organization (UNESCO). Meanwhile, Dr. Smith is urging the ISA to promote monitoring of pollutants, which can have unforeseen ecological impacts, in the open waters above CCZ areas where companies are test-mining or planning to mine.

(Amy Maxmen, Nature News)

Have an interesting science policy link? Share it in the comments!

Written by sciencepolicyforall

October 2, 2018 at 5:38 pm

Science Policy Around the Web – May 11, 2018


By: Mohor Sengupta, PhD

Image: Tablets, nutrient additives, dietary supplements, pills (source: Max Pixel)

Drug prices

Why Can’t Medicare Patients Use Drugmakers’ Discount Coupons?

With high drug prices, affordability of specialized medicines is a matter of concern for many individuals, especially those on life-saving brand-name drugs.

Manufacturers of brand-name medicines provide discount coupons to people with private health insurance. Such discounts are denied to people with federal healthcare plans such as Medicare or Medicaid. For example, for one patient on Repatha (a cholesterol-reducing drug), the co-payment is $618 per month under a Medicare drug plan but only $5 for patients with commercial insurance plans. This discrepancy amounts to a “double standard” because, arguably, the discount is denied to the people who need it most: the retired population compulsorily enrolled in federal healthcare programs.
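The scale of that gap is easy to quantify. Assuming the quoted monthly co-payments hold for a full year (a simplifying assumption), the annual out-of-pocket difference works out as follows:

```python
# Annual out-of-pocket gap implied by the Repatha co-payments quoted above,
# assuming the monthly amounts stay constant for a full year.
medicare_monthly = 618   # co-payment under a Medicare drug plan, USD
commercial_monthly = 5   # co-payment with commercial insurance, USD

annual_gap = (medicare_monthly - commercial_monthly) * 12
print(annual_gap)  # 7356
```

Over $7,000 per year for a single drug makes concrete why patients on federal plans see the coupon ban as falling hardest on them.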

Drug manufacturers have an incentive to offer discounts on branded medicines: discounts increase the likelihood of purchase and result in greater access to and demand for the products. While these discount coupons are immensely beneficial for life-threatening conditions for which generic drugs are not available, a 2013 analysis showed that a lower-cost generic alternative or FDA-approved therapeutic equivalent was available for 62% of 374 brand-name drugs.

The federal government has argued that, with the discount coupons, patients might overlook or be discouraged from buying cheaper variants of a brand-name drug. Even if a patient chooses a brand-name drug with a discount coupon over a cheaper alternative, their health insurance plan still has to pay for the drug, and that amount may be more than Medicare or Medicaid is willing to pay. This concern underlies the federal anti-kickback statute, which prohibits drug manufacturers from providing “payment of remuneration (discounts) for any product or service for which payment may be made by a federal health care program”.

One important question is why drug makers sell brand-name drugs at much higher prices when generic, cheaper options are available. In the present scenario, insurance companies should judge whether they are willing to cover brand-name drugs for which generic alternatives exist. Doctors often prescribe brand-name drugs without considering their long-term affordability for patients. It is the responsibility of doctors and insurance providers alike to determine the best possible drug option for a patient.

Weighing both sides of the picture, discounts should be applied on a case-by-case basis. They should be enforced for specialized drugs for which generic alternatives are not available and which are usually used for severe or life-threatening conditions. Currently, for people with such conditions who are on federal healthcare plans, affordability is a major challenge.

(Michelle Andrews, NPR)

 

EPA standards

EPA’s ‘secret science’ rule could undermine agency’s ‘war on lead’

Last month the Environmental Protection Agency (EPA) administrator, Scott Pruitt, issued a “science transparency rule” under which studies that are not “publicly available in a manner sufficient for independent validation” cannot be used in crafting a regulation. This rule is at loggerheads with Pruitt’s “war on lead” because a majority of studies on lead toxicity are observational and old, and they cannot be validated without consciously exposing study subjects to lead.

Lead is a potent neurotoxin with long-term effects on central nervous system development, and it is especially harmful to children. There are several studies showing lead toxicity, but many do not meet the inclusion standards set by the EPA’s new science transparency rule. Computer models developed to assess lead toxicity, which played an important role in the EPA’s past regulations on lead, have amalgamated all of these studies, including the ones that cannot be validated. If the science transparency rule is applied retroactively, these models would have to be discarded: an entire computer model can be rendered invalid if just one of its component studies fails the transparency criteria.
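That all-or-nothing retroactivity concern can be phrased as a simple filter: a pooled model survives the rule only if every one of its component studies has publicly available data. The sketch below is hypothetical, with invented study names, purely to illustrate the logic:

```python
# Hypothetical sketch: under the transparency rule, a pooled model is usable
# only if every component study's data are publicly available.
studies = [
    {"name": "cohort_A", "public": True},
    {"name": "cohort_B", "public": True},
    {"name": "legacy_observational_C", "public": False},  # old study; data cannot be shared
]

# One non-public study is enough to invalidate the entire pooled model.
model_usable = all(s["public"] for s in studies)
print(model_usable)  # False
```

This is why critics see the rule as disproportionately harsh on lead regulation, where the strongest evidence base is precisely the old, unshareable observational work.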

Critics say that the transparency measure will be counterproductive as far as lead regulations are concerned. “They could end up saying, ‘We don’t have to eliminate exposure because we don’t have evidence that lead is bad,’” says former EPA staffer Ronnie Levin. Another hurdle is the proposed data-sharing requirement: lead studies tend to be epidemiological, and authors might be unwilling to share confidential participant data.

Bruce Lanphear of Simon Fraser University in Canada is skeptical of the EPA’s intentions because the agency has not imposed similar transparency measures on chemical companies, such as pesticide producers.

Finally, this rule could set different lead safety standards across federal agencies. Currently, the Centers for Disease Control and Prevention (CDC) and the Department of Housing and Urban Development (HUD) use 5 micrograms per deciliter of lead in blood as the reference level. The EPA rule could lead to a new reference level, creating discrepancies in compliance across U.S. government agencies.
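In practice, that shared reference level is just a screening threshold: a result at or above 5 µg/dL is flagged for follow-up. A minimal sketch of such a check, with invented sample values, might look like this:

```python
# Hypothetical screening check against the CDC/HUD blood lead
# reference value of 5 micrograms per deciliter (ug/dL).
REFERENCE_UG_PER_DL = 5.0

def exceeds_reference(blood_lead_ug_per_dl):
    """Return True if a blood lead result meets or exceeds the reference value."""
    return blood_lead_ug_per_dl >= REFERENCE_UG_PER_DL

results = [1.2, 4.9, 5.0, 7.3]  # illustrative sample values, ug/dL
flagged = [r for r in results if exceeds_reference(r)]
print(flagged)  # [5.0, 7.3]
```

If the EPA adopted a different reference level, the same result could be flagged by one agency’s threshold and passed by another’s, which is exactly the compliance discrepancy critics anticipate.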

(Ariel Wittenberg, E&E News)

 

Have an interesting science policy link? Share it in the comments!

Written by sciencepolicyforall

May 11, 2018 at 10:24 pm