Science Policy For All

Because science policy affects everyone.


Science Policy Around the Web – June 8, 2018


By: Sarah L. Hawes, PhD


Treatment Guidelines

Good News for Women With Breast Cancer: Many Don’t Need Chemo

Completion of a large, international clinical study headed by Dr. Joseph A. Sparano of Montefiore Medical Center in New York spells out excellent news for early-stage breast cancer patients. These patients are typically asked to endure both chemotherapy and endocrine therapy after tumor removal. Endocrine therapy to block the hormone estrogen causes side effects similar to menopause and can increase the risk of uterine cancer. Notoriously toxic chemotherapy can damage heart and nervous tissue, compromise patients’ immune systems, and increase the risk of leukemia.

Since 2004, genetic tests such as the Oncotype DX Breast Cancer Assay have given a small number of women confidence that chemotherapy is not needed to treat their cancer. Following surgical removal of small, non-metastasized breast tumors, tissue is genetically tested to determine whether chemotherapy is advisable as a next step. A very low Oncotype DX tumor score (≤10) indicates cancer with a low risk of recurrence, so chemotherapy is not needed. A very high tumor score (≥25) indicates a more persistent cancer and the need for chemotherapy to suppress its recurrence. But most breast cancers return an intermediate tumor score, falling between 10 and 25.

Not knowing what else to do, physicians treating patients with intermediate scores have dutifully followed the 2000 National Cancer Institute recommendation that all pre-metastasis breast cancer patients receive chemotherapy to avoid recurrence and metastasis. Now, Dr. Sparano’s study indicates that many do not need it.

The study, called TAILORx, will be published in The New England Journal of Medicine. It began in 2006 and – with funding from the US and Canadian governments, philanthropic groups, and the company that makes Oncotype DX – followed over 9,000 breast cancer patients aged 18 to 75 for a median duration of seven years. Seventy percent of these women had intermediate tumor scores. With their informed consent, participating women were randomly assigned to receive either endocrine therapy alone or endocrine therapy combined with chemotherapy.

Over the course of the study, rates of survival and cancer clearance were no different between the two groups. This means that for a majority of patients whose cancer is detected at an early stage, toxic chemotherapy provides no added benefit and can be safely avoided. The study results also clarified that, for women younger than 50 (the median age at diagnosis in the US is 62), chemotherapy is still advisable if tumor scores are above 16.
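
Taken together, the reported thresholds amount to a simple decision rule. Below is a minimal, purely illustrative Python sketch of the guidance as summarized above; the function name and structure are ours, and this is of course not clinical guidance:

```python
def chemo_advisable(tumor_score: int, age: int) -> bool:
    """Illustrative sketch of the Oncotype DX thresholds reported for TAILORx.
    Not clinical guidance."""
    if tumor_score >= 25:   # high score: persistent cancer, chemotherapy needed
        return True
    if tumor_score <= 10:   # low score: low recurrence risk, no chemotherapy
        return False
    # Intermediate scores: endocrine therapy alone sufficed in the trial,
    # except that chemotherapy remains advisable for women under 50
    # with scores above 16.
    return age < 50 and tumor_score > 16
```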

Coauthor Dr. Ingrid A. Mayer from Vanderbilt expressed excitement, noting that with these research findings “we can spare thousands and thousands of women from getting toxic treatment that really wouldn’t benefit them. This is very powerful. It really changes the standard of care.”

(Denise Grady, New York Times)

Basic Science

Scientists race to reveal how surging wildfire smoke is affecting climate and health

A record 2017 wildfire season inflicted serious health and economic challenges throughout the western United States. Over two million acres burned in California and Montana alone, flooding nearby towns for weeks on end with smoke carrying nearly 20 times the Environmental Protection Agency’s acceptable limit of particulate matter.

The frequency of wildfires is projected to increase, yet a fundamental understanding of their billowing byproduct – smoke – is lacking. Thankfully, the next two years will see over $30 million in spending by two cooperative research programs investigating the chemistry, physics, and environmental and public health implications of wildfire smoke. One research campaign is being funded and run by the National Science Foundation (NSF); the other is a joint endeavor by the National Aeronautics and Space Administration (NASA) and the National Oceanic and Atmospheric Administration (NOAA).

“This is definitely the largest fire experiment that has ever happened,” says Carsten Warneke of NOAA’s Earth System Research Laboratory in Boulder, Colorado. Old data on wildfire smoke lack detail, having been largely garnered from satellite observation of established fires. The new research will gather data at all atmospheric levels, using satellites as well as aircraft that fly through and sample smoke plumes, and employing researchers on the ground to test low-lying smoke. Sampling will begin within the first 24 hours to monitor the fast dynamics of smoke chemistry as it passes through different atmospheric levels and interacts with clouds, potentially seeding ice crystals and affecting weather. In addition to wildfires, researchers will study the smoke released by controlled burns for farming and forestry.

The goal of these large-scale research campaigns, Warneke says, is “to do the whole picture at one time and understand how the whole thing plays together.” Newly generated data will enable scientists to predict the rates and types of pollutants released based on the composition of the land burned – including the density of vegetation or man-made structures – and what that means for climate, weather, and human health.

(Warren Cornwall, Science Magazine)

Have an interesting science policy link? Share it in the comments!



Science Policy Around the Web – May 18, 2018


By: Patrick Wright, PhD

Suicide Prevention

Gaps Remain in U.S. State Policies on Suicide Prevention Training

Suicide is the 10th leading cause of death in the United States, with 45,000 people dying by suicide in 2016, according to the Centers for Disease Control and Prevention. Despite this, there is no universal requirement or standard for suicide prevention training across states, particularly among healthcare professionals, according to a recent study in the American Journal of Public Health (AJPH) that aimed to assess the effectiveness of national guidelines released in 2012 by the U.S. Surgeon General and the National Action Alliance for Suicide Prevention. Given their proximity to patients, clinicians and mental health experts are in a unique, critical position to explicitly address suicide in at-risk individuals. As of October 2017, all 50 states had a suicide prevention plan, but only 10 states—California, Indiana, Kentucky, Nevada, New Hampshire, Pennsylvania, Tennessee, Utah, Washington, and West Virginia—require healthcare professionals to complete suicide prevention training and to intervene appropriately with at-risk patients. Policies in seven states encourage training but do not require it. Even the duration and frequency of training vary extensively.

Jane Pearson, chair of the National Institute of Mental Health’s Suicide Research Consortium, stated “When there’s someone in crisis you have to gather information very quickly and if you’re not asking the exact right questions you can miss someone’s intentions. The most pressing goal is to increase the person’s will to live so it’s greater than their will to die and buy time to get past the crisis, so they have a chance to work on problem solving.” Earlier work has shown that a majority of people who attempt suicide have seen a healthcare professional in the weeks or months before their attempt, underscoring the potential opportunity in these healthcare professional-patient interactions.

The 2012 National Strategy for Suicide Prevention, created by the Office of the U.S. Surgeon General and the National Action Alliance for Suicide Prevention, outlined four strategic directions: creating “supportive environments that will promote the general health of the population and reduce the risk for suicidal behaviors and related problems”; developing and implementing clinical and community-based preventive programs; providing treatment and care for high-risk patients; and surveying and evaluating suicide and its prevention nationwide.

Washington was the first state to mandate suicide assessment, treatment, and management training for healthcare providers, through the Matt Adler Suicide Assessment, Treatment, and Management Act of 2012 (House Bill 2366), with the state defining suicide assessment, treatment, and management training as one “of at least six hours in length that is listed on the Best Practices Registry of the American Foundation for Suicide Prevention and the Suicide Prevention Resource Center including, but not limited to: Applied suicide intervention skills training; assessment and management of suicide risk; recognizing and responding to suicide risk; or question, persuade, respond, and treat.”

The AJPH study posits that ensuring suicide prevention training is disseminated universally among health care professionals is not a task for legislation alone; accrediting bodies (e.g., the American Psychological Association) share the burden of guaranteeing that graduates are prepared to identify and aid patients who may be at risk for suicide. The study concludes, “Better equipping health care professionals to assess and provide care to patients at risk for suicide may contribute to a meaningful decline in the rate of suicide across the nation, and it is the responsibility of policymakers, health care professionals, and citizens to advocate change.”

(Cheryl Platzman Weinstock, Reuters)

Animal Welfare

Animal Tests Surge Under New U.S. Chemical Safety Law

The Frank R. Lautenberg Chemical Safety for the 21st Century Act of 2016 (H.R. 2576) amended the 1976 Toxic Substances Control Act (TSCA) (S. 3149), the primary chemicals management law in the United States, to require the Environmental Protection Agency (EPA) to “minimize, to the extent practicable, the use of vertebrate animals in testing chemicals” and states, “Any person who voluntarily develops information under TSCA must first attempt to develop the information by an alternative or nonanimal test method or testing strategy before conducting new animal testing.” It explicitly required the EPA to develop a strategic plan to promote the development and implementation of alternative test methods that do not require the use of animals. However, despite the goals of the Lautenberg Chemical Safety Act, there has reportedly been a recent increase in the number of animal tests requested or required by the EPA.

In March 2018, the EPA released for public comment a draft of its strategic plan, its proposed long-term strategy for increasing the use of alternatives to animal research, including computer modeling, biochemistry, and cell culture approaches. In response, People for the Ethical Treatment of Animals (PETA) and the Physicians Committee for Responsible Medicine (PCRM) quantified the number of TSCA-related animal tests and animals used over the last three years. They found that the number of animal tests requested or required by the EPA increased substantially last year, with the number of animals involved jumping more than an order of magnitude, from approximately 6,500 animals across 37 required or requested tests to over 75,000 animals across 331 tests. They issued a response letter stating, “The dramatic increase we have documented indicates that EPA is failing to balance its responsibilities to determine whether chemicals present unreasonable risks with its Congressional mandate to reduce and replace the use of vertebrate animals in chemical testing.”
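
For scale, the fold increases implied by those counts are easy to check (the figures are PETA and PCRM’s; the script itself is ours):

```python
# Counts reported by PETA and PCRM for EPA-requested TSCA-related testing
animals_before, tests_before = 6_500, 37
animals_after, tests_after = 75_000, 331

print(f"animals: {animals_after / animals_before:.1f}x")  # ~11.5x, over an order of magnitude
print(f"tests:   {tests_after / tests_before:.1f}x")      # ~8.9x
```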

Unfortunately, the underlying cause of this trend is not known. The Lautenberg Chemical Safety Act’s stricter requirements, which cover a larger range of chemicals than the original TSCA, may be driving additional testing and data collection. Moreover, Kristie Sullivan, PCRM’s vice president of research policy, said that EPA staff may need more training in, and funding for, alternatives to animal research, and “to stay abreast of new developments in toxicology, so that they can quickly incorporate new methods and kinds of data into their decision-making process.”

Alternatively, implementation may simply be slow because the EPA must adequately vet alternatives while adapting to the new law. Daniel Rosenberg, an attorney with the Natural Resources Defense Council, emphasized the importance of taking whatever time is necessary to validate alternative testing strategies: “We need to ensure that the alternative testing methods that are implemented are able to actually identify toxicity, exposure and potential adverse effects of chemicals.”

The comment period on EPA’s draft strategy for reducing animal tests closed earlier this month, with the agency required to release its final plan by the end of June 2018.

(Vanessa Zainzinger, Science)


Science Policy Around the Web – May 1, 2018


By: Liu-Ya Tang, PhD


Artificial Intelligence

With €1.5 billion for artificial intelligence research, Europe pins hopes on ethics

While artificial intelligence (AI) brings convenience to modern life, it can also raise ethical issues. AI systems are typically generated through machine learning: during a training phase, scientists “feed” them existing data, from which they “learn” to draw conclusions. If the training dataset is biased, the AI system will produce biased results. To put ethical guidelines on AI development and to catch up with the United States and China in AI research, the European Commission announced on April 25 that it would spend €1.5 billion on AI research and innovation through 2020.

Although the United States and China have made great advances in the field, the ethical issues stemming from AI may have been neglected, as both practice “permissionless innovation”, said Eleonore Pauwels, a Belgian ethics researcher at the United Nations University in New York City. She spoke highly of Europe’s plan, which is expected to enhance fairness, transparency, privacy and trust, though the outcome remains unknown. As Bernhard Schölkopf, a machine learning researcher at the Max Planck Institute for Intelligent Systems in Tübingen, Germany, put it, “We do not yet understand well how to make [AI] systems robust, or how to predict the effect of interventions”. He also cautioned that focusing only on potential ethical problems could impede AI research in Europe.

Why does European AI lag behind the United States and China? First, Europe has strong AI research but a weak AI industry. Startup companies with innovative, often risky technologies cannot obtain enough funding because old industrial policies favor big, risk-averse firms; hence the commission’s announcement underscores the importance of public-private partnerships to support new technology development. Second, academic salaries are not high enough to keep AI researchers from leaving for the private sector. To solve this problem, a group of nine prominent AI researchers asked governments to set up an intergovernmental European Lab for Learning and Intelligent Systems (ELLIS), which would be a “top employer in machine intelligence research” and offer attractive salaries as well as “outstanding academic freedom and visibility”.

(Tania Rabesandratana, Science)

Public Health

Bill Gates calls on U.S. to lead fight against a pandemic that could kill 33 million

Pandemics, historically driven by cholera, bubonic plague, smallpox, and influenza, can be devastating to world populations. Several outbreaks of viral disease have been reported in scattered areas around the world, including the 2014 Ebola epidemic, leading to growing concern about the next pandemic. During an interview conducted last week, Bill Gates discussed pandemic preparedness with a reporter from The Washington Post. Later, he gave a speech on the challenges associated with modern epidemics before the Massachusetts Medical Society.

The risk of a pandemic is high: the world is highly connected, and new pathogens constantly emerge through naturally occurring mutations. Modern technology has also raised the possibility of bioterrorism attacks. In less than 36 hours, an infectious pathogen can travel from a remote village to major cities on any continent and become a global crisis. During his speech, Gates cited a simulation by the Institute for Disease Modeling estimating that nearly 33 million people worldwide could be killed by a highly contagious and lethal airborne pathogen like the 1918 influenza. He said “there is a reasonable probability the world will experience such an outbreak in the next 10-15 years.” The risk grows when government funding for global health security is inadequate: the U.S. Centers for Disease Control and Prevention is planning to dramatically downsize its epidemic prevention activities in 39 of 49 countries, which would make these developing countries even more vulnerable to outbreaks of infectious disease.

Gates has expressed this urgency to President Trump and senior administration officials at several meetings, and he also announced a $12 million Grand Challenge, in partnership with the family of Google co-founder Larry Page, to accelerate the development of a universal flu vaccine. He highlighted scientific and technical advances in the development of better vaccines, antiviral drugs, and diagnostics, which could provide better preparation for, prevention of, and treatment of infectious disease. Beyond this, he emphasized that the United States needs a strategy to utilize and coordinate domestic resources and to take a global leadership role in the fight against a pandemic.

(Lena H. Sun, The Washington Post)

Have an interesting science policy link? Share it in the comments!


Science Policy Around the Web – April 27, 2018


By: Michael Tennekoon, PhD


Productivity of Science

Is Science Hitting a Wall?, Part 1

Scientific research is hitting a wall – that’s the view of a recent study published by four economists. Famously, the density of computer chips doubles roughly every two years; achieving that doubling now takes 18 times as many researchers as it once did. The pattern extends to other areas of research as well: in medicine, for example, “the number of new drugs approved per billion U.S. dollars spent on R&D has halved every 9 years since 1950”. In general, while research teams appear to be getting bigger, the number of patents produced per researcher has declined. Alarmingly, critics argue that some fields may even be regressing – for example, the over-treatment of psychiatric and cancer patients may have caused more harm than good.
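
The drug-approval claim describes exponential decay with a nine-year half-life. A small sketch makes the cumulative effect concrete (the 1950 baseline is normalized to 1.0, an assumption of ours, since the article gives no absolute figure):

```python
def relative_rd_efficiency(year: int, base_year: int = 1950,
                           halving_years: float = 9.0) -> float:
    """New drugs approved per billion R&D dollars, relative to the base year,
    under the reported 'halved every 9 years' trend."""
    return 0.5 ** ((year - base_year) / halving_years)

print(relative_rd_efficiency(2018))  # ~0.0053: roughly 190-fold less efficient than in 1950
```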

But why would science be hitting a wall? One major factor could be the reproducibility crisis – the problem that many peer-reviewed claims cannot be replicated, calling into question the validity of the original research findings. Researchers suggest that intense competition for funding and jobs has created pressure to conduct innovative, “high-risk” research in as short a time as possible. While this type of research can gain plenty of press, it often lacks the scientific rigor that ensures findings are reliable. However, the perceived slowdown in research productivity could also result from the natural advancement of science – the low-hanging-fruit problem. Said another way, most of the easier problems have already been solved, leaving only problems that require vast scientific resources to solve.

On the other hand, researchers in some fields can rightfully push back and argue that scientific progress is not stalling but in fact accelerating. For example, technologies such as CRISPR and optogenetics have produced a multitude of new findings, particularly in neuroscience and genetics research. Even with these new technologies, however, the end product for general society remains relatively disappointing.

These concerns raise tough questions about how scientific research should move forward. Given funding limitations, how much do we, as a society, value ‘pure science’ – the effort to understand rather than manipulate nature? Scientific curiosity aside, in purely economic terms, is it worth testing the out-of-Africa hypothesis of human origins, or sending humans to other planets? Is it worth investing in the latest innovative technology that produces new findings with limited applicability to human health? Scientists and society at large must be open to weighing the costs and benefits of scientific enterprises and deciding which avenues of research are worth pursuing.

(John Horgan, Scientific American)

Vaccine Ethics

The vaccine dilemma: how experts weigh the benefits for many against risks for a few

Cost-benefit analysis. Sure, it’s easy when you’re on an Amazon shopping spree. But what about when millions of lives are at stake? And what if those millions of lives are children’s, unable to give informed consent? Not so easy anymore, but that is the job of the World Health Organization’s Strategic Advisory Group of Experts (SAGE), which last week decided to scale back the use of a new vaccine against dengue.

Two years ago, SAGE concluded the vaccine was safe to use in children in places with high dengue infection rates, despite theoretical concerns that the vaccine might increase the risk of developing a severe form of dengue in some children. Toward the end of last year, the vaccine’s manufacturer, Sanofi Pasteur, released new data validating these concerns. How do the benefits and risks compare? It was estimated that in a population where 70% of individuals had had dengue at least once, the vaccine would prevent seven times as many hospitalizations as it would cause; if 85% of individuals had had dengue, that ratio rises to 18 to 1. Even those numbers were deemed not worth the risk.

What goes into making these decisions?

One factor is the prevalence of the disease. For example, the oral polio vaccine could prevent millions of children from becoming paralyzed, but it could also cause paralysis in rare cases. In the 1950s and 1960s, when polio was highly prevalent, it made sense to recommend this vaccine; as polio became nearly non-existent toward the end of the 20th century, using the oral vaccine was no longer prudent.

However, dengue is still rampant in today’s world, so what is different in this case?

Public perception. The modern world is highly litigious and has access to a wide variety of information, both factual and fake. The result is a skeptical view of science in which negative press for one vaccine can cause collateral damage to many other vaccines – unlike a few decades ago. For example, in the 1950s it was discovered that children had been given a polio vaccine that mistakenly contained live virus. This left 51 children in the US paralyzed and killed 5. Yet polio vaccinations resumed, the company responsible (Cutter Laboratories) remained in business, and polio was virtually eradicated. RotaShield, a vaccine against rotavirus (a leading cause of severe childhood diarrhea), had a very different experience. Approved in 1998, it was suspended one year later after the CDC estimated that for every 10,000 children vaccinated, an extra 1 or 2 would develop intussusception (a type of bowel blockage) over what would normally be seen. Although in developing countries the number of lives saved would have far exceeded the extra cases of intussusception, the vaccine was still suspended, and a safer rotavirus vaccine only reached the market in 2006. In the interim, an estimated 3 million children died from rotavirus infections. (Note: the risk of rotavirus infection persists even when the vaccine is given, but at far lower rates.)
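
To see how small per-child risks become large absolute numbers, one can scale the CDC’s estimate to a vaccination cohort. The cohort size below is a hypothetical round number of ours, not a figure from the article:

```python
excess_per_10k = (1, 2)   # CDC estimate: extra intussusception cases per 10,000 vaccinated
cohort = 10_000_000       # hypothetical number of vaccinated children (assumption)

low, high = (r * cohort / 10_000 for r in excess_per_10k)
print(f"expected extra cases: {low:.0f} to {high:.0f}")  # 1000 to 2000
```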

Given the tremendously difficult decisions that need to be made with the implementation of vaccines and the impact that public perception can have on these decisions, society has a responsibility to become more informed about the potential benefits and drawbacks of vaccines and must actively tease apart fact from fiction.

(Helen Branswell, STAT)

Have an interesting science policy link? Share it in the comments!


Science Policy Around the Web – December 1, 2017


By: Kelly Tomins, BSc


Fake Drugs

Health agency reveals scourge of fake drugs in developing world

The World Health Organization (WHO) released two concerning reports detailing the prevalence and impact of substandard and falsified medical products in low- and middle-income countries. Although globalization has increased the e-commerce of medicine, making life-saving treatments available to a broader population, it has also created a wider and more accessible market for dispensing fake and harmful medicines for profit. Until recently, however, there was no systematic method for tracking falsified medicines on a global scale. Thus, the WHO created the Global Surveillance and Monitoring System for substandard and falsified medical products (GSMS). With this program, medicine regulatory authorities can enter information about fraudulent drug incidents into a centralized database, making it easier to understand global trends and possibly identify the source of harmful products. The WHO also conducted an extensive literature search covering nine years of medicine quality studies to assess rates of fake medicines.

This dual analysis showed that falsified medicines are heavily prevalent, particularly in low- and middle-income countries. The WHO estimates that a staggering 10.5% of medicines in these countries are falsified or substandard, representing $30 billion in wasted resources. Low-income countries are the most vulnerable to this type of exploitation, given their higher incidence of infectious disease and their likelihood of purchasing cheaper alternatives to more reliable, tested medicines. In addition, these countries are more likely to lack the regulatory framework and technical capabilities needed to ensure safe dispensing of medicines. However, reports of fake drugs were not limited to developing countries: the Americas and Europe each accounted for 21% of reported cases, highlighting that this is a global phenomenon.

Antimalarials and antibiotics are the two products most commonly reported as substandard or falsified, at 19.6% and 16.9% of total reports, respectively. These findings are especially concerning given the recent finding that the number of malaria infections increased over the past year, despite a steady global decrease from 2000 to 2015, and that deaths from the disease have failed to decline for the first time in 15 years. By providing an insufficient dose to eradicate the malaria parasite from the body, substandard or falsified antimalarials can foster the emergence of drug-resistant strains of malaria, like those recently found in several Asian countries. Overall, the WHO estimates that falsified products may be responsible for 5% of total deaths from malaria in sub-Saharan Africa.

Despite the clear need for action to ensure drug safety around the world, the challenges are abundant. The drug supply chain – from chemical synthesis to packaging, shipping, and dissemination – can span multiple countries with extremely variable regulatory procedures and oversight. A strengthened international framework and oversight are necessary to ensure patients receive the drugs they think they are getting and to prevent hundreds of thousands of deaths each year.

(Barbara Casassus, Nature)

Biotechnology

AI-controlled brain implants for mood disorders tested in people

Mood disorders have traditionally been difficult to treat due to the often-unpredictable onset of symptoms and the high variability of drug responses among patients. Lithium, a popular drug used to treat bipolar disorder, for example, can cause negative side effects such as fatigue and poor concentration, making patients more likely to stop treatment. New treatments developed by the Chang lab at Massachusetts General Hospital and Omid Sani of UCSF aim to provide real-time, personalized treatment for patients suffering from mood disorders such as depression and PTSD. The treatment uses a brain implant that can monitor neural activity, detect abnormalities, and then deliver electrical pulses to a specific region of the brain when needed. Such pulses, known as deep brain stimulation (DBS), have already been used to treat other disorders such as Parkinson’s disease. Other groups have tried DBS for depression in the past, but patients showed no significant improvement; in those studies, however, the pulses were delivered constantly to a single portion of the brain. What is unique about this treatment is that pulses are given only when necessary – when the implant detects that the brain is producing abnormal neural activity. The researchers have also found ways to map various emotions and behaviors to specific locations in the brain, and they hope to use that information to more finely tune a person’s behaviors. In addition, the algorithms created to detect changes in the brain can be tailored to each patient, providing an alternative to the one-size-fits-all pharmacological approaches currently used.

Despite the promising and appealing aspects of this personalized treatment, it raises several ethical issues regarding privacy and autonomy. First, with such detailed maps of neural activity, the patient’s mind becomes practically an open book to their doctor; patients have little control over which emotions they share or, more importantly, hide. The patient may also feel a lack of autonomy over their treatment, as the implant itself decides when the patient is displaying an unwanted mood or behavior. The algorithms could even change the patient’s personality for the worse by limiting the spectrum or intensity of emotions the patient can feel. Any manipulation of brain activity can be viewed as worrisome from an ethical standpoint, and although promising, this proposed treatment should undergo intense scrutiny in order to preserve patient autonomy.

(Sara Reardon, Nature)

Have an interesting science policy link? Share it in the comments!


Science Policy Around the Web – August 29, 2017


By: Allison Dennis, BS


Science Funding

1 million fewer dollars available for studying the health impacts of coal mining

On August 18, 2017, the U.S. Department of the Interior instructed the National Academies of Sciences, Engineering, and Medicine to stop its ongoing research into the potential health effects of surface mining. The US$1 million study had been established on August 3, 2016, “at the request of the State of West Virginia,” by the Office of Surface Mining Reclamation and Enforcement (OSMRE). OSMRE, an office within the U.S. Department of the Interior, selected the National Academies to systematically review current coal extraction methods, the framework regulating those methods, and potential health concerns. Critics of the study point to the findings of a similar review undertaken by the National Institute of Environmental Health Sciences, made public on July 21, 2017, which determined that the current body of literature was insufficient to reach any conclusions about the safety of mountaintop removal for nearby communities.

Mountaintop removal, a form of surface mining, employs explosives to efficiently expose coal deposits that would otherwise require a large number of workers to extract over time. The excess soil and rock blasted from the mountain is placed in adjacent valleys, altering stream ecosystems through, for example, increased selenium concentrations and declines in macroinvertebrate populations.

The people of rural Appalachia experience significantly higher rates of cancer than people in the rest of the U.S., of which environmental exposures are only one potential risk factor. Widespread tobacco use, obesity, and lack of accessible medical care are all believed to underlie the cancer epidemic in Appalachia, culminating in a tangled web of risk.

It is unclear how the money from this study will be repurposed. The Obama administration once cancelled a study of surface mining to redirect funds toward examining the little-known effects of hydraulic fracturing.

(Lisa Friedman and Brad Plumer, The New York Times)

Cancer treatments

For breast cancer patients the cost of peace of mind may be both breasts

Between 2002 and 2012, the rate of women with a breast cancer diagnosis opting for a double mastectomy increased from 3% to 12%. In a majority of these cases, a lumpectomy may be medically sufficient; for many women, the choice may stem from a personal pursuit of peace of mind rather than the advice of their doctors. A mastectomy extends recovery from a few days, in the case of a lumpectomy, to 4 to 6 weeks, yet for many women a lumpectomy followed by 5 to 7 weeks of radiation therapy would offer the same long-term survivorship. Additionally, 1 in 8 women with invasive cancer in a single breast elects to remove both breasts.

The reasons for this increase are unknown. While a double mastectomy has not been demonstrated to increase survivorship, the procedure itself is relatively risk-free: breasts are not vital organs, and improvements in reconstruction methods have provided women with natural-looking cosmetic replacements. For many women, the removal of both breasts is the cost of feeling that their struggle with breast cancer is behind them. Double mastectomies, along with the reconstruction surgeries they normally require, are usually covered by insurance.

Breast cancer is the most commonly diagnosed cancer type in the U.S., and mortality from the disease decreased by 1.9% per year from 2003 to 2012. Yet for many women facing breast cancer, a double mastectomy may feel like the only empowering choice – one their doctors are willing to let them make.

(Catherine Caruso, STAT News)

Have an interesting science policy link? Share it in the comments!


Science Policy Around the Web – August 15, 2017


By: Liu-Ya Tang, PhD


Public Health

Obesity and Depression, Entwined or Not?

It might seem that obesity and depression are unrelated, since they affect different parts of the body; however, health care practitioners have observed a close relationship between the two. Obesity and depression can develop in a vicious cycle, each favoring the other. Extra weight brings anxiety to obese people, which can cause poor self-image and social isolation, known contributors to depression; meanwhile, people experiencing depression tend to overeat and avoid exercise. According to the federal Centers for Disease Control and Prevention, about 43 percent of people with depression are obese, compared with 36.5 percent of the general population. And according to one 2010 study, people with obesity have a higher risk of developing depression, and vice versa.

Both obesity and depression are chronic diseases that are hard to treat, placing a big burden on the health care system. Obesity rates in the United States are among the highest in the world: obesity alone costs almost $150 billion per year in direct expenses, a number estimated to grow by about $1.24 billion each year through 2030. The cost of treating depression is even higher, at more than $200 billion every year. If the two diseases are indeed bidirectionally comorbid, finding ways to treat both more effectively is urgent.
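
Read literally, that cost projection is a simple linear extrapolation; a one-function sketch (the 2017 base year is our assumption, as the article does not state one):

```python
def projected_obesity_cost(year: int, base_cost: float = 150.0,
                           annual_growth: float = 1.24, base_year: int = 2017) -> float:
    """Direct U.S. obesity costs in billions of dollars, extrapolating the
    reported ~$1.24 billion/year growth from a ~$150 billion baseline."""
    return base_cost + annual_growth * (year - base_year)

print(f"${projected_obesity_cost(2030):.0f} billion")  # ~$166 billion by 2030
```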

When depression and obesity coincide, combining physical and mental health interventions becomes important, an approach supported by several studies. Researchers from the University of Texas Southwestern found that patients’ depression was alleviated by weekly, physician-prescribed exercise sessions. Another study, from Duke University, found that the rate of depression in obese women was cut by 50 percent simply by helping them control their weight. The combined treatment is already being adopted: Dr. Sue McElroy, a psychiatrist in Mason, Ohio, screens patients for weight and BMI and treats obesity and depression together, tailoring her prescriptions because some antidepressants can cause weight gain. Her “self-taught” method has been welcomed by her patients, but it is not yet general practice for patients with both conditions. To benefit patients’ health and reduce the cost of treating obesity and depression, the whole health care system needs to change.

(Shefali Luthra, Kaiser Health News)


The ACA

What do people and health-policy experts think about repealing the ACA?

Since March, the Trump administration has sought to repeal and replace the Affordable Care Act (ACA), but the Senate rejected the repeal last month, with three Republican senators voting “no”. How do people feel about repealing the ACA, and what do they say the administration should do now that the Senate has failed to repeal it? Two recent reports speak to these questions.

The first is a survey conducted August 1-6 by the Kaiser Family Foundation, which captured the opinions of 1,211 adults. The analysis found that a majority of people (78 percent) think the government should make the ACA work better. Broken down by party identification, 95 percent of Democrats, 80 percent of independents, and 52 percent of Republicans hold this view; even 51 percent of President Trump’s supporters think both parties should work together to improve the health law.

The second report describes a coalition of liberal and conservative health-policy leaders making suggestions for strengthening the existing ACA, in line with the public’s favorable view. The group’s nine members come from think tanks, universities, and advocacy groups, and can be influential in the government’s health-policy formation. The coalition was founded when it appeared that the Republican-controlled Congress would pass a repeal of the ACA without a replacement plan, and it took the group eight months to come up with a five-point set of principles. The principles say the government should continue providing subsidies to insurers that extend plans to 7 million lower-income customers, and should maintain strong incentives for Americans to carry health insurance; the latter helps the cost of expensive care be shared across a stable insurance pool that includes healthy customers. The group also urges the government to bring health plans to the roughly two dozen counties that would otherwise be left without a provider in the ACA marketplace for 2018, and it intends to present its ideas to Republican and Democratic lawmakers. “We are trying to model bipartisanship so incremental steps can be taken,” said Ron Pollack, chairman emeritus of the liberal consumer-health lobby Families USA.

To prevent a potential collapse of the health insurance market, the Senate is planning a bipartisan hearing on health care in September, and in the House a group of around 40 Republicans and Democrats known as the Problem Solvers Caucus aims to make urgent fixes to the ACA. On September 27, insurers will sign contracts with the federal government over which insurance plans to sell on the marketplace for 2018, pushing Congress to come up with a solution before then.

(Phil Galewitz, Kaiser Health News, and Amy Goldstein, The Washington Post)

Have an interesting science policy link? Share it in the comments!
