Science Policy For All

Because science policy affects everyone.

Archive for the ‘Essays’ Category

Homegrown Apocalypse: A Guide to the Holocene Extinction

By: Andrew Wright, BSc

One of the unifying factors of mass extinctions is a rapid change in global average temperature. The end-Ordovician extinction, the second largest, occurred when newly forming mountains made of silicate rock quickly absorbed atmospheric CO2. The global average temperature plunged, leading to the formation of enormous glaciers, drastically lower ocean levels, and much colder waters. Since complex life was still relegated to the oceans, this killed 86% of all species. The most well-known extinction is the end-Cretaceous or K-Pg event caused in part by a massive asteroid impact in Chicxulub, Mexico. The immediate impact, roughly one billion times stronger than the atomic bombings of Japan, was devastating in its own right. However, the subsequent ejection of sulfate-bearing rock into the atmosphere was the real killer, dropping global temperatures by 2-7°C, inhibiting photosynthesis, and acidifying the oceans. Coming right after a period of global warming, this extinction killed about 76% of all species.

These extinctions pale in comparison to the end-Permian extinction, also known as the Great Dying. When Pangea was the sole continent, an enormous pool of lava called a flood-basalt plain slowly erupted over what is modern-day Siberia. Over 350,000 years, magmatic rock up to a mile thick solidified and covered an area roughly half the size of the United States. This igneous cap forced underground lava to move sideways and spread in paths called sills. As the lava traveled, it vaporized increasing amounts of carbonates and oil and coal deposits, leading to an immense build-up of CO2. Once the sills reached the edge of the cap, these gases were violently expelled, ejecting up to 100,000 gigatons of CO2. The immediate effect was a global average temperature increase of roughly 5°C. Subsequently, oceanic methane hydrate (or methane clathrate) crystals, which become unstable at high temperatures, broke down. Since methane is 20-80 times more potent than CO2 as a greenhouse gas, global average temperature increased a further 10°C, bringing the total to 15°C. This left the planet barren, desertified most of Pangea, strongly acidified the oceans, and killed 96% of marine species and 90% of all species on Earth.

We are currently living through the beginnings of the sixth mass extinction event, known as the Holocene extinction. Species are dying off 10-100 times faster than they should, and that rate is accelerating. Insects, including pollinators, are dying off so quickly that 40% of them may disappear within decades. One in eight bird species is threatened with extinction, 40% of amphibians are in steep decline, and marine biodiversity is falling off as well. At current rates, half of all species on Earth could be wiped out by the end of the century.

What is the commonality between our present circumstances and the past? As with previous mass extinctions, global average temperature has increased. Since 1880, global average temperature has risen by 0.8°C, and the rate of warming has doubled since 1975. This June was the hottest month ever recorded on Earth, with global average temperature reaching 2°C above pre-industrial levels. Greenland lost two billion tons of ice in one day. This warming is occurring because we are currently adding 37.1 gigatons of CO2 per year to the atmosphere, and that number is rising.

From the most recent Intergovernmental Panel on Climate Change (IPCC) report, we know that the best outcome is to keep the increase in global average temperature below 1.5°C. Instead, let us consider what would happen if current trends stay the same and CO2 emissions continue to increase at similar rates until 2100. This is known as the RCP 8.5 scenario. Under this paradigm, atmospheric CO2 levels will rise from 410 parts per million (ppm) to 936 ppm. The global average temperature will increase by 6°C from pre-industrial levels. That puts the Earth squarely within the temperature range of previous mass extinction periods.
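
A back-of-envelope calculation shows why a rise to 936 ppm implies several degrees of warming. The sketch below uses the standard logarithmic relationship between CO2 concentration and equilibrium temperature; the pre-industrial baseline of roughly 280 ppm and a climate sensitivity of roughly 3°C per doubling of CO2 are common round-number assumptions for illustration, not figures taken from the IPCC report cited above.

    import math

    def warming_from_co2(c_ppm, c0_ppm=280.0, degrees_per_doubling=3.0):
        """Equilibrium warming (deg C) from CO2 alone, using the standard
        logarithmic approximation: dT = S * log2(C / C0)."""
        return degrees_per_doubling * math.log2(c_ppm / c0_ppm)

    print(warming_from_co2(410))   # ~1.7 C above pre-industrial at today's ~410 ppm
    print(warming_from_co2(936))   # ~5.2 C above pre-industrial at RCP 8.5's 936 ppm

CO2 alone gets most of the way to the roughly 6°C figure; the rest of the gap in the full scenario comes from other greenhouse gases and feedbacks that a one-line formula ignores.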

Given this level of warming, the following can be expected: first and foremost, extreme heat will massively reduce glaciation, causing a surge in ocean levels. Since water expands as it warms, sea levels will rise even further, to about 12 ft above current levels. This means most coastal areas will flood perpetually while others will be completely underwater. Unfortunately, non-coastal areas won’t be free from hardship, as higher air temperatures will cause desertification, crop die-off, drought, and widespread wildfires. Second, as the ocean absorbs CO2 from the atmosphere, it will become increasingly acidic. So far, the pH of the ocean has only dropped by 0.1, but under an RCP 8.5 scenario that decrease could be as large as 0.48. Because pH is measured on a logarithmic scale, that seemingly small change means the oceans would become acidic enough to break down the calcium carbonate out of which shellfish and corals are built. Warmer water also cannot hold oxygen as effectively as cold water, meaning many water-breathing species will suffocate. In combination, these factors will eliminate a huge source of the human food supply. Finally, since weather patterns are driven by ocean and air currents, and increasing temperatures can destabilize them, massive hurricanes, dangerously cold weather systems, and flood-inducing rainfall will become the norm.
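
Because pH is a base-10 logarithmic scale, the hydrogen-ion arithmetic behind those numbers is easy to check. The snippet below is a minimal illustration; the 0.1 and 0.48 pH changes are the figures quoted above, and everything else follows from the definition of pH.

    def acidity_factor(delta_ph):
        """Factor by which hydrogen-ion concentration rises for a given drop in pH."""
        return 10 ** delta_ph

    print(acidity_factor(0.1))    # ~1.26x -- the increase already observed
    print(acidity_factor(0.48))   # ~3.0x  -- roughly a tripling under RCP 8.5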

One parallel to the end-Permian extinction might result as well. Over millions of years, methane clathrate re-stabilized in the permafrost of Siberia and in the deep ocean floor. But in what has been termed the clathrate gun hypothesis, if methane clathrate destabilizes again at high temperatures, then the resultant methane emissions and planetary warming could form a positive-feedback loop, releasing even more crystallized methane until we end up in another “great dying”. While short-term warming probably won’t cause a runaway temperature increase, a 6°C increase in global average temperature might. New research suggests methane release may not even be necessary as the ocean is reaching a critical point in the carbon cycle where it could rapidly expel an amount of CO2 on par with flood-basalt events. Moreover, like the end-Permian extinction, anthropogenic climate change is occurring on a near instantaneous geological time scale and species, including our own, will not have the requisite time to adapt.

Of course, none of these effects exists in a vacuum. They will occur alongside increasing deforestation for agriculture, plastic and chemical pollution, and resource extraction. The end result would be a planet with less space, little food, mass migration, and devastating weather. So, what can be done to stop this scenario from coming true? The latest IPCC report essentially places humanity at an inflection point. Either CO2 output is cut in half by 2030 and humans become carbon neutral by 2050, or the planet is irrevocably thrust past the point of no return.

This timeframe may seem short, but it takes into account that even if civilization were to completely stop emitting greenhouse gases today, it would take hundreds of years for global average temperature to go back down, since it takes time for the ocean to absorb CO2 from the atmosphere. Like any problem of scale, there is no one solution to reaching carbon neutrality and it will take a multivariate approach. Some solutions include enacting carbon tax measures, subsidizing and implementing renewable energy (while divesting from new coal and oil production), an increased reliance on nuclear power, large-scale reforestation, livestock reduction, and carbon-sequestration technology. Some of these efforts have come a long way and some have gone in the wrong direction.

This is, of course, a global problem to be solved. At a time when the United States has signaled its intention to withdraw from the Paris Climate Accord as soon as possible and states are rejecting carbon cap-and-trade measures, other nations are moving ahead with unprecedented boosts in renewable energy and bold commitments to reducing greenhouse gas emissions. India, the third-largest emitter after China and the United States, is on track to surpass its Paris Accord commitments. Should the United States re-engage with and lead the international effort to tackle what is an existential threat, then it is not improbable that the end of this century could be a pleasant one. So, if the idea of living through a global extinction event is disconcerting, one can be assured that the problem is still just barely a solvable one.

Have an interesting science policy link? Share it in the comments!

Written by sciencepolicyforall

July 11, 2019 at 4:24 pm

How human health depends on biodiversity

By: Lynda Truong

Image by V Perez from Pixabay 

By many measures, the Earth is facing its sixth mass extinction. The fifth mass extinction, a result of a meteorite approximately 10 km in diameter, wiped out the dinosaurs and an estimated 40-75% of species on Earth. This time around, the natural disaster that is threatening life on Earth is us.

In May, the United Nations released a preliminary report on the drastic risk to biodiversity (not to be confused with the recent report on the drastic consequences of climate change). The assessment, which was compiled by the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES), draws on information from 15,000 scientific and government sources with contributions from 145 global experts. It projects that one million species face risk of extinction. Scientists have estimated that the historical base level rate of extinction is one extinction per million species per year, and more recent studies suggest rates as low as 0.1 per million species per year. At the established base level rates, it would take one to ten million years to see the same magnitude of extinction the planet currently faces. This accelerated rate of extinction can be linked to a variety of man-made causes, including changes in land and sea use, direct exploitation of organisms, climate change, pollution, and the introduction of invasive species.
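
To make those units concrete, the sketch below converts a rate expressed in extinctions per million species-years (E/MSY) into expected extinctions per year for a pool of species. The pool size of 10,000 (roughly the number of known bird species) is an illustrative assumption, not a figure from the IPBES report.

    def extinctions_per_year(rate_per_msy, n_species):
        """Convert an E/MSY rate into expected extinctions per year for n_species."""
        return rate_per_msy * n_species / 1_000_000

    # At the background rate of ~1 E/MSY, a pool of 10,000 species would be
    # expected to lose about one species per century:
    print(extinctions_per_year(1.0, 10_000) * 100)   # ~1.0 per century
    # At 100x background -- the upper end of current estimates -- the same pool
    # loses about one species per year:
    print(extinctions_per_year(100.0, 10_000))       # ~1.0 per year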

For some, that may not seem important. If humans are not on the endangered species list, why should it matter? As the IPBES Global Assessment indicates, however, healthy ecosystems provide a variety of services, including improving air quality, purifying drinking water, and mitigating floods and erosion. The vast canopies of rainforests worldwide sequester 2.6 billion tons of carbon dioxide a year. Plants and soil microbes found in wetlands can remove toxins from water, including explosive chemicals such as nitroglycerin and trinitrotoluene (TNT). Mangrove forests serve as an important buffer against ocean storm surges for those on land. Nature is a powerful resource, and declines in biodiversity have broad implications for global development and health.

The importance of biodiversity to global health is immediately apparent in middle- and low-income countries, which rely heavily on natural remedies and seasonal harvests for health and nutrition. The loss of entire species of plants can eliminate valuable sources of traditional medicine for indigenous communities. Genetically diverse crops are more resilient to pests and disease, ensuring a stable food supply and bolstering food security. Beyond this, ecosystem disturbances also have complex implications for infectious diseases, which are often endemic to developing nations.

However, these effects are also seen in first world countries. A well-cited example of the impact of biodiversity loss on infectious disease involves Lyme disease, which is endemic to parts of the United States. The white-footed mouse is a common carrier of Lyme disease, and in areas with high densities of these mice, ticks are likely to feed on the mice and subsequently transmit the disease to humans. However, the presence of other mammals that the tick can feed on dilutes the disease reservoir, lowering the likelihood of an outbreak (commonly referred to as the “dilution effect”). While biodiversity has complicated effects on the spread of infectious diseases, drastic changes to ecosystems often provide a breeding ground for disease vectors and lead to increases in transmission.

In addition to the direct effects that declines in biodiversity have on global health, an often-neglected aspect of its importance for health is its role as a resource for biomedical science. The IPBES assessment reports that 70% of cancer drugs are natural products or inspired by natural sources such as traditional medicines. This merely scratches the surface of the influence of nature on modern biomedical research.

Much like the communities that rely on natural products as medicine, many drug compounds produced by pharmaceutical companies are derived from nature. Morphine has been one of the most revolutionary drug compounds in history, effectively treating both acute and chronic pain. The compound was originally isolated from the opium poppy, and its chemical structure has since been modified to reduce negative effects and improve potency. While the current opioid crisis in the United States has highlighted the importance of moderate use, morphine and its analogues are some of the most useful and reliable pain relievers in modern medicine. Similarly, aspirin has been regarded as a wonder drug for its analgesic, anti-inflammatory, and cardioprotective effects. Aspirin is a chemical analogue of salicylic acid, a compound originally isolated from willow tree bark. 

Beyond general pain relief, many naturally derived drugs have also been useful for disease treatment. Quinine, the first effective antimalarial drug, was extracted from the bark of cinchona trees, and quinine and its analogues are still used to treat malaria today. Penicillin, serendipitously discovered in a fungus, has been useful for treating bacterial infections and informing modern antibiotic development. These medicines and many more have been crucial to the advancement of human health, yet could have just as easily been lost to extinction.

On a more fundamental level, scientific research has benefited from many proteins isolated from nature. Thermophilic polymerases, isolated from a bacterium residing in hot springs, are now an essential component of the polymerase chain reaction (PCR) – a common laboratory technique that amplifies segments of DNA. This method is critical in molecular biology labs for basic research, and in forensic labs for criminal investigations. Fluorescent proteins, which have been isolated from jellyfish and sea anemones, revolutionized the field of molecular biology by allowing scientists to visualize dynamic cellular components in real time. More recently, CRISPR/Cas systems were discovered in bacteria and have been developed as a gene editing tool capable of easily and precisely modifying genetic sequences. These basic tools have vastly improved the scope of biomedical research, and all of them would have been close to impossible to develop without their natural sources.
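
The value of those heat-stable polymerases comes from the exponential nature of PCR: each cycle roughly doubles the number of copies of the target sequence. A minimal illustration, using generic textbook values (a single starting molecule, 30 cycles, perfect efficiency) rather than anything from the sources above:

    def pcr_copies(start_copies, cycles, efficiency=1.0):
        """Copies after PCR, assuming each cycle multiplies the count by (1 + efficiency)."""
        return start_copies * (1 + efficiency) ** cycles

    # One template molecule becomes on the order of a billion copies after 30 ideal cycles:
    print(pcr_copies(1, 30))   # 1073741824.0 -- about a billion copies (2**30)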

In addition to medicines and tools, nature has often informed biomedical research. Denning bears are commonly studied for potential solutions to osteoporosis and renal disease. Their ability to enter a reduced metabolic state where they do not eat, drink, or defecate for months at a time provides valuable insight into how these biological processes might be harnessed to benefit human health and physiology. Even more interestingly, there are a few species of frogs that become nearly frozen solid in winter, and thaw fully recovered in spring. In this frozen state, much of the water in their body turns to ice, their heart stops beating, and they stop breathing. When temperatures rise, they thaw from the inside out and continue life as usual. Crazy cryonics and immortality aside, these freeze/thaw cycles could inform improved preservation for organ transplants.

Nature is a much better experimentalist than any human, having had billions of years to refine its experiments through the process of evolution and natural selection. Depleting these living resources, which provide invaluable benefits to human health and ecosystems, lacks foresight and is dangerously reckless. The techno-optimist approach of ceaseless development, in the blind belief that whatever problem humanity encounters can be solved with research and innovation, neglects to account for the dependency of research and innovation on nature. Most biomedical scientists, most physicians, and much of the general public have probably devoted a minimal amount of consideration to the importance of biodiversity. But for the one million species currently at risk, and for the millions more yet to be discovered, it’s worth a thought.

Have an interesting science policy link? Share it in the comments!

Written by sciencepolicyforall

June 7, 2019 at 9:51 am

Gene editing: Regulatory and ethical challenges

By: Chringma Sherpa, Ph.D.

Image by Colin Behrens from Pixabay 

When power is discovered, man always turns to it. The science of heredity will soon provide power on a stupendous scale; and in some country, at some point, perhaps, not distant, that power will be applied to control the composition of a nation. Whether the institution of such control will ultimately be good or bad for that nation, or for humanity at large, is a separate question.

William Bateson, English biologist who coined the term “genetics.”

On November 25, 2018, in an allegedly leaked YouTube video, He Jiankui, a scientist at the Southern University of Science and Technology in Shenzhen, China, revealed the birth of the first gene-edited babies, created using a technology called CRISPR. There has been a general consensus in the scientific community that heritable edits should not be made, so that off-target and unwanted genetic changes artificially introduced into an individual during gene editing cannot be passed on to his or her offspring. He became the first scientist to publicly violate this consensus, resulting in an international scandal and criminal and ethics investigations into both He and his collaborators.

In the wake of He’s CRISPR-babies scandal, scientists worldwide are debating the ethical and regulatory measures that would discourage another wayward, rogue scientist like He from attempting such an irresponsible feat. At the 2nd international summit on human gene editing, which convened two days after He’s video became public, He presented his work. The summit was well attended by ethicists and journalists as well as scientists. There, David Baltimore of the California Institute of Technology, who chaired the organizing committees for both the 1st and 2nd international summits on human gene editing, read one of the conclusions from the 1st summit, held in Washington, DC, in 2015 – “It would be irresponsible to proceed with any clinical use of germline editing unless and until (i) the relevant safety and efficacy issues have been resolved, based on appropriate understanding and balancing of risks, potential benefits, and alternatives, and (ii) there is broad societal consensus about the appropriateness of the proposed application”. Baltimore called He’s work outright irresponsible on the basis of the statement from the 1st summit. Many other ethical and safety-related questions were raised at the summit, which He failed to answer or did not answer convincingly.

He’s scandal has driven various organizations to draft new guidelines and sanctions aimed at preventing unethical and unapproved use of genome editing. China has imposed new rules requiring human gene editing projects to be approved by the country’s health ministry, with fines and blacklisting for violations. Both the 2nd human gene editing summit and the WHO panel that convened in March 2019 have proposed a central registry of human gene-editing research and called for an international forum/committee to devise guidelines for human gene editing based on common norms and differences of opinion between countries. To allow time for the creation and effective implementation of new regulations, the WHO also called for a global moratorium on heritable editing of human eggs, sperm, or embryos for the next five years. Supporting the WHO panel’s recommendations, Francis Collins, director of the National Institutes of Health, said that “NIH strongly agrees that an international moratorium should be put into effect immediately”. However, not all scientists are in favor of a moratorium, as they believe it might stifle the growth of a technology that could be safe and beneficial in the near future. Jennifer Doudna of the University of California, Berkeley, one of the co-inventors of CRISPR gene editing, says that she prefers strict regulation that precludes the use of germline editing until scientific, ethical, and societal issues are resolved over an outright moratorium. David Baltimore agrees with Doudna, stating that the word moratorium was intentionally not used at either of the human gene editing summits, as a moratorium would be hard to reverse. Science historian Ben Hurlbut of Arizona State University, who had numerous discussions with He before Lulu and Nana were created, thinks a blanket moratorium on clinical germline editing would have prevented He from proceeding. The two human gene editing summits and a 2015 essay by Baltimore, Doudna, and 16 co-authors had already outlined numerous guidelines for clinical germline editing. According to Hurlbut, He weighed these criteria and, believing that his procedure met all the guidelines, proceeded. A categorical prohibition of germline editing would not have allowed him to use his subjective judgment and act out of self-interest.

The modern debate over CRISPR editing is not the first time the scientific community has come together to discuss game-changing biological technologies, and it is heavily informed by two prior events. In 1970, Paul Berg and his postdoctoral researcher David Jackson used recombinant DNA technology to create the first chimeric DNA. This invention created an uproar among scientists and the general public, who feared that the technology would lead to the creation of uncontrollable and destructive superbugs, exaggerated versions of which can be seen in some science fiction movies. Yielding to the opinions and sentiments of fellow scientists, Berg refrained from cloning such recombinant DNAs, and in 1974 he pleaded for a voluntary moratorium on certain kinds of recombinant DNA research until their safety issues had been resolved. He also moved quickly to organize the Asilomar conference (Asilomar II) in 1975, which resembled the 2nd human gene editing summit in that it invited not only scientists but also lawyers, ethicists, writers, and journalists to weigh in on the risk-benefit analysis of recombinant DNA technology. On the recommendation of the Asilomar conference, Donald Fredrickson, then director of the National Institutes of Health (NIH), initiated the formation of the Recombinant DNA Advisory Committee (RAC) to act as a gatekeeper for all research that involved recombinant DNA technology. The scope of the committee, which was composed of stakeholders including basic scientists, physicians, ethicists, theologians, and patients’ advocates, was later expanded to encompass the review and approval of human gene therapy research. Due to the redundancy of regulatory oversight between the US Food and Drug Administration (FDA) and the RAC, the RAC was scaled back in 2019 to serve solely as an advisory body on the safety and ethical issues associated with emerging biotechnologies.

While this is a successful example of scientific self-regulation, the second event resulted in a major setback for the field of gene therapy. On September 13, 1999, Mark Batshaw and James Wilson of the University of Pennsylvania supervised the administration of adenovirus to 18-year-old Jesse Gelsinger in a gene therapy clinical trial. Gelsinger died of liver and kidney failure and brain damage three days later. Like the birth of the CRISPR babies, Gelsinger’s death was an instance where new technology was used prematurely, without a thorough assessment of its safety profile. It is suspected that the clinical applications headed by He and Wilson might also have been motivated by fame and financial gain; both had financial stakes in private biotechnology companies that would benefit from these human trials. In the aftermath of Gelsinger’s death, Wilson was banned from carrying out FDA-regulated clinical trials for the next five years, nearly all gene therapy trials were frozen, and many biotechnology companies carrying out these trials went bankrupt. This was a dark period in the history of gene therapy, and it would take almost another decade of introspection, reconsideration, and more basic experimentation for gene therapy to re-emerge as a viable therapeutic strategy.

Figure 1: The regulatory status of human germline gene modification in various countries. Thirty-nine countries were surveyed and categorized as “Ban based on legislation” (25, pink), “Ban based on guidelines” (4, faint pink), “Ambiguous” (9, gray), and “Restrictive” (1, light gray). Non-colored countries were excluded in this survey. Adapted from Araki, M. and Ishii, T (2014): “International regulatory landscape and integration of corrective genome editing into in vitro fertilization” Reproductive Biology and Endocrinology, 2014 12:108

Scientists at both the Asilomar and human gene editing conferences passionately debated the safety of the relevant technologies but largely set aside the biggest ethical issue associated with them – the ultimate creation of designer babies. That gene editing sits on a slippery slope to eugenics has been recognized since the days of Charles Darwin and Gregor Mendel, when the study of genes and heredity was still in its infancy and the discovery of DNA as the genetic material was half a century away. One of the earliest proponents of genetic manipulation for human benefit was Francis Galton, Charles Darwin’s cousin, who proposed an unnatural and accelerated selection of beneficial traits through marriage between people with desirable traits. The danger that someday a rogue scientist might use germline gene editing technology in the service of eugenics lurks in the minds of those who understand the potential of currently available gene editing technologies. Even more fearsome is the idea that a wave of positive eugenics would soon give way to negative eugenics – the elimination of undesirable traits – as it did in the era around World War II, exemplified by the famous case of Carrie Buck, a woman who was designated “mentally incompetent” and involuntarily sterilized.

Various countries have their own regulations and legislation on germline editing to prevent any backlash from this powerful technology. Figure 1 presents a summary of the regulatory landscape of germline gene modification surveyed in thirty-nine countries by Araki Motoko and Tetsuya Ishii. In the US, Congress has shown strong opposition to germline gene editing. In 1996, it passed a rider as part of the annual appropriations bill that prohibits the use of federal funds for any research involving human embryos. In another appropriations bill, passed in 2015, Congress banned the FDA from considering applications involving therapeutic modification of the human germline.

Human gene editing holds great promise for treating many life-threatening and previously intractable diseases. Only when this discipline of science is held to high ethical standards and regulated sensibly at the international, national, and personal levels will we reap the benefits of this powerful technology.

Have an interesting science policy link? Share it in the comments!

Written by sciencepolicyforall

May 29, 2019 at 9:25 am

Posted in Essays


How are we welcoming our next generation? The first 1000 golden days

By: Deepika Shrestha, Ph.D.

Source: Wikimedia

The most important period for a child’s development, especially for the brain and immune system, is the first 1000 days of life. The Developmental Origins of Health and Disease (DOHaD) hypothesis suggests that the roots of many complex diseases and behavioral risks originate very early – in the window between pre-conception and the early postnatal period. Indeed, genetic, epigenetic and environmental evidence indicates that much of adult health and disease risk is coded during fetal development in the intrauterine environment, lending support to the DOHaD hypothesis.

The increased prevalence of an obesogenic environment and of chronic diseases in mothers raises the risk of childhood obesity and future cardiometabolic disease in the offspring. Moreover, increased risks can be transgenerational within families. This is also the case for autism, attention deficit hyperactivity disorder (ADHD), and other mental and psychological disorders. Given that over 50% of all mothers are overweight or obese, there is growing cause for concern about the quality of fetal growth and development. Further, recent statistics reveal a sobering picture of increased incidence of various maternity-related illnesses in the US, such as postpartum depression (CDC reports: as high as 1 in 5 women) and maternal mortality (CDC MMR 2019 report: about 700 deaths per year, of which roughly 3 in 5 were preventable; 31.3% of deaths occurred during pregnancy, 16.9% on the day of delivery, and 51.8% within one year postpartum). These numbers show that many women in the US are among the most vulnerable and need sufficient support from family, society and governmental policy. There have been successful campaigns in developing countries, such as the Golden 1000 Days initiative, to institute policies raising awareness of issues concerning fetal growth, development and maternal/infant nutrition. The United States needs similar programs with a special focus on maternal care, especially on the nutritional, psychological, mental and financial needs of pregnant mothers and women of reproductive age.

Another program has also been put in place to improve nutrition in mothers and children. The Women, Infants, and Children (WIC) program provides supplemental nutritional support to roughly 8 million low-income mothers and young children under 5 years of age. With approximately 6 billion USD of funding in 2016, the current WIC program is the result of a 2009 update, after rigorous review by the Institute of Medicine (IOM), to reflect the latest nutritional science as well as public health concerns. However, recent evidence indicates that the food and nutrient supplements provided through the WIC program might not match the nutritional needs of participants, as the program fails to account for women’s prepregnancy obesity status, gestational weight gain, and gestational diabetes. For instance, concentrated fruit juice may increase the risk of gestational diabetes and is not a healthy food option. This important policy needs fair re-evaluation based on updated scientific evidence about nutritional needs.

Another point of concern for expecting mothers is the lack of psychological care. Mothers-to-be undergo extensive physiological and psychological changes during pregnancy. Therefore, this 1000-day window is a sensitive time period—a time when pregnant women require support and potential intervention. Recent data highlight increasing trends in maternity-related illnesses, be it postpartum depression or maternal mortality. More importantly, in the US these issues disproportionately affect women of color or low socioeconomic status. Alarmingly, 42% of mothers are sole or primary earners and may lack adequate financial support from their spouse and family. The Pregnancy Discrimination Act and the Family and Medical Leave Act (FMLA) were put in place to protect pregnant women in the workplace and to provide 12 weeks of job-protected leave after childbirth. However, the United States is one of the few countries in the world with almost no access to paid parental leave—only 14% of civilian workers had access to any amount of paid parental leave in 2016, a slight increase from 11% in 2010.

Access to paid parental leave is currently an elite benefit and is dependent upon company policy. The most generous policies afford 16 weeks for the birth mother, 8 weeks for the birth father, and 8–16 weeks for adoptive parents (16 for the primary caregiver, 8 for the secondary), according to PL+US’s report. About 23% of mothers go back to work within 10 days of giving birth, and they are disproportionately from low-income families. In addition to paid maternity leave, recent mothers often require considerable sick leave during the first year, and providing a flexible policy could be a steppingstone toward supporting the psychological as well as physiological health of a child. Furthermore, it is no secret that inspected and reputable day care facilities take a major chunk of family income and are unaffordable for many families.

In addition, there is also a need for newborn rights. Irrespective of socioeconomic status, each baby has inborn rights and deserves equal family bonding time and support for breastfeeding. Children who receive poor maternal care during pregnancy and lactation are at increased risk of neurological problems, poor school achievement, early school dropout, low-skilled employment, and providing poor care to their own children, thus contributing to the intergenerational transmission of poverty and malnutrition. On the contrary, children who get good nutrition and care in their first 1000 days are ten times more likely to overcome life-threatening childhood diseases, have higher educational retention in school, are likely to earn 21% more in wages as adults, and tend to have healthier families of their own. Therefore, there is an unmet need for a stronger policy that invests in children and their families from the very beginning and helps each child become a healthy, contributing member of society in adulthood. It takes sustained and significant effort to raise a child into a healthy adult who is mentally, spiritually and physically fit to thrive in a productive society.

Paid maternity leave and insurance coverage need support from the government and Congress, extended to mothers or family units regardless of the beneficiary’s work status. Investing in this policy may cost taxpayers a small percentage of GDP (gross domestic product) but will have a huge return in the long run when health is valued as a development index. There is ample evidence that countries that fail to invest in the well-being of women and children in the first 1,000 days lose billions of dollars to lower economic productivity, health issues, societal inequality and higher health costs. This is a main point of concern given that the US is lagging far behind other developed nations in the human development index (HDI). Currently, there is a huge disparity in investment in the first 1000 days based on socioeconomic status, and there is a clear and unmet need for structural and policy intervention. Issues related to the maternity period such as nutrition, mental health, and paid maternity (or paternity) leave should not be considered only women’s issues. Therefore, more than ever, there is a heightened need for research resources to understand maternal health issues and for concrete plans to address them.

Have an interesting science policy link? Share it in the comments!

Written by sciencepolicyforall

May 24, 2019 at 2:21 pm

Breast cancer screening: How do we maximize benefit while minimizing harm?

By: Catherine Lerro, Ph.D., M.P.H.

Image by Bruno Glätsch from Pixabay 

Breast cancer is the most commonly diagnosed tumor in US women, and the second leading cause of cancer death in women with an estimated 268,600 diagnoses and 41,760 deaths predicted for 2019. Despite these seemingly sobering numbers, mortality due to breast cancer has declined over the past several decades. Today, women diagnosed with early stage disease are about 99% as likely to be alive five years after their diagnosis as cancer-free women. These declines in cancer death have been largely attributed to both improvements in treatment and successful implementation of mammography (breast x-ray) screening programs, considered a hallmark of preventative cancer care. Some researchers estimate that upward of 380,000 breast cancer deaths have been averted since 1989 due to mammography and improved breast cancer treatment. In fact, the Affordable Care Act has provisions to ensure that women with private health insurance, public health insurance (e.g. Medicare, Medicaid), or health insurance purchased through a state exchange are covered for breast cancer screening. 

The idea behind mammographic screening is that breast cancers diagnosed at an early stage are more likely to respond well to treatment, preventing cancer-related death. While not all cancers have population-wide screening programs, breast cancer is a good candidate for screening. First, breast cancer is common enough to warrant subjecting women to a mammogram at regular intervals during a defined period of known risk. If a disease is very rare, it is likely not a good candidate for population-wide screening because the costs would outweigh the potential benefit. Second, there must be a good test available that is both sensitive and specific. In other words, the test should detect as many true cases as possible, while minimizing the number of patients with false-positives that require more invasive testing such as a biopsy. Finally, there must be some benefit to detecting disease early. For breast cancer, women with early stage disease may be more easily treated and have better prognosis compared to women with distant-stage disease.
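
The sensitivity/specificity trade-off is easier to see with numbers. Below is a minimal sketch of the positive predictive value of a hypothetical screening test; the 85% sensitivity, 90% specificity, and prevalence figures are illustrative assumptions, not published mammography statistics. It shows why screening a population for a very rare condition yields mostly false positives.

    def positive_predictive_value(sensitivity, specificity, prevalence):
        """Probability that a positive screening result is a true positive (Bayes' rule)."""
        true_pos = sensitivity * prevalence
        false_pos = (1 - specificity) * (1 - prevalence)
        return true_pos / (true_pos + false_pos)

    # Hypothetical test: 85% sensitivity, 90% specificity.
    # Very rare condition (1 in 10,000): almost every positive is a false alarm.
    print(positive_predictive_value(0.85, 0.90, 0.0001))  # ~0.0008 (<0.1%)
    # More common condition (1 in 100): a positive result is far more informative.
    print(positive_predictive_value(0.85, 0.90, 0.01))    # ~0.079 (~8%)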

Currently, mammograms are recommended for much of the adult female population in the US over the age of 50. Many different organizations release breast cancer screening guidelines on a regular basis, including (but not limited to) the US Preventive Services Task Force, the American Cancer Society, the American College of Obstetricians and Gynecologists, and the American College of Radiology. While the recommendations share some similarities, there are important differences and no one guideline is universally accepted. For example, for women ages 50-74, the US Preventive Services Task Force recommends biennial mammograms, while the American College of Radiology recommends yearly mammograms. These differences may arise from the data used to develop the guidelines and how the data are valued. For example, the US Preventive Services Task Force counts mortality reduction as the sole benefit of mammography and considers potential risks such as false-positive tests. The American College of Radiology considers other mammography benefits beyond mortality reduction, such as less aggressive treatment for early stage cancers. The American College of Radiology has also recently amended its guidelines to consider race, with the option to screen African American women, who are at greater risk of more aggressive breast cancers, starting at younger ages at the discretion of both the patient and physician.

Understanding how and if breast cancer screening guidelines are integrated into clinical practice is a murkier area still. In recent years, most major guidelines recommend less routine screening and have endorsed a more individualized approach that involves discussion of the benefits and harms of screening and incorporates patient preferences and beliefs, especially for younger women. However, studies have found that despite these changes in recommendations, breast cancer screening in practice in the US has changed very little. This may be driven by US health system traits, such as fee-for-service payment systems and concerns about litigation. Furthermore, both clinicians and patients may overestimate the benefits and underestimate the harms of mammography, particularly for younger women.

The benefits of diagnosing breast cancer early cannot be overstated, as response to treatment and survival depend greatly on stage at diagnosis. However, the potential harms of screening are often overlooked. Of course, there are economic costs incurred for any wide-scale screening program. Just as importantly, we should seriously consider the physical and emotional costs of overdiagnosis and overtreatment. A 2018 report in the Journal of the American Medical Association found that for every 10,000 women screened for breast cancer, more than half of those under the age of 60 will experience at least one false-positive test result. Almost 10% of women will undergo at least one unnecessary biopsy. Additionally, the authors demonstrated that through screening, more women were potentially overdiagnosed (cancers diagnosed and treated that would have never become clinically evident) than deaths were averted. There may be psychological consequences to false positive test results, including both short-term and long-term anxiety. Unnecessary biopsy and overdiagnosis could potentially have long-term physical health consequences that would otherwise be avoided.
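
The “more than half” figure becomes intuitive once per-screen false-positive rates are compounded over years of repeat screening. The sketch below is purely illustrative: the 7-10% per-screen false-positive rates and the ten rounds of screening are assumptions chosen for the example, not numbers taken from the JAMA report.

    def prob_at_least_one_false_positive(per_screen_fp_rate, n_screens):
        """Chance of at least one false positive across repeated screens,
        assuming each round is independent (a simplification)."""
        return 1 - (1 - per_screen_fp_rate) ** n_screens

    # Ten rounds of screening at a 7-10% false-positive rate per round:
    print(prob_at_least_one_false_positive(0.07, 10))  # ~0.52
    print(prob_at_least_one_false_positive(0.10, 10))  # ~0.65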

How do we improve mammography screening in the US, maximizing the benefits while minimizing the risks? What is clear is that there is no simple solution. In a health system that largely favors more testing at potential cost to patients, institutional changes in how health insurance reimburses clinicians for care should consider looking beyond fee-for-service models. The newest breast cancer screening guidelines also favor individualized approaches, prioritizing screening among high-risk women and educating patients about the potential benefits and harms of screening with full consideration of their own medical history and preferences. Clinicians may consider tools that utilize detailed patient information to assess an individual patient’s risk of breast cancer, as well as tools soliciting patient preferences that support shared decision-making. Finally, it is important that all women requiring regular mammograms have access to breast cancer screening and high-quality treatment regardless of age, race, geographic location, or socioeconomic status, in order to minimize disparities in stage at diagnosis and breast cancer survival. 

Have an interesting science policy link? Share it in the comments!

Written by sciencepolicyforall

May 15, 2019 at 11:30 am

Posted in Essays


Living in America with a chronic disease: Drug prices here and why they are so high.

By: Mohor Sengupta, Ph.D.

Image by Liz Masoner from Pixabay 

The USA has the highest average drug prices of all developed nations across the globe. The average expenditure on drugs per person is around $1200 per year in the U.S., while it is roughly $750 in Canada, according to a 2014 survey. Let us look at a specific example. Nexium is a drug that helps reduce stomach acidity. It is manufactured by AstraZeneca in Sweden and sold to customers in the U.S., Canada, U.K., Australia, New Zealand, India and Turkey. The 40 mg pill costs $3.37 in Canada, $2.21 in the U.K., Australia and New Zealand, less than 37 cents in India and Turkey, and $7.78 in the U.S. Specialty medicines, like those used for cancer, can cost $10,000 a month in the U.S.

Fred Smith, whom I interviewed recently, is a 26-year-old freelance musician and trumpet instructor. Shortly after his 26th birthday, his health insurance coverage under his mother’s provider plan ended. He went on to buy his medical insurance from the private provider Blue Cross Blue Shield only to realize that he had to pay nine times the cost for each of two medicines, Vyvanse and Viibryd, and 18 times the cost for a third medicine, Adderall, compared to the amount paid while on his mother’s insurance.

So why do Americans pay more for their medicines? 

  • Drug manufacturers in the U.S. can set the price of their products. 

While this is not the norm elsewhere in the world, federal law in the U.S. does not allow the FDA or public insurance providers to negotiate drug prices with manufacturers. Medicare Part D, created by 2003 legislation, prevents the nation’s largest single-payer health system from negotiating drug prices. Medicaid, the public healthcare program for people with limited income and resources, must cover all FDA-approved drugs, irrespective of cost. However, drug makers must provide rebates to the government for drugs billed to Medicaid. In general, the biggest cost of medicines is borne by Medicare and private insurers. Private insurance providers do not usually negotiate prices with drug manufacturers. This is because middlemen, or third-party pharmacy benefit managers that administer prescription drugs, such as CVS Health, receive payments from drug companies to shift market share in favor of these insurers. These deals also leave consumers with limited choice.

Drug makers in the U.S. not only set their own prices but they are also authorized to raise prices. Martin Shkreli became the “most hated man in America” overnight when he raised the price of a generic anti-parasitic drug Daraprim from $13.5 a pill to $750 a pill, a 5000% increase. Mr. Shkreli explained to critics that the hike was warranted because Daraprim is a highly specialized medicine and likened it to an Aston Martin previously sold at the price of a bicycle. He added that the profits from the price increase would go into improving the 62-year-old recipe of the drug. 

Deflazacort, a steroid used to treat Duchenne muscular dystrophy, is a generic compound that has been available worldwide for decades and costs $1000-$2000 per year. Yet, Illinois-based Marathon Pharmaceuticals acquired FDA approval to sell deflazacort under the brand-name Emflaza at $89,000 per year. 

Speaking of generic drugs, here is the next big reason for unaffordable brand-name medicines. 

  • Government-protected monopolies for certain drugs prevent cheaper generics from entering the market. 

The U.S. has a patent system that allows brand-name drug makers to retain exclusive selling rights for 20 years or more. Makers of drugs for rare diseases can also enjoy indefinite monopoly of sale. Moreover, these rare drug makers can extend their solo market dominance by making minor and non-therapeutic modifications to the patented product, like changing the dye component in the coating. They also often pay generic manufacturers to delay their products from entering the market. 

Additionally, FDA approval of generics following expiration of brand-name drug patents can be a long process; it can take up to 3-4 years for generic drug manufacturers to get FDA approval. It is estimated that prices of generic medicines fall to 55% of the brand-name medicine price once two generics enter the market and 33% of the brand-name cost when five generics become available. 

But why would a brand-name manufacturer, protected by a patent, set an unaffordable price to begin with?

  • Unjustified costs of research and development are cited by drug makers. 

It is generally agreed among critics that drug makers put an unjust price on their products, citing the research that went into producing them. Because most of the R&D is funded by the National Institutes of Health via federal grants or by venture capital, the cost of research cited by drug makers is exaggerated. In reality, companies spend no more than 10-20 percent of their revenue on research. 

Sofosbuvir was made by Michael Sofia, a scientist with a Princeton-based pharmaceutical company called Pharmasset. He even received the 2016 Lasker-DeBakey Clinical Medical Research Award for inventing it. Sofosbuvir is recommended for the management of hepatitis C. After Gilead Sciences acquired Pharmasset for $11 billion in 2011, it applied to the FDA for a new drug combining sofosbuvir and ribavirin, the latter first made in 1972 by scientists at International Chemical and Nuclear Corporation (now Canada-based Bausch Health Companies). Gilead priced its product at $84,000 for a single course of treatment in the U.S. The pricing caused a huge controversy when patients on Medicaid were denied the drug until they became seriously ill. Moreover, generic licensing agreements to produce sofosbuvir in 91 developing countries, which are home to more than half of the world’s hepatitis C patients, came under fire when Gilead asked for prices unaffordable to consumers in those countries. 

This brings us to the final cause of high drug prices. 

  • Doctors often prescribe brand-name combination drugs when cheaper generic components exist. 

Doctors are often unaware that their prescriptions could be cheaper for their patients if they purchased two generic medicines instead of the brand-name prescription drug that is just a combination of the two. Vimovo, manufactured by Horizon Pharma, is a drug used to treat symptoms of osteoarthritis, rheumatoid arthritis, and ankylosing spondylitis. It is a combination of two generic medicines, naproxen (brand-name Aleve) and esomeprazole (brand-name Nexium). Naproxen is the anti-inflammatory component (an NSAID) and esomeprazole is the aforementioned stomach-acidity reducer, added to the combination to reduce side effects of the NSAID. Whereas a month’s supply of Aleve and Nexium cost one patient $40, his insurance company was billed $3252 for the same supply of Vimovo. Moreover, not everyone who uses NSAIDs experiences stomach problems, and many do not need the additional esomeprazole component. 

Many Americans do not fill their prescriptions because they cannot afford to. Data show that 36 million Americans between the ages of 18 and 65 did not fill their prescriptions in 2016. Many resort to buying medicines online from foreign sellers or getting them imported. Both routes are illegal, and therefore we do not know the exact percentage of the population participating in these practices. 

I interviewed Tammy Connor, who regularly gets her medications from abroad. Tammy takes Synthroid, a brand-name drug, which is used to manage symptoms of hypothyroidism. She has been procuring it from Canada at 1/3rd its U.S. price for many years. In the middle of 2018, the U.S. began blocking drug purchases from Canada, preventing her from continuing this cost-saving practice. Eventually, she got a referral to a U.K.-based drug company called Medix Pharmacy, where she pays 1/3rd the amount that she would have to pay if she purchased Synthroid from the U.S. “Ironically, Medix gets its Synthroid supply from Canada”, Tammy said.

“Big Pharma” is a major lobbying group in the U.S. It is a group of a few gigantic pharmaceutical companies which have together kept their profit margins rising amid public outcry over drug unaffordability. Big Pharma also includes corporations that push overpriced drugs to customers. With their deep pockets, they can spend astronomical amounts on advertising and lobbying. 

Unaffordable prices of life-saving medicines cause many people to skip necessary medications, thanks to Big Pharma. Now, more than ever, is the time to do something about it. 

Recommended links: 

  1. http://money.com/money/4462919/prescription-drug-prices-too-high/
  2. https://jamanetwork.com/journals/jama/article-abstract/2545691
  3. https://www.cnbc.com/2017/05/10/americas-10-most-expensive-prescription-drugs.html
  4. https://www.renalandurologynews.com/home/news/almost-1-in-10-americans-cant-afford-medications-says-cdc/

Have an interesting science policy link? Share it in the comments!

Written by sciencepolicyforall

May 9, 2019 at 4:23 pm

Keep your head in the game… or don’t: The link between football and brain injury

By: Saroj Regmi, Ph.D.

Image by WikiImages from Pixabay 

American football is a team sport that enjoys wide popularity and an extensive fan following. For over 30 years it has reigned as the most popular sport in the US. In recent years, however, it has remained at the forefront of controversy due to growing concern over long term health effects.

The safety of football was initially brought into question by a study in 2002 from Dr. Bennet Omalu, a neuropathologist working in Pittsburgh. Dr. Omalu, whose efforts were portrayed in the popular movie Concussion starring Will Smith, discovered the link between chronic traumatic encephalopathy (CTE) and American football players. He performed an autopsy of former Pittsburgh Steelers player Mike Webster and established that CTE, a disease previously ascribed to boxers, also occurred in football players. Forensic analysis of the brain of Mike Webster, who struggled for years with mood disorders, depression, and suicidal thoughts – symptoms associated with CTE – showed large accumulations of tau protein. Although the pathogenesis of CTE remains poorly understood, it is believed that clumping of the protein tau, also seen in Alzheimer’s patients, leads to the death of brain cells. A series of publications have followed since 2002, including post-mortem analyses of the brains of other former NFL players such as Terry Long, Justin Strzelczyk, Andre Waters and Tom McHale. From all these studies, the message is loud and clear: there is a strong link between tackle football and CTE.

With each new scientific report, the relationship between CTE and contact football has become clearer. A recent study in the Journal of the American Medical Association involving the brains of deceased people who played football at various levels, from high school to the NFL, identified CTE in 87% of the players. Even more remarkably, it identified CTE in 99% of NFL players – a shocking number. In the study, the authors also argue that disease risk and severity might be a result of age at first exposure to football and the duration of play, as well as various other factors. This suggests that even limited exposure to contact football can significantly increase the chances of suffering from CTE. 

CTE, also referred to as “punch drunk syndrome”, is not yet treatable, and research studies on the disease have been limited. Investigation of CTE pathogenesis is further complicated by the fact that a definitive diagnosis is only possible post mortem. Given the widespread impact of the disease, researchers have recently pushed to identify biomarkers of CTE in living patients. A recent collaborative study between the Concussion Neuroimaging Consortium and Orlando Health tested blood-based biomarkers and identified elevated levels of microRNAs in the blood of college football players. The report, published in the Journal of Neurotrauma, demonstrated that these biomarkers were elevated in these players even prior to any head injury that season. This means that head injuries have a lasting effect and that these biomarkers can identify head injuries incurred in previous seasons. Cognitive tests involving study participants demonstrated that the players who struggled with memory and balance had much higher levels of microRNAs than those who did not. Over the years, the researchers hope to use these microRNA biomarkers to identify at-risk athletes.

A recent report, published in The New England Journal of Medicine, has been a game changer in our understanding of CTE in NFL players. By taking brain scans of 26 former players with varying levels of symptoms associated with CTE, the study took an unbiased approach to analyzing the severity of CTE in professional NFL players. The study used positron emission tomography (PET) scans to determine that NFL players had higher levels of abnormal tau protein in disease-associated parts of the brain in comparison to men of similar age who had not played football. In contrast to some previous studies, the report did not reveal a correlation between the severity of tau accumulation and the degree of cognitive issues associated with CTE. A correlation between tau accumulation and total years of playing football was seen, however. Therefore, while tau deposition can serve as a biomarker of CTE, the level of tau accumulation does not determine the severity of the disease. Interestingly, the study also found one former player who had levels of amyloid-beta deposition comparable to those of an Alzheimer’s patient. While the study provided a lot of answers, it also raised a wealth of new questions. It is still unclear whether tau accumulation is faster in people with repeated head trauma. Also, how the accumulation of tau leads to the behavioral alterations associated with CTE remains a complete mystery. Although the report was careful to highlight that this imaging-based approach is still in its infancy and that it could take years to develop a proper diagnostic test for the disease, the results of the analysis are definitely encouraging. This is the first reported study to utilize tau imaging in living players. 

A major takeaway from these studies is that although CTE remains poorly characterized, with symptoms ranging from forgetfulness to suicidal thoughts, it is almost invariably caused by concussions and head injuries resulting from contact football. What is terrifying is that CTE can occur not only in professional football players but also in high school students who play football. These reports bring into question the safety of the sport in its current state. With over a million high school students engaged in the sport, a radical rethinking of the game is required to make it a safe and fun activity that youngsters can partake in without risking their health. 

Recently, the Canadian league instituted a ban on full-contact practices to reduce collisions during practice. The league has also increased the time between games so that players are afforded a longer recovery time. Similar approaches have also been taken by the Ivy League. There is also a need for policies to ensure that the general public is aware of the risks, particularly children and their parents. HEADS UP is one such program, initiated by the CDC, that provides online training courses for health care providers and high school sports coaches. Efforts have also been made in recent years, at both the state and federal levels, to reduce concussions in youth. Although not monumental, these efforts are an important step in the right direction.

Have an interesting science policy link? Share it in the comments!

Written by sciencepolicyforall

May 1, 2019 at 4:39 pm

Posted in Essays
