Science Policy For All

Because science policy affects everyone.

Archive for the ‘Essays’ Category

Maternal Mortality on the Rise in the United States


By Kathleen Huntzicker, ScB

Image by Sasin Tipchai from Pixabay

Despite leading the world in healthcare spending, the United States has the highest maternal mortality rate of any developed country. According to the Centers for Disease Control and Prevention (CDC), about two women die every day in the United States due to complications with a pregnancy or a delivery. The United States also holds the unfortunate distinction of being one of the only countries in the world with a rising maternal mortality rate – while most other countries have improved maternal health over time, the death rate of pregnant or recently delivered women in the United States has doubled since 1987. Growing activism has helped to raise awareness of these sobering statistics; however, the United States has yet to meaningfully curtail its rising number of deaths.

The CDC defines a pregnancy-related death as “the death of a woman while pregnant or within one year of the end of a pregnancy – from any cause related to or aggravated by the pregnancy or its management, but not from accidental or incidental causes.” Although the United States has done little to decrease the occurrence of these deaths over the past four decades, it has successfully tracked pregnancy-related deaths for over 100 years. Since 1915, the National Vital Statistics System has published mortality rates for each state using data collected from death certificates. While these numbers help to illustrate gross trends over time, they can only provide limited insight into what factors might drive changes in mortality rate. Additionally, societal perceptions of what constitutes an “accidental” death have changed markedly since 1915. Just decades ago, a drug overdose following a pregnancy might have been considered an accident, whereas doctors today recognize that the mental health of a woman is not independent of her pregnancy status. To provide more accurate and standardized data sets, state and local governments began forming Maternal Mortality Review Committees (MMRCs) with the goal of developing novel strategies to combat peri- and post-partum maternal death. By 1968, 44 states and the District of Columbia had instituted their own versions of an MMRC. Sadly, by 1988, this count had dropped to only 27. Upon the recommendation of the CDC, the United States then pioneered the Pregnancy Mortality Surveillance System in 1986 to collect death certificates and other related health information from all fifty states. This system was instituted in the hopes that a comprehensive study of all maternal deaths might fill in gaps in the current understanding of maternal mortality, allowing both lawmakers and medical professionals to better focus on the most vulnerable populations of women and to determine what types of community interventions might be most successful.

These monitoring systems have revealed that not every woman experiences the same odds of suffering a pregnancy-related death. In fact, the mortality rate for non-Hispanic black and Native American women is more than triple that of white, non-Hispanic women, with the disparity only increasing in older age groups. Additionally, recent data indicates that women who receive no prenatal care are three to four times as likely to suffer a pregnancy-related death as women who do attend prenatal doctor visits prior to delivery. Women without prenatal care represent 25% of all pregnant women in the United States, but 32% of black women and 41% of Native American women. A study out of Mount Sinai School of Medicine found that maternal deaths were not consistent across hospitals, either, and that hospitals mainly serving minority women were more likely to experience a pregnancy-related death, even when controlling for the race of the patient. This suggests the racial disparities could depend not only on access to health care, but also on inconsistencies in how pregnant women are treated from hospital to hospital.

One glaring example of such a medical inconsistency is the rate of cesarean sections across medical centers. While many pregnant women might assume that their doctor’s decision to perform a c-section is dependent only on the health of the mother and child, collected statistics appear to suggest otherwise. Rates of c-section by hospital vary drastically: a team from the University of Minnesota found that nationwide, rates for individual hospitals ranged as low as 7% and as high as 70%. Even when treating women with low-risk pregnancies, some hospitals were more than fifteen times as likely to perform a c-section as others. These statistics are especially worrying given that c-sections are linked to severe complications in delivery, including hemorrhages, infection, and surgical injury. In fact, women who undergo c-sections are 80% more likely to experience severe complications than women who deliver vaginally. In the United States, about one in three women deliver via c-section, a rate 500 times higher than it was 50 years ago, and about 50% higher than the rest of the world. With c-section rates continuing to rise, it might be reasonably expected that the rates of maternal morbidity and mortality could rise as well.

Though the statistics might seem grim, some policy measures appear to hold promise for improving future outcomes. California’s Maternal Quality Care Collaborative (CMQCC) succeeded in cutting its state’s maternal mortality rate in half, largely due to the addition of emergency delivery toolkits in over 200 hospitals across the state. The collaborative team targeted the two most preventable causes of maternal death – hemorrhages and adverse cardiovascular events – by designing best-practice protocols for peripartum emergencies and training hospital teams to carry them out. The CMQCC recommends that every maternity ward feature a cart with all the necessary equipment to treat an emergency hemorrhage, including sponges, clamps, and a Rusch balloon to staunch heavy uterine bleeding. These toolkits have now been implemented across eighteen states with much success. However, even in California, which now boasts the lowest maternal mortality rate in the country at 4 deaths per 100,000 women, racial and economic disparities unfortunately still exist, indicating that simply adding a hemorrhage cart to all hospitals cannot be the only answer to the rising maternal mortality rates in the United States.

It remains clear that the federal government, as well as state and local governments, must do more to address the alarming rate of maternal deaths in the United States. Low-income and minority women are three to four times more likely to die as a result of pregnancy or childbirth, so it is imperative that any policy solution adequately address the vastly different standards of care received by American women depending on their race, chosen hospital, or state of residence. With governmental access to increasingly large data sets and growing public and political awareness of the issue, we can be hopeful that the maternal mortality rate in the United States will decline in the coming years. Even so, the state of maternal health in the country is a sobering reminder of the stark health disparities that exist between races, neighborhoods, hospitals, and income levels – and, particularly in the case of maternal mortality, the unique health risks and dangers experienced by American women.


February 22, 2020 at 8:50 am

The Challenge of Global Health Diplomacy


By: Somayeh Hooshmand, PhD

Image by Jukka Niittymaa from Pixabay

Improving population health is a central concern of all human societies. In the past, it was enough for a nation to act on its own to improve the health of its citizens. In today’s globalized world, however, a wide range of health issues no longer respect borders or walls. Migration and mobility of people within and between countries, the high volume of trade, and the flow of information and capital across geographic boundaries can spread diseases (such as polio, anthrax, HIV/AIDS, SARS, and pandemic flu) and threats of bioterrorism quickly, affecting many countries simultaneously. Consequently, many health issues cross national borders, cannot be resolved by any one country acting alone, and do not allow other nations to stand aside. Addressing them requires a wide array of activities between nations, collaboration between many sectors (both governmental and nongovernmental), and better communication among nations to strengthen economic growth, national security, human dignity and human rights, human security, social development, and the environment.

Global health is focused on achieving improvements in the health and well-being of all people worldwide, and involves many disciplines both within and beyond the health sciences. In today’s world, health is tied up with foreign policy, and policy makers often need familiarity with different policies and with ongoing international diplomacy and negotiations to address crises. To this end, the Oslo Ministerial Declaration was drafted in 2007 by the foreign ministers of seven countries – Brazil, France, Indonesia, Norway, Senegal, South Africa and Thailand – to promote and discuss the importance of integrating health issues into foreign policy. The declaration clearly states: “We believe that health is one of the most important, yet still broadly neglected, long-term foreign policy issues of our time…We believe that health as a foreign policy issue needs a stronger strategic focus on the international agenda. We have therefore agreed to make ‘impact on health’ a point of departure and a defining lens that each of our countries will use to examine key elements of foreign policy and development strategies, and to engage in a dialogue on how to deal with policy options from this perspective.”

In the 21st century, health has become increasingly relevant to foreign policy, security policy, and development strategies. As each nation has its own constitutional, political and financial differences according to its own standards and circumstances, it has become clear that new skills are needed to conduct global health diplomacy and negotiations in the face of other interests.

The direct or indirect effect of economic, socio-cultural, and political factors on health requires more diplomats to enter the health arena and interact with non-governmental organizations and non-state actors, scientists, and activist groups. Public health experts, in turn, need training, practice, and experience in diplomacy. Both public health experts and diplomats need to interact more productively to create effective outcomes in global health negotiations to solve global problems. Global health diplomacy (GHD) aims to promote international cooperation in solving health problems. It can be defined in a number of ways, but is broadly understood as “multi–level, multi–actor negotiation processes that shape and manage the global policy environment for health”. Global health diplomacy is of considerable importance not only in the discipline of foreign policy but also within other disciplines such as international law, politics, economics and management. However, the challenges in diplomatic negotiations and foreign policy which support global health goals are vast and diverse, and they are greatly in need of effective leadership and collective action.

One of the challenges in global health diplomacy is the lack of a shared goal and differing political priorities between nations in deciding which health issues should be included explicitly in national foreign policy. Countries attempt to link health issues to foreign policy and national security threats in order to receive significant political support and funding. International resources are generally limited and new funding and development assistance for health is difficult to obtain. Policy makers need to be educated in identifying a particular health issue as a national or international priority and determining its importance relative to other public health issues in order to avoid drawing resources away from health issues of global importance.

However, some health issues – particularly infectious diseases like poliomyelitis – are widely considered global concerns and are inserted into countries’ foreign policy agendas even though they are not national security threats. Although addressing these issues is not rooted in a concern for their economic or security impact, they require sustained support and resources to achieve a global good.

Another challenge facing health diplomacy is that while the health sector wants to focus its attention on improving the conditions that allow people to be healthier, foreign policy places national security and economic growth as its top priorities. The health sector must accept that while health is its central goal, it is often not the central goal of foreign policy and can even cut against foreign policy aims. The two sides can sustain engagement and trust by finding mutual benefit in the context of global health goals.

Although several governments have placed health issues more prominently in foreign policy decision-making over the past decade, some non-democratic nations have placed little significance on incorporating health issues into their foreign-policy agendas.

Global health diplomacy plays an important role in advancing human security and ensuring human rights and dignity by linking health and international relations. Human security is concerned with human freedoms and human fulfillment. However, major sources of human security violations persist in undemocratic regimes, including mass atrocities, human trafficking, torture and genocide, war, bioterrorism, environmental degradation, and public health crises. One of today’s challenges for global health diplomacy on human security arises from the fundamental differences in values between democratic and nondemocratic countries in negotiations. Some of the key values of democratic governments – like transparency and freedom, pursuit of happiness, justice, and equality – would naturally lead to an increase in human security. These core values differ sharply from the values of authoritarian governments, which are based on compliance, coercion and propaganda. Therefore, the two regime types have very little in common, and health diplomacy cannot lead to lasting agreement and peace in addressing human security problems. In global health, promoting democracy could improve population health and could contribute to a significant power shift within global health diplomacy.


Challenges of Aging with Disability in the U.S.


By: Letitia Y. Graves, PhD

Image by truthseeker08 from Pixabay 

According to the Centers for Disease Control and Prevention, 61 million adults in the US live with a disability, and 2 in 5 adults with a disability are aged 65 and older. Under the Americans with Disabilities Act (ADA), disability is contextualized as a legal term rather than a medical term, defining a person with a disability as one “who has a physical or mental impairment that substantially limits one or more major life activities” and including those who do not have a disability but are regarded as having one. However, this definition leaves significant room for interpretation, because what one person “regards” as disability may not be the same as what the next does.

The post-retirement years are often imagined as a period of leisure, vacation, or easy living. However, this is not the reality for millions of the aged and aging in the United States. Many older adults must work in either a full- or part-time capacity well into their 70s to cover the necessities of food, shelter, and health care. Statistics from the Urban Institute show that workforce participation rates for men ages 65 to 69 increased from 25% in 1993 to 37% in 2012, a 48% relative increase. The share of women participating in the labor force over the same period grew from 16% to 28%, a relative increase of 75%. However, workforce participation is cut short when a disability sustained during prime working years is present. Costs associated with food security, housing, and medical care can be stressful, and these challenges appreciably increase when individuals are aging with disability.

Economic Insecurity

Over 25 million adults age 60+ are economically insecure, defined as having incomes below $22,000. Retiree benefits average $1,300 per month, or roughly $16,000 annually. Based on the Supplemental Poverty Measure, which accounts for health care costs, the Census Bureau estimates 14% of adults ages 65 and older are impoverished. While programs like Social Security, Medicaid and Medicare provide insurance and income subsidies, researchers from the Tax Policy Center report that, based on current Medicare directives, out-of-pocket costs for health care are slated to rise in the coming years. In addition, these systems suffer from inefficiencies that result in fragmented and/or inequitable distribution. For example, Medicaid is a joint federal-state program, so states may not be able to contribute the amount needed for services equitably or consistently from year to year. This means that every state sets its own budget for supplementation, and that benefits vary state by state. Services commonly utilized by those aging and with disabilities (i.e. medical supplies, medications, durable medical equipment, care services) are disjointed and difficult to access, with poor coordination across agencies, which makes this a further challenge for individuals and their caregivers. Medicare Part B covers medically necessary durable medical equipment with a prescription from the physician overseeing care. However, individuals still must pay 20% of the total cost, the equipment must be deemed medically necessary, and both physicians and suppliers must be enrolled participants in Medicare to qualify for the subsidy.

With the growing number of biotech and engineering start-ups aimed at expanding assistive technology for those with disability, the options for individuals with functional and cognitive impairments are impressive. Examples include electric powered lifts, retinal-controlled power wheelchairs, and smart houses that signal with their lights when the doorbell rings for those with hearing impairments. These technologies, however, are not widely available to the public due to cost, and companies often do not participate in Medicare. Many of the novel or advanced assistive technologies that would allow aging adults to live independently or with moderate assistance at home are expensive and not available under Medicare Part B coverage. Additionally, some of these assistive devices are deemed “luxury” items rather than medically necessary, and thus are not covered by insurance. This is a barrier for individuals who are socioeconomically disadvantaged. Not only do these barriers impact the individual who is trying to live independently, they also impact the spouse or family caregiver who is trying to support their loved one in living a life with disability as independently as appropriate.

Caregiver Undertaking

Over the last decade there has been a major shift in caregiving, with an emphasis on family caregivers. From 2003 to 2011, the number of unpaid caregivers increased from 9.2 million to 14.7 million. The average length of hospital stays has dramatically decreased, forcing post-acute care and chronic health management into the home, where it is administered by spouses, family and/or skilled home health providers. The Many Faces of Caregiving study examined the relationships of 341 caregivers; 56% of caregivers were unpaid and 35% provided both unpaid care and financial support, with 40% of caregivers caring for parent(s), 16% for grandparents, and 14% for spouses. Correspondingly, caregiving often falls on women as the spouses or daughters of those who need care. Family caregivers are tasked with providing uncompensated medical or nursing tasks, such as administering medication, wound care, and providing transportation to appointments, during their prime working years. This places substantial financial burdens on female caregivers, who are estimated to lose a total of $324,044 in wages and Social Security benefits. Moreover, caregivers have been found to be at higher risk for poor health because they neglect themselves in favor of their aging and/or disabled family member. Over time this is not sustainable, because when caregivers are not healthy the health of the family can suffer.

This is especially relevant to aging individuals, individuals with disability, and particularly those who are aging with disability. The programs that support aging at home, hospice, and modified or adapted environments for those with cognitive or physical disabilities are beneficial. However, the challenges of these programs include poor accessibility, lack of choice in program services, and/or limitations on the period of enrollment. This is not only frustrating for individuals but challenging for the familial caregiver(s), who may or may not be knowledgeable about the programs and services available. The care management and socioeconomic needs of individuals aging with disability tend to be significant and often disproportionately impact those at an economic disadvantage.

Opportunities for Improvement

Knowing what we know about the cost, in health/wellness and financial capital, for aging adults, adults aging with disability, and their caregivers, there must be a more comprehensive approach to the allocation and accessibility of resources. Task forces within governmental and non-governmental agencies should come together to examine current policies and bridge programs to make them easier for caregivers to understand and to make coordination more seamless across agencies. Moreover, there needs to be more consideration for compensation of family caregivers that will not result in lost wages or contributions to retirement systems. With an increasing share of the workforce heading into retirement, these are issues that need to be proactively examined. It is vital that we improve services for our growing aging population, many of whom will likely suffer from secondary health conditions and/or disability. Lastly, there needs to be a broader discussion around the criteria of “disability”. As it stands, the term is very broad and has different significance among legal and medical authorities. A more discrete definition would provide much-needed context for policy decision makers in the critically important discussion of financial and healthcare benefits.



February 7, 2020 at 3:09 pm

Targeting the spread of unregulated stem cell and regenerative therapies


By: Kellsye Fabian, PhD

Image by Darko Stojanovic from Pixabay

Advances in regenerative medicine research have generated significant public interest in therapies that have the potential to restore the normal function of cells, tissues and organs that have been damaged by age, disease, or trauma. Investment and enthusiasm in this field have propelled the development of regenerative therapies such as cell therapy, bioengineered tissue products, and gene therapy. While several hundred of these treatments have progressed to clinical trials, the Food and Drug Administration (FDA) has approved only a few regenerative therapies. Of these, most are stem cell-based products derived from umbilical cord blood used to treat blood cancers and immune disorders, and three are gene therapies to treat cancer or blindness.

Alarmingly, an increasing number of businesses and for-profit clinics have been marketing regenerative therapies, mostly stem cell products, that have not been reviewed by the FDA. In 2016, there were 351 stem cell businesses offering interventions that could be administered in 570 clinics. That number was estimated to have doubled in 2018. Most of these establishments tout that their products can treat or cure serious illnesses and/or provide a wide range of benefits. These claims are often unsubstantiated. Moreover, these unapproved interventions pose a great danger to patients and have resulted in serious complications including blindness, infections, cardiovascular complications, cancer and death.

Some patients remain willing to take the risks, especially those with serious diseases who have exhausted all possible conventional treatment or those searching for alternative therapies. These individuals often fall prey to the overly optimistic portrayals of stem cell products in the media and in advertisements from stem cell companies.

For years, these unscrupulous businesses have avoided heavy regulation. Physicians, researchers and ethicists have urged stricter monitoring of regenerative therapies as the commercial activity related to these interventions has expanded. In response, the FDA has increased its oversight of the field and has issued guidance relating to the regulation of human cells, tissues and cellular or tissue-based products (HCT/Ps) to ensure that commercialized regenerative therapies are safe and founded on scientific evidence.

Increased FDA oversight

Since 2017, the FDA has increased oversight and enforcement of regulations against unscrupulous providers of stem cell products. In 2018, the FDA sought permanent injunctions against two stem cell clinics, California Stem Cell Treatment Center Inc and Florida-based US Stem Cell Clinic LLC, for selling unapproved stem cell products and for significantly deviating from current good manufacturing practice requirements that ensure the sterility of biological products. 

The case against California Stem Cell Treatment Centers began in August 2017, when the US Marshals Service, on behalf of the FDA, seized five vials of smallpox virus vaccine from a clinic affiliated with California Stem Cell Treatment Centers. The vaccine was provided by a company called StemImmune and was being combined with stromal vascular fraction (SVF), a mixture of cells derived from patient adipose (fat) tissue that includes a small number of mesenchymal stem cells. This combined product was then administered to cancer patients at California Stem Cell Treatment Centers intravenously or through direct injection into patients’ tumors.

Cancer patients have potentially compromised immune systems, and the use of a vaccine in this manner could pose great risks to them, such as inflammation and swelling of the heart and surrounding tissues. In addition, California Stem Cell Treatment Center provided unapproved treatments to patients with arthritis, stroke, ALS, multiple sclerosis, macular degeneration, Parkinson’s disease, COPD, and diabetes. The injunction case against California Stem Cell Treatment Center is still pending.

US Stem Cell Clinic also marketed SVF to patients seeking treatment for conditions such as Parkinson’s disease, amyotrophic lateral sclerosis (ALS), chronic obstructive pulmonary disease (COPD), heart disease and pulmonary fibrosis. Three women with macular degeneration, an eye disease that causes vision loss, went blind after receiving eye injections of SVF products from US Stem Cell Clinic. Following these events, in June 2019 a Florida judge ruled that the FDA is entitled to an injunction against US Stem Cell Clinic, meaning that the FDA has the authority to regulate the clinic and stop it from providing potentially harmful products.

While this decision strengthened the position of the FDA as a regulatory body for regenerative medicine, businesses have found different tactics to continue selling unapproved products. After the court ruling, US Stem Cell Clinic stopped selling the fat-based procedure. However, it said that it would continue to offer other stem cell treatments. Instead of stem cells derived from fat, which were the subject of the injunction, the company would now harvest cells from patients’ bone marrow and other tissues to “treat” different conditions. Another company, Liveyon, was given a warning by the FDA in December 2019 for selling unapproved umbilical cord blood-based products that were tied to life-threatening bacterial infections. Liveyon has since halted the distribution of its products in the US but has opened a clinic in Cancun, Mexico, where it continues “treating” patients outside the scope of the FDA. Other companies have changed their terminology and marketing language to escape the FDA crackdown on stem cell clinics. Instead of using the phrase “stem cells” on their websites and in advertising, they now use “cellular therapy” and “allografts.”

The FDA’s Regulatory Framework for Regenerative Medicine

The warnings and injunctions filed by the FDA against the aforementioned stem cell businesses were in conjunction with the comprehensive policy framework for regenerative medicine that the agency announced in November 2017. The policy framework aims to clarify which medical products are subject to the agency’s approval requirements and to streamline the review process for new regenerative therapies. 

In the case of cellular and tissue products and procedures, there is often a gray area concerning what should be considered a medical product, which is under FDA oversight, and what should be considered an individualized treatment performed by a doctor within their medical practice, which is not regulated by the FDA. Stem cell clinics have often used this ambiguity as justification to sell products without FDA approval. According to the new guidelines, for cells and tissue to be exempt from FDA regulation, several criteria must be met: 1) they must be taken from and given back to the same individual during the same surgery, 2) they must not undergo significant manufacturing (minimal manipulation), 3) they must perform the same basic function (homologous use) when re-introduced to the patient, 4) they must not be combined with another drug or device, and 5) the benefits and risks must be well understood. If any of these criteria are not met, the cell or tissue is considered a drug or biologic and is subject to pre-market review by the FDA. Some ambiguities still persist in the current form of the policy, such as what constitutes “minimal manipulation” and how to address nonhomologous use (i.e. cells or tissues used in ways other than their original function). The guidelines are an important starting point in determining which therapies are under the FDA’s purview, and continued dialogue between the FDA and stakeholders involved in product development will provide more clarity about how products will be classified.
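Viewed abstractly, the exemption logic is a simple conjunction of the five criteria: failing any one of them triggers pre-market review. The sketch below encodes that reading in Python purely as an illustration – the type and field names are our own shorthand, not FDA terminology:

```python
from dataclasses import dataclass

@dataclass
class TissueProduct:
    # Hypothetical fields mirroring the five criteria listed above
    same_patient_same_surgery: bool     # 1) removed and returned in one procedure
    minimally_manipulated: bool         # 2) no significant manufacturing
    homologous_use: bool                # 3) performs its original basic function
    combined_with_drug_or_device: bool  # 4) must NOT be combined with a drug/device
    risks_well_understood: bool         # 5) benefits and risks well characterized

def exempt_from_premarket_review(p: TissueProduct) -> bool:
    """Failing any single criterion makes the product a drug or biologic."""
    return (p.same_patient_same_surgery
            and p.minimally_manipulated
            and p.homologous_use
            and not p.combined_with_drug_or_device
            and p.risks_well_understood)

# Example: SVF combined with a vaccine fails several criteria, including the
# ban on combination with another drug, so it would require FDA review.
svf_plus_vaccine = TissueProduct(True, False, False, True, False)
print(exempt_from_premarket_review(svf_plus_vaccine))  # False
```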

The policy framework also addresses how the FDA aims to implement the regenerative medicine provisions of the 21st Century Cures Act. Signed into law in 2016, the Cures Act is designed to expedite the development and review of innovative medical products. One of the new programs under this law is the Regenerative Medicine Advanced Therapy (RMAT) designation. A product is only eligible for RMAT designation if 1) it is a cell therapy, therapeutic tissue-engineering product, HCT/P, gene therapy, or combination product using any such therapy; 2) it is intended to treat, modify, reverse, or cure a serious condition; and 3) preliminary clinical evidence indicates that the therapy has the potential to address unmet medical needs for that condition. Stakeholders involved in product development strongly support the creation of this expedited review program. Others, meanwhile, are concerned that the RMAT designation will lead to the approval of therapies based on fewer or smaller studies and, hence, that treatment-related adverse events will emerge only after a product is on the market. But since RMAT therapies are intended to treat serious conditions, the risks may be acceptable and may be outweighed by the benefits to patients. Nevertheless, postmarket studies would be essential and must be required to ensure the safety and efficacy of RMAT therapies.

The establishment of these policy frameworks is a definite step toward better regulation of previously unbridled regenerative therapies. Increased enforcement of the new guidelines will hopefully dissuade unscrupulous businesses from taking shortcuts while encouraging legitimate companies to develop novel treatments. This will help ensure that regenerative medicine remains an exciting field with the potential to provide innovative treatments that improve human health.



January 24, 2020 at 7:36 pm

Expedited Drug Approvals: When Speeding Saves Lives


By: Maria Disotaur, PhD

Source: piqsels.com

Changes in laws and regulations have accelerated the drug approval process for rare and fatal diseases. Yet, some experts worry the process is now moving too fast, while others argue that slowing down the process could cost patients their lives. 

The first case of acquired immunodeficiency syndrome (AIDS) in the United States was reported in 1981. Ten years later, more than 250,000 Americans were living with the disease or had died from the epidemic. During this time, activist groups believed the drug approval process was unacceptably slow and possibly leading to the deaths of thousands of Americans. They demanded drugs be proven safe and effective at a faster rate, because prior to 1992 the Food and Drug Administration’s (FDA) drug approval process could take two and a half to eight years due to poor staffing and lack of resources within the agency. Protests at FDA headquarters led to the establishment of streamlined policies and regulations designed to speed the approval process for life-saving drugs for serious and often fatal diseases.

In 1992, a series of complex regulations and processes were established to place life-saving drugs in the hands of patients as expeditiously as possible. Beginning with the Prescription Drug User Fee Act (PDUFA), the agency could charge pharmaceutical companies a $200,000 reviewer fee for a new drug application (NDA). This new policy increased agency funds and personnel, and reduced the amount of time to approve a new drug to approximately eighteen months. To further expedite the process, the agency introduced accelerated approval and priority review. The former allowed the FDA to use a surrogate endpoint to approve a new drug for a serious medical condition with an unmet medical need. Priority review required the FDA to review a drug within six months, compared to the standard ten months, if the drug showed evidence of significant improvement in treatment, diagnosis, or prevention. These were followed by fast track designation in 1998 and breakthrough therapy designation in 2012, which were designed to expedite the development and review of life-saving drugs that, respectively, fulfilled an unmet need or were better than current market drugs.

Since their introduction, these regulations have produced two opposing camps: those who think the drug approval process is moving too fast and those who think it is not moving fast enough. Pharmaceutical companies, health professionals, and patient advocacy groups have argued that millions of Americans are suffering from rare and orphan diseases that require new or enhanced therapies. On the other hand, experts argue that the pathway to expedite drug approvals does not change the fundamental principles of testing the efficacy of a new drug through extensive preclinical research and clinical trials. A recent study published in the Journal of the American Medical Association (JAMA) points to some of the downfalls associated with the FDA’s accelerated approval process. The study looked at 93 cancer drugs approved by the FDA from 1992 to 2017 through the accelerated approval pathway and analyzed the results of confirmatory trials, which are phase 4 post-marketing trials required by the FDA to confirm the clinical benefit of a drug. The study showed that of the 93 cancer drugs approved, only 19 had confirmatory trials that reported an improvement in the overall survival of patients. The study authors concluded that “it is important to recognize the clinical and scientific trade-offs of this approach,” particularly since “the clinical community will have less information about the risks and benefits of drugs approved via the accelerated approval program” until confirmatory trials are completed to analyze the clinical benefit and survival for patients. Furthermore, others like Dr. Michael Carome, a physician and a director at the consumer advocacy group Public Citizen, have raised concerns about the medical value and cost of drugs that have limited scientific data – particularly the burden placed on families and patients by drugs that in some instances (80% of instances, in the JAMA study) never prove effective and do not improve patient survival.

This was the case for Eli Lilly’s drug Lartruvo, as reported by the Wall Street Journal last summer. In 2016, Lartruvo became the first drug approved by the FDA for soft-tissue sarcoma since the 1970s. The drug was approved via the accelerated approval pathway after Eli Lilly completed a study with 133 patients showing that Lartruvo with chemotherapy extended median patient survival by 11.8 months compared to chemotherapy alone. By April 2019, Eli Lilly announced it was removing Lartruvo from the market because it did not show improvement in patient survival in phase 3 clinical trials. The removal of Lartruvo from the market left patients dismayed and questioning the long-term effects of the treatment. Prior to Lartruvo’s approval, Dr. James Liebman, an oncologist from Beverly Hospital, and Dr. Hussein Tawbi, an associate professor at MD Anderson Cancer Center, expressed concerns about the limited sample size and confounding results for Lartruvo and recommended the FDA delay the approval until other trials were conducted. At the time, the FDA acknowledged their concerns but also acknowledged that the treatments for advanced sarcoma were limited and that the drug could have a clinical benefit on the market.

These critical decisions are becoming more routine as the FDA tries to meet the demands of doctors, patients, and lawmakers to approve drugs for fatal diseases at a faster rate. In 2009, only 10 drugs were approved through an expedited pathway – either fast track, priority review, accelerated approval, or breakthrough therapy. Last year, this number increased to 43 of the 59 novel drugs approved. This jump can be partially attributed to the 21st Century Cures Act, which is designed to expedite the development of new devices and drugs and, in some instances, allows the FDA to consider anecdotal evidence such as patients’ perspectives when reviewing a new drug.

Additionally, the increase can be attributed to new biological and diagnostic tools – for example, flow cytometry and next-generation DNA sequencing, which allow scientists to detect one cancer cell in a million healthy cells, as reported by Scientific American last month. This new measure is called minimal residual disease (MRD), and scientists hope it can be used to accelerate clinical trials and the development of novel drugs. Currently, studies of B-cell acute lymphoblastic leukemia indicate that MRD-positive patients are more likely to relapse: patients with more than 1 residual cancer cell in 10,000 cells lived approximately six months without relapse, while those with less than 1 residual cancer cell in 10,000 lived an average of two years without relapse. Scientists hope this method can be used as a future surrogate endpoint, and some patient advocacy groups believe that for some cancers, like multiple myeloma, there is enough evidence to use the method today. A study published in JAMA showed that multiple myeloma patients who were MRD-negative had lower relapse rates and 50% longer survival than those who were MRD-positive. These predictive properties of MRD indicate that for certain cancers it could serve as an alternative surrogate endpoint – one that could speed up clinical trials compared to those that measure tumor shrinkage and overall survival.

These types of new tools and diagnostics have been prompting patients and doctors to demand faster drug approvals, particularly for rare and life-threatening diseases. They have also compelled FDA officials and leaders like Janet Woodcock, director of the FDA’s Center for Drug Evaluation and Research, to acknowledge that new scientific tools and advancements will continue to speed up the drug approval process. Concurrently, the agency understands the demands of patients living with life-threatening illnesses and is preparing to enhance its best practices. In a recent press release, the FDA announced it would begin the “reorganization of the office of new drugs with corresponding changes to the office of translational sciences and the office of pharmaceutical quality.” This strategic move aims to “create offices that align interrelated disease areas and divisions with clearer and more focused areas of expertise”. The goal is to enhance efficiency and provide FDA scientists with a better understanding of diseases that may require future FDA drug approval. These changes within the FDA infrastructure, along with biological advancements, will continue to increase the speed at which the FDA approves new drugs. Some will argue that accelerating the process is reckless and a danger to vulnerable patients; for some of those patients, however, accelerating the process can be the difference between life and death. For these patients, the question of expediting access to new treatments is not “Are we moving too fast?” but “Can we afford not to?”
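As a purely illustrative aside, the MRD cutoff described above behaves like a simple classification rule. The short sketch below shows the arithmetic; the 1-in-10,000 threshold follows the numbers quoted above, but the function and its naming are our own, not part of any clinical assay:

```python
def mrd_status(residual_cancer_cells: int, total_cells_assayed: int) -> str:
    """Classify minimal residual disease against the 1-in-10,000 cutoff
    described in the B-cell leukemia studies above (illustrative only)."""
    threshold = 1 / 10_000
    fraction = residual_cancer_cells / total_cells_assayed
    return "MRD-positive" if fraction > threshold else "MRD-negative"

# Next-generation sequencing can resolve roughly 1 cancer cell per million
# healthy cells, far finer than the decision threshold itself.
print(mrd_status(residual_cancer_cells=3, total_cells_assayed=1_000_000))  # MRD-negative
print(mrd_status(residual_cancer_cells=5, total_cells_assayed=10_000))     # MRD-positive
```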



January 17, 2020 at 6:48 pm

Bias in Artificial Intelligence


By: Thomas Dannenhoffer-Lafage, PhD

Image by Geralt from Pixabay

Artificial intelligence (AI) is a nebulous term whose definition has changed over time, but a useful modern definition is “the theory and development of computer systems that are able to perform tasks normally requiring human intelligence.” Modern AI has been applied to a wide variety of problems – including streamlined drug discovery, image processing, targeted advertisement, medical diagnosis assistance, hedge fund investment, and robotics – and influences people’s lives more than ever before. These powerful AI systems have allowed certain tasks to be performed more quickly than a human could perform them, but AI also suffers from a very human deficiency: bias. While bias has a more general meaning in the field of statistics (a field closely related to AI), this article specifically considers social bias, which exists when a certain group of people is given preference or disfavor.

To understand how AI becomes socially biased, it is critical to understand how certain AI systems are made. The recent explosion of AI breakthroughs is due in part to the increased capabilities of machine-learning algorithms, specifically deep-learning algorithms. Machine-learning algorithms differ from human-designed algorithms in a fundamental way. In a human-designed algorithm, a person provides specific instructions to the computer so that a set of inputs can be turned into outputs. In a machine-learning algorithm, a person provides a set of data to the algorithm, and the algorithm learns how to perform a specific task from the patterns within that data. This is the “learning” in machine learning. There are many different types of tasks that a machine-learning algorithm can perform, but all are ultimately reliant on some data set to learn from. Deep-learning algorithms have become so powerful in part because of the large amounts of data available to train algorithms, the availability of more powerful and inexpensive GPU cards that greatly improve the speed of AI algorithms, and the increased availability of deep learning source code online. The contrast between the two kinds of algorithm is sketched below.
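In the first function below, a person writes the decision rule by hand; in the second, a logistic regression model infers a rule from labeled examples. The scenario and all data are fabricated for illustration, and the widely used scikit-learn library is assumed to be available:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Human-designed algorithm: the decision rule is written explicitly by a person.
def approve_by_rule(income: float, debt: float) -> bool:
    return income > 50_000 and debt / income < 0.4

# Machine-learning algorithm: the rule is learned from patterns in training data.
# Features: [income, debt]; labels: 1 = repaid, 0 = defaulted (toy data).
X_train = np.array([[60_000, 10_000], [30_000, 20_000],
                    [80_000,  5_000], [25_000, 15_000]])
y_train = np.array([1, 0, 1, 0])

model = LogisticRegression().fit(X_train, y_train)

# The learned decision was never written down by a human -- and it inherits
# whatever patterns, including biased ones, exist in the training data.
print(approve_by_rule(55_000, 12_000))       # True
print(model.predict([[55_000, 12_000]])[0])  # prediction learned from data
```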

An AI can be trained to be biased maliciously, but more concerning is that bias can be incorporated into AI unintentionally. Specifically, inadvertent bias can creep into machine-learning based AI in two ways: when bias is inherent to the data, and when bias is embedded in the task the AI is asked to perform. Inherent bias can occur when the data is not representative of reality, such as when certain populations are inadequately represented. This occurred, for example, in facial recognition software that was trained with more photos of light-skinned faces than dark-skinned faces; the resulting program was less effective at identifying dark-skinned faces, leading to errors and misidentifications. Bias can also be inadvertently introduced into data during featurization, a process in which raw data is manually modified and proofed before it is presented to an AI to improve its learning rate and task performance. Human agents may unknowingly introduce bias into a dataset while performing featurization. Finally, bias often exists in the task an algorithm is asked to perform. Tasks given to AI algorithms are usually chosen for business reasons, and questions of fairness are therefore typically not considered.
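One simple way to surface this kind of data bias is to audit a model's error rate separately for each demographic group, as in the toy sketch below. All arrays are fabricated for illustration; in practice the predictions would come from a trained model evaluated on held-out data:

```python
import numpy as np

# Held-out test labels, model predictions, and each example's demographic group.
y_true = np.array([1, 0, 1, 1, 0,  1, 0, 0, 1, 1])
y_pred = np.array([1, 0, 1, 1, 0,  0, 1, 0, 0, 1])
group  = np.array(["A"] * 5 + ["B"] * 5)

for g in ("A", "B"):
    mask = group == g
    error_rate = np.mean(y_true[mask] != y_pred[mask])
    print(f"group {g}: error rate = {error_rate:.0%}")

# Output: group A: error rate = 0%; group B: error rate = 60%.
# A model trained mostly on group A data tends to look like this: accurate
# overall, but far less accurate on the under-represented group.
```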

When a machine-learning based AI is trained on a biased dataset, the consequences can be serious. For instance, the recently introduced Apple credit card used AI to determine the creditworthiness of applicants. Issues were raised about the validity of this system when Steve Wozniak pointed out that his wife received one-tenth the credit he did, despite their credit profiles being nearly identical. There have also been issues of bias in AI systems used in school admissions processes and hiring platforms. For example, an AI algorithm tasked with reviewing application materials for an open job position was found to be unknowingly biased against women applicants because of differences in language between male and female applicants. Bias was also an issue in an AI algorithm used to estimate the likelihood of recidivism among parolees, which rated African Americans as higher risk than they were in reality. Since AI has been trusted with greater decision-making power than in the past, it has the power to propagate bias at a much greater rate than ever before.

Even though bias has been a known issue in AI for many years, it is still difficult to fix. One major reason is that machine-learning algorithms are designed to take advantage of patterns, or correlations, in data that may be invisible to a human. This can create problems because an AI may use artifacts of the data as indications that a certain decision should be made. For instance, certain medical imaging equipment may have slightly different image quality or handle boundary conditions differently, and an AI algorithm may use those quirks to determine a diagnosis. Another issue is that even when an AI algorithm is not directly provided protected attributes (e.g. race, age, sex), it may be able to infer them. For instance, if a training data set includes height, a model may be able to infer whether an applicant is male, since men are on average taller than women – as the sketch below demonstrates. This problem of invisible correlations is compounded by the fact that AI systems are typically unable to explain their decisions, and backtracking how a decision was made can be impossible. Finally, AI systems are designed to perform tasks as successfully as possible and are not designed to take fairness into account at the design stage.
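A small simulation illustrates the proxy problem. Sex is deliberately excluded from the feature set, yet a model can largely reconstruct it from height alone. The data is synthetic, and the mean heights (178 cm and 165 cm) are rough illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1_000

# Protected attribute, deliberately withheld from the model's features.
sex = rng.integers(0, 2, n)  # 0 = female, 1 = male

# Height correlates with sex, so it acts as a proxy for it.
height = np.where(sex == 1,
                  rng.normal(178, 7, n),   # male height distribution (cm)
                  rng.normal(165, 7, n))   # female height distribution (cm)

# A model trained only on height still recovers sex far better than chance.
proxy = LogisticRegression().fit(height.reshape(-1, 1), sex)
print(f"sex inferred from height alone: {proxy.score(height.reshape(-1, 1), sex):.0%}")
```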

Thankfully, different solutions to bias in AI algorithms have been proposed. One possibility is to include fairness metrics as part of the design process of AI. An example is counterfactual fairness: an algorithm satisfies counterfactual fairness when a prediction based on an individual’s data is the same as it would be for counterfactual data in which all factors are identical except the individual’s group membership. However, it has been shown that certain fairness metrics cannot all be satisfied simultaneously, because each metric constrains the decision space too greatly. Another solution is to test AI algorithms before deployment, ensuring that the rates of false positives and false negatives are equal across protected groups. Human-in-the-loop decision making, which enables a human agent to review the decisions of an AI system, can also help fight false positives. New technologies may help fight AI bias in the future as well, such as explainable AI, which is able to explain its decisions to a human. Other solutions include having groups developing AI engage in fact-based conversations about bias more generally, including trainings that identify types of biases, their causes, and their solutions. A push for more diversity in the field of AI is a necessary step, because team members who are part of majority groups can disregard differing experiences. Lastly, it is suggested that AI algorithms should be regularly audited for fairness, both externally and internally.
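The pre-deployment check mentioned above – equal false positive rates across protected groups – is straightforward to compute. A minimal sketch with fabricated labels and predictions:

```python
import numpy as np

def false_positive_rate(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Share of actual negatives that the model wrongly flags as positive."""
    negatives = y_true == 0
    return float(np.mean(y_pred[negatives] == 1)) if negatives.any() else 0.0

# Fabricated test labels and model predictions for two protected groups.
y_true = np.array([0, 0, 1, 0, 1,  0, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 1,  1, 1, 1, 0, 0])
group  = np.array(["A"] * 5 + ["B"] * 5)

rates = {g: false_positive_rate(y_true[group == g], y_pred[group == g])
         for g in ("A", "B")}
print(rates)  # a fairness audit flags any large gap between the two rates
```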

The discussion of AI bias within the field has come a long way in the last few years. Companies involved in AI development now employ people whose main role is to fight AI bias. However, most of the regulation of biased AI still occurs internally. This has prompted actors outside of AI development, including the government and lawyers, to look at AI bias issues as well. Recently, lawmakers introduced the Artificial Intelligence Initiative Act, which aims to establish means for the responsible delivery of AI. The bill calls for NIST to create standards for evaluating AI, including the quality of training sets. The NSF would be called on to create responsible training programs for AI use that address algorithmic accountability and data bias. The bill does not propose guidelines or timelines for governmental regulation, but rather creates organizations to advise lawmakers and perform governmental research. Regulation would be imperative if AI were to move away from self-regulation. The decision-making power of AI has also gotten the attention of lawyers in the field of labor and employment, who fear that today’s AI systems “have the ability to make legally significant decisions on their own.” Thus, there is more opportunity than ever for technologists to shape AI at the policy level: by being involved in the governmental organizations creating policy, educating lawmakers involved in AI law and regulation, alerting the public to bias and other issues, and working on industry-specific solutions directly.



January 10, 2020 at 1:58 pm

Fixing America's STEM education gap


By: Michelle Bylicky

Image by Steve Buissinne from Pixabay 

STEM education is more than a focus on science, technology, engineering and mathematics. Ideally, STEM education combines these disciplines to provide students with the necessary tools to solve real-world problems. The current goals for STEM education set forth by the National Science and Technology Council are to increase diversity and inclusion in STEM, to develop a STEM-literate public comfortable with technological advancement, and to prepare individuals for STEM careers. At the center of each goal is the need for quality STEM education to provide all students with a groundwork for developing the critical thinking skills necessary for scientific literacy or a career in STEM.

American students fail to demonstrate proficiency and interest in STEM subjects, indicating clear problems with current STEM education in the US. Only one in three 8th graders tests at or above competence in math and science on the National Assessment of Educational Progress (NAEP), the Department of Education’s nationally representative assessment. The US ranks 36th in math and 18th in science on the Programme for International Student Assessment (PISA), which ranks 70 countries – below the international average in math. There is national concern that these low test scores indicate a failure to deliver quality STEM education to preK-12 students. The issues impacting STEM education are complex and will require a multi-pronged approach.

Numerous solutions have been offered to increase preK-12 STEM interest and proficiency, particularly among traditionally disadvantaged groups. However, solutions which have focused solely on specific segments of students have typically yielded disappointing results. To improve mathematical literacy among students from economically disadvantaged backgrounds, some state legislatures have encouraged more formal math education at the preschool level. The reasoning is simple: to prevent failing students from quitting a STEM pathway, keep them from failing in the first place by offering early enrichment opportunities. While initial results from early education programs have shown some promise, previous attempts to sustain that improvement have failed. By the end of elementary school, students who received intervention during preschool fail to show ongoing improvement compared to students who did not.

One suggested explanation for this failure is that teachers in higher grades do not effectively build on the educational opportunities offered to children by programs such as Head Start or Building Blocks. Preschool teachers may be trained to use a specialized curriculum that improves preschool children’s cognitive skills, but if subsequent education is not also of a high quality, children will cease to improve. Training preschool teachers is ineffective if teachers in higher grades do not receive additional training as well.

There may be more efficient ways to encourage the pursuit of STEM, and that begins with educators. Teachers who suffer from high math anxiety themselves are more likely to have low-achieving students in math. One explanation is that math teachers with high anxiety may lack the knowledge to teach math effectively, relying on rote memorization rather than discourse.

If this is true of math teachers, consider that only roughly 38% of middle and high school science teachers have ever performed any type of research. Teachers cannot be subject matter experts in a subject they have no experience in. It would be useful to provide continuing education to these science educators by offering opportunities to participate in scientific research. Previous research has indicated that allowing secondary teachers to perform research improves their students’ achievement scores.

Giving teachers an opportunity to perform research can help by allowing educators to understand how science is actually done, so they may incorporate more relevant projects into their scientific curriculum for students. Understanding how projects are developed – through collaboration, experimentation, and modification of hypotheses before re-testing – may improve their teaching methods by moving them toward an appreciation of higher-order learning.

Similarly, research at the collegiate level indicates that traditional teacher-centered lectures are the least effective method for getting students to retain and utilize information. There is no reason, then, that this teacher-centric lecture strategy should be more effective in the lower grades. Both educating educators on the utility of active teaching and helping them develop active teaching methods for their classrooms would be useful. Active learning, or student-centered learning, refers to methods which require students to reflect on the ideas being taught and utilize those ideas to solve problems. This can involve small-group discussions, allowing students to work together collaboratively on a question in class, or hands-on projects. While memorization of facts may be necessary for understanding, the goal should be to encourage student discourse and thought whenever possible. In addition, active learning is associated with a more positive learning experience, which can increase motivation to enter a STEM pathway.

Improvement in current preK-12 STEM education may require multiple alterations to the current educational environment in the US. Providing more opportunities for educators to engage in research and other professional development is one method. This will allow instructors to develop their own science acumen, which can guide how they teach students. Similarly, educating teachers on the benefits of active learning and introducing simple methods to incorporate active learning into lessons will be beneficial for improving STEM education. If educators are not comfortable with the material they are teaching, then students will struggle.



December 24, 2019 at 10:26 am
